Compare commits


80 Commits

Author SHA1 Message Date
Richard Lee
9f3986e6b4 Checkpoint before e2e test 2026-04-06 15:08:56 -07:00
Richard Lee
24ce07949e Checkpoint before e2e test 2026-04-04 16:50:19 -07:00
Richard Lee
e6d2da4716 Initial Clippy 2026-04-04 13:20:29 -07:00
Michael Bolin
dedd1c386a fix: suppress status card expect_used warnings after #16351 (#16378)
## Why

Follow-up to #16351.

That PR synchronized Bazel clippy lint levels with Cargo, but two
intentional `expect()` calls in `codex-rs/tui/src/status/card.rs` still
tripped `clippy::expect_used` (I believe #16201 raced with #16351, which
is why it was missed).
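
For reference, the usual way to keep such an intentional call while
silencing the lint is a scoped allow attribute. A generic illustration,
not the actual `card.rs` code:

```rust
// Hypothetical example; the real calls live in codex-rs/tui/src/status/card.rs.
#[allow(clippy::expect_used)]
fn widest_card(cards: &[String]) -> usize {
    cards
        .iter()
        .map(|card| card.len())
        .max()
        // Intentional: the caller guarantees `cards` is non-empty.
        .expect("status card list is never empty")
}
```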
2026-03-31 17:38:26 -07:00
Michael Bolin
2e942ce830 ci: sync Bazel clippy lints and fix uncovered violations (#16351)
## Why

Follow-up to #16345, the Bazel clippy rollout in #15955, and the cleanup
pass in #16353.

`cargo clippy` was enforcing the workspace deny-list from
`codex-rs/Cargo.toml` because the member crates opt into `[lints]
workspace = true`, but Bazel clippy was only using `rules_rust` plus
`clippy.toml`. That left the Bazel lane vulnerable to drift:
`clippy.toml` can tune lint behavior, but it cannot set
allow/warn/deny/forbid levels.

This PR now closes both sides of the follow-up. It keeps `.bazelrc` in
sync with `[workspace.lints.clippy]`, and it fixes the real clippy
violations that the newly-synced Windows Bazel lane surfaced once that
deny-list started matching Cargo.

## What Changed

- added `.github/scripts/verify_bazel_clippy_lints.py`, a Python check
that parses `codex-rs/Cargo.toml` with `tomllib`, reads the Bazel
`build:clippy` `clippy_flag` entries from `.bazelrc`, and reports
missing, extra, or mismatched lint levels
- ran that verifier from the lightweight `ci.yml` workflow so the sync
check does not depend on a Rust toolchain being installed first
- expanded the `.bazelrc` comment to explain the Cargo `workspace =
true` linkage and why Bazel needs the deny-list duplicated explicitly
- fixed the Windows-only `codex-windows-sandbox` violations that Bazel
clippy reported after the sync, using the same style as #16353: inline
`format!` args, method references instead of trivial closures, removed
redundant clones, and replaced SID conversion `unwrap` and `expect`
calls with proper errors
- cleaned up the remaining cross-platform violations the Bazel lane
exposed in `codex-backend-client` and `core_test_support`

## Testing

Key new test introduced by this PR:

`python3 .github/scripts/verify_bazel_clippy_lints.py`
2026-03-31 17:09:48 -07:00
Eric Traut
ae057e0bb9 Fix stale /status rate limits in active TUI sessions (#16201)
Fix stale weekly limit in `/status` (#16194): /status reused the
session’s cached rate-limit snapshot, so the weekly remaining limit
could stay frozen within an active session.

With this change, we now dynamically update the rate limits after status
is displayed.

I needed to delete a few low-value test cases from the chatwidget tests
because the tests.rs file is really large, and the new tests in this PR
pushed us over the mandated 512K blob-size limit. I'm working on a
separate PR to refactor that test file.
2026-03-31 17:03:05 -06:00
Eric Traut
424e532a6b Refactor chatwidget tests into topical modules (#16361)
Problem: `chatwidget/tests.rs` had grown into a single oversized test
blob that was hard to maintain and exceeded the repo's blob size limit.

Solution: split the chatwidget tests into topical modules with a thin
root `tests.rs`, shared helper utilities, preserved snapshot naming, and
hermetic test config so the refactor stays stable and passes the
`codex-tui` test suite.
2026-03-31 16:45:58 -06:00
Michael Bolin
9a8730f31e ci: verify codex-rs Cargo manifests inherit workspace settings (#16353)
## Why

Bazel clippy now catches lints that `cargo clippy` can still miss when a
crate under `codex-rs` forgets to opt into workspace lints. The concrete
example here was `codex-rs/app-server/tests/common/Cargo.toml`: Bazel
flagged a clippy violation in `models_cache.rs`, but Cargo did not
because that crate inherited workspace package metadata without
declaring `[lints] workspace = true`.

We already mirror the workspace clippy deny list into Bazel after
[#15955](https://github.com/openai/codex/pull/15955), so we also need a
repo-side check that keeps every `codex-rs` manifest opted into the same
workspace settings.

## What changed

- add `.github/scripts/verify_cargo_workspace_manifests.py`, which
parses every `codex-rs/**/Cargo.toml` with `tomllib` and verifies:
  - `version.workspace = true`
  - `edition.workspace = true`
  - `license.workspace = true`
  - `[lints] workspace = true`
  - top-level crate names follow the `codex-*` / `codex-utils-*`
conventions, with explicit exceptions for `windows-sandbox-rs` and
`utils/path-utils`
- run that script in `.github/workflows/ci.yml`
- update the current outlier manifests so the check is enforceable
immediately
- fix the newly exposed clippy violations in the affected crates
(`app-server/tests/common`, `file-search`, `feedback`,
`shell-escalation`, and `debug-client`)

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16353).
* #16351
* __->__ #16353
2026-03-31 21:59:28 +00:00
Michael Bolin
04ec9ef8af Fix Windows external bearer refresh test (#16366)
## Why

https://github.com/openai/codex/pull/16287 introduced a change to
`codex-rs/login/src/auth/auth_tests.rs` that uses a PowerShell helper to
read the next token from `tokens.txt` and rewrite the remainder back to
disk. On Windows, `Get-Content` can return a scalar when the file has
only one remaining line, so `$lines[0]` reads the first character
instead of the full token. That breaks the external bearer refresh test
once the token list is nearly exhausted.

https://github.com/openai/codex/pull/16288 introduced similar changes to
`codex-rs/core/src/models_manager/manager_tests.rs` and
`codex-rs/core/tests/suite/client.rs`.

These went unnoticed because the failures showed up when the tests were
run via Cargo on Windows, but not in our Bazel harness. Tracking down
that Cargo-vs-Bazel delta is left for a follow-up PR.

## Verification

On my Windows machine, I verified `cargo test` passes when run in
`codex-rs/login` and `codex-rs/core`. Once this PR is merged, I will
keep an eye on
https://github.com/openai/codex/actions/workflows/rust-ci-full.yml to
verify it goes green.

## What changed

- Wrap `Get-Content -Path tokens.txt` in `@(...)` so the script always
gets array semantics before counting, indexing, and rewriting the
remaining lines.
2026-03-31 14:44:54 -07:00
Eric Traut
103acdfb06 Refactor external auth to use a single trait (#16356)
## Summary
- Replace the separate external auth enum and refresher trait with a
single `ExternalAuth` trait in login auth flow
- Move bearer token auth behind `BearerTokenRefresher` and update
`AuthManager` and app-server wiring to use the generic external auth API
2026-03-31 14:54:18 -06:00
Eric Traut
0fe873ad5f Fix PR babysitter review comment monitoring (#16363)
## Summary
- prioritize newly surfaced review comments ahead of CI and mergeability
handling in the PR babysitter watcher
- keep `--watch` running for open PRs even when they are currently
merge-ready so later review feedback is not missed
2026-03-31 14:25:32 -06:00
rhan-oai
e8de4ea953 [codex-analytics] thread events (#15690)
- add event for thread initialization
- thread/start, thread/fork, thread/resume
- feature flagged behind `FeatureFlag::GeneralAnalytics`
- does not yet support threads started by subagents

PR stack:
- --> [[telemetry] thread events
#15690](https://github.com/openai/codex/pull/15690)
- [[telemetry] subagent events
#15915](https://github.com/openai/codex/pull/15915)
- [[telemetry] turn events
#15591](https://github.com/openai/codex/pull/15591)
- [[telemetry] steer events
#15697](https://github.com/openai/codex/pull/15697)
- [[telemetry] queued prompt data
#15804](https://github.com/openai/codex/pull/15804)


Sample extracted logs in Codex-backend
```
INFO     | 2026-03-29 16:39:37 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3bf7-9f5f-7f82-9877-6d48d1052531 product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=new subagent_source=None parent_thread_id=None created_at=1774827577 | 
INFO     | 2026-03-29 16:45:46 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3b84-5731-79d0-9b3b-9c6efe5f5066 product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=resumed subagent_source=None parent_thread_id=None created_at=1774820022 | 
INFO     | 2026-03-29 16:45:49 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3bfd-4cd6-7c12-a13e-48cef02e8c4d product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=forked subagent_source=None parent_thread_id=None created_at=1774827949 | 
INFO     | 2026-03-29 17:20:29 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3c1d-0412-7ed2-ad24-c9c0881a36b0 product_surface=codex product_client_id=CODEX_SERVICE_EXEC client_name=codex_exec client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=new subagent_source=None parent_thread_id=None created_at=1774830027 | 
```

Notes
- `product_client_id` gets canonicalized in codex-backend
- subagent threads are addressed in a follow-up PR
2026-03-31 12:16:44 -07:00
jif-oai
868ac158d7 feat: log db better maintenance (#16330)
Run the DB clean-up more frequently and include an incremental `VACUUM` in it.
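
A minimal sketch of what such a maintenance pass can look like, assuming
the log DB is SQLite (incremental vacuum is a SQLite feature) and using
the `rusqlite` crate; the function name and batch size are illustrative:

```rust
use rusqlite::Connection;

/// Reclaim free pages in small batches so the clean-up stays cheap
/// enough to run frequently.
fn incremental_log_db_cleanup(db_path: &str) -> rusqlite::Result<()> {
    let conn = Connection::open(db_path)?;
    // `auto_vacuum = INCREMENTAL` only takes effect once the database
    // has been vacuumed; existing databases need a one-time `VACUUM`.
    conn.execute_batch(
        "PRAGMA auto_vacuum = INCREMENTAL;
         PRAGMA incremental_vacuum(128);",
    )?;
    Ok(())
}
```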
2026-03-31 19:15:44 +02:00
Eric Traut
f396454097 Route TUI /feedback submission through the app server (#16184)
The TUI’s `/feedback` flow was still uploading directly through the
local feedback crate, which bypassed app-server behavior such as
auth-derived feedback tags like chatgpt_user_id and made TUI feedback
handling diverge from other clients. It also meant that remote TUI
sessions failed to upload the correct feedback logs and session details.

Testing: Manually tested `/feedback` flow and confirmed that it didn't
regress.
2026-03-31 10:36:47 -06:00
Michael Bolin
03b2465591 fix: fix clippy issue caught by cargo but not bazel (#16345)
I noticed that
https://github.com/openai/codex/actions/workflows/rust-ci-full.yml
started failing on my own PR,
https://github.com/openai/codex/pull/16288, even though CI was green
when I merged it.

Apparently, it introduced a lint violation that was [correctly!] caught
by our Cargo-based clippy runner, but not our Bazel-based one.

My next step is to figure out the reason for the delta between the two
setups, but I wanted to get us green again quickly, first.
2026-03-31 16:01:06 +00:00
jif-oai
b09b58ce2d chore: drop interrupt from send_message (#16324) 2026-03-31 16:02:45 +02:00
jif-oai
285f4ea817 feat: restrict spawn_agent v2 to messages (#16325) 2026-03-31 14:52:55 +02:00
jif-oai
4c72e62d0b fix: update fork boundaries computation (#16322) 2026-03-31 14:10:43 +02:00
jif-oai
1fc8aa0e16 feat: fork pattern v2 (#15771)
Adds this:
```rust
properties.insert(
    "fork_turns".to_string(),
    JsonSchema::String {
        description: Some(
            "Optional MultiAgentV2 fork mode. Use `none`, `all`, or a positive integer string such as `3` to fork only the most recent turns."
                .to_string(),
        ),
    },
);
```

---------

Co-authored-by: Codex <noreply@openai.com>
2026-03-31 13:06:08 +02:00
jif-oai
2b8d29ac0d nit: update aborted line (#16318) 2026-03-31 13:06:00 +02:00
jif-oai
ec21e1fd01 chore: clean wait v2 (#16317) 2026-03-31 12:18:10 +02:00
jif-oai
25fbd7e40e fix: ma2 (#16238) 2026-03-31 11:22:38 +02:00
jif-oai
873e466549 fix: one shot end of turn (#16308)
Fix the end-of-turn watcher so it no longer dies after firing once.
2026-03-31 11:11:33 +02:00
Michael Bolin
20f43c1e05 core: support dynamic auth tokens for model providers (#16288)
## Summary

Fixes #15189.

Custom model providers that set `requires_openai_auth = false` could
only use static credentials via `env_key` or
`experimental_bearer_token`. That is not enough for providers that mint
short-lived bearer tokens, because Codex had no way to run a command to
obtain a bearer token, cache it briefly in memory, and retry with a
refreshed token after a `401`.

This PR adds that provider config and wires it through the existing auth
design: request paths still go through `AuthManager.auth()` and
`UnauthorizedRecovery`, with `core` only choosing when to use a
provider-backed bearer-only `AuthManager`.

## Scope

To keep this PR reviewable, `/models` only uses provider auth for the
initial request in this change. It does **not** add a dedicated `401`
retry path for `/models`; that can be follow-up work if we still need it
after landing the main provider-token support.

## Example Usage

```toml
model_provider = "corp-openai"

[model_providers.corp-openai]
name = "Corp OpenAI"
base_url = "https://gateway.example.com/openai"
requires_openai_auth = false

[model_providers.corp-openai.auth]
command = "gcloud"
args = ["auth", "print-access-token"]
timeout_ms = 5000
refresh_interval_ms = 300000
```

The command contract is intentionally small:

- write the bearer token to `stdout`
- exit `0`
- any leading or trailing whitespace is trimmed before the token is used
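
A minimal client-side sketch of that contract (the function name is
illustrative, and the `timeout_ms` / `refresh_interval_ms` handling from
the config above is omitted):

```rust
use std::process::Command;

/// Run the configured token command: require exit code 0 and treat
/// trimmed stdout as the bearer token.
fn fetch_bearer_token(cmd: &str, args: &[String]) -> Result<String, String> {
    let output = Command::new(cmd)
        .args(args)
        .output()
        .map_err(|e| format!("failed to spawn {cmd}: {e}"))?;
    if !output.status.success() {
        return Err(format!("{cmd} exited with {}", output.status));
    }
    // Leading/trailing whitespace is trimmed before the token is used.
    let token = String::from_utf8_lossy(&output.stdout).trim().to_string();
    if token.is_empty() {
        return Err(format!("{cmd} wrote no token to stdout"));
    }
    Ok(token)
}
```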

## What Changed

- add `model_providers.<id>.auth` to the config model and generated
schema
- validate that command-backed provider auth is mutually exclusive with
`env_key`, `experimental_bearer_token`, and `requires_openai_auth`
- build a bearer-only `AuthManager` for `ModelClient` and
`ModelsManager` when a provider configures `auth`
- let normal Responses requests and realtime websocket connects use the
provider-backed bearer source through the same `AuthManager.auth()` path
- allow `/models` online refresh for command-auth providers and attach
the provider token to the initial `/models` request
- keep `auth.cwd` available as an advanced escape hatch and include it
in the generated config schema

## Testing

- `cargo test -p codex-core provider_auth_command`
- `cargo test -p codex-core
refresh_available_models_uses_provider_auth_token`
- `cargo test -p codex-core
test_deserialize_provider_auth_config_defaults`

## Docs

- `developers.openai.com/codex` should document the new
`[model_providers.<id>.auth]` block and the token-command contract
2026-03-31 01:37:27 -07:00
Michael Bolin
0071968829 auth: let AuthManager own external bearer auth (#16287)
## Summary

`AuthManager` and `UnauthorizedRecovery` already own token resolution
and staged `401` recovery. The missing piece for provider auth was a
bearer-only mode that still fit that design, instead of pushing a second
auth abstraction into `codex-core`.

This PR keeps the design centered on `AuthManager`: it teaches
`codex-login` how to own external bearer auth directly so later provider
work can keep calling `AuthManager.auth()` and `UnauthorizedRecovery`.

## Motivation

This is the middle layer for #15189.

The intended design is still:

- `AuthManager` encapsulates token storage and refresh
- `UnauthorizedRecovery` powers staged `401` recovery
- all request tokens go through `AuthManager.auth()`

This PR makes that possible for provider-backed bearer tokens by adding
a bearer-only auth mode inside `AuthManager` instead of building
parallel request-auth plumbing in `core`.

## What Changed

- move `ModelProviderAuthInfo` into `codex-protocol` so `core` and
`login` share one config shape
- add `login/src/auth/external_bearer.rs`, which runs the configured
command, caches the bearer token in memory, and refreshes it after `401`
- add `AuthManager::external_bearer_only(...)` for provider-scoped
request paths that should use command-backed bearer auth without
mutating the shared OpenAI auth manager
- add `AuthManager::shared_with_external_chatgpt_auth_refresher(...)`
and rename the other `AuthManager` helpers that only apply to external
ChatGPT auth so the ChatGPT-only path is explicit at the call site
- keep external ChatGPT refresh behavior unchanged while ensuring
bearer-only external auth never persists to `auth.json`

## Testing

- `cargo test -p codex-login`
- `cargo test -p codex-protocol`

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16287).
* #16288
* __->__ #16287
2026-03-31 01:26:17 -07:00
Michael Bolin
ea650a91b3 auth: generalize external auth tokens for bearer-only sources (#16286)
## Summary

`ExternalAuthRefresher` was still shaped around external ChatGPT auth:
`ExternalAuthTokens` always implied ChatGPT account metadata even when a
caller only needed a bearer token.

This PR generalizes that contract so bearer-only sources are
first-class, while keeping the existing ChatGPT paths strict anywhere we
persist or rebuild ChatGPT auth state.

## Motivation

This is the first step toward #15189.

The follow-on provider-auth work needs one shared external-auth contract
that can do both of these things:

- resolve the current bearer token before a request is sent
- return a refreshed bearer token after a `401`

That should not require a second token result type just because there is
no ChatGPT account metadata attached.
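
A sketch of the generalized shape this implies; the field and method
names follow the description below, but the exact signatures are
assumptions (shown sync for brevity):

```rust
/// Placeholder for the ChatGPT account metadata type.
pub struct ExternalAuthChatgptMetadata {
    pub account_id: String, // illustrative field
}

/// Bearer token plus optional ChatGPT metadata, so bearer-only
/// sources no longer need a second token result type.
pub struct ExternalAuthTokens {
    pub access_token: String,
    pub chatgpt: Option<ExternalAuthChatgptMetadata>,
}

pub trait ExternalAuthRefresher {
    /// Optionally provide the current token before a request is sent.
    /// The default no-op keeps existing refreshers source-compatible.
    fn resolve(&self) -> Option<ExternalAuthTokens> {
        None
    }

    /// Return a refreshed bearer token after a `401`.
    fn refresh(&self) -> Result<ExternalAuthTokens, String>;
}
```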

## What Changed

- change `ExternalAuthTokens` to carry `access_token` plus optional
`ExternalAuthChatgptMetadata`
- add helper constructors for bearer-only tokens and ChatGPT-backed
tokens
- add `ExternalAuthRefresher::resolve()` with a default no-op
implementation so refreshers can optionally provide the current token
before a request is sent
- keep ChatGPT-only persistence strict by continuing to require ChatGPT
metadata anywhere the login layer seeds or reloads ChatGPT auth state
- update the app-server bridge to construct the new token shape for
external ChatGPT auth refreshes

## Testing

- `cargo test -p codex-login`


---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16286).
* #16288
* #16287
* __->__ #16286
2026-03-31 01:02:46 -07:00
Michael Bolin
19f0d196d1 ci: run Windows argument-comment-lint via native Bazel (#16120)
## Why

Follow-up to #16106.

`argument-comment-lint` already runs as a native Bazel aspect on Linux
and macOS, but Windows is still the long pole in `rust-ci`. To move
Windows onto the same native Bazel lane, the toolchain split has to let
exec-side helper binaries build in an MSVC environment while still
linting repo crates as `windows-gnullvm`.

Pushing the Windows lane onto the native Bazel path exposed a second
round of Windows-only issues in the mixed exec-toolchain plumbing after
the initial wrapper/target fixes landed.

## What Changed

- keep the Windows lint lanes on the native Bazel/aspect path in
`rust-ci.yml` and `rust-ci-full.yml`
- add a dedicated `local_windows_msvc` platform for exec-side helper
binaries while keeping `local_windows` as the `windows-gnullvm` target
platform
- patch `rules_rust` so `repository_set(...)` preserves explicit
exec-platform constraints for the generated toolchains, keep the
Windows-specific bootstrap/direct-link fixes needed for the nightly lint
driver, and expose exec-side `rustc-dev` `.rlib`s to the MSVC sysroot
- register the custom Windows nightly toolchain set with MSVC exec
constraints while still exposing both `x86_64-pc-windows-msvc` and
`x86_64-pc-windows-gnullvm` targets
- enable `dev_components` on the custom Windows nightly repository set
so the MSVC exec helper toolchain actually downloads the
compiler-internal crates that `clippy_utils` needs
- teach `run-argument-comment-lint-bazel.sh` to enumerate concrete
Windows Rust rules, normalize the resulting labels, and skip explicitly
requested incompatible targets instead of failing before the lint run
starts
- patch `rules_rust` build-script env propagation so exec-side
`windows-msvc` helper crates drop forwarded MinGW include and linker
search paths as whole flag/path pairs instead of emitting malformed
`CFLAGS`, `CXXFLAGS`, and `LDFLAGS`
- export the Windows VS/MSVC SDK environment in `setup-bazel-ci` and
pass the relevant variables through `run-bazel-ci.sh` via `--action_env`
/ `--host_action_env` so Bazel build scripts can see the MSVC and UCRT
headers on native Windows runs
- add inline comments to the Windows `setup-bazel-ci` MSVC environment
export step so it is easier to audit how `vswhere`, `VsDevCmd.bat`, and
the filtered `GITHUB_ENV` export fit together
- patch `aws-lc-sys` to skip its standalone `memcmp` probe under Bazel
`windows-msvc` build-script environments, which avoids a Windows-native
toolchain mismatch that blocked the lint lane before it reached the
aspect execution
- patch `aws-lc-sys` to prefer its bundled `prebuilt-nasm` objects for
Bazel `windows-msvc` build-script runs, which avoids missing
`generated-src/win-x86_64/*.asm` runfiles in the exec-side helper
toolchain
- annotate the Linux test-only callsites in `codex-rs/linux-sandbox` and
`codex-rs/core` that the wider native lint coverage surfaced

## Patches

This PR introduces a large patch stack because the Windows Bazel lint
lane currently depends on behavior that upstream dependencies do not
provide out of the box in the mixed `windows-gnullvm` target /
`windows-msvc` exec-toolchain setup.

- Most of the `rules_rust` patches look like upstream candidates rather
than OpenAI-only policy. Preserving explicit exec-platform constraints,
forwarding the right MSVC/UCRT environment into exec-side build scripts,
exposing exec-side `rustc-dev` artifacts, and keeping the Windows
bootstrap/linker behavior coherent all look like fixes to the Bazel/Rust
integration layer itself.
- The two `aws-lc-sys` patches are more tactical. They special-case
Bazel `windows-msvc` build-script environments to avoid a `memcmp` probe
mismatch and missing NASM runfiles. Those may be harder to upstream
as-is because they rely on Bazel-specific detection instead of a general
Cargo/build-script contract.
- Short term, carrying these patches in-tree is reasonable because they
unblock a real CI lane and are still narrow enough to audit. Long term,
the goal should not be to keep growing a permanent local fork of either
dependency.
- My current expectation is that the `rules_rust` patches are less
controversial and should be broken out into focused upstream proposals,
while the `aws-lc-sys` patches are more likely to be temporary escape
hatches unless that crate wants a more general hook for hermetic build
systems.

Suggested follow-up plan:

1. Split the `rules_rust` deltas into upstream-sized PRs or issues with
minimized repros.
2. Revisit the `aws-lc-sys` patches during the next dependency bump and
see whether they can be replaced by an upstream fix, a crate upgrade, or
a cleaner opt-in mechanism.
3. Treat each dependency update as a chance to delete patches one by one
so the local patch set only contains still-needed deltas.

## Verification

- `./.github/scripts/run-argument-comment-lint-bazel.sh
--config=argument-comment-lint --keep_going`
- `RUNNER_OS=Windows
./.github/scripts/run-argument-comment-lint-bazel.sh --nobuild
--config=argument-comment-lint --platforms=//:local_windows
--keep_going`
- `cargo test -p codex-linux-sandbox`
- `cargo test -p codex-core shell_snapshot_tests`
- `just argument-comment-lint`

## References

- #16106
2026-03-30 15:32:04 -07:00
Andrey Mishchenko
390b644b21 Update code mode exec() instructions (#16279) 2026-03-30 12:31:13 -10:00
rhan-oai
28a9807f84 [codex-analytics] refactor analytics to use reducer architecture (#16225)
- rework codex analytics crate to use reducer / publish architecture
- in anticipation of extensive codex analytics
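
A minimal sketch of the reducer/publish split, with all type and
function names assumed:

```rust
/// Accumulated analytics state; reducers fold events into this.
#[derive(Default)]
struct AnalyticsState {
    threads_initialized: u64,
}

enum AnalyticsEvent {
    ThreadInitialized,
}

/// Pure reducer: no I/O, just a state transition, which keeps event
/// handling easy to test as analytics coverage grows.
fn reduce(state: &mut AnalyticsState, event: &AnalyticsEvent) {
    match event {
        AnalyticsEvent::ThreadInitialized => state.threads_initialized += 1,
    }
}

/// Separate publish step: the only place that talks to the backend.
fn publish(state: &AnalyticsState) {
    println!("threads_initialized={}", state.threads_initialized);
}
```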
2026-03-30 14:27:12 -07:00
Michael Bolin
9313c49e4c fix: close Bazel argument-comment-lint CI gaps (#16253)
## Why

The Bazel-backed `argument-comment-lint` CI path had two gaps:

- Bazel wildcard target expansion skipped inline unit-test crates from
`src/` modules because the generated `*-unit-tests-bin` `rust_test`
targets are tagged `manual`.
- `argument-comment-mismatch` was still only a warning in the Bazel and
packaged-wrapper entrypoints, so a typoed `/*param_name*/` comment could
still pass CI even when the lint detected it.

That left CI blind to real linux-sandbox examples, including the missing
`/*local_port*/` comment in
`codex-rs/linux-sandbox/src/proxy_routing.rs` and typoed argument
comments in `codex-rs/linux-sandbox/src/landlock.rs`.

## What Changed

- Added `tools/argument-comment-lint/list-bazel-targets.sh` so Bazel
lint runs cover `//codex-rs/...` plus the manual `rust_test`
`*-unit-tests-bin` targets.
- Updated `just argument-comment-lint`, `rust-ci.yml`, and
`rust-ci-full.yml` to use that helper.
- Promoted both `argument-comment-mismatch` and
`uncommented-anonymous-literal-argument` to errors in every strict
entrypoint:
  - `tools/argument-comment-lint/lint_aspect.bzl`
  - `tools/argument-comment-lint/src/bin/argument-comment-lint.rs`
  - `tools/argument-comment-lint/wrapper_common.py`
- Added wrapper/bin coverage for the stricter lint flags and documented
the behavior in `tools/argument-comment-lint/README.md`.
- Fixed the now-covered callsites in
`codex-rs/linux-sandbox/src/proxy_routing.rs`,
`codex-rs/linux-sandbox/src/landlock.rs`, and
`codex-rs/core/src/shell_snapshot_tests.rs`.

This keeps the Bazel target expansion narrow while making the Bazel and
prebuilt-linter paths enforce the same strict lint set.

## Verification

- `python3 -m unittest discover -s tools/argument-comment-lint -p
'test_*.py'`
- `cargo +nightly-2025-09-18 test --manifest-path
tools/argument-comment-lint/Cargo.toml`
- `just argument-comment-lint`
2026-03-30 11:59:50 -07:00
Michael Bolin
258ba436f1 codex-tools: extract discoverable tool models (#16254)
## Why

`#16193` moved the pure `tool_search` and `tool_suggest` spec builders
into `codex-tools`, but `codex-core` still owned the shared
discoverable-tool model that those builders and the `tool_suggest`
runtime both depend on. This change continues the migration by moving
that reusable model boundary out of `codex-core` as well, so the
discovery/suggestion stack uses one shared set of types and
`core/src/tools` no longer needs its own `discoverable.rs` module.

## What changed

- Moved `DiscoverableTool`, `DiscoverablePluginInfo`, and
`filter_tool_suggest_discoverable_tools_for_client()` into
`codex-rs/tools/src/tool_discovery.rs` alongside the extracted
discovery/suggestion spec builders.
- Added `codex-app-server-protocol` as a `codex-tools` dependency so the
shared discoverable-tool model can own the connector-side `AppInfo`
variant directly.
- Updated `core/src/tools/handlers/tool_suggest.rs`,
`core/src/tools/spec.rs`, `core/src/tools/router.rs`,
`core/src/connectors.rs`, and `core/src/codex.rs` to consume the shared
`codex-tools` model instead of the old core-local declarations.
- Changed `core/src/plugins/discoverable.rs` to return
`DiscoverablePluginInfo` directly, moved the pure client-filter coverage
into `tool_discovery_tests.rs`, and deleted the old
`core/src/tools/discoverable.rs` module.
- Updated `codex-rs/tools/README.md` so the crate boundary documents
that `codex-tools` now owns the discoverable-tool models in addition to
the discovery/suggestion spec builders.

## Test plan

- `cargo test -p codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-discoverable-model cargo test -p
codex-core --lib tools::handlers::tool_suggest::`
- `CARGO_TARGET_DIR=/tmp/codex-core-discoverable-model cargo test -p
codex-core --lib tools::spec::`
- `CARGO_TARGET_DIR=/tmp/codex-core-discoverable-model cargo test -p
codex-core --lib plugins::discoverable::`
- `just bazel-lock-check`
- `just argument-comment-lint`

## References

- #16193
- #16154
- #15923
- #15928
- #15944
- #15953
- #16031
- #16047
- #16129
- #16132
- #16138
- #16141
2026-03-30 10:48:49 -07:00
Michael Bolin
716f7b0428 codex-tools: extract discovery tool specs (#16193)
## Why

`core/src/tools/spec.rs` still owned the pure `tool_search` and
`tool_suggest` spec builders even though that logic no longer needed
`codex-core` runtime state. This change continues the `codex-tools`
migration by moving the reusable discovery and suggestion spec
construction out of `codex-core` so `spec.rs` is left with the
core-owned policy decisions about when these tools are exposed and what
metadata is available.

## What changed

- Added `codex-rs/tools/src/tool_discovery.rs` with the shared
`tool_search` and `tool_suggest` spec builders, plus focused unit tests
in `tool_discovery_tests.rs`.
- Moved the shared `DiscoverableToolAction` and `DiscoverableToolType`
declarations into `codex-tools` so the `tool_suggest` handler and the
extracted spec builders use the same wire-model enums.
- Updated `core/src/tools/spec.rs` to translate `ToolInfo` and
`DiscoverableTool` values into neutral `codex-tools` inputs and delegate
the actual spec building there.
- Removed the old template-based description rendering helpers from
`core/src/tools/spec.rs` and deleted the now-dead helper methods in
`core/src/tools/discoverable.rs`.
- Updated `codex-rs/tools/README.md` to document that discovery and
suggestion models/spec builders now live in `codex-tools`.

## Test plan

- `cargo test -p codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-discovery-specs cargo test -p
codex-core --lib tools::spec::`
- `CARGO_TARGET_DIR=/tmp/codex-core-discovery-specs cargo test -p
codex-core --lib tools::handlers::tool_suggest::`
- `just argument-comment-lint`

## References

- #16154
- #15923
- #15928
- #15944
- #15953
- #16031
- #16047
- #16129
- #16132
- #16138
- #16141
2026-03-30 08:15:12 -07:00
jif-oai
c74190a622 fix: ma1 (#16237) 2026-03-30 15:42:17 +02:00
jif-oai
213756c9ab feat: add mailbox concept for wait (#16010)
Add a mailbox we can use for inter-agent communication.
`wait` is now based on it and no longer takes a target.
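
A minimal sketch of the mailbox idea, with all names assumed: each agent
registers a channel, peers post into it, and `wait` drains the caller's
own mailbox instead of targeting a specific agent:

```rust
use std::collections::HashMap;
use tokio::sync::mpsc;

struct Mailboxes {
    senders: HashMap<String, mpsc::UnboundedSender<String>>,
}

impl Mailboxes {
    /// Give an agent a mailbox; the receiver is what `wait` drains.
    fn register(&mut self, agent_id: &str) -> mpsc::UnboundedReceiver<String> {
        let (tx, rx) = mpsc::unbounded_channel();
        self.senders.insert(agent_id.to_string(), tx);
        rx
    }

    /// Post a message into another agent's mailbox.
    fn send(&self, to: &str, msg: String) {
        if let Some(tx) = self.senders.get(to) {
            let _ = tx.send(msg);
        }
    }
}
```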
2026-03-30 11:47:20 +02:00
Eric Traut
bb95ec3ec6 [codex] Normalize Windows path in MCP startup snapshot test (#16204)
## Summary
A Windows-only snapshot assertion in the app-server MCP startup warning
test compared the raw rendered path, so CI saw `C:\tmp\project` instead
of the normalized `/tmp/project` snapshot fixture.

## Fix
Route that snapshot assertion through the existing
`normalize_snapshot_paths(...)` helper so the test remains
platform-stable.
2026-03-29 17:54:17 -06:00
Michael Bolin
af568afdd5 codex-tools: extract utility tool specs (#16154)
## Why

The previous `codex-tools` migration steps moved the shared schema
models, local-host specs, collaboration specs, and related adapters out
of `codex-core`, but `core/src/tools/spec.rs` still contained a grab bag
of pure utility tool builders. Those specs do not need session state or
handler logic; they only describe wire shapes for tools that
`codex-core` already knows how to execute.

Moving that remaining low-coupling layer into `codex-tools` keeps the
migration moving in meaningful chunks and trims another large block of
passive tool-spec construction out of `codex-core` without touching the
runtime-coupled handlers.

## What changed

- extended `codex-tools` to own the pure spec builders for:
  - code-mode `exec` / `wait`
  - `js_repl` / `js_repl_reset`
  - MCP resource tools `list_mcp_resources`,
`list_mcp_resource_templates`, and `read_mcp_resource`
  - utility tools `list_dir` and `test_sync_tool`
- split those builders across small module files with sibling
`*_tests.rs` coverage, keeping `src/lib.rs` exports-only
- rewired `core/src/tools/spec.rs` to call the extracted builders and
deleted the duplicated core-local implementations
- moved the direct JS REPL grammar seam test out of
`core/src/tools/spec_tests.rs` so it now lives with the extracted
implementation in `codex-tools`
- updated `codex-rs/tools/README.md` so the documented crate boundary
matches the new utility-spec surface

## Test plan

- `CARGO_TARGET_DIR=/tmp/codex-tools-utility-specs cargo test -p
codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-utility-specs cargo test -p
codex-core --lib tools::spec::`
- `just fix -p codex-tools -p codex-core`
- `just argument-comment-lint`

## References

- #15923
- #15928
- #15944
- #15953
- #16031
- #16047
- #16129
- #16132
- #16138
- #16141
2026-03-29 14:34:36 -07:00
Eric Traut
38e648ca67 Fix tui_app_server ghost subagent entries in /agent (#16110)
Fixes #16092

The app-server-backed TUI could accumulate ghost subagent entries in
`/agent` after resume/backfill flows. Some of those rows were no longer
live according to the backend, but still appeared selectable in the
picker and could open as blank threads.

*Cause*
Unlike the legacy tui behavior, tui_app_server was creating local
picker/replay state for subagents discovered through metadata refresh
and loaded-thread backfill, even when no real local session or
transcript had been attached. That let stale ids survive in the picker
as if they were replayable threads.

*Fix*
Stop creating empty local thread channels during subagent metadata
hydration and loaded-thread backfill.
When opening /agent, prune metadata-only entries that thread/read
reports as terminally unavailable.
When selecting a discovered subagent that is still live but not yet
locally attached, materialize a real local session on demand from
thread/read instead of falling back to an empty replay state.
2026-03-29 12:19:34 -06:00
Eric Traut
54d3ad1ede Fix app-server TUI MCP startup warnings regression (#16041)
This addresses #16038

The default `tui_app_server` path stopped surfacing MCP startup failures
during cold start, even though the legacy TUI still showed warnings like
`MCP startup incomplete (...)`. The app-server bridge emitted per-server
startup status notifications, but `tui_app_server` ignored them, so
failed MCP handshakes could look like a clean startup.

This change teaches `tui_app_server` to consume MCP startup status
notifications, preserve the immediate per-server failure warning, and
synthesize the same aggregate startup warning the legacy TUI shows once
startup settles.
2026-03-29 11:57:00 -06:00
Michael Bolin
7880414a27 codex-tools: extract collaboration tool specs (#16141)
## Why

The recent `codex-tools` migration steps have moved shared tool models
and low-coupling spec helpers out of `codex-core`, but
`core/src/tools/spec.rs` still owned a large block of pure
collaboration-tool spec construction. Those builders do not need session
state or runtime behavior; they only need a small amount of core-owned
configuration injected at the seam.

Moving that cohesive slice into `codex-tools` makes the crate boundary
more honest and removes a substantial amount of passive tool-spec logic
from `codex-core` without trying to move the runtime-coupled multi-agent
handlers at the same time.

## What changed

- added `agent_tool.rs`, `request_user_input_tool.rs`, and
`agent_job_tool.rs` to `codex-tools`, with sibling `*_tests.rs` coverage
and an exports-only `lib.rs`
- moved the pure `ToolSpec` builders for:
  - collaboration tools such as `spawn_agent`, `send_input`,
`send_message`, `assign_task`, `resume_agent`, `wait_agent`,
`list_agents`, and `close_agent`
  - `request_user_input`
  - agent-job specs `spawn_agents_on_csv` and `report_agent_job_result`
- rewired `core/src/tools/spec.rs` to call the extracted builders while
still supplying the core-owned inputs, such as spawn-agent role
descriptions and wait timeout bounds
- updated the `core/src/tools/spec.rs` seam tests to build expected
collaboration specs through `codex-tools`
- updated `codex-rs/tools/README.md` so the crate documentation reflects
the broader collaboration-tool boundary

## Test plan

- `CARGO_TARGET_DIR=/tmp/codex-tools-collab-specs cargo test -p
codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-collab-specs cargo test -p
codex-core --lib tools::spec::`
- `just fix -p codex-tools -p codex-core`
- `just argument-comment-lint`

## References

- #15923
- #15928
- #15944
- #15953
- #16031
- #16047
- #16129
- #16132
- #16138
2026-03-28 20:39:47 -07:00
Matthew Zeng
3807807f91 [mcp] Increase MCP startup timeout. (#16080)
- [x] Increase MCP startup timeout to 30s, as the current 10s causes a
lot of local MCPs to time out.
2026-03-28 19:58:00 -07:00
Eric Traut
3bbc1ce003 Remove TUI voice transcription feature (#16114)
Removes the partially-completed TUI composer voice transcription flow,
including its feature flag, app events, and hold-to-talk state machine.
2026-03-29 00:20:25 +00:00
Michael Bolin
4e119a3b38 codex-tools: extract local host tool specs (#16138)
## Why

`core/src/tools/spec.rs` still bundled a set of pure local-host tool
builders with the orchestration that actually decides when those tools
are exposed and which handlers back them. That made `codex-core`
responsible for JSON/tool-shape construction that does not depend on
session state, and it kept the `codex-tools` migration from taking a
meaningfully larger bite out of `spec.rs`.

This PR moves that reusable spec-building layer into `codex-tools` while
leaving feature gating, handler registration, and runtime-coupled
descriptions in `codex-core`.

## What changed

- added `codex-rs/tools/src/local_tool.rs` for the pure builders for
`exec_command`, `write_stdin`, `shell`, `shell_command`, and
`request_permissions`
- added `codex-rs/tools/src/view_image.rs` for the `view_image` tool
spec and output schema so the extracted modules stay right-sized
- rewired `codex-rs/core/src/tools/spec.rs` to call those extracted
builders instead of constructing these specs inline
- kept the `request_permissions` description source in `codex-core`,
with `codex-tools` taking the description as input so the crate boundary
does not grow a dependency on handler/runtime code
- moved the direct constructor coverage for this slice from
`codex-rs/core/src/tools/spec_tests.rs` into
`codex-rs/tools/src/local_tool_tests.rs` and
`codex-rs/tools/src/view_image_tests.rs`
- updated `codex-rs/tools/README.md` to reflect that `codex-tools` now
owns this local-host spec layer

## Test plan

- `CARGO_TARGET_DIR=/tmp/codex-tools-local-host cargo test -p
codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-local-tools cargo test -p codex-core
--lib tools::spec::`
- `just argument-comment-lint`

## References

- #15923
- #15928
- #15944
- #15953
- #16031
- #16047
- #16129
- #16132
2026-03-28 16:33:58 -07:00
Eric Traut
46b653e73c Fix skills picker scrolling in tui app server (#16109)
Fixes #16091.

The app-server TUI was truncating the filtered mention candidate list to
`MAX_POPUP_ROWS`, so the `$` skills picker only exposed the first 8
matches. That made it look like many skills were missing and prevented
keyboard navigation beyond the first page, even though direct
`$skill-name` insertion still worked.
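
The shape of the fix, sketched with assumed names: keep the full
filtered list for navigation and slice only the rows that actually get
rendered:

```rust
/// Return the slice of matches to render, keeping `selected` visible
/// while the full list stays available for keyboard navigation.
fn visible_rows(matches: &[String], selected: usize, max_rows: usize) -> &[String] {
    let start = selected.saturating_sub(max_rows.saturating_sub(1));
    let end = (start + max_rows).min(matches.len());
    &matches[start..end]
}
```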

Testing: I manually verified the regression and confirmed the fix.
2026-03-28 17:22:25 -06:00
Michael Bolin
f7ef9599ed exec: make review-policy tests hermetic (#16137)
## Why

`thread_start_params_from_config()` is supposed to forward the effective
`approvals_reviewer` into the app-server request, but these tests were
constructing that config through `ConfigBuilder::build()`, which also
loads ambient system and managed config layers. On machines with an
admin or host-level reviewer override, the manual-only case could
inherit `guardian_subagent` and fail even though the exec-side mapping
was correct.

## What changed

- Set `approvals_reviewer` explicitly via `harness_overrides` in the two
`thread_start_params_*review_policy*` tests in
`codex-rs/exec/src/lib.rs`.
- Removed the dependence on default config resolution and temp
`config.toml` writes so the tests exercise only the reviewer-to-request
mapping in `codex-exec`.

## Testing

- `cargo test -p codex-exec`
2026-03-28 23:01:04 +00:00
Michael Bolin
a16a9109d7 ci: use BuildBuddy for rust-ci-full non-Windows argument-comment-lint (#16136)
## Why

PR #16130 fixed the Windows `argument-comment-lint` regression in
`rust-ci-full`, but the next `main` runs still left the Linux and macOS
lint legs timing out.

In [run
23695263729](https://github.com/openai/codex/actions/runs/23695263729),
both non-Windows `argument-comment-lint` jobs were cancelled almost
exactly 30 minutes after they started. The remaining workflow difference
versus `rust-ci.yml` was that `rust-ci-full` did not pass
`BUILDBUDDY_API_KEY` into the non-Windows Bazel lint step, so
`run-bazel-ci.sh` fell back to local Bazel configuration instead of
using the faster remote-backed path available on `main`.

## What changed

- passed `BUILDBUDDY_API_KEY` to the non-Windows `rust-ci-full`
`argument-comment-lint` Bazel step
- left the Windows packaged-wrapper path from #16130 unchanged
- kept the change scoped to `rust-ci-full.yml`

## Test plan

- loaded `.github/workflows/rust-ci-full.yml` and
`.github/workflows/rust-ci.yml` with `python3` + `yaml.safe_load(...)`
- inspected run `23695263729` and confirmed `Argument comment lint -
Linux` and `Argument comment lint - macOS` were cancelled about 30
minutes after start
- verified the updated `rust-ci-full` step now matches the non-Windows
secret wiring already present in `rust-ci.yml`

## References

- #16130
- #16106
2026-03-28 15:36:01 -07:00
Michael Bolin
2238c16a91 codex-tools: extract code mode tool spec adapters (#16132)
## Why

The longer-term `codex-tools` migration is to move pure tool-definition
and tool-spec plumbing out of `codex-core` while leaving session- and
runtime-coupled orchestration behind.

The remaining code-mode adapter layer in
`core/src/tools/code_mode_description.rs` was a good next extraction
seam because it only transformed `ToolSpec` values for code mode and
already delegated the low-level description rendering to
`codex-code-mode`.

## What Changed

- added `codex-rs/tools/src/code_mode.rs` with
`augment_tool_spec_for_code_mode()` and
`tool_spec_to_code_mode_tool_definition()`
- added focused unit coverage in `codex-rs/tools/src/code_mode_tests.rs`
- rewired `core/src/tools/spec.rs` and `core/src/tools/code_mode/mod.rs`
to use the extracted adapters from `codex-tools`
- removed the old `core/src/tools/code_mode_description.rs` shim and its
test file from `codex-core`
- added the `codex-code-mode` dependency to `codex-tools`, updated
`Cargo.lock`, and refreshed the `codex-tools` README to reflect the
expanded boundary

## Test Plan

- `cargo test -p codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-code-mode-adapters cargo test -p
codex-core --lib tools::spec::`
- `CARGO_TARGET_DIR=/tmp/codex-core-code-mode-adapters cargo test -p
codex-core --lib tools::code_mode::`
- `just bazel-lock-update`
- `just bazel-lock-check`
- `just argument-comment-lint`

## References

- #15923
- #15928
- #15944
- #15953
- #16031
- #16047
- #16129
2026-03-28 15:32:35 -07:00
Michael Bolin
c25c0d6e9e core: fix stale curated plugin cache refresh races (#16126)
## Why

The `plugin/list` force-sync path can race app-server startup's curated
plugin cache refresh.

Startup was capturing the configured curated plugin IDs from the initial
config snapshot. If `plugin/list` with `forceRemoteSync` removed curated
plugin entries from `config.toml` while that background refresh was
still in flight, the startup task could recreate cache directories for
plugins that had just been uninstalled.

That leaves the `plugin/list` response logically correct but the on-disk
cache stale, which matches the flaky Ubuntu arm failure seen in
`codex-app-server::all
suite::v2::plugin_list::plugin_list_force_remote_sync_reconciles_curated_plugin_state`
while validating [#16047](https://github.com/openai/codex/pull/16047).

## What

- change `codex-rs/core/src/plugins/manager.rs` so startup curated-repo
refresh rereads the current user `config.toml` before deciding which
curated plugin cache entries to refresh
- factor the configured-plugin parsing so the same logic can be reused
from either the config layer stack or the persisted user config value
- add a regression test that verifies curated plugin IDs are read from
the latest user config state before cache refresh runs

## Testing

- `cargo test -p codex-core
configured_curated_plugin_ids_from_codex_home_reads_latest_user_config
-- --nocapture`
- `cargo test -p codex-app-server
suite::v2::plugin_list::plugin_list_force_remote_sync_reconciles_curated_plugin_state
-- --nocapture`
- `just argument-comment-lint`
2026-03-28 15:00:39 -07:00
Michael Bolin
313fb95989 ci: keep rust-ci-full Windows argument-comment-lint on packaged wrapper (#16130)
## Why

PR #16106 switched `rust-ci-full` over to the native Bazel-backed
`argument-comment-lint` path on all three platforms.

That works on Linux and macOS, but the Windows leg in `rust-ci-full` now
fails before linting starts: Bazel dies while building `rules_rust`'s
`process_wrapper` tool, so `main` reports an `argument-comment-lint`
failure even though no Rust lint finding was produced.

Until native Windows Bazel linting is repaired, `rust-ci-full` should
keep the same Windows split that `rust-ci.yml` already uses.

## What changed

- restored the Windows-only nightly `argument-comment-lint` toolchain
setup in `rust-ci-full`
- limited the Bazel-backed lint step in `rust-ci-full` to non-Windows
runners
- routed the Windows runner back through
`tools/argument-comment-lint/run-prebuilt-linter.py`
- left the Linux and macOS `rust-ci-full` behavior unchanged

## Test plan

- loaded `.github/workflows/rust-ci-full.yml` and
`.github/workflows/rust-ci.yml` with `python3` + `yaml.safe_load(...)`
- inspected failing Actions run `23692864849`, especially job
`69023229311`, to confirm the Windows failure occurs in Bazel
`process_wrapper` setup before lint output is emitted

## References

- #16106
2026-03-28 14:50:19 -07:00
Michael Bolin
4e27a87ec6 codex-tools: extract configured tool specs (#16129)
## Why

This continues the `codex-tools` migration by moving another passive
tool-spec layer out of `codex-core`.

After `ToolSpec` moved into `codex-tools`, `codex-core` still owned
`ConfiguredToolSpec` and `create_tools_json_for_responses_api()`. Both
are data-model and serialization helpers rather than runtime
orchestration, so keeping them in `core/src/tools/registry.rs` and
`core/src/tools/spec.rs` left passive tool-definition code coupled to
`codex-core` longer than necessary.

## What changed

- moved `ConfiguredToolSpec` into `codex-rs/tools/src/tool_spec.rs`
- moved `create_tools_json_for_responses_api()` into
`codex-rs/tools/src/tool_spec.rs`
- re-exported the new surface from `codex-rs/tools/src/lib.rs`, which
remains exports-only
- updated `core/src/client.rs`, `core/src/tools/registry.rs`, and
`core/src/tools/router.rs` to consume the extracted types and serializer
from `codex-tools`
- moved the tool-list serialization test into
`codex-rs/tools/src/tool_spec_tests.rs`
- added focused unit coverage for `ConfiguredToolSpec::name()`
- simplified `core/src/tools/spec_tests.rs` to use the extracted
`ConfiguredToolSpec::name()` directly and removed the now-redundant
local `tool_name()` helper
- updated `codex-rs/tools/README.md` so the crate boundary reflects the
newly extracted tool-spec wrapper and serialization helper

## Test plan

- `cargo test -p codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-configured-spec cargo test -p
codex-core --lib tools::spec::`
- `CARGO_TARGET_DIR=/tmp/codex-core-configured-spec cargo test -p
codex-core --lib client::`
- `just fix -p codex-tools -p codex-core`
- `just argument-comment-lint`

## References

- #15923
- #15928
- #15944
- #15953
- #16031
- #16047
2026-03-28 14:24:14 -07:00
Michael Bolin
ae8a3be958 bazel: refresh the expired macOS SDK pin (#16128)
## Why

macOS BuildBuddy started failing before target analysis because the
Apple CDN object pinned in
[`MODULE.bazel`](fce0f76d57/MODULE.bazel (L28-L36))
now returns `403 Forbidden`. The failure report that triggered this
change was this [BuildBuddy
invocation](https://app.buildbuddy.io/invocation/c57590e0-1bdb-4e19-a86f-74d4a7ded228).

This repo uses `@llvm//extensions:osx.bzl` via `osx.from_archive(...)`,
and that API does not discover a current SDK URL for us. It fetches
exactly the `urls`, `sha256`, and `strip_prefix` we pin. Once Apple
retires that `swcdn.apple.com` object, `@macos_sdk` stops resolving and
every downstream macOS build fails during external repository fetch.

This is the same basic failure mode we hit in
[b9fa08ec61](b9fa08ec61):
the pin itself aged out.

## How I tracked it down

1. I started from the BuildBuddy error and copied the exact
`swcdn.apple.com/.../CLTools_macOSNMOS_SDK.pkg` URL from the failure.
2. I reproduced the issue outside CI by opening that URL directly in a
browser and by running `curl -I` against it locally. Both returned `403
Forbidden`, which ruled out BuildBuddy as the root cause.
3. I searched the repo for that URL and found it hardcoded in
`MODULE.bazel`.
4. I inspected the `llvm` Bzlmod `osx` extension implementation to
confirm that `osx.from_archive(...)` is just a literal fetch of the
pinned archive metadata. There is no automatic fallback or catalog
lookup behind it.
5. I queried Apple's software update catalogs to find the current
Command Line Tools package for macOS 26.x. The useful catalog was:
-
`https://swscan.apple.com/content/catalogs/others/index-26-15-14-13-12-10.16-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog.gz`

This is scriptable; it does not require opening a website in a browser.
The catalog is a gzip-compressed plist served over HTTP, so the workflow
is just:

   1. fetch the catalog,
   2. decompress it,
   3. search or parse the plist for `CLTools_macOSNMOS_SDK.pkg` entries,
   4. inspect the matching product metadata.

   The quick shell version I used was:

   ```shell
   curl -L <catalog-url> \
     | gzip -dc \
     | rg -n -C 6 'CLTools_macOSNMOS_SDK\.pkg|PostDate|English\.dist'
   ```

That is enough to surface the current product id, package URL, post
date, and the matching `.dist` file. If we want something less
grep-driven next time, the same catalog can be parsed structurally. For
example:

   ```python
   import gzip
   import plistlib
   import urllib.request

   url = "https://swscan.apple.com/content/catalogs/others/index-26-15-14-13-12-10.16-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog.gz"
   with urllib.request.urlopen(url) as resp:
       catalog = plistlib.loads(gzip.decompress(resp.read()))

   for product_id, product in catalog["Products"].items():
       for package in product.get("Packages", []):
           package_url = package.get("URL", "")
           if package_url.endswith("CLTools_macOSNMOS_SDK.pkg"):
               print(product_id)
               print(product.get("PostDate"))
               print(package_url)
               print(product.get("Distributions", {}).get("English"))
   ```

In practice, `curl` was only the transport. The important part is that
the catalog itself is a machine-readable plist, so this can be
automated.
6. That catalog contains the newer `047-96692` Command Line Tools
release, and its distribution file identifies it as [Command Line Tools
for Xcode
26.4](https://swdist.apple.com/content/downloads/32/53/047-96692-A_OAHIHT53YB/ybtshxmrcju8m2qvw3w5elr4rajtg1x3y3/047-96692.English.dist).
7. I downloaded that package locally, computed its SHA-256, expanded it
with `pkgutil --expand-full`, and verified that it contains
`Payload/Library/Developer/CommandLineTools/SDKs/MacOSX26.4.sdk`, which
is the correct new `strip_prefix` for this pin.

The core debugging loop looked like this:

```shell
curl -I <stale swcdn URL>
rg 'swcdn\.apple\.com|osx\.from_archive' MODULE.bazel
curl -L <apple 26.x sucatalog> | gzip -dc | rg 'CLTools_macOSNMOS_SDK.pkg'
pkgutil --expand-full CLTools_macOSNMOS_SDK.pkg expanded
find expanded/Payload/Library/Developer/CommandLineTools/SDKs -maxdepth 1 -mindepth 1
```

## What changed

- Updated `MODULE.bazel` to point `osx.from_archive(...)` at the
currently live `047-96692` `CLTools_macOSNMOS_SDK.pkg` object.
- Updated the pinned `sha256` to match that package.
- Updated the `strip_prefix` from `MacOSX26.2.sdk` to `MacOSX26.4.sdk`.

## Verification

- `bazel --output_user_root="$(mktemp -d
/tmp/codex-bazel-sdk-fetch.XXXXXX)" build @macos_sdk//sysroot`

## Notes for next time

As long as we pin raw `swcdn.apple.com` objects, this will likely happen
again. When it does, the expected recovery path is:

1. Reproduce the `403` against the exact URL from CI.
2. Find the stale pin in `MODULE.bazel`.
3. Look up the current CLTools package in the relevant Apple software
update catalog for that macOS major version.
4. Download the replacement package and refresh both `sha256` and
`strip_prefix`.
5. Validate the new pin with a fresh `@macos_sdk` fetch, not just an
incremental Bazel build.

The important detail is that the non-`26` catalog did not surface the
macOS 26.x SDK package here; the `index-26-15-14-...` catalog was the
one that exposed the currently live replacement.
2026-03-28 21:08:19 +00:00
Michael Bolin
bc53d42fd9 codex-tools: extract tool spec models (#16047)
## Why

This continues the `codex-tools` migration by moving another passive
tool-definition layer out of `codex-core`.

After `ResponsesApiTool` and the lower-level schema adapters moved into
`codex-tools`, `core/src/client_common.rs` was still owning `ToolSpec`
and the web-search request wire types even though they are serialized
data models rather than runtime orchestration. Keeping those types in
`codex-core` makes the crate boundary look smaller than it really is and
leaves non-runtime tool-shape code coupled to core.
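
As an illustration of the kind of passive data model being moved (the
variant fields and serde attributes here are assumptions, not the actual
definitions):

```rust
use serde::Serialize;

/// Pure wire shape: serialization only, no session or runtime state.
#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum ToolSpec {
    WebSearch {
        #[serde(skip_serializing_if = "Option::is_none")]
        filters: Option<Vec<String>>,
    },
    ToolSearch,
}

impl ToolSpec {
    /// The wire name, as exercised by the `ToolSpec::name()` tests.
    fn name(&self) -> &'static str {
        match self {
            ToolSpec::WebSearch { .. } => "web_search",
            ToolSpec::ToolSearch => "tool_search",
        }
    }
}
```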

## What changed

- moved `ToolSpec`, `ResponsesApiWebSearchFilters`, and
`ResponsesApiWebSearchUserLocation` into
`codex-rs/tools/src/tool_spec.rs`
- added focused unit tests in `codex-rs/tools/src/tool_spec_tests.rs`
for:
  - `ToolSpec::name()`
  - web-search config conversions
  - `ToolSpec` serialization for `web_search` and `tool_search`
- kept `codex-rs/tools/src/lib.rs` exports-only by re-exporting the new
module from `lib.rs`
- reduced `core/src/client_common.rs` to a compatibility shim that
re-exports the extracted tool-spec types for current core call sites
- updated `core/src/tools/spec_tests.rs` to consume the extracted
web-search types directly from `codex-tools`
- updated `codex-rs/tools/README.md` so the crate contract reflects that
`codex-tools` now owns the passive tool-spec request models in addition
to the lower-level Responses API structs

## Test plan

- `cargo test -p codex-tools`
- `cargo test -p codex-core --lib tools::spec::`
- `cargo test -p codex-core --lib client_common::`
- `just fix -p codex-tools -p codex-core`
- `just argument-comment-lint`

## References

- #15923
- #15928
- #15944
- #15953
- #16031
2026-03-28 13:37:00 -07:00
Eric Traut
178d2b00b1 Remove the codex-tui app-server originator workaround (#16116)
## Summary
- remove the temporary `codex-tui` special-case when setting the default
originator during app-server initialization
2026-03-28 13:53:33 -06:00
Eric Traut
48144a7fa4 Remove remaining custom prompt support (#16115)
## Summary
- remove protocol and core support for discovering and listing custom
prompts
- simplify the TUI slash-command flow and command popup to built-in
commands only
- delete obsolete custom prompt tests, helpers, and docs references
- clean up downstream event handling for the removed protocol events
2026-03-28 13:49:37 -06:00
Michael Bolin
fce0f76d57 build: migrate argument-comment-lint to a native Bazel aspect (#16106)
## Why

`argument-comment-lint` had become a PR bottleneck because the repo-wide
lane was still effectively running a `cargo dylint`-style flow across
the workspace instead of reusing Bazel's Rust dependency graph. That
kept the lint enforced, but it threw away the main benefit of moving
this job under Bazel in the first place: metadata reuse and cacheable
per-target analysis in the same shape as Clippy.

This change moves the repo-wide lint onto a native Bazel Rust aspect so
Linux and macOS can lint `codex-rs` without rebuilding the world
crate-by-crate through the wrapper path.

## What Changed

- add a nightly Rust toolchain with `rustc-dev` for Bazel and a
dedicated crate-universe repo for `tools/argument-comment-lint`
- add `tools/argument-comment-lint/driver.rs` and
`tools/argument-comment-lint/lint_aspect.bzl` so Bazel can run the lint
as a custom `rustc_driver`
- switch repo-wide `just argument-comment-lint` and the Linux/macOS
`rust-ci` lanes to `bazel build --config=argument-comment-lint
//codex-rs/...`
- keep the Python/DotSlash wrappers as the package-scoped fallback path
and as the current Windows CI path
- gate the Dylint entrypoint behind a `bazel_native` feature so the
Bazel-native library avoids the `dylint_*` packaging stack
- update the aspect runtime environment so the driver can locate
`rustc_driver` correctly under remote execution
- keep the dedicated `tools/argument-comment-lint` package tests and
wrapper unit tests in CI so the source and packaged entrypoints remain
covered

## Verification

- `python3 -m unittest discover -s tools/argument-comment-lint -p
'test_*.py'`
- `cargo test` in `tools/argument-comment-lint`
- `bazel build
//tools/argument-comment-lint:argument-comment-lint-driver
--@rules_rust//rust/toolchain/channel=nightly`
- `bazel build --config=argument-comment-lint
//codex-rs/utils/path-utils:all`
- `bazel build --config=argument-comment-lint
//codex-rs/rollout:rollout`

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16106).
* #16120
* __->__ #16106
2026-03-28 12:41:56 -07:00
Michael Bolin
65f631c3d6 fix: fix comment linter lint violations in Linux-only code (#16118)
https://github.com/openai/codex/pull/16071 took care of this for
Windows, so this takes care of things for Linux.

We don't touch the CI jobs in this PR because
https://github.com/openai/codex/pull/16106 is going to be the real fix
there (including a major speedup!).
2026-03-28 11:09:41 -07:00
Eric Traut
61429a6c10 Rename tui_app_server to tui (#16104)
This is a follow-up to https://github.com/openai/codex/pull/15922. That
previous PR deleted the old `tui` directory and left the new
`tui_app_server` directory in place. This PR renames `tui_app_server` to
`tui` and fixes up all references.
2026-03-28 11:23:07 -06:00
Eric Traut
3d1abf3f3d Update PR babysitter skill for review replies and resolution (#16112)
This PR updates the "PR Babysitter" skill to clarify that non-actionable
review comments should receive a direct reply explaining why no change
is needed, and that actionable review comments should be marked
"resolved" after they are addressed.
2026-03-28 10:35:20 -06:00
Felipe Coury
bede1d9e23 fix(tui): refresh footer on collaboration mode changes (#16026)
## Summary
- Moves status surface refresh (`refresh_status_surfaces` /
`refresh_status_line`) from `App` event handlers into `ChatWidget`
setters via a new `refresh_model_dependent_surfaces()` method
- Ensures model-dependent UI stays in sync whenever collaboration mode,
model, or reasoning effort changes, including the footer and terminal
title in both `tui` and `tui_app_server`
- Applies the fix to both `tui` and `tui_app_server` widgets

#15961

## Test plan
- [x] Added snapshot test
`status_line_model_with_reasoning_plan_mode_footer` verifying footer
renders correctly in plan mode
- [x] Added
`terminal_title_model_updates_on_model_change_without_manual_refresh` in
`tui_app_server`
- [ ] Verify switching collaboration modes updates the footer in real
TUI
- [ ] Verify model/reasoning effort changes reflect in the status bar
and terminal title

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2026-03-28 08:55:32 -06:00
Michael Bolin
e39ddc61b1 bazel: add Windows gnullvm stack flags to unit test binaries (#16074)
## Summary

Add the Windows gnullvm stack-reserve flags to the `*-unit-tests-bin`
path in `codex_rust_crate()`.

## Why

This is the narrow code fix behind the earlier review comment on
[#16067](https://github.com/openai/codex/pull/16067). That comment was
stale relative to the workflow-only PR it landed on, but it pointed at a
real gap in `defs.bzl`.

Today, `codex_rust_crate()` already appends
`WINDOWS_GNULLVM_RUSTC_STACK_FLAGS` for:

- `rust_binary()` targets
- integration-test `rust_test()` targets

But the unit-test binary path still omitted those flags. That meant the
generated `*-unit-tests-bin` executables were not built the same way as
the rest of the Windows gnullvm executables in the macro.

## What Changed

- Added `WINDOWS_GNULLVM_RUSTC_STACK_FLAGS` to the `unit_test_binary`
`rust_test()` rule in `defs.bzl`
- Added a short comment explaining why unit-test binaries need the same
stack-reserve treatment as binaries and integration tests on Windows
gnullvm

## Testing

- `bazel query '//codex-rs/core:*'`
- `bazel query '//codex-rs/shell-command:*'`

Those queries load packages that exercise `codex_rust_crate()`,
including `*-unit-tests-bin` targets. The actual runtime effect is
Windows-specific, so the real end-to-end confirmation still comes from
Windows CI.
2026-03-27 22:11:49 -07:00
Michael Bolin
b94366441e ci: split fast PR Rust CI from full post-merge Cargo CI (#16072)
## Summary

Split the old all-in-one `rust-ci.yml` into:

- a PR-time Cargo workflow in `rust-ci.yml`
- a full post-merge Cargo workflow in `rust-ci-full.yml`

This keeps the PR path focused on fast Cargo-native hygiene plus the
Bazel `build` / `test` / `clippy` coverage in `bazel.yml`, while moving
the heavyweight Cargo-native matrix to `main`.

## Why

`bazel.yml` is now the main Rust verification workflow for pull
requests. It already covers the Bazel build, test, and clippy signal we
care about pre-merge, and it also runs on pushes to `main` to re-verify
the merged tree and help keep the BuildBuddy caches warm.

What was still missing was a clean split for the Cargo-native checks
that Bazel does not replace yet. The old `rust-ci.yml` mixed together:

- fast hygiene checks such as `cargo fmt --check` and `cargo shear`
- `argument-comment-lint`
- the full Cargo clippy / nextest / release-build matrix

That made every PR pay for the full Cargo matrix even though most of
that coverage is better treated as post-merge verification. The goal of
this change is to leave PRs with the checks we still want before merge,
while moving the heavier Cargo-native matrix off the review path.

## What Changed

- Renamed the old heavyweight workflow to `rust-ci-full.yml` and limited
it to `push` on `main` plus `workflow_dispatch`.
- Added a new PR-only `rust-ci.yml` that runs:
  - changed-path detection
  - `cargo fmt --check`
  - `cargo shear`
  - `argument-comment-lint` on Linux, macOS, and Windows
- `tools/argument-comment-lint` package tests when the lint itself or
its workflow wiring changes
- Kept the PR workflow's gatherer as the single required Cargo-native
status so branch protection can stay simple.
- Added `.github/workflows/README.md` to document the intended split
between `bazel.yml`, `rust-ci.yml`, and `rust-ci-full.yml`.
- Preserved the recent Windows `argument-comment-lint` behavior from
`e02fd6e1d3` in `rust-ci-full.yml`, and mirrored cross-platform lint
coverage into the PR workflow.

A few details are deliberate:

- The PR workflow still keeps the Linux lint lane on the
default-targets-only invocation for now, while macOS and Windows use the
broader released-linter path.
- This PR does not change `bazel.yml`; it changes the Cargo-native
workflow around the existing Bazel PR path.

## Testing

- Rebasing this change onto `main` after `e02fd6e1d3`
- `ruby -e 'require "yaml"; %w[.github/workflows/rust-ci.yml
.github/workflows/rust-ci-full.yml .github/workflows/bazel.yml].each {
|f| YAML.load_file(f) }'`
2026-03-27 21:08:08 -07:00
Michael Bolin
e02fd6e1d3 fix: clean up remaining Windows argument-comment-lint violations (#16071)
## Why

The initial `argument-comment-lint` rollout left Windows on
default-target coverage because there were still Windows-only callsites
failing under `--all-targets`. This follow-up cleans up those remaining
Windows-specific violations so the Windows CI lane can enforce the same
stricter coverage, leaving Linux as the remaining platform-specific
follow-up.

## What changed

- switched the Windows `rust-ci` argument-comment-lint step back to the
default wrapper invocation so it runs full-target coverage again
- added the required `/*param_name*/` annotations at Windows-gated
literal callsites in:
  - `codex-rs/windows-sandbox-rs/src/lib.rs`
  - `codex-rs/windows-sandbox-rs/src/elevated_impl.rs`
  - `codex-rs/tui_app_server/src/multi_agents.rs`
  - `codex-rs/network-proxy/src/proxy.rs`

## Validation

- Windows `argument comment lint` CI on this PR
2026-03-27 20:48:21 -07:00
Michael Bolin
f4d0cbfda6 ci: run Bazel clippy on Windows gnullvm (#16067)
## Why

We want more of the pre-merge Rust signal to come from `bazel.yml`,
especially on Windows. The Bazel test workflow already exercises
`x86_64-pc-windows-gnullvm`, but the Bazel clippy job still only ran on
Linux x64 and macOS arm64. That left a gap where Windows-only Bazel lint
breakages could slip through until the Cargo-based workflow ran.

This change keeps the fix narrow. Rather than expanding the Bazel clippy
target set or changing the shared setup logic, it extends the existing
clippy matrix to the same Windows GNU toolchain that the Bazel test job
already uses.

## What Changed

- add `windows-latest` / `x86_64-pc-windows-gnullvm` to the `clippy` job
matrix in `.github/workflows/bazel.yml`
- update the nearby workflow comment to explain that the goal is to get
Bazel-native Windows lint coverage on the same toolchain as the Bazel
test lane
- leave the Bazel clippy scope unchanged at `//codex-rs/...
-//codex-rs/v8-poc:all`

## Verification

- parsed `.github/workflows/bazel.yml` successfully with Ruby
`YAML.load_file`
2026-03-27 20:47:22 -07:00
Michael Bolin
343d1af3da bazel: enable the full Windows gnullvm CI path (#15952)
## Why

This PR is the current, consolidated follow-up to the earlier Windows
Bazel attempt in #11229. The goal is no longer just to get a tiny
Windows smoke job limping along: it is to make the ordinary Bazel CI
path usable on `windows-latest` for `x86_64-pc-windows-gnullvm`, with
the same broad `//...` test shape that macOS and Linux already use.

The earlier smoke-list version of this work was useful as a foothold,
but it was not a good long-term landing point. Windows Bazel kept
surfacing real issues outside that allowlist:

- GitHub's Windows runner exposed runfiles-manifest bugs such as
`FINDSTR: Cannot open D:MANIFEST`, which broke Bazel test launchers even
when the manifest file existed.
- `rules_rs`, `rules_rust`, LLVM extraction, and Abseil still needed
`windows-gnullvm`-specific fixes for our hermetic toolchain.
- the V8 path needed more work than just turning the Windows matrix
entry back on: `rusty_v8` does not ship Windows GNU artifacts in the
same shape we need, and Bazel's in-tree V8 build needed a set of Windows
GNU portability fixes.

Windows performance pressure also pushed this toward a full solution
instead of a permanent smoke suite. During this investigation we hit
targets such as `//codex-rs/shell-command:shell-command-unit-tests` that
were much more expensive on Windows because they repeatedly spawn real
PowerShell parsers (see #16057 for one concrete example of that
pressure). That made it much more valuable to get the real Windows Bazel
path working than to keep iterating on a narrowly curated subset.

The net result is that this PR now aims for the same CI contract on
Windows that we already expect elsewhere: keep standalone
`//third_party/v8:all` out of the ordinary Bazel lane, but allow V8
consumers under `//codex-rs/...` to build and test transitively through
`//...`.

## What Changed

### CI and workflow wiring

- re-enable the `windows-latest` / `x86_64-pc-windows-gnullvm` Bazel
matrix entry in `.github/workflows/bazel.yml`
- move the Windows Bazel output root to `D:\b` and enable `git config
--global core.longpaths true` in
`.github/actions/setup-bazel-ci/action.yml`
- keep the ordinary Bazel target set on Windows aligned with macOS and
Linux by running `//...` while excluding only standalone
`//third_party/v8:all` targets from the normal lane

### Toolchain and module support for `windows-gnullvm`

- patch `rules_rs` so `windows-gnullvm` is modeled as a distinct Windows
exec/toolchain platform instead of collapsing into the generic Windows
shape
- patch `rules_rust` build-script environment handling so llvm-mingw
build-script probes do not inherit unsupported `-fstack-protector*`
flags
- patch the LLVM module archive so it extracts cleanly on Windows and
provides the MinGW libraries this toolchain needs
- patch Abseil so its thread-local identity path matches the hermetic
`windows-gnullvm` toolchain instead of taking an incompatible MinGW
pthread path
- keep both MSVC and GNU Windows targets in the generated Cargo metadata
because the current V8 release-asset story still uses MSVC-shaped names
in some places while the Bazel build targets the GNU ABI

### Windows test-launch and binary-behavior fixes

- update `workspace_root_test_launcher.bat.tpl` to read the runfiles
manifest directly instead of shelling out to `findstr`, which was the
source of the `D:MANIFEST` failures on the GitHub Windows runner (a
sketch of the direct manifest lookup follows this list)
- thread a larger Windows GNU stack reserve through `defs.bzl` so
Bazel-built binaries that pull in V8 behave correctly both under normal
builds and under `bazel test`
- remove the no-longer-needed Windows bootstrap sh-toolchain override
from `.bazelrc`
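
To make the launcher change concrete, here is a minimal sketch of a direct manifest lookup, assuming the standard one-mapping-per-line runfiles MANIFEST format. The production fix lives in the batch template itself; this Python version only illustrates the approach:

```python
def lookup_runfile(manifest_path: str, logical_path: str) -> str:
    """A runfiles MANIFEST maps one `logical actual` pair per line, so a
    direct read avoids the flaky `findstr` subprocess entirely."""
    with open(manifest_path, encoding="utf-8") as manifest:
        for line in manifest:
            key, _, value = line.rstrip("\n").partition(" ")
            if key == logical_path:
                return value
    raise KeyError(logical_path)
```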

### V8 / `rusty_v8` Windows GNU support

- export and apply the new Windows GNU patch set from
`patches/BUILD.bazel` / `MODULE.bazel`
- patch the V8 module/rules/source layers so the in-tree V8 build can
produce Windows GNU archives under Bazel
- teach `third_party/v8/BUILD.bazel` to build Windows GNU static
archives in-tree instead of aliasing them to the MSVC prebuilts
- reuse the Linux release binding for the experimental Windows GNU path
where `rusty_v8` does not currently publish a Windows GNU binding
artifact

## Testing

- the primary end-to-end validation for this work is the `Bazel`
workflow plus `v8-canary`, since the hard parts are Windows-specific and
depend on real GitHub runner behavior
- before consolidation back onto this PR, the same net change passed the
full Bazel matrix in [run
23675590471](https://github.com/openai/codex/actions/runs/23675590471)
and passed `v8-canary` in [run
23675590453](https://github.com/openai/codex/actions/runs/23675590453)
- those successful runs included the `windows-latest` /
`x86_64-pc-windows-gnullvm` Bazel job with the ordinary `//...` path,
not the earlier Windows smoke allowlist

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/15952).
* #16067
* __->__ #15952
2026-03-27 20:37:03 -07:00
Michael Bolin
5037a2d199 refactor: rewrite argument-comment lint wrappers in Python (#16063)
## Why

The `argument-comment-lint` entrypoints had grown into two shell
wrappers with duplicated parsing, environment setup, and Cargo
forwarding logic. The recent `--` separator regression was a good
example of the problem: the behavior was subtle, easy to break, and hard
to verify.

This change rewrites those wrappers in Python so the control flow is
easier to follow, the shared behavior lives in one place, and the tricky
argument/defaulting paths have direct test coverage.

## What changed

- replaced `tools/argument-comment-lint/run.sh` and
`tools/argument-comment-lint/run-prebuilt-linter.sh` with Python
entrypoints: `run.py` and `run-prebuilt-linter.py`
- moved shared wrapper behavior into
`tools/argument-comment-lint/wrapper_common.py`, including:
  - splitting lint args from forwarded Cargo args after `--`
- defaulting repo runs to `--manifest-path codex-rs/Cargo.toml
--workspace --no-deps`
- defaulting non-`--fix` runs to `--all-targets` unless the caller
explicitly narrows the target set
  - setting repo defaults for `DYLINT_RUSTFLAGS` and `CARGO_INCREMENTAL`
- kept the prebuilt wrapper thin: it still just resolves the packaged
DotSlash entrypoint, keeps `rustup` shims first on `PATH`, infers
`RUSTUP_HOME` when needed, and then launches the packaged `cargo-dylint`
path
- updated `justfile`, `rust-ci.yml`, and
`tools/argument-comment-lint/README.md` to use the Python entrypoints
- updated `rust-ci` so the package job runs Python syntax checks plus
the new wrapper unit tests, and the OS-specific lint jobs invoke the
wrappers through an explicit Python interpreter

This is a follow-up to #16054: it keeps the current lint semantics while
making the wrapper logic maintainable enough to iterate on safely.
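
To illustrate the shared wrapper behavior, here is a simplified sketch of the `--` split and target defaulting. The function names and the exact narrowing-flag set are assumptions for illustration, not the literal `wrapper_common.py` contents:

```python
def split_forwarded_args(argv: list[str]) -> tuple[list[str], list[str]]:
    """Everything before the first `--` is wrapper/lint args; everything
    after it is forwarded to Cargo unchanged."""
    if "--" in argv:
        split_at = argv.index("--")
        return argv[:split_at], argv[split_at + 1:]
    return argv, []

def apply_target_default(lint_args: list[str], cargo_args: list[str]) -> list[str]:
    # Non-`--fix` runs default to `--all-targets` unless the caller has
    # already narrowed the target set explicitly.
    narrowing = {"--lib", "--bins", "--tests", "--examples", "--benches"}
    if "--fix" not in lint_args and not narrowing.intersection(cargo_args):
        return [*cargo_args, "--all-targets"]
    return cargo_args
```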

## Validation

- `python3 -m py_compile tools/argument-comment-lint/wrapper_common.py
tools/argument-comment-lint/run.py
tools/argument-comment-lint/run-prebuilt-linter.py
tools/argument-comment-lint/test_wrapper_common.py`
- `python3 -m unittest discover -s tools/argument-comment-lint -p
'test_*.py'`
- `python3 ./tools/argument-comment-lint/run-prebuilt-linter.py -p
codex-terminal-detection -- --lib`
- `python3 ./tools/argument-comment-lint/run.py -p
codex-terminal-detection -- --lib`
2026-03-27 19:42:30 -07:00
Michael Bolin
142681ef93 shell-command: reuse a PowerShell parser process on Windows (#16057)
## Why

`//codex-rs/shell-command:shell-command-unit-tests` became a real
bottleneck in the Windows Bazel lane because repeated calls to
`is_safe_command_windows()` were starting a fresh PowerShell parser
process for every `powershell.exe -Command ...` assertion.

PR #16056 was motivated by that same bottleneck, but its test-only
shortcut was the wrong layer to optimize because it weakened the
end-to-end guarantee that our runtime path really asks PowerShell to
parse the command the way we expect.

This PR attacks the actual cost center instead: it keeps the real
PowerShell parser in the loop, but turns that parser into a long-lived
helper process so both tests and the runtime safe-command path can reuse
it across many requests.
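
Reduced to a minimal Python sketch, the reuse pattern looks like the following. The real helper is the Rust code in `powershell_parser.rs`; the JSON field names and PowerShell flags here are illustrative assumptions:

```python
import json
import subprocess
import threading

class ParserClient:
    """One long-lived parser process per executable path, guarded by a
    lock, speaking line-delimited JSON over stdin/stdout."""

    def __init__(self, exe: str, server_script: str):
        self._exe, self._script = exe, server_script
        self._lock = threading.Lock()
        self._child = None
        self._next_id = 0

    def _spawn(self) -> None:
        self._child = subprocess.Popen(
            [self._exe, "-NoProfile", "-File", self._script],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    def parse(self, command: str) -> dict:
        with self._lock:
            if self._child is None or self._child.poll() is not None:
                self._spawn()  # one-time respawn of a dead cached child
            self._next_id += 1
            request = {"id": self._next_id, "command": command}
            self._child.stdin.write(json.dumps(request) + "\n")
            self._child.stdin.flush()
            response = json.loads(self._child.stdout.readline())
            if response.get("id") != self._next_id:
                self._child.kill()  # desynchronized: fail closed
                raise RuntimeError("parser response out of sync")
            return response
```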

## What Changed

- add `shell-command/src/command_safety/powershell_parser.rs`, which
keeps one mutex-protected parser process per PowerShell executable path
and speaks a simple JSON-over-stdio request/response protocol
- turn `shell-command/src/command_safety/powershell_parser.ps1` into a
long-running parser server with comments explaining the protocol, the
AST-shape restrictions, and why unsupported constructs are rejected
conservatively
- keep request ids and a one-time respawn path so a dead or
desynchronized cached child fails closed instead of silently returning
mixed parser output
- preserve separate parser processes for `powershell.exe` and
`pwsh.exe`, since they do not accept the same language surface
- avoid a direct `PipelineChainAst` type reference in the PowerShell
script so the parser service still runs under Windows PowerShell 5.1 as
well as newer `pwsh`
- make `shell-command/src/command_safety/windows_safe_commands.rs`
delegate to the new parser utility instead of spawning a fresh
PowerShell process for every parse
- add a Windows-only unit test that exercises multiple sequential
requests against the same parser process

## Testing

- adds a Windows-only parser-reuse unit test in `powershell_parser.rs`
- the main end-to-end verification for this change is the Windows CI
lane, because the new service depends on real `powershell.exe` /
`pwsh.exe` behavior
2026-03-27 19:33:41 -07:00
Joe Liccini
71923f43a7 Support Codex CLI stdin piping for codex exec (#15917)
# Summary

Claude Code supports a useful prompt-plus-stdin workflow:

```bash
echo "complex input..." | claude -p "summarize concisely"
```

Codex previously did not support the equivalent `codex exec` form. While
`codex exec` could read the prompt from stdin, it could not combine
piped input with an explicit prompt argument.

This change adds that missing workflow:

```bash
echo "complex input..." | codex exec "summarize concisely"
```

With this change, when `codex exec` receives both a positional prompt
and piped stdin, the prompt remains the instruction and stdin is passed
along as structured `<stdin>...</stdin>` context.

Example:

```bash
curl https://jsonplaceholder.typicode.com/comments \
  | ./target/debug/codex exec --skip-git-repo-check "format the top 20 items into a markdown table" \
  > table.md
```

This PR also adds regression coverage for:
- prompt argument + piped stdin
- legacy stdin-as-prompt behavior
- `codex exec -` forced-stdin behavior
- empty-stdin error cases

---------

Co-authored-by: Codex <noreply@openai.com>
2026-03-28 02:21:22 +00:00
Michael Bolin
61dfe0b86c chore: clean up argument-comment lint and roll out all-target CI on macOS (#16054)
## Why

`argument-comment-lint` was green in CI even though the repo still had
many uncommented literal arguments. The main gap was target coverage:
the repo wrapper did not force Cargo to inspect test-only call sites, so
examples like the `latest_session_lookup_params(true, ...)` tests in
`codex-rs/tui_app_server/src/lib.rs` never entered the blocking CI path.

This change cleans up the existing backlog, makes the default repo lint
path cover all Cargo targets, and starts rolling that stricter CI
enforcement out on the platform where it is currently validated.

## What changed

- mechanically fixed existing `argument-comment-lint` violations across
the `codex-rs` workspace, including tests, examples, and benches
- updated `tools/argument-comment-lint/run-prebuilt-linter.sh` and
`tools/argument-comment-lint/run.sh` so non-`--fix` runs default to
`--all-targets` unless the caller explicitly narrows the target set
- fixed both wrappers so forwarded cargo arguments after `--` are
preserved with a single separator
- documented the new default behavior in
`tools/argument-comment-lint/README.md`
- updated `rust-ci` so the macOS lint lane keeps the plain wrapper
invocation and therefore enforces `--all-targets`, while Linux and
Windows temporarily pass `-- --lib --bins`

That temporary CI split keeps the stricter all-targets check where it is
already cleaned up, while leaving room to finish the remaining Linux-
and Windows-specific target-gated cleanup before enabling
`--all-targets` on those runners. The Linux and Windows failures on the
intermediate revision were caused by the wrapper forwarding bug, not by
additional lint findings in those lanes.

## Validation

- `bash -n tools/argument-comment-lint/run.sh`
- `bash -n tools/argument-comment-lint/run-prebuilt-linter.sh`
- shell-level wrapper forwarding check for `-- --lib --bins`
- shell-level wrapper forwarding check for `-- --tests`
- `just argument-comment-lint`
- `cargo test` in `tools/argument-comment-lint`
- `cargo test -p codex-terminal-detection`

## Follow-up

- Clean up remaining Linux-only target-gated callsites, then switch the
Linux lint lane back to the plain wrapper invocation.
- Clean up remaining Windows-only target-gated callsites, then switch
the Windows lint lane back to the plain wrapper invocation.
2026-03-27 19:00:44 -07:00
Eric Traut
ed977b42ac Fix tui_app_server agent picker closed-state regression (#16014)
Addresses #15992

The app-server TUI was treating tracked agent threads as closed based on
listener-task bookkeeping that does not reflect live thread state during
normal thread switching. That caused the `/agent` picker to gray out
live agents and could show a false "Agent thread ... is closed" replay
message after switching branches.

This PR fixes the picker refresh path to query the app server for each
tracked thread and derive closed vs loaded state from `thread/read`
status, while preserving cached agent metadata for replay-only threads.
2026-03-27 19:05:43 -06:00
Eric Traut
8e24d5aaea Fix tui_app_server resume-by-name lookup regression (#16050)
Addresses #16049

`codex resume <name>` and `/resume <name>` could fail in the app-server
TUI path because name lookup pre-filtered `thread/list` with the backend
`search_term`, but saved thread names are hydrated after listing and are
not part of that search index. Resolve names by scanning listed threads
client-side instead, and add a regression test for saved sessions whose
rollout title does not match the thread name.
2026-03-27 19:04:48 -06:00
Michael Bolin
2ffb32db98 ci: run SDK tests with a Bazel-built codex (#16046)
## Why

Before this change, the SDK CI job built `codex` with Cargo before
running the TypeScript package tests. That step has been getting more
expensive as the Rust workspace grows, while the repo already has a
Bazel-backed build path for the CLI.

The SDK tests also need a normal executable path they can spawn
repeatedly. Moving the job to Bazel exposed an extra CI detail: a plain
`bazel-bin/...` lookup is not reliable under the Linux config because
top-level outputs may stay remote and the wrapper emits status lines
around `cquery` output.

## What Changed

- taught `sdk/typescript/tests/testCodex.ts` to honor `CODEX_EXEC_PATH`
before falling back to the local Cargo-style `target/debug/codex` path
- added `--remote-download-toplevel` to
`.github/scripts/run-bazel-ci.sh` so workflows can force Bazel to
materialize top-level outputs on disk after a build
- switched `.github/workflows/sdk.yml` from `cargo build --bin codex` to
the shared Bazel CI setup and `//codex-rs/cli:codex` build target
- changed the SDK workflow to resolve the built CLI with wrapper-backed
`cquery --output=files`, stage the binary into
`${GITHUB_WORKSPACE}/.tmp/sdk-ci/codex`, and point the SDK tests at that
path via `CODEX_EXEC_PATH` (see the sketch after this list)
- kept the warm-up step before Jest and the Bazel repository-cache save
step
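
A rough Python sketch of that resolve-and-stage step, assuming the wrapper's status lines never start with `/` or `bazel-out/`:

```python
import shutil
import subprocess
from pathlib import Path

def stage_bazel_codex(workspace: Path) -> Path:
    """Take the last real output path from `cquery --output=files` and
    copy the binary to a stable location for CODEX_EXEC_PATH."""
    output = subprocess.run(
        ["./.github/scripts/run-bazel-ci.sh", "--", "cquery",
         "--output=files", "--", "//codex-rs/cli:codex"],
        capture_output=True, text=True, check=True,
    ).stdout
    paths = [line for line in output.splitlines()
             if line.startswith(("/", "bazel-out/"))]
    staged = workspace / ".tmp" / "sdk-ci" / "codex"
    staged.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(paths[-1], staged)
    return staged
```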

## Verification

- `bash -n .github/scripts/run-bazel-ci.sh`
- `./.github/scripts/run-bazel-ci.sh -- cquery --output=files --
//codex-rs/cli:codex | grep -E '^(/|bazel-out/)' | tail -n 1`
- `./.github/scripts/run-bazel-ci.sh --remote-download-toplevel -- build
--build_metadata=TAG_job=sdk -- //codex-rs/cli:codex`
- `CODEX_EXEC_PATH="$PWD/.tmp/sdk-ci/codex" pnpm --dir sdk/typescript
test --runInBand`
- `pnpm --dir sdk/typescript lint`
2026-03-27 17:17:22 -07:00
Drew Hintz
f4f6eca871 [codex] Pin GitHub Actions workflow references (#15828)
Pin floating external GitHub Actions workflow refs to immutable SHAs.

Why are we doing this? Please see the rationale doc:
https://docs.google.com/document/d/1qOURCNx2zszQ0uWx7Fj5ERu4jpiYjxLVWBWgKa2wTsA/edit?tab=t.0

Did this break you? Please roll back and let hintz@ know
2026-03-27 23:00:05 +00:00
Eric Traut
d65deec617 Remove the legacy TUI split (#15922)
This is part 1 of 2 PRs that will delete the `tui` / `tui_app_server`
split. This part simply deletes the existing `tui` directory and marks
the `tui_app_server` feature flag as removed. I left the
`tui_app_server` feature flag in place for now so its presence doesn't
result in an error; it is simply ignored.

Part 2 will rename the `tui_app_server` directory to `tui`. I split this
into two parts to reduce visible code churn.
2026-03-27 22:56:44 +00:00
iceweasel-oai
307e427a9b don't include redundant write roots in apply_patch (#16030)
apply_patch sometimes provides additional parent dir as a writable root
when it is already writable. This is mostly a no-op on Mac/Linux but
causes actual ACL churn on Windows that is best avoided. We are also
seeing some actual failures with these ACLs in the wild, which I haven't
fully tracked down, but it's safe/best to avoid doing it altogether.
2026-03-27 15:41:51 -07:00
Matthew Zeng
5b71e5104f [mcp] Bypass read-only tool checks. (#16044)
- [x] Auto / unspecified approval mode: read-only tools now skip before
guardian routing.
- [x] Approve / always-allow mode: read-only tools still skip, now via
the shared early return.
- [x] Prompt mode: read-only tools no longer skip; they continue to
approval (see the sketch below).
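
A minimal sketch of that routing table; the mode names are illustrative rather than the crate's actual enum variants:

```python
def skips_approval(mode: str, tool_is_read_only: bool) -> bool:
    """Read-only tools bypass the approval check except in prompt mode."""
    if not tool_is_read_only:
        return False
    return mode in {"auto", "unspecified", "always_allow"}
```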
2026-03-27 15:22:04 -07:00
Eric Traut
465897dd0f Fix /copy regression in tui_app_server turn completion (#16021)
Addresses #16019

`tui_app_server` renders completed assistant messages from item
notifications, but it only updated `/copy` state from `turn/completed`.
After the app-server migration, turn completion no longer repeats the
final assistant text, so `/copy` could stay unavailable even after the
first normal response.

This PR tracks the last completed final-answer agent message during an
active app-server turn and promotes it into the `/copy` cache when the
turn completes. This restores the pre-migration behavior without
changing rollback handling.
2026-03-27 16:00:24 -06:00
Eric Traut
c5778dfca2 Fix tui_app_server hook notification rendering and replay (#16013)
Addresses #15984

HookStarted/HookCompleted notifications were being translated through a
fragile JSON bridge, so hook status/output never reached the renderer.
Early hook notifications could also be dropped during session refresh
before replay.

This PR fixes `tui_app_server` by mapping app-server hook notifications
into TUI hook events explicitly and preserving buffered hook
notifications across refresh, so cold-start and resumed sessions render
the same hook UI as the legacy TUI.
2026-03-27 15:33:51 -06:00
Michael Bolin
16d4ea9ca8 codex-tools: extract responses API tool models (#16031)
## Why

The previous extraction steps moved shared tool-schema parsing into
`codex-tools`, but `codex-core` still owned the generic Responses API
tool models and the last adapter layer that turned parsed tool
definitions into `ResponsesApiTool` values.

That left `core/src/tools/spec.rs` and `core/src/client_common.rs`
holding a chunk of tool-shaping code that does not need session state,
runtime plumbing, or any other `codex-core`-specific dependency. As a
result, `codex-tools` owned the parsed tool definition, but `codex-core`
still owned the generic wire model that those definitions are converted
into.

This change moves that boundary one step further. `codex-tools` now owns
the reusable Responses/tool wire structs and the shared conversion
helpers for dynamic tools, MCP tools, and deferred MCP aliases.
`codex-core` continues to own `ToolSpec` orchestration and the remaining
web-search-specific request shapes.

## What changed

- added `tools/src/responses_api.rs` to own `ResponsesApiTool`,
`FreeformTool`, `ToolSearchOutputTool`, namespace output types, and the
shared `ToolDefinition -> ResponsesApiTool` adapter helpers
- added `tools/src/responses_api_tests.rs` for deferred-loading
behavior, adapter coverage, and namespace serialization coverage
- rewired `core/src/tools/spec.rs` to use the extracted dynamic/MCP
adapter helpers instead of defining those conversions locally
- rewired `core/src/tools/handlers/tool_search.rs` to use the extracted
deferred MCP adapter and namespace output types directly
- slimmed `core/src/client_common.rs` so it now keeps `ToolSpec` and the
web-search-specific wire types, while reusing the extracted tool models
from `codex-tools`
- moved the extracted seam tests out of `core` and updated
`codex-rs/tools/README.md` plus `tools/src/lib.rs` to reflect the
expanded `codex-tools` boundary

## Test plan

- `cargo test -p codex-tools`
- `cargo test -p codex-core --lib tools::spec::`
- `cargo test -p codex-core --lib tools::handlers::tool_search::`
- `just fix -p codex-tools -p codex-core`
- `just argument-comment-lint`

## References

- [#15923](https://github.com/openai/codex/pull/15923) `codex-tools:
extract shared tool schema parsing`
- [#15928](https://github.com/openai/codex/pull/15928) `codex-tools:
extract MCP schema adapters`
- [#15944](https://github.com/openai/codex/pull/15944) `codex-tools:
extract dynamic tool adapters`
- [#15953](https://github.com/openai/codex/pull/15953) `codex-tools:
introduce named tool definitions`
2026-03-27 14:26:54 -07:00
bwanner-oai
82e8031338 Add usage-based business plan types (#15934)
## Summary
- add `self_serve_business_usage_based` and `enterprise_cbp_usage_based`
to the public/internal plan enums and regenerate the app-server + Python
SDK artifacts
- map both plans through JWT login and backend rate-limit payloads, then
bucket them with the existing Team/Business entitlement behavior in
cloud requirements, usage-limit copy, tooltips, and status display
- keep the earlier display-label remap commit on this branch so the new
Team-like and Business-like plans render consistently in the UI

## Testing
- `just write-app-server-schema`
- `uv run --project sdk/python python
sdk/python/scripts/update_sdk_artifacts.py generate-types`
- `just fix -p codex-protocol -p codex-login -p codex-core -p
codex-backend-client -p codex-cloud-requirements -p codex-tui -p
codex-tui-app-server -p codex-backend-openapi-models`
- `just fmt`
- `just argument-comment-lint`
- `cargo test -p codex-protocol
usage_based_plan_types_use_expected_wire_names`
- `cargo test -p codex-login usage_based`
- `cargo test -p codex-backend-client usage_based`
- `cargo test -p codex-cloud-requirements usage_based`
- `cargo test -p codex-core usage_limit_reached_error_formats_`
- `cargo test -p codex-tui plan_type_display_name_remaps_display_labels`
- `cargo test -p codex-tui remapped`
- `cargo test -p codex-tui-app-server
plan_type_display_name_remaps_display_labels`
- `cargo test -p codex-tui-app-server remapped`
- `cargo test -p codex-tui-app-server
preserves_usage_based_plan_type_wire_name`

## Notes
- a broader multi-crate `cargo test` run still hits unrelated existing
guardian-approval config failures in
`codex-rs/core/src/config/config_tests.rs`
2026-03-27 14:25:13 -07:00
xl-openai
81abb44f68 plugins: Clean up stale curated plugin sync temp dirs and add sync metrics (#16035)
1. Keep curated plugin staging directories under TempDir ownership until
activation succeeds, so failed git/HTTP sync attempts do not leak
`plugins-clone-*` directories.
2. Clean up stale `plugins-clone-*` directories on a best-effort basis
before creating a new staged repo, using a conservative age threshold
(see the sketch below).
3. Emit OTEL counters for curated plugin startup sync transport attempts
and final outcome across git and HTTP paths.
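
A best-effort sketch of item 2, with an assumed threshold value; the actual implementation is the Rust plugins code:

```python
import shutil
import time
from pathlib import Path

STALE_AGE_SECONDS = 24 * 60 * 60  # assumed conservative threshold

def cleanup_stale_clones(parent: Path) -> None:
    """Remove leftover plugins-clone-* staging dirs older than the
    threshold; failures are ignored because another process may be
    racing us or may still own the directory."""
    now = time.time()
    for entry in parent.glob("plugins-clone-*"):
        try:
            if now - entry.stat().st_mtime > STALE_AGE_SECONDS:
                shutil.rmtree(entry, ignore_errors=True)
        except OSError:
            pass
```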
2026-03-27 14:21:18 -07:00
pakrym-oai
8002594ee3 Normalize /mcp tool grouping for hyphenated server names (#15946)
Fix display for servers with special characters.
2026-03-27 14:58:29 -06:00
1799 changed files with 53211 additions and 171312 deletions

View File

@@ -20,9 +20,6 @@ common:windows --host_platform=//:local_windows
common --@rules_cc//cc/toolchains/args/archiver_flags:use_libtool_on_macos=False
common --@llvm//config:experimental_stub_libgcc_s
# We need to use the sh toolchain on windows so we don't send host bash paths to the linux executor.
common:windows --@rules_rust//rust/settings:experimental_use_sh_toolchain_for_bootstrap_process_wrapper
# TODO(zbarsky): rules_rust doesn't implement this flag properly with remote exec...
# common --@rules_rust//rust/settings:pipelined_compilation
@@ -79,6 +76,49 @@ common:ci-bazel --build_metadata=TAG_workflow=bazel
build:clippy --aspects=@rules_rust//rust:defs.bzl%rust_clippy_aspect
build:clippy --output_groups=+clippy_checks
build:clippy --@rules_rust//rust/settings:clippy.toml=//codex-rs:clippy.toml
# Keep this deny-list in sync with `codex-rs/Cargo.toml` `[workspace.lints.clippy]`.
# Cargo applies those lint levels to member crates that opt into `[lints] workspace = true`
# in their own `Cargo.toml`, but `rules_rust` Bazel clippy does not read Cargo lint levels.
# `clippy.toml` can configure lint behavior, but it cannot set allow/warn/deny/forbid levels.
build:clippy --@rules_rust//rust/settings:clippy_flag=-Dwarnings
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::expect_used
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::identity_op
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_clamp
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_filter
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_find
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_flatten
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_map
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_memcpy
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_non_exhaustive
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_ok_or
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_range_contains
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_retain
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_strip
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_try_fold
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_unwrap_or
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_borrow
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_borrowed_reference
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_collect
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_late_init
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_option_as_deref
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_question_mark
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_update
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_clone
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_closure
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_closure_for_method_calls
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_static_lifetimes
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::trivially_copy_pass_by_ref
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::uninlined_format_args
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_filter_map
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_lazy_evaluations
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_sort_by
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_to_owned
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unwrap_used
# Shared config for Bazel-backed argument-comment-lint.
build:argument-comment-lint --aspects=//tools/argument-comment-lint:lint_aspect.bzl%rust_argument_comment_lint_aspect
build:argument-comment-lint --output_groups=argument_comment_lint_checks
build:argument-comment-lint --@rules_rust//rust/toolchain/channel=nightly
# Rearrange caches on Windows so they're on the same volume as the checkout.
common:ci-windows --config=ci-bazel

View File

@@ -3,4 +3,4 @@
skip = .git*,vendor,*-lock.yaml,*.lock,.codespellrc,*test.ts,*.jsonl,frame*.txt,*.snap,*.snap.new,*meriyah.umd.min.js
check-hidden = true
ignore-regex = ^\s*"image/\S+": ".*|\b(afterAll)\b
ignore-words-list = ratatui,ser,iTerm,iterm2,iterm,te,TE
ignore-words-list = ratatui,ser,iTerm,iterm2,iterm,te,TE,PASE,SEH

View File

@@ -1,6 +1,6 @@
---
name: babysit-pr
description: Babysit a GitHub pull request after creation by continuously polling CI checks/workflow runs, new review comments, and mergeability state until the PR is ready to merge (or merged/closed). Diagnose failures, retry likely flaky failures up to 3 times, auto-fix/push branch-related issues when appropriate, and stop only when user help is required (for example CI infrastructure issues, exhausted flaky retries, or ambiguous/blocking situations). Use when the user asks Codex to monitor a PR, watch CI, handle review comments, or keep an eye on failures and feedback on an open PR.
description: Babysit a GitHub pull request after creation by continuously polling review comments, CI checks/workflow runs, and mergeability state until the PR is merged/closed or user help is required. Diagnose failures, retry likely flaky failures up to 3 times, auto-fix/push branch-related issues when appropriate, and keep watching open PRs so fresh review feedback is surfaced promptly. Use when the user asks Codex to monitor a PR, watch CI, handle review comments, or keep an eye on failures and feedback on an open PR.
---
# PR Babysitter
@@ -9,8 +9,8 @@ description: Babysit a GitHub pull request after creation by continuously pollin
Babysit a PR persistently until one of these terminal outcomes occurs:
- The PR is merged or closed.
- CI is successful, there are no unaddressed review comments surfaced by the watcher, required review approval is not blocking merge, and there are no potential merge conflicts (PR is mergeable / not reporting conflict risk).
- A situation requires user help (for example CI infrastructure issues, repeated flaky failures after retry budget is exhausted, permission problems, or ambiguity that cannot be resolved safely).
- Optional handoff milestone: the PR is currently green + mergeable + review-clean. Treat this as a progress state, not a watcher stop, so late-arriving review comments are still surfaced promptly while the PR remains open.
Do not stop merely because a single snapshot returns `idle` while checks are still pending.
@@ -24,19 +24,20 @@ Accept any of the following:
## Core Workflow
1. When the user asks to "monitor"/"watch"/"babysit" a PR, start with the watcher's continuous mode (`--watch`) unless you are intentionally doing a one-shot diagnostic snapshot.
2. Run the watcher script to snapshot PR/CI/review state (or consume each streamed snapshot from `--watch`).
2. Run the watcher script to snapshot PR/review/CI state (or consume each streamed snapshot from `--watch`).
3. Inspect the `actions` list in the JSON response.
4. If `diagnose_ci_failure` is present, inspect failed run logs and classify the failure.
5. If the failure is likely caused by the current branch, patch code locally, commit, and push.
6. If `process_review_comment` is present, inspect surfaced review items and decide whether to address them.
7. If a review item is actionable and correct, patch code locally, commit, and push.
8. If the failure is likely flaky/unrelated and `retry_failed_checks` is present, rerun failed jobs with `--retry-failed-now`.
9. If both actionable review feedback and `retry_failed_checks` are present, prioritize review feedback first; a new commit will retrigger CI, so avoid rerunning flaky checks on the old SHA unless you intentionally defer the review change.
10. On every loop, verify mergeability / merge-conflict status (for example via `gh pr view`) in addition to CI and review state.
11. After any push or rerun action, immediately return to step 1 and continue polling on the updated SHA/state.
12. If you had been using `--watch` before pausing to patch/commit/push, relaunch `--watch` yourself in the same turn immediately after the push (do not wait for the user to re-invoke the skill).
13. Repeat polling until the PR is green + review-clean + mergeable, `stop_pr_closed` appears, or a user-help-required blocker is reached.
14. Maintain terminal/session ownership: while babysitting is active, keep consuming watcher output in the same turn; do not leave a detached `--watch` process running and then end the turn as if monitoring were complete.
7. If a review item is actionable and correct, patch code locally, commit, push, and then mark the associated review thread/comment as resolved once the fix is on GitHub.
8. If a review item from another author is non-actionable, already addressed, or not valid, post one reply on the comment/thread explaining that decision (for example answering the question or explaining why no change is needed). If the watcher later surfaces your own reply, treat that self-authored item as already handled and do not reply again.
9. If the failure is likely flaky/unrelated and `retry_failed_checks` is present, rerun failed jobs with `--retry-failed-now`.
10. If both actionable review feedback and `retry_failed_checks` are present, prioritize review feedback first; a new commit will retrigger CI, so avoid rerunning flaky checks on the old SHA unless you intentionally defer the review change.
11. On every loop, look for newly surfaced review feedback before acting on CI failures or mergeability state, then verify mergeability / merge-conflict status (for example via `gh pr view`) alongside CI.
12. After any push or rerun action, immediately return to step 1 and continue polling on the updated SHA/state.
13. If you had been using `--watch` before pausing to patch/commit/push, relaunch `--watch` yourself in the same turn immediately after the push (do not wait for the user to re-invoke the skill).
14. Repeat polling until `stop_pr_closed` appears or a user-help-required blocker is reached. A green + review-clean + mergeable PR is a progress milestone, not a reason to stop the watcher while the PR is still open.
15. Maintain terminal/session ownership: while babysitting is active, keep consuming watcher output in the same turn; do not leave a detached `--watch` process running and then end the turn as if monitoring were complete.
## Commands
@@ -94,10 +95,11 @@ When you agree with a comment and it is actionable:
1. Patch code locally.
2. Commit with `codex: address PR review feedback (#<n>)`.
3. Push to the PR head branch.
4. Resume watching on the new SHA immediately (do not stop after reporting the push).
5. If monitoring was running in `--watch` mode, restart `--watch` immediately after the push in the same turn; do not wait for the user to ask again.
4. After the push succeeds, mark the associated GitHub review thread/comment as resolved.
5. Resume watching on the new SHA immediately (do not stop after reporting the push).
6. If monitoring was running in `--watch` mode, restart `--watch` immediately after the push in the same turn; do not wait for the user to ask again.
If you disagree or the comment is non-actionable/already addressed, record it as handled by continuing the watcher loop (the script de-duplicates surfaced items via state after surfacing them).
If you disagree or the comment is non-actionable/already addressed, reply once directly on the GitHub comment/thread so the reviewer gets an explicit answer, then continue the watcher loop. If the watcher later surfaces your own reply because the authenticated operator is treated as a trusted review author, treat that self-authored item as already handled and do not reply again.
If a code review comment/thread is already marked as resolved in GitHub, treat it as non-actionable and safely ignore it unless new unresolved follow-up feedback appears.
## Git Safety Rules
@@ -124,13 +126,14 @@ Use this loop in a live Codex session:
3. First check whether the PR is now merged or otherwise closed; if so, report that terminal state and stop polling immediately.
4. Check CI summary, new review items, and mergeability/conflict status.
5. Diagnose CI failures and classify branch-related vs flaky/unrelated.
6. Process actionable review comments before flaky reruns when both are present; if a review fix requires a commit, push it and skip rerunning failed checks on the old SHA.
7. Retry failed checks only when `retry_failed_checks` is present and you are not about to replace the current SHA with a review/CI fix commit.
8. If you pushed a commit or triggered a rerun, report the action briefly and continue polling (do not stop).
9. After a review-fix push, proactively restart continuous monitoring (`--watch`) in the same turn unless a strict stop condition has already been reached.
10. If everything is passing, mergeable, not blocked on required review approval, and there are no unaddressed review items, report success and stop.
11. If blocked on a user-help-required issue (infra outage, exhausted flaky retries, unclear reviewer request, permissions), report the blocker and stop.
12. Otherwise sleep according to the polling cadence below and repeat.
6. For each surfaced review item from another author, either reply once with an explanation if it is non-actionable or patch/commit/push and then resolve it if it is actionable. If a later snapshot surfaces your own reply, treat it as informational and continue without responding again.
7. Process actionable review comments before flaky reruns when both are present; if a review fix requires a commit, push it and skip rerunning failed checks on the old SHA.
8. Retry failed checks only when `retry_failed_checks` is present and you are not about to replace the current SHA with a review/CI fix commit.
9. If you pushed a commit, resolved a review thread, replied to a review comment, or triggered a rerun, report the action briefly and continue polling (do not stop).
10. After a review-fix push, proactively restart continuous monitoring (`--watch`) in the same turn unless a strict stop condition has already been reached.
11. If everything is passing, mergeable, not blocked on required review approval, and there are no unaddressed review items, report that the PR is currently ready to merge but keep the watcher running so new review comments are surfaced quickly while the PR remains open.
12. If blocked on a user-help-required issue (infra outage, exhausted flaky retries, unclear reviewer request, permissions), report the blocker and stop.
13. Otherwise sleep according to the polling cadence below and repeat.
When the user explicitly asks to monitor/watch/babysit a PR, prefer `--watch` so polling continues autonomously in one command. Use repeated `--once` snapshots only for debugging, local testing, or when the user explicitly asks for a one-shot check.
Do not stop to ask the user whether to continue polling; continue autonomously until a strict stop condition is met or the user explicitly interrupts.
@@ -138,19 +141,18 @@ Do not hand control back to the user after a review-fix push just because a new
If a `--watch` process is still running and no strict stop condition has been reached, the babysitting task is still in progress; keep streaming/consuming watcher output instead of ending the turn.
## Polling Cadence
Use adaptive polling and continue monitoring even after CI turns green:
Keep review polling aggressive and continue monitoring even after CI turns green:
- While CI is not green (pending/running/queued or failing): poll every 1 minute.
- After CI turns green: start at every 1 minute, then back off exponentially when there is no change (for example 1m, 2m, 4m, 8m, 16m, 32m), capping at every 1 hour.
- Reset the green-state polling interval back to 1 minute whenever anything changes (new commit/SHA, check status changes, new review comments, mergeability changes, review decision changes).
- If CI stops being green again (new commit, rerun, or regression): return to 1-minute polling.
- After CI turns green: keep polling at the base cadence while the PR remains open so newly posted review comments are surfaced promptly instead of waiting on a long green-state backoff.
- Reset the cadence immediately whenever anything changes (new commit/SHA, check status changes, new review comments, mergeability changes, review decision changes).
- If CI stops being green again (new commit, rerun, or regression): stay on the base polling cadence.
- If any poll shows the PR is merged or otherwise closed: stop polling immediately and report the terminal state.
## Stop Conditions (Strict)
Stop only when one of the following is true:
- PR merged or closed (stop as soon as a poll/snapshot confirms this).
- PR is ready to merge: CI succeeded, no surfaced unaddressed review comments, not blocked on required review approval, and no merge conflict risk.
- User intervention is required and Codex cannot safely proceed alone.
Keep polling when:
@@ -159,14 +161,14 @@ Keep polling when:
- CI is still running/queued.
- Review state is quiet but CI is not terminal.
- CI is green but mergeability is unknown/pending.
- CI is green and mergeable, but the PR is still open and you are waiting for possible new review comments or merge-conflict changes per the green-state cadence.
- The PR is green but blocked on review approval (`REVIEW_REQUIRED` / similar); continue polling on the green-state cadence and surface any new review comments without asking for confirmation to keep watching.
- CI is green and mergeable, but the PR is still open and you are waiting for possible new review comments or merge-conflict changes.
- The PR is green but blocked on review approval (`REVIEW_REQUIRED` / similar); continue polling at the base cadence and surface any new review comments without asking for confirmation to keep watching.
## Output Expectations
Provide concise progress updates while monitoring and a final summary that includes:
- During long unchanged monitoring periods, avoid emitting a full update on every poll; summarize only status changes plus occasional heartbeat updates.
- Treat push confirmations, intermediate CI snapshots, and review-action updates as progress updates only; do not emit the final summary or end the babysitting session unless a strict stop condition is met.
- Treat push confirmations, intermediate CI snapshots, ready-to-merge snapshots, and review-action updates as progress updates only; do not emit the final summary or end the babysitting session unless a strict stop condition is met.
- A user request to "monitor" is not satisfied by a couple of sample polls; remain in the loop until a strict stop condition or an explicit user interruption.
- A review-fix commit + push is not a completion event; immediately resume live monitoring (`--watch`) in the same turn and continue reporting progress updates.
- When CI first transitions to all green for the current SHA, emit a one-time celebratory progress update (do not repeat it on every green poll). Preferred style: `🚀 CI is all green! 33/33 passed. Still on watch for review approval.`

View File

@@ -1,4 +1,4 @@
interface:
display_name: "PR Babysitter"
short_description: "Watch PR CI, reviews, and merge conflicts"
default_prompt: "Babysit the current PR: monitor CI, reviewer comments, and merge-conflict status (prefer the watchers --watch mode for live monitoring); fix valid issues, push updates, and rerun flaky failures up to 3 times. Keep exactly one watcher session active for the PR (do not leave duplicate --watch terminals running). If you pause monitoring to patch review/CI feedback, restart --watch yourself immediately after the push in the same turn. If a watcher is still running and no strict stop condition has been reached, the task is still in progress: keep consuming watcher output and sending progress updates instead of ending the turn. Continue polling autonomously after any push/rerun until a strict terminal stop condition is reached or the user interrupts."
short_description: "Watch PR review comments, CI, and merge conflicts"
default_prompt: "Babysit the current PR: monitor reviewer comments, CI, and merge-conflict status (prefer the watchers --watch mode for live monitoring); surface new review feedback before acting on CI or mergeability work, fix valid issues, push updates, and rerun flaky failures up to 3 times. Keep exactly one watcher session active for the PR (do not leave duplicate --watch terminals running). If you pause monitoring to patch review/CI feedback, restart --watch yourself immediately after the push in the same turn. If a watcher is still running and no strict stop condition has been reached, the task is still in progress: keep consuming watcher output and sending progress updates instead of ending the turn. Do not treat a green + mergeable PR as a terminal stop while it is still open; continue polling autonomously after any push/rerun so newly posted review comments are surfaced until a strict terminal stop condition is reached or the user interrupts."

View File

@@ -45,7 +45,6 @@ MERGE_CONFLICT_OR_BLOCKING_STATES = {
"DRAFT",
"UNKNOWN",
}
GREEN_STATE_MAX_POLL_SECONDS = 60 * 60
class GhCommandError(RuntimeError):
@@ -578,7 +577,7 @@ def recommend_actions(pr, checks_summary, failed_runs, new_review_items, retries
return unique_actions(actions)
if is_pr_ready_to_merge(pr, checks_summary, new_review_items):
actions.append("stop_ready_to_merge")
actions.append("ready_to_merge")
return unique_actions(actions)
if new_review_items:
@@ -606,12 +605,6 @@ def collect_snapshot(args):
if not state.get("started_at"):
state["started_at"] = int(time.time())
# `gh pr checks -R <repo>` requires an explicit PR/branch/url argument.
# After resolving `--pr auto`, reuse the concrete PR number.
checks = get_pr_checks(str(pr["number"]), repo=pr["repo"])
checks_summary = summarize_checks(checks)
workflow_runs = get_workflow_runs_for_sha(pr["repo"], pr["head_sha"])
failed_runs = failed_runs_from_workflow_runs(workflow_runs, pr["head_sha"])
authenticated_login = get_authenticated_login()
new_review_items = fetch_new_review_items(
pr,
@@ -619,6 +612,15 @@ def collect_snapshot(args):
fresh_state=fresh_state,
authenticated_login=authenticated_login,
)
# Surface review feedback before drilling into CI and mergeability details.
# That keeps the babysitter responsive to new comments even when other
# actions are also available.
# `gh pr checks -R <repo>` requires an explicit PR/branch/url argument.
# After resolving `--pr auto`, reuse the concrete PR number.
checks = get_pr_checks(str(pr["number"]), repo=pr["repo"])
checks_summary = summarize_checks(checks)
workflow_runs = get_workflow_runs_for_sha(pr["repo"], pr["head_sha"])
failed_runs = failed_runs_from_workflow_runs(workflow_runs, pr["head_sha"])
retries_used = current_retry_count(state, pr["head_sha"])
actions = recommend_actions(
@@ -761,7 +763,6 @@ def run_watch(args):
if (
"stop_pr_closed" in actions
or "stop_exhausted_retries" in actions
or "stop_ready_to_merge" in actions
):
print_event("stop", {"actions": snapshot.get("actions"), "pr": snapshot.get("pr")})
return 0
@@ -769,13 +770,13 @@ def run_watch(args):
current_change_key = snapshot_change_key(snapshot)
changed = current_change_key != last_change_key
green = is_ci_green(snapshot)
pr = snapshot.get("pr") or {}
pr_open = not bool(pr.get("closed")) and not bool(pr.get("merged"))
if not green:
if not green or pr_open:
poll_seconds = args.poll_seconds
elif changed or last_change_key is None:
poll_seconds = args.poll_seconds
else:
poll_seconds = min(poll_seconds * 2, GREEN_STATE_MAX_POLL_SECONDS)
last_change_key = current_change_key
time.sleep(poll_seconds)
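Pulling this hunk's pieces together, the resulting cadence can be summarized as a pure function. This is a sketch, not the script's actual code; it folds the `last_change_key is None` first-poll case into `changed`:

```python
GREEN_STATE_MAX_POLL_SECONDS = 60 * 60  # cap retained from earlier in the script

def next_poll_seconds(base: int, prev: int, green: bool, pr_open: bool, changed: bool) -> int:
    # Keep the fast cadence while CI is not green or the PR is still open,
    # so newly posted review comments are surfaced promptly.
    if not green or pr_open:
        return base
    # The first poll or any state change also resets to the base interval.
    if changed:
        return base
    # Only a green, no-longer-open, unchanged PR backs off exponentially.
    return min(prev * 2, GREEN_STATE_MAX_POLL_SECONDS)
```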

View File

@@ -0,0 +1,155 @@
import argparse
import importlib.util
from pathlib import Path
import pytest
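# gh_pr_watch.py ships as a standalone script (no package/__init__), so load it
# from the sibling file via importlib rather than a regular import.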
MODULE_PATH = Path(__file__).with_name("gh_pr_watch.py")
MODULE_SPEC = importlib.util.spec_from_file_location("gh_pr_watch", MODULE_PATH)
gh_pr_watch = importlib.util.module_from_spec(MODULE_SPEC)
assert MODULE_SPEC.loader is not None
MODULE_SPEC.loader.exec_module(gh_pr_watch)
def sample_pr():
return {
"number": 123,
"url": "https://github.com/openai/codex/pull/123",
"repo": "openai/codex",
"head_sha": "abc123",
"head_branch": "feature",
"state": "OPEN",
"merged": False,
"closed": False,
"mergeable": "MERGEABLE",
"merge_state_status": "CLEAN",
"review_decision": "",
}
def sample_checks(**overrides):
checks = {
"pending_count": 0,
"failed_count": 0,
"passed_count": 12,
"all_terminal": True,
}
checks.update(overrides)
return checks
def test_collect_snapshot_fetches_review_items_before_ci(monkeypatch, tmp_path):
call_order = []
pr = sample_pr()
monkeypatch.setattr(gh_pr_watch, "resolve_pr", lambda *args, **kwargs: pr)
monkeypatch.setattr(gh_pr_watch, "load_state", lambda path: ({}, True))
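# Each stub below records its name in call_order; "call_order.append(...) or value"
# evaluates to value because list.append returns None.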
monkeypatch.setattr(
gh_pr_watch,
"get_authenticated_login",
lambda: call_order.append("auth") or "octocat",
)
monkeypatch.setattr(
gh_pr_watch,
"fetch_new_review_items",
lambda *args, **kwargs: call_order.append("review") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"get_pr_checks",
lambda *args, **kwargs: call_order.append("checks") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"summarize_checks",
lambda checks: call_order.append("summarize") or sample_checks(),
)
monkeypatch.setattr(
gh_pr_watch,
"get_workflow_runs_for_sha",
lambda *args, **kwargs: call_order.append("workflow") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"failed_runs_from_workflow_runs",
lambda *args, **kwargs: call_order.append("failed_runs") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"recommend_actions",
lambda *args, **kwargs: call_order.append("recommend") or ["idle"],
)
monkeypatch.setattr(gh_pr_watch, "save_state", lambda *args, **kwargs: None)
args = argparse.Namespace(
pr="123",
repo=None,
state_file=str(tmp_path / "watcher-state.json"),
max_flaky_retries=3,
)
gh_pr_watch.collect_snapshot(args)
assert call_order.index("review") < call_order.index("checks")
assert call_order.index("review") < call_order.index("workflow")
def test_recommend_actions_prioritizes_review_comments():
actions = gh_pr_watch.recommend_actions(
sample_pr(),
sample_checks(failed_count=1),
[{"run_id": 99}],
[{"kind": "review_comment", "id": "1"}],
0,
3,
)
assert actions == [
"process_review_comment",
"diagnose_ci_failure",
"retry_failed_checks",
]
def test_run_watch_keeps_polling_open_ready_to_merge_pr(monkeypatch):
sleeps = []
events = []
snapshot = {
"pr": sample_pr(),
"checks": sample_checks(),
"failed_runs": [],
"new_review_items": [],
"actions": ["ready_to_merge"],
"retry_state": {
"current_sha_retries_used": 0,
"max_flaky_retries": 3,
},
}
monkeypatch.setattr(
gh_pr_watch,
"collect_snapshot",
lambda args: (snapshot, Path("/tmp/codex-babysit-pr-state.json")),
)
monkeypatch.setattr(
gh_pr_watch,
"print_event",
lambda event, payload: events.append((event, payload)),
)
class StopWatch(Exception):
pass
def fake_sleep(seconds):
sleeps.append(seconds)
if len(sleeps) >= 2:
raise StopWatch
monkeypatch.setattr(gh_pr_watch.time, "sleep", fake_sleep)
with pytest.raises(StopWatch):
gh_pr_watch.run_watch(argparse.Namespace(poll_seconds=30))
assert sleeps == [30, 30]
assert [event for event, _ in events] == ["snapshot", "snapshot"]

View File

@@ -57,5 +57,77 @@ runs:
if: runner.os == 'Windows'
shell: pwsh
run: |
# Use a very short path to reduce argv/path length issues.
"BAZEL_OUTPUT_USER_ROOT=C:\" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
# Use the shortest available drive to reduce argv/path length issues,
# but avoid the drive root because some Windows test launchers mis-handle
# MANIFEST paths there.
$hasDDrive = Test-Path 'D:\'
$bazelOutputUserRoot = if ($hasDDrive) { 'D:\b' } else { 'C:\b' }
$repoContentsCache = Join-Path $env:RUNNER_TEMP "bazel-repo-contents-cache-$env:GITHUB_RUN_ID-$env:GITHUB_JOB"
"BAZEL_OUTPUT_USER_ROOT=$bazelOutputUserRoot" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
"BAZEL_REPO_CONTENTS_CACHE=$repoContentsCache" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
if (-not $hasDDrive) {
$repositoryCache = Join-Path $env:USERPROFILE '.cache\bazel-repo-cache'
"BAZEL_REPOSITORY_CACHE=$repositoryCache" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
}
- name: Expose MSVC SDK environment (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
# Bazel exec-side Rust build scripts do not reliably inherit the MSVC developer
# shell on GitHub-hosted Windows runners, so discover the latest VS install and
# ask `VsDevCmd.bat` to materialize the x64/x64 compiler + SDK environment.
$vswhere = "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe"
if (-not (Test-Path $vswhere)) {
throw "vswhere.exe not found"
}
$installPath = & $vswhere -latest -products * -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64 -property installationPath 2>$null
if (-not $installPath) {
throw "Could not locate a Visual Studio installation with VC tools"
}
$vsDevCmd = Join-Path $installPath 'Common7\Tools\VsDevCmd.bat'
if (-not (Test-Path $vsDevCmd)) {
throw "VsDevCmd.bat not found at $vsDevCmd"
}
# Keep the export surface explicit: these are the paths and SDK roots that the
# MSVC toolchain probes need later when Bazel runs Windows exec-platform build
# scripts such as `aws-lc-sys`.
$varsToExport = @(
'INCLUDE',
'LIB',
'LIBPATH',
'PATH',
'UCRTVersion',
'UniversalCRTSdkDir',
'VCINSTALLDIR',
'VCToolsInstallDir',
'WindowsLibPath',
'WindowsSdkBinPath',
'WindowsSdkDir',
'WindowsSDKLibVersion',
'WindowsSDKVersion'
)
# `VsDevCmd.bat` is a batch file, so invoke it under `cmd.exe`, suppress its
# banner, then dump the resulting environment with `set`. Re-export only the
# approved keys into `GITHUB_ENV` so later steps inherit the same MSVC context.
$envLines = & cmd.exe /c ('"{0}" -no_logo -arch=x64 -host_arch=x64 >nul && set' -f $vsDevCmd)
foreach ($line in $envLines) {
if ($line -notmatch '^(.*?)=(.*)$') {
continue
}
$name = $matches[1]
$value = $matches[2]
if ($varsToExport -contains $name) {
"$name=$value" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
}
}
- name: Enable Git long paths (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: git config --global core.longpaths true
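For reference, a rough Python rendering of the allow-listed re-export performed by the `Expose MSVC SDK environment` step above; `filter_vsdevcmd_env` is illustrative and not part of the repo:

```python
import re

# Mirrors $varsToExport in the PowerShell step.
APPROVED_VARS = {
    "INCLUDE", "LIB", "LIBPATH", "PATH",
    "UCRTVersion", "UniversalCRTSdkDir",
    "VCINSTALLDIR", "VCToolsInstallDir",
    "WindowsLibPath", "WindowsSdkBinPath", "WindowsSdkDir",
    "WindowsSDKLibVersion", "WindowsSDKVersion",
}

def filter_vsdevcmd_env(set_output: str) -> dict[str, str]:
    """Keep only the approved NAME=VALUE lines from `cmd.exe ... && set` output."""
    exported: dict[str, str] = {}
    for line in set_output.splitlines():
        match = re.match(r"^(.*?)=(.*)$", line)
        if match and match.group(1) in APPROVED_VARS:
            exported[match.group(1)] = match.group(2)
    return exported
```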

View File

@@ -0,0 +1,115 @@
#!/usr/bin/env bash
set -euo pipefail
ci_config=ci-linux
case "${RUNNER_OS:-}" in
macOS)
ci_config=ci-macos
;;
Windows)
ci_config=ci-windows
;;
esac
bazel_lint_args=("$@")
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
has_host_platform_override=0
for arg in "${bazel_lint_args[@]}"; do
if [[ "$arg" == --host_platform=* ]]; then
has_host_platform_override=1
break
fi
done
if [[ $has_host_platform_override -eq 0 ]]; then
# The nightly Windows lint toolchain is registered with an MSVC exec
# platform even though the lint target platform stays on `windows-gnullvm`.
# Override the host platform here so the exec-side helper binaries actually
# match the registered toolchain set.
bazel_lint_args+=("--host_platform=//:local_windows_msvc")
fi
# Native Windows lint runs need exec-side Rust helper binaries and proc-macros
# to use rust-lld instead of the C++ linker path. The default `none`
# preference resolves to `cc` when a cc_toolchain is present, which currently
# routes these exec actions through clang++ with an argument shape it cannot
# consume.
bazel_lint_args+=("--@rules_rust//rust/settings:toolchain_linker_preference=rust")
# Some Rust top-level targets are still intentionally incompatible with the
# local Windows MSVC exec platform. Skip those explicit targets so the native
# lint aspect can run across the compatible crate graph instead of failing the
# whole build after analysis.
bazel_lint_args+=("--skip_incompatible_explicit_targets")
fi
bazel_startup_args=()
if [[ -n "${BAZEL_OUTPUT_USER_ROOT:-}" ]]; then
bazel_startup_args+=("--output_user_root=${BAZEL_OUTPUT_USER_ROOT}")
fi
run_bazel() {
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
MSYS2_ARG_CONV_EXCL='*' bazel "$@"
return
fi
bazel "$@"
}
run_bazel_with_startup_args() {
if [[ ${#bazel_startup_args[@]} -gt 0 ]]; then
run_bazel "${bazel_startup_args[@]}" "$@"
return
fi
run_bazel "$@"
}
read_query_labels() {
local query="$1"
local query_stdout
local query_stderr
query_stdout="$(mktemp)"
query_stderr="$(mktemp)"
if ! run_bazel_with_startup_args \
--noexperimental_remote_repo_contents_cache \
query \
--keep_going \
--output=label \
"$query" >"$query_stdout" 2>"$query_stderr"; then
cat "$query_stderr" >&2
rm -f "$query_stdout" "$query_stderr"
exit 1
fi
cat "$query_stdout"
rm -f "$query_stdout" "$query_stderr"
}
final_build_targets=(//codex-rs/...)
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
# Bazel's local Windows platform currently lacks a default test toolchain for
# `rust_test`, so target the concrete Rust crate rules directly. The lint
# aspect still walks their crate graph, which preserves incremental reuse for
# non-test code while avoiding non-Rust wrapper targets such as platform_data.
final_build_targets=()
while IFS= read -r label; do
[[ -n "$label" ]] || continue
final_build_targets+=("$label")
done < <(read_query_labels 'kind("rust_(library|binary|proc_macro) rule", //codex-rs/...)')
if [[ ${#final_build_targets[@]} -eq 0 ]]; then
echo "Failed to discover Windows Bazel lint targets." >&2
exit 1
fi
fi
./.github/scripts/run-bazel-ci.sh \
-- \
build \
"${bazel_lint_args[@]}" \
-- \
"${final_build_targets[@]}"

View File

@@ -4,6 +4,7 @@ set -euo pipefail
print_failed_bazel_test_logs=0
use_node_test_env=0
remote_download_toplevel=0
while [[ $# -gt 0 ]]; do
case "$1" in
@@ -15,6 +16,10 @@ while [[ $# -gt 0 ]]; do
use_node_test_env=1
shift
;;
--remote-download-toplevel)
remote_download_toplevel=1
shift
;;
--)
shift
break
@@ -27,7 +32,7 @@ while [[ $# -gt 0 ]]; do
done
if [[ $# -eq 0 ]]; then
echo "Usage: $0 [--print-failed-test-logs] [--use-node-test-env] -- <bazel args> -- <targets>" >&2
echo "Usage: $0 [--print-failed-test-logs] [--use-node-test-env] [--remote-download-toplevel] -- <bazel args> -- <targets>" >&2
exit 1
fi
@@ -36,6 +41,15 @@ if [[ -n "${BAZEL_OUTPUT_USER_ROOT:-}" ]]; then
bazel_startup_args+=("--output_user_root=${BAZEL_OUTPUT_USER_ROOT}")
fi
run_bazel() {
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
MSYS2_ARG_CONV_EXCL='*' bazel "$@"
return
fi
bazel "$@"
}
ci_config=ci-linux
case "${RUNNER_OS:-}" in
macOS)
@@ -55,7 +69,7 @@ print_bazel_test_log_tails() {
bazel_info_cmd+=("${bazel_startup_args[@]}")
fi
testlogs_dir="$("${bazel_info_cmd[@]}" info bazel-testlogs 2>/dev/null || echo bazel-testlogs)"
testlogs_dir="$(run_bazel "${bazel_info_cmd[@]:1}" info bazel-testlogs 2>/dev/null || echo bazel-testlogs)"
local failed_targets=()
while IFS= read -r target; do
@@ -114,6 +128,48 @@ if [[ $use_node_test_env -eq 1 && "${RUNNER_OS:-}" != "Windows" ]]; then
bazel_args+=("--test_env=CODEX_JS_REPL_NODE_PATH=${node_bin}")
fi
post_config_bazel_args=()
if [[ $remote_download_toplevel -eq 1 ]]; then
# Override the CI config's remote_download_minimal setting when callers need
# the built artifact to exist on disk after the command completes.
post_config_bazel_args+=(--remote_download_toplevel)
fi
if [[ -n "${BAZEL_REPO_CONTENTS_CACHE:-}" ]]; then
# Windows self-hosted runners can run multiple Bazel jobs concurrently. Give
# each job its own repo contents cache so they do not fight over the shared
# path configured in `ci-windows`.
post_config_bazel_args+=("--repo_contents_cache=${BAZEL_REPO_CONTENTS_CACHE}")
fi
if [[ -n "${BAZEL_REPOSITORY_CACHE:-}" ]]; then
post_config_bazel_args+=("--repository_cache=${BAZEL_REPOSITORY_CACHE}")
fi
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
windows_action_env_vars=(
INCLUDE
LIB
LIBPATH
PATH
UCRTVersion
UniversalCRTSdkDir
VCINSTALLDIR
VCToolsInstallDir
WindowsLibPath
WindowsSdkBinPath
WindowsSdkDir
WindowsSDKLibVersion
WindowsSDKVersion
)
for env_var in "${windows_action_env_vars[@]}"; do
if [[ -n "${!env_var:-}" ]]; then
post_config_bazel_args+=("--action_env=${env_var}" "--host_action_env=${env_var}")
fi
done
fi
bazel_console_log="$(mktemp)"
trap 'rm -f "$bazel_console_log"' EXIT
@@ -128,12 +184,18 @@ if [[ -n "${BUILDBUDDY_API_KEY:-}" ]]; then
# seen in CI (for example "is not a symlink" or permission errors while
# materializing external repos such as rules_perl). We still use BuildBuddy for
# remote execution/cache; this only disables the startup-level repo contents cache.
bazel_run_args=(
"${bazel_args[@]}"
"--config=${ci_config}"
"--remote_header=x-buildbuddy-api-key=${BUILDBUDDY_API_KEY}"
)
if (( ${#post_config_bazel_args[@]} > 0 )); then
bazel_run_args+=("${post_config_bazel_args[@]}")
fi
set +e
"${bazel_cmd[@]}" \
run_bazel "${bazel_cmd[@]:1}" \
--noexperimental_remote_repo_contents_cache \
"${bazel_args[@]}" \
"--config=${ci_config}" \
"--remote_header=x-buildbuddy-api-key=${BUILDBUDDY_API_KEY}" \
"${bazel_run_args[@]}" \
-- \
"${bazel_targets[@]}" \
2>&1 | tee "$bazel_console_log"
@@ -157,12 +219,18 @@ else
# clear remote cache/execution endpoints configured in .bazelrc.
# https://bazel.build/reference/command-line-reference#common_options-flag--remote_cache
# https://bazel.build/reference/command-line-reference#common_options-flag--remote_executor
bazel_run_args=(
"${bazel_args[@]}"
--remote_cache=
--remote_executor=
)
if (( ${#post_config_bazel_args[@]} > 0 )); then
bazel_run_args+=("${post_config_bazel_args[@]}")
fi
set +e
"${bazel_cmd[@]}" \
run_bazel "${bazel_cmd[@]:1}" \
--noexperimental_remote_repo_contents_cache \
"${bazel_args[@]}" \
--remote_cache= \
--remote_executor= \
"${bazel_run_args[@]}" \
-- \
"${bazel_targets[@]}" \
2>&1 | tee "$bazel_console_log"

View File

@@ -0,0 +1,234 @@
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import re
import sys
import tomllib
from pathlib import Path
ROOT = Path(__file__).resolve().parents[2]
DEFAULT_CARGO_TOML = ROOT / "codex-rs" / "Cargo.toml"
DEFAULT_BAZELRC = ROOT / ".bazelrc"
BAZEL_CLIPPY_FLAG_PREFIX = "build:clippy --@rules_rust//rust/settings:clippy_flag="
BAZEL_SPECIAL_FLAGS = {"-Dwarnings"}
VALID_LEVELS = {"allow", "warn", "deny", "forbid"}
LONG_FLAG_RE = re.compile(
r"^--(?P<level>allow|warn|deny|forbid)=clippy::(?P<lint>[a-z0-9_]+)$"
)
SHORT_FLAG_RE = re.compile(r"^-(?P<level>[AWDF])clippy::(?P<lint>[a-z0-9_]+)$")
SHORT_LEVEL_NAMES = {
"A": "allow",
"W": "warn",
"D": "deny",
"F": "forbid",
}
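# Accepted flag shapes, e.g. "--deny=clippy::unwrap_used" (long) or
# "-Dclippy::unwrap_used" (short).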
def main() -> int:
parser = argparse.ArgumentParser(
description=(
"Verify that Bazel clippy flags in .bazelrc stay in sync with "
"codex-rs/Cargo.toml [workspace.lints.clippy]."
)
)
parser.add_argument(
"--cargo-toml",
type=Path,
default=DEFAULT_CARGO_TOML,
help="Path to the workspace Cargo.toml to inspect.",
)
parser.add_argument(
"--bazelrc",
type=Path,
default=DEFAULT_BAZELRC,
help="Path to the .bazelrc file to inspect.",
)
args = parser.parse_args()
cargo_toml = args.cargo_toml.resolve()
bazelrc = args.bazelrc.resolve()
cargo_lints = load_workspace_clippy_lints(cargo_toml)
bazel_lints = load_bazel_clippy_lints(bazelrc)
missing = sorted(cargo_lints.keys() - bazel_lints.keys())
extra = sorted(bazel_lints.keys() - cargo_lints.keys())
mismatched = sorted(
lint
for lint in cargo_lints.keys() & bazel_lints.keys()
if cargo_lints[lint] != bazel_lints[lint]
)
if missing or extra or mismatched:
print_sync_error(
cargo_toml=cargo_toml,
bazelrc=bazelrc,
cargo_lints=cargo_lints,
bazel_lints=bazel_lints,
missing=missing,
extra=extra,
mismatched=mismatched,
)
return 1
print(
"Bazel clippy flags in "
f"{display_path(bazelrc)} match "
f"{display_path(cargo_toml)} [workspace.lints.clippy]."
)
return 0
def load_workspace_clippy_lints(cargo_toml: Path) -> dict[str, str]:
workspace = tomllib.loads(cargo_toml.read_text())["workspace"]
clippy_lints = workspace["lints"]["clippy"]
parsed: dict[str, str] = {}
for lint, level in clippy_lints.items():
if not isinstance(level, str):
raise SystemExit(
f"expected string lint level for clippy::{lint} in {cargo_toml}, got {level!r}"
)
normalized = level.strip().lower()
if normalized not in VALID_LEVELS:
raise SystemExit(
f"unsupported lint level {level!r} for clippy::{lint} in {cargo_toml}"
)
parsed[lint] = normalized
return parsed
def load_bazel_clippy_lints(bazelrc: Path) -> dict[str, str]:
parsed: dict[str, str] = {}
line_numbers: dict[str, int] = {}
for lineno, line in enumerate(bazelrc.read_text().splitlines(), start=1):
if not line.startswith(BAZEL_CLIPPY_FLAG_PREFIX):
continue
flag = line.removeprefix(BAZEL_CLIPPY_FLAG_PREFIX).strip()
if flag in BAZEL_SPECIAL_FLAGS:
continue
parsed_flag = parse_bazel_lint_flag(flag)
if parsed_flag is None:
continue
lint, level = parsed_flag
if lint in parsed:
raise SystemExit(
f"duplicate Bazel clippy entry for clippy::{lint} at "
f"{bazelrc}:{line_numbers[lint]} and {bazelrc}:{lineno}"
)
parsed[lint] = level
line_numbers[lint] = lineno
return parsed
def parse_bazel_lint_flag(flag: str) -> tuple[str, str] | None:
long_match = LONG_FLAG_RE.match(flag)
if long_match:
return long_match["lint"], long_match["level"]
short_match = SHORT_FLAG_RE.match(flag)
if short_match:
return short_match["lint"], SHORT_LEVEL_NAMES[short_match["level"]]
return None
def print_sync_error(
*,
cargo_toml: Path,
bazelrc: Path,
cargo_lints: dict[str, str],
bazel_lints: dict[str, str],
missing: list[str],
extra: list[str],
mismatched: list[str],
) -> None:
cargo_toml_display = display_path(cargo_toml)
bazelrc_display = display_path(bazelrc)
example_manifest = find_workspace_lints_example_manifest()
print(
"ERROR: Bazel clippy flags are out of sync with Cargo workspace clippy lints.",
file=sys.stderr,
)
print(file=sys.stderr)
print(
f"Cargo defines the source of truth in {cargo_toml_display} "
"[workspace.lints.clippy].",
file=sys.stderr,
)
if example_manifest is not None:
print(
"Cargo applies those lint levels to member crates that opt into "
f"`[lints] workspace = true`, for example {example_manifest}.",
file=sys.stderr,
)
print(
"Bazel clippy does not ingest Cargo lint levels automatically, and "
"`clippy.toml` can configure lint behavior but cannot set allow/warn/deny/forbid.",
file=sys.stderr,
)
print(
f"Update {bazelrc_display} so its `build:clippy` "
"`clippy_flag` entries match Cargo.",
file=sys.stderr,
)
if missing:
print(file=sys.stderr)
print("Missing Bazel entries:", file=sys.stderr)
for lint in missing:
print(f" {render_bazelrc_line(lint, cargo_lints[lint])}", file=sys.stderr)
if mismatched:
print(file=sys.stderr)
print("Mismatched lint levels:", file=sys.stderr)
for lint in mismatched:
cargo_level = cargo_lints[lint]
bazel_level = bazel_lints[lint]
print(
f" clippy::{lint}: Cargo has {cargo_level}, Bazel has {bazel_level}",
file=sys.stderr,
)
print(
f" expected: {render_bazelrc_line(lint, cargo_level)}",
file=sys.stderr,
)
if extra:
print(file=sys.stderr)
print("Extra Bazel entries with no Cargo counterpart:", file=sys.stderr)
for lint in extra:
print(f" {render_bazelrc_line(lint, bazel_lints[lint])}", file=sys.stderr)
def render_bazelrc_line(lint: str, level: str) -> str:
return f"{BAZEL_CLIPPY_FLAG_PREFIX}--{level}=clippy::{lint}"
def display_path(path: Path) -> str:
try:
return str(path.relative_to(ROOT))
except ValueError:
return str(path)
def find_workspace_lints_example_manifest() -> str | None:
for cargo_toml in sorted((ROOT / "codex-rs").glob("**/Cargo.toml")):
if cargo_toml == DEFAULT_CARGO_TOML:
continue
data = tomllib.loads(cargo_toml.read_text())
if data.get("lints", {}).get("workspace") is True:
return str(cargo_toml.relative_to(ROOT))
return None
if __name__ == "__main__":
sys.exit(main())

View File

@@ -0,0 +1,125 @@
#!/usr/bin/env python3
"""Verify that codex-rs crates inherit workspace metadata, lints, and names.
This keeps `cargo clippy` aligned with the workspace lint policy by ensuring
each crate opts into `[lints] workspace = true`, and it also checks the crate
name conventions for top-level `codex-rs/*` crates and `codex-rs/utils/*`
crates.
"""
from __future__ import annotations
import sys
import tomllib
from pathlib import Path
ROOT = Path(__file__).resolve().parents[2]
CARGO_RS_ROOT = ROOT / "codex-rs"
WORKSPACE_PACKAGE_FIELDS = ("version", "edition", "license")
TOP_LEVEL_NAME_EXCEPTIONS = {
"windows-sandbox-rs": "codex-windows-sandbox",
}
UTILITY_NAME_EXCEPTIONS = {
"path-utils": "codex-utils-path",
}
def main() -> int:
failures = [
(path.relative_to(ROOT), errors)
for path in cargo_manifests()
if (errors := manifest_errors(path))
]
if not failures:
return 0
print(
"Cargo manifests under codex-rs must inherit workspace package metadata and "
"opt into workspace lints."
)
print(
"Cargo only applies `codex-rs/Cargo.toml` `[workspace.lints.clippy]` "
"entries to a crate when that crate declares:"
)
print()
print("[lints]")
print("workspace = true")
print()
print(
"Without that opt-in, `cargo clippy` can miss violations that Bazel clippy "
"catches."
)
print()
print(
"Package-name checks apply to `codex-rs/<crate>/Cargo.toml` and "
"`codex-rs/utils/<crate>/Cargo.toml`."
)
print()
for path, errors in failures:
print(f"{path}:")
for error in errors:
print(f" - {error}")
return 1
def manifest_errors(path: Path) -> list[str]:
manifest = load_manifest(path)
package = manifest.get("package")
if not isinstance(package, dict):
return []
errors = []
for field in WORKSPACE_PACKAGE_FIELDS:
if not is_workspace_reference(package.get(field)):
errors.append(f"set `{field}.workspace = true` in `[package]`")
lints = manifest.get("lints")
if not (isinstance(lints, dict) and lints.get("workspace") is True):
errors.append("add `[lints]` with `workspace = true`")
expected_name = expected_package_name(path)
if expected_name is not None:
actual_name = package.get("name")
if actual_name != expected_name:
errors.append(
f"set `[package].name` to `{expected_name}` (found `{actual_name}`)"
)
return errors
def expected_package_name(path: Path) -> str | None:
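    # e.g. codex-rs/tui/Cargo.toml -> codex-tui (default "codex-" prefixing),
    # codex-rs/utils/path-utils/Cargo.toml -> codex-utils-path (exception table).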
parts = path.relative_to(CARGO_RS_ROOT).parts
if len(parts) == 2 and parts[1] == "Cargo.toml":
directory = parts[0]
return TOP_LEVEL_NAME_EXCEPTIONS.get(
directory,
directory if directory.startswith("codex-") else f"codex-{directory}",
)
if len(parts) == 3 and parts[0] == "utils" and parts[2] == "Cargo.toml":
directory = parts[1]
return UTILITY_NAME_EXCEPTIONS.get(directory, f"codex-utils-{directory}")
return None
def is_workspace_reference(value: object) -> bool:
return isinstance(value, dict) and value.get("workspace") is True
def load_manifest(path: Path) -> dict:
return tomllib.loads(path.read_text())
def cargo_manifests() -> list[Path]:
return sorted(
path
for path in CARGO_RS_ROOT.rglob("Cargo.toml")
if path != CARGO_RS_ROOT / "Cargo.toml"
)
if __name__ == "__main__":
sys.exit(main())

.github/workflows/README.md (new file, 33 lines)
View File

@@ -0,0 +1,33 @@
# Workflow Strategy
The workflows in this directory are split so that pull requests get fast, review-friendly signal while `main` still gets the full cross-platform verification pass.
## Pull Requests
- `bazel.yml` is the main pre-merge verification path for Rust code.
It runs Bazel `test` and Bazel `clippy` on the supported Bazel targets.
- `rust-ci.yml` keeps the Cargo-native PR checks intentionally small:
- `cargo fmt --check`
- `cargo shear`
- `argument-comment-lint` on Linux, macOS, and Windows
- `tools/argument-comment-lint` package tests when the lint or its workflow wiring changes
The PR workflow still keeps the Linux lint lane on the default-targets-only invocation for now, but the released linter runs on Linux, macOS, and Windows before merge.
## Post-Merge On `main`
- `bazel.yml` also runs on pushes to `main`.
This re-verifies the merged Bazel path and helps keep the BuildBuddy caches warm.
- `rust-ci-full.yml` is the full Cargo-native verification workflow.
It keeps the heavier checks off the PR path while still validating them after merge:
- the full Cargo `clippy` matrix
- the full Cargo `nextest` matrix
- release-profile Cargo builds
- cross-platform `argument-comment-lint`
- Linux remote-env tests
## Rule Of Thumb
- If a build/test/clippy check can be expressed in Bazel, prefer putting the PR-time version in `bazel.yml`.
- Keep `rust-ci.yml` fast enough that it usually does not dominate PR latency.
- Reserve `rust-ci-full.yml` for heavyweight Cargo-native coverage that Bazel does not replace yet.

View File

@@ -17,6 +17,7 @@ concurrency:
cancel-in-progress: ${{ github.ref_name != 'main' }}
jobs:
test:
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
@@ -39,16 +40,16 @@ jobs:
# - os: ubuntu-24.04-arm
# target: aarch64-unknown-linux-gnu
# TODO: Enable Windows once we fix the toolchain issues there.
#- os: windows-latest
# target: x86_64-pc-windows-gnullvm
# Windows
- os: windows-latest
target: x86_64-pc-windows-gnullvm
runs-on: ${{ matrix.os }}
# Configure a human readable name for each job
name: Local Bazel build on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel CI
id: setup_bazel
@@ -67,46 +68,57 @@ jobs:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
# Keep V8 out of the ordinary Bazel CI path. Only the dedicated
# canary and release workflows should build `third_party/v8`.
bazel_targets=(
//...
# Keep standalone V8 library targets out of the ordinary Bazel CI
# path. V8 consumers under `//codex-rs/...` still participate
# transitively through `//...`.
-//third_party/v8:all
)
./.github/scripts/run-bazel-ci.sh \
--print-failed-test-logs \
--use-node-test-env \
-- \
test \
--test_tag_filters=-argument-comment-lint \
--test_verbose_timeout_warnings \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
-- \
//... \
-//third_party/v8:all
"${bazel_targets[@]}"
# Save bazel repository cache explicitly; make non-fatal so cache uploading
# never fails the overall job. Only save when key wasn't hit.
- name: Save bazel repository cache
if: always() && !cancelled() && steps.setup_bazel.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cache/bazel-repo-cache
key: bazel-cache-${{ matrix.target }}-${{ hashFiles('MODULE.bazel', 'codex-rs/Cargo.lock', 'codex-rs/Cargo.toml') }}
clippy:
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
# Keep Linux lint coverage on x64 and add the arm64 macOS path that
# the Bazel test job already exercises.
# the Bazel test job already exercises. Add Windows gnullvm as well
# so PRs get Bazel-native lint signal on the same Windows toolchain
# that the Bazel test job uses.
- os: ubuntu-24.04
target: x86_64-unknown-linux-gnu
- os: macos-15-xlarge
target: aarch64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-gnullvm
runs-on: ${{ matrix.os }}
name: Bazel clippy on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel CI
id: setup_bazel
@@ -136,7 +148,7 @@ jobs:
- name: Save bazel repository cache
if: always() && !cancelled() && steps.setup_bazel.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cache/bazel-repo-cache

View File

@@ -8,7 +8,7 @@ jobs:
name: Blob size policy
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
fetch-depth: 0

View File

@@ -14,13 +14,13 @@ jobs:
working-directory: ./codex-rs
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Run cargo-deny
uses: EmbarkStudios/cargo-deny-action@v2
uses: EmbarkStudios/cargo-deny-action@82eb9f621fbc699dd0918f3ea06864c14cc84246 # v2
with:
rust-version: stable
manifest-path: ./codex-rs/Cargo.toml

View File

@@ -12,15 +12,21 @@ jobs:
NODE_OPTIONS: --max-old-space-size=4096
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Verify codex-rs Cargo manifests inherit workspace settings
run: python3 .github/scripts/verify_cargo_workspace_manifests.py
- name: Verify Bazel clippy flags match Cargo workspace lints
run: python3 .github/scripts/verify_bazel_clippy_lints.py
- name: Setup pnpm
uses: pnpm/action-setup@v5
uses: pnpm/action-setup@a8198c4bff370c8506180b035930dea56dbd5288 # v5
with:
run_install: false
- name: Setup Node.js
uses: actions/setup-node@v6
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version: 22
@@ -28,7 +34,7 @@ jobs:
run: pnpm install --frozen-lockfile
# stage_npm_packages.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
- name: Stage npm package
id: stage_npm_package
@@ -47,7 +53,7 @@ jobs:
echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"
- name: Upload staged npm package artifact
uses: actions/upload-artifact@v7
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-npm-staging
path: ${{ steps.stage_npm_package.outputs.pack_output }}

View File

@@ -18,7 +18,7 @@ jobs:
if: ${{ github.repository_owner == 'openai' }}
runs-on: ubuntu-latest
steps:
- uses: contributor-assistant/github-action@v2.6.1
- uses: contributor-assistant/github-action@ca4a40a7d1004f18d9960b404b97e5f30a505a08 # v2.6.1
# Run on close only if the PR was merged. This will lock the PR to preserve
# the CLA agreement. We don't want to lock PRs that have been closed without
# merging because the contributor may want to respond with additional comments.

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Close inactive PRs from contributors
uses: actions/github-script@v8
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |

View File

@@ -18,7 +18,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Annotate locations with typos
uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1
- name: Codespell

View File

@@ -19,7 +19,7 @@ jobs:
reason: ${{ steps.normalize-all.outputs.reason }}
has_matches: ${{ steps.normalize-all.outputs.has_matches }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Prepare Codex inputs
env:
@@ -61,7 +61,7 @@ jobs:
# .github/prompts/issue-deduplicator.txt file is obsolete and removed.
- id: codex-all
name: Find duplicates (pass 1, all issues)
uses: openai/codex-action@main
uses: openai/codex-action@0b91f4a2703c23df3102c3f0967d3c6db34eedef # v1
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"
@@ -155,7 +155,7 @@ jobs:
reason: ${{ steps.normalize-open.outputs.reason }}
has_matches: ${{ steps.normalize-open.outputs.has_matches }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Prepare Codex inputs
env:
@@ -195,7 +195,7 @@ jobs:
- id: codex-open
name: Find duplicates (pass 2, open issues)
uses: openai/codex-action@main
uses: openai/codex-action@0b91f4a2703c23df3102c3f0967d3c6db34eedef # v1
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"
@@ -342,7 +342,7 @@ jobs:
issues: write
steps:
- name: Comment on issue
uses: actions/github-script@v8
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
CODEX_OUTPUT: ${{ needs.select-final.outputs.codex_output }}
with:

View File

@@ -17,10 +17,10 @@ jobs:
outputs:
codex_output: ${{ steps.codex.outputs.final-message }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- id: codex
uses: openai/codex-action@main
uses: openai/codex-action@0b91f4a2703c23df3102c3f0967d3c6db34eedef # v1
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"

.github/workflows/rust-ci-full.yml (new file, 775 lines)
View File

@@ -0,0 +1,775 @@
name: rust-ci-full
on:
push:
branches:
- main
workflow_dispatch:
# CI builds in debug (dev) for faster signal.
jobs:
# --- CI that doesn't need specific targets ---------------------------------
general:
name: Format / etc
runs-on: ubuntu-24.04
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
components: rustfmt
- name: cargo fmt
run: cargo fmt -- --config imports_granularity=Item --check
cargo_shear:
name: cargo shear
runs-on: ubuntu-24.04
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
version: 1.5.1
- name: cargo shear
run: cargo shear
argument_comment_lint_package:
name: Argument comment lint package
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
toolchain: nightly-2025-09-18
components: llvm-tools-preview, rustc-dev, rust-src
- name: Cache cargo-dylint tooling
id: cargo_dylint_cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/cargo-dylint
~/.cargo/bin/dylint-link
~/.cargo/registry/index
~/.cargo/registry/cache
~/.cargo/git/db
key: argument-comment-lint-${{ runner.os }}-${{ hashFiles('tools/argument-comment-lint/Cargo.lock', 'tools/argument-comment-lint/rust-toolchain', '.github/workflows/rust-ci.yml', '.github/workflows/rust-ci-full.yml') }}
- name: Install cargo-dylint tooling
if: ${{ steps.cargo_dylint_cache.outputs.cache-hit != 'true' }}
run: cargo install --locked cargo-dylint dylint-link
- name: Check Python wrapper syntax
run: python3 -m py_compile tools/argument-comment-lint/wrapper_common.py tools/argument-comment-lint/run.py tools/argument-comment-lint/run-prebuilt-linter.py tools/argument-comment-lint/test_wrapper_common.py
- name: Test Python wrapper helpers
run: python3 -m unittest discover -s tools/argument-comment-lint -p 'test_*.py'
- name: Test argument comment lint package
working-directory: tools/argument-comment-lint
run: cargo test
argument_comment_lint_prebuilt:
name: Argument comment lint - ${{ matrix.name }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
- name: Linux
runner: ubuntu-24.04
- name: macOS
runner: macos-15-xlarge
- name: Windows
runner: windows-x64
runs_on:
group: codex-runners
labels: codex-windows-x64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: ./.github/actions/setup-bazel-ci
with:
target: ${{ runner.os }}
install-test-prereqs: true
- name: Install Linux sandbox build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
sudo DEBIAN_FRONTEND=noninteractive apt-get update
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os != 'Windows' }}
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
bazel_targets="$(./tools/argument-comment-lint/list-bazel-targets.sh)"
./.github/scripts/run-bazel-ci.sh \
-- \
build \
--config=argument-comment-lint \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
-- \
${bazel_targets}
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os == 'Windows' }}
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
./.github/scripts/run-argument-comment-lint-bazel.sh \
--config=argument-comment-lint \
--platforms=//:local_windows \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA}
# --- CI to validate on different os/targets --------------------------------
lint_build:
name: Lint/Build — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.profile == 'release' && ' (release)' || '' }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects, except on
# arm64 macOS runners cross-targeting x86_64 where ring/cc-rs can produce
# mixed-architecture archives under sccache.
USE_SCCACHE: ${{ (startsWith(matrix.runner, 'windows') || (matrix.runner == 'macos-15-xlarge' && matrix.target == 'x86_64-apple-darwin')) && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
# In rust-ci, representative release-profile checks use thin LTO for faster feedback.
CARGO_PROFILE_RELEASE_LTO: ${{ matrix.profile == 'release' && 'thin' || 'fat' }}
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: macos-15-xlarge
target: x86_64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
# Also run representative release builds on Mac and Linux because
# there could be release-only build errors we want to catch.
# Hopefully this also pre-populates the build cache to speed up
# releases.
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: release
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
packages=(pkg-config libcap-dev)
if [[ "${{ matrix.target }}" == 'x86_64-unknown-linux-musl' || "${{ matrix.target }}" == 'aarch64-unknown-linux-musl' ]]; then
packages+=(libubsan1)
fi
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends "${packages[@]}"
fi
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
targets: ${{ matrix.target }}
components: clippy
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Use hermetic Cargo home (musl)
shell: bash
run: |
set -euo pipefail
cargo_home="${GITHUB_WORKSPACE}/.cargo-home"
mkdir -p "${cargo_home}/bin"
echo "CARGO_HOME=${cargo_home}" >> "$GITHUB_ENV"
echo "${cargo_home}/bin" >> "$GITHUB_PATH"
: > "${cargo_home}/config.toml"
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
# Explicit cache restore: split cargo home vs target, so we can
# avoid caching the large target dir on the gnu-dev job.
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
# Install and restore sccache cache
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Disable sccache wrapper (musl)
shell: bash
run: |
set -euo pipefail
echo "RUSTC_WRAPPER=" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Prepare APT cache directories (musl)
shell: bash
run: |
set -euo pipefail
sudo mkdir -p /var/cache/apt/archives /var/lib/apt/lists
sudo chown -R "$USER:$USER" /var/cache/apt /var/lib/apt/lists
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Restore APT cache (musl)
id: cache_apt_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
env:
DEBIAN_FRONTEND: noninteractive
TARGET: ${{ matrix.target }}
APT_UPDATE_ARGS: -o Acquire::Retries=3
APT_INSTALL_ARGS: --no-install-recommends
shell: bash
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Configure rustc UBSan wrapper (musl host)
shell: bash
run: |
set -euo pipefail
ubsan=""
if command -v ldconfig >/dev/null 2>&1; then
ubsan="$(ldconfig -p | grep -m1 'libubsan\.so\.1' | sed -E 's/.*=> (.*)$/\1/')"
fi
wrapper_root="${RUNNER_TEMP:-/tmp}"
wrapper="${wrapper_root}/rustc-ubsan-wrapper"
cat > "${wrapper}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${ubsan}" ]]; then
export LD_PRELOAD="${ubsan}\${LD_PRELOAD:+:\${LD_PRELOAD}}"
fi
exec "\$1" "\${@:2}"
EOF
chmod +x "${wrapper}"
echo "RUSTC_WRAPPER=${wrapper}" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Clear sanitizer flags (musl)
shell: bash
run: |
set -euo pipefail
# Clear global Rust flags so host/proc-macro builds don't pull in UBSan.
echo "RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_ENCODED_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "RUSTDOCFLAGS=" >> "$GITHUB_ENV"
# Override any runner-level Cargo config rustflags as well.
echo "CARGO_BUILD_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
sanitize_flags() {
local input="$1"
input="${input//-fsanitize=undefined/}"
input="${input//-fno-sanitize-recover=undefined/}"
input="${input//-fno-sanitize-trap=undefined/}"
echo "$input"
}
cflags="$(sanitize_flags "${CFLAGS-}")"
cxxflags="$(sanitize_flags "${CXXFLAGS-}")"
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl' }}
name: Configure musl rusty_v8 artifact overrides
env:
TARGET: ${{ matrix.target }}
shell: bash
run: |
set -euo pipefail
version="$(python3 "${GITHUB_WORKSPACE}/.github/scripts/rusty_v8_bazel.py" resolved-v8-crate-version)"
release_tag="rusty-v8-v${version}"
base_url="https://github.com/openai/codex/releases/download/${release_tag}"
archive="https://github.com/openai/codex/releases/download/rusty-v8-v${version}/librusty_v8_release_${TARGET}.a.gz"
binding_dir="${RUNNER_TEMP}/rusty_v8"
binding_path="${binding_dir}/src_binding_release_${TARGET}.rs"
mkdir -p "${binding_dir}"
curl -fsSL "${base_url}/src_binding_release_${TARGET}.rs" -o "${binding_path}"
echo "RUSTY_V8_ARCHIVE=${archive}" >> "$GITHUB_ENV"
echo "RUSTY_V8_SRC_BINDING_PATH=${binding_path}" >> "$GITHUB_ENV"
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-chef
version: 0.1.71
- name: Pre-warm dependency cache (cargo-chef)
if: ${{ matrix.profile == 'release' }}
shell: bash
run: |
set -euo pipefail
RECIPE="${RUNNER_TEMP}/chef-recipe.json"
cargo chef prepare --recipe-path "$RECIPE"
cargo chef cook --recipe-path "$RECIPE" --target ${{ matrix.target }} --release --all-features
- name: cargo clippy
run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} --timings -- -D warnings
- name: Upload Cargo timings (clippy)
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-ci-clippy-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
# Save caches explicitly; make non-fatal so cache packaging
# never fails the overall job. Only save when key wasn't hit.
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (${{ matrix.profile }})";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: Save APT cache (musl)
if: always() && !cancelled() && (matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl') && steps.cache_apt_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
tests:
name: Tests — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.remote_env == 'true' && ' (remote)' || '' }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
# Perhaps we can bring this back down to 30m once we finish the cutover
# from tui_app_server/ to tui/. Incidentally, windows-arm64 was the main
# offender for exceeding the timeout.
timeout-minutes: 45
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects, except on
# arm64 macOS runners cross-targeting x86_64 where ring/cc-rs can produce
# mixed-architecture archives under sccache.
USE_SCCACHE: ${{ (startsWith(matrix.runner, 'windows') || (matrix.runner == 'macos-15-xlarge' && matrix.target == 'x86_64-apple-darwin')) && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
remote_env: "true"
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Node.js for js_repl tests
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version-file: codex-rs/node-version.txt
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
fi
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
targets: ${{ matrix.target }}
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: nextest
version: 0.9.103
- name: Enable unprivileged user namespaces (Linux)
if: runner.os == 'Linux'
run: |
# Required for bubblewrap to work on Linux CI runners.
sudo sysctl -w kernel.unprivileged_userns_clone=1
# Ubuntu 24.04+ can additionally gate unprivileged user namespaces
# behind AppArmor.
if sudo sysctl -a 2>/dev/null | grep -q '^kernel.apparmor_restrict_unprivileged_userns'; then
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
fi
- name: Set up remote test env (Docker)
if: ${{ runner.os == 'Linux' && matrix.remote_env == 'true' }}
shell: bash
run: |
set -euo pipefail
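# Sourcing the script is expected to start the named container and export CODEX_TEST_REMOTE_ENV, which we persist for the test step.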
export CODEX_TEST_REMOTE_ENV_CONTAINER_NAME=codex-remote-test-env
source "${GITHUB_WORKSPACE}/scripts/test-remote-env.sh"
echo "CODEX_TEST_REMOTE_ENV=${CODEX_TEST_REMOTE_ENV}" >> "$GITHUB_ENV"
- name: tests
id: test
run: cargo nextest run --all-features --no-fail-fast --target ${{ matrix.target }} --cargo-profile ci-test --timings
env:
RUST_BACKTRACE: 1
NEXTEST_STATUS_LEVEL: leak
- name: Upload Cargo timings (nextest)
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-ci-nextest-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (tests)";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: Tear down remote test env
if: ${{ always() && runner.os == 'Linux' && matrix.remote_env == 'true' }}
shell: bash
run: |
set +e
if [[ "${{ steps.test.outcome }}" != "success" ]]; then
docker logs codex-remote-test-env || true
fi
docker rm -f codex-remote-test-env >/dev/null 2>&1 || true
- name: verify tests passed
if: steps.test.outcome == 'failure'
run: |
echo "Tests failed. See logs for details."
exit 1
# --- Gatherer job for the full post-merge workflow --------------------------
results:
name: Full CI results
needs:
[
general,
cargo_shear,
argument_comment_lint_package,
argument_comment_lint_prebuilt,
lint_build,
tests,
]
if: always()
runs-on: ubuntu-24.04
steps:
- name: Summarize
shell: bash
run: |
echo "argpkg : ${{ needs.argument_comment_lint_package.result }}"
echo "arglint: ${{ needs.argument_comment_lint_prebuilt.result }}"
echo "general: ${{ needs.general.result }}"
echo "shear : ${{ needs.cargo_shear.result }}"
echo "lint : ${{ needs.lint_build.result }}"
echo "tests : ${{ needs.tests.result }}"
[[ '${{ needs.argument_comment_lint_package.result }}' == 'success' ]] || { echo 'argument_comment_lint_package failed'; exit 1; }
[[ '${{ needs.argument_comment_lint_prebuilt.result }}' == 'success' ]] || { echo 'argument_comment_lint_prebuilt failed'; exit 1; }
[[ '${{ needs.general.result }}' == 'success' ]] || { echo 'general failed'; exit 1; }
[[ '${{ needs.cargo_shear.result }}' == 'success' ]] || { echo 'cargo_shear failed'; exit 1; }
[[ '${{ needs.lint_build.result }}' == 'success' ]] || { echo 'lint_build failed'; exit 1; }
[[ '${{ needs.tests.result }}' == 'success' ]] || { echo 'tests failed'; exit 1; }
- name: sccache summary note
if: always()
run: |
echo "Per-job sccache stats are attached to each matrix job's Step Summary."

View File

@@ -1,15 +1,10 @@
name: rust-ci
on:
pull_request: {}
push:
branches:
- main
workflow_dispatch:
# CI builds in debug (dev) for faster signal.
jobs:
# --- Detect what changed to detect which tests to run (always runs) -------------------------------------
# --- Detect what changed so the fast PR workflow only runs relevant jobs ----
changed:
name: Detect changed areas
runs-on: ubuntu-24.04
@@ -19,7 +14,7 @@ jobs:
codex: ${{ steps.detect.outputs.codex }}
workflows: ${{ steps.detect.outputs.workflows }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
fetch-depth: 0
- name: Detect changed paths (no external action)
@@ -33,11 +28,10 @@ jobs:
HEAD_SHA='${{ github.event.pull_request.head.sha }}'
echo "Base SHA: $BASE_SHA"
echo "Head SHA: $HEAD_SHA"
# List files changed between base and PR head
mapfile -t files < <(git diff --name-only --no-renames "$BASE_SHA" "$HEAD_SHA")
else
# On push / manual runs, default to running everything
files=("codex-rs/force" ".github/force")
# On manual runs, default to the full fast-PR bundle.
files=("codex-rs/force" "tools/argument-comment-lint/force" ".github/force")
fi
codex=false
@@ -47,7 +41,7 @@ jobs:
for f in "${files[@]}"; do
[[ $f == codex-rs/* ]] && codex=true
[[ $f == codex-rs/* || $f == tools/argument-comment-lint/* || $f == justfile ]] && argument_comment_lint=true
[[ $f == tools/argument-comment-lint/* || $f == .github/workflows/rust-ci.yml ]] && argument_comment_lint_package=true
[[ $f == tools/argument-comment-lint/* || $f == .github/workflows/rust-ci.yml || $f == .github/workflows/rust-ci-full.yml ]] && argument_comment_lint_package=true
[[ $f == .github/* ]] && workflows=true
done
@@ -56,18 +50,18 @@ jobs:
echo "codex=$codex" >> "$GITHUB_OUTPUT"
echo "workflows=$workflows" >> "$GITHUB_OUTPUT"
# --- CI that doesn't need specific targets ---------------------------------
# --- Fast Cargo-native PR checks -------------------------------------------
general:
name: Format / etc
runs-on: ubuntu-24.04
needs: changed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' }}
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.93.0
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
components: rustfmt
- name: cargo fmt
@@ -77,13 +71,13 @@ jobs:
name: cargo shear
runs-on: ubuntu-24.04
needs: changed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' }}
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.93.0
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
@@ -95,16 +89,23 @@ jobs:
name: Argument comment lint package
runs-on: ubuntu-24.04
needs: changed
if: ${{ needs.changed.outputs.argument_comment_lint_package == 'true' || github.event_name == 'push' }}
if: ${{ needs.changed.outputs.argument_comment_lint_package == 'true' }}
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.93.0
with:
toolchain: nightly-2025-09-18
components: llvm-tools-preview, rustc-dev, rust-src
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- name: Install nightly argument-comment-lint toolchain
shell: bash
run: |
rustup toolchain install nightly-2025-09-18 \
--profile minimal \
--component llvm-tools-preview \
--component rustc-dev \
--component rust-src \
--no-self-update
rustup default nightly-2025-09-18
- name: Cache cargo-dylint tooling
id: cargo_dylint_cache
uses: actions/cache@v5
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/cargo-dylint
@@ -112,12 +113,14 @@ jobs:
~/.cargo/registry/index
~/.cargo/registry/cache
~/.cargo/git/db
key: argument-comment-lint-${{ runner.os }}-${{ hashFiles('tools/argument-comment-lint/Cargo.lock', 'tools/argument-comment-lint/rust-toolchain', '.github/workflows/rust-ci.yml') }}
key: argument-comment-lint-${{ runner.os }}-${{ hashFiles('tools/argument-comment-lint/Cargo.lock', 'tools/argument-comment-lint/rust-toolchain', '.github/workflows/rust-ci.yml', '.github/workflows/rust-ci-full.yml') }}
- name: Install cargo-dylint tooling
if: ${{ steps.cargo_dylint_cache.outputs.cache-hit != 'true' }}
run: cargo install --locked cargo-dylint dylint-link
- name: Check source wrapper syntax
run: bash -n tools/argument-comment-lint/run.sh
- name: Check Python wrapper syntax
run: python3 -m py_compile tools/argument-comment-lint/wrapper_common.py tools/argument-comment-lint/run.py tools/argument-comment-lint/run-prebuilt-linter.py tools/argument-comment-lint/test_wrapper_common.py
- name: Test Python wrapper helpers
run: python3 -m unittest discover -s tools/argument-comment-lint -p 'test_*.py'
- name: Test argument comment lint package
working-directory: tools/argument-comment-lint
run: cargo test
@@ -125,654 +128,63 @@ jobs:
argument_comment_lint_prebuilt:
name: Argument comment lint - ${{ matrix.name }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: ${{ matrix.timeout_minutes }}
needs: changed
if: ${{ needs.changed.outputs.argument_comment_lint == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
if: ${{ needs.changed.outputs.argument_comment_lint == 'true' || needs.changed.outputs.workflows == 'true' }}
strategy:
fail-fast: false
matrix:
include:
- name: Linux
runner: ubuntu-24.04
timeout_minutes: 30
- name: macOS
runner: macos-15-xlarge
timeout_minutes: 30
- name: Windows
runner: windows-x64
timeout_minutes: 30
runs_on:
group: codex-runners
labels: codex-windows-x64
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: ./.github/actions/setup-bazel-ci
with:
target: ${{ runner.os }}
install-test-prereqs: true
- name: Install Linux sandbox build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
sudo DEBIAN_FRONTEND=noninteractive apt-get update
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
- uses: dtolnay/rust-toolchain@1.93.0
with:
toolchain: nightly-2025-09-18
components: llvm-tools-preview, rustc-dev, rust-src
- uses: facebook/install-dotslash@v2
- name: Run argument comment lint on codex-rs
shell: bash
run: ./tools/argument-comment-lint/run-prebuilt-linter.sh
# --- CI to validate on different os/targets --------------------------------
lint_build:
name: Lint/Build — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.profile == 'release' && ' (release)' || '' }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
needs: changed
# Keep job-level if to avoid spinning up runners when not needed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects, except on
# arm64 macOS runners cross-targeting x86_64 where ring/cc-rs can produce
# mixed-architecture archives under sccache.
USE_SCCACHE: ${{ (startsWith(matrix.runner, 'windows') || (matrix.runner == 'macos-15-xlarge' && matrix.target == 'x86_64-apple-darwin')) && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
# In rust-ci, representative release-profile checks use thin LTO for faster feedback.
CARGO_PROFILE_RELEASE_LTO: ${{ matrix.profile == 'release' && 'thin' || 'fat' }}
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: macos-15-xlarge
target: x86_64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
# Also run representative release builds on Mac and Linux because
# there could be release-only build errors we want to catch.
# Hopefully this also pre-populates the build cache to speed up
# releases.
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: release
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@v6
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
packages=(pkg-config libcap-dev)
if [[ "${{ matrix.target }}" == 'x86_64-unknown-linux-musl' || "${{ matrix.target }}" == 'aarch64-unknown-linux-musl' ]]; then
packages+=(libubsan1)
fi
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends "${packages[@]}"
fi
- uses: dtolnay/rust-toolchain@1.93.0
with:
targets: ${{ matrix.target }}
components: clippy
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Use hermetic Cargo home (musl)
shell: bash
run: |
set -euo pipefail
cargo_home="${GITHUB_WORKSPACE}/.cargo-home"
mkdir -p "${cargo_home}/bin"
echo "CARGO_HOME=${cargo_home}" >> "$GITHUB_ENV"
echo "${cargo_home}/bin" >> "$GITHUB_PATH"
: > "${cargo_home}/config.toml"
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
# Explicit cache restore: split cargo home vs target, so we can
# avoid caching the large target dir on the gnu-dev job.
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
# Install and restore sccache cache
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Disable sccache wrapper (musl)
shell: bash
run: |
set -euo pipefail
echo "RUSTC_WRAPPER=" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Prepare APT cache directories (musl)
shell: bash
run: |
set -euo pipefail
sudo mkdir -p /var/cache/apt/archives /var/lib/apt/lists
sudo chown -R "$USER:$USER" /var/cache/apt /var/lib/apt/lists
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Restore APT cache (musl)
id: cache_apt_restore
uses: actions/cache/restore@v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os != 'Windows' }}
env:
DEBIAN_FRONTEND: noninteractive
TARGET: ${{ matrix.target }}
APT_UPDATE_ARGS: -o Acquire::Retries=3
APT_INSTALL_ARGS: --no-install-recommends
shell: bash
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Configure rustc UBSan wrapper (musl host)
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
set -euo pipefail
ubsan=""
if command -v ldconfig >/dev/null 2>&1; then
ubsan="$(ldconfig -p | grep -m1 'libubsan\.so\.1' | sed -E 's/.*=> (.*)$/\1/')"
fi
wrapper_root="${RUNNER_TEMP:-/tmp}"
wrapper="${wrapper_root}/rustc-ubsan-wrapper"
cat > "${wrapper}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${ubsan}" ]]; then
export LD_PRELOAD="${ubsan}\${LD_PRELOAD:+:\${LD_PRELOAD}}"
fi
exec "\$1" "\${@:2}"
EOF
chmod +x "${wrapper}"
echo "RUSTC_WRAPPER=${wrapper}" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Clear sanitizer flags (musl)
shell: bash
run: |
set -euo pipefail
# Clear global Rust flags so host/proc-macro builds don't pull in UBSan.
echo "RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_ENCODED_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "RUSTDOCFLAGS=" >> "$GITHUB_ENV"
# Override any runner-level Cargo config rustflags as well.
echo "CARGO_BUILD_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
sanitize_flags() {
local input="$1"
input="${input//-fsanitize=undefined/}"
input="${input//-fno-sanitize-recover=undefined/}"
input="${input//-fno-sanitize-trap=undefined/}"
echo "$input"
}
cflags="$(sanitize_flags "${CFLAGS-}")"
cxxflags="$(sanitize_flags "${CXXFLAGS-}")"
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl' }}
name: Configure musl rusty_v8 artifact overrides
bazel_targets="$(./tools/argument-comment-lint/list-bazel-targets.sh)"
./.github/scripts/run-bazel-ci.sh \
-- \
build \
--config=argument-comment-lint \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
-- \
${bazel_targets}
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os == 'Windows' }}
env:
TARGET: ${{ matrix.target }}
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
set -euo pipefail
version="$(python3 "${GITHUB_WORKSPACE}/.github/scripts/rusty_v8_bazel.py" resolved-v8-crate-version)"
release_tag="rusty-v8-v${version}"
base_url="https://github.com/openai/codex/releases/download/${release_tag}"
archive="https://github.com/openai/codex/releases/download/rusty-v8-v${version}/librusty_v8_release_${TARGET}.a.gz"
binding_dir="${RUNNER_TEMP}/rusty_v8"
binding_path="${binding_dir}/src_binding_release_${TARGET}.rs"
mkdir -p "${binding_dir}"
curl -fsSL "${base_url}/src_binding_release_${TARGET}.rs" -o "${binding_path}"
echo "RUSTY_V8_ARCHIVE=${archive}" >> "$GITHUB_ENV"
echo "RUSTY_V8_SRC_BINDING_PATH=${binding_path}" >> "$GITHUB_ENV"
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-chef
version: 0.1.71
- name: Pre-warm dependency cache (cargo-chef)
if: ${{ matrix.profile == 'release' }}
shell: bash
run: |
set -euo pipefail
RECIPE="${RUNNER_TEMP}/chef-recipe.json"
cargo chef prepare --recipe-path "$RECIPE"
cargo chef cook --recipe-path "$RECIPE" --target ${{ matrix.target }} --release --all-features
- name: cargo clippy
run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} --timings -- -D warnings
- name: Upload Cargo timings (clippy)
if: always()
uses: actions/upload-artifact@v7
with:
name: cargo-timings-rust-ci-clippy-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
# Save caches explicitly; make non-fatal so cache packaging
# never fails the overall job. Only save when key wasn't hit.
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (${{ matrix.profile }})";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: Save APT cache (musl)
if: always() && !cancelled() && (matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl') && steps.cache_apt_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
tests:
name: Tests — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.remote_env == 'true' && ' (remote)' || '' }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
# Perhaps we can bring this back down to 30m once we finish the cutover
# from tui_app_server/ to tui/. Incidentally, windows-arm64 was the main
# offender for exceeding the timeout.
timeout-minutes: 45
needs: changed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects, except on
# arm64 macOS runners cross-targeting x86_64 where ring/cc-rs can produce
# mixed-architecture archives under sccache.
USE_SCCACHE: ${{ (startsWith(matrix.runner, 'windows') || (matrix.runner == 'macos-15-xlarge' && matrix.target == 'x86_64-apple-darwin')) && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
remote_env: "true"
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@v6
- name: Set up Node.js for js_repl tests
uses: actions/setup-node@v6
with:
node-version-file: codex-rs/node-version.txt
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
fi
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
uses: facebook/install-dotslash@v2
- uses: dtolnay/rust-toolchain@1.93.0
with:
targets: ${{ matrix.target }}
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: nextest
version: 0.9.103
- name: Enable unprivileged user namespaces (Linux)
if: runner.os == 'Linux'
run: |
# Required for bubblewrap to work on Linux CI runners.
sudo sysctl -w kernel.unprivileged_userns_clone=1
# Ubuntu 24.04+ can additionally gate unprivileged user namespaces
# behind AppArmor.
if sudo sysctl -a 2>/dev/null | grep -q '^kernel.apparmor_restrict_unprivileged_userns'; then
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
fi
- name: Set up remote test env (Docker)
if: ${{ runner.os == 'Linux' && matrix.remote_env == 'true' }}
shell: bash
run: |
set -euo pipefail
export CODEX_TEST_REMOTE_ENV_CONTAINER_NAME=codex-remote-test-env
source "${GITHUB_WORKSPACE}/scripts/test-remote-env.sh"
echo "CODEX_TEST_REMOTE_ENV=${CODEX_TEST_REMOTE_ENV}" >> "$GITHUB_ENV"
- name: tests
id: test
run: cargo nextest run --all-features --no-fail-fast --target ${{ matrix.target }} --cargo-profile ci-test --timings
env:
RUST_BACKTRACE: 1
NEXTEST_STATUS_LEVEL: leak
- name: Upload Cargo timings (nextest)
if: always()
uses: actions/upload-artifact@v7
with:
name: cargo-timings-rust-ci-nextest-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (tests)";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: Tear down remote test env
if: ${{ always() && runner.os == 'Linux' && matrix.remote_env == 'true' }}
shell: bash
run: |
set +e
if [[ "${{ steps.test.outcome }}" != "success" ]]; then
docker logs codex-remote-test-env || true
fi
docker rm -f codex-remote-test-env >/dev/null 2>&1 || true
- name: verify tests passed
if: steps.test.outcome == 'failure'
run: |
echo "Tests failed. See logs for details."
exit 1
./.github/scripts/run-argument-comment-lint-bazel.sh \
--config=argument-comment-lint \
--platforms=//:local_windows \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA}
# --- Gatherer job that you mark as the ONLY required status -----------------
results:
@@ -784,8 +196,6 @@ jobs:
cargo_shear,
argument_comment_lint_package,
argument_comment_lint_prebuilt,
lint_build,
tests,
]
if: always()
runs-on: ubuntu-24.04
@@ -797,32 +207,23 @@ jobs:
echo "arglint: ${{ needs.argument_comment_lint_prebuilt.result }}"
echo "general: ${{ needs.general.result }}"
echo "shear : ${{ needs.cargo_shear.result }}"
echo "lint : ${{ needs.lint_build.result }}"
echo "tests : ${{ needs.tests.result }}"
# If nothing relevant changed (PR touching only root README, etc.),
# declare success regardless of other jobs.
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' != 'true' && '${{ needs.changed.outputs.codex }}' != 'true' && '${{ needs.changed.outputs.workflows }}' != 'true' && '${{ github.event_name }}' != 'push' ]]; then
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' != 'true' && '${{ needs.changed.outputs.codex }}' != 'true' && '${{ needs.changed.outputs.workflows }}' != 'true' ]]; then
echo 'No relevant changes -> CI not required.'
exit 0
fi
if [[ '${{ needs.changed.outputs.argument_comment_lint_package }}' == 'true' || '${{ github.event_name }}' == 'push' ]]; then
if [[ '${{ needs.changed.outputs.argument_comment_lint_package }}' == 'true' ]]; then
[[ '${{ needs.argument_comment_lint_package.result }}' == 'success' ]] || { echo 'argument_comment_lint_package failed'; exit 1; }
fi
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' || '${{ github.event_name }}' == 'push' ]]; then
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' ]]; then
[[ '${{ needs.argument_comment_lint_prebuilt.result }}' == 'success' ]] || { echo 'argument_comment_lint_prebuilt failed'; exit 1; }
fi
if [[ '${{ needs.changed.outputs.codex }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' || '${{ github.event_name }}' == 'push' ]]; then
if [[ '${{ needs.changed.outputs.codex }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' ]]; then
[[ '${{ needs.general.result }}' == 'success' ]] || { echo 'general failed'; exit 1; }
[[ '${{ needs.cargo_shear.result }}' == 'success' ]] || { echo 'cargo_shear failed'; exit 1; }
[[ '${{ needs.lint_build.result }}' == 'success' ]] || { echo 'lint_build failed'; exit 1; }
[[ '${{ needs.tests.result }}' == 'success' ]] || { echo 'tests failed'; exit 1; }
fi
- name: sccache summary note
if: always()
run: |
echo "Per-job sccache stats are attached to each matrix job's Step Summary."

View File

@@ -53,9 +53,9 @@ jobs:
labels: codex-windows-x64
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@1.93.0
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
toolchain: nightly-2025-09-18
targets: ${{ matrix.target }}
@@ -97,7 +97,7 @@ jobs:
(cd "${RUNNER_TEMP}" && tar -czf "$GITHUB_WORKSPACE/$archive_path" argument-comment-lint)
fi
- uses: actions/upload-artifact@v7
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: argument-comment-lint-${{ matrix.target }}
path: dist/argument-comment-lint/${{ matrix.target }}/*

View File

@@ -18,7 +18,7 @@ jobs:
if: github.repository == 'openai/codex'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
ref: main
fetch-depth: 0
@@ -43,7 +43,7 @@ jobs:
curl --http1.1 --fail --show-error --location "${headers[@]}" "${url}" | jq '.' > codex-rs/core/models.json
- name: Open pull request (if changed)
uses: peter-evans/create-pull-request@v8
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
commit-message: "Update models.json"
title: "Update models.json"

View File

@@ -67,7 +67,7 @@ jobs:
labels: codex-windows-arm64
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Print runner specs (Windows)
shell: powershell
run: |
@@ -82,7 +82,7 @@ jobs:
Write-Host "Total RAM: $ramGiB GiB"
Write-Host "Disk usage:"
Get-PSDrive -PSProvider FileSystem | Format-Table -AutoSize Name, @{Name='Size(GB)';Expression={[math]::Round(($_.Used + $_.Free) / 1GB, 1)}}, @{Name='Free(GB)';Expression={[math]::Round($_.Free / 1GB, 1)}}
- uses: dtolnay/rust-toolchain@1.93.0
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
targets: ${{ matrix.target }}
@@ -92,7 +92,7 @@ jobs:
cargo build --target ${{ matrix.target }} --release --timings ${{ matrix.build_args }}
- name: Upload Cargo timings
uses: actions/upload-artifact@v7
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-release-windows-${{ matrix.target }}-${{ matrix.bundle }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -112,7 +112,7 @@ jobs:
fi
- name: Upload Windows binaries
uses: actions/upload-artifact@v7
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: windows-binaries-${{ matrix.target }}-${{ matrix.bundle }}
path: |
@@ -147,16 +147,16 @@ jobs:
labels: codex-windows-arm64
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Download prebuilt Windows primary binaries
uses: actions/download-artifact@v8
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: windows-binaries-${{ matrix.target }}-primary
path: codex-rs/target/${{ matrix.target }}/release
- name: Download prebuilt Windows helper binaries
uses: actions/download-artifact@v8
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: windows-binaries-${{ matrix.target }}-helpers
path: codex-rs/target/${{ matrix.target }}/release
@@ -193,7 +193,7 @@ jobs:
cp target/${{ matrix.target }}/release/codex-command-runner.exe "$dest/codex-command-runner-${{ matrix.target }}.exe"
- name: Install DotSlash
uses: facebook/install-dotslash@v2
uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
- name: Compress artifacts
shell: bash
@@ -257,7 +257,7 @@ jobs:
"${GITHUB_WORKSPACE}/.github/workflows/zstd" -T0 -19 "$dest/$base"
done
- uses: actions/upload-artifact@v7
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: ${{ matrix.target }}
path: |

View File

@@ -45,7 +45,7 @@ jobs:
git \
libncursesw5-dev
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Build, smoke-test, and stage zsh artifact
shell: bash
@@ -53,7 +53,7 @@ jobs:
"${GITHUB_WORKSPACE}/.github/scripts/build-zsh-release-artifact.sh" \
"dist/zsh/${{ matrix.target }}/${{ matrix.archive_name }}"
- uses: actions/upload-artifact@v7
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-zsh-${{ matrix.target }}
path: dist/zsh/${{ matrix.target }}/*
@@ -81,7 +81,7 @@ jobs:
brew install autoconf
fi
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Build, smoke-test, and stage zsh artifact
shell: bash
@@ -89,7 +89,7 @@ jobs:
"${GITHUB_WORKSPACE}/.github/scripts/build-zsh-release-artifact.sh" \
"dist/zsh/${{ matrix.target }}/${{ matrix.archive_name }}"
- uses: actions/upload-artifact@v7
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-zsh-${{ matrix.target }}
path: dist/zsh/${{ matrix.target }}/*

View File

@@ -19,8 +19,8 @@ jobs:
tag-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@c2b55edffaf41a251c410bb32bed22afefa800f1 # 1.92
- name: Validate tag matches Cargo.toml version
shell: bash
run: |
@@ -79,7 +79,7 @@ jobs:
target: aarch64-unknown-linux-gnu
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Print runner specs (Linux)
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -125,7 +125,7 @@ jobs:
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libubsan1
fi
- uses: dtolnay/rust-toolchain@1.93.0
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
targets: ${{ matrix.target }}
@@ -235,7 +235,7 @@ jobs:
cargo build --target ${{ matrix.target }} --release --timings --bin codex --bin codex-responses-api-proxy
- name: Upload Cargo timings
uses: actions/upload-artifact@v7
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-release-${{ matrix.target }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -374,7 +374,7 @@ jobs:
zstd -T0 -19 --rm "$dest/$base"
done
- uses: actions/upload-artifact@v7
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: ${{ matrix.target }}
# Upload the per-binary .zst files as well as the new .tar.gz
@@ -420,7 +420,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Generate release notes from tag commit message
id: release_notes
@@ -442,7 +442,7 @@ jobs:
echo "path=${notes_path}" >> "${GITHUB_OUTPUT}"
- uses: actions/download-artifact@v8
- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
path: dist
@@ -492,12 +492,12 @@ jobs:
fi
- name: Setup pnpm
uses: pnpm/action-setup@v5
uses: pnpm/action-setup@a8198c4bff370c8506180b035930dea56dbd5288 # v5
with:
run_install: false
- name: Setup Node.js for npm packaging
uses: actions/setup-node@v6
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version: 22
@@ -505,7 +505,7 @@ jobs:
run: pnpm install --frozen-lockfile
# stage_npm_packages.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
- name: Stage npm packages
env:
GH_TOKEN: ${{ github.token }}
@@ -523,7 +523,7 @@ jobs:
cp scripts/install/install.ps1 dist/install.ps1
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
name: ${{ steps.release_name.outputs.name }}
tag_name: ${{ github.ref_name }}
@@ -533,21 +533,21 @@ jobs:
# (e.g. -alpha, -beta). Otherwise publish a normal release.
prerelease: ${{ contains(steps.release_name.outputs.name, '-') }}
- uses: facebook/dotslash-publish-release@v2
- uses: facebook/dotslash-publish-release@9c9ec027515c34db9282a09a25a9cab5880b2c52 # v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
- uses: facebook/dotslash-publish-release@v2
- uses: facebook/dotslash-publish-release@9c9ec027515c34db9282a09a25a9cab5880b2c52 # v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag: ${{ github.ref_name }}
config: .github/dotslash-zsh-config.json
- uses: facebook/dotslash-publish-release@v2
- uses: facebook/dotslash-publish-release@9c9ec027515c34db9282a09a25a9cab5880b2c52 # v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -582,7 +582,7 @@ jobs:
steps:
- name: Setup Node.js
uses: actions/setup-node@v6
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version: 22
registry-url: "https://registry.npmjs.org"

View File

@@ -25,10 +25,10 @@ jobs:
v8_version: ${{ steps.v8_version.outputs.version }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Python
uses: actions/setup-python@v6
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -75,13 +75,13 @@ jobs:
target: aarch64-unknown-linux-musl
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel
uses: bazelbuild/setup-bazelisk@v3
uses: bazelbuild/setup-bazelisk@6ecf4fd8b7d1f9721785f1dd656a689acf9add47 # v3
- name: Set up Python
uses: actions/setup-python@v6
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -135,7 +135,7 @@ jobs:
--output-dir "dist/${TARGET}"
- name: Upload staged musl artifacts
uses: actions/upload-artifact@v7
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: rusty-v8-${{ needs.metadata.outputs.v8_version }}-${{ matrix.target }}
path: dist/${{ matrix.target }}/*
@@ -174,12 +174,12 @@ jobs:
exit 1
fi
- uses: actions/download-artifact@v8
- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
path: dist
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
tag_name: ${{ needs.metadata.outputs.release_tag }}
name: ${{ needs.metadata.outputs.release_tag }}

View File

@@ -13,7 +13,7 @@ jobs:
timeout-minutes: 10
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Linux bwrap build dependencies
shell: bash
@@ -23,21 +23,82 @@ jobs:
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
- name: Setup pnpm
uses: pnpm/action-setup@v5
uses: pnpm/action-setup@a8198c4bff370c8506180b035930dea56dbd5288 # v5
with:
run_install: false
- name: Setup Node.js
uses: actions/setup-node@v6
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version: 22
cache: pnpm
- uses: dtolnay/rust-toolchain@1.93.0
- name: Set up Bazel CI
id: setup_bazel
uses: ./.github/actions/setup-bazel-ci
with:
target: x86_64-unknown-linux-gnu
- name: build codex
run: cargo build --bin codex
working-directory: codex-rs
- name: Build codex with Bazel
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
set -euo pipefail
# Use the shared CI wrapper so fork PRs fall back cleanly when
# BuildBuddy credentials are unavailable. This workflow needs the
# built `codex` binary on disk afterwards, so ask the wrapper to
# override CI's default remote_download_minimal behavior.
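# Everything before the first "--" appears to be a wrapper flag; the remainder is passed through as the Bazel command line.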
./.github/scripts/run-bazel-ci.sh \
--remote-download-toplevel \
-- \
build \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
--build_metadata=TAG_job=sdk \
-- \
//codex-rs/cli:codex
# Resolve the exact output file using the same wrapper/config path as
# the build instead of guessing which Bazel convenience symlink is
# available on the runner.
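# cquery may print informational lines too; keep only path-like output and take the last entry, which is the built binary.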
cquery_output="$(
./.github/scripts/run-bazel-ci.sh \
-- \
cquery \
--output=files \
-- \
//codex-rs/cli:codex \
| grep -E '^(/|bazel-out/)' \
| tail -n 1
)"
if [[ "${cquery_output}" = /* ]]; then
codex_bazel_output_path="${cquery_output}"
else
codex_bazel_output_path="${GITHUB_WORKSPACE}/${cquery_output}"
fi
if [[ -z "${codex_bazel_output_path}" ]]; then
echo "Bazel did not report an output path for //codex-rs/cli:codex." >&2
exit 1
fi
if [[ ! -e "${codex_bazel_output_path}" ]]; then
echo "Unable to locate the Bazel-built codex binary at ${codex_bazel_output_path}." >&2
exit 1
fi
# Stage the binary into the workspace and point the SDK tests at that
# stable path. The tests spawn `codex` directly many times, so using a
# normal executable path is more reliable than invoking Bazel for each
# test process.
install_dir="${GITHUB_WORKSPACE}/.tmp/sdk-ci"
mkdir -p "${install_dir}"
install -m 755 "${codex_bazel_output_path}" "${install_dir}/codex"
echo "CODEX_EXEC_PATH=${install_dir}/codex" >> "$GITHUB_ENV"
- name: Warm up Bazel-built codex
shell: bash
run: |
set -euo pipefail
"${CODEX_EXEC_PATH}" --version
- name: Install dependencies
run: pnpm install --frozen-lockfile
@@ -50,3 +111,12 @@ jobs:
- name: Test SDK packages
run: pnpm -r --filter ./sdk/typescript run test
- name: Save bazel repository cache
if: always() && !cancelled() && steps.setup_bazel.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cache/bazel-repo-cache
key: bazel-cache-x86_64-unknown-linux-gnu-${{ hashFiles('MODULE.bazel', 'codex-rs/Cargo.lock', 'codex-rs/Cargo.toml') }}

View File

@@ -38,10 +38,10 @@ jobs:
v8_version: ${{ steps.v8_version.outputs.version }}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Python
uses: actions/setup-python@v6
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -72,13 +72,13 @@ jobs:
target: aarch64-unknown-linux-musl
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel
uses: bazelbuild/setup-bazelisk@v3
uses: bazelbuild/setup-bazelisk@6ecf4fd8b7d1f9721785f1dd656a689acf9add47 # v3
- name: Set up Python
uses: actions/setup-python@v6
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -126,7 +126,7 @@ jobs:
--output-dir "dist/${TARGET}"
- name: Upload staged musl artifacts
uses: actions/upload-artifact@v7
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: v8-canary-${{ needs.metadata.outputs.v8_version }}-${{ matrix.target }}
path: dist/${{ matrix.target }}/*

View File

@@ -17,6 +17,7 @@ In the codex-rs folder where the rust code lives:
- Do not add these comments for string or char literals unless the comment adds real clarity; those literals are intentionally exempt from the lint.
- If you add one of these comments, the parameter name must exactly match the callee signature.
- When possible, make `match` statements exhaustive and avoid wildcard arms.
- Newly added traits should include doc comments that explain their role and how implementations are expected to use them.
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
@@ -70,8 +71,6 @@ See `codex-rs/tui/styles.md`.
## TUI code conventions
- When a change lands in `codex-rs/tui` and `codex-rs/tui_app_server` has a parallel implementation of the same behavior, reflect the change in `codex-rs/tui_app_server` too unless there is a documented reason not to.
- Use concise styling helpers from ratatui's Stylize trait.
- Basic spans: use "text".into()
- Styled spans: use "text".red(), "text".green(), "text".magenta(), "text".dim(), etc.

View File

@@ -17,12 +17,19 @@ platform(
platform(
name = "local_windows",
constraint_values = [
# We just need to pick one of the ABIs; use the same one we target.
"@rules_rs//rs/experimental/platforms/constraints:windows_gnullvm",
],
parents = ["@platforms//host"],
)
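# MSVC flavor of the host platform, for helper binaries that need to match the MSVC ABI (see the exec-toolchain notes in MODULE.bazel).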
platform(
name = "local_windows_msvc",
constraint_values = [
"@rules_rs//rs/experimental/platforms/constraints:windows_msvc",
],
parents = ["@platforms//host"],
)
alias(
name = "rbe",
actual = "@rbe_platform",

View File

@@ -3,16 +3,35 @@ module(name = "codex")
bazel_dep(name = "bazel_skylib", version = "1.8.2")
bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "llvm", version = "0.6.8")
# The upstream LLVM archive contains a few unix-only symlink entries and is
# missing a couple of MinGW compatibility archives that windows-gnullvm needs
# during extraction and linking, so patch it until upstream grows native support.
single_version_override(
module_name = "llvm",
patch_strip = 1,
patches = [
"//patches:llvm_windows_symlink_extract.patch",
],
)
# Abseil picks a MinGW pthread TLS path that does not match our hermetic
# windows-gnullvm toolchain; force it onto the portable C++11 thread-local path.
single_version_override(
module_name = "abseil-cpp",
patch_strip = 1,
patches = [
"//patches:abseil_windows_gnullvm_thread_identity.patch",
],
)
register_toolchains("@llvm//toolchain:all")
osx = use_extension("@llvm//extensions:osx.bzl", "osx")
osx.from_archive(
sha256 = "6a4922f89487a96d7054ec6ca5065bfddd9f1d017c74d82f1d79cecf7feb8228",
strip_prefix = "Payload/Library/Developer/CommandLineTools/SDKs/MacOSX26.2.sdk",
sha256 = "1bde70c0b1c2ab89ff454acbebf6741390d7b7eb149ca2a3ca24cc9203a408b7",
strip_prefix = "Payload/Library/Developer/CommandLineTools/SDKs/MacOSX26.4.sdk",
type = "pkg",
urls = [
"https://swcdn.apple.com/content/downloads/26/44/047-81934-A_28TPKM5SD1/ps6pk6dk4x02vgfa5qsctq6tgf23t5f0w2/CLTools_macOSNMOS_SDK.pkg",
"https://swcdn.apple.com/content/downloads/32/53/047-96692-A_OAHIHT53YB/ybtshxmrcju8m2qvw3w5elr4rajtg1x3y3/CLTools_macOSNMOS_SDK.pkg",
],
)
osx.frameworks(names = [
@@ -44,10 +63,77 @@ bazel_dep(name = "apple_support", version = "2.1.0")
bazel_dep(name = "rules_cc", version = "0.2.16")
bazel_dep(name = "rules_platform", version = "0.1.0")
bazel_dep(name = "rules_rs", version = "0.0.43")
# `rules_rs` 0.0.43 does not model `windows-gnullvm` as a distinct Windows exec
# platform, so patch it until upstream grows that support for both x86_64 and
# aarch64.
single_version_override(
module_name = "rules_rs",
patch_strip = 1,
patches = [
"//patches:rules_rs_windows_gnullvm_exec.patch",
],
version = "0.0.43",
)
rules_rust = use_extension("@rules_rs//rs/experimental:rules_rust.bzl", "rules_rust")
# Build-script probe binaries inherit CFLAGS/CXXFLAGS from Bazel's C++
# toolchain. On `windows-gnullvm`, llvm-mingw does not ship
# `libssp_nonshared`, so strip the forwarded stack-protector flags there.
rules_rust.patch(
patches = [
"//patches:rules_rust_windows_gnullvm_build_script.patch",
"//patches:rules_rust_windows_exec_msvc_build_script_env.patch",
"//patches:rules_rust_windows_bootstrap_process_wrapper_linker.patch",
"//patches:rules_rust_windows_msvc_direct_link_args.patch",
"//patches:rules_rust_windows_exec_bin_target.patch",
"//patches:rules_rust_windows_exec_std.patch",
"//patches:rules_rust_windows_exec_rustc_dev_rlib.patch",
"//patches:rules_rust_repository_set_exec_constraints.patch",
],
strip = 1,
)
use_repo(rules_rust, "rules_rust")
nightly_rust = use_extension(
"@rules_rs//rs/experimental:rules_rust_reexported_extensions.bzl",
"rust",
)
nightly_rust.toolchain(
versions = ["nightly/2025-09-18"],
dev_components = True,
edition = "2024",
)
# Keep Windows exec tools on MSVC so Bazel helper binaries link correctly, but
# lint crate targets as `windows-gnullvm` to preserve the repo's actual cfgs.
nightly_rust.repository_set(
name = "rust_windows_x86_64",
dev_components = True,
edition = "2024",
exec_triple = "x86_64-pc-windows-msvc",
exec_compatible_with = [
"@platforms//cpu:x86_64",
"@platforms//os:windows",
"@rules_rs//rs/experimental/platforms/constraints:windows_msvc",
],
target_compatible_with = [
"@platforms//cpu:x86_64",
"@platforms//os:windows",
"@rules_rs//rs/experimental/platforms/constraints:windows_msvc",
],
target_triple = "x86_64-pc-windows-msvc",
versions = ["nightly/2025-09-18"],
)
nightly_rust.repository_set(
name = "rust_windows_x86_64",
target_compatible_with = [
"@platforms//cpu:x86_64",
"@platforms//os:windows",
"@rules_rs//rs/experimental/platforms/constraints:windows_gnullvm",
],
target_triple = "x86_64-pc-windows-gnullvm",
)
use_repo(nightly_rust, "rust_toolchains")
toolchains = use_extension("@rules_rs//rs/experimental/toolchains:module_extension.bzl", "toolchains")
toolchains.toolchain(
edition = "2024",
@@ -56,6 +142,7 @@ toolchains.toolchain(
use_repo(toolchains, "default_rust_toolchains")
register_toolchains("@default_rust_toolchains//:all")
register_toolchains("@rust_toolchains//:all")
crate = use_extension("@rules_rs//rs:extensions.bzl", "crate")
crate.from_cargo(
@@ -65,10 +152,33 @@ crate.from_cargo(
"aarch64-unknown-linux-gnu",
"aarch64-unknown-linux-musl",
"aarch64-apple-darwin",
# Keep both Windows ABIs in the generated Cargo metadata: the V8
# experiment still consumes release assets that only exist under the
# MSVC names while targeting the GNU toolchain.
"aarch64-pc-windows-msvc",
"aarch64-pc-windows-gnullvm",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"x86_64-apple-darwin",
"x86_64-pc-windows-msvc",
"x86_64-pc-windows-gnullvm",
],
use_experimental_platforms = True,
)
crate.from_cargo(
name = "argument_comment_lint_crates",
cargo_lock = "//tools/argument-comment-lint:Cargo.lock",
cargo_toml = "//tools/argument-comment-lint:Cargo.toml",
platform_triples = [
"aarch64-unknown-linux-gnu",
"aarch64-unknown-linux-musl",
"aarch64-apple-darwin",
"aarch64-pc-windows-msvc",
"aarch64-pc-windows-gnullvm",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"x86_64-apple-darwin",
"x86_64-pc-windows-msvc",
"x86_64-pc-windows-gnullvm",
],
use_experimental_platforms = True,
@@ -89,10 +199,19 @@ crate.annotation(
patch_args = ["-p1"],
patches = [
"//patches:aws-lc-sys_memcmp_check.patch",
"//patches:aws-lc-sys_windows_msvc_prebuilt_nasm.patch",
"//patches:aws-lc-sys_windows_msvc_memcmp_probe.patch",
],
)
crate.annotation(
# The build script only validates embedded source/version metadata.
crate = "rustc_apfloat",
gen_build_script = "off",
)
inject_repo(crate, "zstd")
use_repo(crate, "argument_comment_lint_crates")
bazel_dep(name = "bzip2", version = "1.0.8.bcr.3")

MODULE.bazel.lock generated

File diff suppressed because one or more lines are too long

@@ -2,3 +2,17 @@ exports_files([
"clippy.toml",
"node-version.txt",
])
filegroup(
name = "workspace-files",
srcs = glob(
[
"*",
".cargo/**",
],
exclude = [
"BUILD.bazel",
],
),
visibility = ["//visibility:public"],
)

codex-rs/Cargo.lock generated

@@ -1337,10 +1337,12 @@ checksum = "e9b18233253483ce2f65329a24072ec414db782531bdbb7d0bbc4bd2ce6b7e21"
name = "codex-analytics"
version = "0.0.0"
dependencies = [
"codex-app-server-protocol",
"codex-git-utils",
"codex-login",
"codex-plugin",
"codex-protocol",
"os_info",
"pretty_assertions",
"serde",
"serde_json",
@@ -1636,7 +1638,6 @@ dependencies = [
"codex-stdio-to-uds",
"codex-terminal-detection",
"codex-tui",
"codex-tui-app-server",
"codex-utils-cargo-bin",
"codex-utils-cli",
"codex-windows-sandbox",
@@ -1977,6 +1978,7 @@ dependencies = [
"codex-app-server-protocol",
"codex-apply-patch",
"codex-arg0",
"codex-backend-client",
"codex-cloud-requirements",
"codex-core",
"codex-feedback",
@@ -2614,6 +2616,8 @@ dependencies = [
name = "codex-tools"
version = "0.0.0"
dependencies = [
"codex-app-server-protocol",
"codex-code-mode",
"codex-protocol",
"pretty_assertions",
"rmcp",
@@ -2635,105 +2639,8 @@ dependencies = [
"codex-app-server-client",
"codex-app-server-protocol",
"codex-arg0",
"codex-backend-client",
"codex-chatgpt",
"codex-cli",
"codex-client",
"codex-cloud-requirements",
"codex-core",
"codex-exec-server",
"codex-features",
"codex-feedback",
"codex-file-search",
"codex-git-utils",
"codex-login",
"codex-otel",
"codex-protocol",
"codex-shell-command",
"codex-state",
"codex-terminal-detection",
"codex-tui-app-server",
"codex-utils-absolute-path",
"codex-utils-approval-presets",
"codex-utils-cargo-bin",
"codex-utils-cli",
"codex-utils-elapsed",
"codex-utils-fuzzy-match",
"codex-utils-oss",
"codex-utils-pty",
"codex-utils-sandbox-summary",
"codex-utils-sleep-inhibitor",
"codex-utils-string",
"codex-windows-sandbox",
"color-eyre",
"cpal",
"crossterm",
"derive_more 2.1.1",
"diffy",
"dirs",
"dunce",
"hound",
"image",
"insta",
"itertools 0.14.0",
"lazy_static",
"libc",
"pathdiff",
"pretty_assertions",
"pulldown-cmark",
"rand 0.9.2",
"ratatui",
"ratatui-macros",
"regex-lite",
"reqwest",
"rmcp",
"serde",
"serde_json",
"serial_test",
"shlex",
"strum 0.27.2",
"strum_macros 0.28.0",
"supports-color 3.0.2",
"syntect",
"tempfile",
"textwrap 0.16.2",
"thiserror 2.0.18",
"tokio",
"tokio-stream",
"tokio-util",
"toml 0.9.11+spec-1.1.0",
"tracing",
"tracing-appender",
"tracing-subscriber",
"two-face",
"unicode-segmentation",
"unicode-width 0.2.1",
"url",
"uuid",
"vt100",
"webbrowser",
"which 8.0.0",
"windows-sys 0.52.0",
"winsplit",
]
[[package]]
name = "codex-tui-app-server"
version = "0.0.0"
dependencies = [
"anyhow",
"arboard",
"assert_matches",
"base64 0.22.1",
"chrono",
"clap",
"codex-ansi-escape",
"codex-app-server-client",
"codex-app-server-protocol",
"codex-arg0",
"codex-chatgpt",
"codex-cli",
"codex-client",
"codex-cloud-requirements",
"codex-core",
"codex-features",
@@ -2765,7 +2672,6 @@ dependencies = [
"diffy",
"dirs",
"dunce",
"hound",
"image",
"insta",
"itertools 0.14.0",
@@ -4915,12 +4821,6 @@ dependencies = [
"windows-link",
]
[[package]]
name = "hound"
version = "3.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62adaabb884c94955b19907d60019f4e145d091c75345379e70d1ee696f7854f"
[[package]]
name = "http"
version = "0.2.12"
@@ -8187,7 +8087,6 @@ dependencies = [
"js-sys",
"log",
"mime",
"mime_guess",
"native-tls",
"percent-encoding",
"pin-project-lite",


@@ -50,7 +50,6 @@ members = [
"stdio-to-uds",
"otel",
"tui",
"tui_app_server",
"tools",
"v8-poc",
"utils/absolute-path",
@@ -151,7 +150,6 @@ codex-stdio-to-uds = { path = "stdio-to-uds" }
codex-terminal-detection = { path = "terminal-detection" }
codex-tools = { path = "tools" }
codex-tui = { path = "tui" }
codex-tui-app-server = { path = "tui_app_server" }
codex-v8-poc = { path = "v8-poc" }
codex-utils-absolute-path = { path = "utils/absolute-path" }
codex-utils-approval-presets = { path = "utils/approval-presets" }


@@ -50,7 +50,7 @@ You can enable notifications by configuring a script that is run whenever the ag
### `codex exec` to run Codex programmatically/non-interactively
To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the prompt via `stdin`) and Codex will work on your task until it decides that it is done and exits. Output is printed to the terminal directly. You can set the `RUST_LOG` environment variable to see more about what's going on.
To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the prompt via `stdin`) and Codex will work on your task until it decides that it is done and exits. If you provide both a prompt argument and piped stdin, Codex appends stdin as a `<stdin>` block after the prompt so patterns like `echo "my output" | codex exec "Summarize this concisely"` work naturally. Output is printed to the terminal directly. You can set the `RUST_LOG` environment variable to see more about what's going on.
Use `codex exec --ephemeral ...` to run without persisting session rollout files to disk.
### Experimenting with the Codex Sandbox


@@ -13,10 +13,12 @@ path = "src/lib.rs"
workspace = true
[dependencies]
codex-app-server-protocol = { workspace = true }
codex-git-utils = { workspace = true }
codex-login = { workspace = true }
codex-plugin = { workspace = true }
codex-protocol = { workspace = true }
os_info = { workspace = true }
serde = { workspace = true, features = ["derive"] }
sha1 = { workspace = true }
tokio = { workspace = true, features = [


@@ -1,797 +0,0 @@
use codex_git_utils::collect_git_info;
use codex_git_utils::get_git_repo_root;
use codex_login::AuthManager;
use codex_login::default_client::create_client;
use codex_login::default_client::originator;
use codex_plugin::PluginTelemetryMetadata;
use codex_protocol::protocol::SkillScope;
use serde::Serialize;
use sha1::Digest;
use sha1::Sha1;
use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use tokio::sync::mpsc;
#[derive(Clone)]
pub struct TrackEventsContext {
pub model_slug: String,
pub thread_id: String,
pub turn_id: String,
}
pub fn build_track_events_context(
model_slug: String,
thread_id: String,
turn_id: String,
) -> TrackEventsContext {
TrackEventsContext {
model_slug,
thread_id,
turn_id,
}
}
#[derive(Clone, Debug)]
pub struct SkillInvocation {
pub skill_name: String,
pub skill_scope: SkillScope,
pub skill_path: PathBuf,
pub invocation_type: InvocationType,
}
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "lowercase")]
pub enum InvocationType {
Explicit,
Implicit,
}
pub struct AppInvocation {
pub connector_id: Option<String>,
pub app_name: Option<String>,
pub invocation_type: Option<InvocationType>,
}
#[derive(Clone)]
pub(crate) struct AnalyticsEventsQueue {
sender: mpsc::Sender<TrackEventsJob>,
app_used_emitted_keys: Arc<Mutex<HashSet<(String, String)>>>,
plugin_used_emitted_keys: Arc<Mutex<HashSet<(String, String)>>>,
}
#[derive(Clone)]
pub struct AnalyticsEventsClient {
queue: AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
}
impl AnalyticsEventsQueue {
pub(crate) fn new(auth_manager: Arc<AuthManager>, base_url: String) -> Self {
let (sender, mut receiver) = mpsc::channel(ANALYTICS_EVENTS_QUEUE_SIZE);
tokio::spawn(async move {
while let Some(job) = receiver.recv().await {
match job {
TrackEventsJob::SkillInvocations(job) => {
send_track_skill_invocations(&auth_manager, &base_url, job).await;
}
TrackEventsJob::AppMentioned(job) => {
send_track_app_mentioned(&auth_manager, &base_url, job).await;
}
TrackEventsJob::AppUsed(job) => {
send_track_app_used(&auth_manager, &base_url, job).await;
}
TrackEventsJob::PluginUsed(job) => {
send_track_plugin_used(&auth_manager, &base_url, job).await;
}
TrackEventsJob::PluginInstalled(job) => {
send_track_plugin_installed(&auth_manager, &base_url, job).await;
}
TrackEventsJob::PluginUninstalled(job) => {
send_track_plugin_uninstalled(&auth_manager, &base_url, job).await;
}
TrackEventsJob::PluginEnabled(job) => {
send_track_plugin_enabled(&auth_manager, &base_url, job).await;
}
TrackEventsJob::PluginDisabled(job) => {
send_track_plugin_disabled(&auth_manager, &base_url, job).await;
}
}
}
});
Self {
sender,
app_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
plugin_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
}
}
fn try_send(&self, job: TrackEventsJob) {
if self.sender.try_send(job).is_err() {
//TODO: add a metric for this
tracing::warn!("dropping analytics events: queue is full");
}
}
fn should_enqueue_app_used(&self, tracking: &TrackEventsContext, app: &AppInvocation) -> bool {
let Some(connector_id) = app.connector_id.as_ref() else {
return true;
};
let mut emitted = self
.app_used_emitted_keys
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
if emitted.len() >= ANALYTICS_EVENT_DEDUPE_MAX_KEYS {
emitted.clear();
}
emitted.insert((tracking.turn_id.clone(), connector_id.clone()))
}
fn should_enqueue_plugin_used(
&self,
tracking: &TrackEventsContext,
plugin: &PluginTelemetryMetadata,
) -> bool {
let mut emitted = self
.plugin_used_emitted_keys
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
if emitted.len() >= ANALYTICS_EVENT_DEDUPE_MAX_KEYS {
emitted.clear();
}
emitted.insert((tracking.turn_id.clone(), plugin.plugin_id.as_key()))
}
}
impl AnalyticsEventsClient {
pub fn new(
auth_manager: Arc<AuthManager>,
base_url: String,
analytics_enabled: Option<bool>,
) -> Self {
Self {
queue: AnalyticsEventsQueue::new(Arc::clone(&auth_manager), base_url),
analytics_enabled,
}
}
pub fn track_skill_invocations(
&self,
tracking: TrackEventsContext,
invocations: Vec<SkillInvocation>,
) {
track_skill_invocations(
&self.queue,
self.analytics_enabled,
Some(tracking),
invocations,
);
}
pub fn track_app_mentioned(&self, tracking: TrackEventsContext, mentions: Vec<AppInvocation>) {
track_app_mentioned(
&self.queue,
self.analytics_enabled,
Some(tracking),
mentions,
);
}
pub fn track_app_used(&self, tracking: TrackEventsContext, app: AppInvocation) {
track_app_used(&self.queue, self.analytics_enabled, Some(tracking), app);
}
pub fn track_plugin_used(&self, tracking: TrackEventsContext, plugin: PluginTelemetryMetadata) {
track_plugin_used(&self.queue, self.analytics_enabled, Some(tracking), plugin);
}
pub fn track_plugin_installed(&self, plugin: PluginTelemetryMetadata) {
track_plugin_management(
&self.queue,
self.analytics_enabled,
PluginManagementEventType::Installed,
plugin,
);
}
pub fn track_plugin_uninstalled(&self, plugin: PluginTelemetryMetadata) {
track_plugin_management(
&self.queue,
self.analytics_enabled,
PluginManagementEventType::Uninstalled,
plugin,
);
}
pub fn track_plugin_enabled(&self, plugin: PluginTelemetryMetadata) {
track_plugin_management(
&self.queue,
self.analytics_enabled,
PluginManagementEventType::Enabled,
plugin,
);
}
pub fn track_plugin_disabled(&self, plugin: PluginTelemetryMetadata) {
track_plugin_management(
&self.queue,
self.analytics_enabled,
PluginManagementEventType::Disabled,
plugin,
);
}
}
enum TrackEventsJob {
SkillInvocations(TrackSkillInvocationsJob),
AppMentioned(TrackAppMentionedJob),
AppUsed(TrackAppUsedJob),
PluginUsed(TrackPluginUsedJob),
PluginInstalled(TrackPluginManagementJob),
PluginUninstalled(TrackPluginManagementJob),
PluginEnabled(TrackPluginManagementJob),
PluginDisabled(TrackPluginManagementJob),
}
struct TrackSkillInvocationsJob {
analytics_enabled: Option<bool>,
tracking: TrackEventsContext,
invocations: Vec<SkillInvocation>,
}
struct TrackAppMentionedJob {
analytics_enabled: Option<bool>,
tracking: TrackEventsContext,
mentions: Vec<AppInvocation>,
}
struct TrackAppUsedJob {
analytics_enabled: Option<bool>,
tracking: TrackEventsContext,
app: AppInvocation,
}
struct TrackPluginUsedJob {
analytics_enabled: Option<bool>,
tracking: TrackEventsContext,
plugin: PluginTelemetryMetadata,
}
struct TrackPluginManagementJob {
analytics_enabled: Option<bool>,
plugin: PluginTelemetryMetadata,
}
#[derive(Clone, Copy)]
enum PluginManagementEventType {
Installed,
Uninstalled,
Enabled,
Disabled,
}
const ANALYTICS_EVENTS_QUEUE_SIZE: usize = 256;
const ANALYTICS_EVENTS_TIMEOUT: Duration = Duration::from_secs(10);
const ANALYTICS_EVENT_DEDUPE_MAX_KEYS: usize = 4096;
#[derive(Serialize)]
struct TrackEventsRequest {
events: Vec<TrackEventRequest>,
}
#[derive(Serialize)]
#[serde(untagged)]
enum TrackEventRequest {
SkillInvocation(SkillInvocationEventRequest),
AppMentioned(CodexAppMentionedEventRequest),
AppUsed(CodexAppUsedEventRequest),
PluginUsed(CodexPluginUsedEventRequest),
PluginInstalled(CodexPluginEventRequest),
PluginUninstalled(CodexPluginEventRequest),
PluginEnabled(CodexPluginEventRequest),
PluginDisabled(CodexPluginEventRequest),
}
#[derive(Serialize)]
struct SkillInvocationEventRequest {
event_type: &'static str,
skill_id: String,
skill_name: String,
event_params: SkillInvocationEventParams,
}
#[derive(Serialize)]
struct SkillInvocationEventParams {
product_client_id: Option<String>,
skill_scope: Option<String>,
repo_url: Option<String>,
thread_id: Option<String>,
invoke_type: Option<InvocationType>,
model_slug: Option<String>,
}
#[derive(Serialize)]
struct CodexAppMetadata {
connector_id: Option<String>,
thread_id: Option<String>,
turn_id: Option<String>,
app_name: Option<String>,
product_client_id: Option<String>,
invoke_type: Option<InvocationType>,
model_slug: Option<String>,
}
#[derive(Serialize)]
struct CodexAppMentionedEventRequest {
event_type: &'static str,
event_params: CodexAppMetadata,
}
#[derive(Serialize)]
struct CodexAppUsedEventRequest {
event_type: &'static str,
event_params: CodexAppMetadata,
}
#[derive(Serialize)]
struct CodexPluginMetadata {
plugin_id: Option<String>,
plugin_name: Option<String>,
marketplace_name: Option<String>,
has_skills: Option<bool>,
mcp_server_count: Option<usize>,
connector_ids: Option<Vec<String>>,
product_client_id: Option<String>,
}
#[derive(Serialize)]
struct CodexPluginUsedMetadata {
#[serde(flatten)]
plugin: CodexPluginMetadata,
thread_id: Option<String>,
turn_id: Option<String>,
model_slug: Option<String>,
}
#[derive(Serialize)]
struct CodexPluginEventRequest {
event_type: &'static str,
event_params: CodexPluginMetadata,
}
#[derive(Serialize)]
struct CodexPluginUsedEventRequest {
event_type: &'static str,
event_params: CodexPluginUsedMetadata,
}
pub(crate) fn track_skill_invocations(
queue: &AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
tracking: Option<TrackEventsContext>,
invocations: Vec<SkillInvocation>,
) {
if analytics_enabled == Some(false) {
return;
}
let Some(tracking) = tracking else {
return;
};
if invocations.is_empty() {
return;
}
let job = TrackEventsJob::SkillInvocations(TrackSkillInvocationsJob {
analytics_enabled,
tracking,
invocations,
});
queue.try_send(job);
}
pub(crate) fn track_app_mentioned(
queue: &AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
tracking: Option<TrackEventsContext>,
mentions: Vec<AppInvocation>,
) {
if analytics_enabled == Some(false) {
return;
}
let Some(tracking) = tracking else {
return;
};
if mentions.is_empty() {
return;
}
let job = TrackEventsJob::AppMentioned(TrackAppMentionedJob {
analytics_enabled,
tracking,
mentions,
});
queue.try_send(job);
}
pub(crate) fn track_app_used(
queue: &AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
tracking: Option<TrackEventsContext>,
app: AppInvocation,
) {
if analytics_enabled == Some(false) {
return;
}
let Some(tracking) = tracking else {
return;
};
if !queue.should_enqueue_app_used(&tracking, &app) {
return;
}
let job = TrackEventsJob::AppUsed(TrackAppUsedJob {
analytics_enabled,
tracking,
app,
});
queue.try_send(job);
}
pub(crate) fn track_plugin_used(
queue: &AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
tracking: Option<TrackEventsContext>,
plugin: PluginTelemetryMetadata,
) {
if analytics_enabled == Some(false) {
return;
}
let Some(tracking) = tracking else {
return;
};
if !queue.should_enqueue_plugin_used(&tracking, &plugin) {
return;
}
let job = TrackEventsJob::PluginUsed(TrackPluginUsedJob {
analytics_enabled,
tracking,
plugin,
});
queue.try_send(job);
}
fn track_plugin_management(
queue: &AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
event_type: PluginManagementEventType,
plugin: PluginTelemetryMetadata,
) {
if analytics_enabled == Some(false) {
return;
}
let job = TrackPluginManagementJob {
analytics_enabled,
plugin,
};
let job = match event_type {
PluginManagementEventType::Installed => TrackEventsJob::PluginInstalled(job),
PluginManagementEventType::Uninstalled => TrackEventsJob::PluginUninstalled(job),
PluginManagementEventType::Enabled => TrackEventsJob::PluginEnabled(job),
PluginManagementEventType::Disabled => TrackEventsJob::PluginDisabled(job),
};
queue.try_send(job);
}
async fn send_track_skill_invocations(
auth_manager: &AuthManager,
base_url: &str,
job: TrackSkillInvocationsJob,
) {
let TrackSkillInvocationsJob {
analytics_enabled,
tracking,
invocations,
} = job;
let mut events = Vec::with_capacity(invocations.len());
for invocation in invocations {
let skill_scope = match invocation.skill_scope {
SkillScope::User => "user",
SkillScope::Repo => "repo",
SkillScope::System => "system",
SkillScope::Admin => "admin",
};
let repo_root = get_git_repo_root(invocation.skill_path.as_path());
let repo_url = if let Some(root) = repo_root.as_ref() {
collect_git_info(root)
.await
.and_then(|info| info.repository_url)
} else {
None
};
let skill_id = skill_id_for_local_skill(
repo_url.as_deref(),
repo_root.as_deref(),
invocation.skill_path.as_path(),
invocation.skill_name.as_str(),
);
events.push(TrackEventRequest::SkillInvocation(
SkillInvocationEventRequest {
event_type: "skill_invocation",
skill_id,
skill_name: invocation.skill_name.clone(),
event_params: SkillInvocationEventParams {
thread_id: Some(tracking.thread_id.clone()),
invoke_type: Some(invocation.invocation_type),
model_slug: Some(tracking.model_slug.clone()),
product_client_id: Some(originator().value),
repo_url,
skill_scope: Some(skill_scope.to_string()),
},
},
));
}
send_track_events(auth_manager, analytics_enabled, base_url, events).await;
}
async fn send_track_app_mentioned(
auth_manager: &AuthManager,
base_url: &str,
job: TrackAppMentionedJob,
) {
let TrackAppMentionedJob {
analytics_enabled,
tracking,
mentions,
} = job;
let events = mentions
.into_iter()
.map(|mention| {
let event_params = codex_app_metadata(&tracking, mention);
TrackEventRequest::AppMentioned(CodexAppMentionedEventRequest {
event_type: "codex_app_mentioned",
event_params,
})
})
.collect::<Vec<_>>();
send_track_events(auth_manager, analytics_enabled, base_url, events).await;
}
async fn send_track_app_used(auth_manager: &AuthManager, base_url: &str, job: TrackAppUsedJob) {
let TrackAppUsedJob {
analytics_enabled,
tracking,
app,
} = job;
let event_params = codex_app_metadata(&tracking, app);
let events = vec![TrackEventRequest::AppUsed(CodexAppUsedEventRequest {
event_type: "codex_app_used",
event_params,
})];
send_track_events(auth_manager, analytics_enabled, base_url, events).await;
}
async fn send_track_plugin_used(
auth_manager: &AuthManager,
base_url: &str,
job: TrackPluginUsedJob,
) {
let TrackPluginUsedJob {
analytics_enabled,
tracking,
plugin,
} = job;
let events = vec![TrackEventRequest::PluginUsed(CodexPluginUsedEventRequest {
event_type: "codex_plugin_used",
event_params: codex_plugin_used_metadata(&tracking, plugin),
})];
send_track_events(auth_manager, analytics_enabled, base_url, events).await;
}
async fn send_track_plugin_installed(
auth_manager: &AuthManager,
base_url: &str,
job: TrackPluginManagementJob,
) {
send_track_plugin_management_event(auth_manager, base_url, job, "codex_plugin_installed").await;
}
async fn send_track_plugin_uninstalled(
auth_manager: &AuthManager,
base_url: &str,
job: TrackPluginManagementJob,
) {
send_track_plugin_management_event(auth_manager, base_url, job, "codex_plugin_uninstalled")
.await;
}
async fn send_track_plugin_enabled(
auth_manager: &AuthManager,
base_url: &str,
job: TrackPluginManagementJob,
) {
send_track_plugin_management_event(auth_manager, base_url, job, "codex_plugin_enabled").await;
}
async fn send_track_plugin_disabled(
auth_manager: &AuthManager,
base_url: &str,
job: TrackPluginManagementJob,
) {
send_track_plugin_management_event(auth_manager, base_url, job, "codex_plugin_disabled").await;
}
async fn send_track_plugin_management_event(
auth_manager: &AuthManager,
base_url: &str,
job: TrackPluginManagementJob,
event_type: &'static str,
) {
let TrackPluginManagementJob {
analytics_enabled,
plugin,
} = job;
let event_params = codex_plugin_metadata(plugin);
let event = CodexPluginEventRequest {
event_type,
event_params,
};
let events = vec![match event_type {
"codex_plugin_installed" => TrackEventRequest::PluginInstalled(event),
"codex_plugin_uninstalled" => TrackEventRequest::PluginUninstalled(event),
"codex_plugin_enabled" => TrackEventRequest::PluginEnabled(event),
"codex_plugin_disabled" => TrackEventRequest::PluginDisabled(event),
_ => unreachable!("unknown plugin management event type"),
}];
send_track_events(auth_manager, analytics_enabled, base_url, events).await;
}
fn codex_app_metadata(tracking: &TrackEventsContext, app: AppInvocation) -> CodexAppMetadata {
CodexAppMetadata {
connector_id: app.connector_id,
thread_id: Some(tracking.thread_id.clone()),
turn_id: Some(tracking.turn_id.clone()),
app_name: app.app_name,
product_client_id: Some(originator().value),
invoke_type: app.invocation_type,
model_slug: Some(tracking.model_slug.clone()),
}
}
fn codex_plugin_metadata(plugin: PluginTelemetryMetadata) -> CodexPluginMetadata {
let capability_summary = plugin.capability_summary;
CodexPluginMetadata {
plugin_id: Some(plugin.plugin_id.as_key()),
plugin_name: Some(plugin.plugin_id.plugin_name),
marketplace_name: Some(plugin.plugin_id.marketplace_name),
has_skills: capability_summary
.as_ref()
.map(|summary| summary.has_skills),
mcp_server_count: capability_summary
.as_ref()
.map(|summary| summary.mcp_server_names.len()),
connector_ids: capability_summary.map(|summary| {
summary
.app_connector_ids
.into_iter()
.map(|connector_id| connector_id.0)
.collect()
}),
product_client_id: Some(originator().value),
}
}
fn codex_plugin_used_metadata(
tracking: &TrackEventsContext,
plugin: PluginTelemetryMetadata,
) -> CodexPluginUsedMetadata {
CodexPluginUsedMetadata {
plugin: codex_plugin_metadata(plugin),
thread_id: Some(tracking.thread_id.clone()),
turn_id: Some(tracking.turn_id.clone()),
model_slug: Some(tracking.model_slug.clone()),
}
}
async fn send_track_events(
auth_manager: &AuthManager,
analytics_enabled: Option<bool>,
base_url: &str,
events: Vec<TrackEventRequest>,
) {
if analytics_enabled == Some(false) {
return;
}
if events.is_empty() {
return;
}
let Some(auth) = auth_manager.auth().await else {
return;
};
if !auth.is_chatgpt_auth() {
return;
}
let access_token = match auth.get_token() {
Ok(token) => token,
Err(_) => return,
};
let Some(account_id) = auth.get_account_id() else {
return;
};
let base_url = base_url.trim_end_matches('/');
let url = format!("{base_url}/codex/analytics-events/events");
let payload = TrackEventsRequest { events };
let response = create_client()
.post(&url)
.timeout(ANALYTICS_EVENTS_TIMEOUT)
.bearer_auth(&access_token)
.header("chatgpt-account-id", &account_id)
.header("Content-Type", "application/json")
.json(&payload)
.send()
.await;
match response {
Ok(response) if response.status().is_success() => {}
Ok(response) => {
let status = response.status();
let body = response.text().await.unwrap_or_default();
tracing::warn!("events failed with status {status}: {body}");
}
Err(err) => {
tracing::warn!("failed to send events request: {err}");
}
}
}
pub(crate) fn skill_id_for_local_skill(
repo_url: Option<&str>,
repo_root: Option<&Path>,
skill_path: &Path,
skill_name: &str,
) -> String {
let path = normalize_path_for_skill_id(repo_url, repo_root, skill_path);
let prefix = if let Some(url) = repo_url {
format!("repo_{url}")
} else {
"personal".to_string()
};
let raw_id = format!("{prefix}_{path}_{skill_name}");
let mut hasher = Sha1::new();
hasher.update(raw_id.as_bytes());
format!("{:x}", hasher.finalize())
}
/// Returns a normalized path for skill ID construction.
///
/// - Repo-scoped skills use a path relative to the repo root.
/// - User/admin/system skills use an absolute path.
fn normalize_path_for_skill_id(
repo_url: Option<&str>,
repo_root: Option<&Path>,
skill_path: &Path,
) -> String {
let resolved_path =
std::fs::canonicalize(skill_path).unwrap_or_else(|_| skill_path.to_path_buf());
match (repo_url, repo_root) {
(Some(_), Some(root)) => {
let resolved_root = std::fs::canonicalize(root).unwrap_or_else(|_| root.to_path_buf());
resolved_path
.strip_prefix(&resolved_root)
.unwrap_or(resolved_path.as_path())
.to_string_lossy()
.replace('\\', "/")
}
_ => resolved_path.to_string_lossy().replace('\\', "/"),
}
}
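// Worked example (illustrative; the URL and paths are hypothetical): for a
// repo-scoped skill at /work/repo/skills/doc/SKILL.md with repo root
// /work/repo and repo URL https://example.com/acme/repo, the hashed input is
//   "repo_https://example.com/acme/repo_skills/doc/SKILL.md_doc"
// and the skill ID is the 40-character lowercase hex SHA-1 digest of that
// string. User/admin/system skills hash "personal_<absolute path>_<name>".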
#[cfg(test)]
#[path = "analytics_client_tests.rs"]
mod tests;


@@ -1,16 +1,47 @@
use super::AnalyticsEventsQueue;
use super::AppInvocation;
use super::CodexAppMentionedEventRequest;
use super::CodexAppUsedEventRequest;
use super::CodexPluginEventRequest;
use super::CodexPluginUsedEventRequest;
use super::InvocationType;
use super::TrackEventRequest;
use super::TrackEventsContext;
use super::codex_app_metadata;
use super::codex_plugin_metadata;
use super::codex_plugin_used_metadata;
use super::normalize_path_for_skill_id;
use crate::client::AnalyticsEventsQueue;
use crate::events::AppServerRpcTransport;
use crate::events::CodexAppMentionedEventRequest;
use crate::events::CodexAppServerClientMetadata;
use crate::events::CodexAppUsedEventRequest;
use crate::events::CodexPluginEventRequest;
use crate::events::CodexPluginUsedEventRequest;
use crate::events::CodexRuntimeMetadata;
use crate::events::ThreadInitializationMode;
use crate::events::ThreadInitializedEvent;
use crate::events::ThreadInitializedEventParams;
use crate::events::TrackEventRequest;
use crate::events::codex_app_metadata;
use crate::events::codex_plugin_metadata;
use crate::events::codex_plugin_used_metadata;
use crate::facts::AnalyticsFact;
use crate::facts::AppInvocation;
use crate::facts::AppMentionedInput;
use crate::facts::AppUsedInput;
use crate::facts::CustomAnalyticsFact;
use crate::facts::InvocationType;
use crate::facts::PluginState;
use crate::facts::PluginStateChangedInput;
use crate::facts::PluginUsedInput;
use crate::facts::SkillInvocation;
use crate::facts::SkillInvokedInput;
use crate::facts::TrackEventsContext;
use crate::reducer::AnalyticsReducer;
use crate::reducer::normalize_path_for_skill_id;
use crate::reducer::skill_id_for_local_skill;
use codex_app_server_protocol::ApprovalsReviewer as AppServerApprovalsReviewer;
use codex_app_server_protocol::AskForApproval as AppServerAskForApproval;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeCapabilities;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxPolicy as AppServerSandboxPolicy;
use codex_app_server_protocol::SessionSource as AppServerSessionSource;
use codex_app_server_protocol::Thread;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStatus as AppServerThreadStatus;
use codex_login::default_client::DEFAULT_ORIGINATOR;
use codex_login::default_client::originator;
use codex_plugin::AppConnectorId;
use codex_plugin::PluginCapabilitySummary;
@@ -24,6 +55,61 @@ use std::sync::Arc;
use std::sync::Mutex;
use tokio::sync::mpsc;
fn sample_thread(thread_id: &str, ephemeral: bool) -> Thread {
Thread {
id: thread_id.to_string(),
preview: "first prompt".to_string(),
ephemeral,
model_provider: "openai".to_string(),
created_at: 1,
updated_at: 2,
status: AppServerThreadStatus::Idle,
path: None,
cwd: PathBuf::from("/tmp"),
cli_version: "0.0.0".to_string(),
source: AppServerSessionSource::Exec,
agent_nickname: None,
agent_role: None,
git_info: None,
name: None,
turns: Vec::new(),
}
}
fn sample_thread_start_response(thread_id: &str, ephemeral: bool, model: &str) -> ClientResponse {
ClientResponse::ThreadStart {
request_id: RequestId::Integer(1),
response: ThreadStartResponse {
thread: sample_thread(thread_id, ephemeral),
model: model.to_string(),
model_provider: "openai".to_string(),
service_tier: None,
cwd: PathBuf::from("/tmp"),
approval_policy: AppServerAskForApproval::OnFailure,
approvals_reviewer: AppServerApprovalsReviewer::User,
sandbox: AppServerSandboxPolicy::DangerFullAccess,
reasoning_effort: None,
},
}
}
fn sample_thread_resume_response(thread_id: &str, ephemeral: bool, model: &str) -> ClientResponse {
ClientResponse::ThreadResume {
request_id: RequestId::Integer(2),
response: ThreadResumeResponse {
thread: sample_thread(thread_id, ephemeral),
model: model.to_string(),
model_provider: "openai".to_string(),
service_tier: None,
cwd: PathBuf::from("/tmp"),
approval_policy: AppServerAskForApproval::OnFailure,
approvals_reviewer: AppServerApprovalsReviewer::User,
sandbox: AppServerSandboxPolicy::DangerFullAccess,
reasoning_effort: None,
},
}
}
fn expected_absolute_path(path: &PathBuf) -> String {
std::fs::canonicalize(path)
.unwrap_or_else(|_| path.to_path_buf())
@@ -49,7 +135,11 @@ fn normalize_path_for_skill_id_repo_scoped_uses_relative_path() {
fn normalize_path_for_skill_id_user_scoped_uses_absolute_path() {
let skill_path = PathBuf::from("/Users/abc/.codex/skills/doc/SKILL.md");
let path = normalize_path_for_skill_id(None, None, skill_path.as_path());
let path = normalize_path_for_skill_id(
/*repo_url*/ None,
/*repo_root*/ None,
skill_path.as_path(),
);
let expected = expected_absolute_path(&skill_path);
assert_eq!(path, expected);
@@ -59,7 +149,11 @@ fn normalize_path_for_skill_id_user_scoped_uses_absolute_path() {
fn normalize_path_for_skill_id_admin_scoped_uses_absolute_path() {
let skill_path = PathBuf::from("/etc/codex/skills/doc/SKILL.md");
let path = normalize_path_for_skill_id(None, None, skill_path.as_path());
let path = normalize_path_for_skill_id(
/*repo_url*/ None,
/*repo_root*/ None,
skill_path.as_path(),
);
let expected = expected_absolute_path(&skill_path);
assert_eq!(path, expected);
@@ -186,6 +280,171 @@ fn app_used_dedupe_is_keyed_by_turn_and_connector() {
assert_eq!(queue.should_enqueue_app_used(&turn_2, &app), true);
}
#[test]
fn thread_initialized_event_serializes_expected_shape() {
let event = TrackEventRequest::ThreadInitialized(ThreadInitializedEvent {
event_type: "codex_thread_initialized",
event_params: ThreadInitializedEventParams {
thread_id: "thread-0".to_string(),
app_server_client: CodexAppServerClientMetadata {
product_client_id: DEFAULT_ORIGINATOR.to_string(),
client_name: Some("codex-tui".to_string()),
client_version: Some("1.0.0".to_string()),
rpc_transport: AppServerRpcTransport::Stdio,
experimental_api_enabled: Some(true),
},
runtime: CodexRuntimeMetadata {
codex_rs_version: "0.1.0".to_string(),
runtime_os: "macos".to_string(),
runtime_os_version: "15.3.1".to_string(),
runtime_arch: "aarch64".to_string(),
},
model: "gpt-5".to_string(),
ephemeral: true,
thread_source: Some("user"),
initialization_mode: ThreadInitializationMode::New,
subagent_source: None,
parent_thread_id: None,
created_at: 1,
},
});
let payload = serde_json::to_value(&event).expect("serialize thread initialized event");
assert_eq!(
payload,
json!({
"event_type": "codex_thread_initialized",
"event_params": {
"thread_id": "thread-0",
"app_server_client": {
"product_client_id": DEFAULT_ORIGINATOR,
"client_name": "codex-tui",
"client_version": "1.0.0",
"rpc_transport": "stdio",
"experimental_api_enabled": true
},
"runtime": {
"codex_rs_version": "0.1.0",
"runtime_os": "macos",
"runtime_os_version": "15.3.1",
"runtime_arch": "aarch64"
},
"model": "gpt-5",
"ephemeral": true,
"thread_source": "user",
"initialization_mode": "new",
"subagent_source": null,
"parent_thread_id": null,
"created_at": 1
}
})
);
}
#[tokio::test]
async fn initialize_caches_client_and_thread_lifecycle_publishes_once_initialized() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
reducer
.ingest(
AnalyticsFact::Response {
connection_id: 7,
response: Box::new(sample_thread_start_response(
"thread-no-client",
/*ephemeral*/ false,
"gpt-5",
)),
},
&mut events,
)
.await;
assert!(events.is_empty(), "thread events should require initialize");
reducer
.ingest(
AnalyticsFact::Initialize {
connection_id: 7,
params: InitializeParams {
client_info: ClientInfo {
name: "codex-tui".to_string(),
title: None,
version: "1.0.0".to_string(),
},
capabilities: Some(InitializeCapabilities {
experimental_api: false,
opt_out_notification_methods: None,
}),
},
product_client_id: DEFAULT_ORIGINATOR.to_string(),
runtime: CodexRuntimeMetadata {
codex_rs_version: "0.99.0".to_string(),
runtime_os: "linux".to_string(),
runtime_os_version: "24.04".to_string(),
runtime_arch: "x86_64".to_string(),
},
rpc_transport: AppServerRpcTransport::Websocket,
},
&mut events,
)
.await;
assert!(events.is_empty(), "initialize should not publish by itself");
reducer
.ingest(
AnalyticsFact::Response {
connection_id: 7,
response: Box::new(sample_thread_resume_response(
"thread-1", /*ephemeral*/ true, "gpt-5",
)),
},
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(payload.as_array().expect("events array").len(), 1);
assert_eq!(payload[0]["event_type"], "codex_thread_initialized");
assert_eq!(
payload[0]["event_params"]["app_server_client"]["product_client_id"],
DEFAULT_ORIGINATOR
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["client_name"],
"codex-tui"
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["client_version"],
"1.0.0"
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["rpc_transport"],
"websocket"
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["experimental_api_enabled"],
false
);
assert_eq!(
payload[0]["event_params"]["runtime"]["codex_rs_version"],
"0.99.0"
);
assert_eq!(payload[0]["event_params"]["runtime"]["runtime_os"], "linux");
assert_eq!(
payload[0]["event_params"]["runtime"]["runtime_os_version"],
"24.04"
);
assert_eq!(
payload[0]["event_params"]["runtime"]["runtime_arch"],
"x86_64"
);
assert_eq!(payload[0]["event_params"]["initialization_mode"], "resumed");
assert_eq!(payload[0]["event_params"]["thread_source"], "user");
assert_eq!(payload[0]["event_params"]["subagent_source"], json!(null));
assert_eq!(payload[0]["event_params"]["parent_thread_id"], json!(null));
}
#[test]
fn plugin_used_event_serializes_expected_shape() {
let tracking = TrackEventsContext {
@@ -272,6 +531,145 @@ fn plugin_used_dedupe_is_keyed_by_turn_and_plugin() {
assert_eq!(queue.should_enqueue_plugin_used(&turn_2, &plugin), true);
}
#[tokio::test]
async fn reducer_ingests_skill_invoked_fact() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
let tracking = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
};
let skill_path = PathBuf::from("/Users/abc/.codex/skills/doc/SKILL.md");
let expected_skill_id = skill_id_for_local_skill(
/*repo_url*/ None,
/*repo_root*/ None,
skill_path.as_path(),
"doc",
);
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::SkillInvoked(SkillInvokedInput {
tracking,
invocations: vec![SkillInvocation {
skill_name: "doc".to_string(),
skill_scope: codex_protocol::protocol::SkillScope::User,
skill_path,
invocation_type: InvocationType::Explicit,
}],
})),
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(
payload,
json!([{
"event_type": "skill_invocation",
"skill_id": expected_skill_id,
"skill_name": "doc",
"event_params": {
"product_client_id": originator().value,
"skill_scope": "user",
"repo_url": null,
"thread_id": "thread-1",
"invoke_type": "explicit",
"model_slug": "gpt-5"
}
}])
);
}
#[tokio::test]
async fn reducer_ingests_app_and_plugin_facts() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
let tracking = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
};
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::AppMentioned(AppMentionedInput {
tracking: tracking.clone(),
mentions: vec![AppInvocation {
connector_id: Some("calendar".to_string()),
app_name: Some("Calendar".to_string()),
invocation_type: Some(InvocationType::Explicit),
}],
})),
&mut events,
)
.await;
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::AppUsed(AppUsedInput {
tracking: tracking.clone(),
app: AppInvocation {
connector_id: Some("drive".to_string()),
app_name: Some("Drive".to_string()),
invocation_type: Some(InvocationType::Implicit),
},
})),
&mut events,
)
.await;
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::PluginUsed(PluginUsedInput {
tracking,
plugin: sample_plugin_metadata(),
})),
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(payload.as_array().expect("events array").len(), 3);
assert_eq!(payload[0]["event_type"], "codex_app_mentioned");
assert_eq!(payload[1]["event_type"], "codex_app_used");
assert_eq!(payload[2]["event_type"], "codex_plugin_used");
}
#[tokio::test]
async fn reducer_ingests_plugin_state_changed_fact() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::PluginStateChanged(
PluginStateChangedInput {
plugin: sample_plugin_metadata(),
state: PluginState::Disabled,
},
)),
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(
payload,
json!([{
"event_type": "codex_plugin_disabled",
"event_params": {
"plugin_id": "sample@test",
"plugin_name": "sample",
"marketplace_name": "test",
"has_skills": true,
"mcp_server_count": 2,
"connector_ids": ["calendar", "drive"],
"product_client_id": originator().value
}
}])
);
}
fn sample_plugin_metadata() -> PluginTelemetryMetadata {
PluginTelemetryMetadata {
plugin_id: PluginId::parse("sample@test").expect("valid plugin id"),


@@ -0,0 +1,272 @@
use crate::events::AppServerRpcTransport;
use crate::events::TrackEventRequest;
use crate::events::TrackEventsRequest;
use crate::events::current_runtime_metadata;
use crate::facts::AnalyticsFact;
use crate::facts::AppInvocation;
use crate::facts::AppMentionedInput;
use crate::facts::AppUsedInput;
use crate::facts::CustomAnalyticsFact;
use crate::facts::PluginState;
use crate::facts::PluginStateChangedInput;
use crate::facts::SkillInvocation;
use crate::facts::SkillInvokedInput;
use crate::facts::TrackEventsContext;
use crate::reducer::AnalyticsReducer;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeParams;
use codex_login::AuthManager;
use codex_login::default_client::create_client;
use codex_plugin::PluginTelemetryMetadata;
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use tokio::sync::mpsc;
const ANALYTICS_EVENTS_QUEUE_SIZE: usize = 256;
const ANALYTICS_EVENTS_TIMEOUT: Duration = Duration::from_secs(10);
const ANALYTICS_EVENT_DEDUPE_MAX_KEYS: usize = 4096;
#[derive(Clone)]
pub(crate) struct AnalyticsEventsQueue {
pub(crate) sender: mpsc::Sender<AnalyticsFact>,
pub(crate) app_used_emitted_keys: Arc<Mutex<HashSet<(String, String)>>>,
pub(crate) plugin_used_emitted_keys: Arc<Mutex<HashSet<(String, String)>>>,
}
#[derive(Clone)]
pub struct AnalyticsEventsClient {
queue: AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
}
impl AnalyticsEventsQueue {
pub(crate) fn new(auth_manager: Arc<AuthManager>, base_url: String) -> Self {
let (sender, mut receiver) = mpsc::channel(ANALYTICS_EVENTS_QUEUE_SIZE);
tokio::spawn(async move {
let mut reducer = AnalyticsReducer::default();
while let Some(input) = receiver.recv().await {
let mut events = Vec::new();
reducer.ingest(input, &mut events).await;
send_track_events(&auth_manager, &base_url, events).await;
}
});
Self {
sender,
app_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
plugin_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
}
}
fn try_send(&self, input: AnalyticsFact) {
if self.sender.try_send(input).is_err() {
//TODO: add a metric for this
tracing::warn!("dropping analytics events: queue is full");
}
}
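// Dedupe note: `HashSet::insert` returns `false` when a key is already
// present, so the helpers below enqueue only the first `app_used` event per
// (turn_id, connector_id) pair and the first `plugin_used` event per
// (turn_id, plugin_id) pair. Clearing the set once it reaches
// ANALYTICS_EVENT_DEDUPE_MAX_KEYS bounds memory at the cost of an occasional
// duplicate after the reset.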
pub(crate) fn should_enqueue_app_used(
&self,
tracking: &TrackEventsContext,
app: &AppInvocation,
) -> bool {
let Some(connector_id) = app.connector_id.as_ref() else {
return true;
};
let mut emitted = self
.app_used_emitted_keys
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
if emitted.len() >= ANALYTICS_EVENT_DEDUPE_MAX_KEYS {
emitted.clear();
}
emitted.insert((tracking.turn_id.clone(), connector_id.clone()))
}
pub(crate) fn should_enqueue_plugin_used(
&self,
tracking: &TrackEventsContext,
plugin: &PluginTelemetryMetadata,
) -> bool {
let mut emitted = self
.plugin_used_emitted_keys
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
if emitted.len() >= ANALYTICS_EVENT_DEDUPE_MAX_KEYS {
emitted.clear();
}
emitted.insert((tracking.turn_id.clone(), plugin.plugin_id.as_key()))
}
}
impl AnalyticsEventsClient {
pub fn new(
auth_manager: Arc<AuthManager>,
base_url: String,
analytics_enabled: Option<bool>,
) -> Self {
Self {
queue: AnalyticsEventsQueue::new(Arc::clone(&auth_manager), base_url),
analytics_enabled,
}
}
pub fn track_skill_invocations(
&self,
tracking: TrackEventsContext,
invocations: Vec<SkillInvocation>,
) {
if invocations.is_empty() {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::SkillInvoked(
SkillInvokedInput {
tracking,
invocations,
},
)));
}
pub fn track_initialize(
&self,
connection_id: u64,
params: InitializeParams,
product_client_id: String,
rpc_transport: AppServerRpcTransport,
) {
self.record_fact(AnalyticsFact::Initialize {
connection_id,
params,
product_client_id,
runtime: current_runtime_metadata(),
rpc_transport,
});
}
pub fn track_app_mentioned(&self, tracking: TrackEventsContext, mentions: Vec<AppInvocation>) {
if mentions.is_empty() {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::AppMentioned(
AppMentionedInput { tracking, mentions },
)));
}
pub fn track_app_used(&self, tracking: TrackEventsContext, app: AppInvocation) {
if !self.queue.should_enqueue_app_used(&tracking, &app) {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::AppUsed(
AppUsedInput { tracking, app },
)));
}
pub fn track_plugin_used(&self, tracking: TrackEventsContext, plugin: PluginTelemetryMetadata) {
if !self.queue.should_enqueue_plugin_used(&tracking, &plugin) {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::PluginUsed(
crate::facts::PluginUsedInput { tracking, plugin },
)));
}
pub fn track_plugin_installed(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Installed,
}),
));
}
pub fn track_plugin_uninstalled(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Uninstalled,
}),
));
}
pub fn track_plugin_enabled(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Enabled,
}),
));
}
pub fn track_plugin_disabled(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Disabled,
}),
));
}
pub(crate) fn record_fact(&self, input: AnalyticsFact) {
if self.analytics_enabled == Some(false) {
return;
}
self.queue.try_send(input);
}
pub fn track_response(&self, connection_id: u64, response: ClientResponse) {
self.record_fact(AnalyticsFact::Response {
connection_id,
response: Box::new(response),
});
}
}
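// Gating: `record_fact` drops every fact when `analytics_enabled` is
// `Some(false)`; `None` (unset) passes through, and the remaining checks
// (ChatGPT auth, token, account id) happen in `send_track_events` below.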
async fn send_track_events(
auth_manager: &AuthManager,
base_url: &str,
events: Vec<TrackEventRequest>,
) {
if events.is_empty() {
return;
}
let Some(auth) = auth_manager.auth().await else {
return;
};
if !auth.is_chatgpt_auth() {
return;
}
let access_token = match auth.get_token() {
Ok(token) => token,
Err(_) => return,
};
let Some(account_id) = auth.get_account_id() else {
return;
};
let base_url = base_url.trim_end_matches('/');
let url = format!("{base_url}/codex/analytics-events/events");
let payload = TrackEventsRequest { events };
let response = create_client()
.post(&url)
.timeout(ANALYTICS_EVENTS_TIMEOUT)
.bearer_auth(&access_token)
.header("chatgpt-account-id", &account_id)
.header("Content-Type", "application/json")
.json(&payload)
.send()
.await;
match response {
Ok(response) if response.status().is_success() => {}
Ok(response) => {
let status = response.status();
let body = response.text().await.unwrap_or_default();
tracing::warn!("events failed with status {status}: {body}");
}
Err(err) => {
tracing::warn!("failed to send events request: {err}");
}
}
}
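
A minimal usage sketch of the new client (illustrative only: the base URL is a placeholder, and an already-constructed `AuthManager` plus plugin metadata are assumed):

use std::sync::Arc;

use codex_analytics::AnalyticsEventsClient;
use codex_login::AuthManager;
use codex_plugin::PluginTelemetryMetadata;

fn report_install(auth_manager: Arc<AuthManager>, plugin: PluginTelemetryMetadata) {
    // Must run inside a Tokio runtime: the queue constructor spawns its
    // background worker with `tokio::spawn`.
    // `Some(true)` opts in; `Some(false)` turns every tracker into a no-op.
    let client = AnalyticsEventsClient::new(
        auth_manager,
        "https://example.invalid/backend".to_string(), // placeholder base URL
        Some(true),
    );
    // Fire-and-forget: the fact is queued, folded into TrackEventRequest
    // values on the background task, and posted only for ChatGPT-authenticated
    // sessions; if the queue is full the fact is dropped with a warning.
    client.track_plugin_installed(plugin);
}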


@@ -0,0 +1,230 @@
use crate::facts::AppInvocation;
use crate::facts::InvocationType;
use crate::facts::PluginState;
use crate::facts::TrackEventsContext;
use codex_login::default_client::originator;
use codex_plugin::PluginTelemetryMetadata;
use codex_protocol::protocol::SessionSource;
use serde::Serialize;
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "snake_case")]
pub enum AppServerRpcTransport {
Stdio,
Websocket,
InProcess,
}
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "snake_case")]
pub(crate) enum ThreadInitializationMode {
New,
Forked,
Resumed,
}
#[derive(Serialize)]
pub(crate) struct TrackEventsRequest {
pub(crate) events: Vec<TrackEventRequest>,
}
#[derive(Serialize)]
#[serde(untagged)]
pub(crate) enum TrackEventRequest {
SkillInvocation(SkillInvocationEventRequest),
ThreadInitialized(ThreadInitializedEvent),
AppMentioned(CodexAppMentionedEventRequest),
AppUsed(CodexAppUsedEventRequest),
PluginUsed(CodexPluginUsedEventRequest),
PluginInstalled(CodexPluginEventRequest),
PluginUninstalled(CodexPluginEventRequest),
PluginEnabled(CodexPluginEventRequest),
PluginDisabled(CodexPluginEventRequest),
}
#[derive(Serialize)]
pub(crate) struct SkillInvocationEventRequest {
pub(crate) event_type: &'static str,
pub(crate) skill_id: String,
pub(crate) skill_name: String,
pub(crate) event_params: SkillInvocationEventParams,
}
#[derive(Serialize)]
pub(crate) struct SkillInvocationEventParams {
pub(crate) product_client_id: Option<String>,
pub(crate) skill_scope: Option<String>,
pub(crate) repo_url: Option<String>,
pub(crate) thread_id: Option<String>,
pub(crate) invoke_type: Option<InvocationType>,
pub(crate) model_slug: Option<String>,
}
#[derive(Clone, Serialize)]
pub(crate) struct CodexAppServerClientMetadata {
pub(crate) product_client_id: String,
pub(crate) client_name: Option<String>,
pub(crate) client_version: Option<String>,
pub(crate) rpc_transport: AppServerRpcTransport,
pub(crate) experimental_api_enabled: Option<bool>,
}
#[derive(Clone, Serialize)]
pub(crate) struct CodexRuntimeMetadata {
pub(crate) codex_rs_version: String,
pub(crate) runtime_os: String,
pub(crate) runtime_os_version: String,
pub(crate) runtime_arch: String,
}
#[derive(Serialize)]
pub(crate) struct ThreadInitializedEventParams {
pub(crate) thread_id: String,
pub(crate) app_server_client: CodexAppServerClientMetadata,
pub(crate) runtime: CodexRuntimeMetadata,
pub(crate) model: String,
pub(crate) ephemeral: bool,
pub(crate) thread_source: Option<&'static str>,
pub(crate) initialization_mode: ThreadInitializationMode,
pub(crate) subagent_source: Option<String>,
pub(crate) parent_thread_id: Option<String>,
pub(crate) created_at: u64,
}
#[derive(Serialize)]
pub(crate) struct ThreadInitializedEvent {
pub(crate) event_type: &'static str,
pub(crate) event_params: ThreadInitializedEventParams,
}
#[derive(Serialize)]
pub(crate) struct CodexAppMetadata {
pub(crate) connector_id: Option<String>,
pub(crate) thread_id: Option<String>,
pub(crate) turn_id: Option<String>,
pub(crate) app_name: Option<String>,
pub(crate) product_client_id: Option<String>,
pub(crate) invoke_type: Option<InvocationType>,
pub(crate) model_slug: Option<String>,
}
#[derive(Serialize)]
pub(crate) struct CodexAppMentionedEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexAppMetadata,
}
#[derive(Serialize)]
pub(crate) struct CodexAppUsedEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexAppMetadata,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginMetadata {
pub(crate) plugin_id: Option<String>,
pub(crate) plugin_name: Option<String>,
pub(crate) marketplace_name: Option<String>,
pub(crate) has_skills: Option<bool>,
pub(crate) mcp_server_count: Option<usize>,
pub(crate) connector_ids: Option<Vec<String>>,
pub(crate) product_client_id: Option<String>,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginUsedMetadata {
#[serde(flatten)]
pub(crate) plugin: CodexPluginMetadata,
pub(crate) thread_id: Option<String>,
pub(crate) turn_id: Option<String>,
pub(crate) model_slug: Option<String>,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexPluginMetadata,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginUsedEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexPluginUsedMetadata,
}
pub(crate) fn plugin_state_event_type(state: PluginState) -> &'static str {
match state {
PluginState::Installed => "codex_plugin_installed",
PluginState::Uninstalled => "codex_plugin_uninstalled",
PluginState::Enabled => "codex_plugin_enabled",
PluginState::Disabled => "codex_plugin_disabled",
}
}
pub(crate) fn codex_app_metadata(
tracking: &TrackEventsContext,
app: AppInvocation,
) -> CodexAppMetadata {
CodexAppMetadata {
connector_id: app.connector_id,
thread_id: Some(tracking.thread_id.clone()),
turn_id: Some(tracking.turn_id.clone()),
app_name: app.app_name,
product_client_id: Some(originator().value),
invoke_type: app.invocation_type,
model_slug: Some(tracking.model_slug.clone()),
}
}
pub(crate) fn codex_plugin_metadata(plugin: PluginTelemetryMetadata) -> CodexPluginMetadata {
let capability_summary = plugin.capability_summary;
CodexPluginMetadata {
plugin_id: Some(plugin.plugin_id.as_key()),
plugin_name: Some(plugin.plugin_id.plugin_name),
marketplace_name: Some(plugin.plugin_id.marketplace_name),
has_skills: capability_summary
.as_ref()
.map(|summary| summary.has_skills),
mcp_server_count: capability_summary
.as_ref()
.map(|summary| summary.mcp_server_names.len()),
connector_ids: capability_summary.map(|summary| {
summary
.app_connector_ids
.into_iter()
.map(|connector_id| connector_id.0)
.collect()
}),
product_client_id: Some(originator().value),
}
}
pub(crate) fn codex_plugin_used_metadata(
tracking: &TrackEventsContext,
plugin: PluginTelemetryMetadata,
) -> CodexPluginUsedMetadata {
CodexPluginUsedMetadata {
plugin: codex_plugin_metadata(plugin),
thread_id: Some(tracking.thread_id.clone()),
turn_id: Some(tracking.turn_id.clone()),
model_slug: Some(tracking.model_slug.clone()),
}
}
pub(crate) fn thread_source_name(thread_source: &SessionSource) -> Option<&'static str> {
match thread_source {
SessionSource::Cli | SessionSource::VSCode | SessionSource::Exec => Some("user"),
SessionSource::SubAgent(_) => Some("subagent"),
SessionSource::Mcp | SessionSource::Custom(_) | SessionSource::Unknown => None,
}
}
pub(crate) fn current_runtime_metadata() -> CodexRuntimeMetadata {
let os_info = os_info::get();
CodexRuntimeMetadata {
codex_rs_version: env!("CARGO_PKG_VERSION").to_string(),
runtime_os: std::env::consts::OS.to_string(),
runtime_os_version: os_info.version().to_string(),
runtime_arch: std::env::consts::ARCH.to_string(),
}
}
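// Serialization note: `TrackEventRequest` is `#[serde(untagged)]`, so serde
// adds no discriminator of its own; each variant's `event_type` string is the
// only type marker on the wire, e.g.
//   {"events":[{"event_type":"codex_plugin_disabled","event_params":{...}}]}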


@@ -0,0 +1,116 @@
use crate::events::AppServerRpcTransport;
use crate::events::CodexRuntimeMetadata;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_plugin::PluginTelemetryMetadata;
use codex_protocol::protocol::SkillScope;
use serde::Serialize;
use std::path::PathBuf;
#[derive(Clone)]
pub struct TrackEventsContext {
pub model_slug: String,
pub thread_id: String,
pub turn_id: String,
}
pub fn build_track_events_context(
model_slug: String,
thread_id: String,
turn_id: String,
) -> TrackEventsContext {
TrackEventsContext {
model_slug,
thread_id,
turn_id,
}
}
#[derive(Clone, Debug)]
pub struct SkillInvocation {
pub skill_name: String,
pub skill_scope: SkillScope,
pub skill_path: PathBuf,
pub invocation_type: InvocationType,
}
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "lowercase")]
pub enum InvocationType {
Explicit,
Implicit,
}
pub struct AppInvocation {
pub connector_id: Option<String>,
pub app_name: Option<String>,
pub invocation_type: Option<InvocationType>,
}
#[allow(dead_code)]
pub(crate) enum AnalyticsFact {
Initialize {
connection_id: u64,
params: InitializeParams,
product_client_id: String,
runtime: CodexRuntimeMetadata,
rpc_transport: AppServerRpcTransport,
},
Request {
connection_id: u64,
request_id: RequestId,
request: Box<ClientRequest>,
},
Response {
connection_id: u64,
response: Box<ClientResponse>,
},
Notification(Box<ServerNotification>),
// Facts that do not naturally exist on the app-server protocol surface, or
// would require non-trivial protocol reshaping on this branch.
Custom(CustomAnalyticsFact),
}
pub(crate) enum CustomAnalyticsFact {
SkillInvoked(SkillInvokedInput),
AppMentioned(AppMentionedInput),
AppUsed(AppUsedInput),
PluginUsed(PluginUsedInput),
PluginStateChanged(PluginStateChangedInput),
}
pub(crate) struct SkillInvokedInput {
pub tracking: TrackEventsContext,
pub invocations: Vec<SkillInvocation>,
}
pub(crate) struct AppMentionedInput {
pub tracking: TrackEventsContext,
pub mentions: Vec<AppInvocation>,
}
pub(crate) struct AppUsedInput {
pub tracking: TrackEventsContext,
pub app: AppInvocation,
}
pub(crate) struct PluginUsedInput {
pub tracking: TrackEventsContext,
pub plugin: PluginTelemetryMetadata,
}
pub(crate) struct PluginStateChangedInput {
pub plugin: PluginTelemetryMetadata,
pub state: PluginState,
}
#[derive(Clone, Copy)]
pub(crate) enum PluginState {
Installed,
Uninstalled,
Enabled,
Disabled,
}
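// Data flow: `AnalyticsEventsClient` records facts onto the queue,
// `AnalyticsReducer::ingest` folds each fact (plus per-connection state from
// `Initialize`) into zero or more `TrackEventRequest` values, and the client
// task posts the result. `Request` and `Notification` are ignored by the
// reducer today but keep the full app-server surface available as input.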


@@ -1,8 +1,15 @@
mod analytics_client;
mod client;
mod events;
mod facts;
mod reducer;
pub use analytics_client::AnalyticsEventsClient;
pub use analytics_client::AppInvocation;
pub use analytics_client::InvocationType;
pub use analytics_client::SkillInvocation;
pub use analytics_client::TrackEventsContext;
pub use analytics_client::build_track_events_context;
pub use client::AnalyticsEventsClient;
pub use events::AppServerRpcTransport;
pub use facts::AppInvocation;
pub use facts::InvocationType;
pub use facts::SkillInvocation;
pub use facts::TrackEventsContext;
pub use facts::build_track_events_context;
#[cfg(test)]
mod analytics_client_tests;


@@ -0,0 +1,305 @@
use crate::events::AppServerRpcTransport;
use crate::events::CodexAppMentionedEventRequest;
use crate::events::CodexAppServerClientMetadata;
use crate::events::CodexAppUsedEventRequest;
use crate::events::CodexPluginEventRequest;
use crate::events::CodexPluginUsedEventRequest;
use crate::events::CodexRuntimeMetadata;
use crate::events::SkillInvocationEventParams;
use crate::events::SkillInvocationEventRequest;
use crate::events::ThreadInitializationMode;
use crate::events::ThreadInitializedEvent;
use crate::events::ThreadInitializedEventParams;
use crate::events::TrackEventRequest;
use crate::events::codex_app_metadata;
use crate::events::codex_plugin_metadata;
use crate::events::codex_plugin_used_metadata;
use crate::events::plugin_state_event_type;
use crate::events::thread_source_name;
use crate::facts::AnalyticsFact;
use crate::facts::AppMentionedInput;
use crate::facts::AppUsedInput;
use crate::facts::CustomAnalyticsFact;
use crate::facts::PluginState;
use crate::facts::PluginStateChangedInput;
use crate::facts::PluginUsedInput;
use crate::facts::SkillInvokedInput;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeParams;
use codex_git_utils::collect_git_info;
use codex_git_utils::get_git_repo_root;
use codex_login::default_client::originator;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SkillScope;
use sha1::Digest;
use std::collections::HashMap;
use std::path::Path;
#[derive(Default)]
pub(crate) struct AnalyticsReducer {
connections: HashMap<u64, ConnectionState>,
}
struct ConnectionState {
app_server_client: CodexAppServerClientMetadata,
runtime: CodexRuntimeMetadata,
}
impl AnalyticsReducer {
pub(crate) async fn ingest(&mut self, input: AnalyticsFact, out: &mut Vec<TrackEventRequest>) {
match input {
AnalyticsFact::Initialize {
connection_id,
params,
product_client_id,
runtime,
rpc_transport,
} => {
self.ingest_initialize(
connection_id,
params,
product_client_id,
runtime,
rpc_transport,
);
}
AnalyticsFact::Request {
connection_id: _connection_id,
request_id: _request_id,
request: _request,
} => {}
AnalyticsFact::Response {
connection_id,
response,
} => {
self.ingest_response(connection_id, *response, out);
}
AnalyticsFact::Notification(_notification) => {}
AnalyticsFact::Custom(input) => match input {
CustomAnalyticsFact::SkillInvoked(input) => {
self.ingest_skill_invoked(input, out).await;
}
CustomAnalyticsFact::AppMentioned(input) => {
self.ingest_app_mentioned(input, out);
}
CustomAnalyticsFact::AppUsed(input) => {
self.ingest_app_used(input, out);
}
CustomAnalyticsFact::PluginUsed(input) => {
self.ingest_plugin_used(input, out);
}
CustomAnalyticsFact::PluginStateChanged(input) => {
self.ingest_plugin_state_changed(input, out);
}
},
}
}
fn ingest_initialize(
&mut self,
connection_id: u64,
params: InitializeParams,
product_client_id: String,
runtime: CodexRuntimeMetadata,
rpc_transport: AppServerRpcTransport,
) {
self.connections.insert(
connection_id,
ConnectionState {
app_server_client: CodexAppServerClientMetadata {
product_client_id,
client_name: Some(params.client_info.name),
client_version: Some(params.client_info.version),
rpc_transport,
experimental_api_enabled: params
.capabilities
.map(|capabilities| capabilities.experimental_api),
},
runtime,
},
);
}
async fn ingest_skill_invoked(
&mut self,
input: SkillInvokedInput,
out: &mut Vec<TrackEventRequest>,
) {
let SkillInvokedInput {
tracking,
invocations,
} = input;
for invocation in invocations {
let skill_scope = match invocation.skill_scope {
SkillScope::User => "user",
SkillScope::Repo => "repo",
SkillScope::System => "system",
SkillScope::Admin => "admin",
};
let repo_root = get_git_repo_root(invocation.skill_path.as_path());
let repo_url = if let Some(root) = repo_root.as_ref() {
collect_git_info(root)
.await
.and_then(|info| info.repository_url)
} else {
None
};
let skill_id = skill_id_for_local_skill(
repo_url.as_deref(),
repo_root.as_deref(),
invocation.skill_path.as_path(),
invocation.skill_name.as_str(),
);
out.push(TrackEventRequest::SkillInvocation(
SkillInvocationEventRequest {
event_type: "skill_invocation",
skill_id,
skill_name: invocation.skill_name.clone(),
event_params: SkillInvocationEventParams {
thread_id: Some(tracking.thread_id.clone()),
invoke_type: Some(invocation.invocation_type),
model_slug: Some(tracking.model_slug.clone()),
product_client_id: Some(originator().value),
repo_url,
skill_scope: Some(skill_scope.to_string()),
},
},
));
}
}
fn ingest_app_mentioned(&mut self, input: AppMentionedInput, out: &mut Vec<TrackEventRequest>) {
let AppMentionedInput { tracking, mentions } = input;
out.extend(mentions.into_iter().map(|mention| {
let event_params = codex_app_metadata(&tracking, mention);
TrackEventRequest::AppMentioned(CodexAppMentionedEventRequest {
event_type: "codex_app_mentioned",
event_params,
})
}));
}
fn ingest_app_used(&mut self, input: AppUsedInput, out: &mut Vec<TrackEventRequest>) {
let AppUsedInput { tracking, app } = input;
let event_params = codex_app_metadata(&tracking, app);
out.push(TrackEventRequest::AppUsed(CodexAppUsedEventRequest {
event_type: "codex_app_used",
event_params,
}));
}
fn ingest_plugin_used(&mut self, input: PluginUsedInput, out: &mut Vec<TrackEventRequest>) {
let PluginUsedInput { tracking, plugin } = input;
out.push(TrackEventRequest::PluginUsed(CodexPluginUsedEventRequest {
event_type: "codex_plugin_used",
event_params: codex_plugin_used_metadata(&tracking, plugin),
}));
}
fn ingest_plugin_state_changed(
&mut self,
input: PluginStateChangedInput,
out: &mut Vec<TrackEventRequest>,
) {
let PluginStateChangedInput { plugin, state } = input;
let event = CodexPluginEventRequest {
event_type: plugin_state_event_type(state),
event_params: codex_plugin_metadata(plugin),
};
out.push(match state {
PluginState::Installed => TrackEventRequest::PluginInstalled(event),
PluginState::Uninstalled => TrackEventRequest::PluginUninstalled(event),
PluginState::Enabled => TrackEventRequest::PluginEnabled(event),
PluginState::Disabled => TrackEventRequest::PluginDisabled(event),
});
}
fn ingest_response(
&mut self,
connection_id: u64,
response: ClientResponse,
out: &mut Vec<TrackEventRequest>,
) {
let (thread, model, initialization_mode) = match response {
ClientResponse::ThreadStart { response, .. } => (
response.thread,
response.model,
ThreadInitializationMode::New,
),
ClientResponse::ThreadResume { response, .. } => (
response.thread,
response.model,
ThreadInitializationMode::Resumed,
),
ClientResponse::ThreadFork { response, .. } => (
response.thread,
response.model,
ThreadInitializationMode::Forked,
),
_ => return,
};
let thread_source: SessionSource = thread.source.into();
let Some(connection_state) = self.connections.get(&connection_id) else {
return;
};
out.push(TrackEventRequest::ThreadInitialized(
ThreadInitializedEvent {
event_type: "codex_thread_initialized",
event_params: ThreadInitializedEventParams {
thread_id: thread.id,
app_server_client: connection_state.app_server_client.clone(),
runtime: connection_state.runtime.clone(),
model,
ephemeral: thread.ephemeral,
thread_source: thread_source_name(&thread_source),
initialization_mode,
subagent_source: None,
parent_thread_id: None,
created_at: u64::try_from(thread.created_at).unwrap_or_default(),
},
},
));
}
}
pub(crate) fn skill_id_for_local_skill(
repo_url: Option<&str>,
repo_root: Option<&Path>,
skill_path: &Path,
skill_name: &str,
) -> String {
let path = normalize_path_for_skill_id(repo_url, repo_root, skill_path);
let prefix = if let Some(url) = repo_url {
format!("repo_{url}")
} else {
"personal".to_string()
};
let raw_id = format!("{prefix}_{path}_{skill_name}");
let mut hasher = sha1::Sha1::new();
sha1::Digest::update(&mut hasher, raw_id.as_bytes());
format!("{:x}", sha1::Digest::finalize(hasher))
}
/// Returns a normalized path for skill ID construction.
///
/// - Repo-scoped skills use a path relative to the repo root.
/// - User/admin/system skills use an absolute path.
pub(crate) fn normalize_path_for_skill_id(
repo_url: Option<&str>,
repo_root: Option<&Path>,
skill_path: &Path,
) -> String {
let resolved_path =
std::fs::canonicalize(skill_path).unwrap_or_else(|_| skill_path.to_path_buf());
match (repo_url, repo_root) {
(Some(_), Some(root)) => {
let resolved_root = std::fs::canonicalize(root).unwrap_or_else(|_| root.to_path_buf());
resolved_path
.strip_prefix(&resolved_root)
.unwrap_or(resolved_path.as_path())
.to_string_lossy()
.replace('\\', "/")
}
_ => resolved_path.to_string_lossy().replace('\\', "/"),
}
}
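
// Editorial sketch, not part of the diff: the resulting ID is the lowercase
// hex SHA-1 of "{prefix}_{path}_{skill_name}". For a hypothetical repo-scoped
// skill at <root>/skills/fmt/SKILL.md named "fmt" (URL and paths invented):
fn example_skill_id() -> String {
    let raw_id = "repo_https://github.com/acme/tools_skills/fmt/SKILL.md_fmt";
    // `Digest` is already in scope at the top of this file.
    format!("{:x}", sha1::Sha1::digest(raw_id.as_bytes()))
}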

View File

@@ -8,6 +8,9 @@ license.workspace = true
name = "codex_ansi_escape"
path = "src/lib.rs"
[lints]
workspace = true
[dependencies]
ansi-to-tui = { workspace = true }
ratatui = { workspace = true, features = [

View File

@@ -917,7 +917,7 @@ mod tests {
+ 'static,
Fut: std::future::Future<Output = ()> + Send + 'static,
{
-start_test_remote_server_with_auth(None, handler).await
+start_test_remote_server_with_auth(/*expected_auth_token*/ None, handler).await
}
async fn start_test_remote_server_with_auth<F, Fut>(
@@ -1164,7 +1164,8 @@ mod tests {
#[tokio::test]
async fn tiny_channel_capacity_still_supports_request_roundtrip() {
-let client = start_test_client_with_capacity(SessionSource::Exec, 1).await;
+let client =
+start_test_client_with_capacity(SessionSource::Exec, /*channel_capacity*/ 1).await;
let _response: ConfigRequirementsReadResponse = client
.request_typed(ClientRequest::ConfigRequirementsRead {
request_id: RequestId::Integer(1),

View File

@@ -4,9 +4,8 @@ This module implements the websocket-backed app-server client transport.
It owns the remote connection lifecycle, including the initialize/initialized
handshake, JSON-RPC request/response routing, server-request resolution, and
notification streaming. The rest of the crate uses the same `AppServerEvent`
-surface for both in-process and remote transports, so callers such as
-`tui_app_server` can switch between them without changing their higher-level
-session logic.
+surface for both in-process and remote transports, so callers such as the TUI
+can switch between them without changing their higher-level session logic.
*/
use std::collections::HashMap;

View File

@@ -147,6 +147,115 @@
],
"type": "object"
},
"CodexAvatarAdminAwardGrantParams": {
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"awardedBy": {
"type": [
"string",
"null"
]
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"avatarId",
"awardId",
"sourceType"
],
"type": "object"
},
"CodexAvatarAdminCapabilitiesReadParams": {
"type": "object"
},
"CodexAvatarAdminProofDropGrantParams": {
"properties": {
"accountUserId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"awardId",
"sourceType"
],
"type": "object"
},
"CodexAvatarEquipParams": {
"properties": {
"avatarId": {
"type": "string"
}
},
"required": [
"avatarId"
],
"type": "object"
},
"CodexAvatarInventoryReadParams": {
"type": "object"
},
"CollaborationMode": {
"description": "Collaboration mode for a Codex session.",
"properties": {
@@ -4538,6 +4647,126 @@
"title": "Account/rateLimits/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/inventory/read"
],
"title": "Avatar/inventory/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarInventoryReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/inventory/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/equip"
],
"title": "Avatar/equipRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarEquipParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/equipRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/admin/award"
],
"title": "Avatar/admin/awardRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarAdminAwardGrantParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/awardRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/admin/proof-drop"
],
"title": "Avatar/admin/proofDropRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarAdminProofDropGrantParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/proofDropRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/admin/capabilities/read"
],
"title": "Avatar/admin/capabilities/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarAdminCapabilitiesReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/capabilities/readRequest",
"type": "object"
},
{
"properties": {
"id": {

View File

@@ -0,0 +1,54 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": "integer"
},
"awardedBy": {
"type": [
"string",
"null"
]
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"avatarId",
"awardId",
"awardedAt",
"sourceType"
],
"title": "CodexAvatarAward",
"type": "object"
}
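
A hypothetical instance satisfying the CodexAvatarAward schema above (all values invented; note that, unlike the grant params, `awardedAt` is required and non-nullable here), expressed as a `serde_json::json!` literal:

let award = serde_json::json!({
    "awardId": "award-001",
    "accountUserId": "user-42",
    "avatarId": "clippy",
    "sourceType": "proof-drop-box",
    "sourceRef": null,
    "awardedAt": 1764950400_i64,
    "awardedBy": null,
    "metadataJson": null,
    "sourceSummary": null
});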

View File

@@ -1777,7 +1777,9 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -1319,6 +1319,126 @@
"title": "Account/rateLimits/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"avatar/inventory/read"
],
"title": "Avatar/inventory/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/CodexAvatarInventoryReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/inventory/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"avatar/equip"
],
"title": "Avatar/equipRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/CodexAvatarEquipParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/equipRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"avatar/admin/award"
],
"title": "Avatar/admin/awardRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/CodexAvatarAdminAwardGrantParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/awardRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"avatar/admin/proof-drop"
],
"title": "Avatar/admin/proofDropRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/CodexAvatarAdminProofDropGrantParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/proofDropRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"avatar/admin/capabilities/read"
],
"title": "Avatar/admin/capabilities/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/CodexAvatarAdminCapabilitiesReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/capabilities/readRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -1637,6 +1757,60 @@
],
"title": "ClientRequest"
},
"CodexAvatarAward": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": "integer"
},
"awardedBy": {
"type": [
"string",
"null"
]
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"avatarId",
"awardId",
"awardedAt",
"sourceType"
],
"title": "CodexAvatarAward",
"type": "object"
},
"CommandExecutionApprovalDecision": {
"oneOf": [
{
@@ -5553,6 +5727,426 @@
],
"type": "string"
},
"CodexAvatarAdminAwardGrantParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"awardedBy": {
"type": [
"string",
"null"
]
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"avatarId",
"awardId",
"sourceType"
],
"title": "CodexAvatarAdminAwardGrantParams",
"type": "object"
},
"CodexAvatarAdminCapabilitiesReadParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "CodexAvatarAdminCapabilitiesReadParams",
"type": "object"
},
"CodexAvatarAdminCapabilitiesReadResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"canGrantAwards": {
"type": "boolean"
},
"canGrantProofDropBoxes": {
"type": "boolean"
}
},
"required": [
"canGrantAwards",
"canGrantProofDropBoxes"
],
"title": "CodexAvatarAdminCapabilitiesReadResponse",
"type": "object"
},
"CodexAvatarAdminProofDropGrantParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"awardId",
"sourceType"
],
"title": "CodexAvatarAdminProofDropGrantParams",
"type": "object"
},
"CodexAvatarBoxOddsBucket": {
"properties": {
"bucketId": {
"type": "string"
},
"label": {
"type": "string"
},
"probabilityPercent": {
"format": "int64",
"type": "integer"
}
},
"required": [
"bucketId",
"label",
"probabilityPercent"
],
"type": "object"
},
"CodexAvatarBoxRules": {
"properties": {
"guaranteedNewThreshold": {
"format": "int64",
"type": "integer"
},
"legendaryPityThreshold": {
"format": "int64",
"type": "integer"
},
"odds": {
"items": {
"$ref": "#/definitions/v2/CodexAvatarBoxOddsBucket"
},
"type": "array"
},
"oddsTableVersion": {
"type": "string"
},
"rareOrBetterPityThreshold": {
"format": "int64",
"type": "integer"
},
"rulesetVersion": {
"type": "string"
}
},
"required": [
"guaranteedNewThreshold",
"legendaryPityThreshold",
"odds",
"oddsTableVersion",
"rareOrBetterPityThreshold",
"rulesetVersion"
],
"type": "object"
},
"CodexAvatarDefinition": {
"properties": {
"accentClassName": {
"type": "string"
},
"assetRef": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"collectionDescription": {
"type": "string"
},
"collectionName": {
"type": "string"
},
"description": {
"type": "string"
},
"displayName": {
"type": "string"
},
"isProgressVisible": {
"type": "boolean"
},
"lore": {
"type": "string"
},
"rarity": {
"$ref": "#/definitions/v2/CodexAvatarRarity"
},
"silhouetteGlowClassName": {
"type": "string"
},
"sortOrder": {
"format": "int64",
"type": "integer"
},
"status": {
"$ref": "#/definitions/v2/CodexAvatarStatus"
}
},
"required": [
"accentClassName",
"assetRef",
"avatarId",
"collectionDescription",
"collectionName",
"description",
"displayName",
"isProgressVisible",
"lore",
"rarity",
"silhouetteGlowClassName",
"sortOrder",
"status"
],
"type": "object"
},
"CodexAvatarEquipParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"avatarId": {
"type": "string"
}
},
"required": [
"avatarId"
],
"title": "CodexAvatarEquipParams",
"type": "object"
},
"CodexAvatarInventoryReadParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "CodexAvatarInventoryReadParams",
"type": "object"
},
"CodexAvatarInventoryReadResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"avatarDefinitions": {
"items": {
"$ref": "#/definitions/v2/CodexAvatarDefinition"
},
"type": "array"
},
"boxRules": {
"$ref": "#/definitions/v2/CodexAvatarBoxRules"
},
"equippedAvatarId": {
"type": "string"
},
"ownedAvatars": {
"items": {
"$ref": "#/definitions/v2/CodexAvatarOwnership"
},
"type": "array"
},
"pendingRevealAwards": {
"items": {
"$ref": "#/definitions/v2/CodexAvatarRevealAward"
},
"type": "array"
},
"pityState": {
"$ref": "#/definitions/v2/CodexAvatarPityState"
},
"updatedAt": {
"format": "int64",
"type": "integer"
}
},
"required": [
"accountUserId",
"avatarDefinitions",
"boxRules",
"equippedAvatarId",
"ownedAvatars",
"pendingRevealAwards",
"pityState",
"updatedAt"
],
"title": "CodexAvatarInventoryReadResponse",
"type": "object"
},
"CodexAvatarOwnership": {
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"sourceSummary": {
"type": [
"string",
"null"
]
}
},
"required": [
"accountUserId",
"avatarId"
],
"type": "object"
},
"CodexAvatarPityState": {
"properties": {
"guaranteedNewAvailable": {
"type": "boolean"
},
"nonNewOutcomeStreak": {
"format": "int64",
"type": "integer"
},
"rollsSinceLegendary": {
"format": "int64",
"type": "integer"
},
"rollsSinceRareOrBetter": {
"format": "int64",
"type": "integer"
}
},
"required": [
"guaranteedNewAvailable",
"nonNewOutcomeStreak",
"rollsSinceLegendary",
"rollsSinceRareOrBetter"
],
"type": "object"
},
"CodexAvatarRarity": {
"enum": [
"common",
"rare",
"epic",
"legendary"
],
"type": "string"
},
"CodexAvatarRevealAward": {
"properties": {
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": "integer"
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"outcomeAvatarId": {
"type": [
"string",
"null"
]
},
"outcomeKind": {
"type": "string"
},
"pityStateAfter": {
"$ref": "#/definitions/v2/CodexAvatarPityState"
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"awardId",
"awardedAt",
"outcomeKind",
"pityStateAfter",
"sourceType"
],
"type": "object"
},
"CodexAvatarStatus": {
"enum": [
"active",
"hidden",
"retired"
],
"type": "string"
},
"CodexErrorInfo": {
"description": "This translation layer make sure that we expose codex error code in camel case.\n\nWhen an upstream HTTP status is available (for example, from the Responses API or a provider), it is forwarded in `httpStatusCode` on the relevant `codexErrorInfo` variant.",
"oneOf": [
@@ -9503,7 +10097,9 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -1894,6 +1894,126 @@
"title": "Account/rateLimits/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/inventory/read"
],
"title": "Avatar/inventory/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarInventoryReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/inventory/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/equip"
],
"title": "Avatar/equipRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarEquipParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/equipRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/admin/award"
],
"title": "Avatar/admin/awardRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarAdminAwardGrantParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/awardRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/admin/proof-drop"
],
"title": "Avatar/admin/proofDropRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarAdminProofDropGrantParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/proofDropRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"avatar/admin/capabilities/read"
],
"title": "Avatar/admin/capabilities/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/CodexAvatarAdminCapabilitiesReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Avatar/admin/capabilities/readRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -2212,6 +2332,426 @@
],
"title": "ClientRequest"
},
"CodexAvatarAdminAwardGrantParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"awardedBy": {
"type": [
"string",
"null"
]
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"avatarId",
"awardId",
"sourceType"
],
"title": "CodexAvatarAdminAwardGrantParams",
"type": "object"
},
"CodexAvatarAdminCapabilitiesReadParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "CodexAvatarAdminCapabilitiesReadParams",
"type": "object"
},
"CodexAvatarAdminCapabilitiesReadResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"canGrantAwards": {
"type": "boolean"
},
"canGrantProofDropBoxes": {
"type": "boolean"
}
},
"required": [
"canGrantAwards",
"canGrantProofDropBoxes"
],
"title": "CodexAvatarAdminCapabilitiesReadResponse",
"type": "object"
},
"CodexAvatarAdminProofDropGrantParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"awardId",
"sourceType"
],
"title": "CodexAvatarAdminProofDropGrantParams",
"type": "object"
},
"CodexAvatarBoxOddsBucket": {
"properties": {
"bucketId": {
"type": "string"
},
"label": {
"type": "string"
},
"probabilityPercent": {
"format": "int64",
"type": "integer"
}
},
"required": [
"bucketId",
"label",
"probabilityPercent"
],
"type": "object"
},
"CodexAvatarBoxRules": {
"properties": {
"guaranteedNewThreshold": {
"format": "int64",
"type": "integer"
},
"legendaryPityThreshold": {
"format": "int64",
"type": "integer"
},
"odds": {
"items": {
"$ref": "#/definitions/CodexAvatarBoxOddsBucket"
},
"type": "array"
},
"oddsTableVersion": {
"type": "string"
},
"rareOrBetterPityThreshold": {
"format": "int64",
"type": "integer"
},
"rulesetVersion": {
"type": "string"
}
},
"required": [
"guaranteedNewThreshold",
"legendaryPityThreshold",
"odds",
"oddsTableVersion",
"rareOrBetterPityThreshold",
"rulesetVersion"
],
"type": "object"
},
"CodexAvatarDefinition": {
"properties": {
"accentClassName": {
"type": "string"
},
"assetRef": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"collectionDescription": {
"type": "string"
},
"collectionName": {
"type": "string"
},
"description": {
"type": "string"
},
"displayName": {
"type": "string"
},
"isProgressVisible": {
"type": "boolean"
},
"lore": {
"type": "string"
},
"rarity": {
"$ref": "#/definitions/CodexAvatarRarity"
},
"silhouetteGlowClassName": {
"type": "string"
},
"sortOrder": {
"format": "int64",
"type": "integer"
},
"status": {
"$ref": "#/definitions/CodexAvatarStatus"
}
},
"required": [
"accentClassName",
"assetRef",
"avatarId",
"collectionDescription",
"collectionName",
"description",
"displayName",
"isProgressVisible",
"lore",
"rarity",
"silhouetteGlowClassName",
"sortOrder",
"status"
],
"type": "object"
},
"CodexAvatarEquipParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"avatarId": {
"type": "string"
}
},
"required": [
"avatarId"
],
"title": "CodexAvatarEquipParams",
"type": "object"
},
"CodexAvatarInventoryReadParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "CodexAvatarInventoryReadParams",
"type": "object"
},
"CodexAvatarInventoryReadResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"avatarDefinitions": {
"items": {
"$ref": "#/definitions/CodexAvatarDefinition"
},
"type": "array"
},
"boxRules": {
"$ref": "#/definitions/CodexAvatarBoxRules"
},
"equippedAvatarId": {
"type": "string"
},
"ownedAvatars": {
"items": {
"$ref": "#/definitions/CodexAvatarOwnership"
},
"type": "array"
},
"pendingRevealAwards": {
"items": {
"$ref": "#/definitions/CodexAvatarRevealAward"
},
"type": "array"
},
"pityState": {
"$ref": "#/definitions/CodexAvatarPityState"
},
"updatedAt": {
"format": "int64",
"type": "integer"
}
},
"required": [
"accountUserId",
"avatarDefinitions",
"boxRules",
"equippedAvatarId",
"ownedAvatars",
"pendingRevealAwards",
"pityState",
"updatedAt"
],
"title": "CodexAvatarInventoryReadResponse",
"type": "object"
},
"CodexAvatarOwnership": {
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"sourceSummary": {
"type": [
"string",
"null"
]
}
},
"required": [
"accountUserId",
"avatarId"
],
"type": "object"
},
"CodexAvatarPityState": {
"properties": {
"guaranteedNewAvailable": {
"type": "boolean"
},
"nonNewOutcomeStreak": {
"format": "int64",
"type": "integer"
},
"rollsSinceLegendary": {
"format": "int64",
"type": "integer"
},
"rollsSinceRareOrBetter": {
"format": "int64",
"type": "integer"
}
},
"required": [
"guaranteedNewAvailable",
"nonNewOutcomeStreak",
"rollsSinceLegendary",
"rollsSinceRareOrBetter"
],
"type": "object"
},
"CodexAvatarRarity": {
"enum": [
"common",
"rare",
"epic",
"legendary"
],
"type": "string"
},
"CodexAvatarRevealAward": {
"properties": {
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": "integer"
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"outcomeAvatarId": {
"type": [
"string",
"null"
]
},
"outcomeKind": {
"type": "string"
},
"pityStateAfter": {
"$ref": "#/definitions/CodexAvatarPityState"
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"awardId",
"awardedAt",
"outcomeKind",
"pityStateAfter",
"sourceType"
],
"type": "object"
},
"CodexAvatarStatus": {
"enum": [
"active",
"hidden",
"retired"
],
"type": "string"
},
"CodexErrorInfo": {
"description": "This translation layer make sure that we expose codex error code in camel case.\n\nWhen an upstream HTTP status is available (for example, from the Responses API or a provider), it is forwarded in `httpStatusCode` on the relevant `codexErrorInfo` variant.",
"oneOf": [
@@ -6317,7 +6857,9 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -29,7 +29,9 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -34,7 +34,9 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -0,0 +1,56 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"awardedBy": {
"type": [
"string",
"null"
]
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"accountUserId",
"avatarId",
"awardId",
"sourceType"
],
"title": "CodexAvatarAdminAwardGrantParams",
"type": "object"
}

View File

@@ -0,0 +1,5 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "CodexAvatarAdminCapabilitiesReadParams",
"type": "object"
}

View File

@@ -0,0 +1,17 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"canGrantAwards": {
"type": "boolean"
},
"canGrantProofDropBoxes": {
"type": "boolean"
}
},
"required": [
"canGrantAwards",
"canGrantProofDropBoxes"
],
"title": "CodexAvatarAdminCapabilitiesReadResponse",
"type": "object"
}

View File

@@ -0,0 +1,13 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"avatarId": {
"type": "string"
}
},
"required": [
"avatarId"
],
"title": "CodexAvatarEquipParams",
"type": "object"
}

View File

@@ -0,0 +1,5 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "CodexAvatarInventoryReadParams",
"type": "object"
}

View File

@@ -0,0 +1,286 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"CodexAvatarBoxOddsBucket": {
"properties": {
"bucketId": {
"type": "string"
},
"label": {
"type": "string"
},
"probabilityPercent": {
"format": "int64",
"type": "integer"
}
},
"required": [
"bucketId",
"label",
"probabilityPercent"
],
"type": "object"
},
"CodexAvatarBoxRules": {
"properties": {
"guaranteedNewThreshold": {
"format": "int64",
"type": "integer"
},
"legendaryPityThreshold": {
"format": "int64",
"type": "integer"
},
"odds": {
"items": {
"$ref": "#/definitions/CodexAvatarBoxOddsBucket"
},
"type": "array"
},
"oddsTableVersion": {
"type": "string"
},
"rareOrBetterPityThreshold": {
"format": "int64",
"type": "integer"
},
"rulesetVersion": {
"type": "string"
}
},
"required": [
"guaranteedNewThreshold",
"legendaryPityThreshold",
"odds",
"oddsTableVersion",
"rareOrBetterPityThreshold",
"rulesetVersion"
],
"type": "object"
},
"CodexAvatarDefinition": {
"properties": {
"accentClassName": {
"type": "string"
},
"assetRef": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"collectionDescription": {
"type": "string"
},
"collectionName": {
"type": "string"
},
"description": {
"type": "string"
},
"displayName": {
"type": "string"
},
"isProgressVisible": {
"type": "boolean"
},
"lore": {
"type": "string"
},
"rarity": {
"$ref": "#/definitions/CodexAvatarRarity"
},
"silhouetteGlowClassName": {
"type": "string"
},
"sortOrder": {
"format": "int64",
"type": "integer"
},
"status": {
"$ref": "#/definitions/CodexAvatarStatus"
}
},
"required": [
"accentClassName",
"assetRef",
"avatarId",
"collectionDescription",
"collectionName",
"description",
"displayName",
"isProgressVisible",
"lore",
"rarity",
"silhouetteGlowClassName",
"sortOrder",
"status"
],
"type": "object"
},
"CodexAvatarOwnership": {
"properties": {
"accountUserId": {
"type": "string"
},
"avatarId": {
"type": "string"
},
"sourceSummary": {
"type": [
"string",
"null"
]
}
},
"required": [
"accountUserId",
"avatarId"
],
"type": "object"
},
"CodexAvatarPityState": {
"properties": {
"guaranteedNewAvailable": {
"type": "boolean"
},
"nonNewOutcomeStreak": {
"format": "int64",
"type": "integer"
},
"rollsSinceLegendary": {
"format": "int64",
"type": "integer"
},
"rollsSinceRareOrBetter": {
"format": "int64",
"type": "integer"
}
},
"required": [
"guaranteedNewAvailable",
"nonNewOutcomeStreak",
"rollsSinceLegendary",
"rollsSinceRareOrBetter"
],
"type": "object"
},
"CodexAvatarRarity": {
"enum": [
"common",
"rare",
"epic",
"legendary"
],
"type": "string"
},
"CodexAvatarRevealAward": {
"properties": {
"awardId": {
"type": "string"
},
"awardedAt": {
"format": "int64",
"type": "integer"
},
"metadataJson": {
"type": [
"string",
"null"
]
},
"outcomeAvatarId": {
"type": [
"string",
"null"
]
},
"outcomeKind": {
"type": "string"
},
"pityStateAfter": {
"$ref": "#/definitions/CodexAvatarPityState"
},
"sourceRef": {
"type": [
"string",
"null"
]
},
"sourceSummary": {
"type": [
"string",
"null"
]
},
"sourceType": {
"type": "string"
}
},
"required": [
"awardId",
"awardedAt",
"outcomeKind",
"pityStateAfter",
"sourceType"
],
"type": "object"
},
"CodexAvatarStatus": {
"enum": [
"active",
"hidden",
"retired"
],
"type": "string"
}
},
"properties": {
"accountUserId": {
"type": "string"
},
"avatarDefinitions": {
"items": {
"$ref": "#/definitions/CodexAvatarDefinition"
},
"type": "array"
},
"boxRules": {
"$ref": "#/definitions/CodexAvatarBoxRules"
},
"equippedAvatarId": {
"type": "string"
},
"ownedAvatars": {
"items": {
"$ref": "#/definitions/CodexAvatarOwnership"
},
"type": "array"
},
"pendingRevealAwards": {
"items": {
"$ref": "#/definitions/CodexAvatarRevealAward"
},
"type": "array"
},
"pityState": {
"$ref": "#/definitions/CodexAvatarPityState"
},
"updatedAt": {
"format": "int64",
"type": "integer"
}
},
"required": [
"accountUserId",
"avatarDefinitions",
"boxRules",
"equippedAvatarId",
"ownedAvatars",
"pendingRevealAwards",
"pityState",
"updatedAt"
],
"title": "CodexAvatarInventoryReadResponse",
"type": "object"
}

View File

@@ -29,7 +29,9 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -52,7 +52,9 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

File diff suppressed because one or more lines are too long

View File

@@ -2,4 +2,4 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type PlanType = "free" | "go" | "plus" | "pro" | "team" | "business" | "enterprise" | "edu" | "unknown";
export type PlanType = "free" | "go" | "plus" | "pro" | "team" | "self_serve_business_usage_based" | "business" | "enterprise_cbp_usage_based" | "enterprise" | "edu" | "unknown";

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarAdminAwardGrantParams = { accountUserId: string, awardId: string, avatarId: string, sourceType: string, sourceRef?: string | null, awardedAt?: bigint | null, awardedBy?: string | null, metadataJson?: string | null, sourceSummary?: string | null, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarAdminCapabilitiesReadParams = Record<string, never>;

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarAdminCapabilitiesReadResponse = { canGrantAwards: boolean, canGrantProofDropBoxes: boolean, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarAward = { awardId: string, accountUserId: string, avatarId: string, sourceType: string, sourceRef: string | null, awardedAt: bigint, awardedBy: string | null, metadataJson: string | null, sourceSummary: string | null, };

View File

@@ -0,0 +1,7 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { CodexAvatarRarity } from "./CodexAvatarRarity";
import type { CodexAvatarStatus } from "./CodexAvatarStatus";
export type CodexAvatarDefinition = { avatarId: string, displayName: string, description: string, rarity: CodexAvatarRarity, assetRef: string, status: CodexAvatarStatus, sortOrder: bigint, collectionName: string, collectionDescription: string, lore: string, accentClassName: string, silhouetteGlowClassName: string, isProgressVisible: boolean, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarEquipParams = { avatarId: string, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarInventoryReadParams = Record<string, never>;

View File

@@ -0,0 +1,10 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { CodexAvatarBoxRules } from "./CodexAvatarBoxRules";
import type { CodexAvatarDefinition } from "./CodexAvatarDefinition";
import type { CodexAvatarOwnership } from "./CodexAvatarOwnership";
import type { CodexAvatarPityState } from "./CodexAvatarPityState";
import type { CodexAvatarRevealAward } from "./CodexAvatarRevealAward";
export type CodexAvatarInventoryReadResponse = { accountUserId: string, avatarDefinitions: Array<CodexAvatarDefinition>, ownedAvatars: Array<CodexAvatarOwnership>, equippedAvatarId: string, boxRules: CodexAvatarBoxRules, pityState: CodexAvatarPityState, pendingRevealAwards: Array<CodexAvatarRevealAward>, updatedAt: bigint, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarOwnership = { accountUserId: string, avatarId: string, sourceSummary: string | null, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarRarity = "common" | "rare" | "epic" | "legendary";

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CodexAvatarStatus = "active" | "hidden" | "retired";

View File

@@ -31,6 +31,22 @@ export type { CancelLoginAccountStatus } from "./CancelLoginAccountStatus";
export type { ChatgptAuthTokensRefreshParams } from "./ChatgptAuthTokensRefreshParams";
export type { ChatgptAuthTokensRefreshReason } from "./ChatgptAuthTokensRefreshReason";
export type { ChatgptAuthTokensRefreshResponse } from "./ChatgptAuthTokensRefreshResponse";
export type { CodexAvatarAdminAwardGrantParams } from "./CodexAvatarAdminAwardGrantParams";
export type { CodexAvatarAdminCapabilitiesReadParams } from "./CodexAvatarAdminCapabilitiesReadParams";
export type { CodexAvatarAdminCapabilitiesReadResponse } from "./CodexAvatarAdminCapabilitiesReadResponse";
export type { CodexAvatarAdminProofDropGrantParams } from "./CodexAvatarAdminProofDropGrantParams";
export type { CodexAvatarAward } from "./CodexAvatarAward";
export type { CodexAvatarBoxOddsBucket } from "./CodexAvatarBoxOddsBucket";
export type { CodexAvatarBoxRules } from "./CodexAvatarBoxRules";
export type { CodexAvatarDefinition } from "./CodexAvatarDefinition";
export type { CodexAvatarEquipParams } from "./CodexAvatarEquipParams";
export type { CodexAvatarInventoryReadParams } from "./CodexAvatarInventoryReadParams";
export type { CodexAvatarInventoryReadResponse } from "./CodexAvatarInventoryReadResponse";
export type { CodexAvatarOwnership } from "./CodexAvatarOwnership";
export type { CodexAvatarPityState } from "./CodexAvatarPityState";
export type { CodexAvatarRarity } from "./CodexAvatarRarity";
export type { CodexAvatarRevealAward } from "./CodexAvatarRevealAward";
export type { CodexAvatarStatus } from "./CodexAvatarStatus";
export type { CodexErrorInfo } from "./CodexErrorInfo";
export type { CollabAgentState } from "./CollabAgentState";
export type { CollabAgentStatus } from "./CollabAgentStatus";

View File

@@ -118,6 +118,7 @@ pub fn generate_ts_with_options(
ServerRequest::export_all_to(out_dir)?;
export_server_responses(out_dir)?;
ServerNotification::export_all_to(out_dir)?;
crate::protocol::v2::CodexAvatarAward::export_all_to(out_dir)?;
if !options.experimental_api {
filter_experimental_ts(out_dir)?;
@@ -206,6 +207,12 @@ pub fn generate_json_with_experimental(out_dir: &Path, experimental_api: bool) -
|d| write_json_schema_with_return::<crate::ServerRequest>(d, "ServerRequest"),
|d| write_json_schema_with_return::<crate::ClientNotification>(d, "ClientNotification"),
|d| write_json_schema_with_return::<crate::ServerNotification>(d, "ServerNotification"),
|d| {
write_json_schema_with_return::<crate::protocol::v2::CodexAvatarAward>(
d,
"CodexAvatarAward",
)
},
];
let mut schemas: Vec<GeneratedSchema> = Vec::new();
@@ -2690,7 +2697,7 @@ export type Config = { stableField: Keep, unstableField: string | null } & ({ [k
fn generate_json_filters_experimental_fields_and_methods() -> Result<()> {
let output_dir = std::env::temp_dir().join(format!("codex_schema_{}", Uuid::now_v7()));
fs::create_dir(&output_dir)?;
generate_json_with_experimental(&output_dir, false)?;
generate_json_with_experimental(&output_dir, /*experimental_api*/ false)?;
let thread_start_json =
fs::read_to_string(output_dir.join("v2").join("ThreadStartParams.json"))?;

View File

@@ -485,6 +485,31 @@ client_request_definitions! {
response: v2::GetAccountRateLimitsResponse,
},
AvatarInventoryRead => "avatar/inventory/read" {
params: v2::CodexAvatarInventoryReadParams,
response: v2::CodexAvatarInventoryReadResponse,
},
AvatarEquip => "avatar/equip" {
params: v2::CodexAvatarEquipParams,
response: v2::CodexAvatarInventoryReadResponse,
},
AvatarAdminAward => "avatar/admin/award" {
params: v2::CodexAvatarAdminAwardGrantParams,
response: v2::CodexAvatarInventoryReadResponse,
},
AvatarAdminProofDrop => "avatar/admin/proof-drop" {
params: v2::CodexAvatarAdminProofDropGrantParams,
response: v2::CodexAvatarInventoryReadResponse,
},
AvatarAdminCapabilitiesRead => "avatar/admin/capabilities/read" {
params: v2::CodexAvatarAdminCapabilitiesReadParams,
response: v2::CodexAvatarAdminCapabilitiesReadResponse,
},
FeedbackUpload => "feedback/upload" {
params: v2::FeedbackUploadParams,
response: v2::FeedbackUploadResponse,
@@ -1517,6 +1542,129 @@ mod tests {
Ok(())
}
#[test]
fn serialize_avatar_inventory_read() -> Result<()> {
let request = ClientRequest::AvatarInventoryRead {
request_id: RequestId::Integer(7),
params: v2::CodexAvatarInventoryReadParams::default(),
};
assert_eq!(
json!({
"method": "avatar/inventory/read",
"id": 7,
"params": {}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_avatar_equip() -> Result<()> {
let request = ClientRequest::AvatarEquip {
request_id: RequestId::Integer(8),
params: v2::CodexAvatarEquipParams {
avatar_id: "clippy".to_string(),
},
};
assert_eq!(
json!({
"method": "avatar/equip",
"id": 8,
"params": {
"avatarId": "clippy"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_avatar_admin_award() -> Result<()> {
let request = ClientRequest::AvatarAdminAward {
request_id: RequestId::Integer(9),
params: v2::CodexAvatarAdminAwardGrantParams {
account_user_id: "target-user-123".to_string(),
award_id: "manual-grant-1".to_string(),
avatar_id: "prism".to_string(),
source_type: "manual-admin-grant".to_string(),
source_ref: Some("support-ticket-1".to_string()),
awarded_at: Some(123),
awarded_by: Some("admin-user".to_string()),
metadata_json: Some("{\"reason\":\"support\"}".to_string()),
source_summary: Some("Manual support grant".to_string()),
},
};
assert_eq!(
json!({
"method": "avatar/admin/award",
"id": 9,
"params": {
"accountUserId": "target-user-123",
"awardId": "manual-grant-1",
"avatarId": "prism",
"sourceType": "manual-admin-grant",
"sourceRef": "support-ticket-1",
"awardedAt": 123,
"awardedBy": "admin-user",
"metadataJson": "{\"reason\":\"support\"}",
"sourceSummary": "Manual support grant"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_avatar_admin_proof_drop() -> Result<()> {
let request = ClientRequest::AvatarAdminProofDrop {
request_id: RequestId::Integer(10),
params: v2::CodexAvatarAdminProofDropGrantParams {
account_user_id: "target-user-123".to_string(),
award_id: "proof-drop-1".to_string(),
source_type: "proof-drop-box".to_string(),
source_ref: Some("support-ticket-1".to_string()),
awarded_at: Some(123),
source_summary: Some("Manual proof-drop box".to_string()),
},
};
assert_eq!(
json!({
"method": "avatar/admin/proof-drop",
"id": 10,
"params": {
"accountUserId": "target-user-123",
"awardId": "proof-drop-1",
"sourceType": "proof-drop-box",
"sourceRef": "support-ticket-1",
"awardedAt": 123,
"sourceSummary": "Manual proof-drop box"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_avatar_admin_capabilities_read() -> Result<()> {
let request = ClientRequest::AvatarAdminCapabilitiesRead {
request_id: RequestId::Integer(11),
params: v2::CodexAvatarAdminCapabilitiesReadParams::default(),
};
assert_eq!(
json!({
"method": "avatar/admin/capabilities/read",
"id": 11,
"params": {}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn account_serializes_fields_in_camel_case() -> Result<()> {
let api_key = v2::Account::ApiKey {};

View File

@@ -1737,6 +1737,188 @@ pub struct GetAccountResponse {
pub requires_openai_auth: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CodexAvatarStatus {
Active,
Hidden,
Retired,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CodexAvatarRarity {
Common,
Rare,
Epic,
Legendary,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarDefinition {
pub avatar_id: String,
pub display_name: String,
pub description: String,
pub rarity: CodexAvatarRarity,
pub asset_ref: String,
pub status: CodexAvatarStatus,
pub sort_order: i64,
pub collection_name: String,
pub collection_description: String,
pub lore: String,
pub accent_class_name: String,
pub silhouette_glow_class_name: String,
pub is_progress_visible: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarOwnership {
pub account_user_id: String,
pub avatar_id: String,
pub source_summary: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarAward {
pub award_id: String,
pub account_user_id: String,
pub avatar_id: String,
pub source_type: String,
pub source_ref: Option<String>,
pub awarded_at: i64,
pub awarded_by: Option<String>,
pub metadata_json: Option<String>,
pub source_summary: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarInventoryReadParams {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarEquipParams {
pub avatar_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarAdminAwardGrantParams {
pub account_user_id: String,
pub award_id: String,
pub avatar_id: String,
pub source_type: String,
#[ts(optional = nullable)]
pub source_ref: Option<String>,
#[ts(optional = nullable)]
pub awarded_at: Option<i64>,
#[ts(optional = nullable)]
pub awarded_by: Option<String>,
#[ts(optional = nullable)]
pub metadata_json: Option<String>,
#[ts(optional = nullable)]
pub source_summary: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarAdminProofDropGrantParams {
pub account_user_id: String,
pub award_id: String,
pub source_type: String,
#[ts(optional = nullable)]
pub source_ref: Option<String>,
#[ts(optional = nullable)]
pub awarded_at: Option<i64>,
#[ts(optional = nullable)]
pub source_summary: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarAdminCapabilitiesReadParams {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarAdminCapabilitiesReadResponse {
pub can_grant_awards: bool,
pub can_grant_proof_drop_boxes: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarBoxOddsBucket {
pub bucket_id: String,
pub label: String,
pub probability_percent: i64,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarPityState {
pub rolls_since_rare_or_better: i64,
pub rolls_since_legendary: i64,
pub non_new_outcome_streak: i64,
pub guaranteed_new_available: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarBoxRules {
pub ruleset_version: String,
pub odds_table_version: String,
pub rare_or_better_pity_threshold: i64,
pub legendary_pity_threshold: i64,
pub guaranteed_new_threshold: i64,
pub odds: Vec<CodexAvatarBoxOddsBucket>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarRevealAward {
pub award_id: String,
pub awarded_at: i64,
pub source_type: String,
pub source_ref: Option<String>,
pub source_summary: Option<String>,
pub outcome_kind: String,
pub outcome_avatar_id: Option<String>,
pub metadata_json: Option<String>,
pub pity_state_after: CodexAvatarPityState,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CodexAvatarInventoryReadResponse {
pub account_user_id: String,
pub avatar_definitions: Vec<CodexAvatarDefinition>,
pub owned_avatars: Vec<CodexAvatarOwnership>,
pub equipped_avatar_id: String,
pub box_rules: CodexAvatarBoxRules,
pub pity_state: CodexAvatarPityState,
pub pending_reveal_awards: Vec<CodexAvatarRevealAward>,
pub updated_at: i64,
}
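// Editorial note, not part of the diff: ts-rs maps these i64 fields to
// TypeScript bigint, which is why the generated bindings above declare
// `sortOrder: bigint` and `updatedAt: bigint` while the JSON schemas mark
// the same fields with "format": "int64".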
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]

View File

@@ -7,6 +7,7 @@ use crate::export::filter_experimental_ts_tree;
use crate::export::generate_index_ts_tree;
use crate::protocol::common::visit_client_response_types;
use crate::protocol::common::visit_server_response_types;
use crate::protocol::v2::CodexAvatarAward;
use anyhow::Context;
use anyhow::Result;
use serde_json::Map;
@@ -65,6 +66,7 @@ pub fn generate_typescript_schema_fixture_subtree_for_tests() -> Result<BTreeMap
visit_server_response_types(visitor);
})?;
collect_typescript_fixture_file::<ServerNotification>(&mut files, &mut seen)?;
collect_typescript_fixture_file::<CodexAvatarAward>(&mut files, &mut seen)?;
filter_experimental_ts_tree(&mut files)?;
generate_index_ts_tree(&mut files);

View File

@@ -23,7 +23,7 @@ fn typescript_schema_fixtures_match_generated() -> Result<()> {
#[test]
fn json_schema_fixtures_match_generated() -> Result<()> {
assert_schema_fixtures_match_generated("json", |output_dir| {
-generate_json_with_experimental(output_dir, false)
+generate_json_with_experimental(output_dir, /*experimental_api*/ false)
})
}

View File

@@ -0,0 +1,284 @@
use crate::error_code::INTERNAL_ERROR_CODE;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use codex_app_server_protocol::CodexAvatarAdminAwardGrantParams;
use codex_app_server_protocol::CodexAvatarAdminCapabilitiesReadResponse;
use codex_app_server_protocol::CodexAvatarAdminProofDropGrantParams;
use codex_app_server_protocol::CodexAvatarBoxOddsBucket;
use codex_app_server_protocol::CodexAvatarBoxRules;
use codex_app_server_protocol::CodexAvatarDefinition;
use codex_app_server_protocol::CodexAvatarEquipParams;
use codex_app_server_protocol::CodexAvatarInventoryReadResponse;
use codex_app_server_protocol::CodexAvatarOwnership;
use codex_app_server_protocol::CodexAvatarPityState;
use codex_app_server_protocol::CodexAvatarRarity;
use codex_app_server_protocol::CodexAvatarRevealAward;
use codex_app_server_protocol::CodexAvatarStatus;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_backend_client::Client as BackendClient;
use codex_backend_client::CodexAvatarAdminAwardGrantRequest as BackendAvatarAdminAwardGrantRequest;
use codex_backend_client::CodexAvatarAdminCapabilitiesResponse as BackendAvatarAdminCapabilitiesResponse;
use codex_backend_client::CodexAvatarAdminProofDropGrantRequest as BackendAvatarAdminProofDropGrantRequest;
use codex_backend_client::CodexAvatarBoxOddsBucket as BackendAvatarBoxOddsBucket;
use codex_backend_client::CodexAvatarBoxRules as BackendAvatarBoxRules;
use codex_backend_client::CodexAvatarDefinition as BackendAvatarDefinition;
use codex_backend_client::CodexAvatarInventoryResponse as BackendAvatarInventoryResponse;
use codex_backend_client::CodexAvatarOwnership as BackendAvatarOwnership;
use codex_backend_client::CodexAvatarPityState as BackendAvatarPityState;
use codex_backend_client::CodexAvatarRarity as BackendAvatarRarity;
use codex_backend_client::CodexAvatarRevealAward as BackendAvatarRevealAward;
use codex_backend_client::CodexAvatarStatus as BackendAvatarStatus;
use codex_backend_client::RequestError;
use codex_core::AuthManager;
use serde_json::Value;
pub(crate) async fn read_avatar_inventory(
auth_manager: &AuthManager,
chatgpt_base_url: &str,
) -> Result<CodexAvatarInventoryReadResponse, JSONRPCErrorError> {
let client = avatar_backend_client(auth_manager, chatgpt_base_url).await?;
let response = client
.get_avatar_inventory()
.await
.map_err(|err| backend_avatar_error("read avatar inventory", err))?;
Ok(map_avatar_inventory_response(response))
}
pub(crate) async fn equip_avatar(
auth_manager: &AuthManager,
chatgpt_base_url: &str,
params: CodexAvatarEquipParams,
) -> Result<CodexAvatarInventoryReadResponse, JSONRPCErrorError> {
let client = avatar_backend_client(auth_manager, chatgpt_base_url).await?;
let response = client
.equip_avatar(params.avatar_id)
.await
.map_err(|err| backend_avatar_error("equip avatar", err))?;
Ok(map_avatar_inventory_response(response))
}
pub(crate) async fn grant_admin_avatar_award(
auth_manager: &AuthManager,
chatgpt_base_url: &str,
params: CodexAvatarAdminAwardGrantParams,
) -> Result<CodexAvatarInventoryReadResponse, JSONRPCErrorError> {
let client = avatar_backend_client(auth_manager, chatgpt_base_url).await?;
let response = client
.grant_admin_avatar_award(BackendAvatarAdminAwardGrantRequest {
account_user_id: params.account_user_id,
award_id: params.award_id,
avatar_id: params.avatar_id,
source_type: params.source_type,
source_ref: params.source_ref,
awarded_at: params.awarded_at,
awarded_by: params.awarded_by,
metadata_json: params.metadata_json,
source_summary: params.source_summary,
})
.await
.map_err(|err| backend_avatar_error("grant avatar award", err))?;
Ok(map_avatar_inventory_response(response))
}
pub(crate) async fn grant_admin_avatar_proof_drop(
auth_manager: &AuthManager,
chatgpt_base_url: &str,
params: CodexAvatarAdminProofDropGrantParams,
) -> Result<CodexAvatarInventoryReadResponse, JSONRPCErrorError> {
let client = avatar_backend_client(auth_manager, chatgpt_base_url).await?;
let response = client
.grant_admin_avatar_proof_drop(BackendAvatarAdminProofDropGrantRequest {
account_user_id: params.account_user_id,
award_id: params.award_id,
source_type: params.source_type,
source_ref: params.source_ref,
awarded_at: params.awarded_at,
source_summary: params.source_summary,
})
.await
.map_err(|err| backend_avatar_error("grant proof-drop box", err))?;
Ok(map_avatar_inventory_response(response))
}
pub(crate) async fn read_avatar_admin_capabilities(
auth_manager: &AuthManager,
chatgpt_base_url: &str,
) -> Result<CodexAvatarAdminCapabilitiesReadResponse, JSONRPCErrorError> {
let client = avatar_backend_client(auth_manager, chatgpt_base_url).await?;
let response = client
.get_avatar_admin_capabilities()
.await
.map_err(|err| backend_avatar_error("read avatar admin capabilities", err))?;
Ok(map_avatar_admin_capabilities_response(response))
}
async fn avatar_backend_client(
auth_manager: &AuthManager,
chatgpt_base_url: &str,
) -> Result<BackendClient, JSONRPCErrorError> {
let Some(auth) = auth_manager.auth().await else {
return Err(JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "codex account authentication required to manage avatars".to_string(),
data: None,
});
};
if !auth.is_chatgpt_auth() {
return Err(JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "chatgpt authentication required to manage avatars".to_string(),
data: None,
});
}
BackendClient::from_auth(chatgpt_base_url.to_string(), &auth).map_err(|err| JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!("failed to construct backend client: {err}"),
data: None,
})
}
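
When no ChatGPT credentials are available, every avatar RPC short-circuits here with an invalid-request error before any network call is made. A hedged sketch of the resulting JSON-RPC error frame, assuming INVALID_REQUEST_ERROR_CODE carries the standard JSON-RPC value -32600 (the constant's actual value is not shown in this diff) and an illustrative request id:

use serde_json::json;

// Illustrative only: the error frame an unauthenticated `avatar/equip`
// call would surface, reusing the message string from `avatar_backend_client`.
fn example_unauthenticated_avatar_error() -> serde_json::Value {
    json!({
        "jsonrpc": "2.0",
        "id": 7, // hypothetical request id
        "error": {
            "code": -32600, // assumed value of INVALID_REQUEST_ERROR_CODE
            "message": "codex account authentication required to manage avatars"
        }
    })
}
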
fn backend_avatar_error(action: &str, err: RequestError) -> JSONRPCErrorError {
match &err {
RequestError::UnexpectedStatus { status, body, .. }
if status.as_u16() == 400 || status.as_u16() == 403 =>
{
JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: avatar_error_detail(body)
.unwrap_or_else(|| format!("failed to {action}: {err}")),
data: None,
}
}
_ => JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!("failed to {action}: {err}"),
data: None,
},
}
}
fn avatar_error_detail(body: &str) -> Option<String> {
let value: Value = serde_json::from_str(body).ok()?;
value
.get("detail")
.and_then(Value::as_str)
.map(str::to_string)
}
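
`avatar_error_detail` defines the fallback contract for `backend_avatar_error`: a parseable body with a string `detail` field wins, anything else falls back to the generic `failed to {action}` message. A small sketch of that contract as a test, assuming it sat in the same module as the function above:

#[cfg(test)]
mod avatar_error_detail_tests {
    use super::avatar_error_detail;

    #[test]
    fn extracts_detail_only_from_string_detail_fields() {
        // A 400/403 body with a string `detail` yields that message verbatim.
        assert_eq!(
            avatar_error_detail(r#"{"detail":"award already granted"}"#),
            Some("award already granted".to_string())
        );
        // Non-JSON bodies and non-string `detail` values yield None, so
        // backend_avatar_error falls back to its generic message.
        assert_eq!(avatar_error_detail("not json"), None);
        assert_eq!(avatar_error_detail(r#"{"detail":42}"#), None);
    }
}
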
fn map_avatar_inventory_response(
response: BackendAvatarInventoryResponse,
) -> CodexAvatarInventoryReadResponse {
CodexAvatarInventoryReadResponse {
account_user_id: response.account_user_id,
avatar_definitions: response
.avatar_definitions
.into_iter()
.map(map_avatar_definition)
.collect(),
owned_avatars: response
.owned_avatars
.into_iter()
.map(map_avatar_ownership)
.collect(),
equipped_avatar_id: response.equipped_avatar_id,
box_rules: map_avatar_box_rules(response.box_rules),
pity_state: map_avatar_pity_state(response.pity_state),
pending_reveal_awards: response
.pending_reveal_awards
.into_iter()
.map(map_avatar_reveal_award)
.collect(),
updated_at: response.updated_at,
}
}
fn map_avatar_admin_capabilities_response(
response: BackendAvatarAdminCapabilitiesResponse,
) -> CodexAvatarAdminCapabilitiesReadResponse {
CodexAvatarAdminCapabilitiesReadResponse {
can_grant_awards: response.can_grant_awards,
can_grant_proof_drop_boxes: response.can_grant_proof_drop_boxes,
}
}
fn map_avatar_box_rules(rules: BackendAvatarBoxRules) -> CodexAvatarBoxRules {
CodexAvatarBoxRules {
ruleset_version: rules.ruleset_version,
odds_table_version: rules.odds_table_version,
rare_or_better_pity_threshold: rules.rare_or_better_pity_threshold,
legendary_pity_threshold: rules.legendary_pity_threshold,
guaranteed_new_threshold: rules.guaranteed_new_threshold,
odds: rules
.odds
.into_iter()
.map(map_avatar_box_odds_bucket)
.collect(),
}
}
fn map_avatar_box_odds_bucket(bucket: BackendAvatarBoxOddsBucket) -> CodexAvatarBoxOddsBucket {
CodexAvatarBoxOddsBucket {
bucket_id: bucket.bucket_id,
label: bucket.label,
probability_percent: bucket.probability_percent,
}
}
fn map_avatar_pity_state(pity_state: BackendAvatarPityState) -> CodexAvatarPityState {
CodexAvatarPityState {
rolls_since_rare_or_better: pity_state.rolls_since_rare_or_better,
rolls_since_legendary: pity_state.rolls_since_legendary,
non_new_outcome_streak: pity_state.non_new_outcome_streak,
guaranteed_new_available: pity_state.guaranteed_new_available,
}
}
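
These pity fields are mapped verbatim, but their names imply the usual loot-box guarantees. A hedged sketch of how a client might read them against `rare_or_better_pity_threshold` (the counting semantics are an assumption; this diff only copies fields):

/// Hypothetical client-side interpretation: rolls remaining before the
/// rare-or-better guarantee kicks in. Assumes the counter counts up
/// toward the threshold, which this diff does not actually specify.
fn rolls_until_rare_guarantee(
    rare_or_better_pity_threshold: u32,
    rolls_since_rare_or_better: u32,
) -> u32 {
    rare_or_better_pity_threshold.saturating_sub(rolls_since_rare_or_better)
}
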
fn map_avatar_reveal_award(award: BackendAvatarRevealAward) -> CodexAvatarRevealAward {
CodexAvatarRevealAward {
award_id: award.award_id,
awarded_at: award.awarded_at,
source_type: award.source_type,
source_ref: award.source_ref,
source_summary: award.source_summary,
outcome_kind: award.outcome_kind,
outcome_avatar_id: award.outcome_avatar_id,
metadata_json: award.metadata_json,
pity_state_after: map_avatar_pity_state(award.pity_state_after),
}
}
fn map_avatar_definition(definition: BackendAvatarDefinition) -> CodexAvatarDefinition {
CodexAvatarDefinition {
avatar_id: definition.avatar_id,
display_name: definition.display_name,
description: definition.description,
rarity: match definition.rarity {
BackendAvatarRarity::Common => CodexAvatarRarity::Common,
BackendAvatarRarity::Rare => CodexAvatarRarity::Rare,
BackendAvatarRarity::Epic => CodexAvatarRarity::Epic,
BackendAvatarRarity::Legendary => CodexAvatarRarity::Legendary,
},
asset_ref: definition.asset_ref,
status: match definition.status {
BackendAvatarStatus::Active => CodexAvatarStatus::Active,
BackendAvatarStatus::Hidden => CodexAvatarStatus::Hidden,
BackendAvatarStatus::Retired => CodexAvatarStatus::Retired,
},
sort_order: definition.sort_order,
collection_name: definition.collection_name,
collection_description: definition.collection_description,
lore: definition.lore,
accent_class_name: definition.accent_class_name,
silhouette_glow_class_name: definition.silhouette_glow_class_name,
is_progress_visible: definition.is_progress_visible,
}
}
fn map_avatar_ownership(ownership: BackendAvatarOwnership) -> CodexAvatarOwnership {
CodexAvatarOwnership {
account_user_id: ownership.account_user_id,
avatar_id: ownership.avatar_id,
source_summary: ownership.source_summary,
}
}

View File

@@ -1,3 +1,4 @@
use crate::avatar_rpc;
use crate::bespoke_event_handling::apply_bespoke_event_handling;
use crate::command_exec::CommandExecManager;
use crate::command_exec::StartCommandExecParams;
@@ -32,6 +33,10 @@ use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginAccountResponse;
use codex_app_server_protocol::CancelLoginAccountStatus;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::CodexAvatarAdminAwardGrantParams;
use codex_app_server_protocol::CodexAvatarAdminProofDropGrantParams;
use codex_app_server_protocol::CodexAvatarEquipParams;
use codex_app_server_protocol::CodexErrorInfo as AppServerCodexErrorInfo;
use codex_app_server_protocol::CollaborationModeListParams;
use codex_app_server_protocol::CollaborationModeListResponse;
@@ -179,6 +184,7 @@ use codex_arg0::Arg0DispatchPaths;
use codex_backend_client::Client as BackendClient;
use codex_chatgpt::connectors;
use codex_cloud_requirements::cloud_requirements_loader;
use codex_core::AnalyticsEventsClient;
use codex_core::AuthManager;
use codex_core::CodexAuth;
use codex_core::CodexThread;
@@ -403,6 +409,7 @@ pub(crate) struct CodexMessageProcessor {
auth_manager: Arc<AuthManager>,
thread_manager: Arc<ThreadManager>,
outgoing: Arc<OutgoingMessageSender>,
analytics_events_client: AnalyticsEventsClient,
arg0_paths: Arg0DispatchPaths,
config: Arc<Config>,
cli_overrides: Arc<RwLock<Vec<(String, TomlValue)>>>,
@@ -433,6 +440,8 @@ struct ListenerTaskContext {
thread_manager: Arc<ThreadManager>,
thread_state_manager: ThreadStateManager,
outgoing: Arc<OutgoingMessageSender>,
analytics_events_client: AnalyticsEventsClient,
general_analytics_enabled: bool,
thread_watch_manager: ThreadWatchManager,
fallback_model_provider: String,
codex_home: PathBuf,
@@ -455,6 +464,7 @@ pub(crate) struct CodexMessageProcessorArgs {
pub(crate) auth_manager: Arc<AuthManager>,
pub(crate) thread_manager: Arc<ThreadManager>,
pub(crate) outgoing: Arc<OutgoingMessageSender>,
pub(crate) analytics_events_client: AnalyticsEventsClient,
pub(crate) arg0_paths: Arg0DispatchPaths,
pub(crate) config: Arc<Config>,
pub(crate) cli_overrides: Arc<RwLock<Vec<(String, TomlValue)>>>,
@@ -519,6 +529,7 @@ impl CodexMessageProcessor {
auth_manager,
thread_manager,
outgoing,
analytics_events_client,
arg0_paths,
config,
cli_overrides,
@@ -531,6 +542,7 @@ impl CodexMessageProcessor {
auth_manager,
thread_manager,
outgoing: outgoing.clone(),
analytics_events_client,
arg0_paths,
config,
cli_overrides,
@@ -896,6 +908,32 @@ impl CodexMessageProcessor {
self.get_account(to_connection_request_id(request_id), params)
.await;
}
ClientRequest::AvatarInventoryRead {
request_id,
params: _,
} => {
self.avatar_inventory_read(to_connection_request_id(request_id))
.await;
}
ClientRequest::AvatarEquip { request_id, params } => {
self.avatar_equip(to_connection_request_id(request_id), params)
.await;
}
ClientRequest::AvatarAdminAward { request_id, params } => {
self.avatar_admin_award(to_connection_request_id(request_id), params)
.await;
}
ClientRequest::AvatarAdminProofDrop { request_id, params } => {
self.avatar_admin_proof_drop(to_connection_request_id(request_id), params)
.await;
}
ClientRequest::AvatarAdminCapabilitiesRead {
request_id,
params: _,
} => {
self.avatar_admin_capabilities_read(to_connection_request_id(request_id))
.await;
}
ClientRequest::GitDiffToRemote { request_id, params } => {
self.git_diff_to_origin(to_connection_request_id(request_id), params.cwd)
.await;
@@ -1015,7 +1053,7 @@ impl CodexMessageProcessor {
&mut self,
params: &LoginApiKeyParams,
) -> std::result::Result<(), JSONRPCErrorError> {
- if self.auth_manager.is_external_auth_active() {
+ if self.auth_manager.is_external_chatgpt_auth_active() {
return Err(self.external_auth_active_error());
}
@@ -1094,7 +1132,7 @@ impl CodexMessageProcessor {
) -> std::result::Result<LoginServerOptions, JSONRPCErrorError> {
let config = self.config.as_ref();
- if self.auth_manager.is_external_auth_active() {
+ if self.auth_manager.is_external_chatgpt_auth_active() {
return Err(self.external_auth_active_error());
}
@@ -1531,7 +1569,7 @@ impl CodexMessageProcessor {
}
async fn refresh_token_if_requested(&self, do_refresh: bool) -> RefreshTokenRequestOutcome {
- if self.auth_manager.is_external_auth_active() {
+ if self.auth_manager.is_external_chatgpt_auth_active() {
return RefreshTokenRequestOutcome::NotAttemptedOrSucceeded;
}
if do_refresh && let Err(err) = self.auth_manager.refresh_token().await {
@@ -1678,6 +1716,90 @@ impl CodexMessageProcessor {
}
}
async fn avatar_inventory_read(&self, request_id: ConnectionRequestId) {
match avatar_rpc::read_avatar_inventory(&self.auth_manager, &self.config.chatgpt_base_url)
.await
{
Ok(response) => {
self.outgoing.send_response(request_id, response).await;
}
Err(error) => {
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn avatar_equip(&self, request_id: ConnectionRequestId, params: CodexAvatarEquipParams) {
match avatar_rpc::equip_avatar(&self.auth_manager, &self.config.chatgpt_base_url, params)
.await
{
Ok(response) => {
self.outgoing.send_response(request_id, response).await;
}
Err(error) => {
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn avatar_admin_award(
&self,
request_id: ConnectionRequestId,
params: CodexAvatarAdminAwardGrantParams,
) {
match avatar_rpc::grant_admin_avatar_award(
&self.auth_manager,
&self.config.chatgpt_base_url,
params,
)
.await
{
Ok(response) => {
self.outgoing.send_response(request_id, response).await;
}
Err(error) => {
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn avatar_admin_proof_drop(
&self,
request_id: ConnectionRequestId,
params: CodexAvatarAdminProofDropGrantParams,
) {
match avatar_rpc::grant_admin_avatar_proof_drop(
&self.auth_manager,
&self.config.chatgpt_base_url,
params,
)
.await
{
Ok(response) => {
self.outgoing.send_response(request_id, response).await;
}
Err(error) => {
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn avatar_admin_capabilities_read(&self, request_id: ConnectionRequestId) {
match avatar_rpc::read_avatar_admin_capabilities(
&self.auth_manager,
&self.config.chatgpt_base_url,
)
.await
{
Ok(response) => {
self.outgoing.send_response(request_id, response).await;
}
Err(error) => {
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn fetch_account_rate_limits(
&self,
) -> Result<
@@ -2086,6 +2208,8 @@ impl CodexMessageProcessor {
thread_manager: Arc::clone(&self.thread_manager),
thread_state_manager: self.thread_state_manager.clone(),
outgoing: Arc::clone(&self.outgoing),
analytics_events_client: self.analytics_events_client.clone(),
general_analytics_enabled: self.config.features.enabled(Feature::GeneralAnalytics),
thread_watch_manager: self.thread_watch_manager.clone(),
fallback_model_provider: self.config.model_provider_id.clone(),
codex_home: self.config.codex_home.clone(),
@@ -2318,6 +2442,17 @@ impl CodexMessageProcessor {
sandbox: config_snapshot.sandbox_policy.into(),
reasoning_effort: config_snapshot.reasoning_effort,
};
if listener_task_context.general_analytics_enabled {
listener_task_context
.analytics_events_client
.track_response(
request_id.connection_id.0,
ClientResponse::ThreadStart {
request_id: request_id.request_id.clone(),
response: response.clone(),
},
);
}
listener_task_context
.outgoing
@@ -3796,6 +3931,15 @@ impl CodexMessageProcessor {
sandbox: session_configured.sandbox_policy.into(),
reasoning_effort: session_configured.reasoning_effort,
};
if self.config.features.enabled(Feature::GeneralAnalytics) {
self.analytics_events_client.track_response(
request_id.connection_id.0,
ClientResponse::ThreadResume {
request_id: request_id.request_id.clone(),
response: response.clone(),
},
);
}
self.outgoing.send_response(request_id, response).await;
}
@@ -4403,6 +4547,15 @@ impl CodexMessageProcessor {
sandbox: session_configured.sandbox_policy.into(),
reasoning_effort: session_configured.reasoning_effort,
};
if self.config.features.enabled(Feature::GeneralAnalytics) {
self.analytics_events_client.track_response(
request_id.connection_id.0,
ClientResponse::ThreadFork {
request_id: request_id.request_id.clone(),
response: response.clone(),
},
);
}
self.outgoing.send_response(request_id, response).await;
@@ -6984,6 +7137,8 @@ impl CodexMessageProcessor {
thread_manager: Arc::clone(&self.thread_manager),
thread_state_manager: self.thread_state_manager.clone(),
outgoing: Arc::clone(&self.outgoing),
analytics_events_client: self.analytics_events_client.clone(),
general_analytics_enabled: self.config.features.enabled(Feature::GeneralAnalytics),
thread_watch_manager: self.thread_watch_manager.clone(),
fallback_model_provider: self.config.model_provider_id.clone(),
codex_home: self.config.codex_home.clone(),
@@ -7071,6 +7226,8 @@ impl CodexMessageProcessor {
thread_manager: Arc::clone(&self.thread_manager),
thread_state_manager: self.thread_state_manager.clone(),
outgoing: Arc::clone(&self.outgoing),
analytics_events_client: self.analytics_events_client.clone(),
general_analytics_enabled: self.config.features.enabled(Feature::GeneralAnalytics),
thread_watch_manager: self.thread_watch_manager.clone(),
fallback_model_provider: self.config.model_provider_id.clone(),
codex_home: self.config.codex_home.clone(),
@@ -7102,6 +7259,8 @@ impl CodexMessageProcessor {
outgoing,
thread_manager,
thread_state_manager,
analytics_events_client: _,
general_analytics_enabled: _,
thread_watch_manager,
fallback_model_provider,
codex_home,
@@ -8747,7 +8906,7 @@ mod tests {
fn config_load_error_marks_non_auth_cloud_requirements_failures_without_relogin() {
let err = std::io::Error::other(CloudRequirementsLoadError::new(
CloudRequirementsLoadErrorCode::RequestFailed,
- None,
+ /*status_code*/ None,
"failed to load your workspace-managed config",
));
@@ -8961,7 +9120,8 @@ mod tests {
fn merge_persisted_resume_metadata_skips_missing_values() -> Result<()> {
let mut request_overrides = None;
let mut typesafe_overrides = ConfigOverrides::default();
- let persisted_metadata = test_thread_metadata(None, None)?;
+ let persisted_metadata =
+ test_thread_metadata(/*model*/ None, /*reasoning_effort*/ None)?;
merge_persisted_resume_metadata(
&mut request_overrides,
@@ -9013,7 +9173,7 @@ mod tests {
path.clone(),
&head,
&session_meta,
- None,
+ /*git*/ None,
"test-provider",
timestamp.clone(),
)
@@ -9223,9 +9383,9 @@ mod tests {
source,
Some("atlas".to_string()),
Some("explorer".to_string()),
- None,
- None,
- None,
+ /*git_sha*/ None,
+ /*git_branch*/ None,
+ /*git_origin_url*/ None,
);
let thread = summary_to_thread(summary);
@@ -9244,7 +9404,9 @@ mod tests {
manager.connection_initialized(connection).await;
manager
- .try_ensure_connection_subscribed(thread_id, connection, false)
+ .try_ensure_connection_subscribed(
+ thread_id, connection, /*experimental_raw_events*/ false,
+ )
.await
.expect("connection should be live");
{
@@ -9288,11 +9450,19 @@ mod tests {
manager.connection_initialized(connection_a).await;
manager.connection_initialized(connection_b).await;
manager
- .try_ensure_connection_subscribed(thread_id, connection_a, false)
+ .try_ensure_connection_subscribed(
+ thread_id,
+ connection_a,
+ /*experimental_raw_events*/ false,
+ )
.await
.expect("connection_a should be live");
manager
- .try_ensure_connection_subscribed(thread_id, connection_b, false)
+ .try_ensure_connection_subscribed(
+ thread_id,
+ connection_b,
+ /*experimental_raw_events*/ false,
+ )
.await
.expect("connection_b should be live");
{
@@ -9325,7 +9495,9 @@ mod tests {
assert!(
manager
- .try_ensure_connection_subscribed(thread_id, connection, false)
+ .try_ensure_connection_subscribed(
+ thread_id, connection, /*experimental_raw_events*/ false
+ )
.await
.is_none()
);

View File

@@ -137,7 +137,7 @@ mod tests {
&all_connectors,
&[],
&[AppConnectorId("alpha".to_string())],
- false,
+ /*codex_apps_ready*/ false,
),
Vec::new()
);

View File

@@ -90,7 +90,7 @@ mod tests {
#[test]
fn compute_source_filters_defaults_to_interactive_sources() {
- let (allowed_sources, filter) = compute_source_filters(None);
+ let (allowed_sources, filter) = compute_source_filters(/*source_kinds*/ None);
assert_eq!(allowed_sources, INTERACTIVE_SESSION_SOURCES.to_vec());
assert_eq!(filter, None);

View File

@@ -74,6 +74,7 @@ use codex_app_server_protocol::Result;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequest;
use codex_arg0::Arg0DispatchPaths;
use codex_core::AppServerRpcTransport;
use codex_core::config::Config;
use codex_core::config_loader::CloudRequirementsLoader;
use codex_core::config_loader::LoaderOverrides;
@@ -393,6 +394,7 @@ fn start_uninitialized(args: InProcessStartArgs) -> InProcessClientHandle {
config_warnings: args.config_warnings,
session_source: args.session_source,
enable_codex_api_key_env: args.enable_codex_api_key_env,
rpc_transport: AppServerRpcTransport::InProcess,
});
let mut thread_created_rx = processor.thread_created_receiver();
let mut session = ConnectionSessionState::default();
@@ -786,7 +788,8 @@ mod tests {
#[tokio::test]
async fn in_process_start_clamps_zero_channel_capacity() {
- let client = start_test_client_with_capacity(SessionSource::Cli, 0).await;
+ let client =
+ start_test_client_with_capacity(SessionSource::Cli, /*channel_capacity*/ 0).await;
let response = loop {
match client
.request(ClientRequest::ConfigRequirementsRead {

View File

@@ -36,6 +36,7 @@ use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::TextPosition as AppTextPosition;
use codex_app_server_protocol::TextRange as AppTextRange;
use codex_core::AppServerRpcTransport;
use codex_core::ExecPolicyError;
use codex_core::check_execpolicy_for_warnings;
use codex_core::config_loader::ConfigLoadError;
@@ -60,6 +61,7 @@ use tracing_subscriber::registry::Registry;
use tracing_subscriber::util::SubscriberInitExt;
mod app_server_tracing;
mod avatar_rpc;
mod bespoke_event_handling;
mod codex_message_processor;
mod command_exec;
@@ -623,6 +625,7 @@ pub async fn run_main_with_transport(
config_warnings,
session_source,
enable_codex_api_key_env: false,
rpc_transport: analytics_rpc_transport(transport),
});
let mut thread_created_rx = processor.thread_created_receiver();
let mut running_turn_count_rx = processor.subscribe_running_assistant_turn_count();
@@ -846,6 +849,13 @@ pub async fn run_main_with_transport(
Ok(())
}
fn analytics_rpc_transport(transport: AppServerTransport) -> AppServerRpcTransport {
match transport {
AppServerTransport::Stdio => AppServerRpcTransport::Stdio,
AppServerTransport::WebSocket { .. } => AppServerRpcTransport::Websocket,
}
}
#[cfg(test)]
mod tests {
use super::LogFormat;
@@ -860,7 +870,10 @@ mod tests {
#[test]
fn log_format_from_env_value_defaults_for_non_json_values() {
- assert_eq!(LogFormat::from_env_value(None), LogFormat::Default);
+ assert_eq!(
+ LogFormat::from_env_value(/*value*/ None),
+ LogFormat::Default
+ );
assert_eq!(LogFormat::from_env_value(Some("")), LogFormat::Default);
assert_eq!(LogFormat::from_env_value(Some("text")), LogFormat::Default);
assert_eq!(LogFormat::from_env_value(Some("jsonl")), LogFormat::Default);

View File

@@ -56,12 +56,12 @@ use codex_app_server_protocol::experimental_required_message;
use codex_arg0::Arg0DispatchPaths;
use codex_chatgpt::connectors;
use codex_core::AnalyticsEventsClient;
use codex_core::AppServerRpcTransport;
use codex_core::AuthManager;
use codex_core::ThreadManager;
use codex_core::config::Config;
use codex_core::config_loader::CloudRequirementsLoader;
use codex_core::config_loader::LoaderOverrides;
use codex_core::default_client::DEFAULT_ORIGINATOR;
use codex_core::default_client::SetOriginatorError;
use codex_core::default_client::USER_AGENT_SUFFIX;
use codex_core::default_client::get_codex_user_agent;
@@ -71,9 +71,10 @@ use codex_core::models_manager::collaboration_mode_presets::CollaborationModesCo
use codex_exec_server::EnvironmentManager;
use codex_features::Feature;
use codex_feedback::CodexFeedback;
use codex_login::AuthMode as LoginAuthMode;
use codex_login::auth::ExternalAuth;
use codex_login::auth::ExternalAuthRefreshContext;
use codex_login::auth::ExternalAuthRefreshReason;
use codex_login::auth::ExternalAuthRefresher;
use codex_login::auth::ExternalAuthTokens;
use codex_protocol::ThreadId;
use codex_protocol::protocol::SessionSource;
@@ -88,7 +89,6 @@ use toml::Value as TomlValue;
use tracing::Instrument;
const EXTERNAL_AUTH_REFRESH_TIMEOUT: Duration = Duration::from_secs(10);
const TUI_APP_SERVER_CLIENT_NAME: &str = "codex-tui";
#[derive(Clone)]
struct ExternalAuthRefreshBridge {
@@ -104,7 +104,11 @@ impl ExternalAuthRefreshBridge {
}
#[async_trait]
- impl ExternalAuthRefresher for ExternalAuthRefreshBridge {
+ impl ExternalAuth for ExternalAuthRefreshBridge {
fn auth_mode(&self) -> LoginAuthMode {
LoginAuthMode::Chatgpt
}
async fn refresh(
&self,
context: ExternalAuthRefreshContext,
@@ -146,11 +150,11 @@ impl ExternalAuthRefresher for ExternalAuthRefreshBridge {
let response: ChatgptAuthTokensRefreshResponse =
serde_json::from_value(result).map_err(std::io::Error::other)?;
- Ok(ExternalAuthTokens {
- access_token: response.access_token,
- chatgpt_account_id: response.chatgpt_account_id,
- chatgpt_plan_type: response.chatgpt_plan_type,
- })
+ Ok(ExternalAuthTokens::chatgpt(
+ response.access_token,
+ response.chatgpt_account_id,
+ response.chatgpt_plan_type,
+ ))
}
}
@@ -161,9 +165,11 @@ pub(crate) struct MessageProcessor {
external_agent_config_api: ExternalAgentConfigApi,
fs_api: FsApi,
auth_manager: Arc<AuthManager>,
analytics_events_client: AnalyticsEventsClient,
fs_watch_manager: FsWatchManager,
config: Arc<Config>,
config_warnings: Arc<Vec<ConfigWarningNotification>>,
rpc_transport: AppServerRpcTransport,
}
#[derive(Clone, Debug, Default)]
@@ -188,6 +194,7 @@ pub(crate) struct MessageProcessorArgs {
pub(crate) config_warnings: Vec<ConfigWarningNotification>,
pub(crate) session_source: SessionSource,
pub(crate) enable_codex_api_key_env: bool,
pub(crate) rpc_transport: AppServerRpcTransport,
}
impl MessageProcessor {
@@ -207,11 +214,15 @@ impl MessageProcessor {
config_warnings,
session_source,
enable_codex_api_key_env,
rpc_transport,
} = args;
- let auth_manager = AuthManager::shared(
+ let auth_manager = AuthManager::shared_with_external_auth(
config.codex_home.clone(),
enable_codex_api_key_env,
config.cli_auth_credentials_store_mode,
Arc::new(ExternalAuthRefreshBridge {
outgoing: outgoing.clone(),
}),
);
let thread_manager = Arc::new(ThreadManager::new(
config.as_ref(),
@@ -225,9 +236,6 @@ impl MessageProcessor {
environment_manager,
));
auth_manager.set_forced_chatgpt_workspace_id(config.forced_chatgpt_workspace_id.clone());
- auth_manager.set_external_auth_refresher(Arc::new(ExternalAuthRefreshBridge {
- outgoing: outgoing.clone(),
- }));
let analytics_events_client = AnalyticsEventsClient::new(
Arc::clone(&auth_manager),
config.chatgpt_base_url.trim_end_matches('/').to_string(),
@@ -244,6 +252,7 @@ impl MessageProcessor {
auth_manager: auth_manager.clone(),
thread_manager: Arc::clone(&thread_manager),
outgoing: outgoing.clone(),
analytics_events_client: analytics_events_client.clone(),
arg0_paths,
config: Arc::clone(&config),
cli_overrides: cli_overrides.clone(),
@@ -264,7 +273,7 @@ impl MessageProcessor {
loader_overrides,
cloud_requirements,
thread_manager,
- analytics_events_client,
+ analytics_events_client.clone(),
);
let external_agent_config_api = ExternalAgentConfigApi::new(config.codex_home.clone());
let fs_api = FsApi::default();
@@ -277,14 +286,16 @@ impl MessageProcessor {
external_agent_config_api,
fs_api,
auth_manager,
analytics_events_client,
fs_watch_manager,
config,
config_warnings: Arc::new(config_warnings),
rpc_transport,
}
}
pub(crate) fn clear_runtime_references(&self) {
- self.auth_manager.clear_external_auth_refresher();
+ self.auth_manager.clear_external_auth();
}
pub(crate) async fn process_request(
@@ -549,6 +560,7 @@ impl MessageProcessor {
// shared thread when another connected client did not opt into
// experimental API). Proposed direction is instance-global first-write-wins
// with initialize-time mismatch rejection.
let analytics_initialize_params = params.clone();
let (experimental_api_enabled, opt_out_notification_methods) =
match params.capabilities {
Some(capabilities) => (
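
The comment at the top of this hunk proposes, rather than implements, instance-global first-write-wins for the experimental-API capability. For intuition only, a minimal version of that idea could look like this (entirely hypothetical; names and placement are not from the diff):

use std::sync::OnceLock;

// Hypothetical instance-global record of the first connection's choice.
static EXPERIMENTAL_API_ENABLED: OnceLock<bool> = OnceLock::new();

/// First write wins: the first initialize pins the value, and any later
/// initialize that disagrees is rejected at initialize time.
fn negotiate_experimental_api(requested: bool) -> Result<(), String> {
    let chosen = *EXPERIMENTAL_API_ENABLED.get_or_init(|| requested);
    if chosen == requested {
        Ok(())
    } else {
        Err(format!(
            "experimental API mismatch: instance already initialized with {chosen}"
        ))
    }
}
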
@@ -569,14 +581,8 @@ impl MessageProcessor {
} = params.client_info;
session.app_server_client_name = Some(name.clone());
session.client_version = Some(version.clone());
- let originator = if name == TUI_APP_SERVER_CLIENT_NAME {
- // TODO: Remove this temporary workaround once app-server clients no longer
- // need to retain the legacy TUI `codex_cli_rs` originator behavior.
- DEFAULT_ORIGINATOR.to_string()
- } else {
- name.clone()
- };
- if let Err(error) = set_default_originator(originator) {
+ let originator = name.clone();
+ if let Err(error) = set_default_originator(originator.clone()) {
match error {
SetOriginatorError::InvalidHeaderValue => {
let error = JSONRPCErrorError {
@@ -599,6 +605,14 @@ impl MessageProcessor {
}
}
}
if self.config.features.enabled(Feature::GeneralAnalytics) {
self.analytics_events_client.track_initialize(
connection_id.0,
analytics_initialize_params,
originator,
self.rpc_transport,
);
}
set_default_client_residency_requirement(self.config.enforce_residency.value());
let user_agent_suffix = format!("{name}; {version}");
if let Ok(mut suffix) = USER_AGENT_SUFFIX.lock() {

View File

@@ -20,6 +20,7 @@ use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput;
use codex_arg0::Arg0DispatchPaths;
use codex_core::AppServerRpcTransport;
use codex_core::config::Config;
use codex_core::config::ConfigBuilder;
use codex_core::config_loader::CloudRequirementsLoader;
@@ -147,7 +148,7 @@ impl TracingHarness {
}),
},
},
- None,
+ /*trace*/ None,
)
.await;
assert!(harness.session.initialized);
@@ -213,7 +214,7 @@ async fn build_test_config(codex_home: &Path, server_uri: &str) -> Result<Config
codex_home,
server_uri,
&BTreeMap::new(),
- 8_192,
+ /*auto_compact_limit*/ 8_192,
Some(false),
"mock_provider",
"compact",
@@ -246,6 +247,7 @@ fn build_test_processor(
config_warnings: Vec::new(),
session_source: SessionSource::VSCode,
enable_codex_api_key_env: false,
rpc_transport: AppServerRpcTransport::Stdio,
});
(processor, outgoing_rx)
}
@@ -509,7 +511,9 @@ async fn thread_start_jsonrpc_span_exports_server_span_and_parents_children() ->
..
} = RemoteTrace::new("00000000000000000000000000000011", "0000000000000022");
- let _: ThreadStartResponse = harness.start_thread(20_002, None).await;
+ let _: ThreadStartResponse = harness
+ .start_thread(/*request_id*/ 20_002, /*trace*/ None)
+ .await;
let untraced_spans = wait_for_exported_spans(harness.tracing, |spans| {
spans.iter().any(|span| {
span.span_kind == SpanKind::Server
@@ -538,10 +542,16 @@ async fn thread_start_jsonrpc_span_exports_server_span_and_parents_children() ->
.span_context
.trace_id(),
);
- assert_has_internal_descendant_at_min_depth(&untraced_spans, untraced_server_span, 1);
+ assert_has_internal_descendant_at_min_depth(
+ &untraced_spans,
+ untraced_server_span,
+ /*min_depth*/ 1,
+ );
let baseline_len = untraced_spans.len();
- let _: ThreadStartResponse = harness.start_thread(20_003, Some(remote_trace)).await;
+ let _: ThreadStartResponse = harness
+ .start_thread(/*request_id*/ 20_003, Some(remote_trace))
+ .await;
let spans = wait_for_new_exported_spans(harness.tracing, baseline_len, |spans| {
spans.iter().any(|span| {
span.span_kind == SpanKind::Server
@@ -561,8 +571,8 @@ async fn thread_start_jsonrpc_span_exports_server_span_and_parents_children() ->
assert!(server_request_span.parent_span_is_remote);
assert_eq!(server_request_span.span_context.trace_id(), remote_trace_id);
assert_ne!(server_request_span.span_context.span_id(), SpanId::INVALID);
- assert_has_internal_descendant_at_min_depth(&spans, server_request_span, 1);
- assert_has_internal_descendant_at_min_depth(&spans, server_request_span, 2);
+ assert_has_internal_descendant_at_min_depth(&spans, server_request_span, /*min_depth*/ 1);
+ assert_has_internal_descendant_at_min_depth(&spans, server_request_span, /*min_depth*/ 2);
harness.shutdown().await;
Ok(())
@@ -572,7 +582,7 @@ async fn thread_start_jsonrpc_span_exports_server_span_and_parents_children() ->
async fn turn_start_jsonrpc_span_parents_core_turn_spans() -> Result<()> {
let _guard = tracing_test_guard().lock().await;
let mut harness = TracingHarness::new().await?;
- let thread_start_response = harness.start_thread(2, None).await;
+ let thread_start_response = harness.start_thread(/*request_id*/ 2, /*trace*/ None).await;
let thread_id = thread_start_response.thread.id.clone();
harness.reset_tracing();

View File

@@ -886,7 +886,7 @@ mod tests {
.register_request_context(RequestContext::new(
request_id.clone(),
tracing::info_span!("app_server.request", rpc.method = "thread/start"),
- None,
+ /*parent_trace*/ None,
))
.await;
assert_eq!(outgoing.request_context_count().await, 1);
@@ -997,14 +997,14 @@ mod tests {
.register_request_context(RequestContext::new(
closed_connection_request,
tracing::info_span!("app_server.request", rpc.method = "turn/interrupt"),
- None,
+ /*parent_trace*/ None,
))
.await;
outgoing
.register_request_context(RequestContext::new(
open_connection_request,
tracing::info_span!("app_server.request", rpc.method = "turn/start"),
- None,
+ /*parent_trace*/ None,
))
.await;
assert_eq!(outgoing.request_context_count().await, 2);

View File

@@ -530,7 +530,7 @@ mod tests {
#[test]
fn resolves_in_progress_turn_to_active_status() {
- let status = resolve_thread_status(ThreadStatus::Idle, true);
+ let status = resolve_thread_status(ThreadStatus::Idle, /*has_in_progress_turn*/ true);
assert_eq!(
status,
ThreadStatus::Active {
@@ -538,7 +538,8 @@ mod tests {
}
);
- let status = resolve_thread_status(ThreadStatus::NotLoaded, true);
+ let status =
+ resolve_thread_status(ThreadStatus::NotLoaded, /*has_in_progress_turn*/ true);
assert_eq!(
status,
ThreadStatus::Active {
@@ -550,11 +551,14 @@ mod tests {
#[test]
fn keeps_status_when_no_in_progress_turn() {
assert_eq!(
- resolve_thread_status(ThreadStatus::Idle, false),
+ resolve_thread_status(ThreadStatus::Idle, /*has_in_progress_turn*/ false),
ThreadStatus::Idle
);
assert_eq!(
- resolve_thread_status(ThreadStatus::SystemError, false),
+ resolve_thread_status(
+ ThreadStatus::SystemError,
+ /*has_in_progress_turn*/ false
+ ),
ThreadStatus::SystemError
);
}

View File

@@ -506,8 +506,14 @@ mod tests {
}),
);
let tampered = token.replace(".eyJleHAi", ".eyJleHBi");
- let err = verify_signed_bearer_token(&tampered, shared_secret, None, None, 30)
- .expect_err("tampered jwt should fail");
+ let err = verify_signed_bearer_token(
+ &tampered,
+ shared_secret,
+ /*issuer*/ None,
+ /*audience*/ None,
+ /*max_clock_skew_seconds*/ 30,
+ )
+ .expect_err("tampered jwt should fail");
assert_eq!(err.status_code(), StatusCode::UNAUTHORIZED);
}
@@ -522,8 +528,14 @@ mod tests {
"aud": "audience",
}),
);
verify_signed_bearer_token(&token, shared_secret, Some("issuer"), Some("audience"), 30)
.expect("valid signed token should verify");
verify_signed_bearer_token(
&token,
shared_secret,
Some("issuer"),
Some("audience"),
/*max_clock_skew_seconds*/ 30,
)
.expect("valid signed token should verify");
}
#[test]
@@ -536,8 +548,14 @@ mod tests {
"aud": ["other-audience", "audience"],
}),
);
verify_signed_bearer_token(&token, shared_secret, None, Some("audience"), 30)
.expect("jwt audience arrays should verify");
verify_signed_bearer_token(
&token,
shared_secret,
/*issuer*/ None,
Some("audience"),
/*max_clock_skew_seconds*/ 30,
)
.expect("jwt audience arrays should verify");
}
#[test]
@@ -550,9 +568,14 @@ mod tests {
);
let header_segment = URL_SAFE_NO_PAD.encode(br#"{"alg":"none","typ":"JWT"}"#);
let token = format!("{header_segment}.{claims_segment}.");
- let err =
- verify_signed_bearer_token(&token, b"0123456789abcdef0123456789abcdef", None, None, 30)
- .expect_err("alg=none jwt should be rejected");
+ let err = verify_signed_bearer_token(
+ &token,
+ b"0123456789abcdef0123456789abcdef",
+ /*issuer*/ None,
+ /*audience*/ None,
+ /*max_clock_skew_seconds*/ 30,
+ )
+ .expect_err("alg=none jwt should be rejected");
assert_eq!(err.status_code(), StatusCode::UNAUTHORIZED);
}
@@ -565,8 +588,14 @@ mod tests {
"iss": "issuer",
}),
);
- let err = verify_signed_bearer_token(&token, shared_secret, None, None, 30)
- .expect_err("jwt without exp should be rejected");
+ let err = verify_signed_bearer_token(
+ &token,
+ shared_secret,
+ /*issuer*/ None,
+ /*audience*/ None,
+ /*max_clock_skew_seconds*/ 30,
+ )
+ .expect_err("jwt without exp should be rejected");
assert_eq!(err.status_code(), StatusCode::UNAUTHORIZED);
}
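
Taken together, these tests pin the verifier's contract: HS256 only (so alg=none fails), exp required, optional issuer/audience checks that accept audience arrays, and a bounded clock skew. A hedged sketch of an equivalent check built on the jsonwebtoken crate (the crate choice and claim struct are assumptions; the real verify_signed_bearer_token may be implemented differently):

use jsonwebtoken::{Algorithm, DecodingKey, Validation, decode};
use serde::Deserialize;

#[derive(Deserialize)]
struct Claims {
    exp: u64, // required: Validation rejects tokens without `exp` by default
}

fn verify_hs256(
    token: &str,
    secret: &[u8],
    issuer: Option<&str>,
    audience: Option<&str>,
    max_clock_skew_seconds: u64,
) -> Result<(), jsonwebtoken::errors::Error> {
    // Pinning the algorithm to HS256 rejects `alg=none` headers outright.
    let mut validation = Validation::new(Algorithm::HS256);
    validation.leeway = max_clock_skew_seconds;
    if let Some(iss) = issuer {
        validation.set_issuer(&[iss]);
    }
    if let Some(aud) = audience {
        // JWT `aud` may be a single string or an array; the crate accepts either.
        validation.set_audience(&[aud]);
    }
    decode::<Claims>(token, &DecodingKey::from_secret(secret), &validation).map(|_| ())
}
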

View File

@@ -646,7 +646,7 @@ mod tests {
initialized,
Arc::new(AtomicBool::new(true)),
opted_out_notification_methods,
- None,
+ /*disconnect_sender*/ None,
),
);
@@ -686,7 +686,7 @@ mod tests {
Arc::new(AtomicBool::new(true)),
Arc::new(AtomicBool::new(true)),
Arc::new(RwLock::new(HashSet::from(["configWarning".to_string()]))),
- None,
+ /*disconnect_sender*/ None,
),
);
@@ -726,7 +726,7 @@ mod tests {
Arc::new(AtomicBool::new(true)),
Arc::new(AtomicBool::new(true)),
Arc::new(RwLock::new(HashSet::new())),
- None,
+ /*disconnect_sender*/ None,
),
);
@@ -772,7 +772,7 @@ mod tests {
Arc::new(AtomicBool::new(true)),
Arc::new(AtomicBool::new(false)),
Arc::new(RwLock::new(HashSet::new())),
- None,
+ /*disconnect_sender*/ None,
),
);
@@ -834,7 +834,7 @@ mod tests {
Arc::new(AtomicBool::new(true)),
Arc::new(AtomicBool::new(true)),
Arc::new(RwLock::new(HashSet::new())),
- None,
+ /*disconnect_sender*/ None,
),
);
@@ -1006,7 +1006,7 @@ mod tests {
Arc::new(AtomicBool::new(true)),
Arc::new(AtomicBool::new(true)),
Arc::new(RwLock::new(HashSet::new())),
- None,
+ /*disconnect_sender*/ None,
),
);

View File

@@ -7,6 +7,9 @@ license.workspace = true
[lib]
path = "lib.rs"
[lints]
workspace = true
[dependencies]
anyhow = { workspace = true }
base64 = { workspace = true }

View File

@@ -15,6 +15,9 @@ use codex_app_server_protocol::AppsListParams;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::CodexAvatarAdminAwardGrantParams;
use codex_app_server_protocol::CodexAvatarAdminProofDropGrantParams;
use codex_app_server_protocol::CodexAvatarEquipParams;
use codex_app_server_protocol::CollaborationModeListParams;
use codex_app_server_protocol::CommandExecParams;
use codex_app_server_protocol::CommandExecResizeParams;
@@ -296,6 +299,46 @@ impl McpProcess {
self.send_request("account/read", params).await
}
/// Send an `avatar/inventory/read` JSON-RPC request.
pub async fn send_avatar_inventory_read_request(&mut self) -> anyhow::Result<i64> {
let params = Some(serde_json::json!({}));
self.send_request("avatar/inventory/read", params).await
}
/// Send an `avatar/equip` JSON-RPC request.
pub async fn send_avatar_equip_request(
&mut self,
params: CodexAvatarEquipParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("avatar/equip", params).await
}
/// Send an `avatar/admin/award` JSON-RPC request.
pub async fn send_avatar_admin_award_request(
&mut self,
params: CodexAvatarAdminAwardGrantParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("avatar/admin/award", params).await
}
/// Send an `avatar/admin/proof-drop` JSON-RPC request.
pub async fn send_avatar_admin_proof_drop_request(
&mut self,
params: CodexAvatarAdminProofDropGrantParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("avatar/admin/proof-drop", params).await
}
/// Send an `avatar/admin/capabilities/read` JSON-RPC request.
pub async fn send_avatar_admin_capabilities_read_request(&mut self) -> anyhow::Result<i64> {
let params = Some(serde_json::json!({}));
self.send_request("avatar/admin/capabilities/read", params)
.await
}
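
On the wire these helpers emit ordinary JSON-RPC frames. A hedged sketch of the avatar/equip request body, assuming CodexAvatarEquipParams carries a single avatar_id field (its use in equip_avatar suggests so) with camelCase renaming as in the protocol structs; the id and avatar value are illustrative:

use serde_json::json;

// Illustrative only: the frame send_avatar_equip_request would produce.
fn example_avatar_equip_frame() -> serde_json::Value {
    json!({
        "jsonrpc": "2.0",
        "id": 7, // hypothetical request id
        "method": "avatar/equip",
        "params": { "avatarId": "atlas" }
    })
}
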
/// Send an `account/login/start` JSON-RPC request with ChatGPT auth tokens.
pub async fn send_chatgpt_auth_tokens_login_request(
&mut self,

View File

@@ -27,7 +27,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
},
supported_in_api: preset.supported_in_api,
priority,
- upgrade: preset.upgrade.as_ref().map(|u| u.into()),
+ upgrade: preset.upgrade.as_ref().map(Into::into),
base_instructions: "base instructions".to_string(),
model_messages: None,
supports_reasoning_summaries: false,

View File

@@ -160,7 +160,7 @@ async fn get_auth_status_with_api_key() -> Result<()> {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key_when_auth_not_required() -> Result<()> {
let codex_home = TempDir::new()?;
- create_config_toml_custom_provider(codex_home.path(), false)?;
+ create_config_toml_custom_provider(codex_home.path(), /*requires_openai_auth*/ false)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

Some files were not shown because too many files have changed in this diff.