I made a `rust-release-prepare` environment with the necessary API key as
an environment secret; the workflow now uses that instead of the action
secret. Once this merges and I confirm it works as intended, I'll remove
the action secret.
This is the same change @bolinfest made, but he could not push it because
he lacked permission to change GitHub Actions workflows.
## Why
The `rust-release` workflow can now be run manually with
`sign_macos=false` to skip macOS signing, but that path previously
stopped before creating a GitHub Release. That left the unsigned macOS
binaries available only as workflow-run artifacts, which are awkward to
fetch from automation and cannot be retrieved with a simple
unauthenticated `curl`.
For the unsigned path we still should not perform the normal release
side effects: no npm or Python publishing, no WinGet publishing, no
`latest-alpha-cli` branch update, and no promotion to GitHub's latest
release. The goal is only to make the build outputs easy to fetch from
the release page.
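Once the unsigned release exists, its assets are fetchable with a plain
unauthenticated GET. A minimal sketch; the tag and asset names below are
hypothetical placeholders, not the real release naming:

```python
# Sketch only: release assets, unlike workflow-run artifacts, live at a
# stable unauthenticated URL. Tag/asset names are placeholders.
def release_asset_url(repo: str, tag: str, asset: str) -> str:
    return f"https://github.com/{repo}/releases/download/{tag}/{asset}"

url = release_asset_url(
    "openai/codex", "rust-v0.132.0", "codex-x86_64-unknown-linux-musl.zst"
)
# fetch with e.g.: curl -fsSL -O <url>
print(url)
```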
## What changed
- Allow the `release` job in `.github/workflows/rust-release.yml` to run
for `workflow_dispatch` runs with `sign_macos=false`.
- For unsigned runs, keep the unsigned macOS artifacts plus the normal
Linux and Windows release artifacts needed for DotSlash, then
create/update the GitHub Release with `make_latest: false`.
- Keep the normal publish/promote paths gated to signed releases:
- npm staging and publish
- Python runtime publish
- WinGet publish
- `latest-alpha-cli` update
- developer-site deploy
- normal DotSlash release files
- Add `.github/dotslash-unsigned-config.json`, which publishes
`*-unsigned` DotSlash files that use unsigned macOS artifacts and the
normal Linux/Windows artifacts.
## What I added
Please read this: I added `codex-command-runner` and
`codex-windows-sandbox-setup` entries to
`.github/dotslash-unsigned-config.json` so that with `sign_macos=false`
we still get the DotSlash files for those artifacts, which are required
for Windows builds.
## Summary
- Upload unsigned macOS release binaries before signing so they remain
available from the workflow run if signing fails
- Add a manual `workflow_dispatch` option, `sign_macos`, defaulting to
`true`
- When `sign_macos=false`, skip macOS signing, signed-name macOS
artifacts, DMGs, npm/DotSlash/PyPI publishing, latest release marking,
and `latest-alpha-cli` updates
## Process
I have not tested this yet, but we should be able to run
```
gh workflow run rust-release.yml \
-R openai/codex \
--ref rust-v0.132.0 \
-f sign_macos=false
```
which starts the `rust-release` workflow with `sign_macos=false`,
skipping macOS codesigning and the release steps that would normally
follow.
## Why
Users and support need a single command that captures the local Codex
runtime, configuration, auth, terminal, network, and state shape without
asking the user to know which diagnostic depth to choose first. `codex
doctor` now runs the full set of useful checks and defaults to the
detailed human-readable output, because the command is usually run when
someone already needs context.
The command also targets concrete support failure modes we have seen
while iterating on the design:
- update-target mismatches like #21956, where the installed package
manager target can differ from the running executable
- terminal and multiplexer issues that depend on `TERM`, tmux/zellij
state, color handling, and TTY metadata
- provider-specific HTTP/WebSocket connectivity, including ChatGPT
WebSocket handshakes and API-key/provider endpoint reachability
- local state/log SQLite integrity problems and large rollout
directories
- feedback reports that need an attached, redacted diagnostic snapshot
without asking the user to run a second command
## What Changed
- Adds `codex doctor` as a grouped CLI diagnostic report with default
detailed output and `--summary` for the compact view.
- Adds stable report sections for Environment, Configuration, Updates,
Connectivity, and Background Server, plus a top Notes block that
promotes anomalies such as available updates, large rollout directories,
optional MCP issues, and mixed auth signals.
- Adds runtime provenance, install consistency, bundled/system search
readiness, terminal/multiplexer metadata, `config.toml` parse status,
auth mode details, sandbox details, feature flag summaries, update
cache/latest-version state, app-server daemon state, SQLite integrity
checks, rollout statistics, and provider-aware network diagnostics.
- Adds ChatGPT WebSocket diagnostics that report the negotiated HTTP
upgrade as `HTTP 101 Switching Protocols` and include timeout, DNS,
auth, and provider context in detailed output.
- Makes reachability provider-aware: API-key OpenAI setups check the API
endpoint, ChatGPT auth checks the ChatGPT path, and custom/AWS/local
providers check configured HTTP endpoints when available.
- Adds structured, redacted JSON output where `checks` is keyed by check
id and `details` is a key/value object for support tooling.
- Integrates doctor with feedback uploads by attaching a best-effort
`codex-doctor-report.json` report and adding derived Sentry tags for
overall status and failing/warning checks.
- Updates the TUI feedback consent copy so users can see that the doctor
report is included when logs/diagnostics are uploaded.
- Updates the CLI bug issue template to ask reporters for `codex doctor
--json` and render pasted reports as JSON.
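In GitHub issue-form syntax, that field looks roughly like this (the
field id and wording are illustrative; `render: json` is the part that
renders pasted reports as JSON):

```yaml
- type: textarea
  id: doctor-report
  attributes:
    label: Codex doctor report
    description: Paste the output of `codex doctor --json`.
    render: json
```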
## Example Output
The examples below are sanitized from local smoke runs with `--no-color`
so the structure is reviewable in plain text.
### `codex doctor`
```text
Codex Doctor v0.0.0 · macos-aarch64
Notes
↑ updates 0.130.0 available (current 0.0.0, dismissed 0.128.0)
⚠ rollouts 1,526 active files · 2.53 GB on disk
⚠ mcp MCP configuration has optional issues
⚠ auth mixed auth signals: ChatGPT login plus API key env var; HTTP reachability uses API-key mode
─────────────────────────────────────────────────────────────
Environment
✓ runtime local debug build
version 0.0.0
install method other
commit unknown
executable ~/code/codex.fcoury-doct…x-rs/target/debug/codex
✓ install consistent
context other
managed by npm: no · bun: no · package root —
PATH entries (2) ~/.local/share/mise/installs/node/24/bin/codex
~/.local/share/mise/shims/codex
✓ search ripgrep 15.1.0 (system, `rg`)
✓ terminal Ghostty 1.3.2-main-+b0f827665 · tmux 3.6a · TERM=xterm-256color
terminal Ghostty
TERM_PROGRAM ghostty
terminal version 1.3.2-main-+b0f827665
TERM xterm-256color
multiplexer tmux 3.6a
tmux extended-keys on
tmux allow-passthrough on
tmux set-clipboard on
✓ state databases healthy
CODEX_HOME ~/.codex (dir)
state DB ~/.codex/state_5.sqlite (file) · integrity ok
log DB ~/.codex/logs_2.sqlite (file) · integrity ok
active rollouts 1,526 files · 2.53 GB (avg 1.70 MB)
archived rollouts 8 files · 3.84 MB (avg 491.11 KB)
Configuration
✓ config loaded
model gpt-5.5 · openai
cwd ~/code/codex.fcoury-doctor/codex-rs
config.toml ~/.codex/config.toml
config.toml parse ok
MCP servers 1
feature flags 36 enabled · 7 overridden (full list with --all)
overrides code_mode, code_mode_only, memories, chronicle, goals, remote_control, prevent_idle_sleep
✓ auth auth is configured
auth storage mode File
auth file ~/.codex/auth.json
auth env vars present OPENAI_API_KEY
stored auth mode chatgpt
stored API key false
stored ChatGPT tokens true
stored agent identity false
⚠ mcp MCP configuration has optional issues — Set the missing MCP env vars or disable the affected server.
configured servers 1
disabled servers 0
streamable_http servers 1
optional reachability openaiDeveloperDocs: https://developers.openai.com/mcp (HEAD connect failed; GET connect failed)
✓ sandbox restricted fs + restricted network · approval OnRequest
approval policy OnRequest
filesystem sandbox restricted
network sandbox restricted
Connectivity
✓ network network-related environment looks readable
✓ websocket connected (HTTP 101 Switching Protocols) · 15s timeout
model provider openai
provider name OpenAI
wire API responses
supports websockets true
connect timeout 15000 ms
auth mode chatgpt
endpoint wss://chatgpt.com/backend-api/<redacted>
DNS 2 IPv4, 2 IPv6, first IPv6
handshake result HTTP 101 Switching Protocols
✗ reachability one or more required provider endpoints are unreachable over HTTP — Check proxy, VPN, firewall, DNS, and custom CA configuration.
reachability mode API key auth
openai API https://api.openai.com/v1 connect failed (required)
Background Server
○ app-server not running (ephemeral mode)
─────────────────────────────────────────────────────────────
11 ok · 1 idle · 4 notes · 1 warn · 1 fail failed
--summary compact output --all expand truncated lists
--json redacted report
```
### `codex doctor --summary`
```text
Codex Doctor v0.0.0 · macos-aarch64
Notes
↑ updates 0.130.0 available (current 0.0.0, dismissed 0.128.0)
⚠ rollouts 1,526 active files · 2.53 GB on disk
⚠ mcp MCP configuration has optional issues
⚠ auth mixed auth signals: ChatGPT login plus API key env var; HTTP reachability uses API-key mode
─────────────────────────────────────────────────────────────
Environment
✓ runtime local debug build
✓ install consistent
✓ search ripgrep 15.1.0 (system, `rg`)
✓ terminal Ghostty 1.3.2-main-+b0f827665 · tmux 3.6a · TERM=xterm-256color
✓ state databases healthy
Configuration
✓ config loaded
✓ auth auth is configured
⚠ mcp MCP configuration has optional issues — Set the missing MCP env vars or disable the affected server.
✓ sandbox restricted fs + restricted network · approval OnRequest
Updates
✓ updates update configuration is locally consistent
Connectivity
✓ network network-related environment looks readable
✓ websocket connected (HTTP 101 Switching Protocols) · 15s timeout
✗ reachability one or more required provider endpoints are unreachable over HTTP — Check proxy, VPN, firewall, DNS, and custom CA configuration.
Background Server
○ app-server not running (ephemeral mode)
─────────────────────────────────────────────────────────────
11 ok · 1 idle · 4 notes · 1 warn · 1 fail failed
Run codex doctor without --summary for detailed diagnostics.
--all expand truncated lists --json redacted report
```
### `codex doctor --json` shape
```json
{
"schema_version": 1,
"overall_status": "fail",
"checks": {
"runtime.provenance": {
"id": "runtime.provenance",
"category": "Environment",
"status": "ok",
"summary": "local debug build",
"details": {
"version": "0.0.0",
"install method": "other",
"commit": "unknown"
}
},
"sandbox.helpers": {
"id": "sandbox.helpers",
"category": "Configuration",
"status": "ok",
"summary": "restricted fs + restricted network · approval OnRequest",
"details": {
"approval policy": "OnRequest",
"filesystem sandbox": "restricted",
"network sandbox": "restricted"
}
}
}
}
```
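The invariants called out above — `checks` keyed by check id and
`details` as a flat key/value object — can be checked by support tooling
with a short script (a sketch, not part of the PR):

```python
# Validate the documented report shape: "checks" is an object keyed by
# check id, each entry's "id" matches its key, and "details" maps
# strings to strings.
def report_problems(report: dict) -> list[str]:
    problems = []
    checks = report.get("checks")
    if not isinstance(checks, dict):
        return ["checks must be an object keyed by check id"]
    for key, check in checks.items():
        if check.get("id") != key:
            problems.append(f"{key}: 'id' does not match its key")
        details = check.get("details", {})
        if not isinstance(details, dict) or not all(
            isinstance(v, str) for v in details.values()
        ):
            problems.append(f"{key}: 'details' must map keys to strings")
    return problems
```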
### `/feedback` new sentry attachment
<img width="938" height="798" alt="CleanShot 2026-05-13 at 15 36 14"
src="https://github.com/user-attachments/assets/715e62e0-d7b4-4fea-a35a-fd5d5d33c4c0"
/>
### New section in CLI issue template
<img width="1164" height="435" alt="CleanShot 2026-05-13 at 15 47 24"
src="https://github.com/user-attachments/assets/9081dc25-a28c-4afa-8ba1-e299c2b4031d"
/>
## How to Test
1. Run `cargo run --bin codex -- doctor --no-color`.
2. Confirm the detailed report is the default and includes promoted
Notes, grouped sections, terminal details, state DB integrity, rollout
stats, provider reachability, WebSocket diagnostics, and app-server
status.
3. Run `cargo run --bin codex -- doctor --summary --no-color`.
4. Confirm the compact view keeps the same sections and summary counts
but omits detailed key/value rows.
5. Run `cargo run --bin codex -- doctor --json`.
6. Confirm the output is redacted JSON, `checks` is an object keyed by
check id, and each check's `details` is a key/value object.
7. Preview the CLI bug issue template and confirm the `Codex doctor
report` field appears after the terminal field, asks for `codex doctor
--json`, and renders pasted output as JSON.
8. Start a feedback flow that includes logs.
9. Confirm the upload consent copy lists `codex-doctor-report.json`
alongside the log attachments.
Targeted tests:
- `cargo test -p codex-cli doctor`
- `cargo test -p codex-app-server
doctor_report_tags_summarize_status_counts`
- `cargo test -p codex-feedback`
- `cargo test -p codex-tui feedback_view`
- `just argument-comment-lint`
- `git diff --check`
## Summary
- split the single PR-blocking Bazel Windows test leg into four Windows
shard jobs
- preserve the existing required Windows Bazel check name with a
lightweight aggregate gate
- keep Linux/macOS Bazel test jobs and the separate Windows
clippy/release jobs unchanged
## Why
The ordinary PR Windows Bazel test leg was one GitHub Actions job, so
Bazel only had in-job parallelism. This gives that lane real job-level
fanout across separate Windows hosts while keeping the target set
disjoint via stable label hashing.
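The partitioning idea can be sketched as follows; the concrete hash
function and shard count here are assumptions, and the point is only
that each label deterministically lands in exactly one shard:

```python
import hashlib

# Assign each Bazel test label to one of N shards via a stable hash, so
# the shard target sets are disjoint and together cover every label.
def shard_of(label: str, num_shards: int) -> int:
    digest = hashlib.sha256(label.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

labels = [f"//codex-rs/pkg{i}:test" for i in range(212)]
shards = [
    [label for label in labels if shard_of(label, 4) == s] for s in range(4)
]
```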
## Evidence
- final pre-rebase green run: `25774733562`
- Windows shard target counts: `61/212`, `48/212`, `52/212`, `51/212`
- Windows test fanout completed in about 7m29s versus a recent
monolithic median around 22m26s
## Notes
- this is scoped to the Bazel Windows test leg only
- each shard keeps the existing Windows cross-compile/RBE path and
restores the former monolithic Windows test cache
- shard jobs do not upload duplicate repository caches after test work,
keeping cache cleanup off the PR-blocking shard path
- no local validation run; relying on GitHub Actions for the
workflow-shaped check
Co-authored-by: Codex <noreply@openai.com>
## Summary
- Split macOS Rust release builds into a dedicated `build-macos` job
- Attach the `macos-signing` environment only to the macOS signing/build
job
- Keep Linux release builds outside the Apple signing environment while
preserving the existing shared release build steps
## Why
The Python SDK needs the same tight formatter/lint loop as the rest of
the repo: a safe Ruff autofix pass, Ruff formatting, editor save
behavior, and CI checks that catch drift. Without that loop, SDK changes
can land with formatting or import ordering that differs from what
reviewers and CI expect.
## What
- Add Ruff configuration to `sdk/python/pyproject.toml`, excluding
generated protocol code and notebooks from the normal lint/format pass.
- Update `just fmt` so it still formats Rust and also runs Python SDK
Ruff autofix and formatting.
- Add Python SDK CI steps for `ruff check` and `ruff format --check`
before pytest.
- Recommend the Ruff VS Code extension and enable Python
format/fix/organize-on-save so Cmd+S uses the same tooling.
- Apply the resulting Ruff formatting to SDK Python files, examples, and
the checked-in generated `v2_all.py` output emitted by the pinned
generator.
- Add a guard test for the `just fmt` recipe so it keeps working from
both Rust and Python SDK working directories.
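The on-save behavior described above corresponds to VS Code settings
along these lines (a sketch; the exact keys committed may differ):

```json
{
  "[python]": {
    "editor.defaultFormatter": "charliermarsh.ruff",
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      "source.fixAll.ruff": "explicit",
      "source.organizeImports.ruff": "explicit"
    }
  }
}
```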
## Stack
1. #21891 `[1/8]` Pin Python SDK runtime dependency
2. #21893 `[2/8]` Generate Python SDK types from pinned runtime
3. #21895 `[3/8]` Run Python SDK tests in CI
4. #21896 `[4/8]` Define Python SDK public API surface
5. #21905 `[5/8]` Rename Python SDK package to `openai-codex`
6. #21910 `[6/8]` Add high-level Python SDK approval mode
7. #22014 `[7/8]` Add Python SDK app-server integration harness
8. This PR `[8/8]` Add Python SDK Ruff formatting
## Verification
- Added `test_root_fmt_recipe_formats_rust_and_python_sdk` for the
shared format recipe.
- Ran `just fmt` after the recipe update.
---------
Co-authored-by: Codex <noreply@openai.com>
## Why
The Python SDK stack now depends on packaging metadata, pinned runtime
wheels, generated artifacts, async behavior, and stream interleaving.
Those checks need to run in CI so future changes cannot bypass the SDK
test suite.
## What
- Add a dedicated `python-sdk` job to `.github/workflows/sdk.yml`.
- Run the job in `python:3.12-alpine` so dependency resolution exercises
the pinned musl runtime wheel.
- Keep the Python SDK test job parallel to the existing SDK job instead
of serializing the full workflow.
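The job described above has roughly this shape (step names and details
are illustrative; the PR's actual workflow file is the source of truth):

```yaml
python-sdk:
  runs-on: ubuntu-latest
  container: python:3.12-alpine
  defaults:
    run:
      working-directory: sdk/python
  steps:
    - uses: actions/checkout@v4
    - name: Install dependencies against the pinned musl wheel
      run: uv sync --extra dev --frozen
    - name: Run the Python SDK test suite
      run: uv run pytest
```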
## Stack
1. #21891 `[1/8]` Pin Python SDK runtime dependency
2. #21893 `[2/8]` Generate Python SDK types from pinned runtime
3. This PR `[3/8]` Run Python SDK tests in CI
4. #21896 `[4/8]` Define Python SDK public API surface
5. #21905 `[5/8]` Rename Python SDK package to `openai-codex`
6. #21910 `[6/8]` Add high-level Python SDK approval mode
7. #22014 `[7/8]` Add Python SDK app-server integration harness
8. #22021 `[8/8]` Add Python SDK Ruff formatting
## Verification
- The added workflow job installs the SDK with `uv sync --extra dev
--frozen` and runs the Python SDK pytest suite.
---------
Co-authored-by: Codex <noreply@openai.com>
## Why
The Python SDK depends on the app-server runtime package for the bundled
`codex` binary and schema source of truth. That relationship should be
explicit in package metadata instead of inferred from matching version
numbers, so installers, lockfiles, and reviewers can see exactly which
runtime the SDK expects.
## What
- Declare `openai-codex-cli-bin==0.131.0a4` as a Python SDK dependency.
- Update runtime setup helpers to resolve the runtime version from the
declared dependency pin.
- Refresh the SDK lockfile for the pinned runtime wheel.
- Update package/runtime tests and docs that describe where the runtime
version comes from.
## Stack
1. This PR `[1/8]` Pin Python SDK runtime dependency
2. #21893 `[2/8]` Generate Python SDK types from pinned runtime
3. #21895 `[3/8]` Run Python SDK tests in CI
4. #21896 `[4/8]` Define Python SDK public API surface
5. #21905 `[5/8]` Rename Python SDK package to `openai-codex`
6. #21910 `[6/8]` Add high-level Python SDK approval mode
7. #22014 `[7/8]` Add Python SDK app-server integration harness
8. #22021 `[8/8]` Add Python SDK Ruff formatting
## Verification
- Added coverage for the SDK runtime dependency pin and runtime
distribution naming.
---------
Co-authored-by: Codex <noreply@openai.com>
## Why
PR CI should test the exact commit that was pushed to the PR branch. By
default, GitHub's `pull_request` event checks out a synthetic merge
commit from `refs/pull/<number>/merge`, so the tested tree can include
an implicit merge with the current base branch instead of matching the
pushed head SHA.
Using the PR head SHA makes each check result correspond to a concrete
commit the author submitted. This also behaves better for stacked PR
workflows, including Sapling stacks and other Git stack tooling: a
middle PR's head commit already contains the lower stack changes in its
tree, without pulling in commits above it or GitHub's temporary merge
ref.
## What Changed
- Set every `actions/checkout` in `pull_request` workflows under
`.github/workflows` to use `github.event.pull_request.head.sha` on PR
events and `github.sha` otherwise.
- Updated `blob-size-policy` to compare
`github.event.pull_request.base.sha` and
`github.event.pull_request.head.sha`, since it no longer checks out
GitHub's merge commit where `HEAD^1`/`HEAD^2` represented the PR range.
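The checkout change is the standard head-SHA pattern; sketched below
(the repo pins actions by commit SHA, so `@v4` here is illustrative):

```yaml
- uses: actions/checkout@v4
  with:
    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
```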
## Verification
- Parsed the edited workflow YAML files with Ruby.
- Checked that every checkout block in the `pull_request` workflows has
the PR-head `ref`.
## Summary
In https://github.com/openai/codex/pull/21584, we disabled doctests for
crates that lack any doctests. We can enforce that property via `cargo
shear --deny-warnings`: crates that lack doctests will be flagged if
doctests are enabled, and crates with doctests will be flagged if
doctests are disabled.
A few additional notes:
- By adding `--deny-warnings`, `cargo shear` also flagged a number of
modules that were not reachable at all. Some of those have been removed.
- This PR removes a usage of `windows_modules!` (since `cargo shear` and
`rustfmt` couldn't see through it) in favor of plain `#[cfg(target_os =
"windows")]` attributes. As a consequence, many of these files show
churn in this PR, since they weren't being formatted by `rustfmt` at all
on main.
- Again, to make the code more analyzable, this PR also removes some
usages of `#[path = "cwd_junction.rs"]` in favor of a more standard
module structure. The bin sidecar structure is still retained, but,
e.g., `windows-sandbox-rs/src/bin/command_runner.rs` was moved to
`windows-sandbox-rs/src/bin/command_runner/main.rs`, and so on.
---------
Co-authored-by: Codex <noreply@openai.com>
## Why
Published Python SDK builds depend on an exact `openai-codex-cli-bin`
runtime package, but the release workflow did not publish that runtime
package to PyPI. That left the SDK packaging story incomplete: release
artifacts could produce Codex binaries, but Python users still needed a
matching wheel carrying the platform-specific runtime and helper
executables.
This PR is stacked on #21787 so release jobs can include helper binaries
in runtime wheels: Linux wheels include `bwrap` for sandbox fallback,
and Windows wheels include the signed sandbox/elevation helpers beside
`codex.exe`.
## What changed
- Builds platform-specific `openai-codex-cli-bin` wheels from signed
release binaries on macOS, Linux, and Windows release runners.
- Packages Linux `bwrap` into musllinux runtime wheels.
- Packages Windows sandbox helper executables into Windows runtime
wheels.
- Uploads runtime wheels as GitHub release assets and publishes them to
PyPI using trusted publishing from the `pypi` GitHub environment.
- Keeps the new Python runtime publish job non-blocking so failures need
follow-up but do not fail the Rust release workflow.
- Pins the PyPA publish action to the `v1.13.0` commit SHA for
reproducible release publishing.
- Documents that runtime wheels are platform wheels published through
PyPI trusted publishing.
## Testing
- `ruby -e 'require "yaml"; ARGV.each { |f| YAML.load_file(f); puts "ok
#{f}" }' .github/workflows/rust-release.yml
.github/workflows/rust-release-windows.yml`
- `git diff --check`
CI is the real end-to-end verification for the release workflow path.
---------
Co-authored-by: Codex <noreply@openai.com>
This does two things:
- We use `persist-credentials: false` everywhere now. This is
unfortunately not the default in GitHub Actions, but it prevents
`actions/checkout` from dropping `secrets.GITHUB_TOKEN` onto disk.
- We interpose (some) template expansions through environment variables.
I've limited this to contexts that have non-fixed values; contexts that
are fixed (like `*.result`) are not dangerous to expand directly inline
(but maybe we should clean those up in the future for consistency
anyway).
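Both mitigations are standard Actions hardening patterns; a sketch (the
repo pins actions by commit SHA, so `@v4` here is illustrative):

```yaml
- uses: actions/checkout@v4
  with:
    persist-credentials: false

- name: Expand untrusted context via env, not inline
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "$PR_TITLE"
```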
This is a medium-risk change in terms of CI breakage: I did a scan for
usage of `git push` and other commands that implicitly use the persisted
credential, but couldn't find any. Even so, some implicit usages of
the persisted credentials may be lurking. Please ping ww@ if any issues
arise.
Cargo uses libgit2 by default. In uv, we gave up on libgit2 entirely and
always call out to the git CLI because it is much more reliable. This is
part of my attempt to reduce flakes in `rust-ci-full`.
Since https://github.com/openai/codex/pull/21255, `rust-ci-full` has
been failing due to a missing `bwrap`.
```
thread 'main' panicked at linux-sandbox/src/launcher.rs:43:13:
bubblewrap is unavailable: no system bwrap was found on PATH and no bundled codex-resources/bwrap binary was found next to the Codex executable
```
Since the happy path is now to use the system binary, let's ensure
that's installed.
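On Ubuntu runners that is one extra workflow step (a sketch; the Ubuntu
package name is `bubblewrap`):

```yaml
- name: Install bubblewrap
  run: sudo apt-get update && sudo apt-get install -y bubblewrap
```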
Commit `8d51826631` was necessary for the `bwrap` executable to be
discoverable when the working directory is `/`.
I ran `rust-ci-full` at
https://github.com/openai/codex/actions/runs/25528074506
---------
Co-authored-by: Codex <noreply@openai.com>
Fixes #20870.
## Summary
The feature request template currently links users to the README
`#contributing` anchor, but that anchor does not exist. This can confuse
users who are trying to understand contribution expectations before
filing a request.
This updates `.github/ISSUE_TEMPLATE/5-feature-request.yml` to point
`Contributing` at `docs/contributing.md`, matching the repository's
existing contribution guidance.
Issue forms should only reference labels that exist in the repository so
new reports receive the intended automatic labels.
This updates the CLI issue form to stop applying the missing `needs
triage` label, and changes the documentation issue form from `docs` to
the existing `documentation` label.
Fixes #21158
Fixes #21270.
The CLI bug report template defined `description` twice for the terminal
emulator field. Because duplicate YAML keys are ambiguous and parsers
generally keep the later value, the form could drop the multiplexer
guidance.
This combines that guidance with the terminal examples under a single
block scalar in `.github/ISSUE_TEMPLATE/3-cli.yml`.
This adds 7-day cooldowns to all of our Dependabot ecosystem blocks. Our
Dependabot runs will continue at the same cadence as before, but the
scheduled PRs will not suggest updates that are themselves fewer than 7
days old. This serves two purposes: to let dependencies "bake" for a bit
in terms of stability before we adopt them, and to give third-party
security services/tooling a chance to detect and revoke malware.
This should have no functional changes/consequences besides how rapidly
we get (non-security) updates. Dependabot security PRs can still be
scheduled and will bypass the cooldown.
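Per ecosystem block, the cooldown looks roughly like this (using
Dependabot's `cooldown` key; the `cargo` block is one example of the
several ecosystems updated):

```yaml
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "weekly"
    cooldown:
      default-days: 7
```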
This builds on top of https://github.com/openai/codex/pull/15828 by
ensuring that hash-pinned actions with version comments are fully
qualified, rather than referencing floating/mutable comments like "v7".
This makes actions management tools behave more consistently.
This shouldn't break anything, since it's comment only. But if it does,
ping ww@ 🙂
## Why
#21255 changed the Linux sandbox fallback so Codex can use a bundled
`codex-resources/bwrap` executable when no suitable system `bwrap` is
available. That lookup is relative to the native Codex executable
returned by
`std::env::current_exe()`, as implemented in
[`bundled_bwrap.rs`](9766d3d51c/codex-rs/linux-sandbox/src/bundled_bwrap.rs (L83-L93)).
The release already publishes a separate `bwrap` DotSlash output, but
the Linux `codex` DotSlash output still pointed at a single-binary
`.zst` payload. Running the `codex` DotSlash manifest only materializes
the native `codex` executable; it does not also create sibling files
from the separate `bwrap` manifest. The fallback path therefore needs
the Linux `codex` DotSlash artifact itself to include the real `bwrap`
executable at `codex-resources/bwrap`.
## What changed
- stage a Linux primary `codex-<target>-bundle.tar.zst` release artifact
containing `codex` and `codex-resources/bwrap`
- point the Linux `codex` DotSlash outputs at that bundle tarball
- leave the standalone `bwrap` DotSlash output in place for consumers
that want to fetch `bwrap` directly
## Verification
- `jq . .github/dotslash-config.json`
- Ruby YAML parse of `.github/workflows/rust-release.yml`
**Summary**
- Build Linux `bwrap` before the main release binaries.
- Export the release `bwrap` SHA-256 as `CODEX_BWRAP_SHA256` so the
Codex binary can verify the bundled fallback.
- Sign, stage, and upload `bwrap` alongside the primary Linux release
artifacts.
**Verification**
- YAML parse check for `.github/workflows/rust-release.yml`
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/21256).
* #21257
* __->__ #21256
**Summary**
- Add `codex-bwrap`, a standalone `bwrap` binary built from the existing
vendored bubblewrap sources.
- Remove the linked vendored bwrap path from `codex-linux-sandbox`;
runtime now prefers system `bwrap` and falls back to bundled
`codex-resources/bwrap`.
- Add bundled SHA-256 verification with missing/all-zero digest as the
dev-mode skip value, then exec the verified file through
`/proc/self/fd`.
- Keep `launcher.rs` focused on choosing and dispatching the preferred
launcher. Bundled lookup, digest verification, and bundled exec now live
in `linux-sandbox/src/bundled_bwrap.rs`; Bazel runfiles lookup lives in
`linux-sandbox/src/bazel_bwrap.rs`; shared argv/fd exec helpers live in
`linux-sandbox/src/exec_util.rs`.
- Teach Bazel tests to surface the Bazel-built `//codex-rs/bwrap:bwrap`
through `CARGO_BIN_EXE_bwrap`; `codex-linux-sandbox` only honors that
fallback in debug Bazel runfiles environments so release/user runtime
lookup stays tied to `codex-resources/bwrap`.
- Allow `codex-exec-server` filesystem helpers to preserve just the
Bazel bwrap/runfiles variables they need in debug Bazel builds, since
those helpers intentionally rebuild a small environment before spawning
`codex-linux-sandbox`.
- Verify the Bazel bwrap target in Linux release CI with a build-only
check. Running `bwrap --version` is too strong for GitHub runners
because bubblewrap still attempts namespace setup there.
**Verification**
- Latest update: `cargo test -p codex-linux-sandbox`
- Latest update: `just fix -p codex-linux-sandbox`
- `cargo check --target x86_64-unknown-linux-gnu -p codex-linux-sandbox`
could not run locally because this macOS machine does not have
`x86_64-linux-gnu-gcc`; GitHub Linux Bazel CI is expected to cover the
Linux-only modules.
- Earlier in this PR: `cargo test -p codex-bwrap`
- Earlier in this PR: `cargo test -p codex-exec-server`
- Earlier in this PR: `cargo check --release -p codex-exec-server`
- Earlier in this PR: `just fix -p codex-linux-sandbox -p
codex-exec-server`
- Earlier in this PR: `bazel test --nobuild
//codex-rs/linux-sandbox:linux-sandbox-all-test
//codex-rs/core:core-all-test
//codex-rs/exec-server:exec-server-file_system-test
//codex-rs/app-server:app-server-all-test` (analysis completed; Bazel
then refuses to run tests under `--nobuild`)
- Earlier in this PR: `bazel build --nobuild //codex-rs/bwrap:bwrap`
- Prior to this update: `just bazel-lock-update`, `just
bazel-lock-check`, and YAML parse check for
`.github/workflows/bazel.yml`
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/21255).
* #21257
* #21256
* __->__ #21255
## Summary
This is the first PR in the V8 in-process sandboxing rollout.
It adds the build-system and Rust feature plumbing needed to support
sandboxed V8 builds, then enables sandboxing by default for the
source-built Bazel V8 path that we control directly. It deliberately
keeps the published `rusty_v8` artifact workflows on their current
non-sandboxed contract so this PR can land and ship independently before
we change any released artifacts.
## Rollout plan
- [x] **PR 1: land sandbox plumbing and default source-built Bazel V8 to
sandboxed mode**
- [ ] **PR 2: publish sandbox-enabled release artifacts and add
compatibility validation**
- Produce sandboxed artifact pairs for every released Cargo target that
does not already use the source-built Bazel path.
- Add CI coverage that consumes those sandboxed artifacts and verifies:
- `codex-v8-poc` reports sandbox enabled
- `codex-code-mode` builds/tests against the sandboxed path
- [ ] **PR 3: switch release consumers to sandboxed artifacts by
default**
- Update released artifact selectors/checksums.
- Enable the Rust `v8_enable_sandbox` feature in the default release
path.
- Make the sandboxed artifact family the normal path for published
builds.
- [ ] **PR 4: remove rollout-only compatibility paths**
- Remove the temporary non-sandbox release compatibility config once the
new default has shipped and baked.
- Keep the invariant tests permanently.
## Why
Bazel CI was not actually exercising some sharded Rust integration-test
targets on macOS. The `rules_rust` sharding wrapper expects a symlink
runfiles tree, but this repo runs Bazel with `--noenable_runfiles`. In
that configuration the wrapper could fail to find the generated test
binary, produce an empty test list, and exit successfully. That made
targets such as `//codex-rs/core:core-all-test` look green even when
Cargo CI could still catch failures in the same Rust tests.
The coverage gap appears to have been introduced by
[#18082](https://github.com/openai/codex/pull/18082), which enabled
rules_rust native sharding on `//codex-rs/core:core-all-test` and the
other large Rust test labels. The manifest-runfiles setup itself
predates that change in
[#10098](https://github.com/openai/codex/pull/10098), but #18082 is
where the affected integration tests started running through the
incompatible rules_rust sharding wrapper.
[#18913](https://github.com/openai/codex/pull/18913) fixed the same
class of issue for wrapped unit-test shards, but integration-test shards
were still going through the rules_rust wrapper until this PR.
We still do not have the V8/code-mode pieces stable under the Bazel CI
cross-compile setup, so this keeps those tests out of Bazel while
restoring coverage for the rest of the sharded Rust integration suites.
Cargo CI remains responsible for V8/code-mode coverage for now.
This change did uncover a real failing core test on `main`:
`approved_folder_write_request_permissions_unblocks_later_apply_patch`.
That fix is split into
[#21060](https://github.com/openai/codex/pull/21060), which enables the
`apply_patch` tool in the test, teaches the aggregate core test binary
to dispatch the sandboxed filesystem helper, canonicalizes the macOS
temp patch target, and isolates the core test harness from managed
local/enterprise config. Keeping that fix separate lets this PR stay
focused on restoring Bazel coverage while documenting the first failure
it exposed.
## What changed
- Build sharded Rust integration tests as manual `*-bin` binaries and
run them through the existing manifest-aware `workspace_root_test`
launcher.
- Keep Bazel sharding on the launcher target so Rust test cases are
still distributed by stable test-name hashing.
- Configure Bazel CI to skip Rust tests whose names contain
`suite::code_mode::`.
- Exclude the standalone `codex-rs/code-mode` and `codex-rs/v8-poc`
unit-test targets from `bazel.yml`.
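The name-based skip can be sketched like this (the env var name comes from this PR; the test list and the filtering mechanics are illustrative):

```shell
# Drop any discovered test whose name contains the configured substring.
CODEX_BAZEL_TEST_SKIP_FILTERS="suite::code_mode::"
all_tests="suite::code_mode::smoke
suite::request_permissions_tool::basic"
kept="$(printf '%s\n' "$all_tests" | grep -v -F "$CODEX_BAZEL_TEST_SKIP_FILTERS")"
echo "$kept"
```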
## Verification
- `bazel query --output=build //codex-rs/core:core-all-test` now shows
`workspace_root_test` wrapping `//codex-rs/core:core-all-test-bin`.
- `bazel test --test_output=all --nocache_test_results
--test_sharding_strategy=disabled //codex-rs/core:core-all-test
--test_filter=suite::request_permissions_tool::approved_folder_write_request_permissions_unblocks_later_apply_patch`
runs the actual Rust test body and passes.
- `bazel test --test_output=errors --nocache_test_results
--test_env=CODEX_BAZEL_TEST_SKIP_FILTERS=suite::code_mode::
//codex-rs/core:core-all-test` runs the sharded target with code-mode
skipped and passes overall locally, with one flaky attempt retried by
the existing `flaky = True` setting.
## Why
The automated issue labeler needs more precise area labels for newly
opened GitHub issues so triage can distinguish new Codex app and agent
feature surfaces without falling back to broad labels.
## What Changed
- Added labeler prompt entries for `computer-use`, `browser`, `memory`,
`imagen`, `remote`, `performance`, `automations`, and `pets` in
`.github/workflows/issue-labeler.yml`.
- Updated the agent-area guidance so `memory` is used for agentic memory
storage/retrieval and `performance` is used for slow behavior, high
memory utilization, and leaks.
- Expanded the fallback `agent` guidance so Codex prefers the new
specific labels when applicable.
## Verification
- Parsed `.github/workflows/issue-labeler.yml` with `yq e '.'`.
- Ran `git diff --check` for the workflow change.
## Why
#20585 moved the Windows Bazel test job to the cross-compile path, but
the Windows Bazel clippy and verify-release-build jobs were still using
the native Windows/MSVC-host fallback. Those two jobs became the slowest
Windows PR legs, even though both provide build-only signal and do not
need to execute the resulting binaries.
## What Changed
- Switches the Windows Bazel clippy job from
`--windows-msvc-host-platform` to `--windows-cross-compile`, so clippy
build actions use Linux RBE while still targeting
`x86_64-pc-windows-gnullvm`.
- Switches the Windows Bazel verify-release-build job to
`--windows-cross-compile` as well. This job only compiles
`cfg(not(debug_assertions))` Rust code under `fastbuild`, so it does not
need a native Windows build host.
- Keeps the old `--skip_incompatible_explicit_targets` behavior only for
fork/community PRs without `BUILDBUDDY_API_KEY`, where `run-bazel-ci.sh`
falls back to the local Windows MSVC-host shape.
- Adds `--windows-cross-compile` support to
`.github/scripts/run-bazel-query-ci.sh`, so target-discovery queries
select the same `ci-windows-cross` config as the subsequent build.
- Threads that option through `scripts/list-bazel-clippy-targets.sh` so
the Windows clippy job discovers targets under the same platform shape
as the subsequent clippy build.
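The flag-to-config mapping can be sketched as follows (the function and flag handling are illustrative; the real `run-bazel-ci.sh` and `run-bazel-query-ci.sh` logic is more involved):

```shell
# Map the --windows-cross-compile option onto the ci-windows-cross Bazel
# config so query and build select the same platform shape.
select_config() {
  config="ci-windows"
  for arg in "$@"; do
    case "$arg" in
      --windows-cross-compile) config="ci-windows-cross" ;;
    esac
  done
  echo "$config"
}
select_config --windows-cross-compile
```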
## Verification
Local checks:
```shell
bash -n .github/scripts/run-bazel-query-ci.sh
bash -n scripts/list-bazel-clippy-targets.sh
ruby -e 'require "yaml"; YAML.load_file(".github/workflows/bazel.yml"); puts "ok"'
RUNNER_OS=Linux ./scripts/list-bazel-clippy-targets.sh | grep -c -- '-windows-cross-bin$'
RUNNER_OS=Windows ./scripts/list-bazel-clippy-targets.sh --windows-cross-compile | grep -c -- '-windows-cross-bin$'
```
The Linux target-list check reported `0` Windows-cross internal test
binaries, while the Windows cross target-list check reported `47`,
preserving the test-code clippy coverage shape from the existing Windows
job.
## Status
This is the Bazel PR-CI cross-compilation follow-up to #20485. It is
intentionally split from the Cargo/cargo-xwin release-build PoC so
#20485 can stay as the historical release-build exploration. The
unrelated async-utils test cleanup has been moved to #20686, so this PR
is focused on the Windows Bazel CI path.
The intended tradeoff is now explicit in `.github/workflows/bazel.yml`:
pull requests get the fast Windows cross-compiled Bazel test leg, while
post-merge pushes to `main` run both that fast cross leg and a fully
native Windows Bazel test leg. The native main-only job keeps full
V8/code-mode coverage and gets a 40-minute timeout because it is less
latency-sensitive than PR CI. All other Bazel jobs remain at 30 minutes.
## Why
Windows Bazel PR CI currently does the expensive part of the build on
Windows. A native Windows Bazel test job on `main` completed in about
28m12s, leaving very little headroom under the 30-minute job timeout and
making Windows the slowest PR signal.
#20485 showed that Windows cross-compilation can be materially faster
for Cargo release builds, but PR CI needs Bazel because Bazel owns our
test sharding, flaky-test retries, and integration-test layout. This PR
applies the same high-level shape we already use for macOS Bazel CI:
compile with remote Linux execution, then run platform-specific tests on
the platform runner.
The compromise is deliberately signal-aware: code-mode/V8 changes are
rare enough that PR CI can accept losing the direct V8/code-mode
smoke-test signal temporarily, while `main` still runs the native
Windows job post-merge to catch that class of regression. A follow-up PR
should investigate making the cross-built Windows gnullvm V8 archive
pass the direct V8/code-mode tests so this tradeoff can eventually go
away.
## What Changed
- Adds a `ci-windows-cross` Bazel config that targets
`x86_64-pc-windows-gnullvm`, uses Linux RBE for build actions, and keeps
`TestRunner` actions local on the Windows runner.
- Adds explicit Windows platform definitions for
`windows_x86_64_gnullvm`, `windows_x86_64_msvc`, and a bridge toolchain
that lets gnullvm test targets execute under the Windows MSVC host
platform.
- Updates the Windows Bazel PR test leg to opt into the cross-compile
path via `--windows-cross-compile` and `--remote-download-toplevel`.
- Adds a `test-windows-native-main` job that runs only for `push` events
on `refs/heads/main`, uses the native Windows Bazel path, includes
V8/code-mode smoke tests, and has `timeout-minutes: 40`.
- Keeps fork/community PRs without `BUILDBUDDY_API_KEY` on the previous
local Windows MSVC-host fallback, including
`--host_platform=//:local_windows_msvc` and `--jobs=8`.
- Preserves the existing integration-test shape on non-gnullvm
platforms, while generating Windows-cross wrapper targets only for
`windows_gnullvm`.
- Resolves `CARGO_BIN_EXE_*` values from runfiles at test runtime,
avoiding hard-coded Cargo paths and duplicate test runfiles.
- Extends the V8 Bazel patches enough for the
`x86_64-pc-windows-gnullvm` target and Linux remote execution path.
- Makes the Windows sandbox test cwd derive from `INSTA_WORKSPACE_ROOT`
at runtime when Bazel provides it, because cross-compiled binaries may
contain Linux compile-time paths.
- Keeps the direct V8/code-mode unit smoke tests out of the Windows
cross PR path for now while native Windows CI continues to cover them
post-merge.
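The runfiles-based `CARGO_BIN_EXE_*` resolution can be sketched like this (the directory layout under `_main` and the binary path are illustrative, not the exact launcher code):

```shell
# Resolve a test binary from the runfiles tree at test runtime instead of
# baking a Cargo target path in at compile time.
runfiles="/tmp/demo-runfiles"
mkdir -p "$runfiles/_main/codex-rs/cli"
: > "$runfiles/_main/codex-rs/cli/codex"
CARGO_BIN_EXE_codex="$runfiles/_main/codex-rs/cli/codex"
if [ -f "$CARGO_BIN_EXE_codex" ]; then
  echo "resolved codex from runfiles"
fi
rm -rf /tmp/demo-runfiles
```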
## Command Shape
The fast Windows PR test leg invokes the normal Bazel CI wrapper like
this:
```shell
./.github/scripts/run-bazel-ci.sh \
--print-failed-action-summary \
--print-failed-test-logs \
--windows-cross-compile \
--remote-download-toplevel \
-- \
test \
--test_tag_filters=-argument-comment-lint \
--test_verbose_timeout_warnings \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
-- \
//... \
-//third_party/v8:all \
-//codex-rs/code-mode:code-mode-unit-tests \
-//codex-rs/v8-poc:v8-poc-unit-tests
```
With the BuildBuddy secret available on Windows, the wrapper selects
`--config=ci-windows-cross` and appends the important Windows-cross
overrides after rc expansion:
```shell
--host_platform=//:rbe
--shell_executable=/bin/bash
--action_env=PATH=/usr/bin:/bin
--host_action_env=PATH=/usr/bin:/bin
--test_env=PATH=${CODEX_BAZEL_WINDOWS_PATH}
```
The native post-merge Windows job intentionally omits
`--windows-cross-compile` and does not exclude the V8/code-mode unit
targets:
```shell
./.github/scripts/run-bazel-ci.sh \
--print-failed-action-summary \
--print-failed-test-logs \
-- \
test \
--test_tag_filters=-argument-comment-lint \
--test_verbose_timeout_warnings \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
--build_metadata=TAG_windows_native_main=true \
-- \
//... \
-//third_party/v8:all
```
## Research Notes
The existing macOS Bazel CI config already uses the model we want here:
build actions run remotely with `--strategy=remote`, but `TestRunner`
actions execute on the macOS runner. This PR mirrors that pattern for
Windows with `--strategy=TestRunner=local`.
The important Bazel detail is that `rules_rust` is already targeting
`x86_64-pc-windows-gnullvm` for Windows Bazel PR tests. This PR changes
where the build actions execute; it does not switch the Bazel PR test
target to Cargo, `cargo-nextest`, or the MSVC release target.
Cargo release builds differ from this Bazel path for V8: the normal
Windows Cargo release target is MSVC, and `rusty_v8` publishes prebuilt
Windows MSVC `.lib.gz` archives. The Bazel PR path targets
`windows-gnullvm`; `rusty_v8` does not publish a prebuilt Windows
GNU/gnullvm archive, so this PR builds that archive in-tree. That
Linux-RBE-built gnullvm archive currently crashes in direct V8/code-mode
smoke tests, which is why the workflow keeps native Windows coverage on
`main`.
The less obvious Bazel detail is test wrapper selection. Bazel chooses
the Windows test wrapper (`tw.exe`) from the test action execution
platform, not merely from the Rust target triple. The outer
`workspace_root_test` therefore declares the default test toolchain and
uses the bridge toolchain above so the test action executes on Windows
while its inner Rust binary is built for gnullvm.
The V8 investigation exposed a Windows-client gotcha: even when an
action execution platform is Linux RBE, Bazel can still derive the
genrule shell path from the Windows client. That produced remote
commands trying to run `C:\Program Files\Git\usr\bin\bash.exe` on Linux
workers. The wrapper now passes `--shell_executable=/bin/bash` with
`--host_platform=//:rbe` for the Windows cross path.
The same Windows-client/Linux-RBE boundary also affected
`third_party/v8:binding_cc`: a multiline genrule command can carry CRLF
line endings into Linux remote bash, which failed as `$'\r'`. That
genrule now keeps the `sed` command on one physical shell line while
using an explicit Starlark join so the shell arguments stay readable.
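The CRLF failure mode is easy to demonstrate in isolation (the script contents are illustrative; the real case was a multiline genrule command):

```shell
# A script line that ends in \r makes the carriage return part of the
# command name under Linux bash, which fails with "$'\r': command not found".
printf 'true\r\n' > /tmp/crlf-demo.sh
failed=0
bash /tmp/crlf-demo.sh 2>/dev/null || failed=1
if [ "$failed" = 1 ]; then
  echo "CRLF line failed in Linux bash as expected"
fi
rm -f /tmp/crlf-demo.sh
```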
## Verification
Local checks included:
```shell
bash -n .github/scripts/run-bazel-ci.sh
bash -n workspace_root_test_launcher.sh.tpl
ruby -e "require %q{yaml}; YAML.load_file(%q{.github/workflows/bazel.yml}); puts %q{ok}"
RUNNER_OS=Linux ./scripts/list-bazel-clippy-targets.sh
RUNNER_OS=Windows ./scripts/list-bazel-clippy-targets.sh
RUNNER_OS=Linux ./tools/argument-comment-lint/list-bazel-targets.sh
RUNNER_OS=Windows ./tools/argument-comment-lint/list-bazel-targets.sh
```
The Linux clippy and argument-comment target lists contain zero
`*-windows-cross-bin` labels, while the Windows lists still include 47
Windows-cross internal test binaries.
CI evidence:
- Baseline native Windows Bazel test on `main`: success in about 28m12s,
https://github.com/openai/codex/actions/runs/25206257208/job/73907325959
- Green Windows-cross Bazel run on the split PR before adding the
main-only native leg: Windows test 9m16s, Windows release verify 5m10s,
Windows clippy 4m43s,
https://github.com/openai/codex/actions/runs/25231890068
- The latest SHA adds the explicit PR-vs-main tradeoff in `bazel.yml`;
CI is rerunning on that focused diff.
## Follow-Up
A subsequent PR should investigate making a cross-built Windows binary
work with V8/code-mode enabled. Likely options are either making the
Linux-RBE-built `windows-gnullvm` V8 archive correct at runtime, or
evaluating whether a Bazel MSVC target/toolchain can reuse the same
prebuilt MSVC `rusty_v8` archive shape that Cargo release builds already
use.
## Why
#20271 raised the timeout in `rust-release.yml` to `90` minutes, but it
did not update the reusable Windows workflow in
`rust-release-windows.yml`. As a result, the Windows release compile
jobs were still capped at `60` minutes and the `windows-x64` primary
build could continue timing out.
We are keeping the existing `90`-minute timeout in `rust-release.yml`.
That increase was still directionally correct because the top-level
release build benefits from extra headroom; the mistake was assuming it
also covered the reusable Windows jobs.
## What Changed
- increase the reusable Windows release workflow timeouts in
`rust-release-windows.yml` from `60` minutes to `90` minutes
- update the comment in `rust-release.yml` so it no longer implies that
the top-level timeout covers the Windows reusable jobs
Addresses #19856
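A quick way to audit this class of gap (the sample YAML is illustrative; the point is that jobs inside a reusable workflow need their own `timeout-minutes` because the caller's timeout does not reach them):

```shell
# Check that each job in a reusable workflow declares its own timeout.
cat > /tmp/reusable-demo.yml <<'EOF'
jobs:
  build-windows:
    timeout-minutes: 90
EOF
timeout_line="$(grep 'timeout-minutes' /tmp/reusable-demo.yml)"
echo "$timeout_line"
rm -f /tmp/reusable-demo.yml
```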
## Summary
- Clarifies that external code contributions are invitation-only.
- Points contributors to `docs/contributing.md` for the full policy
instead of using the previous warning phrasing.
## Why
All Bazel CI jobs are currently blocked in the `setup-bazelisk` step
while trying to download Bazelisk.
[`bazelbuild/setup-bazelisk`](https://github.com/bazelbuild/setup-bazelisk)
is archived, and its README now recommends migrating to
[`bazel-contrib/setup-bazel`](https://github.com/bazel-contrib/setup-bazel),
so leaving our workflows on the archived action leaves CI exposed to
exactly this sort of outage.
Because `v8-canary` now consumes the shared local `setup-bazel-ci`
action, that workflow also needs to trigger when the action changes.
Without that follow-up, Bazel bootstrap regressions specific to the V8
canary path could be skipped by the workflow path filters.
## What Changed
- Switched `.github/actions/setup-bazel-ci/action.yml` from
`bazelbuild/setup-bazelisk` to `bazel-contrib/setup-bazel`, pinned to
`0.19.0`.
- Left `bazelisk-version` unset so GitHub-hosted runners can use their
preinstalled Bazelisk instead of downloading `1.x` at job start.
- Updated `.github/workflows/rusty-v8-release.yml` and
`.github/workflows/v8-canary.yml` to use the shared `setup-bazel-ci`
action instead of referencing `setup-bazelisk` directly.
- Added `.github/actions/setup-bazel-ci/**` to the `pull_request` and
`push` path filters in `.github/workflows/v8-canary.yml` so changes to
the shared Bazel setup action still run the canary workflow.
- Kept the existing repository-cache and Windows-specific Bazel setup
logic intact.
This keeps Bazel version selection anchored by `.bazelversion` while
removing the failing dependency on the archived setup action.
## Verification
- Searched `.github/` to confirm there are no remaining `setup-bazelisk`
references.
- Parsed the updated workflow and action YAML locally with Ruby's
`YAML.load_file`.
## Why
The `build-test` workflow stages a representative `codex` npm tarball by
asking `scripts/stage_npm_packages.py` to look up a past `rust-release`
run for a hardcoded release version. That started failing in CI because
the representative version in `.github/workflows/ci.yml` was stale:
- the workflow was still using `0.115.0`
- `stage_npm_packages.py` resolves native artifacts by looking for a
`rust-release` run on the `rust-v<version>` branch
- that lookup no longer found a matching run for `rust-v0.115.0`, so the
smoke test failed before it could stage the package
This PR makes that smoke test depend on a known-good recent release run
instead of an older branch lookup that is no longer reliable.
## What Changed
- Updated the representative release version in
`.github/workflows/ci.yml` from `0.115.0` to `0.125.0`.
- Added an explicit `WORKFLOW_URL` pointing at a recent successful
`rust-release` run:
`https://github.com/openai/codex/actions/runs/24901475298`.
- Passed that URL to `scripts/stage_npm_packages.py` via
`--workflow-url` so the job can reuse the expected native artifacts
directly instead of relying on `gh run list --branch rust-v<version>` to
discover them.
That keeps the npm staging smoke test representative while making it
less sensitive to older release branch history disappearing from the
GitHub Actions lookup path.
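The two lookup modes can be sketched like this (the `--workflow-url` flag name comes from this PR; the `gh` invocation and variable names are illustrative and not executed here):

```shell
# Prefer an explicitly pinned workflow run; fall back to branch discovery.
version="0.125.0"
workflow_url="https://github.com/openai/codex/actions/runs/24901475298"
if [ -n "$workflow_url" ]; then
  echo "stage from pinned run: $workflow_url"
else
  echo "discover run via: gh run list --branch rust-v$version"
fi
```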
## Verification
- Inspected the failing CI log from `build-test` and confirmed the
failure came from `scripts/stage_npm_packages.py` being unable to
resolve `rust-v0.115.0`.
- Confirmed that
`https://github.com/openai/codex/actions/runs/24901475298` is a
successful `rust-release` run for `rust-v0.125.0`.
## Why
For npm/Bun-managed installs, the update prompt was treating the latest
GitHub release as ready to install. During the `0.124.0` release, GitHub
and npm visibility were not atomic: the root npm wrapper could become
visible before the npm registry marked that version as the package
`latest`. That left a window where users could be prompted to upgrade
before npm was ready for the release.
## What changed
- Keep GitHub Releases as the candidate latest-version source for
npm/Bun installs, but only write the existing `version.json` cache after
npm registry metadata proves that same root version is ready.
- Add `codex-rs/tui/src/npm_registry.rs` to validate npm readiness by
checking `dist-tags.latest` and root package `dist` metadata for the
GitHub candidate version.
- Move version parsing helpers into
`codex-rs/tui/src/update_versions.rs` so that logic can be tested
without compiling the release-only `updates.rs` module under tests.
- Update `.github/workflows/rust-release.yml` so the six known platform
tarballs and the other npm tarballs all publish before the root
`@openai/codex` wrapper, while the SDK publishes after the root package
it depends on.
I think raising it to 45 minutes in
https://github.com/openai/codex/pull/19578 was a mistake for the reasons
explained in the comments in the code. Instead, we defend against
timeouts by increasing the number of shards in `app-server-all-test` so
that a "true failure" that gets retried 3x does not take as much
wall-clock time.
Unfortunately, if most of the build graph is invalidated such that there
are few cache hits, the Windows Bazel build for all the tests often
takes more than `30` minutes, so this PR increases the timeout to `45`
minutes until we set up distributed builds.
## Summary
- update Codex issue automation to pin `openai/codex-action` to
`5c3f4ccdb2b8790f73d6b21751ac00e602aa0c02`, the commit for `v1.7`
- keep the release intent visible with `# v1.7` comments beside the hash
pins
## Test plan
- `git diff --check`
- `yq e '.' .github/workflows/issue-labeler.yml`
- `yq e '.' .github/workflows/issue-deduplicator.yml`
---------
Co-authored-by: Codex <noreply@openai.com>
## Why
The VS Code extension and desktop app do not need the full TUI binary,
and `codex-app-server` is materially smaller than standalone `codex`. We
still want to publish it as an official release artifact, but building
it by tacking another `--bin` onto the existing release `cargo build`
invocations would lengthen those jobs.
This change keeps `codex-app-server` on its own release bundle so it can
build in parallel with the existing `codex` and helper bundles.
## What changed
- Made `.github/workflows/rust-release.yml` bundle-aware so each macOS
and Linux MUSL target now builds either the existing `primary` bundle
(`codex` and `codex-responses-api-proxy`) or a standalone `app-server`
bundle (`codex-app-server`).
- Preserved the historical artifact names for the primary macOS/Linux
bundles so `scripts/stage_npm_packages.py` and
`codex-cli/scripts/install_native_deps.py` continue to find release
assets under the paths they already expect, while giving the new
app-server artifacts distinct names.
- Added a matching `app-server` bundle to
`.github/workflows/rust-release-windows.yml`, and updated the final
Windows packaging job to download, sign, stage, and archive
`codex-app-server.exe` alongside the existing release binaries.
- Generalized the shared signing actions in
`.github/actions/linux-code-sign/action.yml`,
`.github/actions/macos-code-sign/action.yml`, and
`.github/actions/windows-code-sign/action.yml` so each workflow row
declares its binaries once and reuses that list for build, signing, and
staging.
- Added `codex-app-server` to `.github/dotslash-config.json` so releases
also publish a generated DotSlash manifest for the standalone app-server
binary.
- Kept the macOS DMG focused on the existing `primary` bundle;
`codex-app-server` ships as regular standalone archives plus a DotSlash
manifest.
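The bundle-aware build step can be sketched like this (the matrix value, bundle names, and binary lists mirror this PR's description, but the variable plumbing is illustrative):

```shell
# Pick the --bin list for a release job from its matrix "bundle" value so
# the app-server bundle builds in parallel with the primary bundle.
bundle="app-server"   # would come from the workflow matrix
case "$bundle" in
  primary)    bins="codex codex-responses-api-proxy" ;;
  app-server) bins="codex-app-server" ;;
esac
cmd="cargo build --release"
for b in $bins; do cmd="$cmd --bin $b"; done
echo "$cmd"
```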
## Verification
- Parsed the modified workflow and action YAML files locally with
`python3` + `yaml.safe_load(...)`.
- Parsed `.github/dotslash-config.json` locally with `python3` +
`json.loads(...)`.
- Reviewed the resulting release matrices, artifact names, and packaging
paths to confirm that `codex-app-server` is built separately on macOS,
Linux MUSL, and Windows, while the existing npm staging and Windows
`codex` zip bundling contracts remain intact.
## Why
We already prefer shipping the MUSL Linux builds, and the in-repo
release consumers resolve Linux release assets through the MUSL targets.
Keeping the GNU release jobs around adds release time and extra assets
without serving the paths we actually publish and consume.
This is also easier to reason about as a standalone change: future work
can point back to this PR as the intentional decision to stop publishing
`x86_64-unknown-linux-gnu` and `aarch64-unknown-linux-gnu` release
artifacts.
## What changed
- Removed the `x86_64-unknown-linux-gnu` and `aarch64-unknown-linux-gnu`
entries from the `build` matrix in `.github/workflows/rust-release.yml`.
- Added a short comment in that matrix documenting that Linux release
artifacts intentionally ship MUSL-linked binaries.
## Verification
- Reviewed `.github/workflows/rust-release.yml` to confirm that the
release workflow now only builds Linux release artifacts for
`x86_64-unknown-linux-musl` and `aarch64-unknown-linux-musl`.