Mirror of https://github.com/openai/codex.git, synced 2026-04-26 07:35:29 +00:00
b4aefff87f11692db53bf5518c5d03c90906cf8e

4 Commits

---

fce0f76d57
build: migrate argument-comment-lint to a native Bazel aspect (#16106)

## Why

`argument-comment-lint` had become a PR bottleneck because the repo-wide lane was still effectively running a `cargo dylint`-style flow across the workspace instead of reusing Bazel's Rust dependency graph. That kept the lint enforced, but it threw away the main benefit of moving this job under Bazel in the first place: metadata reuse and cacheable per-target analysis in the same shape as Clippy.

This change moves the repo-wide lint onto a native Bazel Rust aspect so Linux and macOS can lint `codex-rs` without rebuilding the world crate-by-crate through the wrapper path.

## What Changed

- add a nightly Rust toolchain with `rustc-dev` for Bazel and a dedicated crate-universe repo for `tools/argument-comment-lint`
- add `tools/argument-comment-lint/driver.rs` and `tools/argument-comment-lint/lint_aspect.bzl` so Bazel can run the lint as a custom `rustc_driver` (a driver sketch follows this entry)
- switch repo-wide `just argument-comment-lint` and the Linux/macOS `rust-ci` lanes to `bazel build --config=argument-comment-lint //codex-rs/...`
- keep the Python/DotSlash wrappers as the package-scoped fallback path and as the current Windows CI path
- gate the Dylint entrypoint behind a `bazel_native` feature so the Bazel-native library avoids the `dylint_*` packaging stack
- update the aspect runtime environment so the driver can locate `rustc_driver` correctly under remote execution
- keep the dedicated `tools/argument-comment-lint` package tests and wrapper unit tests in CI so the source and packaged entrypoints remain covered

## Verification

- `python3 -m unittest discover -s tools/argument-comment-lint -p 'test_*.py'`
- `cargo test` in `tools/argument-comment-lint`
- `bazel build //tools/argument-comment-lint:argument-comment-lint-driver --@rules_rust//rust/toolchain/channel=nightly`
- `bazel build --config=argument-comment-lint //codex-rs/utils/path-utils:all`
- `bazel build --config=argument-comment-lint //codex-rs/rollout:rollout`

---

Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16106).

* #16120
* __->__ #16106
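
To make the wrapper-to-aspect move above concrete, here is a minimal sketch of the kind of `rustc_driver` binary that `tools/argument-comment-lint/driver.rs` plus `lint_aspect.bzl` wire into the build. This is not the repo's actual code: the callback type is hypothetical, and the unstable `rustc_driver` API (shown here in its long-lived `RunCompiler` shape) drifts between nightlies, which is part of why the PR pins a nightly toolchain with `rustc-dev`.

```rust
#![feature(rustc_private)]
extern crate rustc_driver;

// Hypothetical callbacks type; a real driver would register the
// argument-comment lint pass in its Callbacks hooks.
struct ArgumentCommentCallbacks;

impl rustc_driver::Callbacks for ArgumentCommentCallbacks {
    // The default hooks compile the crate as-is; the lint's diagnostics
    // then surface per target, in the same cacheable shape as Clippy.
}

fn main() {
    // The Bazel aspect invokes this binary in place of plain rustc,
    // forwarding the full rustc command line for one target at a time.
    let args: Vec<String> = std::env::args().collect();
    std::process::exit(rustc_driver::catch_with_exit_code(|| {
        rustc_driver::RunCompiler::new(&args, &mut ArgumentCommentCallbacks).run()
    }));
}
```

Because the binary links the shared `rustc_driver` library, it also has to locate that library at runtime, which is what the aspect's runtime-environment update in the list above addresses for remote execution.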

---

2616c7cf12
ci: add Bazel clippy workflow for codex-rs (#15955)
## Why
`bazel.yml` already builds and tests the Bazel graph, but `rust-ci.yml`
still runs `cargo clippy` separately. This PR starts the transition to a
Bazel-backed lint lane for `codex-rs` so we can eventually replace the
duplicate Rust build, test, and lint work with Bazel while explicitly
keeping the V8 Bazel path out of scope for now.
To make that lane practical, the workflow also needs to look like the
Bazel job we already trust. That means sharing the common Bazel setup
and invocation logic instead of hand-copying it, and covering the arm64
macOS path in addition to Linux.
Landing the workflow green also required fixing the first lint findings
that Bazel surfaced and adding the matching local entrypoint.
## What changed
- add a reusable `build:clippy` config to `.bazelrc` (sketched after this
list) and export `codex-rs/clippy.toml` from `codex-rs/BUILD.bazel` so
Bazel can run the repository's existing Clippy policy
- add `just bazel-clippy` so the local developer entrypoint matches the
new CI lane
- extend `.github/workflows/bazel.yml` with a dedicated Bazel clippy job
for `codex-rs`, scoped to `//codex-rs/... -//codex-rs/v8-poc:all`
- run that clippy job on Linux x64 and arm64 macOS
- factor the shared Bazel workflow setup into
`.github/actions/setup-bazel-ci/action.yml` and the shared Bazel
invocation logic into `.github/scripts/run-bazel-ci.sh` so the clippy
and build/test jobs stay aligned
- fix the first Bazel-clippy findings needed to keep the lane green,
including the cross-target `cmsghdr::cmsg_len` normalization in
`codex-rs/shell-escalation/src/unix/socket.rs` and the no-`voice-input`
dead-code warnings in `codex-rs/tui` and `codex-rs/tui_app_server`
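
For reference, a minimal sketch of what the reusable `build:clippy` config can look like, assuming the stock `rust_clippy_aspect` and `clippy_checks` output group that rules_rust ships (the repo's actual `.bazelrc` lines may differ):

```
# .bazelrc (sketch)
build:clippy --aspects=@rules_rust//rust:defs.bzl%rust_clippy_aspect
build:clippy --output_groups=+clippy_checks
```

With that in place, `just bazel-clippy` presumably reduces to `bazel build --config=clippy -- //codex-rs/... -//codex-rs/v8-poc:all`, keeping the local entrypoint and the CI lane identical.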
## Verification
- `just bazel-clippy`
- `RUNNER_OS=macOS ./.github/scripts/run-bazel-ci.sh -- build
--config=clippy --build_metadata=COMMIT_SHA=local-check
--build_metadata=TAG_job=clippy -- //codex-rs/...
-//codex-rs/v8-poc:all`
- `bazel build --config=clippy
//codex-rs/shell-escalation:shell-escalation`
- `CARGO_TARGET_DIR=/tmp/codex4-shell-escalation-test cargo test -p
codex-shell-escalation`
- `ruby -e 'require "yaml";
YAML.load_file(".github/workflows/bazel.yml");
YAML.load_file(".github/actions/setup-bazel-ci/action.yml")'`
## Notes
- `CARGO_TARGET_DIR=/tmp/codex4-tui-app-server-test cargo test -p
codex-tui-app-server` still hits existing guardian-approvals test and
snapshot failures unrelated to this PR's Bazel-clippy changes.
Related: #15954

---

42e22f3bde
Add feature-gated freeform js_repl core runtime (#10674)

## Summary

This PR adds an **experimental, feature-gated `js_repl` core runtime** so models can execute JavaScript in a persistent REPL context across tool calls. The implementation integrates with existing feature gating, tool registration, prompt composition, config/schema docs, and tests.

## What changed

- Added new experimental feature flag: `features.js_repl`.
- Added freeform `js_repl` tool and companion `js_repl_reset` tool.
- Gated tool availability behind `Feature::JsRepl`.
- Added conditional prompt-section injection for JS REPL instructions via marker-based prompt processing.
- Implemented JS REPL handlers, including freeform parsing and pragma support (timeout/reset controls).
- Added runtime resolution order for Node (sketched after this entry):
  1. `CODEX_JS_REPL_NODE_PATH`
  2. `js_repl_node_path` in config
  3. `PATH`
- Added JS runtime assets/version files and updated docs/schema.

## Why

This enables richer agent workflows that require incremental JavaScript execution with preserved state, while keeping rollout safe behind an explicit feature flag.

## Testing

Coverage includes:

- Feature-flag gating behavior for tool exposure.
- Freeform parser/pragma handling edge cases.
- Runtime behavior (state persistence across calls and top-level `await` support).

## Usage

```toml
[features]
js_repl = true
```

Optional runtime override:

- `CODEX_JS_REPL_NODE_PATH`, or
- `js_repl_node_path` in config.

#### [git stack](https://github.com/magus/git-stack-cli)

- 👉 `1` https://github.com/openai/codex/pull/10674
- ⏳ `2` https://github.com/openai/codex/pull/10672
- ⏳ `3` https://github.com/openai/codex/pull/10671
- ⏳ `4` https://github.com/openai/codex/pull/10673
- ⏳ `5` https://github.com/openai/codex/pull/10670
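
As a sketch of the Node resolution order listed above: the `js_repl_node_path` key and `CODEX_JS_REPL_NODE_PATH` variable come from this PR, but the type and function names below are hypothetical, not the crate's actual code.

```rust
use std::path::PathBuf;

struct Config {
    // The documented `js_repl_node_path` config key.
    js_repl_node_path: Option<PathBuf>,
}

// Hypothetical helper mirroring the documented resolution order:
// env var first, then config, then fall back to `node` on PATH.
fn resolve_node_binary(config: &Config) -> PathBuf {
    if let Ok(from_env) = std::env::var("CODEX_JS_REPL_NODE_PATH") {
        return PathBuf::from(from_env); // 1. explicit env override
    }
    if let Some(from_config) = &config.js_repl_node_path {
        return from_config.clone(); // 2. `js_repl_node_path` in config
    }
    PathBuf::from("node") // 3. defer to PATH lookup at spawn time
}

fn main() {
    let config = Config { js_repl_node_path: None };
    println!("node binary: {}", resolve_node_binary(&config).display());
}
```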

---

2a06d64bc9
feat: add support for building with Bazel (#8875)

This PR configures Codex CLI so it can be built with [Bazel](https://bazel.build) in addition to Cargo. The `.bazelrc` includes configuration so that remote builds can be done using [BuildBuddy](https://www.buildbuddy.io). If you are familiar with Bazel, things should work as you expect, e.g., run `bazel test //... --keep-going` to run all the tests in the repo, but we have also added some new aliases in the `justfile` for convenience:

- `just bazel-test` to run tests locally
- `just bazel-remote-test` to run tests remotely (currently, the remote build is for x86_64 Linux regardless of your host platform). Note we are currently seeing the following test failures in the remote build, so we still need to figure out what is happening here:

  ```
  failures:
      suite::compact::manual_compact_twice_preserves_latest_user_messages
      suite::compact_resume_fork::compact_resume_after_second_compaction_preserves_history
      suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view
  ```

- `just build-for-release` to build release binaries for all platforms/architectures remotely

To set up remote execution:

- [Create a BuildBuddy account](https://app.buildbuddy.io/) (OpenAI employees should also request org access at https://openai.buildbuddy.io/join/ with their `@openai.com` email address.)
- [Copy your API key](https://app.buildbuddy.io/docs/setup/) to `~/.bazelrc` (add the line `build --remote_header=x-buildbuddy-api-key=YOUR_KEY`)
- Use `--config=remote` in your `bazel` invocations (or add `common --config=remote` to your `~/.bazelrc`, or use the `just` commands); a combined `~/.bazelrc` sketch appears at the end of this entry

## CI

In terms of CI, this PR introduces `.github/workflows/bazel.yml`, which uses Bazel to run the tests _locally_ on Mac and Linux GitHub runners (we are working on supporting Windows, but that is not ready yet). Note that the failures we are seeing in `just bazel-remote-test` do not occur on these GitHub CI jobs, so everything in `.github/workflows/bazel.yml` is green right now.

The `bazel.yml` workflow uses extra config in `.github/workflows/ci.bazelrc` so that macOS CI jobs build _remotely_ on Linux hosts (using the `docker://docker.io/mbolin491/codex-bazel` Docker image declared in the root `BUILD.bazel`), cross-compiling the macOS artifacts. These artifacts are then downloaded to GitHub's macOS runner so the tests can be executed natively. This is the relevant config that enables this:

```
common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
```

Because of the remote caching benefits we get from BuildBuddy, these new CI jobs can be extremely fast! For example, consider these two jobs that ran all the tests on Linux x86_64:

- Bazel: 1m37s https://github.com/openai/codex/actions/runs/20861063212/job/59940545209?pr=8875
- Cargo: 9m20s https://github.com/openai/codex/actions/runs/20861063192/job/59940559592?pr=8875

For now, we will continue to run both the Bazel and Cargo jobs for PRs, but once we add support for Windows and running Clippy, we should be able to cut over to using Bazel exclusively for PRs, which should still speed things up considerably. We will probably continue to run the Cargo jobs post-merge for commits that land on `main` as a sanity check. Release builds will also continue to be done by Cargo for now.
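
Putting the remote-execution setup steps together, a contributor's personal `~/.bazelrc` would contain something like the following (the API key line is the one from the list above; defaulting to `--config=remote` is optional):

```
# ~/.bazelrc (sketch)
build --remote_header=x-buildbuddy-api-key=YOUR_KEY
# optional: make every invocation remote by default
common --config=remote
```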
Earlier attempt at this PR: https://github.com/openai/codex/pull/8832
Earlier attempt to add support for Buck2, now abandoned: https://github.com/openai/codex/pull/8504

---------

Co-authored-by: David Zbarsky <dzbarsky@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>