Compare commits

...

87 Commits

Author SHA1 Message Date
Michael Bolin
c1ff22bc6c feat: This PR adds a first-party curl-based installer that downloads only the binaries needed for the current platform and wires it into Codex’s existing update UX.
It introduces a dedicated `installer/` folder for non-Rust install/update assets, a `CODEX_HOME`-aware on-disk layout, and a wrapper that makes helper CLIs available to Codex without globally polluting the user’s shell.

## Why

Today the recommended install path is `npm i -g @openai/codex`, but the npm package bundles native binaries for all platforms (and `rg`), leading to very large installs (hundreds of MB unpacked) even though a single platform binary is much smaller.

Homebrew avoids this on macOS by downloading only the matching artifact. This PR brings that same property to a cross-platform one-liner:

```sh
curl -fsSL https://raw.githubusercontent.com/openai/codex/main/installer/install.sh | bash
```

Key goals:

- Download only the user’s platform artifacts.
- Keep curl-managed installs isolated under `CODEX_HOME`.
- Preserve compatibility with npm/brew installs (shadowing is fine, breaking is not).
- Provide a clean place to add helper CLIs that should be available when Codex runs.

## How It Works

### Install root and layout

All curl-managed artifacts live under `CODEX_HOME` (default: `~/.codex`).

The layout is:

- `CODEX_HOME/bin/`
- `CODEX_HOME/versions/<version>/bin/`
- `CODEX_HOME/versions/current` (symlink)
- `CODEX_HOME/tools/bin/`

This design supports atomic version switches and helper CLIs:

- The user’s global `PATH` only needs `CODEX_HOME/bin`.
- The runtime `PATH` is extended inside the wrapper to include helper CLIs.

See `installer/README.md:1`.
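
As a rough illustration of how this layout enables version switching, the `current` symlink can simply be repointed at a freshly installed version while the wrapper in `CODEX_HOME/bin` stays put. This is a minimal sketch under assumed paths, not the installer's actual code (the version number is made up):

```sh
# Sketch only: repoint the active version; installer/lib.sh owns the real logic.
CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
new_version="0.0.0-example"   # hypothetical version directory
ln -sfn "$CODEX_HOME/versions/$new_version" "$CODEX_HOME/versions/current"
```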

### Installer scripts

#### `installer/install.sh`

`installer/install.sh:1` is the public entrypoint:

- Requires Bash explicitly (we use Bash features like `mapfile`).
- Loads `installer/lib.sh` locally when run from a checkout, or downloads it when piped from curl.
- Detects `arch`/`os`, resolves the version (via `CODEX_VERSION` or GitHub Releases), downloads the correct tarball, installs it, activates it, and updates the user’s rc file.
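
The flow above can be exercised directly. The invocations below are hypothetical examples (the version number and alternate `CODEX_HOME` are made up) that only use the `CODEX_VERSION` and `CODEX_HOME` knobs described above:

```sh
# Install a pinned version into a non-default root from a local checkout.
CODEX_HOME="$HOME/.codex-test" CODEX_VERSION="0.1.0" bash installer/install.sh

# Or pipe from curl, as in the one-liner above, with the same overrides.
curl -fsSL https://raw.githubusercontent.com/openai/codex/main/installer/install.sh |
  CODEX_HOME="$HOME/.codex-test" CODEX_VERSION="0.1.0" bash
```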

#### `installer/lib.sh`

`installer/lib.sh:10` centralizes the mechanics:

- Resolves `CODEX_HOME` with a fallback to `~/.codex`.
- Detects OS/arch and builds release URLs.
- Idempotently updates the user rc file via a marker block that respects non-default `CODEX_HOME` values: `installer/lib.sh:106`.
- Installs a wrapper at `CODEX_HOME/bin/codex` that:
  - Sets `CODEX_MANAGED_BY_CURL=1`.
  - Extends `PATH` at runtime to include:
    - `CODEX_HOME/bin`
    - `CODEX_HOME/tools/bin`
    - `CODEX_HOME/versions/current/bin`
  - Execs the active versioned binary: `installer/lib.sh:154`.
- Installs the tarball into a versioned `bin/` directory and preserves any additional CLIs present in the tarball (not just `codex`): `installer/lib.sh:181`.
- Keeps the last two installed versions and avoids deleting the `current` symlink: `installer/lib.sh:239`.
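
To make the wrapper behavior above concrete, here is a minimal sketch of what such a wrapper could look like. It is illustrative only (the real wrapper is generated by `installer/lib.sh`), but it shows the env marker and the runtime `PATH` composition described above:

```sh
#!/usr/bin/env bash
# Sketch of the CODEX_HOME/bin/codex wrapper (illustrative; installer/lib.sh writes the real one).
set -euo pipefail
CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
# Let the CLI detect a curl-managed install.
export CODEX_MANAGED_BY_CURL=1
# Expose helper CLIs to Codex at runtime without touching the user's global shell PATH.
export PATH="$CODEX_HOME/bin:$CODEX_HOME/tools/bin:$CODEX_HOME/versions/current/bin:$PATH"
# Hand off to the active versioned binary.
exec "$CODEX_HOME/versions/current/bin/codex" "$@"
```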

#### `installer/update.sh`

`installer/update.sh:1` mirrors the installer flow but always targets the latest release.

### Update UX integration in the CLI

Codex’s update prompt already knows how to run a package-manager-specific update command after the TUI exits.

This PR adds a curl-managed update action in `codex-rs/tui/src/update_action.rs:3`:

- Adds `UpdateAction::CurlInstallerUpdate`.
- Detects curl-managed installs via `CODEX_MANAGED_BY_CURL` (set by the wrapper): `codex-rs/tui/src/update_action.rs:40`.
- Uses the curl updater one-liner for the update command: `codex-rs/tui/src/update_action.rs:21`.

This keeps update behavior consistent with npm/brew while avoiding fragile path heuristics.

## Docs updates

- The curl installer is now the first recommended install path in `README.md:1` and `README.md:18`.
- `docs/install.md:11` adds a curl install section and points to `installer/README.md` for the detailed mechanics.

## Compatibility and safety notes

- This does not modify npm or Homebrew installs.
- The rc-file change only adds `CODEX_HOME/bin` to `PATH`.
- Additional helper CLIs are available to Codex via the wrapper’s runtime `PATH`, rather than being forced into the user’s global shell.
- The installer honors `CODEX_HOME` everywhere, falling back to `~/.codex`.
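
For context on the rc-file change, the managed block is a marker-delimited snippet that prepends `CODEX_HOME/bin` to `PATH`. The markers below are hypothetical; `installer/lib.sh` defines the real ones, and with a non-default `CODEX_HOME` the exported path points at `<CODEX_HOME>/bin` instead:

```sh
# >>> codex installer (markers are illustrative; installer/lib.sh defines the real block) >>>
export PATH="$HOME/.codex/bin:$PATH"
# <<< codex installer <<<
```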

## Testing

Commands run:

```sh
cd codex-rs
just fmt
cargo test -p codex-tui
```

Notes:

- In this environment, `cargo test -p codex-tui` required running outside the sandbox due to a macOS `SystemConfiguration` panic; it passed once run with escalated permissions.
- No new VS Code diagnostics were reported.

## Follow-ups (explicitly not in this PR)

- Ensure the release tarballs include any helper CLIs we expect to be available at runtime (for example, Windows sandbox helpers).
- Add an explicit `codex self-update` command that delegates to `installer/update.sh`.
- Decide how we want to manage third-party tools like `rg` under `CODEX_HOME/tools/` and whether the CLI should probe a managed fallback path.
2026-01-25 21:09:29 -08:00
jif-oai
8fea8f73d6 chore: halve max number of sub-agents (#9861)
https://openai.slack.com/archives/C095U48JNL9/p1769359138786499?thread_ts=1769190766.962719&cid=C095U48JNL9
2026-01-25 17:51:55 +01:00
jif-oai
73b5274443 feat: cap number of agents (#9855)
Adding more guards to agents:
* Max depth of 1 (i.e. a sub-agent can't spawn another one)
* Max 12 sub-agents in total
2026-01-25 14:57:22 +00:00
jif-oai
a748600c42 Revert "Revert "fix: musl build"" (#9847)
Fix for
77222492f9
2026-01-25 08:50:31 -05:00
pakrym-oai
b332482eb1 Mark collab as beta (#9834)
Co-authored-by: jif-oai <jif@openai.com>
2026-01-25 11:13:21 +01:00
Ahmed Ibrahim
58450ba2a1 Use collaboration mode masks without mutating base settings (#9806)
Keep an unmasked base collaboration mode and apply the active mask on
demand. Simplify the TUI mask helpers and update tests/docs to match the
mask contract.
2026-01-25 07:35:31 +00:00
Ahmed Ibrahim
24230c066b Revert "fix: libcc link" (#9841)
Reverts openai/codex#9819
2026-01-25 06:58:56 +00:00
Charley Cunningham
18acec09df Ask for cwd choice when resuming session from different cwd (#9731)
# Summary
- Fix resume/fork config rebuild so cwd changes inside the TUI produce a
fully rebuilt Config (trust/approval/sandbox) instead of mutating only
the cwd.
- Preserve `--add-dir` behavior across resume/fork by normalizing
relative roots to absolute paths once (based on the original cwd).
- Prefer latest `TurnContext.cwd` for resume/fork prompts but fall back
to `SessionMeta.cwd` if the latest cwd no longer exists.
- Align resume/fork selection handling and ensure UI config matches the
resumed thread config.
- Fix Windows test TOML path escaping in trust-level test.

# Details
- Rebuild Config via `ConfigBuilder` when resuming into a different cwd;
carry forward runtime approval/sandbox overrides.
- Add `normalize_harness_overrides_for_cwd` to resolve relative
`additional_writable_roots` against the initial cwd before reuse.
- Guard `read_session_cwd` with filesystem existence check for the
latest `TurnContext.cwd`.
- Update naming/flow around cwd comparison and prompt selection.

<img width="603" height="150" alt="Screenshot 2026-01-23 at 5 42 13 PM"
src="https://github.com/user-attachments/assets/d1897386-bb28-4e8a-98cf-187fdebbecb0"
/>

And proof the model understands the new cwd:

<img width="828" height="353" alt="Screenshot 2026-01-22 at 5 36 45 PM"
src="https://github.com/user-attachments/assets/12aed8ca-dec3-4b64-8dae-c6b8cff78387"
/>
2026-01-24 21:57:19 -08:00
Matthew Zeng
182000999c Raise welcome animation breakpoint to 37 rows (#9778)
### Motivation
- The large ASCII welcome animation can push onboarding content below
the fold on default-height terminals, making the CLI appear
unresponsive; raising the breakpoint prevents that.
- The existing test measured an arbitrary row count rather than
asserting the welcome line position relative to the animation frame,
which made the intent unclear.

### Description
- Increase `MIN_ANIMATION_HEIGHT` from `20` to `37` in
`codex-rs/tui/src/onboarding/welcome.rs` so the animation is skipped
unless there is enough vertical space.
- Replace the brittle measurement logic in the welcome render test with
a `row_containing` helper and assert the welcome row equals the frame
height plus the spacer line (`frame_lines + 1`).
- Add a regression test
`welcome_skips_animation_below_height_breakpoint` that verifies the
animation is not rendered when the viewport height is one row below the
breakpoint.

### Testing
- Ran formatting with `~/.cargo/bin/just fmt` which completed
successfully.
- Ran unit tests for the crate with `cargo test -p codex-tui --lib` and
they passed (unit test suite succeeded).
- Ran `cargo test -p codex-tui` which reported a failing integration
test in this environment because the test cannot locate the `codex`
binary, so full crate tests are blocked here (environment limitation).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6973b0a710d4832c9ff36fac26eb1519)
2026-01-24 21:50:35 -08:00
Ahmed Ibrahim
652f08e98f Revert "fix: musl build" (#9840)
Reverts openai/codex#9820
2026-01-25 04:46:53 +00:00
Charley Cunningham
279c9534a1 Prevent backspace from removing a text element when the cursor is at the element’s left edge (#9630)
**Summary**
- Prevent backspace from removing a text element when the cursor is at
the element’s left edge.
- Instead just delete the char before the placeholder (moving it to the
left).
2026-01-24 10:41:39 -08:00
Max Kong
e2bd9311c9 fix(windows-sandbox): remove request files after read (#9316)
## Summary
- Remove elevated runner request files after read (best-effort cleanup
on errors)
- Add a unit test to cover request file lifecycle

## Testing
- `cargo test -p codex-windows-sandbox` (Windows)

Fixes #9315
2026-01-24 10:23:37 -08:00
jif-oai
2efcdf4062 fix: musl build (#9820) 2026-01-24 16:56:28 +01:00
jif-oai
3651608365 fix: libcc link (#9819) 2026-01-24 16:32:06 +01:00
jif-oai
83775f4df1 feat: ephemeral threads (#9765)
Add ephemeral threads capabilities. Only exposed through the
`app-server` v2

The idea is to disable the rollout recorder for those threads.
2026-01-24 14:57:40 +00:00
jif-oai
515ac2cd19 feat: add thread spawn source for collab tools (#9769) 2026-01-24 14:21:34 +00:00
Charley Cunningham
eb7558ba85 Remove batman reference from experimental prompt (#9812)
https://www.reddit.com/r/codex/comments/1qldbmg/if_you_enable_experimental_subagents_in_openai/
2026-01-24 14:24:36 +01:00
Eric Traut
713ae22c04 Another round of improvements for config error messages (#9746)
In a [recent PR](https://github.com/openai/codex/pull/9182), I made some
improvements to config error messages so errors didn't leave app server
clients in a dead state. This is a follow-on PR to make these error
messages more readable and actionable for both TUI and GUI users. For
example, see #9668 where the user was understandably confused about the
source of the problem and how to fix it.

The improved error message:
1. Clearly identifies the config file where the error was found (which
is more important now that we support layered configs)
2. Provides a line and column number of the error
3. Displays the line where the error occurred and underlines it

For example, if my `config.toml` includes the following:
```toml
[features]
collaboration_modes = "true"
```

Here's the current CLI error message:
```
Error loading config.toml: invalid type: string "true", expected a boolean in `features`
```

And here's the improved message:
```
Error loading config.toml:
/Users/etraut/.codex/config.toml:43:23: invalid type: string "true", expected a boolean
   |
43 | collaboration_modes = "true"
   |                       ^^^^^^
```

The bulk of the new logic is contained within a new module
`config_loader/diagnostics.rs` that is responsible for calculating the
text range for a given toml path (which is more involved than I would
have expected).

In addition, this PR adds the file name and text range to the
`ConfigWarningNotification` app server struct. This allows GUI clients
to present the user with a better error message and an optional link to
open the errant config file. This was a suggestion from @.bolinfest when
he reviewed my previous PR.
2026-01-23 20:11:09 -08:00
Ahmed Ibrahim
b3127e2eeb Have a coding mode and only show coding and plan (#9802) 2026-01-23 19:28:49 -08:00
viyatb-oai
77222492f9 feat: introducing a network sandbox proxy (#8442)
This adds a new crate, `codex-network-proxy`, a local network proxy
service used by Codex to enforce fine-grained network policy (domain
allow/deny) and to surface blocked network events for interactive
approvals.

- New crate: `codex-rs/network-proxy/` (`codex-network-proxy` binary +
library)
- Core capabilities:
  - HTTP proxy support (including CONNECT tunneling)
  - SOCKS5 proxy support (in the later PR)
  - Policy evaluation (allowed/denied domain lists; denylist wins; wildcard support)
  - Small admin API for polling/reload/mode changes
  - Optional MITM support for HTTPS CONNECT to enforce “limited mode” method restrictions (later PR)

Integration with Codex will follow in subsequent PRs.

## Testing

- `cd codex-rs && cargo build -p codex-network-proxy`
- `cd codex-rs && cargo run -p codex-network-proxy -- proxy`
2026-01-23 17:47:09 -08:00
Ahmed Ibrahim
69cfc73dc6 change collaboration mode to struct (#9793)
Shouldn't cause behavioral change
2026-01-23 17:00:23 -08:00
Ahmed Ibrahim
1167465bf6 Chore: remove mode from header (#9792) 2026-01-23 22:38:17 +00:00
iceweasel-oai
d9232403aa bundle sandbox helper binaries in main zip, for winget. (#9707)
Winget uses the main codex.exe value as its target.
The elevated sandbox requires these two binaries to live next to
codex.exe
2026-01-23 14:36:42 -08:00
gt-oai
b9deb57689 Load untrusted rules (#9791) 2026-01-23 21:52:27 +00:00
gt-oai
c6ded0afd8 still load skills (#9700) 2026-01-23 20:35:50 +00:00
jcoens-openai
e04851816d Remove stale TODO comment from defs.bzl (#9787)
### Motivation
- Remove an outdated comment in `defs.bzl` referencing
`cargo_build_script` that is no longer relevant.

### Description
- Delete the stale `# TODO(zbarsky): cargo_build_script support?` line
so the logic flows directly from `binaries` to `lib_srcs` in `defs.bzl`.

### Testing
- Ran `git diff --check` which produced no errors.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6973d9ac757c8331be475a8fb0f90a88)
2026-01-23 20:30:01 +00:00
JUAN DAVID SALAS CAMARGO
e0ae219f36 Fix resume picker when user event appears after head (#9512)
Fixes #9501

Contributing guide:
https://github.com/openai/codex/blob/main/docs/contributing.md

## Summary
The resume picker requires a session_meta line and at least one
user_message event within the initial head scan. Some rollout files
contain multiple session_meta entries before the first user_message, so
the user event can fall outside the default head window and the session
is omitted from the picker even though it is resumable by ID.

This PR keeps the head summary bounded but extends scanning for a
user_message once a session_meta has been observed. The summary still
caps stored head entries, but we allow a small, bounded extra scan to
find the first user event so valid sessions are not filtered out.

## Changes
- Continue scanning past the head limit (bounded) when session_meta is
present but no user_message has been seen yet.
- Mark session_meta as seen even if the head summary buffer is already
full.
- Add a regression test with multiple session_meta lines before the
first user_message.

## Why This Is Safe
- The head summary remains bounded to avoid unbounded memory usage.
- The extra scan is capped (USER_EVENT_SCAN_LIMIT) and only triggers
after a session_meta is seen.
- Behavior is unchanged for typical files where the user_message appears
early.

## Testing
- cargo test -p codex-core --lib
test_list_threads_scans_past_head_for_user_event
2026-01-23 12:21:27 -08:00
Ahmed Ibrahim
45fe58159e Select default model from filtered presets (#9782)
Pick the first available preset after auth filtering for default
selection.
2026-01-23 12:18:36 -08:00
gt-oai
7938c170d9 Print warning if we skip config loading (#9611)
https://github.com/openai/codex/pull/9533 silently ignored config if
untrusted. Instead, we still load it but disable it. Maybe we shouldn't
try to parse it either...

<img width="939" height="515" alt="Screenshot 2026-01-21 at 14 56 38"
src="https://github.com/user-attachments/assets/e753cc22-dd99-4242-8ffe-7589e85bef66"
/>
2026-01-23 20:06:37 +00:00
Salman Chishti
eca365cf8c Upgrade GitHub Actions for Node 24 compatibility (#9722)
## Summary

Upgrade GitHub Actions to their latest versions to ensure compatibility
with Node 24, as Node 20 will reach end-of-life in April 2026.

## Changes

| Action | Old Version(s) | New Version | Release | Files |
|--------|---------------|-------------|---------|-------|
| `actions/cache` | [`v4`](https://github.com/actions/cache/releases/tag/v4) | [`v5`](https://github.com/actions/cache/releases/tag/v5) | [Release](https://github.com/actions/cache/releases/tag/v5) | bazel.yml |

## Context

Per [GitHub's
announcement](https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/),
Node 20 is being deprecated and runners will begin using Node 24 by
default starting March 4th, 2026.

### Why this matters

- **Node 20 EOL**: April 2026
- **Node 24 default**: March 4th, 2026
- **Action**: Update to latest action versions that support Node 24

### Security Note

Actions that were previously pinned to commit SHAs remain pinned to SHAs
(updated to the latest release SHA) to maintain the security benefits of
immutable references.

### Testing

These changes only affect CI/CD workflow configurations and should not
impact application functionality. The workflows should be tested by
running them on a branch before merging.

Signed-off-by: Salman Muin Kayser Chishti <13schishti@gmail.com>
2026-01-23 12:06:04 -08:00
zerone0x
ae7d3e1b49 fix(exec): skip git repo check when --yolo flag is used (#9590)
## Summary

Fixes #7522

The `--yolo` (`--dangerously-bypass-approvals-and-sandbox`) flag is
documented to skip all confirmation prompts and execute commands without
sandboxing, intended solely for running in environments that are
externally sandboxed. However, it was not bypassing the trusted
directory (git repo) check, requiring users to also specify
`--skip-git-repo-check`.

This change makes `--yolo` also skip the git repo check, matching the
documented behavior and user expectations.

## Changes

- Modified `codex-rs/exec/src/lib.rs` to check for
`dangerously_bypass_approvals_and_sandbox` flag in addition to
`skip_git_repo_check` when determining whether to skip the git repo
check
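
As a hypothetical before/after illustration of the resulting behavior (exact CLI shape aside):

```sh
# Before: outside a git repo, bypassing approvals still required the extra flag.
codex exec --dangerously-bypass-approvals-and-sandbox --skip-git-repo-check "run the task"

# After this change: --yolo (the shorthand) alone also skips the git repo check.
codex exec --yolo "run the task"
```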

## Testing

- Verified the code compiles with `cargo check -p codex-exec`
- Ran existing tests with `cargo test -p codex-exec` (34 passed, 8
integration tests failed due to unrelated API connectivity issues)

---
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-23 12:05:20 -08:00
Ahmed Ibrahim
f353d3d695 prompt (#9777)
2026-01-23 19:24:48 +00:00
charley-oai
935d88b455 Persist text element ranges and attached images across history/resume (#9116)
**Summary**
- Backtrack selection now rehydrates `text_elements` and
`local_image_paths` from the chosen user history cell so Esc‑Esc history
edits preserve image placeholders and attachments.
- Composer prefill uses the preserved elements/attachments in both `tui`
and `tui2`.
- Extended backtrack selection tests to cover image placeholder elements
and local image paths.

**Changes**
- `tui/src/app_backtrack.rs`: Backtrack selection now carries text
elements + local image paths; composer prefill uses them (removes TODO).
- `tui2/src/app_backtrack.rs`: Same as above.
- `tui/src/app.rs`: Updated backtrack test to assert restored
elements/paths.
- `tui2/src/app.rs`: Same test updates.

### The original scope of this PR (threading text elements and image attachments through the codex harness thoroughly and persistently) was split across the following PRs in addition to this one:

The diff of this PR was reduced by changing types in a starter PR:
https://github.com/openai/codex/pull/9235

Then text element metadata was added to protocol, app server, and core
in this PR: https://github.com/openai/codex/pull/9331

Then the end-to-end flow was completed by wiring TUI/TUI2 input,
history, and restore behavior in
https://github.com/openai/codex/pull/9393

Prompt expansion was supported in this PR:
https://github.com/openai/codex/pull/9518

TextElement optional placeholder field was protected in
https://github.com/openai/codex/pull/9545
2026-01-23 10:18:19 -08:00
jif-oai
f30f39b28b feat: tui beta for collab (#9690)
https://github.com/user-attachments/assets/1ca07e7a-3d82-40da-a5b0-8ab2eef0bb69
2026-01-23 13:57:59 +01:00
jif-oai
afa08570f2 nit: exclude PWD for rc sourcing (#9753) 2026-01-23 13:35:48 +01:00
Michael Bolin
86a1e41f2e chore: use some raw strings to reduce quoting (#9745)
Small follow-ups for https://github.com/openai/codex/pull/9565. Mainly
`r#`, but also added some whitespace for early returns.
2026-01-22 22:38:10 -08:00
JUAN DAVID SALAS CAMARGO
f815fa14ea Fix execpolicy parsing for multiline quoted args (#9565)
## What
Fix bash command parsing to accept double-quoted strings that contain
literal newlines so execpolicy can match allow rules.

## Why
Allow rules like [git, commit] should still match when commit messages
include a newline in a quoted argument; the parser currently rejects
these strings and falls back to the outer shell invocation.
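
For instance, a double-quoted argument with an embedded newline, as in this hypothetical invocation, should now be parsed and matched against a `[git, commit]` allow rule instead of falling back to the outer shell invocation:

```sh
# Hypothetical command whose quoted -m argument contains a literal newline.
git commit -m "fix: handle the edge case
details continue on a second line"
```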

## How
- Validate double-quoted strings by ensuring all named children are
string_content and then stripping the outer quotes from the raw node
text so embedded newlines are preserved.
- Reuse the helper for concatenated arguments.
- Ensure large SI suffix formatting uses the caller-provided locale
formatter for grouping.
- Add coverage for newline-containing quoted arguments.

Fixes #9541.

## Tests
- cargo test -p codex-core
- just fix -p codex-core
- cargo test -p codex-protocol
- just fix -p codex-protocol
- cargo test --all-features
2026-01-22 22:16:53 -08:00
alexsong-oai
0fa45fbca4 feat: add session source as otel metadata tag (#9720)
Add session.source and user.account_id as global OTEL metric tags to
identify client surface and user.
2026-01-22 18:46:14 -08:00
charley-oai
02fced28a4 Hide mode cycle hint while a task is running (#9730)
## Summary
- hide the “(shift+tab to cycle)” suffix on the collaboration mode label
while a task is running
- keep the cycle hint visible when idle
- add a snapshot to cover the running-task label state
2026-01-22 18:32:06 -08:00
Ahmed Ibrahim
d86bd20411 Change the prompt for planning and reasoning effort (#9733)
Change the prompt for the planning and reasoning effort preset for a better
experience.
2026-01-22 18:22:12 -08:00
Dylan Hurd
2b1ee24e11 feat(app-server) Expose personality (#9674)
### Motivation
Exposes a per-thread / per-turn `personality` override in the v2
app-server API so clients can influence model communication style at
thread/turn start. Ensures the override is passed into the session
configuration resolution so it becomes effective for subsequent turns
and headless runners.

### Testing
- [x] Add an integration-style test
`turn_start_accepts_personality_override_v2` in
`codex-rs/app-server/tests/suite/v2/turn_start.rs` that verifies a
`/personality` override results in a developer update message containing
`<personality_spec>` in the outbound model request.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6971d646b1c08322a689a54d2649f3fe)
2026-01-22 18:00:20 -08:00
Matthew Zeng
a2c829a808 [connectors] Support connectors part 1 - App server & MCP (#9667)
In order to make Codex work with connectors, we add a built-in gateway
MCP that acts as a transparent proxy between the client and the
connectors. The gateway MCP collects actions that are accessible to the
user and sends them down to the user. When a connector action is chosen
to be called, the client invokes the action through the gateway MCP as
well.

 - [x] Add the system built-in gateway MCP to list and run connectors.
 - [x] Add the app server methods and protocol
2026-01-22 16:48:43 -08:00
github-actions[bot]
d9e041e0a6 Update models.json (#9726)
Automated update of models.json.

---------

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2026-01-23 00:44:47 +00:00
iceweasel-oai
0e4adcd760 use machine scope instead of user scope for dpapi. (#9713)
This fixes a bug where the elevated sandbox setup encrypts sandbox user
passwords as an admin user, but normal command execution attempts to
decrypt them as a different user.

Machine scope allows all users to encrypt/decrypt.

This PR also moves the encrypted file to a different location,
`.codex/.sandbox-secrets`, which the sandbox users cannot read.
2026-01-22 16:40:13 -08:00
charley-oai
0e79d239ed TUI: prompt to implement plan and switch to Execute (#9712)
## Summary
- Replace the plan‑implementation prompt with a standard selection
popup.
- “Yes” submits a user turn in Execute via a dedicated app event to
preserve normal transcript behavior.
- “No” simply dismisses the popup.

<img width="977" height="433" alt="Screenshot 2026-01-22 at 2 00 54 PM"
src="https://github.com/user-attachments/assets/91fad06f-7b7a-4cd8-9051-f28a19b750b2"
/>

## Changes
- Add a plan‑implementation popup using `SelectionViewParams`.
- Add `SubmitUserMessageWithMode` so “Yes” routes through
`submit_user_message` (ensures user history + separator state).
- Track `saw_plan_update_this_turn` so the prompt appears even when only
`update_plan` is emitted.
- Suppress the plan popup on replayed turns, when messages are queued,
or when a rate‑limit prompt is pending.
- Add `execute_mode` helper for collaboration modes.
- Add tests for replay/queued/rate‑limit guards and plan update without
final message.
- Add snapshots for both the default and “No”‑selected popup states.
2026-01-23 00:25:50 +00:00
Anton Panasenko
e117a3ff33 feat: support proxy for ws connection (#9719)
Reapply websocket changes without changing the TLS lib.
2026-01-22 15:23:15 -08:00
iudi
afd63e8bae Fix typo in experimental_prompt.md (#9716)
Simple typo fix in the first sentence of the experimental_prompt.md
instructions file.
2026-01-22 14:07:14 -08:00
Michael Bolin
5d963ee5d9 feat: fix formatting of codex features list (#9715)
The formatting of `codex features list` made it hard to follow. This PR
introduces column width math to make things nice.

Maybe slightly hard to machine-parse (since not a simple `\t`), but we
should introduce a `--json` option if that's really important.

You can see the before/after in the screenshot:

<img width="1119" height="932" alt="image"
src="https://github.com/user-attachments/assets/c99dce85-899a-4a2d-b4af-003938f5e1df"
/>
2026-01-22 13:02:41 -08:00
Owen Lin
733cb68496 feat(app-server): support archived threads in thread/list (#9571) 2026-01-22 12:22:36 -08:00
Owen Lin
80240b3b67 feat(app-server): thread/read API (#9569) 2026-01-22 12:22:01 -08:00
Dylan Hurd
8b3521ee77 feat(core) update Personality on turn (#9644)
## Summary
Support updating Personality mid-Thread via UserTurn/OverwriteTurn. This
is explicitly unused by the clients so far, to simplify PRs - app-server
and tui implementations will be follow-ups.

## Testing
- [x] added integration tests
2026-01-22 12:04:23 -08:00
charley-oai
4210fb9e6c Modes label below textarea (#9645)
# Summary
- Add a collaboration mode indicator rendered at the bottom-right of the
TUI composer footer.
- Style modes per design (Plan in #D72EE1, Execute matching dim context
style, Pair Programming using the same cyan as text elements).
- Add shared “(shift+tab to cycle)” hint text for all mode labels and
align the indicator with the left footer margin.

NOTE: currently this is hidden if the Collaboration Modes feature flag
is disabled, or in Custom mode. Maybe we should show it in Custom mode
too? I'll leave that out of this PR though

# UI
- Mode indicator appears below the textarea, bottom-right of the footer
line.
- Includes “(shift+tab to cycle)” and keeps right padding aligned to the
left footer indent.

<img width="983" height="200" alt="Screenshot 2026-01-21 at 7 17 54 PM"
src="https://github.com/user-attachments/assets/d1c5e4ed-7d7b-4f6c-9e71-bc3cf6400e0e"
/>

<img width="980" height="200" alt="Screenshot 2026-01-21 at 7 18 53 PM"
src="https://github.com/user-attachments/assets/d22ff0da-a406-4930-85c5-affb2234e84b"
/>

<img width="979" height="201" alt="Screenshot 2026-01-21 at 7 19 12 PM"
src="https://github.com/user-attachments/assets/862cb17f-0495-46fa-9b01-a4a9f29b52d5"
/>
2026-01-22 17:31:11 +00:00
pakrym-oai
b511c38ddb Support end_turn flag (#9698)
Experimental flag that signals the end of the turn.
2026-01-22 17:27:48 +00:00
pakrym-oai
4d48d4e0c2 Revert "feat: support proxy for ws connection" (#9693)
Reverts openai/codex#9409
2026-01-22 15:57:18 +00:00
Shijie Rao
a4cb97ba5a Chore: add cmd related info to exec approval request (#9659)
### Summary
We now rely purely on the `item/commandExecution/requestApproval` item to
render pending approvals in VSCE and the app. With the v2 approach, it does
not include the actual command being attempted, so we can only use
`proposedExecpolicyAmendment` for rendering, which can be incomplete.

### Reproduce
* Add `prefix_rule(pattern=["echo"], decision="prompt")` to your
`~/.codex/rules.default.rules`.
* Ask to `Run  echo "approval-test" please` in VSCE or the app.
* The pending approval portal does show up, but with no content.

#### Example screenshot
<img width="3434" height="3648" alt="Screenshot 2026-01-21 at 8 23
25 PM"
src="https://github.com/user-attachments/assets/75644837-21f1-40f8-8b02-858d361ff817"
/>

#### Sample output
```
  {"method":"item/commandExecution/requestApproval","id":0,"params":{
    "threadId":"019be439-5a90-7600-a7ea-2d2dcc50302a",
    "turnId":"0",
    "itemId":"call_usgnQ4qEX5U9roNdjT7fPzhb",
    "reason":"`/bin/zsh -lc 'echo \"testing\"'` requires approval by policy",
    "proposedExecpolicyAmendment":null
  }}

```

### Fix
Include the `command` string, `cwd`, and `command_actions` in
`CommandExecutionRequestApprovalParams` so that consumers can display
the correct command instead of relying on exec policy output.
2026-01-21 23:58:53 -08:00
Kbediako
079fd2adb9 Fix: Lower log level for closed-channel send (#9653)
## What?
- Downgrade the closed-channel send error log to debug in
`codex-rs/core/src/codex.rs`.

## Why?
- `async_channel::Sender::send` only fails when the channel is closed,
so the current error-level log is noisy during normal shutdown. See
issue #9652.

## How?
- Replace the error log with a debug log on send failure.

## Tests
- `just fmt`
- `just fix -p codex-core`
- `cargo test -p codex-core`
2026-01-21 22:09:58 -08:00
Dylan Hurd
038b78c915 feat(tui) /permissions flow (#9561)
## Summary
Adds the `/permissions` command, with a (usually) shorter set of
permissions. `/approvals` still exists, for backwards compatibility.

<img width="863" height="309" alt="Screenshot 2026-01-20 at 4 12 51 PM"
src="https://github.com/user-attachments/assets/c49b5ba5-bc47-46dd-9067-e1a5670328fe"
/>


## Testing
- [x] updated unit tests
- [x] Tested locally
2026-01-21 21:38:46 -08:00
pakrym-oai
836f0343a3 Add tui.experimental_mode setting (#9656)
To simplify testing
2026-01-22 05:27:57 +00:00
Dylan Hurd
e520592bcf chore: tweak AGENTS.md (#9650)
## Summary
Update AGENTS.md to improve testing flow

## Testing
- [x] Tested locally, much faster
2026-01-21 20:20:45 -08:00
xl-openai
577ba3a4ca Add UI for skill enable/disable. (#9627)
"/skill" will now allow you to enable/disable skills:
<img width="658" height="199" alt="image"
src="https://github.com/user-attachments/assets/bf8994c8-d6c1-462f-8bbb-f1ee9241caa4"
/>
2026-01-21 18:21:12 -08:00
Dylan Hurd
96a72828be feat(core) ModelInfo.model_instructions_template (#9597)
## Summary
#9555 is the start of a rename, so I'm starting to standardize here.
Sets up `model_instructions` templating with a strongly-typed object for
injecting a personality block into the model instructions.

## Testing
- [x] Added tests
- [x] Ran locally
2026-01-21 18:11:18 -08:00
Josh McKinney
a489b64cb5 feat(tui): retire the tui2 experiment (#9640)
## Summary
- Retire the experimental TUI2 implementation and its feature flag.
- Remove TUI2-only config/schema/docs so the CLI stays on the
terminal-native path.
- Keep docs aligned with the legacy TUI while we focus on redraw-based
improvements.

## Customer impact
- Retires the TUI2 experiment and keeps Codex on the proven
terminal-native UI while we invest in redraw-based improvements to the
existing experience.

## Migration / compatibility
- If you previously set tui2-related options in config.toml, they are
now ignored and Codex continues using the existing terminal-native TUI
(no action required).

## Context
- What worked: a transcript-owned viewport delivered excellent resize
rewrap and high-fidelity copy (especially for code).
- Why stop: making that experience feel fully native across the
environment matrix (terminal emulator, OS, input modality, multiplexer,
font/theme, alt-screen behavior) creates a combinatorial explosion of
edge cases.
- What next: we are focusing on redraw-based improvements to the
existing terminal-native TUI so scrolling, selection, and copy remain
native while resize/redraw correctness improves.

## Testing
- just write-config-schema
- just fmt
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-core
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-cli
- cargo check
- cargo test -p codex-core
- cargo test -p codex-cli
2026-01-22 01:02:29 +00:00
charley-oai
41e38856f6 Reduce burst testing flake (#9549)
## Summary

- make paste-burst tests deterministic by injecting explicit timestamps
instead of relying on wall clock timing
- add time-aware helpers for input/submission paths so tests can drive
the burst heuristic precisely
- update burst-related tests to flush using computed timeouts while
preserving behavior assertions
- increase timeout slack in
shell_tools_start_before_response_completed_when_stream_delayed to
reduce flakiness
2026-01-21 16:42:31 -08:00
sayan-oai
c285b88980 feat: publish config schema on release (#9572)
Follow up to #8956; publish schema on new release to stable URL.

Also canonicalize schema (sort keys) when writing. This avoids reliance
on default `schema_rs` behavior and makes the schema easier to read.
2026-01-21 16:24:14 -08:00
Dylan Hurd
f1240ff4fe fix(tui) turn timing incremental (#9599)
## Summary
When we send multiple assistant messages, reset the timer so "Worked for
2m 36s" is the time since the last time we showed the message, rather
than an ever-increasing number.

We could instead change the copy so it's more clearly a running counter.

## Testing
- [x] ran locally

<img width="903" height="732" alt="Screenshot 2026-01-21 at 1 42 51 AM"
src="https://github.com/user-attachments/assets/bb4d827b-3a0e-48ba-bd6a-d8cd65d8e892"
/>
2026-01-21 15:59:56 -08:00
jif-oai
5dad1b956e feat: better sorting of shell commands (#9629)
This PR changes the way we sort slash commands by matching in this order:
1. Exact match
2. Prefix
3. Fuzzy

As a result, when you type `/ps` the default command is no longer `/approvals`.
2026-01-21 23:03:01 +00:00
Eric Traut
2ca9a56528 Add layered config.toml support to app server (#9510)
This PR adds support for chained (layered) config.toml file merging for
clients that use the app server interface. This feature already exists
for the TUI, but it does not work for GUI clients.

It does the following:
* Changes code paths for new thread, resume thread, and fork thread to
use the effective config based on the cwd.
* Updates the `config/read` API to accept an optional `cwd` parameter.
If specified, the API returns the effective config based on that cwd
path. Also optionally includes all layers including project config
files. If cwd is not specified, the API falls back on its older behavior
where it considers only the global (non-project) config files when
computing the effective config.

The changes in codex_message_processor.rs look deceptively large. They
mostly just involve moving existing blocks of code to a later point in
some functions so they can use the cwd to calculate the config.

This PR builds upon #9509 and should be reviewed and merged after that
PR.

Tested:
* Verified change with (dependent, as-yet-uncommitted) changes to IDE
Extension and confirmed correct behavior

The full fix requires additional changes in the IDE Extension code base,
but they depend on this PR.
2026-01-21 14:21:48 -08:00
charley-oai
fe641f759f Add collaboration_mode to TurnContextItem (#9583)
## Summary
- add optional `collaboration_mode` to `TurnContextItem` in rollouts
- persist the current collaboration mode when recording turn context
(sampling + compaction)

## Rationale
We already persist turn context data for resume logic. Capturing
collaboration mode in the rollout gives us the mode context for each
turn, enabling follow‑up work to diff mode instructions correctly on
resume.

## Changes
- protocol: add optional `collaboration_mode` field to `TurnContextItem`
- core: persist collaboration mode alongside other turn context settings
in rollouts
2026-01-21 14:14:21 -08:00
Shijie Rao
3fcb40245e Chore: update plan mode output in prompt (#9592)
### Summary
* Update plan prompt output
* Update requestUserInput response to be a single key value pair
`answer: String`.
2026-01-21 14:12:18 -08:00
pakrym-oai
f2e1ad59bc Add websockets logging (#9633)
To help with debugging.
2026-01-21 21:35:38 +00:00
iceweasel-oai
7a9c9b8636 forgot to add some windows sandbox nux events. (#9624) 2026-01-21 13:24:09 -08:00
zbarsky-openai
ab8415dcf5 [bazel] Upgrade llvm toolchain and enable remote repo cache (#9616)
On Bazel 9 this lets us avoid performing some external repo downloads if
they've been previously uploaded to the remote cache; downloads are deferred
until they are actually needed to execute an uncached action.
2026-01-21 12:52:39 -08:00
Gav Verma
2e06d61339 Update skills/list protocol readme (#9623)
Updates readme example for `skills/list` to reflect latest response
spec.
2026-01-21 12:51:51 -08:00
Tien Nguyen
68b8381723 docs: fix outdated MCP subcommands documentation (#9622) 2026-01-21 11:17:37 -08:00
iceweasel-oai
f81dd128a2 define/emit some metrics for windows sandbox setup (#9573)
This should give us visibility into how users are using the elevated
sandbox nux flow, and the timing of the elevated setup.
2026-01-21 11:07:26 -08:00
Tiffany Citra
8179312ff5 fix: Fix tilde expansion to avoid absolute-path escape (#9621)
### Motivation
- Prevent inputs like `~//` or `~///etc` from expanding to arbitrary
absolute paths (e.g. `/`) because `Path::join` discards the left side
when the right side is absolute, which could allow config values to
escape `HOME` and broaden writable roots.

### Description
- In `codex-rs/utils/absolute-path/src/lib.rs` update
`maybe_expand_home_directory` to trim leading separators from the suffix
and return `home` when the remainder is empty so tilde expansion stays
rooted under `HOME`.
- Add a non-Windows unit test
`home_directory_double_slash_on_non_windows_is_expanded_in_deserialization`
that validates `"~//code"` expands to `home.join("code")`.

### Testing
- Ran `just fmt` successfully.
- Ran `just fix -p codex-utils-absolute-path` (Clippy autofix)
successfully.
- Ran `cargo test -p codex-utils-absolute-path` and all tests passed.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_697007481cac832dbeb1ee144d1e4cbe)
2026-01-21 10:43:10 -08:00
jif-oai
3355adad1d chore: defensive shell snapshot (#9609)
This PR adds two defensive mechanisms for shell snapshotting:
* Filter out invalid env variables (containing `-`, for example) without
dropping the whole snapshot (sketched below)
* Validate the snapshot before accepting it by running a mock
command with the shell snapshot
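
The first mechanism can be pictured with a rough sketch like the one below. It is illustrative only, not the actual implementation, and assumes single-line values for brevity:

```sh
# Keep only environment entries whose names are valid POSIX identifiers
# (e.g. drop names containing '-') instead of rejecting the whole snapshot.
env | while IFS='=' read -r name value; do
  [[ "$name" =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]] || continue
  printf 'export %s=%q\n' "$name" "$value"
done
```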
2026-01-21 18:41:58 +00:00
jif-oai
338f2d634b nit: ui on interruption (#9606) 2026-01-21 14:09:15 +00:00
zbarsky-openai
2338f99f58 [bazel] Upgrade to bazel9 (#9576) 2026-01-21 13:25:36 +00:00
jif-oai
f1b6a43907 nit: better collab tui (#9551)
<img width="478" height="304" alt="Screenshot 2026-01-21 at 11 53 50"
src="https://github.com/user-attachments/assets/e2ef70de-2fff-44e0-a574-059177966ed2"
/>
2026-01-21 11:53:58 +00:00
jif-oai
13358fa131 fix: nit tui on terminal interactions (#9602) 2026-01-21 11:30:34 +00:00
jif-oai
b75024c465 feat: async shell snapshot (#9600) 2026-01-21 10:41:13 +00:00
Eric Traut
16b9380e99 Added "codex." prefix to "conversation.turn.count" metric name (#9594)
All other metrics names start with "codex.", so I presume this was an
unintended omission.
2026-01-21 10:00:47 +00:00
jif-oai
a22a61e678 feat: display raw command on user shell (#9598) 2026-01-21 09:44:38 +00:00
jif-oai
f1c961d5f7 feat: max threads config (#9483)
2026-01-21 09:39:11 +00:00
Ahmed Ibrahim
6e9a31def1 fix going up and down on questions after writing notes (#9596) 2026-01-21 09:37:37 +00:00
Ahmed Ibrahim
5f55ed666b Add request-user-input overlay (#9585)
- Add request-user-input overlay and routing in the TUI
2026-01-21 00:19:35 -08:00
1053 changed files with 19019 additions and 85280 deletions

View File

@@ -4,6 +4,7 @@ common --repo_env=BAZEL_NO_APPLE_CPP_TOOLCHAIN=1
common --disk_cache=~/.cache/bazel-disk-cache
common --repo_contents_cache=~/.cache/bazel-repo-contents-cache
common --repository_cache=~/.cache/bazel-repo-cache
startup --experimental_remote_repo_contents_cache
common --experimental_platform_in_output_dir

1
.bazelversion Normal file
View File

@@ -0,0 +1 @@
9.0.0

View File

@@ -0,0 +1,163 @@
#!/usr/bin/env bash
set -euo pipefail
: "${TARGET:?TARGET environment variable is required}"
: "${GITHUB_ENV:?GITHUB_ENV environment variable is required}"
apt_update_args=()
if [[ -n "${APT_UPDATE_ARGS:-}" ]]; then
# shellcheck disable=SC2206
apt_update_args=(${APT_UPDATE_ARGS})
fi
apt_install_args=()
if [[ -n "${APT_INSTALL_ARGS:-}" ]]; then
# shellcheck disable=SC2206
apt_install_args=(${APT_INSTALL_ARGS})
fi
sudo apt-get update "${apt_update_args[@]}"
sudo apt-get install -y "${apt_install_args[@]}" musl-tools pkg-config g++ clang libc++-dev libc++abi-dev lld
case "${TARGET}" in
x86_64-unknown-linux-musl)
arch="x86_64"
;;
aarch64-unknown-linux-musl)
arch="aarch64"
;;
*)
echo "Unexpected musl target: ${TARGET}" >&2
exit 1
;;
esac
# Use the musl toolchain as the Rust linker to avoid Zig injecting its own CRT.
if command -v "${arch}-linux-musl-gcc" >/dev/null; then
musl_linker="$(command -v "${arch}-linux-musl-gcc")"
elif command -v musl-gcc >/dev/null; then
musl_linker="$(command -v musl-gcc)"
else
echo "musl gcc not found after install; arch=${arch}" >&2
exit 1
fi
zig_target="${TARGET/-unknown-linux-musl/-linux-musl}"
runner_temp="${RUNNER_TEMP:-/tmp}"
tool_root="${runner_temp}/codex-musl-tools-${TARGET}"
mkdir -p "${tool_root}"
sysroot=""
if command -v zig >/dev/null; then
zig_bin="$(command -v zig)"
cc="${tool_root}/zigcc"
cxx="${tool_root}/zigcxx"
cat >"${cc}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
args=()
skip_next=0
for arg in "\$@"; do
if [[ "\${skip_next}" -eq 1 ]]; then
skip_next=0
continue
fi
case "\${arg}" in
--target)
skip_next=1
continue
;;
--target=*|-target=*|-target)
# Drop any explicit --target/-target flags. Zig expects -target and
# rejects Rust triples like *-unknown-linux-musl.
if [[ "\${arg}" == "-target" ]]; then
skip_next=1
fi
continue
;;
esac
args+=("\${arg}")
done
exec "${zig_bin}" cc -target "${zig_target}" "\${args[@]}"
EOF
cat >"${cxx}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
args=()
skip_next=0
for arg in "\$@"; do
if [[ "\${skip_next}" -eq 1 ]]; then
skip_next=0
continue
fi
case "\${arg}" in
--target)
skip_next=1
continue
;;
--target=*|-target=*|-target)
if [[ "\${arg}" == "-target" ]]; then
skip_next=1
fi
continue
;;
esac
args+=("\${arg}")
done
exec "${zig_bin}" c++ -target "${zig_target}" "\${args[@]}"
EOF
chmod +x "${cc}" "${cxx}"
sysroot="$("${zig_bin}" cc -target "${zig_target}" -print-sysroot 2>/dev/null || true)"
else
cc="${musl_linker}"
if command -v "${arch}-linux-musl-g++" >/dev/null; then
cxx="$(command -v "${arch}-linux-musl-g++")"
elif command -v musl-g++ >/dev/null; then
cxx="$(command -v musl-g++)"
else
cxx="${cc}"
fi
fi
if [[ -n "${sysroot}" && "${sysroot}" != "/" ]]; then
echo "BORING_BSSL_SYSROOT=${sysroot}" >> "$GITHUB_ENV"
boring_sysroot_var="BORING_BSSL_SYSROOT_${TARGET}"
boring_sysroot_var="${boring_sysroot_var//-/_}"
echo "${boring_sysroot_var}=${sysroot}" >> "$GITHUB_ENV"
fi
cflags="-pthread"
cxxflags="-pthread"
if [[ "${TARGET}" == "aarch64-unknown-linux-musl" ]]; then
# BoringSSL enables -Wframe-larger-than=25344 under clang and treats warnings as errors.
cflags="${cflags} -Wno-error=frame-larger-than"
cxxflags="${cxxflags} -Wno-error=frame-larger-than"
fi
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
echo "CC=${cc}" >> "$GITHUB_ENV"
echo "TARGET_CC=${cc}" >> "$GITHUB_ENV"
target_cc_var="CC_${TARGET}"
target_cc_var="${target_cc_var//-/_}"
echo "${target_cc_var}=${cc}" >> "$GITHUB_ENV"
echo "CXX=${cxx}" >> "$GITHUB_ENV"
echo "TARGET_CXX=${cxx}" >> "$GITHUB_ENV"
target_cxx_var="CXX_${TARGET}"
target_cxx_var="${target_cxx_var//-/_}"
echo "${target_cxx_var}=${cxx}" >> "$GITHUB_ENV"
cargo_linker_var="CARGO_TARGET_${TARGET^^}_LINKER"
cargo_linker_var="${cargo_linker_var//-/_}"
echo "${cargo_linker_var}=${musl_linker}" >> "$GITHUB_ENV"
echo "CMAKE_C_COMPILER=${cc}" >> "$GITHUB_ENV"
echo "CMAKE_CXX_COMPILER=${cxx}" >> "$GITHUB_ENV"
echo "CMAKE_ARGS=-DCMAKE_HAVE_THREADS_LIBRARY=1 -DCMAKE_USE_PTHREADS_INIT=1 -DCMAKE_THREAD_LIBS_INIT=-pthread -DTHREADS_PREFER_PTHREAD_FLAG=ON" >> "$GITHUB_ENV"

View File

@@ -81,7 +81,7 @@ jobs:
# previously built artifacts to minimize build time. The more precise you are with
# hashFiles sources the less work bazel will have to do.
# - name: Mount bazel caches
# uses: actions/cache@v4
# uses: actions/cache@v5
# with:
# path: |
# ~/.cache/bazel-repo-cache

View File

@@ -261,15 +261,21 @@ jobs:
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
env:
DEBIAN_FRONTEND: noninteractive
TARGET: ${{ matrix.target }}
APT_UPDATE_ARGS: -o Acquire::Retries=3
APT_INSTALL_ARGS: --no-install-recommends
shell: bash
run: |
set -euo pipefail
sudo apt-get -y update -o Acquire::Retries=3
sudo apt-get -y install --no-install-recommends musl-tools pkg-config
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}

View File

@@ -20,6 +20,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- name: Validate tag matches Cargo.toml version
shell: bash
@@ -45,6 +46,15 @@ jobs:
echo "✅ Tag and Cargo.toml agree (${tag_ver})"
echo "::endgroup::"
- name: Verify config schema fixture
shell: bash
working-directory: codex-rs
run: |
set -euo pipefail
echo "If this fails, run: just write-config-schema to overwrite fixture with intentional changes."
cargo run -p codex-core --bin codex-write-config-schema
git diff --exit-code core/config.schema.json
build:
needs: tag-check
name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
@@ -94,11 +104,17 @@ jobs:
${{ github.workspace }}/codex-rs/target/
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
run: |
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
env:
TARGET: ${{ matrix.target }}
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- name: Cargo build
shell: bash
@@ -275,7 +291,28 @@ jobs:
# Must run from inside the dest dir so 7z won't
# embed the directory path inside the zip.
if [[ "${{ matrix.runner }}" == windows* ]]; then
(cd "$dest" && 7z a "${base}.zip" "$base")
if [[ "$base" == "codex-${{ matrix.target }}.exe" ]]; then
# Bundle the sandbox helper binaries into the main codex zip so
# WinGet installs include the required helpers next to codex.exe.
# Fall back to the single-binary zip if the helpers are missing
# to avoid breaking releases.
bundle_dir="$(mktemp -d)"
runner_src="$dest/codex-command-runner-${{ matrix.target }}.exe"
setup_src="$dest/codex-windows-sandbox-setup-${{ matrix.target }}.exe"
if [[ -f "$runner_src" && -f "$setup_src" ]]; then
cp "$dest/$base" "$bundle_dir/$base"
cp "$runner_src" "$bundle_dir/codex-command-runner.exe"
cp "$setup_src" "$bundle_dir/codex-windows-sandbox-setup.exe"
(cd "$bundle_dir" && 7z a "$dest/${base}.zip" .)
else
echo "warning: missing sandbox binaries; falling back to single-binary zip"
echo "warning: expected $runner_src and $setup_src"
(cd "$dest" && 7z a "${base}.zip" "$base")
fi
rm -rf "$bundle_dir"
else
(cd "$dest" && 7z a "${base}.zip" "$base")
fi
fi
# Also create .zst (existing behaviour) *and* remove the original
@@ -358,6 +395,10 @@ jobs:
ls -R dist/
- name: Add config schema release asset
run: |
cp codex-rs/core/config.schema.json dist/config-schema.json
- name: Define release name
id: release_name
run: |
@@ -428,6 +469,19 @@ jobs:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
- name: Trigger developers.openai.com deploy
# Only trigger the deploy if the release is not a pre-release.
# The deploy is used to update the developers.openai.com website with the new config schema json file.
if: ${{ !contains(steps.release_name.outputs.name, '-') }}
continue-on-error: true
env:
DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL: ${{ secrets.DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL }}
run: |
if ! curl -sS -f -o /dev/null -X POST "$DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL"; then
echo "::warning title=developers.openai.com deploy hook failed::Vercel deploy hook POST failed for ${GITHUB_REF_NAME}"
exit 1
fi
# Publish to npm using OIDC authentication.
# July 31, 2025: https://github.blog/changelog/2025-07-31-npm-trusted-publishing-with-oidc-is-generally-available/
# npm docs: https://docs.npmjs.com/trusted-publishers

View File

@@ -97,11 +97,17 @@ jobs:
with:
targets: ${{ matrix.target }}
- if: ${{ matrix.install_musl }}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.install_musl }}
name: Install musl build dependencies
run: |
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
env:
TARGET: ${{ matrix.target }}
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- name: Build exec server binaries
run: cargo build --release --target ${{ matrix.target }} --bin codex-exec-mcp-server --bin codex-execve-wrapper

View File

@@ -15,11 +15,13 @@ In the codex-rs folder where the rust code lives:
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
Run `just fmt` (in `codex-rs` directory) automatically after you have finished making Rust code changes; do not ask for approval to run it. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`. project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates.
## TUI style conventions
See `codex-rs/tui/styles.md`.

View File

@@ -2,13 +2,9 @@ bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "toolchains_llvm_bootstrapped", version = "0.3.1")
archive_override(
module_name = "toolchains_llvm_bootstrapped",
integrity = "sha256-9ks21bgEqbQWmwUIvqeLA64+Jk6o4ZVjC8KxjVa2Vw8=",
strip_prefix = "toolchains_llvm_bootstrapped-e3775e66a7b6d287c705ca0cd24497ef4a77c503",
urls = ["https://github.com/cerisier/toolchains_llvm_bootstrapped/archive/e3775e66a7b6d287c705ca0cd24497ef4a77c503/master.tar.gz"],
patch_strip = 1,
patches = [
"//patches:llvm_toolchain_archive_params.patch",
],
integrity = "sha256-4/2h4tYSUSptxFVI9G50yJxWGOwHSeTeOGBlaLQBV8g=",
strip_prefix = "toolchains_llvm_bootstrapped-d20baf67e04d8e2887e3779022890d1dc5e6b948",
urls = ["https://github.com/cerisier/toolchains_llvm_bootstrapped/archive/d20baf67e04d8e2887e3779022890d1dc5e6b948.tar.gz"],
)
osx = use_extension("@toolchains_llvm_bootstrapped//toolchain/extension:osx.bzl", "osx")
@@ -94,7 +90,7 @@ crate.annotation(
crate = "windows-link",
patch_args = ["-p1"],
patches = [
"//patches:windows-link.patch"
"//patches:windows-link.patch",
],
)

362
MODULE.bazel.lock generated
View File

@@ -1,5 +1,5 @@
{
"lockFileVersion": 24,
"lockFileVersion": 26,
"registryFileHashes": {
"https://bcr.bazel.build/bazel_registry.json": "8a28e4aff06ee60aed2a8c281907fb8bcbf3b753c91fb5a5c57da3215d5b3497",
"https://bcr.bazel.build/modules/abseil-cpp/20210324.2/MODULE.bazel": "7cd0312e064fde87c8d1cd79ba06c876bd23630c83466e9500321be55c96ace2",
@@ -9,11 +9,19 @@
"https://bcr.bazel.build/modules/abseil-cpp/20230802.0/MODULE.bazel": "d253ae36a8bd9ee3c5955384096ccb6baf16a1b1e93e858370da0a3b94f77c16",
"https://bcr.bazel.build/modules/abseil-cpp/20230802.1/MODULE.bazel": "fa92e2eb41a04df73cdabeec37107316f7e5272650f81d6cc096418fe647b915",
"https://bcr.bazel.build/modules/abseil-cpp/20240116.1/MODULE.bazel": "37bcdb4440fbb61df6a1c296ae01b327f19e9bb521f9b8e26ec854b6f97309ed",
"https://bcr.bazel.build/modules/abseil-cpp/20240116.1/source.json": "9be551b8d4e3ef76875c0d744b5d6a504a27e3ae67bc6b28f46415fd2d2957da",
"https://bcr.bazel.build/modules/abseil-cpp/20240116.2/MODULE.bazel": "73939767a4686cd9a520d16af5ab440071ed75cec1a876bf2fcfaf1f71987a16",
"https://bcr.bazel.build/modules/abseil-cpp/20250127.1/MODULE.bazel": "c4a89e7ceb9bf1e25cf84a9f830ff6b817b72874088bf5141b314726e46a57c1",
"https://bcr.bazel.build/modules/abseil-cpp/20250512.1/MODULE.bazel": "d209fdb6f36ffaf61c509fcc81b19e81b411a999a934a032e10cd009a0226215",
"https://bcr.bazel.build/modules/abseil-cpp/20250814.1/MODULE.bazel": "51f2312901470cdab0dbdf3b88c40cd21c62a7ed58a3de45b365ddc5b11bcab2",
"https://bcr.bazel.build/modules/abseil-cpp/20250814.1/source.json": "cea3901d7e299da7320700abbaafe57a65d039f10d0d7ea601c4a66938ea4b0c",
"https://bcr.bazel.build/modules/apple_support/1.11.1/MODULE.bazel": "1843d7cd8a58369a444fc6000e7304425fba600ff641592161d9f15b179fb896",
"https://bcr.bazel.build/modules/apple_support/1.15.1/MODULE.bazel": "a0556fefca0b1bb2de8567b8827518f94db6a6e7e7d632b4c48dc5f865bc7c85",
"https://bcr.bazel.build/modules/apple_support/1.21.0/MODULE.bazel": "ac1824ed5edf17dee2fdd4927ada30c9f8c3b520be1b5fd02a5da15bc10bff3e",
"https://bcr.bazel.build/modules/apple_support/1.21.1/MODULE.bazel": "5809fa3efab15d1f3c3c635af6974044bac8a4919c62238cce06acee8a8c11f1",
"https://bcr.bazel.build/modules/apple_support/1.23.0/MODULE.bazel": "317d47e3f65b580e7fb4221c160797fda48e32f07d2dfff63d754ef2316dcd25",
"https://bcr.bazel.build/modules/apple_support/1.23.1/MODULE.bazel": "53763fed456a968cf919b3240427cf3a9d5481ec5466abc9d5dc51bc70087442",
"https://bcr.bazel.build/modules/apple_support/1.24.1/MODULE.bazel": "f46e8ddad60aef170ee92b2f3d00ef66c147ceafea68b6877cb45bd91737f5f8",
"https://bcr.bazel.build/modules/apple_support/1.24.1/source.json": "cf725267cbacc5f028ef13bb77e7f2c2e0066923a4dab1025e4a0511b1ed258a",
"https://bcr.bazel.build/modules/apple_support/1.24.2/MODULE.bazel": "0e62471818affb9f0b26f128831d5c40b074d32e6dda5a0d3852847215a41ca4",
"https://bcr.bazel.build/modules/apple_support/1.24.2/source.json": "2c22c9827093250406c5568da6c54e6fdf0ef06238def3d99c71b12feb057a8d",
"https://bcr.bazel.build/modules/aspect_bazel_lib/2.14.0/MODULE.bazel": "2b31ffcc9bdc8295b2167e07a757dbbc9ac8906e7028e5170a3708cecaac119f",
"https://bcr.bazel.build/modules/aspect_bazel_lib/2.19.3/MODULE.bazel": "253d739ba126f62a5767d832765b12b59e9f8d2bc88cc1572f4a73e46eb298ca",
"https://bcr.bazel.build/modules/aspect_bazel_lib/2.19.3/source.json": "ffab9254c65ba945f8369297ad97ca0dec213d3adc6e07877e23a48624a8b456",
@@ -21,16 +29,21 @@
"https://bcr.bazel.build/modules/aspect_tools_telemetry/0.3.2/MODULE.bazel": "598e7fe3b54f5fa64fdbeead1027653963a359cc23561d43680006f3b463d5a4",
"https://bcr.bazel.build/modules/aspect_tools_telemetry/0.3.2/source.json": "c6f5c39e6f32eb395f8fdaea63031a233bbe96d49a3bfb9f75f6fce9b74bec6c",
"https://bcr.bazel.build/modules/bazel_features/1.1.1/MODULE.bazel": "27b8c79ef57efe08efccbd9dd6ef70d61b4798320b8d3c134fd571f78963dbcd",
"https://bcr.bazel.build/modules/bazel_features/1.10.0/MODULE.bazel": "f75e8807570484a99be90abcd52b5e1f390362c258bcb73106f4544957a48101",
"https://bcr.bazel.build/modules/bazel_features/1.11.0/MODULE.bazel": "f9382337dd5a474c3b7d334c2f83e50b6eaedc284253334cf823044a26de03e8",
"https://bcr.bazel.build/modules/bazel_features/1.15.0/MODULE.bazel": "d38ff6e517149dc509406aca0db3ad1efdd890a85e049585b7234d04238e2a4d",
"https://bcr.bazel.build/modules/bazel_features/1.17.0/MODULE.bazel": "039de32d21b816b47bd42c778e0454217e9c9caac4a3cf8e15c7231ee3ddee4d",
"https://bcr.bazel.build/modules/bazel_features/1.18.0/MODULE.bazel": "1be0ae2557ab3a72a57aeb31b29be347bcdc5d2b1eb1e70f39e3851a7e97041a",
"https://bcr.bazel.build/modules/bazel_features/1.19.0/MODULE.bazel": "59adcdf28230d220f0067b1f435b8537dd033bfff8db21335ef9217919c7fb58",
"https://bcr.bazel.build/modules/bazel_features/1.21.0/MODULE.bazel": "675642261665d8eea09989aa3b8afb5c37627f1be178382c320d1b46afba5e3b",
"https://bcr.bazel.build/modules/bazel_features/1.23.0/MODULE.bazel": "fd1ac84bc4e97a5a0816b7fd7d4d4f6d837b0047cf4cbd81652d616af3a6591a",
"https://bcr.bazel.build/modules/bazel_features/1.24.0/MODULE.bazel": "4796b4c25b47053e9bbffa792b3792d07e228ff66cd0405faef56a978708acd4",
"https://bcr.bazel.build/modules/bazel_features/1.27.0/MODULE.bazel": "621eeee06c4458a9121d1f104efb80f39d34deff4984e778359c60eaf1a8cb65",
"https://bcr.bazel.build/modules/bazel_features/1.28.0/MODULE.bazel": "4b4200e6cbf8fa335b2c3f43e1d6ef3e240319c33d43d60cc0fbd4b87ece299d",
"https://bcr.bazel.build/modules/bazel_features/1.3.0/MODULE.bazel": "cdcafe83ec318cda34e02948e81d790aab8df7a929cec6f6969f13a489ccecd9",
"https://bcr.bazel.build/modules/bazel_features/1.30.0/MODULE.bazel": "a14b62d05969a293b80257e72e597c2da7f717e1e69fa8b339703ed6731bec87",
"https://bcr.bazel.build/modules/bazel_features/1.32.0/MODULE.bazel": "095d67022a58cb20f7e20e1aefecfa65257a222c18a938e2914fd257b5f1ccdc",
"https://bcr.bazel.build/modules/bazel_features/1.33.0/MODULE.bazel": "8b8dc9d2a4c88609409c3191165bccec0e4cb044cd7a72ccbe826583303459f6",
"https://bcr.bazel.build/modules/bazel_features/1.34.0/MODULE.bazel": "e8475ad7c8965542e0c7aac8af68eb48c4af904be3d614b6aa6274c092c2ea1e",
"https://bcr.bazel.build/modules/bazel_features/1.34.0/source.json": "dfa5c4b01110313153b484a735764d247fee5624bbab63d25289e43b151a657a",
"https://bcr.bazel.build/modules/bazel_features/1.4.1/MODULE.bazel": "e45b6bb2350aff3e442ae1111c555e27eac1d915e77775f6fdc4b351b758b5d7",
@@ -52,20 +65,25 @@
"https://bcr.bazel.build/modules/bazel_skylib/1.8.1/MODULE.bazel": "88ade7293becda963e0e3ea33e7d54d3425127e0a326e0d17da085a5f1f03ff6",
"https://bcr.bazel.build/modules/bazel_skylib/1.8.2/MODULE.bazel": "69ad6927098316848b34a9142bcc975e018ba27f08c4ff403f50c1b6e646ca67",
"https://bcr.bazel.build/modules/bazel_skylib/1.8.2/source.json": "34a3c8bcf233b835eb74be9d628899bb32999d3e0eadef1947a0a562a2b16ffb",
"https://bcr.bazel.build/modules/buildozer/7.1.2/MODULE.bazel": "2e8dd40ede9c454042645fd8d8d0cd1527966aa5c919de86661e62953cd73d84",
"https://bcr.bazel.build/modules/buildozer/7.1.2/source.json": "c9028a501d2db85793a6996205c8de120944f50a0d570438fcae0457a5f9d1f8",
"https://bcr.bazel.build/modules/buildozer/8.2.1/MODULE.bazel": "61e9433c574c2bd9519cad7fa66b9c1d2b8e8d5f3ae5d6528a2c2d26e68d874d",
"https://bcr.bazel.build/modules/buildozer/8.2.1/source.json": "7c33f6a26ee0216f85544b4bca5e9044579e0219b6898dd653f5fb449cf2e484",
"https://bcr.bazel.build/modules/gawk/5.3.2.bcr.1/MODULE.bazel": "cdf8cbe5ee750db04b78878c9633cc76e80dcf4416cbe982ac3a9222f80713c8",
"https://bcr.bazel.build/modules/gawk/5.3.2.bcr.1/source.json": "fa7b512dfcb5eafd90ce3959cf42a2a6fe96144ebbb4b3b3928054895f2afac2",
"https://bcr.bazel.build/modules/google_benchmark/1.8.2/MODULE.bazel": "a70cf1bba851000ba93b58ae2f6d76490a9feb74192e57ab8e8ff13c34ec50cb",
"https://bcr.bazel.build/modules/googletest/1.11.0/MODULE.bazel": "3a83f095183f66345ca86aa13c58b59f9f94a2f81999c093d4eeaa2d262d12f4",
"https://bcr.bazel.build/modules/googletest/1.14.0.bcr.1/MODULE.bazel": "22c31a561553727960057361aa33bf20fb2e98584bc4fec007906e27053f80c6",
"https://bcr.bazel.build/modules/googletest/1.14.0.bcr.1/source.json": "41e9e129f80d8c8bf103a7acc337b76e54fad1214ac0a7084bf24f4cd924b8b4",
"https://bcr.bazel.build/modules/googletest/1.14.0/MODULE.bazel": "cfbcbf3e6eac06ef9d85900f64424708cc08687d1b527f0ef65aa7517af8118f",
"https://bcr.bazel.build/modules/googletest/1.15.2/MODULE.bazel": "6de1edc1d26cafb0ea1a6ab3f4d4192d91a312fd2d360b63adaa213cd00b2108",
"https://bcr.bazel.build/modules/googletest/1.17.0/MODULE.bazel": "dbec758171594a705933a29fcf69293d2468c49ec1f2ebca65c36f504d72df46",
"https://bcr.bazel.build/modules/googletest/1.17.0/source.json": "38e4454b25fc30f15439c0378e57909ab1fd0a443158aa35aec685da727cd713",
"https://bcr.bazel.build/modules/jq.bzl/0.1.0/MODULE.bazel": "2ce69b1af49952cd4121a9c3055faa679e748ce774c7f1fda9657f936cae902f",
"https://bcr.bazel.build/modules/jq.bzl/0.1.0/source.json": "746bf13cac0860f091df5e4911d0c593971cd8796b5ad4e809b2f8e133eee3d5",
"https://bcr.bazel.build/modules/jsoncpp/1.9.5/MODULE.bazel": "31271aedc59e815656f5736f282bb7509a97c7ecb43e927ac1a37966e0578075",
"https://bcr.bazel.build/modules/jsoncpp/1.9.5/source.json": "4108ee5085dd2885a341c7fab149429db457b3169b86eb081fa245eadf69169d",
"https://bcr.bazel.build/modules/jsoncpp/1.9.6/MODULE.bazel": "2f8d20d3b7d54143213c4dfc3d98225c42de7d666011528dc8fe91591e2e17b0",
"https://bcr.bazel.build/modules/jsoncpp/1.9.6/source.json": "a04756d367a2126c3541682864ecec52f92cdee80a35735a3cb249ce015ca000",
"https://bcr.bazel.build/modules/libpfm/4.11.0/MODULE.bazel": "45061ff025b301940f1e30d2c16bea596c25b176c8b6b3087e92615adbd52902",
"https://bcr.bazel.build/modules/nlohmann_json/3.6.1/MODULE.bazel": "6f7b417dcc794d9add9e556673ad25cb3ba835224290f4f848f8e2db1e1fca74",
"https://bcr.bazel.build/modules/nlohmann_json/3.6.1/source.json": "f448c6e8963fdfa7eb831457df83ad63d3d6355018f6574fb017e8169deb43a9",
"https://bcr.bazel.build/modules/openssl/3.5.4.bcr.0/MODULE.bazel": "0f6b8f20b192b9ff0781406256150bcd46f19e66d807dcb0c540548439d6fc35",
"https://bcr.bazel.build/modules/openssl/3.5.4.bcr.0/source.json": "543ed7627cc18e6460b9c1ae4a1b6b1debc5a5e0aca878b00f7531c7186b73da",
"https://bcr.bazel.build/modules/package_metadata/0.0.2/MODULE.bazel": "fb8d25550742674d63d7b250063d4580ca530499f045d70748b1b142081ebb92",
@@ -83,21 +101,28 @@
"https://bcr.bazel.build/modules/platforms/1.0.0/source.json": "f4ff1fd412e0246fd38c82328eb209130ead81d62dcd5a9e40910f867f733d96",
"https://bcr.bazel.build/modules/protobuf/21.7/MODULE.bazel": "a5a29bb89544f9b97edce05642fac225a808b5b7be74038ea3640fae2f8e66a7",
"https://bcr.bazel.build/modules/protobuf/27.0/MODULE.bazel": "7873b60be88844a0a1d8f80b9d5d20cfbd8495a689b8763e76c6372998d3f64c",
"https://bcr.bazel.build/modules/protobuf/27.1/MODULE.bazel": "703a7b614728bb06647f965264967a8ef1c39e09e8f167b3ca0bb1fd80449c0d",
"https://bcr.bazel.build/modules/protobuf/29.0-rc2/MODULE.bazel": "6241d35983510143049943fc0d57937937122baf1b287862f9dc8590fc4c37df",
"https://bcr.bazel.build/modules/protobuf/29.0/MODULE.bazel": "319dc8bf4c679ff87e71b1ccfb5a6e90a6dbc4693501d471f48662ac46d04e4e",
"https://bcr.bazel.build/modules/protobuf/29.0/source.json": "b857f93c796750eef95f0d61ee378f3420d00ee1dd38627b27193aa482f4f981",
"https://bcr.bazel.build/modules/protobuf/29.0-rc3/MODULE.bazel": "33c2dfa286578573afc55a7acaea3cada4122b9631007c594bf0729f41c8de92",
"https://bcr.bazel.build/modules/protobuf/29.1/MODULE.bazel": "557c3457560ff49e122ed76c0bc3397a64af9574691cb8201b4e46d4ab2ecb95",
"https://bcr.bazel.build/modules/protobuf/3.19.0/MODULE.bazel": "6b5fbb433f760a99a22b18b6850ed5784ef0e9928a72668b66e4d7ccd47db9b0",
"https://bcr.bazel.build/modules/protobuf/32.1/MODULE.bazel": "89cd2866a9cb07fee9ff74c41ceace11554f32e0d849de4e23ac55515cfada4d",
"https://bcr.bazel.build/modules/protobuf/33.4/MODULE.bazel": "114775b816b38b6d0ca620450d6b02550c60ceedfdc8d9a229833b34a223dc42",
"https://bcr.bazel.build/modules/protobuf/33.4/source.json": "555f8686b4c7d6b5ba731fbea13bf656b4bfd9a7ff629c1d9d3f6e1d6155de79",
"https://bcr.bazel.build/modules/pybind11_bazel/2.11.1/MODULE.bazel": "88af1c246226d87e65be78ed49ecd1e6f5e98648558c14ce99176da041dc378e",
"https://bcr.bazel.build/modules/pybind11_bazel/2.11.1/source.json": "be4789e951dd5301282729fe3d4938995dc4c1a81c2ff150afc9f1b0504c6022",
"https://bcr.bazel.build/modules/pybind11_bazel/2.12.0/MODULE.bazel": "e6f4c20442eaa7c90d7190d8dc539d0ab422f95c65a57cc59562170c58ae3d34",
"https://bcr.bazel.build/modules/pybind11_bazel/2.12.0/source.json": "6900fdc8a9e95866b8c0d4ad4aba4d4236317b5c1cd04c502df3f0d33afed680",
"https://bcr.bazel.build/modules/re2/2023-09-01/MODULE.bazel": "cb3d511531b16cfc78a225a9e2136007a48cf8a677e4264baeab57fe78a80206",
"https://bcr.bazel.build/modules/re2/2023-09-01/source.json": "e044ce89c2883cd957a2969a43e79f7752f9656f6b20050b62f90ede21ec6eb4",
"https://bcr.bazel.build/modules/re2/2024-07-02.bcr.1/MODULE.bazel": "b4963dda9b31080be1905ef085ecd7dd6cd47c05c79b9cdf83ade83ab2ab271a",
"https://bcr.bazel.build/modules/re2/2024-07-02.bcr.1/source.json": "2ff292be6ef3340325ce8a045ecc326e92cbfab47c7cbab4bd85d28971b97ac4",
"https://bcr.bazel.build/modules/re2/2024-07-02/MODULE.bazel": "0eadc4395959969297cbcf31a249ff457f2f1d456228c67719480205aa306daa",
"https://bcr.bazel.build/modules/rules_android/0.1.1/MODULE.bazel": "48809ab0091b07ad0182defb787c4c5328bd3a278938415c00a7b69b50c4d3a8",
"https://bcr.bazel.build/modules/rules_android/0.1.1/source.json": "e6986b41626ee10bdc864937ffb6d6bf275bb5b9c65120e6137d56e6331f089e",
"https://bcr.bazel.build/modules/rules_apple/3.16.0/MODULE.bazel": "0d1caf0b8375942ce98ea944be754a18874041e4e0459401d925577624d3a54a",
"https://bcr.bazel.build/modules/rules_apple/4.1.0/MODULE.bazel": "76e10fd4a48038d3fc7c5dc6e63b7063bbf5304a2e3bd42edda6ec660eebea68",
"https://bcr.bazel.build/modules/rules_apple/4.1.0/source.json": "8ee81e1708756f81b343a5eb2b2f0b953f1d25c4ab3d4a68dc02754872e80715",
"https://bcr.bazel.build/modules/rules_cc/0.0.1/MODULE.bazel": "cb2aa0747f84c6c3a78dad4e2049c154f08ab9d166b1273835a8174940365647",
"https://bcr.bazel.build/modules/rules_cc/0.0.10/MODULE.bazel": "ec1705118f7eaedd6e118508d3d26deba2a4e76476ada7e0e3965211be012002",
"https://bcr.bazel.build/modules/rules_cc/0.0.13/MODULE.bazel": "0e8529ed7b323dad0775ff924d2ae5af7640b23553dfcd4d34344c7e7a867191",
"https://bcr.bazel.build/modules/rules_cc/0.0.14/MODULE.bazel": "5e343a3aac88b8d7af3b1b6d2093b55c347b8eefc2e7d1442f7a02dc8fea48ac",
"https://bcr.bazel.build/modules/rules_cc/0.0.15/MODULE.bazel": "6704c35f7b4a72502ee81f61bf88706b54f06b3cbe5558ac17e2e14666cd5dcc",
"https://bcr.bazel.build/modules/rules_cc/0.0.16/MODULE.bazel": "7661303b8fc1b4d7f532e54e9d6565771fea666fbdf839e0a86affcd02defe87",
"https://bcr.bazel.build/modules/rules_cc/0.0.17/MODULE.bazel": "2ae1d8f4238ec67d7185d8861cb0a2cdf4bc608697c331b95bf990e69b62e64a",
@@ -106,35 +131,37 @@
"https://bcr.bazel.build/modules/rules_cc/0.0.8/MODULE.bazel": "964c85c82cfeb6f3855e6a07054fdb159aced38e99a5eecf7bce9d53990afa3e",
"https://bcr.bazel.build/modules/rules_cc/0.0.9/MODULE.bazel": "836e76439f354b89afe6a911a7adf59a6b2518fafb174483ad78a2a2fde7b1c5",
"https://bcr.bazel.build/modules/rules_cc/0.1.1/MODULE.bazel": "2f0222a6f229f0bf44cd711dc13c858dad98c62d52bd51d8fc3a764a83125513",
"https://bcr.bazel.build/modules/rules_cc/0.1.2/MODULE.bazel": "557ddc3a96858ec0d465a87c0a931054d7dcfd6583af2c7ed3baf494407fd8d0",
"https://bcr.bazel.build/modules/rules_cc/0.1.5/MODULE.bazel": "88dfc9361e8b5ae1008ac38f7cdfd45ad738e4fa676a3ad67d19204f045a1fd8",
"https://bcr.bazel.build/modules/rules_cc/0.2.0/MODULE.bazel": "b5c17f90458caae90d2ccd114c81970062946f49f355610ed89bebf954f5783c",
"https://bcr.bazel.build/modules/rules_cc/0.2.13/MODULE.bazel": "eecdd666eda6be16a8d9dc15e44b5c75133405e820f620a234acc4b1fdc5aa37",
"https://bcr.bazel.build/modules/rules_cc/0.2.14/MODULE.bazel": "353c99ed148887ee89c54a17d4100ae7e7e436593d104b668476019023b58df8",
"https://bcr.bazel.build/modules/rules_cc/0.2.16/MODULE.bazel": "9242fa89f950c6ef7702801ab53922e99c69b02310c39fb6e62b2bd30df2a1d4",
"https://bcr.bazel.build/modules/rules_cc/0.2.16/source.json": "d03d5cde49376d87e14ec14b666c56075e5e3926930327fd5d0484a1ff2ac1cc",
"https://bcr.bazel.build/modules/rules_cc/0.2.4/MODULE.bazel": "1ff1223dfd24f3ecf8f028446d4a27608aa43c3f41e346d22838a4223980b8cc",
"https://bcr.bazel.build/modules/rules_cc/0.2.8/MODULE.bazel": "f1df20f0bf22c28192a794f29b501ee2018fa37a3862a1a2132ae2940a23a642",
"https://bcr.bazel.build/modules/rules_foreign_cc/0.9.0/MODULE.bazel": "c9e8c682bf75b0e7c704166d79b599f93b72cfca5ad7477df596947891feeef6",
"https://bcr.bazel.build/modules/rules_fuzzing/0.5.2/MODULE.bazel": "40c97d1144356f52905566c55811f13b299453a14ac7769dfba2ac38192337a8",
"https://bcr.bazel.build/modules/rules_fuzzing/0.5.2/source.json": "c8b1e2c717646f1702290959a3302a178fb639d987ab61d548105019f11e527e",
"https://bcr.bazel.build/modules/rules_java/4.0.0/MODULE.bazel": "5a78a7ae82cd1a33cef56dc578c7d2a46ed0dca12643ee45edbb8417899e6f74",
"https://bcr.bazel.build/modules/rules_java/5.3.5/MODULE.bazel": "a4ec4f2db570171e3e5eb753276ee4b389bae16b96207e9d3230895c99644b86",
"https://bcr.bazel.build/modules/rules_java/6.0.0/MODULE.bazel": "8a43b7df601a7ec1af61d79345c17b31ea1fedc6711fd4abfd013ea612978e39",
"https://bcr.bazel.build/modules/rules_java/6.3.0/MODULE.bazel": "a97c7678c19f236a956ad260d59c86e10a463badb7eb2eda787490f4c969b963",
"https://bcr.bazel.build/modules/rules_java/6.4.0/MODULE.bazel": "e986a9fe25aeaa84ac17ca093ef13a4637f6107375f64667a15999f77db6c8f6",
"https://bcr.bazel.build/modules/rules_java/6.5.2/MODULE.bazel": "1d440d262d0e08453fa0c4d8f699ba81609ed0e9a9a0f02cd10b3e7942e61e31",
"https://bcr.bazel.build/modules/rules_java/7.10.0/MODULE.bazel": "530c3beb3067e870561739f1144329a21c851ff771cd752a49e06e3dc9c2e71a",
"https://bcr.bazel.build/modules/rules_java/7.12.2/MODULE.bazel": "579c505165ee757a4280ef83cda0150eea193eed3bef50b1004ba88b99da6de6",
"https://bcr.bazel.build/modules/rules_java/7.2.0/MODULE.bazel": "06c0334c9be61e6cef2c8c84a7800cef502063269a5af25ceb100b192453d4ab",
"https://bcr.bazel.build/modules/rules_java/7.3.2/MODULE.bazel": "50dece891cfdf1741ea230d001aa9c14398062f2b7c066470accace78e412bc2",
"https://bcr.bazel.build/modules/rules_java/7.6.1/MODULE.bazel": "2f14b7e8a1aa2f67ae92bc69d1ec0fa8d9f827c4e17ff5e5f02e91caa3b2d0fe",
"https://bcr.bazel.build/modules/rules_java/8.14.0/MODULE.bazel": "717717ed40cc69994596a45aec6ea78135ea434b8402fb91b009b9151dd65615",
"https://bcr.bazel.build/modules/rules_java/8.14.0/source.json": "8a88c4ca9e8759da53cddc88123880565c520503321e2566b4e33d0287a3d4bc",
"https://bcr.bazel.build/modules/rules_java/8.3.2/MODULE.bazel": "7336d5511ad5af0b8615fdc7477535a2e4e723a357b6713af439fe8cf0195017",
"https://bcr.bazel.build/modules/rules_java/8.5.1/MODULE.bazel": "d8a9e38cc5228881f7055a6079f6f7821a073df3744d441978e7a43e20226939",
"https://bcr.bazel.build/modules/rules_java/8.6.0/MODULE.bazel": "9c064c434606d75a086f15ade5edb514308cccd1544c2b2a89bbac4310e41c71",
"https://bcr.bazel.build/modules/rules_java/8.6.1/MODULE.bazel": "f4808e2ab5b0197f094cabce9f4b006a27766beb6a9975931da07099560ca9c2",
"https://bcr.bazel.build/modules/rules_java/9.0.3/MODULE.bazel": "1f98ed015f7e744a745e0df6e898a7c5e83562d6b759dfd475c76456dda5ccea",
"https://bcr.bazel.build/modules/rules_java/9.0.3/source.json": "b038c0c07e12e658135bbc32cc1a2ded6e33785105c9d41958014c592de4593e",
"https://bcr.bazel.build/modules/rules_jvm_external/4.4.2/MODULE.bazel": "a56b85e418c83eb1839819f0b515c431010160383306d13ec21959ac412d2fe7",
"https://bcr.bazel.build/modules/rules_jvm_external/5.1/MODULE.bazel": "33f6f999e03183f7d088c9be518a63467dfd0be94a11d0055fe2d210f89aa909",
"https://bcr.bazel.build/modules/rules_jvm_external/5.2/MODULE.bazel": "d9351ba35217ad0de03816ef3ed63f89d411349353077348a45348b096615036",
"https://bcr.bazel.build/modules/rules_jvm_external/5.3/MODULE.bazel": "bf93870767689637164657731849fb887ad086739bd5d360d90007a581d5527d",
"https://bcr.bazel.build/modules/rules_jvm_external/6.1/MODULE.bazel": "75b5fec090dbd46cf9b7d8ea08cf84a0472d92ba3585b476f44c326eda8059c4",
"https://bcr.bazel.build/modules/rules_jvm_external/6.3/MODULE.bazel": "c998e060b85f71e00de5ec552019347c8bca255062c990ac02d051bb80a38df0",
"https://bcr.bazel.build/modules/rules_jvm_external/6.3/source.json": "6f5f5a5a4419ae4e37c35a5bb0a6ae657ed40b7abc5a5189111b47fcebe43197",
"https://bcr.bazel.build/modules/rules_kotlin/1.9.0/MODULE.bazel": "ef85697305025e5a61f395d4eaede272a5393cee479ace6686dba707de804d59",
"https://bcr.bazel.build/modules/rules_jvm_external/6.7/MODULE.bazel": "e717beabc4d091ecb2c803c2d341b88590e9116b8bf7947915eeb33aab4f96dd",
"https://bcr.bazel.build/modules/rules_jvm_external/6.7/source.json": "5426f412d0a7fc6b611643376c7e4a82dec991491b9ce5cb1cfdd25fe2e92be4",
"https://bcr.bazel.build/modules/rules_kotlin/1.9.6/MODULE.bazel": "d269a01a18ee74d0335450b10f62c9ed81f2321d7958a2934e44272fe82dcef3",
"https://bcr.bazel.build/modules/rules_kotlin/1.9.6/source.json": "2faa4794364282db7c06600b7e5e34867a564ae91bda7cae7c29c64e9466b7d5",
"https://bcr.bazel.build/modules/rules_license/0.0.3/MODULE.bazel": "627e9ab0247f7d1e05736b59dbb1b6871373de5ad31c3011880b4133cafd4bd0",
@@ -150,34 +177,47 @@
"https://bcr.bazel.build/modules/rules_platform/0.1.0/source.json": "98becf9569572719b65f639133510633eb3527fb37d347d7ef08447f3ebcf1c9",
"https://bcr.bazel.build/modules/rules_proto/4.0.0/MODULE.bazel": "a7a7b6ce9bee418c1a760b3d84f83a299ad6952f9903c67f19e4edd964894e06",
"https://bcr.bazel.build/modules/rules_proto/5.3.0-21.7/MODULE.bazel": "e8dff86b0971688790ae75528fe1813f71809b5afd57facb44dad9e8eca631b7",
"https://bcr.bazel.build/modules/rules_proto/6.0.0-rc1/MODULE.bazel": "1e5b502e2e1a9e825eef74476a5a1ee524a92297085015a052510b09a1a09483",
"https://bcr.bazel.build/modules/rules_proto/6.0.2/MODULE.bazel": "ce916b775a62b90b61888052a416ccdda405212b6aaeb39522f7dc53431a5e73",
"https://bcr.bazel.build/modules/rules_proto/7.0.2/MODULE.bazel": "bf81793bd6d2ad89a37a40693e56c61b0ee30f7a7fdbaf3eabbf5f39de47dea2",
"https://bcr.bazel.build/modules/rules_proto/7.0.2/source.json": "1e5e7260ae32ef4f2b52fd1d0de8d03b606a44c91b694d2f1afb1d3b28a48ce1",
"https://bcr.bazel.build/modules/rules_proto/7.1.0/MODULE.bazel": "002d62d9108f75bb807cd56245d45648f38275cb3a99dcd45dfb864c5d74cb96",
"https://bcr.bazel.build/modules/rules_proto/7.1.0/source.json": "39f89066c12c24097854e8f57ab8558929f9c8d474d34b2c00ac04630ad8940e",
"https://bcr.bazel.build/modules/rules_python/0.10.2/MODULE.bazel": "cc82bc96f2997baa545ab3ce73f196d040ffb8756fd2d66125a530031cd90e5f",
"https://bcr.bazel.build/modules/rules_python/0.23.1/MODULE.bazel": "49ffccf0511cb8414de28321f5fcf2a31312b47c40cc21577144b7447f2bf300",
"https://bcr.bazel.build/modules/rules_python/0.25.0/MODULE.bazel": "72f1506841c920a1afec76975b35312410eea3aa7b63267436bfb1dd91d2d382",
"https://bcr.bazel.build/modules/rules_python/0.28.0/MODULE.bazel": "cba2573d870babc976664a912539b320cbaa7114cd3e8f053c720171cde331ed",
"https://bcr.bazel.build/modules/rules_python/0.31.0/MODULE.bazel": "93a43dc47ee570e6ec9f5779b2e64c1476a6ce921c48cc9a1678a91dd5f8fd58",
"https://bcr.bazel.build/modules/rules_python/0.33.2/MODULE.bazel": "3e036c4ad8d804a4dad897d333d8dce200d943df4827cb849840055be8d2e937",
"https://bcr.bazel.build/modules/rules_python/0.4.0/MODULE.bazel": "9208ee05fd48bf09ac60ed269791cf17fb343db56c8226a720fbb1cdf467166c",
"https://bcr.bazel.build/modules/rules_python/0.40.0/MODULE.bazel": "9d1a3cd88ed7d8e39583d9ffe56ae8a244f67783ae89b60caafc9f5cf318ada7",
"https://bcr.bazel.build/modules/rules_python/0.40.0/source.json": "939d4bd2e3110f27bfb360292986bb79fd8dcefb874358ccd6cdaa7bda029320",
"https://bcr.bazel.build/modules/rules_python/1.3.0/MODULE.bazel": "8361d57eafb67c09b75bf4bbe6be360e1b8f4f18118ab48037f2bd50aa2ccb13",
"https://bcr.bazel.build/modules/rules_python/1.4.1/MODULE.bazel": "8991ad45bdc25018301d6b7e1d3626afc3c8af8aaf4bc04f23d0b99c938b73a6",
"https://bcr.bazel.build/modules/rules_python/1.6.0/MODULE.bazel": "7e04ad8f8d5bea40451cf80b1bd8262552aa73f841415d20db96b7241bd027d8",
"https://bcr.bazel.build/modules/rules_python/1.7.0/MODULE.bazel": "d01f995ecd137abf30238ad9ce97f8fc3ac57289c8b24bd0bf53324d937a14f8",
"https://bcr.bazel.build/modules/rules_python/1.7.0/source.json": "028a084b65dcf8f4dc4f82f8778dbe65df133f234b316828a82e060d81bdce32",
"https://bcr.bazel.build/modules/rules_rs/0.0.23/MODULE.bazel": "2e7ae2044105b1873a451c628713329d6746493f677b371f9d8063fd06a00937",
"https://bcr.bazel.build/modules/rules_rs/0.0.23/source.json": "1149e7f599f2e41e9e9de457f9c4deb3d219a4fec967cea30557d02ede88037e",
"https://bcr.bazel.build/modules/rules_rust/0.66.0/MODULE.bazel": "86ef763a582f4739a27029bdcc6c562258ed0ea6f8d58294b049e215ceb251b3",
"https://bcr.bazel.build/modules/rules_rust/0.68.1/MODULE.bazel": "8d3332ef4079673385eb81f8bd68b012decc04ac00c9d5a01a40eff90301732c",
"https://bcr.bazel.build/modules/rules_rust/0.68.1/source.json": "3378e746f81b62457fdfd37391244fa8ff075ba85c05931ee4f3a20ac1efe963",
"https://bcr.bazel.build/modules/rules_shell/0.2.0/MODULE.bazel": "fda8a652ab3c7d8fee214de05e7a9916d8b28082234e8d2c0094505c5268ed3c",
"https://bcr.bazel.build/modules/rules_shell/0.3.0/MODULE.bazel": "de4402cd12f4cc8fda2354fce179fdb068c0b9ca1ec2d2b17b3e21b24c1a937b",
"https://bcr.bazel.build/modules/rules_shell/0.4.0/MODULE.bazel": "0f8f11bb3cd11755f0b48c1de0bbcf62b4b34421023aa41a2fc74ef68d9584f0",
"https://bcr.bazel.build/modules/rules_shell/0.4.1/MODULE.bazel": "00e501db01bbf4e3e1dd1595959092c2fadf2087b2852d3f553b5370f5633592",
"https://bcr.bazel.build/modules/rules_shell/0.6.1/MODULE.bazel": "72e76b0eea4e81611ef5452aa82b3da34caca0c8b7b5c0c9584338aa93bae26b",
"https://bcr.bazel.build/modules/rules_shell/0.6.1/source.json": "20ec05cd5e592055e214b2da8ccb283c7f2a421ea0dc2acbf1aa792e11c03d0c",
"https://bcr.bazel.build/modules/rules_swift/1.16.0/MODULE.bazel": "4a09f199545a60d09895e8281362b1ff3bb08bbde69c6fc87aff5b92fcc916ca",
"https://bcr.bazel.build/modules/rules_swift/2.1.1/MODULE.bazel": "494900a80f944fc7aa61500c2073d9729dff0b764f0e89b824eb746959bc1046",
"https://bcr.bazel.build/modules/rules_swift/2.4.0/MODULE.bazel": "1639617eb1ede28d774d967a738b4a68b0accb40650beadb57c21846beab5efd",
"https://bcr.bazel.build/modules/rules_swift/3.1.2/MODULE.bazel": "72c8f5cf9d26427cee6c76c8e3853eb46ce6b0412a081b2b6db6e8ad56267400",
"https://bcr.bazel.build/modules/rules_swift/3.1.2/source.json": "e85761f3098a6faf40b8187695e3de6d97944e98abd0d8ce579cb2daf6319a66",
"https://bcr.bazel.build/modules/stardoc/0.5.1/MODULE.bazel": "1a05d92974d0c122f5ccf09291442580317cdd859f07a8655f1db9a60374f9f8",
"https://bcr.bazel.build/modules/stardoc/0.5.3/MODULE.bazel": "c7f6948dae6999bf0db32c1858ae345f112cacf98f174c7a8bb707e41b974f1c",
"https://bcr.bazel.build/modules/stardoc/0.5.6/MODULE.bazel": "c43dabc564990eeab55e25ed61c07a1aadafe9ece96a4efabb3f8bf9063b71ef",
"https://bcr.bazel.build/modules/stardoc/0.6.2/MODULE.bazel": "7060193196395f5dd668eda046ccbeacebfd98efc77fed418dbe2b82ffaa39fd",
"https://bcr.bazel.build/modules/stardoc/0.7.0/MODULE.bazel": "05e3d6d30c099b6770e97da986c53bd31844d7f13d41412480ea265ac9e8079c",
"https://bcr.bazel.build/modules/stardoc/0.7.1/MODULE.bazel": "3548faea4ee5dda5580f9af150e79d0f6aea934fc60c1cc50f4efdd9420759e7",
"https://bcr.bazel.build/modules/stardoc/0.7.1/source.json": "b6500ffcd7b48cd72c29bb67bcac781e12701cc0d6d55d266a652583cfcdab01",
"https://bcr.bazel.build/modules/stardoc/0.7.2/MODULE.bazel": "fc152419aa2ea0f51c29583fab1e8c99ddefd5b3778421845606ee628629e0e5",
"https://bcr.bazel.build/modules/stardoc/0.7.2/source.json": "58b029e5e901d6802967754adf0a9056747e8176f017cfe3607c0851f4d42216",
"https://bcr.bazel.build/modules/swift_argument_parser/1.3.1.1/MODULE.bazel": "5e463fbfba7b1701d957555ed45097d7f984211330106ccd1352c6e0af0dcf91",
"https://bcr.bazel.build/modules/swift_argument_parser/1.3.1.2/MODULE.bazel": "75aab2373a4bbe2a1260b9bf2a1ebbdbf872d3bd36f80bff058dccd82e89422f",
"https://bcr.bazel.build/modules/swift_argument_parser/1.3.1.2/source.json": "5fba48bbe0ba48761f9e9f75f92876cafb5d07c0ce059cc7a8027416de94a05b",
"https://bcr.bazel.build/modules/tar.bzl/0.2.1/MODULE.bazel": "52d1c00a80a8cc67acbd01649e83d8dd6a9dc426a6c0b754a04fe8c219c76468",
"https://bcr.bazel.build/modules/tar.bzl/0.6.0/MODULE.bazel": "a3584b4edcfafcabd9b0ef9819808f05b372957bbdff41601429d5fd0aac2e7c",
"https://bcr.bazel.build/modules/tar.bzl/0.6.0/source.json": "4a620381df075a16cb3a7ed57bd1d05f7480222394c64a20fa51bdb636fda658",
@@ -197,9 +237,10 @@
"general": {
"bzlTransitiveDigest": "dnnhvKMf9MIXMulhbhHBblZdDAfAkiSVjApIXpUz9Y8=",
"usagesDigest": "dPuxg6asjUidjHZi+xFfMiW+r9RawVYGjTZnOeP+fLI=",
"recordedFileInputs": {},
"recordedDirentsInputs": {},
"envVariables": {},
"recordedInputs": [
"REPO_MAPPING:aspect_tools_telemetry+,bazel_lib bazel_lib+",
"REPO_MAPPING:aspect_tools_telemetry+,bazel_skylib bazel_skylib+"
],
"generatedRepoSpecs": {
"aspect_tools_telemetry_report": {
"repoRuleId": "@@aspect_tools_telemetry+//:extension.bzl%tel_repository",
@@ -246,28 +287,16 @@
}
}
}
},
"recordedRepoMappingEntries": [
[
"aspect_tools_telemetry+",
"bazel_lib",
"bazel_lib+"
],
[
"aspect_tools_telemetry+",
"bazel_skylib",
"bazel_skylib+"
]
]
}
}
},
"@@rules_kotlin+//src/main/starlark/core/repositories:bzlmod_setup.bzl%rules_kotlin_extensions": {
"general": {
"bzlTransitiveDigest": "rL/34P1aFDq2GqVC2zCFgQ8nTuOC6ziogocpvG50Qz8=",
"bzlTransitiveDigest": "ABI1D/sbS1ovwaW/kHDoj8nnXjQ0oKU9fzmzEG4iT8o=",
"usagesDigest": "QI2z8ZUR+mqtbwsf2fLqYdJAkPOHdOV+tF2yVAUgRzw=",
"recordedFileInputs": {},
"recordedDirentsInputs": {},
"envVariables": {},
"recordedInputs": [
"REPO_MAPPING:rules_kotlin+,bazel_tools bazel_tools"
],
"generatedRepoSpecs": {
"com_github_jetbrains_kotlin_git": {
"repoRuleId": "@@rules_kotlin+//src/main/starlark/core/repositories:compiler.bzl%kotlin_compiler_git_repository",
@@ -315,14 +344,205 @@
]
}
}
},
"recordedRepoMappingEntries": [
[
"rules_kotlin+",
"bazel_tools",
"bazel_tools"
]
]
}
}
},
"@@rules_python+//python/extensions:config.bzl%config": {
"general": {
"bzlTransitiveDigest": "2hLgIvNVTLgxus0ZuXtleBe70intCfo0cHs8qvt6cdM=",
"usagesDigest": "ZVSXMAGpD+xzVNPuvF1IoLBkty7TROO0+akMapt1pAg=",
"recordedInputs": [
"REPO_MAPPING:rules_python+,bazel_tools bazel_tools",
"REPO_MAPPING:rules_python+,pypi__build rules_python++config+pypi__build",
"REPO_MAPPING:rules_python+,pypi__click rules_python++config+pypi__click",
"REPO_MAPPING:rules_python+,pypi__colorama rules_python++config+pypi__colorama",
"REPO_MAPPING:rules_python+,pypi__importlib_metadata rules_python++config+pypi__importlib_metadata",
"REPO_MAPPING:rules_python+,pypi__installer rules_python++config+pypi__installer",
"REPO_MAPPING:rules_python+,pypi__more_itertools rules_python++config+pypi__more_itertools",
"REPO_MAPPING:rules_python+,pypi__packaging rules_python++config+pypi__packaging",
"REPO_MAPPING:rules_python+,pypi__pep517 rules_python++config+pypi__pep517",
"REPO_MAPPING:rules_python+,pypi__pip rules_python++config+pypi__pip",
"REPO_MAPPING:rules_python+,pypi__pip_tools rules_python++config+pypi__pip_tools",
"REPO_MAPPING:rules_python+,pypi__pyproject_hooks rules_python++config+pypi__pyproject_hooks",
"REPO_MAPPING:rules_python+,pypi__setuptools rules_python++config+pypi__setuptools",
"REPO_MAPPING:rules_python+,pypi__tomli rules_python++config+pypi__tomli",
"REPO_MAPPING:rules_python+,pypi__wheel rules_python++config+pypi__wheel",
"REPO_MAPPING:rules_python+,pypi__zipp rules_python++config+pypi__zipp"
],
"generatedRepoSpecs": {
"rules_python_internal": {
"repoRuleId": "@@rules_python+//python/private:internal_config_repo.bzl%internal_config_repo",
"attributes": {
"transition_setting_generators": {},
"transition_settings": []
}
},
"pypi__build": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/e2/03/f3c8ba0a6b6e30d7d18c40faab90807c9bb5e9a1e3b2fe2008af624a9c97/build-1.2.1-py3-none-any.whl",
"sha256": "75e10f767a433d9a86e50d83f418e83efc18ede923ee5ff7df93b6cb0306c5d4",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__click": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/00/2e/d53fa4befbf2cfa713304affc7ca780ce4fc1fd8710527771b58311a3229/click-8.1.7-py3-none-any.whl",
"sha256": "ae74fb96c20a0277a1d615f1e4d73c8414f5a98db8b799a7931d1582f3390c28",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__colorama": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl",
"sha256": "4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__importlib_metadata": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/2d/0a/679461c511447ffaf176567d5c496d1de27cbe34a87df6677d7171b2fbd4/importlib_metadata-7.1.0-py3-none-any.whl",
"sha256": "30962b96c0c223483ed6cc7280e7f0199feb01a0e40cfae4d4450fc6fab1f570",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__installer": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/e5/ca/1172b6638d52f2d6caa2dd262ec4c811ba59eee96d54a7701930726bce18/installer-0.7.0-py3-none-any.whl",
"sha256": "05d1933f0a5ba7d8d6296bb6d5018e7c94fa473ceb10cf198a92ccea19c27b53",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__more_itertools": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/50/e2/8e10e465ee3987bb7c9ab69efb91d867d93959095f4807db102d07995d94/more_itertools-10.2.0-py3-none-any.whl",
"sha256": "686b06abe565edfab151cb8fd385a05651e1fdf8f0a14191e4439283421f8684",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__packaging": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/49/df/1fceb2f8900f8639e278b056416d49134fb8d84c5942ffaa01ad34782422/packaging-24.0-py3-none-any.whl",
"sha256": "2ddfb553fdf02fb784c234c7ba6ccc288296ceabec964ad2eae3777778130bc5",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__pep517": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/25/6e/ca4a5434eb0e502210f591b97537d322546e4833dcb4d470a48c375c5540/pep517-0.13.1-py3-none-any.whl",
"sha256": "31b206f67165b3536dd577c5c3f1518e8fbaf38cbc57efff8369a392feff1721",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__pip": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/8a/6a/19e9fe04fca059ccf770861c7d5721ab4c2aebc539889e97c7977528a53b/pip-24.0-py3-none-any.whl",
"sha256": "ba0d021a166865d2265246961bec0152ff124de910c5cc39f1156ce3fa7c69dc",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__pip_tools": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/0d/dc/38f4ce065e92c66f058ea7a368a9c5de4e702272b479c0992059f7693941/pip_tools-7.4.1-py3-none-any.whl",
"sha256": "4c690e5fbae2f21e87843e89c26191f0d9454f362d8acdbd695716493ec8b3a9",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__pyproject_hooks": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/ae/f3/431b9d5fe7d14af7a32340792ef43b8a714e7726f1d7b69cc4e8e7a3f1d7/pyproject_hooks-1.1.0-py3-none-any.whl",
"sha256": "7ceeefe9aec63a1064c18d939bdc3adf2d8aa1988a510afec15151578b232aa2",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__setuptools": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/90/99/158ad0609729111163fc1f674a5a42f2605371a4cf036d0441070e2f7455/setuptools-78.1.1-py3-none-any.whl",
"sha256": "c3a9c4211ff4c309edb8b8c4f1cbfa7ae324c4ba9f91ff254e3d305b9fd54561",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__tomli": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/97/75/10a9ebee3fd790d20926a90a2547f0bf78f371b2f13aa822c759680ca7b9/tomli-2.0.1-py3-none-any.whl",
"sha256": "939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__wheel": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/7d/cd/d7460c9a869b16c3dd4e1e403cce337df165368c71d6af229a74699622ce/wheel-0.43.0-py3-none-any.whl",
"sha256": "55c570405f142630c6b9f72fe09d9b67cf1477fcf543ae5b8dcb1f5b7377da81",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
},
"pypi__zipp": {
"repoRuleId": "@@bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "https://files.pythonhosted.org/packages/da/55/a03fd7240714916507e1fcf7ae355bd9d9ed2e6db492595f1a67f61681be/zipp-3.18.2-py3-none-any.whl",
"sha256": "dce197b859eb796242b0622af1b8beb0a722d52aa2f57133ead08edd5bf5374e",
"type": "zip",
"build_file_content": "package(default_visibility = [\"//visibility:public\"])\n\nload(\"@rules_python//python:py_library.bzl\", \"py_library\")\n\npy_library(\n name = \"lib\",\n srcs = glob([\"**/*.py\"]),\n data = glob([\"**/*\"], exclude=[\n # These entries include those put into user-installed dependencies by\n # data_exclude to avoid non-determinism.\n \"**/*.py\",\n \"**/*.pyc\",\n \"**/*.pyc.*\", # During pyc creation, temp files named *.pyc.NNN are created\n \"**/*.dist-info/RECORD\",\n \"BUILD\",\n \"WORKSPACE\",\n ]),\n # This makes this directory a top-level in the python import\n # search path for anything that depends on this.\n imports = [\".\"],\n)\n"
}
}
}
}
},
"@@rules_python+//python/uv:uv.bzl%uv": {
"general": {
"bzlTransitiveDigest": "ijW9KS7qsIY+yBVvJ+Nr1mzwQox09j13DnE3iIwaeTM=",
"usagesDigest": "H8dQoNZcoqP+Mu0tHZTi4KHATzvNkM5ePuEqoQdklIU=",
"recordedInputs": [
"REPO_MAPPING:rules_python+,bazel_tools bazel_tools",
"REPO_MAPPING:rules_python+,platforms platforms"
],
"generatedRepoSpecs": {
"uv": {
"repoRuleId": "@@rules_python+//python/uv/private:uv_toolchains_repo.bzl%uv_toolchains_repo",
"attributes": {
"toolchain_type": "'@@rules_python+//python/uv:uv_toolchain_type'",
"toolchain_names": [
"none"
],
"toolchain_implementations": {
"none": "'@@rules_python+//python:none'"
},
"toolchain_compatible_with": {
"none": [
"@platforms//:incompatible"
]
},
"toolchain_target_settings": {}
}
}
}
}
}
},
@@ -355,7 +575,7 @@
"anstyle_1.0.11": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"lexopt\",\"req\":\"^0.3.0\"}],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"anyhow_1.0.100": "{\"dependencies\":[{\"name\":\"backtrace\",\"optional\":true,\"req\":\"^0.3.51\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3\"},{\"kind\":\"dev\",\"name\":\"rustversion\",\"req\":\"^1.0.6\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"syn\",\"req\":\"^2.0\"},{\"kind\":\"dev\",\"name\":\"thiserror\",\"req\":\"^2\"},{\"features\":[\"diff\"],\"kind\":\"dev\",\"name\":\"trybuild\",\"req\":\"^1.0.108\"}],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"arboard_3.6.1": "{\"dependencies\":[{\"features\":[\"std\"],\"name\":\"clipboard-win\",\"req\":\"^5.3.1\",\"target\":\"cfg(windows)\"},{\"kind\":\"dev\",\"name\":\"env_logger\",\"req\":\"^0.10.2\"},{\"default_features\":false,\"features\":[\"png\"],\"name\":\"image\",\"optional\":true,\"req\":\"^0.25\",\"target\":\"cfg(all(unix, not(any(target_os=\\\"macos\\\", target_os=\\\"android\\\", target_os=\\\"emscripten\\\"))))\"},{\"default_features\":false,\"features\":[\"tiff\"],\"name\":\"image\",\"optional\":true,\"req\":\"^0.25\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"default_features\":false,\"features\":[\"png\",\"bmp\"],\"name\":\"image\",\"optional\":true,\"req\":\"^0.25\",\"target\":\"cfg(windows)\"},{\"name\":\"log\",\"req\":\"^0.4\",\"target\":\"cfg(all(unix, not(any(target_os=\\\"macos\\\", target_os=\\\"android\\\", target_os=\\\"emscripten\\\"))))\"},{\"name\":\"log\",\"req\":\"^0.4\",\"target\":\"cfg(windows)\"},{\"name\":\"objc2\",\"req\":\"^0.6.0\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"default_features\":false,\"features\":[\"std\",\"objc2-core-graphics\",\"NSPasteboard\",\"NSPasteboardItem\",\"NSImage\"],\"name\":\"objc2-app-kit\",\"req\":\"^0.3.0\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"default_features\":false,\"features\":[\"std\",\"CFCGTypes\"],\"name\":\"objc2-core-foundation\",\"optional\":true,\"req\":\"^0.3.0\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"default_features\":false,\"features\":[\"std\",\"CGImage\",\"CGColorSpace\",\"CGDataProvider\"],\"name\":\"objc2-core-graphics\",\"optional\":true,\"req\":\"^0.3.0\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"default_features\":false,\"features\":[\"std\",\"NSArray\",\"NSString\",\"NSEnumerator\",\"NSGeometry\",\"NSValue\"],\"name\":\"objc2-foundation\",\"req\":\"^0.3.0\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"name\":\"parking_lot\",\"req\":\"^0.12\",\"target\":\"cfg(all(unix, not(any(target_os=\\\"macos\\\", target_os=\\\"android\\\", target_os=\\\"emscripten\\\"))))\"},{\"name\":\"percent-encoding\",\"req\":\"^2.3.1\",\"target\":\"cfg(all(unix, not(any(target_os=\\\"macos\\\", target_os=\\\"android\\\", target_os=\\\"emscripten\\\"))))\"},{\"features\":[\"Win32_Foundation\",\"Win32_Storage_FileSystem\",\"Win32_System_DataExchange\",\"Win32_System_Memory\",\"Win32_System_Ole\",\"Win32_UI_Shell\"],\"name\":\"windows-sys\",\"req\":\">=0.52.0, <0.61.0\",\"target\":\"cfg(windows)\"},{\"name\":\"wl-clipboard-rs\",\"optional\":true,\"req\":\"^0.9.0\",\"target\":\"cfg(all(unix, not(any(target_os=\\\"macos\\\", target_os=\\\"android\\\", target_os=\\\"emscripten\\\"))))\"},{\"name\":\"x11rb\",\"req\":\"^0.13\",\"target\":\"cfg(all(unix, not(any(target_os=\\\"macos\\\", target_os=\\\"android\\\", target_os=\\\"emscripten\\\"))))\"}],\"features\":{\"core-graphics\":[\"dep:objc2-core-graphics\"],\"default\":[\"image-data\"],\"image\":[\"dep:image\"],\"image-data\":[\"dep:objc2-core-graphics\",\"dep:objc2-core-foundation\",\"image\",\"windows-sys\",\"core-graphics\"],\"wayland-data-control\":[\"wl-clipboard-rs\"],\"windows-sys\":[\"windows-sys/Win32_Graphics_Gdi\"],\"wl-clipboard-rs\":[\"dep:wl-clipboard-rs\"]}}",
"arc-swap_1.7.1": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"adaptive-barrier\",\"req\":\"~1\"},{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"~0.5\"},{\"kind\":\"dev\",\"name\":\"crossbeam-utils\",\"req\":\"~0.8\"},{\"kind\":\"dev\",\"name\":\"itertools\",\"req\":\"^0.12\"},{\"kind\":\"dev\",\"name\":\"num_cpus\",\"req\":\"~1\"},{\"kind\":\"dev\",\"name\":\"once_cell\",\"req\":\"~1\"},{\"kind\":\"dev\",\"name\":\"parking_lot\",\"req\":\"~0.12\"},{\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1\"},{\"features\":[\"rc\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_derive\",\"req\":\"^1.0.130\"},{\"kind\":\"dev\",\"name\":\"serde_test\",\"req\":\"^1.0.130\"}],\"features\":{\"experimental-strategies\":[],\"experimental-thread-local\":[],\"internal-test-strategies\":[],\"weak\":[]}}",
"arc-swap_1.8.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"adaptive-barrier\",\"req\":\"~1\"},{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"~0.7\"},{\"kind\":\"dev\",\"name\":\"crossbeam-utils\",\"req\":\"~0.8\"},{\"kind\":\"dev\",\"name\":\"itertools\",\"req\":\"^0.14\"},{\"kind\":\"dev\",\"name\":\"num_cpus\",\"req\":\"~1\"},{\"kind\":\"dev\",\"name\":\"once_cell\",\"req\":\"~1\"},{\"kind\":\"dev\",\"name\":\"parking_lot\",\"req\":\"~0.12\"},{\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1\"},{\"name\":\"rustversion\",\"req\":\"^1\"},{\"features\":[\"rc\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_derive\",\"req\":\"^1.0.130\"},{\"kind\":\"dev\",\"name\":\"serde_test\",\"req\":\"^1.0.177\"}],\"features\":{\"experimental-strategies\":[],\"experimental-thread-local\":[],\"internal-test-strategies\":[],\"weak\":[]}}",
"arrayvec_0.7.6": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"bencher\",\"req\":\"^0.1.4\"},{\"default_features\":false,\"name\":\"borsh\",\"optional\":true,\"req\":\"^1.2.0\"},{\"kind\":\"dev\",\"name\":\"matches\",\"req\":\"^0.1\"},{\"default_features\":false,\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"serde_test\",\"req\":\"^1.0\"},{\"default_features\":false,\"name\":\"zeroize\",\"optional\":true,\"req\":\"^1.4\"}],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"ascii-canvas_3.0.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"diff\",\"req\":\"^0.1\"},{\"name\":\"term\",\"req\":\"^0.7\"}],\"features\":{}}",
"ascii_1.1.0": "{\"dependencies\":[{\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.25\"},{\"name\":\"serde_test\",\"optional\":true,\"req\":\"^1.0\"}],\"features\":{\"alloc\":[],\"default\":[\"std\"],\"std\":[\"alloc\"]}}",
@@ -406,7 +626,7 @@
"cfg_aliases_0.1.1": "{\"dependencies\":[],\"features\":{}}",
"cfg_aliases_0.2.1": "{\"dependencies\":[],\"features\":{}}",
"chardetng_0.1.17": "{\"dependencies\":[{\"name\":\"arrayvec\",\"optional\":true,\"req\":\"^0.5.1\"},{\"name\":\"cfg-if\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"detone\",\"req\":\"^1.0.0\"},{\"default_features\":false,\"name\":\"encoding_rs\",\"req\":\"^0.8.29\"},{\"default_features\":false,\"name\":\"memchr\",\"req\":\"^2.2.0\"},{\"name\":\"rayon\",\"optional\":true,\"req\":\"^1.3.0\"}],\"features\":{\"multithreading\":[\"rayon\",\"arrayvec\"],\"testing-only-no-semver-guarantees-do-not-use\":[]}}",
"chrono_0.4.42": "{\"dependencies\":[{\"features\":[\"derive\"],\"name\":\"arbitrary\",\"optional\":true,\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"bincode\",\"req\":\"^1.3.0\"},{\"features\":[\"fallback\"],\"name\":\"iana-time-zone\",\"optional\":true,\"req\":\"^0.1.45\",\"target\":\"cfg(unix)\"},{\"name\":\"js-sys\",\"optional\":true,\"req\":\"^0.3\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", not(any(target_os = \\\"emscripten\\\", target_os = \\\"wasi\\\"))))\"},{\"default_features\":false,\"name\":\"num-traits\",\"req\":\"^0.2\"},{\"name\":\"pure-rust-locales\",\"optional\":true,\"req\":\"^0.8\"},{\"default_features\":false,\"name\":\"rkyv\",\"optional\":true,\"req\":\"^0.7.43\"},{\"default_features\":false,\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.99\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"serde_derive\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"similar-asserts\",\"req\":\"^1.6.1\"},{\"name\":\"wasm-bindgen\",\"optional\":true,\"req\":\"^0.2\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", not(any(target_os = \\\"emscripten\\\", target_os = \\\"wasi\\\"))))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", not(any(target_os = \\\"emscripten\\\", target_os = \\\"wasi\\\"))))\"},{\"kind\":\"dev\",\"name\":\"windows-bindgen\",\"req\":\"^0.63\",\"target\":\"cfg(windows)\"},{\"name\":\"windows-link\",\"optional\":true,\"req\":\"^0.2\",\"target\":\"cfg(windows)\"}],\"features\":{\"__internal_bench\":[],\"alloc\":[],\"clock\":[\"winapi\",\"iana-time-zone\",\"now\"],\"core-error\":[],\"default\":[\"clock\",\"std\",\"oldtime\",\"wasmbind\"],\"libc\":[],\"now\":[\"std\"],\"oldtime\":[],\"rkyv\":[\"dep:rkyv\",\"rkyv/size_32\"],\"rkyv-16\":[\"dep:rkyv\",\"rkyv?/size_16\"],\"rkyv-32\":[\"dep:rkyv\",\"rkyv?/size_32\"],\"rkyv-64\":[\"dep:rkyv\",\"rkyv?/size_64\"],\"rkyv-validation\":[\"rkyv?/validation\"],\"std\":[\"alloc\"],\"unstable-locales\":[\"pure-rust-locales\"],\"wasmbind\":[\"wasm-bindgen\",\"js-sys\"],\"winapi\":[\"windows-link\"]}}",
"chrono_0.4.43": "{\"dependencies\":[{\"features\":[\"derive\"],\"name\":\"arbitrary\",\"optional\":true,\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"bincode\",\"req\":\"^1.3.0\"},{\"name\":\"defmt\",\"optional\":true,\"req\":\"^1.0.1\"},{\"features\":[\"fallback\"],\"name\":\"iana-time-zone\",\"optional\":true,\"req\":\"^0.1.45\",\"target\":\"cfg(unix)\"},{\"name\":\"js-sys\",\"optional\":true,\"req\":\"^0.3\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", not(any(target_os = \\\"emscripten\\\", target_os = \\\"wasi\\\"))))\"},{\"default_features\":false,\"name\":\"num-traits\",\"req\":\"^0.2\"},{\"name\":\"pure-rust-locales\",\"optional\":true,\"req\":\"^0.8.2\"},{\"default_features\":false,\"name\":\"rkyv\",\"optional\":true,\"req\":\"^0.7.43\"},{\"default_features\":false,\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.99\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"serde_derive\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"similar-asserts\",\"req\":\"^1.6.1\"},{\"name\":\"wasm-bindgen\",\"optional\":true,\"req\":\"^0.2\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", not(any(target_os = \\\"emscripten\\\", target_os = \\\"wasi\\\"))))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", not(any(target_os = \\\"emscripten\\\", target_os = \\\"wasi\\\"))))\"},{\"kind\":\"dev\",\"name\":\"windows-bindgen\",\"req\":\"^0.66\"},{\"name\":\"windows-link\",\"optional\":true,\"req\":\"^0.2\",\"target\":\"cfg(windows)\"}],\"features\":{\"__internal_bench\":[],\"alloc\":[],\"clock\":[\"winapi\",\"iana-time-zone\",\"now\"],\"core-error\":[],\"default\":[\"clock\",\"std\",\"oldtime\",\"wasmbind\"],\"defmt\":[\"dep:defmt\",\"pure-rust-locales?/defmt\"],\"libc\":[],\"now\":[\"std\"],\"oldtime\":[],\"rkyv\":[\"dep:rkyv\",\"rkyv/size_32\"],\"rkyv-16\":[\"dep:rkyv\",\"rkyv?/size_16\"],\"rkyv-32\":[\"dep:rkyv\",\"rkyv?/size_32\"],\"rkyv-64\":[\"dep:rkyv\",\"rkyv?/size_64\"],\"rkyv-validation\":[\"rkyv?/validation\"],\"std\":[\"alloc\"],\"unstable-locales\":[\"pure-rust-locales\"],\"wasmbind\":[\"wasm-bindgen\",\"js-sys\"],\"winapi\":[\"windows-link\"]}}",
"chunked_transfer_1.5.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.3\"}],\"features\":{}}",
"cipher_0.4.4": "{\"dependencies\":[{\"name\":\"blobby\",\"optional\":true,\"req\":\"^0.3\"},{\"name\":\"crypto-common\",\"req\":\"^0.1.6\"},{\"name\":\"inout\",\"req\":\"^0.1\"},{\"default_features\":false,\"name\":\"zeroize\",\"optional\":true,\"req\":\"^1.5\"}],\"features\":{\"alloc\":[],\"block-padding\":[\"inout/block-padding\"],\"dev\":[\"blobby\"],\"rand_core\":[\"crypto-common/rand_core\"],\"std\":[\"alloc\",\"crypto-common/std\",\"inout/std\"]}}",
"clap_4.5.54": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"automod\",\"req\":\"^1.0.14\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"clap-cargo\",\"req\":\"^0.15.0\"},{\"default_features\":false,\"name\":\"clap_builder\",\"req\":\"=4.5.54\"},{\"name\":\"clap_derive\",\"optional\":true,\"req\":\"=4.5.49\"},{\"kind\":\"dev\",\"name\":\"jiff\",\"req\":\"^0.2.3\"},{\"kind\":\"dev\",\"name\":\"rustversion\",\"req\":\"^1.0.15\"},{\"kind\":\"dev\",\"name\":\"semver\",\"req\":\"^1.0.26\"},{\"kind\":\"dev\",\"name\":\"shlex\",\"req\":\"^1.3.0\"},{\"features\":[\"term-svg\"],\"kind\":\"dev\",\"name\":\"snapbox\",\"req\":\"^0.6.16\"},{\"kind\":\"dev\",\"name\":\"trybuild\",\"req\":\"^1.0.91\"},{\"default_features\":false,\"features\":[\"color-auto\",\"diff\",\"examples\"],\"kind\":\"dev\",\"name\":\"trycmd\",\"req\":\"^0.15.3\"}],\"features\":{\"cargo\":[\"clap_builder/cargo\"],\"color\":[\"clap_builder/color\"],\"debug\":[\"clap_builder/debug\",\"clap_derive?/debug\"],\"default\":[\"std\",\"color\",\"help\",\"usage\",\"error-context\",\"suggestions\"],\"deprecated\":[\"clap_builder/deprecated\",\"clap_derive?/deprecated\"],\"derive\":[\"dep:clap_derive\"],\"env\":[\"clap_builder/env\"],\"error-context\":[\"clap_builder/error-context\"],\"help\":[\"clap_builder/help\"],\"std\":[\"clap_builder/std\"],\"string\":[\"clap_builder/string\"],\"suggestions\":[\"clap_builder/suggestions\"],\"unicode\":[\"clap_builder/unicode\"],\"unstable-derive-ui-tests\":[],\"unstable-doc\":[\"clap_builder/unstable-doc\",\"derive\"],\"unstable-ext\":[\"clap_builder/unstable-ext\"],\"unstable-markdown\":[\"clap_derive/unstable-markdown\"],\"unstable-styles\":[\"clap_builder/unstable-styles\"],\"unstable-v5\":[\"clap_builder/unstable-v5\",\"clap_derive?/unstable-v5\",\"deprecated\"],\"usage\":[\"clap_builder/usage\"],\"wrap_help\":[\"clap_builder/wrap_help\"]}}",
@@ -421,7 +641,6 @@
"colorchoice_1.0.4": "{\"dependencies\":[],\"features\":{}}",
"combine_4.6.7": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-std\",\"req\":\"^1\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"bytes\",\"req\":\"^1\"},{\"name\":\"bytes_05\",\"optional\":true,\"package\":\"bytes\",\"req\":\"^0.5\"},{\"kind\":\"dev\",\"name\":\"bytes_05\",\"package\":\"bytes\",\"req\":\"^0.5\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.3\"},{\"kind\":\"dev\",\"name\":\"futures-03-dep\",\"package\":\"futures\",\"req\":\"^0.3.1\"},{\"default_features\":false,\"name\":\"futures-core-03\",\"optional\":true,\"package\":\"futures-core\",\"req\":\"^0.3.1\"},{\"default_features\":false,\"name\":\"futures-io-03\",\"optional\":true,\"package\":\"futures-io\",\"req\":\"^0.3.1\"},{\"default_features\":false,\"name\":\"memchr\",\"req\":\"^2.3\"},{\"kind\":\"dev\",\"name\":\"once_cell\",\"req\":\"^1.0\"},{\"features\":[\"tokio\",\"quickcheck\"],\"kind\":\"dev\",\"name\":\"partial-io\",\"req\":\"^0.3\"},{\"name\":\"pin-project-lite\",\"optional\":true,\"req\":\"^0.2\"},{\"kind\":\"dev\",\"name\":\"quick-error\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"quickcheck\",\"req\":\"^0.6\"},{\"name\":\"regex\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"io-util\"],\"name\":\"tokio-02-dep\",\"optional\":true,\"package\":\"tokio\",\"req\":\"^0.2.3\"},{\"features\":[\"fs\",\"io-driver\",\"io-util\",\"macros\"],\"kind\":\"dev\",\"name\":\"tokio-02-dep\",\"package\":\"tokio\",\"req\":\"^0.2\"},{\"default_features\":false,\"name\":\"tokio-03-dep\",\"optional\":true,\"package\":\"tokio\",\"req\":\"^0.3\"},{\"features\":[\"fs\",\"macros\",\"rt-multi-thread\"],\"kind\":\"dev\",\"name\":\"tokio-03-dep\",\"package\":\"tokio\",\"req\":\"^0.3\"},{\"default_features\":false,\"name\":\"tokio-dep\",\"optional\":true,\"package\":\"tokio\",\"req\":\"^1\"},{\"features\":[\"fs\",\"macros\",\"rt\",\"rt-multi-thread\",\"io-util\"],\"kind\":\"dev\",\"name\":\"tokio-dep\",\"package\":\"tokio\",\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"codec\"],\"name\":\"tokio-util\",\"optional\":true,\"req\":\"^0.7\"}],\"features\":{\"alloc\":[],\"default\":[\"std\"],\"futures-03\":[\"pin-project\",\"std\",\"futures-core-03\",\"futures-io-03\",\"pin-project-lite\"],\"mp4\":[],\"pin-project\":[\"pin-project-lite\"],\"std\":[\"memchr/std\",\"bytes\",\"alloc\"],\"tokio\":[\"tokio-dep\",\"tokio-util/io\",\"futures-core-03\",\"pin-project-lite\"],\"tokio-02\":[\"pin-project\",\"std\",\"tokio-02-dep\",\"futures-core-03\",\"pin-project-lite\",\"bytes_05\"],\"tokio-03\":[\"pin-project\",\"std\",\"tokio-03-dep\",\"futures-core-03\",\"pin-project-lite\"]}}",
"compact_str_0.8.1": "{\"dependencies\":[{\"default_features\":false,\"name\":\"arbitrary\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"borsh\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"castaway\",\"req\":\"^0.2.3\"},{\"name\":\"cfg-if\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"cfg-if\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"diesel\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"itoa\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"markup\",\"optional\":true,\"req\":\"^0.13\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"proptest\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"std\"],\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"quickcheck\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"quickcheck\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"quickcheck_macros\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"rayon\",\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"size_32\"],\"name\":\"rkyv\",\"optional\":true,\"req\":\"^0.7\"},{\"default_features\":false,\"features\":[\"alloc\",\"size_32\"],\"kind\":\"dev\",\"name\":\"rkyv\",\"req\":\"^0.7\"},{\"name\":\"rustversion\",\"req\":\"^1\"},{\"name\":\"ryu\",\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"derive\",\"alloc\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1\"},{\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\"},{\"features\":[\"union\"],\"name\":\"smallvec\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"name\":\"sqlx\",\"optional\":true,\"req\":\"^0.7\"},{\"name\":\"static_assertions\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"test-case\",\"req\":\"^3\"},{\"kind\":\"dev\",\"name\":\"test-strategy\",\"req\":\"^0.3\"}],\"features\":{\"arbitrary\":[\"dep:arbitrary\"],\"borsh\":[\"dep:borsh\"],\"bytes\":[\"dep:bytes\"],\"default\":[\"std\"],\"diesel\":[\"dep:diesel\"],\"markup\":[\"dep:markup\"],\"proptest\":[\"dep:proptest\"],\"quickcheck\":[\"dep:quickcheck\"],\"rkyv\":[\"dep:rkyv\"],\"serde\":[\"dep:serde\"],\"smallvec\":[\"dep:smallvec\"],\"sqlx\":[\"dep:sqlx\",\"std\"],\"sqlx-mysql\":[\"sqlx\",\"sqlx/mysql\"],\"sqlx-postgres\":[\"sqlx\",\"sqlx/postgres\"],\"sqlx-sqlite\":[\"sqlx\",\"sqlx/sqlite\"],\"std\":[]}}",
"compact_str_0.9.0": "{\"dependencies\":[{\"default_features\":false,\"name\":\"arbitrary\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"borsh\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"castaway\",\"req\":\"^0.2.3\"},{\"name\":\"cfg-if\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"cfg-if\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"diesel\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"itoa\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"markup\",\"optional\":true,\"req\":\"^0.15\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"proptest\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"std\"],\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"quickcheck\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"quickcheck\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"quickcheck_macros\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"rayon\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"rkyv\",\"optional\":true,\"req\":\"^0.8\"},{\"kind\":\"dev\",\"name\":\"rkyv\",\"req\":\"^0.8.8\"},{\"name\":\"rustversion\",\"req\":\"^1\"},{\"name\":\"ryu\",\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"derive\",\"alloc\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1\"},{\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\"},{\"features\":[\"union\"],\"name\":\"smallvec\",\"optional\":true,\"req\":\"^1\"},{\"default_features\":false,\"name\":\"sqlx\",\"optional\":true,\"req\":\"^0.8\"},{\"name\":\"static_assertions\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"test-case\",\"req\":\"^3\"},{\"kind\":\"dev\",\"name\":\"test-strategy\",\"req\":\"^0.3\"},{\"default_features\":false,\"name\":\"zeroize\",\"optional\":true,\"req\":\"^1\"}],\"features\":{\"arbitrary\":[\"dep:arbitrary\"],\"borsh\":[\"dep:borsh\"],\"bytes\":[\"dep:bytes\"],\"default\":[\"std\"],\"diesel\":[\"dep:diesel\"],\"markup\":[\"dep:markup\"],\"proptest\":[\"dep:proptest\"],\"quickcheck\":[\"dep:quickcheck\"],\"rkyv\":[\"dep:rkyv\"],\"serde\":[\"dep:serde\"],\"smallvec\":[\"dep:smallvec\"],\"sqlx\":[\"dep:sqlx\",\"std\"],\"sqlx-mysql\":[\"sqlx\",\"sqlx/mysql\"],\"sqlx-postgres\":[\"sqlx\",\"sqlx/postgres\"],\"sqlx-sqlite\":[\"sqlx\",\"sqlx/sqlite\"],\"std\":[],\"zeroize\":[\"dep:zeroize\"]}}",
"concurrent-queue_2.5.0": "{\"dependencies\":[{\"default_features\":false,\"features\":[\"cargo_bench_support\"],\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5\"},{\"default_features\":false,\"name\":\"crossbeam-utils\",\"req\":\"^0.8.11\"},{\"kind\":\"dev\",\"name\":\"easy-parallel\",\"req\":\"^3.1.0\"},{\"kind\":\"dev\",\"name\":\"fastrand\",\"req\":\"^2.0.0\"},{\"name\":\"loom\",\"optional\":true,\"req\":\"^0.7\",\"target\":\"cfg(loom)\"},{\"default_features\":false,\"name\":\"portable-atomic\",\"optional\":true,\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"}],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"console_0.15.11": "{\"dependencies\":[{\"name\":\"encode_unicode\",\"req\":\"^1\",\"target\":\"cfg(windows)\"},{\"name\":\"libc\",\"req\":\"^0.2.99\"},{\"name\":\"once_cell\",\"req\":\"^1.8\"},{\"default_features\":false,\"features\":[\"std\",\"bit-set\",\"break-dead-code\"],\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"regex\",\"req\":\"^1.4.2\"},{\"name\":\"unicode-width\",\"optional\":true,\"req\":\"^0.2\"},{\"features\":[\"Win32_Foundation\",\"Win32_System_Console\",\"Win32_Storage_FileSystem\",\"Win32_UI_Input_KeyboardAndMouse\"],\"name\":\"windows-sys\",\"req\":\"^0.59\",\"target\":\"cfg(windows)\"}],\"features\":{\"ansi-parsing\":[],\"default\":[\"unicode-width\",\"ansi-parsing\"],\"windows-console-colors\":[\"ansi-parsing\"]}}",
"const-hex_1.17.0": "{\"dependencies\":[{\"name\":\"cfg-if\",\"req\":\"^1\"},{\"name\":\"cpufeatures\",\"req\":\"^0.2\",\"target\":\"cfg(any(target_arch = \\\"x86\\\", target_arch = \\\"x86_64\\\"))\"},{\"kind\":\"dev\",\"name\":\"divan\",\"package\":\"codspeed-divan-compat\",\"req\":\"^3\"},{\"default_features\":false,\"features\":[\"alloc\"],\"kind\":\"dev\",\"name\":\"faster-hex\",\"req\":\"^0.10.0\"},{\"default_features\":false,\"features\":[\"alloc\"],\"kind\":\"dev\",\"name\":\"hex\",\"req\":\"~0.4.2\"},{\"default_features\":false,\"name\":\"proptest\",\"optional\":true,\"req\":\"^1.4\"},{\"kind\":\"dev\",\"name\":\"rustc-hex\",\"req\":\"^2.1\"},{\"default_features\":false,\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1.0\"},{\"default_features\":false,\"name\":\"serde_core\",\"optional\":true,\"req\":\"^1.0\"},{\"default_features\":false,\"features\":[\"alloc\"],\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0\"}],\"features\":{\"__fuzzing\":[\"dep:proptest\",\"std\"],\"alloc\":[\"serde_core?/alloc\",\"proptest?/alloc\"],\"core-error\":[],\"default\":[\"std\"],\"force-generic\":[],\"hex\":[],\"nightly\":[],\"portable-simd\":[],\"serde\":[\"dep:serde_core\"],\"std\":[\"serde_core?/std\",\"proptest?/std\",\"alloc\"]}}",
@@ -439,9 +658,9 @@
"crossterm_winapi_0.9.1": "{\"dependencies\":[{\"features\":[\"winbase\",\"consoleapi\",\"processenv\",\"handleapi\",\"synchapi\",\"impl-default\"],\"name\":\"winapi\",\"req\":\"^0.3.8\",\"target\":\"cfg(windows)\"}],\"features\":{}}",
"crunchy_0.2.4": "{\"dependencies\":[],\"features\":{\"default\":[\"limit_128\"],\"limit_1024\":[],\"limit_128\":[],\"limit_2048\":[],\"limit_256\":[],\"limit_512\":[],\"limit_64\":[],\"std\":[]}}",
"crypto-common_0.1.6": "{\"dependencies\":[{\"features\":[\"more_lengths\"],\"name\":\"generic-array\",\"req\":\"^0.14.4\"},{\"name\":\"rand_core\",\"optional\":true,\"req\":\"^0.6\"},{\"name\":\"typenum\",\"req\":\"^1.14\"}],\"features\":{\"getrandom\":[\"rand_core/getrandom\"],\"std\":[]}}",
"ctor-proc-macro_0.0.6": "{\"dependencies\":[],\"features\":{\"default\":[]}}",
"ctor-proc-macro_0.0.7": "{\"dependencies\":[],\"features\":{\"default\":[]}}",
"ctor_0.1.26": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"libc-print\",\"req\":\"^0.1.20\"},{\"name\":\"quote\",\"req\":\"^1.0.20\"},{\"default_features\":false,\"features\":[\"full\",\"parsing\",\"printing\",\"proc-macro\"],\"name\":\"syn\",\"req\":\"^1.0.98\"}],\"features\":{}}",
"ctor_0.5.0": "{\"dependencies\":[{\"name\":\"ctor-proc-macro\",\"optional\":true,\"req\":\"=0.0.6\"},{\"default_features\":false,\"name\":\"dtor\",\"optional\":true,\"req\":\"^0.1.0\"},{\"kind\":\"dev\",\"name\":\"libc-print\",\"req\":\"^0.1.20\"}],\"features\":{\"__no_warn_on_missing_unsafe\":[\"dtor?/__no_warn_on_missing_unsafe\"],\"default\":[\"dtor\",\"proc_macro\",\"__no_warn_on_missing_unsafe\"],\"dtor\":[\"dep:dtor\"],\"proc_macro\":[\"dep:ctor-proc-macro\",\"dtor?/proc_macro\"],\"used_linker\":[\"dtor?/used_linker\"]}}",
"ctor_0.6.3": "{\"dependencies\":[{\"name\":\"ctor-proc-macro\",\"optional\":true,\"req\":\"=0.0.7\"},{\"default_features\":false,\"name\":\"dtor\",\"optional\":true,\"req\":\"^0.1.0\"},{\"kind\":\"dev\",\"name\":\"libc-print\",\"req\":\"^0.1.20\"}],\"features\":{\"__no_warn_on_missing_unsafe\":[\"dtor?/__no_warn_on_missing_unsafe\"],\"default\":[\"dtor\",\"proc_macro\",\"__no_warn_on_missing_unsafe\"],\"dtor\":[\"dep:dtor\"],\"proc_macro\":[\"dep:ctor-proc-macro\",\"dtor?/proc_macro\"],\"used_linker\":[\"dtor?/used_linker\"]}}",
"darling_0.20.11": "{\"dependencies\":[{\"name\":\"darling_core\",\"req\":\"=0.20.11\"},{\"name\":\"darling_macro\",\"req\":\"=0.20.11\"},{\"kind\":\"dev\",\"name\":\"proc-macro2\",\"req\":\"^1.0.86\"},{\"kind\":\"dev\",\"name\":\"quote\",\"req\":\"^1.0.18\"},{\"kind\":\"dev\",\"name\":\"rustversion\",\"req\":\"^1.0.9\",\"target\":\"cfg(compiletests)\"},{\"kind\":\"dev\",\"name\":\"syn\",\"req\":\"^2.0.15\"},{\"kind\":\"dev\",\"name\":\"trybuild\",\"req\":\"^1.0.89\",\"target\":\"cfg(compiletests)\"}],\"features\":{\"default\":[\"suggestions\"],\"diagnostics\":[\"darling_core/diagnostics\"],\"suggestions\":[\"darling_core/suggestions\"]}}",
"darling_0.21.3": "{\"dependencies\":[{\"name\":\"darling_core\",\"req\":\"=0.21.3\"},{\"name\":\"darling_macro\",\"req\":\"=0.21.3\"},{\"kind\":\"dev\",\"name\":\"proc-macro2\",\"req\":\"^1.0.86\"},{\"kind\":\"dev\",\"name\":\"quote\",\"req\":\"^1.0.18\"},{\"kind\":\"dev\",\"name\":\"rustversion\",\"req\":\"^1.0.9\",\"target\":\"cfg(compiletests)\"},{\"kind\":\"dev\",\"name\":\"syn\",\"req\":\"^2.0.15\"},{\"kind\":\"dev\",\"name\":\"trybuild\",\"req\":\"^1.0.89\",\"target\":\"cfg(compiletests)\"}],\"features\":{\"default\":[\"suggestions\"],\"diagnostics\":[\"darling_core/diagnostics\"],\"serde\":[\"darling_core/serde\"],\"suggestions\":[\"darling_core/suggestions\"]}}",
"darling_0.23.0": "{\"dependencies\":[{\"name\":\"darling_core\",\"req\":\"=0.23.0\"},{\"name\":\"darling_macro\",\"req\":\"=0.23.0\"},{\"kind\":\"dev\",\"name\":\"proc-macro2\",\"req\":\"^1.0.86\"},{\"kind\":\"dev\",\"name\":\"quote\",\"req\":\"^1.0.18\"},{\"kind\":\"dev\",\"name\":\"rustversion\",\"req\":\"^1.0.9\",\"target\":\"cfg(compiletests)\"},{\"kind\":\"dev\",\"name\":\"syn\",\"req\":\"^2.0.15\"},{\"kind\":\"dev\",\"name\":\"trybuild\",\"req\":\"^1.0.89\",\"target\":\"cfg(compiletests)\"}],\"features\":{\"default\":[\"suggestions\"],\"diagnostics\":[\"darling_core/diagnostics\"],\"serde\":[\"darling_core/serde\"],\"suggestions\":[\"darling_core/suggestions\"]}}",
@@ -477,7 +696,6 @@
"display_container_0.9.0": "{\"dependencies\":[{\"name\":\"either\",\"req\":\"^1.8\"},{\"name\":\"indenter\",\"req\":\"^0.3.3\"}],\"features\":{}}",
"displaydoc_0.2.5": "{\"dependencies\":[{\"default_features\":false,\"kind\":\"dev\",\"name\":\"libc\",\"req\":\"^0.2\"},{\"kind\":\"dev\",\"name\":\"pretty_assertions\",\"req\":\"^0.6.1\"},{\"name\":\"proc-macro2\",\"req\":\"^1.0\"},{\"name\":\"quote\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"rustversion\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"static_assertions\",\"req\":\"^1.1\"},{\"name\":\"syn\",\"req\":\"^2.0\"},{\"kind\":\"dev\",\"name\":\"thiserror\",\"req\":\"^1.0.24\"},{\"kind\":\"dev\",\"name\":\"trybuild\",\"req\":\"^1.0\"}],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"doc-comment_0.3.3": "{\"dependencies\":[],\"features\":{\"no_core\":[],\"old_macros\":[]}}",
"document-features_0.2.12": "{\"dependencies\":[{\"name\":\"litrs\",\"req\":\"^1.0.0\"}],\"features\":{\"default\":[],\"self-test\":[]}}",
"dotenvy_0.15.7": "{\"dependencies\":[{\"name\":\"clap\",\"optional\":true,\"req\":\"^3.2\"},{\"kind\":\"dev\",\"name\":\"once_cell\",\"req\":\"^1.16.0\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.3.0\"}],\"features\":{\"cli\":[\"clap\"]}}",
"downcast-rs_1.2.1": "{\"dependencies\":[],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"dtor-proc-macro_0.0.6": "{\"dependencies\":[],\"features\":{\"default\":[]}}",
@@ -541,6 +759,8 @@
"getrandom_0.2.16": "{\"dependencies\":[{\"name\":\"cfg-if\",\"req\":\"^1\"},{\"name\":\"compiler_builtins\",\"optional\":true,\"req\":\"^0.1\"},{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0\"},{\"name\":\"js-sys\",\"optional\":true,\"req\":\"^0.3\",\"target\":\"cfg(all(any(target_arch = \\\"wasm32\\\", target_arch = \\\"wasm64\\\"), target_os = \\\"unknown\\\"))\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(unix)\"},{\"default_features\":false,\"name\":\"wasi\",\"req\":\"^0.11\",\"target\":\"cfg(target_os = \\\"wasi\\\")\"},{\"default_features\":false,\"name\":\"wasm-bindgen\",\"optional\":true,\"req\":\"^0.2.62\",\"target\":\"cfg(all(any(target_arch = \\\"wasm32\\\", target_arch = \\\"wasm64\\\"), target_os = \\\"unknown\\\"))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3.18\",\"target\":\"cfg(all(any(target_arch = \\\"wasm32\\\", target_arch = \\\"wasm64\\\"), target_os = \\\"unknown\\\"))\"}],\"features\":{\"custom\":[],\"js\":[\"wasm-bindgen\",\"js-sys\"],\"linux_disable_fallback\":[],\"rdrand\":[],\"rustc-dep-of-std\":[\"compiler_builtins\",\"core\",\"libc/rustc-dep-of-std\",\"wasi/rustc-dep-of-std\"],\"std\":[],\"test-in-browser\":[]}}",
"getrandom_0.3.3": "{\"dependencies\":[{\"name\":\"cfg-if\",\"req\":\"^1\"},{\"name\":\"compiler_builtins\",\"optional\":true,\"req\":\"^0.1\"},{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0\"},{\"default_features\":false,\"name\":\"js-sys\",\"optional\":true,\"req\":\"^0.3.77\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", any(target_os = \\\"unknown\\\", target_os = \\\"none\\\"), target_feature = \\\"atomics\\\"))\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(all(any(target_os = \\\"linux\\\", target_os = \\\"android\\\"), not(any(all(target_os = \\\"linux\\\", target_env = \\\"\\\"), getrandom_backend = \\\"custom\\\", getrandom_backend = \\\"linux_raw\\\", getrandom_backend = \\\"rdrand\\\", getrandom_backend = \\\"rndr\\\"))))\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(any(target_os = \\\"dragonfly\\\", target_os = \\\"freebsd\\\", target_os = \\\"hurd\\\", target_os = \\\"illumos\\\", target_os = \\\"cygwin\\\", all(target_os = \\\"horizon\\\", target_arch = \\\"arm\\\")))\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(any(target_os = \\\"haiku\\\", target_os = \\\"redox\\\", target_os = \\\"nto\\\", target_os = \\\"aix\\\"))\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(any(target_os = \\\"ios\\\", target_os = \\\"visionos\\\", target_os = \\\"watchos\\\", target_os = \\\"tvos\\\"))\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(any(target_os = \\\"macos\\\", target_os = \\\"openbsd\\\", target_os = \\\"vita\\\", target_os = \\\"emscripten\\\"))\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(target_os = \\\"netbsd\\\")\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(target_os = \\\"solaris\\\")\"},{\"default_features\":false,\"name\":\"libc\",\"req\":\"^0.2.154\",\"target\":\"cfg(target_os = \\\"vxworks\\\")\"},{\"default_features\":false,\"name\":\"r-efi\",\"req\":\"^5.1\",\"target\":\"cfg(all(target_os = \\\"uefi\\\", getrandom_backend = \\\"efi_rng\\\"))\"},{\"default_features\":false,\"name\":\"wasi\",\"req\":\"^0.14\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", target_os = \\\"wasi\\\", target_env = \\\"p2\\\"))\"},{\"default_features\":false,\"name\":\"wasm-bindgen\",\"optional\":true,\"req\":\"^0.2.98\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", any(target_os = \\\"unknown\\\", target_os = \\\"none\\\")))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3\",\"target\":\"cfg(all(target_arch = \\\"wasm32\\\", any(target_os = \\\"unknown\\\", target_os = \\\"none\\\")))\"}],\"features\":{\"rustc-dep-of-std\":[\"dep:compiler_builtins\",\"dep:core\"],\"std\":[],\"wasm_js\":[\"dep:wasm-bindgen\",\"dep:js-sys\"]}}",
"gimli_0.31.1": "{\"dependencies\":[{\"name\":\"alloc\",\"optional\":true,\"package\":\"rustc-std-workspace-alloc\",\"req\":\"^1.0.0\"},{\"name\":\"compiler_builtins\",\"optional\":true,\"req\":\"^0.1.2\"},{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0.0\"},{\"default_features\":false,\"name\":\"fallible-iterator\",\"optional\":true,\"req\":\"^0.3.0\"},{\"name\":\"indexmap\",\"optional\":true,\"req\":\"^2.0.0\"},{\"default_features\":false,\"name\":\"stable_deref_trait\",\"optional\":true,\"req\":\"^1.1.0\"},{\"kind\":\"dev\",\"name\":\"test-assembler\",\"req\":\"^0.1.3\"}],\"features\":{\"default\":[\"read-all\",\"write\"],\"endian-reader\":[\"read\",\"dep:stable_deref_trait\"],\"fallible-iterator\":[\"dep:fallible-iterator\"],\"read\":[\"read-core\"],\"read-all\":[\"read\",\"std\",\"fallible-iterator\",\"endian-reader\"],\"read-core\":[],\"rustc-dep-of-std\":[\"dep:core\",\"dep:alloc\",\"dep:compiler_builtins\"],\"std\":[\"fallible-iterator?/std\",\"stable_deref_trait?/std\"],\"write\":[\"dep:indexmap\"]}}",
"git+https://github.com/JakkuSakura/tokio-tungstenite?rev=2ae536b0de793f3ddf31fc2f22d445bf1ef2023d#2ae536b0de793f3ddf31fc2f22d445bf1ef2023d_tokio-tungstenite": "{\"dependencies\":[{\"default_features\":false,\"features\":[\"sink\",\"std\"],\"name\":\"futures-util\",\"optional\":false},{\"name\":\"log\"},{\"default_features\":true,\"features\":[],\"name\":\"native-tls-crate\",\"optional\":true,\"package\":\"native-tls\"},{\"default_features\":false,\"features\":[],\"name\":\"rustls\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"rustls-native-certs\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"rustls-pki-types\",\"optional\":true},{\"default_features\":false,\"features\":[\"io-util\"],\"name\":\"tokio\",\"optional\":false},{\"default_features\":true,\"features\":[],\"name\":\"tokio-native-tls\",\"optional\":true},{\"default_features\":false,\"features\":[],\"name\":\"tokio-rustls\",\"optional\":true},{\"default_features\":false,\"features\":[],\"name\":\"tungstenite\",\"optional\":false},{\"default_features\":true,\"features\":[],\"name\":\"webpki-roots\",\"optional\":true}],\"features\":{\"__rustls-tls\":[\"rustls\",\"rustls-pki-types\",\"tokio-rustls\",\"stream\",\"tungstenite/__rustls-tls\",\"handshake\"],\"connect\":[\"stream\",\"tokio/net\",\"handshake\"],\"default\":[\"connect\",\"handshake\"],\"handshake\":[\"tungstenite/handshake\"],\"native-tls\":[\"native-tls-crate\",\"tokio-native-tls\",\"stream\",\"tungstenite/native-tls\",\"handshake\"],\"native-tls-vendored\":[\"native-tls\",\"native-tls-crate/vendored\",\"tungstenite/native-tls-vendored\"],\"proxy\":[\"tungstenite/proxy\",\"tokio/net\",\"handshake\"],\"rustls-tls-native-roots\":[\"__rustls-tls\",\"rustls-native-certs\"],\"rustls-tls-webpki-roots\":[\"__rustls-tls\",\"webpki-roots\"],\"stream\":[],\"url\":[\"tungstenite/url\"]},\"strip_prefix\":\"\"}",
"git+https://github.com/JakkuSakura/tungstenite-rs?rev=f514de8644821113e5d18a027d6d28a5c8cc0a6e#f514de8644821113e5d18a027d6d28a5c8cc0a6e_tungstenite": "{\"dependencies\":[{\"name\":\"bytes\"},{\"default_features\":true,\"features\":[],\"name\":\"data-encoding\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"http\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"httparse\",\"optional\":true},{\"name\":\"log\"},{\"default_features\":true,\"features\":[],\"name\":\"native-tls-crate\",\"optional\":true,\"package\":\"native-tls\"},{\"name\":\"rand\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"rustls\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"rustls-native-certs\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"rustls-pki-types\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"sha1\",\"optional\":true},{\"name\":\"thiserror\"},{\"default_features\":true,\"features\":[],\"name\":\"url\",\"optional\":true},{\"name\":\"utf-8\"},{\"default_features\":true,\"features\":[],\"name\":\"webpki-roots\",\"optional\":true}],\"features\":{\"__rustls-tls\":[\"rustls\",\"rustls-pki-types\"],\"default\":[\"handshake\"],\"handshake\":[\"data-encoding\",\"http\",\"httparse\",\"sha1\"],\"native-tls\":[\"native-tls-crate\"],\"native-tls-vendored\":[\"native-tls\",\"native-tls-crate/vendored\"],\"proxy\":[\"handshake\"],\"rustls-tls-native-roots\":[\"__rustls-tls\",\"rustls-native-certs\"],\"rustls-tls-webpki-roots\":[\"__rustls-tls\",\"webpki-roots\"],\"url\":[\"dep:url\"]},\"strip_prefix\":\"\"}",
"git+https://github.com/nornagon/crossterm?branch=nornagon%2Fcolor-query#87db8bfa6dc99427fd3b071681b07fc31c6ce995_crossterm": "{\"dependencies\":[{\"default_features\":true,\"features\":[],\"name\":\"bitflags\",\"optional\":false},{\"default_features\":false,\"features\":[],\"name\":\"futures-core\",\"optional\":true},{\"name\":\"parking_lot\"},{\"default_features\":true,\"features\":[\"derive\"],\"name\":\"serde\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"filedescriptor\",\"optional\":true,\"target\":\"cfg(unix)\"},{\"default_features\":false,\"features\":[],\"name\":\"libc\",\"optional\":true,\"target\":\"cfg(unix)\"},{\"default_features\":true,\"features\":[\"os-poll\"],\"name\":\"mio\",\"optional\":true,\"target\":\"cfg(unix)\"},{\"default_features\":false,\"features\":[\"std\",\"stdio\",\"termios\"],\"name\":\"rustix\",\"optional\":false,\"target\":\"cfg(unix)\"},{\"default_features\":true,\"features\":[],\"name\":\"signal-hook\",\"optional\":true,\"target\":\"cfg(unix)\"},{\"default_features\":true,\"features\":[\"support-v1_0\"],\"name\":\"signal-hook-mio\",\"optional\":true,\"target\":\"cfg(unix)\"},{\"default_features\":true,\"features\":[],\"name\":\"crossterm_winapi\",\"optional\":true,\"target\":\"cfg(windows)\"},{\"default_features\":true,\"features\":[\"winuser\",\"winerror\"],\"name\":\"winapi\",\"optional\":true,\"target\":\"cfg(windows)\"}],\"features\":{\"bracketed-paste\":[],\"default\":[\"bracketed-paste\",\"windows\",\"events\"],\"event-stream\":[\"dep:futures-core\",\"events\"],\"events\":[\"dep:mio\",\"dep:signal-hook\",\"dep:signal-hook-mio\"],\"serde\":[\"dep:serde\",\"bitflags/serde\"],\"use-dev-tty\":[\"filedescriptor\",\"rustix/process\"],\"windows\":[\"dep:winapi\",\"dep:crossterm_winapi\"]},\"strip_prefix\":\"\"}",
"git+https://github.com/nornagon/ratatui?branch=nornagon-v0.29.0-patch#9b2ad1298408c45918ee9f8241a6f95498cdbed2_ratatui": "{\"dependencies\":[{\"name\":\"bitflags\"},{\"name\":\"cassowary\"},{\"name\":\"compact_str\"},{\"default_features\":true,\"features\":[],\"name\":\"crossterm\",\"optional\":true},{\"default_features\":true,\"features\":[],\"name\":\"document-features\",\"optional\":true},{\"name\":\"indoc\"},{\"name\":\"instability\"},{\"name\":\"itertools\"},{\"name\":\"lru\"},{\"default_features\":true,\"features\":[],\"name\":\"palette\",\"optional\":true},{\"name\":\"paste\"},{\"default_features\":true,\"features\":[\"derive\"],\"name\":\"serde\",\"optional\":true},{\"default_features\":true,\"features\":[\"derive\"],\"name\":\"strum\",\"optional\":false},{\"default_features\":true,\"features\":[],\"name\":\"termwiz\",\"optional\":true},{\"default_features\":true,\"features\":[\"local-offset\"],\"name\":\"time\",\"optional\":true},{\"name\":\"unicode-segmentation\"},{\"name\":\"unicode-truncate\"},{\"name\":\"unicode-width\"},{\"default_features\":true,\"features\":[],\"name\":\"termion\",\"optional\":true,\"target\":\"cfg(not(windows))\"}],\"features\":{\"all-widgets\":[\"widget-calendar\"],\"crossterm\":[\"dep:crossterm\"],\"default\":[\"crossterm\",\"underline-color\"],\"macros\":[],\"palette\":[\"dep:palette\"],\"scrolling-regions\":[],\"serde\":[\"dep:serde\",\"bitflags/serde\",\"compact_str/serde\"],\"termion\":[\"dep:termion\"],\"termwiz\":[\"dep:termwiz\"],\"underline-color\":[\"dep:crossterm\"],\"unstable\":[\"unstable-rendered-line-info\",\"unstable-widget-ref\",\"unstable-backend-writer\"],\"unstable-backend-writer\":[],\"unstable-rendered-line-info\":[],\"unstable-widget-ref\":[],\"widget-calendar\":[\"dep:time\"]},\"strip_prefix\":\"\"}",
"globset_0.4.16": "{\"dependencies\":[{\"name\":\"aho-corasick\",\"req\":\"^1.1.1\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"bstr\",\"req\":\"^1.6.2\"},{\"kind\":\"dev\",\"name\":\"glob\",\"req\":\"^0.3.1\"},{\"name\":\"log\",\"optional\":true,\"req\":\"^0.4.20\"},{\"default_features\":false,\"features\":[\"std\",\"perf\",\"syntax\",\"meta\",\"nfa\",\"hybrid\"],\"name\":\"regex-automata\",\"req\":\"^0.4.0\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"regex-syntax\",\"req\":\"^0.8.0\"},{\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.188\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0.107\"}],\"features\":{\"default\":[\"log\"],\"serde1\":[\"serde\"],\"simd-accel\":[]}}",
@@ -614,7 +834,6 @@
"jni_0.21.1": "{\"dependencies\":[{\"name\":\"cesu8\",\"req\":\"^1.1.0\"},{\"name\":\"cfg-if\",\"req\":\"^1.0.0\"},{\"name\":\"combine\",\"req\":\"^4.1.0\"},{\"name\":\"java-locator\",\"optional\":true,\"req\":\"^0.1\"},{\"name\":\"jni-sys\",\"req\":\"^0.3.0\"},{\"name\":\"libloading\",\"optional\":true,\"req\":\"^0.7\"},{\"name\":\"log\",\"req\":\"^0.4.4\"},{\"name\":\"thiserror\",\"req\":\"^1.0.20\"},{\"kind\":\"dev\",\"name\":\"assert_matches\",\"req\":\"^1.5.0\"},{\"kind\":\"dev\",\"name\":\"lazy_static\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"rusty-fork\",\"req\":\"^0.3.0\"},{\"kind\":\"build\",\"name\":\"walkdir\",\"req\":\"^2\"},{\"features\":[\"Win32_Globalization\"],\"name\":\"windows-sys\",\"req\":\"^0.45.0\",\"target\":\"cfg(windows)\"},{\"kind\":\"dev\",\"name\":\"bytemuck\",\"req\":\"^1.13.0\",\"target\":\"cfg(windows)\"}],\"features\":{\"default\":[],\"invocation\":[\"java-locator\",\"libloading\"]}}",
"jobserver_0.1.34": "{\"dependencies\":[{\"features\":[\"std\"],\"name\":\"getrandom\",\"req\":\"^0.3.2\",\"target\":\"cfg(windows)\"},{\"name\":\"libc\",\"req\":\"^0.2.171\",\"target\":\"cfg(unix)\"},{\"features\":[\"fs\"],\"kind\":\"dev\",\"name\":\"nix\",\"req\":\"^0.28.0\",\"target\":\"cfg(unix)\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.10.1\"}],\"features\":{}}",
"js-sys_0.3.77": "{\"dependencies\":[{\"default_features\":false,\"name\":\"once_cell\",\"req\":\"^1.12\"},{\"default_features\":false,\"name\":\"wasm-bindgen\",\"req\":\"=0.2.100\"}],\"features\":{\"default\":[\"std\"],\"std\":[\"wasm-bindgen/std\"]}}",
"kasuari_0.4.11": "{\"dependencies\":[{\"name\":\"document-features\",\"optional\":true,\"req\":\"^0.2\"},{\"name\":\"hashbrown\",\"req\":\"^0.16\"},{\"default_features\":false,\"features\":[\"require-cas\"],\"name\":\"portable-atomic\",\"optional\":true,\"req\":\"^1.11\"},{\"features\":[\"alloc\"],\"name\":\"portable-atomic-util\",\"optional\":true,\"req\":\"^0.2.4\"},{\"kind\":\"dev\",\"name\":\"rstest\",\"req\":\"^0.26\"},{\"default_features\":false,\"name\":\"thiserror\",\"req\":\"^2.0\"}],\"features\":{\"default\":[\"std\"],\"document-features\":[\"dep:document-features\"],\"portable-atomic\":[\"dep:portable-atomic\",\"dep:portable-atomic-util\"],\"std\":[\"thiserror/std\",\"portable-atomic?/std\"]}}",
"keyring_3.6.3": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"base64\",\"req\":\"^0.22\"},{\"name\":\"byteorder\",\"optional\":true,\"req\":\"^1.2\",\"target\":\"cfg(target_os = \\\"windows\\\")\"},{\"features\":[\"derive\",\"wrap_help\"],\"kind\":\"dev\",\"name\":\"clap\",\"req\":\"^4\"},{\"name\":\"dbus-secret-service\",\"optional\":true,\"req\":\"^4.0.0-rc.1\",\"target\":\"cfg(target_os = \\\"openbsd\\\")\"},{\"name\":\"dbus-secret-service\",\"optional\":true,\"req\":\"^4.0.0-rc.2\",\"target\":\"cfg(target_os = \\\"linux\\\")\"},{\"name\":\"dbus-secret-service\",\"optional\":true,\"req\":\"^4.0.1\",\"target\":\"cfg(target_os = \\\"freebsd\\\")\"},{\"kind\":\"dev\",\"name\":\"doc-comment\",\"req\":\"^0.3\"},{\"kind\":\"dev\",\"name\":\"env_logger\",\"req\":\"^0.11.5\"},{\"kind\":\"dev\",\"name\":\"fastrand\",\"req\":\"^2\"},{\"features\":[\"std\"],\"name\":\"linux-keyutils\",\"optional\":true,\"req\":\"^0.2\",\"target\":\"cfg(target_os = \\\"linux\\\")\"},{\"name\":\"log\",\"req\":\"^0.4.22\"},{\"name\":\"openssl\",\"optional\":true,\"req\":\"^0.10.66\"},{\"kind\":\"dev\",\"name\":\"rpassword\",\"req\":\"^7\"},{\"kind\":\"dev\",\"name\":\"rprompt\",\"req\":\"^2\"},{\"name\":\"secret-service\",\"optional\":true,\"req\":\"^4\",\"target\":\"cfg(target_os = \\\"freebsd\\\")\"},{\"name\":\"secret-service\",\"optional\":true,\"req\":\"^4\",\"target\":\"cfg(target_os = \\\"linux\\\")\"},{\"name\":\"secret-service\",\"optional\":true,\"req\":\"^4\",\"target\":\"cfg(target_os = \\\"openbsd\\\")\"},{\"name\":\"security-framework\",\"optional\":true,\"req\":\"^2\",\"target\":\"cfg(target_os = \\\"ios\\\")\"},{\"name\":\"security-framework\",\"optional\":true,\"req\":\"^3\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"kind\":\"dev\",\"name\":\"whoami\",\"req\":\"^1.5\"},{\"features\":[\"Win32_Foundation\",\"Win32_Security_Credentials\"],\"name\":\"windows-sys\",\"optional\":true,\"req\":\"^0.60\",\"target\":\"cfg(target_os = \\\"windows\\\")\"},{\"name\":\"zbus\",\"optional\":true,\"req\":\"^4\",\"target\":\"cfg(target_os = \\\"freebsd\\\")\"},{\"name\":\"zbus\",\"optional\":true,\"req\":\"^4\",\"target\":\"cfg(target_os = \\\"linux\\\")\"},{\"name\":\"zbus\",\"optional\":true,\"req\":\"^4\",\"target\":\"cfg(target_os = \\\"openbsd\\\")\"},{\"name\":\"zeroize\",\"req\":\"^1.8.1\",\"target\":\"cfg(target_os = \\\"windows\\\")\"}],\"features\":{\"apple-native\":[\"dep:security-framework\"],\"async-io\":[\"zbus?/async-io\"],\"async-secret-service\":[\"dep:secret-service\",\"dep:zbus\"],\"crypto-openssl\":[\"dbus-secret-service?/crypto-openssl\",\"secret-service?/crypto-openssl\"],\"crypto-rust\":[\"dbus-secret-service?/crypto-rust\",\"secret-service?/crypto-rust\"],\"linux-native\":[\"dep:linux-keyutils\"],\"linux-native-async-persistent\":[\"linux-native\",\"async-secret-service\"],\"linux-native-sync-persistent\":[\"linux-native\",\"sync-secret-service\"],\"sync-secret-service\":[\"dep:dbus-secret-service\"],\"tokio\":[\"zbus?/tokio\"],\"vendored\":[\"dbus-secret-service?/vendored\",\"openssl?/vendored\"],\"windows-native\":[\"dep:windows-sys\",\"dep:byteorder\"]}}",
"kqueue-sys_1.0.4": "{\"dependencies\":[{\"name\":\"bitflags\",\"req\":\"^1.2.1\"},{\"name\":\"libc\",\"req\":\"^0.2.74\"}],\"features\":{}}",
"kqueue_1.1.1": "{\"dependencies\":[{\"features\":[\"html_reports\"],\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5\"},{\"kind\":\"dev\",\"name\":\"dhat\",\"req\":\"^0.3.2\"},{\"name\":\"kqueue-sys\",\"req\":\"^1.0.4\"},{\"name\":\"libc\",\"req\":\"^0.2.17\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.1.0\"}],\"features\":{}}",
@@ -630,10 +849,9 @@
"linux-raw-sys_0.4.15": "{\"dependencies\":[{\"name\":\"compiler_builtins\",\"optional\":true,\"req\":\"^0.1.49\"},{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"libc\",\"req\":\"^0.2.100\"},{\"kind\":\"dev\",\"name\":\"static_assertions\",\"req\":\"^1.1.0\"}],\"features\":{\"bootparam\":[],\"btrfs\":[],\"default\":[\"std\",\"general\",\"errno\"],\"elf\":[],\"elf_uapi\":[],\"errno\":[],\"general\":[],\"if_arp\":[],\"if_ether\":[],\"if_packet\":[],\"io_uring\":[],\"ioctl\":[],\"landlock\":[],\"loop_device\":[],\"mempolicy\":[],\"net\":[],\"netlink\":[],\"no_std\":[],\"prctl\":[],\"ptrace\":[],\"rustc-dep-of-std\":[\"core\",\"compiler_builtins\",\"no_std\"],\"std\":[],\"system\":[],\"xdp\":[]}}",
"linux-raw-sys_0.9.4": "{\"dependencies\":[{\"name\":\"compiler_builtins\",\"optional\":true,\"req\":\"^0.1.49\"},{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"libc\",\"req\":\"^0.2.100\"},{\"kind\":\"dev\",\"name\":\"static_assertions\",\"req\":\"^1.1.0\"}],\"features\":{\"bootparam\":[],\"btrfs\":[],\"default\":[\"std\",\"general\",\"errno\"],\"elf\":[],\"elf_uapi\":[],\"errno\":[],\"general\":[],\"if_arp\":[],\"if_ether\":[],\"if_packet\":[],\"image\":[],\"io_uring\":[],\"ioctl\":[],\"landlock\":[],\"loop_device\":[],\"mempolicy\":[],\"net\":[],\"netlink\":[],\"no_std\":[],\"prctl\":[],\"ptrace\":[],\"rustc-dep-of-std\":[\"core\",\"compiler_builtins\",\"no_std\"],\"std\":[],\"system\":[],\"xdp\":[]}}",
"litemap_0.8.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"bincode\",\"req\":\"^1.3.1\"},{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5.0\",\"target\":\"cfg(not(target_arch = \\\"wasm32\\\"))\"},{\"default_features\":false,\"name\":\"databake\",\"optional\":true,\"req\":\"^0.2.0\"},{\"default_features\":false,\"features\":[\"use-std\"],\"kind\":\"dev\",\"name\":\"postcard\",\"req\":\"^1.0.3\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.9\"},{\"features\":[\"validation\"],\"kind\":\"dev\",\"name\":\"rkyv\",\"req\":\"^0.7\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.110\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1.0.110\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0.45\"},{\"default_features\":false,\"features\":[\"derive\"],\"name\":\"yoke\",\"optional\":true,\"req\":\"^0.8.0\"}],\"features\":{\"alloc\":[],\"databake\":[\"dep:databake\"],\"default\":[\"alloc\"],\"serde\":[\"dep:serde\",\"alloc\"],\"testing\":[\"alloc\"],\"yoke\":[\"dep:yoke\"]}}",
"litrs_1.0.0": "{\"dependencies\":[{\"name\":\"proc-macro2\",\"optional\":true,\"req\":\"^1.0.63\"},{\"name\":\"unicode-xid\",\"optional\":true,\"req\":\"^0.2.4\"}],\"features\":{\"check_suffix\":[\"unicode-xid\"]}}",
"local-waker_0.1.4": "{\"dependencies\":[],\"features\":{}}",
"lock_api_0.4.13": "{\"dependencies\":[{\"kind\":\"build\",\"name\":\"autocfg\",\"req\":\"^1.1.0\"},{\"name\":\"owning_ref\",\"optional\":true,\"req\":\"^0.4.1\"},{\"default_features\":false,\"name\":\"scopeguard\",\"req\":\"^1.1.0\"},{\"default_features\":false,\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.126\"}],\"features\":{\"arc_lock\":[],\"atomic_usize\":[],\"default\":[\"atomic_usize\"],\"nightly\":[]}}",
"log_0.4.28": "{\"dependencies\":[{\"default_features\":false,\"kind\":\"dev\",\"name\":\"proc-macro2\",\"req\":\"^1.0.63\"},{\"default_features\":false,\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0\"},{\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"serde_test\",\"req\":\"^1.0\"},{\"default_features\":false,\"name\":\"sval\",\"optional\":true,\"req\":\"^2.14.1\"},{\"kind\":\"dev\",\"name\":\"sval\",\"req\":\"^2.1\"},{\"kind\":\"dev\",\"name\":\"sval_derive\",\"req\":\"^2.1\"},{\"default_features\":false,\"name\":\"sval_ref\",\"optional\":true,\"req\":\"^2.1\"},{\"default_features\":false,\"features\":[\"inline-i128\"],\"name\":\"value-bag\",\"optional\":true,\"req\":\"^1.7\"},{\"features\":[\"test\"],\"kind\":\"dev\",\"name\":\"value-bag\",\"req\":\"^1.7\"}],\"features\":{\"kv\":[],\"kv_serde\":[\"kv_std\",\"value-bag/serde\",\"serde\"],\"kv_std\":[\"std\",\"kv\",\"value-bag/error\"],\"kv_sval\":[\"kv\",\"value-bag/sval\",\"sval\",\"sval_ref\"],\"kv_unstable\":[\"kv\",\"value-bag\"],\"kv_unstable_serde\":[\"kv_serde\",\"kv_unstable_std\"],\"kv_unstable_std\":[\"kv_std\",\"kv_unstable\"],\"kv_unstable_sval\":[\"kv_sval\",\"kv_unstable\"],\"max_level_debug\":[],\"max_level_error\":[],\"max_level_info\":[],\"max_level_off\":[],\"max_level_trace\":[],\"max_level_warn\":[],\"release_max_level_debug\":[],\"release_max_level_error\":[],\"release_max_level_info\":[],\"release_max_level_off\":[],\"release_max_level_trace\":[],\"release_max_level_warn\":[],\"std\":[]}}",
"log_0.4.29": "{\"dependencies\":[{\"default_features\":false,\"kind\":\"dev\",\"name\":\"proc-macro2\",\"req\":\"^1.0.63\"},{\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1.0\"},{\"default_features\":false,\"name\":\"serde_core\",\"optional\":true,\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"serde_test\",\"req\":\"^1.0\"},{\"default_features\":false,\"name\":\"sval\",\"optional\":true,\"req\":\"^2.16\"},{\"kind\":\"dev\",\"name\":\"sval\",\"req\":\"^2.16\"},{\"kind\":\"dev\",\"name\":\"sval_derive\",\"req\":\"^2.16\"},{\"default_features\":false,\"name\":\"sval_ref\",\"optional\":true,\"req\":\"^2.16\"},{\"default_features\":false,\"features\":[\"inline-i128\"],\"name\":\"value-bag\",\"optional\":true,\"req\":\"^1.12\"},{\"features\":[\"test\"],\"kind\":\"dev\",\"name\":\"value-bag\",\"req\":\"^1.12\"}],\"features\":{\"kv\":[],\"kv_serde\":[\"kv_std\",\"value-bag/serde\",\"serde\"],\"kv_std\":[\"std\",\"kv\",\"value-bag/error\"],\"kv_sval\":[\"kv\",\"value-bag/sval\",\"sval\",\"sval_ref\"],\"kv_unstable\":[\"kv\",\"value-bag\"],\"kv_unstable_serde\":[\"kv_serde\",\"kv_unstable_std\"],\"kv_unstable_std\":[\"kv_std\",\"kv_unstable\"],\"kv_unstable_sval\":[\"kv_sval\",\"kv_unstable\"],\"max_level_debug\":[],\"max_level_error\":[],\"max_level_info\":[],\"max_level_off\":[],\"max_level_trace\":[],\"max_level_warn\":[],\"release_max_level_debug\":[],\"release_max_level_error\":[],\"release_max_level_info\":[],\"release_max_level_off\":[],\"release_max_level_trace\":[],\"release_max_level_warn\":[],\"serde\":[\"serde_core\"],\"std\":[]}}",
"logos-derive_0.12.1": "{\"dependencies\":[{\"name\":\"beef\",\"req\":\"^0.5.0\"},{\"name\":\"fnv\",\"req\":\"^1.0.6\"},{\"kind\":\"dev\",\"name\":\"pretty_assertions\",\"req\":\"^0.6.1\"},{\"name\":\"proc-macro2\",\"req\":\"^1.0.9\"},{\"name\":\"quote\",\"req\":\"^1.0.3\"},{\"name\":\"regex-syntax\",\"req\":\"^0.6\"},{\"features\":[\"full\"],\"name\":\"syn\",\"req\":\"^1.0.17\"}],\"features\":{}}",
"logos_0.12.1": "{\"dependencies\":[{\"name\":\"logos-derive\",\"optional\":true,\"req\":\"^0.12.1\"}],\"features\":{\"default\":[\"export_derive\",\"std\"],\"export_derive\":[\"logos-derive\"],\"std\":[]}}",
"lru-slab_0.1.2": "{\"dependencies\":[],\"features\":{}}",
@@ -761,7 +979,6 @@
"rand_core_0.6.4": "{\"dependencies\":[{\"name\":\"getrandom\",\"optional\":true,\"req\":\"^0.2\"},{\"features\":[\"derive\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1\"}],\"features\":{\"alloc\":[],\"serde1\":[\"serde\"],\"std\":[\"alloc\",\"getrandom\",\"getrandom/std\"]}}",
"rand_core_0.9.3": "{\"dependencies\":[{\"name\":\"getrandom\",\"optional\":true,\"req\":\"^0.3.0\"},{\"features\":[\"derive\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1\"}],\"features\":{\"os_rng\":[\"dep:getrandom\"],\"serde\":[\"dep:serde\"],\"std\":[\"getrandom?/std\"]}}",
"rand_xorshift_0.4.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"bincode\",\"req\":\"^1\"},{\"name\":\"rand_core\",\"req\":\"^0.9.0\"},{\"default_features\":false,\"features\":[\"derive\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.118\"}],\"features\":{\"serde\":[\"dep:serde\"]}}",
"ratatui-core_0.1.0": "{\"dependencies\":[{\"name\":\"anstyle\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"bitflags\",\"req\":\"^2.10\"},{\"default_features\":false,\"name\":\"compact_str\",\"req\":\"^0.9\"},{\"name\":\"document-features\",\"optional\":true,\"req\":\"^0.2\"},{\"name\":\"hashbrown\",\"req\":\"^0.16\"},{\"name\":\"indoc\",\"req\":\"^2\"},{\"default_features\":false,\"features\":[\"use_alloc\"],\"name\":\"itertools\",\"req\":\"^0.14\"},{\"default_features\":false,\"name\":\"kasuari\",\"req\":\"^0.4\"},{\"name\":\"lru\",\"req\":\"^0.16\"},{\"name\":\"palette\",\"optional\":true,\"req\":\"^0.7\"},{\"kind\":\"dev\",\"name\":\"pretty_assertions\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"rstest\",\"req\":\"^0.26\"},{\"features\":[\"derive\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"derive\"],\"name\":\"strum\",\"req\":\"^0.27\"},{\"default_features\":false,\"name\":\"thiserror\",\"req\":\"^2\"},{\"name\":\"unicode-segmentation\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"unicode-truncate\",\"req\":\"^2\"},{\"name\":\"unicode-width\",\"req\":\">=0.2.0, <=0.2.2\"}],\"features\":{\"anstyle\":[\"dep:anstyle\"],\"default\":[],\"layout-cache\":[\"std\"],\"palette\":[\"std\",\"dep:palette\"],\"portable-atomic\":[\"kasuari/portable-atomic\"],\"scrolling-regions\":[],\"serde\":[\"std\",\"dep:serde\",\"bitflags/serde\",\"compact_str/serde\"],\"std\":[\"itertools/use_std\",\"thiserror/std\",\"kasuari/std\",\"compact_str/std\",\"unicode-truncate/std\",\"strum/std\"],\"underline-color\":[]}}",
"ratatui-macros_0.6.0": "{\"dependencies\":[{\"features\":[\"user-hooks\"],\"kind\":\"dev\",\"name\":\"cargo-husky\",\"req\":\"^1.5.0\"},{\"name\":\"ratatui\",\"req\":\"^0.29.0\"},{\"features\":[\"diff\"],\"kind\":\"dev\",\"name\":\"trybuild\",\"req\":\"^1.0.101\"}],\"features\":{}}",
"redox_syscall_0.5.15": "{\"dependencies\":[{\"name\":\"bitflags\",\"req\":\"^2.4\"},{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"loom\",\"req\":\"^0.7\",\"target\":\"cfg(loom)\"}],\"features\":{\"default\":[\"userspace\"],\"rustc-dep-of-std\":[\"core\",\"bitflags/rustc-dep-of-std\"],\"std\":[],\"userspace\":[]}}",
"redox_users_0.4.6": "{\"dependencies\":[{\"features\":[\"std\"],\"name\":\"getrandom\",\"req\":\"^0.2\"},{\"default_features\":false,\"features\":[\"std\",\"call\"],\"name\":\"libredox\",\"req\":\"^0.1.3\"},{\"name\":\"rust-argon2\",\"optional\":true,\"req\":\"^0.8\"},{\"name\":\"thiserror\",\"req\":\"^1.0\"},{\"features\":[\"zeroize_derive\"],\"name\":\"zeroize\",\"optional\":true,\"req\":\"^1.4\"}],\"features\":{\"auth\":[\"rust-argon2\",\"zeroize\"],\"default\":[\"auth\"]}}",
@@ -907,9 +1124,8 @@
"tokio-rustls_0.26.2": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"argh\",\"req\":\"^0.1.1\"},{\"kind\":\"dev\",\"name\":\"futures-util\",\"req\":\"^0.3.1\"},{\"kind\":\"dev\",\"name\":\"lazy_static\",\"req\":\"^1.1\"},{\"features\":[\"pem\"],\"kind\":\"dev\",\"name\":\"rcgen\",\"req\":\"^0.13\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"rustls\",\"req\":\"^0.23.22\"},{\"name\":\"tokio\",\"req\":\"^1.0\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"webpki-roots\",\"req\":\"^0.26\"}],\"features\":{\"aws-lc-rs\":[\"aws_lc_rs\"],\"aws_lc_rs\":[\"rustls/aws_lc_rs\"],\"default\":[\"logging\",\"tls12\",\"aws_lc_rs\"],\"early-data\":[],\"fips\":[\"rustls/fips\"],\"logging\":[\"rustls/logging\"],\"ring\":[\"rustls/ring\"],\"tls12\":[\"rustls/tls12\"]}}",
"tokio-stream_0.1.18": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-stream\",\"req\":\"^0.3\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3\"},{\"name\":\"futures-core\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"parking_lot\",\"req\":\"^0.12.0\"},{\"name\":\"pin-project-lite\",\"req\":\"^0.2.11\"},{\"features\":[\"sync\"],\"name\":\"tokio\",\"req\":\"^1.15.0\"},{\"features\":[\"full\",\"test-util\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.2.0\"},{\"kind\":\"dev\",\"name\":\"tokio-test\",\"req\":\"^0.4\"},{\"name\":\"tokio-util\",\"optional\":true,\"req\":\"^0.7.0\"}],\"features\":{\"default\":[\"time\"],\"fs\":[\"tokio/fs\"],\"full\":[\"time\",\"net\",\"io-util\",\"fs\",\"sync\",\"signal\"],\"io-util\":[\"tokio/io-util\"],\"net\":[\"tokio/net\"],\"signal\":[\"tokio/signal\"],\"sync\":[\"tokio/sync\",\"tokio-util\"],\"time\":[\"tokio/time\"]}}",
"tokio-test_0.4.4": "{\"dependencies\":[{\"name\":\"async-stream\",\"req\":\"^0.3.3\"},{\"name\":\"bytes\",\"req\":\"^1.0.0\"},{\"name\":\"futures-core\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"futures-util\",\"req\":\"^0.3.0\"},{\"features\":[\"rt\",\"sync\",\"time\",\"test-util\"],\"name\":\"tokio\",\"req\":\"^1.2.0\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.2.0\"},{\"name\":\"tokio-stream\",\"req\":\"^0.1.1\"}],\"features\":{}}",
"tokio-tungstenite_0.21.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"env_logger\",\"req\":\"^0.10.0\"},{\"kind\":\"dev\",\"name\":\"futures-channel\",\"req\":\"^0.3.28\"},{\"default_features\":false,\"features\":[\"sink\",\"std\"],\"name\":\"futures-util\",\"req\":\"^0.3.28\"},{\"default_features\":false,\"features\":[\"http1\",\"server\",\"tcp\"],\"kind\":\"dev\",\"name\":\"hyper\",\"req\":\"^0.14.25\"},{\"name\":\"log\",\"req\":\"^0.4.17\"},{\"name\":\"native-tls-crate\",\"optional\":true,\"package\":\"native-tls\",\"req\":\"^0.2.11\"},{\"name\":\"rustls\",\"optional\":true,\"req\":\"^0.22.0\"},{\"name\":\"rustls-native-certs\",\"optional\":true,\"req\":\"^0.7.0\"},{\"name\":\"rustls-pki-types\",\"optional\":true,\"req\":\"^1.0\"},{\"default_features\":false,\"features\":[\"io-util\"],\"name\":\"tokio\",\"req\":\"^1.0.0\"},{\"default_features\":false,\"features\":[\"io-std\",\"macros\",\"net\",\"rt-multi-thread\",\"time\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.27.0\"},{\"name\":\"tokio-native-tls\",\"optional\":true,\"req\":\"^0.3.1\"},{\"name\":\"tokio-rustls\",\"optional\":true,\"req\":\"^0.25.0\"},{\"default_features\":false,\"name\":\"tungstenite\",\"req\":\"^0.21.0\"},{\"kind\":\"dev\",\"name\":\"url\",\"req\":\"^2.3.1\"},{\"name\":\"webpki-roots\",\"optional\":true,\"req\":\"^0.26.0\"}],\"features\":{\"__rustls-tls\":[\"rustls\",\"rustls-pki-types\",\"tokio-rustls\",\"stream\",\"tungstenite/__rustls-tls\",\"handshake\"],\"connect\":[\"stream\",\"tokio/net\",\"handshake\"],\"default\":[\"connect\",\"handshake\"],\"handshake\":[\"tungstenite/handshake\"],\"native-tls\":[\"native-tls-crate\",\"tokio-native-tls\",\"stream\",\"tungstenite/native-tls\",\"handshake\"],\"native-tls-vendored\":[\"native-tls\",\"native-tls-crate/vendored\",\"tungstenite/native-tls-vendored\"],\"rustls-tls-native-roots\":[\"__rustls-tls\",\"rustls-native-certs\"],\"rustls-tls-webpki-roots\":[\"__rustls-tls\",\"webpki-roots\"],\"stream\":[]}}",
"tokio-util_0.7.18": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-stream\",\"req\":\"^0.3.0\"},{\"name\":\"bytes\",\"req\":\"^1.5.0\"},{\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3.0\"},{\"name\":\"futures-core\",\"req\":\"^0.3.0\"},{\"name\":\"futures-io\",\"optional\":true,\"req\":\"^0.3.0\"},{\"name\":\"futures-sink\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"futures-test\",\"req\":\"^0.3.5\"},{\"name\":\"futures-util\",\"optional\":true,\"req\":\"^0.3.0\"},{\"default_features\":false,\"name\":\"hashbrown\",\"optional\":true,\"req\":\"^0.15.0\"},{\"features\":[\"futures\",\"checkpoint\"],\"kind\":\"dev\",\"name\":\"loom\",\"req\":\"^0.7\",\"target\":\"cfg(loom)\"},{\"kind\":\"dev\",\"name\":\"parking_lot\",\"req\":\"^0.12.0\"},{\"name\":\"pin-project-lite\",\"req\":\"^0.2.11\"},{\"name\":\"slab\",\"optional\":true,\"req\":\"^0.4.4\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.1.0\"},{\"features\":[\"sync\"],\"name\":\"tokio\",\"req\":\"^1.44.0\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"tokio-stream\",\"req\":\"^0.1\"},{\"kind\":\"dev\",\"name\":\"tokio-test\",\"req\":\"^0.4.0\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"tracing\",\"optional\":true,\"req\":\"^0.1.29\"}],\"features\":{\"__docs_rs\":[\"futures-util\"],\"codec\":[],\"compat\":[\"futures-io\"],\"default\":[],\"full\":[\"codec\",\"compat\",\"io-util\",\"time\",\"net\",\"rt\",\"join-map\"],\"io\":[],\"io-util\":[\"io\",\"tokio/rt\",\"tokio/io-util\"],\"join-map\":[\"rt\",\"hashbrown\"],\"net\":[\"tokio/net\"],\"rt\":[\"tokio/rt\",\"tokio/sync\",\"futures-util\"],\"time\":[\"tokio/time\",\"slab\"]}}",
"tokio_1.48.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-stream\",\"req\":\"^0.3\"},{\"name\":\"backtrace\",\"optional\":true,\"req\":\"^0.3.58\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1.2.1\"},{\"features\":[\"async-await\"],\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"futures-concurrency\",\"req\":\"^7.6.3\"},{\"default_features\":false,\"name\":\"io-uring\",\"optional\":true,\"req\":\"^0.7.6\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"libc\",\"optional\":true,\"req\":\"^0.2.168\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"libc\",\"optional\":true,\"req\":\"^0.2.168\",\"target\":\"cfg(unix)\"},{\"kind\":\"dev\",\"name\":\"libc\",\"req\":\"^0.2.168\",\"target\":\"cfg(unix)\"},{\"features\":[\"futures\",\"checkpoint\"],\"kind\":\"dev\",\"name\":\"loom\",\"req\":\"^0.7\",\"target\":\"cfg(loom)\"},{\"default_features\":false,\"name\":\"mio\",\"optional\":true,\"req\":\"^1.0.1\"},{\"default_features\":false,\"features\":[\"os-poll\",\"os-ext\"],\"name\":\"mio\",\"optional\":true,\"req\":\"^1.0.1\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"features\":[\"tokio\"],\"kind\":\"dev\",\"name\":\"mio-aio\",\"req\":\"^1\",\"target\":\"cfg(target_os = \\\"freebsd\\\")\"},{\"kind\":\"dev\",\"name\":\"mockall\",\"req\":\"^0.13.0\"},{\"default_features\":false,\"features\":[\"aio\",\"fs\",\"socket\"],\"kind\":\"dev\",\"name\":\"nix\",\"req\":\"^0.29.0\",\"target\":\"cfg(unix)\"},{\"name\":\"parking_lot\",\"optional\":true,\"req\":\"^0.12.0\"},{\"name\":\"pin-project-lite\",\"req\":\"^0.2.11\"},{\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.9\",\"target\":\"cfg(not(all(target_family = \\\"wasm\\\", target_os = \\\"unknown\\\")))\"},{\"name\":\"signal-hook-registry\",\"optional\":true,\"req\":\"^1.1.1\",\"target\":\"cfg(unix)\"},{\"name\":\"slab\",\"optional\":true,\"req\":\"^0.4.9\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"features\":[\"all\"],\"name\":\"socket2\",\"optional\":true,\"req\":\"^0.6.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"socket2\",\"req\":\"^0.6.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.1.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"name\":\"tokio-macros\",\"optional\":true,\"req\":\"~2.6.0\"},{\"kind\":\"dev\",\"name\":\"tokio-stream\",\"req\":\"^0.1\"},{\"kind\":\"dev\",\"name\":\"tokio-test\",\"req\":\"^0.4.0\"},{\"features\":[\"rt\"],\"kind\":\"dev\",\"name\":\"tokio-util\",\"req\":\"^0.7\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"tracing\",\"optional\":true,\"req\":\"^0.1.29\",\"target\":\"cfg(tokio_unstable)\"},{\"kind\":\"dev\",\"name\":\"tracing-mock\",\"req\":\"=0.1.0-beta.1\",\"target\":\"cfg(all(tokio_unstable, target_has_atomic = \\\"64\\\"))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3.0\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", not(target_os = 
\\\"wasi\\\")))\"},{\"name\":\"windows-sys\",\"optional\":true,\"req\":\"^0.61\",\"target\":\"cfg(windows)\"},{\"features\":[\"Win32_Foundation\",\"Win32_Security_Authorization\"],\"kind\":\"dev\",\"name\":\"windows-sys\",\"req\":\"^0.61\",\"target\":\"cfg(windows)\"}],\"features\":{\"default\":[],\"fs\":[],\"full\":[\"fs\",\"io-util\",\"io-std\",\"macros\",\"net\",\"parking_lot\",\"process\",\"rt\",\"rt-multi-thread\",\"signal\",\"sync\",\"time\"],\"io-std\":[],\"io-uring\":[\"dep:io-uring\",\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"dep:slab\"],\"io-util\":[\"bytes\"],\"macros\":[\"tokio-macros\"],\"net\":[\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"mio/net\",\"socket2\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_Security\",\"windows-sys/Win32_Storage_FileSystem\",\"windows-sys/Win32_System_Pipes\",\"windows-sys/Win32_System_SystemServices\"],\"process\":[\"bytes\",\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"mio/net\",\"signal-hook-registry\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_System_Threading\",\"windows-sys/Win32_System_WindowsProgramming\"],\"rt\":[],\"rt-multi-thread\":[\"rt\"],\"signal\":[\"libc\",\"mio/os-poll\",\"mio/net\",\"mio/os-ext\",\"signal-hook-registry\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_System_Console\"],\"sync\":[],\"taskdump\":[\"dep:backtrace\"],\"test-util\":[\"rt\",\"sync\",\"time\"],\"time\":[]}}",
"tokio_1.49.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-stream\",\"req\":\"^0.3\"},{\"name\":\"backtrace\",\"optional\":true,\"req\":\"^0.3.58\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1.2.1\"},{\"features\":[\"async-await\"],\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"futures-concurrency\",\"req\":\"^7.6.3\"},{\"kind\":\"dev\",\"name\":\"futures-test\",\"req\":\"^0.3.31\"},{\"default_features\":false,\"name\":\"io-uring\",\"optional\":true,\"req\":\"^0.7.6\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"libc\",\"optional\":true,\"req\":\"^0.2.168\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"libc\",\"optional\":true,\"req\":\"^0.2.168\",\"target\":\"cfg(unix)\"},{\"kind\":\"dev\",\"name\":\"libc\",\"req\":\"^0.2.168\",\"target\":\"cfg(unix)\"},{\"features\":[\"futures\",\"checkpoint\"],\"kind\":\"dev\",\"name\":\"loom\",\"req\":\"^0.7\",\"target\":\"cfg(loom)\"},{\"default_features\":false,\"name\":\"mio\",\"optional\":true,\"req\":\"^1.0.1\"},{\"default_features\":false,\"features\":[\"os-poll\",\"os-ext\"],\"name\":\"mio\",\"optional\":true,\"req\":\"^1.0.1\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"features\":[\"tokio\"],\"kind\":\"dev\",\"name\":\"mio-aio\",\"req\":\"^1\",\"target\":\"cfg(target_os = \\\"freebsd\\\")\"},{\"kind\":\"dev\",\"name\":\"mockall\",\"req\":\"^0.13.0\"},{\"default_features\":false,\"features\":[\"aio\",\"fs\",\"socket\"],\"kind\":\"dev\",\"name\":\"nix\",\"req\":\"^0.29.0\",\"target\":\"cfg(unix)\"},{\"name\":\"parking_lot\",\"optional\":true,\"req\":\"^0.12.0\"},{\"name\":\"pin-project-lite\",\"req\":\"^0.2.11\"},{\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.9\",\"target\":\"cfg(not(all(target_family = \\\"wasm\\\", target_os = \\\"unknown\\\")))\"},{\"name\":\"signal-hook-registry\",\"optional\":true,\"req\":\"^1.1.1\",\"target\":\"cfg(unix)\"},{\"name\":\"slab\",\"optional\":true,\"req\":\"^0.4.9\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"features\":[\"all\"],\"name\":\"socket2\",\"optional\":true,\"req\":\"^0.6.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"socket2\",\"req\":\"^0.6.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.1.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"name\":\"tokio-macros\",\"optional\":true,\"req\":\"~2.6.0\"},{\"kind\":\"dev\",\"name\":\"tokio-stream\",\"req\":\"^0.1\"},{\"kind\":\"dev\",\"name\":\"tokio-test\",\"req\":\"^0.4.0\"},{\"features\":[\"rt\"],\"kind\":\"dev\",\"name\":\"tokio-util\",\"req\":\"^0.7\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"tracing\",\"optional\":true,\"req\":\"^0.1.29\",\"target\":\"cfg(tokio_unstable)\"},{\"kind\":\"dev\",\"name\":\"tracing-mock\",\"req\":\"=0.1.0-beta.1\",\"target\":\"cfg(all(tokio_unstable, target_has_atomic = \\\"64\\\"))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3.0\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", not(target_os = 
\\\"wasi\\\")))\"},{\"name\":\"windows-sys\",\"optional\":true,\"req\":\"^0.61\",\"target\":\"cfg(windows)\"},{\"features\":[\"Win32_Foundation\",\"Win32_Security_Authorization\"],\"kind\":\"dev\",\"name\":\"windows-sys\",\"req\":\"^0.61\",\"target\":\"cfg(windows)\"}],\"features\":{\"default\":[],\"fs\":[],\"full\":[\"fs\",\"io-util\",\"io-std\",\"macros\",\"net\",\"parking_lot\",\"process\",\"rt\",\"rt-multi-thread\",\"signal\",\"sync\",\"time\"],\"io-std\":[],\"io-uring\":[\"dep:io-uring\",\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"dep:slab\"],\"io-util\":[\"bytes\"],\"macros\":[\"tokio-macros\"],\"net\":[\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"mio/net\",\"socket2\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_Security\",\"windows-sys/Win32_Storage_FileSystem\",\"windows-sys/Win32_System_Pipes\",\"windows-sys/Win32_System_SystemServices\"],\"process\":[\"bytes\",\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"mio/net\",\"signal-hook-registry\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_System_Threading\",\"windows-sys/Win32_System_WindowsProgramming\"],\"rt\":[],\"rt-multi-thread\":[\"rt\"],\"signal\":[\"libc\",\"mio/os-poll\",\"mio/net\",\"mio/os-ext\",\"signal-hook-registry\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_System_Console\"],\"sync\":[],\"taskdump\":[\"dep:backtrace\"],\"test-util\":[\"rt\",\"sync\",\"time\"],\"time\":[]}}",
"toml_0.5.11": "{\"dependencies\":[{\"name\":\"indexmap\",\"optional\":true,\"req\":\"^1.0\"},{\"name\":\"serde\",\"req\":\"^1.0.97\"},{\"kind\":\"dev\",\"name\":\"serde_derive\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0\"}],\"features\":{\"default\":[],\"preserve_order\":[\"indexmap\"]}}",
"toml_0.9.5": "{\"dependencies\":[{\"name\":\"anstream\",\"optional\":true,\"req\":\"^0.6.15\"},{\"name\":\"anstyle\",\"optional\":true,\"req\":\"^1.0.8\"},{\"default_features\":false,\"name\":\"foldhash\",\"optional\":true,\"req\":\"^0.1.5\"},{\"default_features\":false,\"name\":\"indexmap\",\"optional\":true,\"req\":\"^2.3.0\"},{\"kind\":\"dev\",\"name\":\"itertools\",\"req\":\"^0.14.0\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"serde\",\"optional\":true,\"req\":\"^1.0.145\"},{\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1.0.199\"},{\"kind\":\"dev\",\"name\":\"serde-untagged\",\"req\":\"^0.1.7\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0.116\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"serde_spanned\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"snapbox\",\"req\":\"^0.6.0\"},{\"kind\":\"dev\",\"name\":\"toml-test-data\",\"req\":\"^2.3.0\"},{\"features\":[\"snapshot\"],\"kind\":\"dev\",\"name\":\"toml-test-harness\",\"req\":\"^1.3.2\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"toml_datetime\",\"req\":\"^0.7.0\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"toml_parser\",\"optional\":true,\"req\":\"^1.0.2\"},{\"default_features\":false,\"features\":[\"alloc\"],\"name\":\"toml_writer\",\"optional\":true,\"req\":\"^1.0.2\"},{\"kind\":\"dev\",\"name\":\"walkdir\",\"req\":\"^2.5.0\"},{\"default_features\":false,\"name\":\"winnow\",\"optional\":true,\"req\":\"^0.7.10\"}],\"features\":{\"debug\":[\"std\",\"toml_parser?/debug\",\"dep:anstream\",\"dep:anstyle\"],\"default\":[\"std\",\"serde\",\"parse\",\"display\"],\"display\":[\"dep:toml_writer\"],\"fast_hash\":[\"preserve_order\",\"dep:foldhash\"],\"parse\":[\"dep:toml_parser\",\"dep:winnow\"],\"preserve_order\":[\"dep:indexmap\",\"std\"],\"serde\":[\"dep:serde\",\"toml_datetime/serde\",\"serde_spanned/serde\"],\"std\":[\"indexmap?/std\",\"serde?/std\",\"toml_parser?/std\",\"toml_writer?/std\",\"toml_datetime/std\",\"serde_spanned/std\"],\"unbounded\":[]}}",
"toml_datetime_0.7.5+spec-1.1.0": "{\"dependencies\":[{\"default_features\":false,\"name\":\"serde_core\",\"optional\":true,\"req\":\"^1.0.225\"},{\"kind\":\"dev\",\"name\":\"snapbox\",\"req\":\"^0.6.21\"}],\"features\":{\"alloc\":[\"serde_core?/alloc\"],\"default\":[\"std\"],\"serde\":[\"dep:serde_core\"],\"std\":[\"alloc\",\"serde_core?/std\"]}}",
@@ -941,8 +1157,6 @@
"try-lock_0.2.5": "{\"dependencies\":[],\"features\":{}}",
"ts-rs-macros_11.1.0": "{\"dependencies\":[{\"name\":\"proc-macro2\",\"req\":\"^1\"},{\"name\":\"quote\",\"req\":\"^1\"},{\"features\":[\"full\",\"extra-traits\"],\"name\":\"syn\",\"req\":\"^2.0.28\"},{\"name\":\"termcolor\",\"optional\":true,\"req\":\"^1\"}],\"features\":{\"no-serde-warnings\":[],\"serde-compat\":[\"termcolor\"]}}",
"ts-rs_11.1.0": "{\"dependencies\":[{\"features\":[\"serde\"],\"name\":\"bigdecimal\",\"optional\":true,\"req\":\">=0.0.13, <0.5\"},{\"name\":\"bson\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"chrono\",\"optional\":true,\"req\":\"^0.4\"},{\"features\":[\"serde\"],\"kind\":\"dev\",\"name\":\"chrono\",\"req\":\"^0.4\"},{\"name\":\"dprint-plugin-typescript\",\"optional\":true,\"req\":\"=0.95\"},{\"name\":\"heapless\",\"optional\":true,\"req\":\">=0.7, <0.9\"},{\"name\":\"indexmap\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"ordered-float\",\"optional\":true,\"req\":\">=3, <6\"},{\"name\":\"semver\",\"optional\":true,\"req\":\"^1\"},{\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1.0\"},{\"name\":\"serde_json\",\"optional\":true,\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\"},{\"name\":\"smol_str\",\"optional\":true,\"req\":\"^0.3\"},{\"name\":\"thiserror\",\"req\":\"^2\"},{\"features\":[\"sync\"],\"name\":\"tokio\",\"optional\":true,\"req\":\"^1\"},{\"features\":[\"sync\",\"rt\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.40\"},{\"name\":\"ts-rs-macros\",\"req\":\"=11.1.0\"},{\"name\":\"url\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"uuid\",\"optional\":true,\"req\":\"^1\"}],\"features\":{\"bigdecimal-impl\":[\"bigdecimal\"],\"bson-uuid-impl\":[\"bson\"],\"bytes-impl\":[\"bytes\"],\"chrono-impl\":[\"chrono\"],\"default\":[\"serde-compat\"],\"format\":[\"dprint-plugin-typescript\"],\"heapless-impl\":[\"heapless\"],\"import-esm\":[],\"indexmap-impl\":[\"indexmap\"],\"no-serde-warnings\":[\"ts-rs-macros/no-serde-warnings\"],\"ordered-float-impl\":[\"ordered-float\"],\"semver-impl\":[\"semver\"],\"serde-compat\":[\"ts-rs-macros/serde-compat\"],\"serde-json-impl\":[\"serde_json\"],\"smol_str-impl\":[\"smol_str\"],\"tokio-impl\":[\"tokio\"],\"url-impl\":[\"url\"],\"uuid-impl\":[\"uuid\"]}}",
"tui-scrollbar_0.2.2": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"color-eyre\",\"req\":\"^0.6\"},{\"name\":\"crossterm_0_28\",\"optional\":true,\"package\":\"crossterm\",\"req\":\"^0.28\"},{\"name\":\"crossterm_0_29\",\"optional\":true,\"package\":\"crossterm\",\"req\":\"^0.29\"},{\"name\":\"document-features\",\"req\":\"^0.2.11\"},{\"kind\":\"dev\",\"name\":\"ratatui\",\"req\":\"^0.30.0\"},{\"name\":\"ratatui-core\",\"req\":\"^0.1\"}],\"features\":{\"crossterm\":[\"crossterm_0_29\"],\"crossterm_0_28\":[\"dep:crossterm_0_28\"],\"crossterm_0_29\":[\"dep:crossterm_0_29\"],\"default\":[]}}",
"tungstenite_0.21.0": "{\"dependencies\":[{\"name\":\"byteorder\",\"req\":\"^1.3.2\"},{\"name\":\"bytes\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5.0\"},{\"name\":\"data-encoding\",\"optional\":true,\"req\":\"^2\"},{\"kind\":\"dev\",\"name\":\"env_logger\",\"req\":\"^0.10.0\"},{\"name\":\"http\",\"optional\":true,\"req\":\"^1.0\"},{\"name\":\"httparse\",\"optional\":true,\"req\":\"^1.3.4\"},{\"kind\":\"dev\",\"name\":\"input_buffer\",\"req\":\"^0.5.0\"},{\"name\":\"log\",\"req\":\"^0.4.8\"},{\"name\":\"native-tls-crate\",\"optional\":true,\"package\":\"native-tls\",\"req\":\"^0.2.3\"},{\"name\":\"rand\",\"req\":\"^0.8.0\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.8.4\"},{\"name\":\"rustls\",\"optional\":true,\"req\":\"^0.22.0\"},{\"name\":\"rustls-native-certs\",\"optional\":true,\"req\":\"^0.7.0\"},{\"name\":\"rustls-pki-types\",\"optional\":true,\"req\":\"^1.0\"},{\"name\":\"sha1\",\"optional\":true,\"req\":\"^0.10\"},{\"kind\":\"dev\",\"name\":\"socket2\",\"req\":\"^0.5.5\"},{\"name\":\"thiserror\",\"req\":\"^1.0.23\"},{\"name\":\"url\",\"optional\":true,\"req\":\"^2.1.0\"},{\"name\":\"utf-8\",\"req\":\"^0.7.5\"},{\"name\":\"webpki-roots\",\"optional\":true,\"req\":\"^0.26\"}],\"features\":{\"__rustls-tls\":[\"rustls\",\"rustls-pki-types\"],\"default\":[\"handshake\"],\"handshake\":[\"data-encoding\",\"http\",\"httparse\",\"sha1\",\"url\"],\"native-tls\":[\"native-tls-crate\"],\"native-tls-vendored\":[\"native-tls\",\"native-tls-crate/vendored\"],\"rustls-tls-native-roots\":[\"__rustls-tls\",\"rustls-native-certs\"],\"rustls-tls-webpki-roots\":[\"__rustls-tls\",\"webpki-roots\"]}}",
"typenum_1.18.0": "{\"dependencies\":[{\"default_features\":false,\"name\":\"scale-info\",\"optional\":true,\"req\":\"^1.0\"}],\"features\":{\"const-generics\":[],\"force_unix_path_separator\":[],\"i128\":[],\"no_std\":[],\"scale_info\":[\"scale-info/derive\"],\"strict\":[]}}",
"uds_windows_1.1.0": "{\"dependencies\":[{\"name\":\"memoffset\",\"req\":\"^0.9.0\"},{\"name\":\"tempfile\",\"req\":\"^3\",\"target\":\"cfg(windows)\"},{\"features\":[\"winsock2\",\"ws2def\",\"minwinbase\",\"ntdef\",\"processthreadsapi\",\"handleapi\",\"ws2tcpip\",\"winbase\"],\"name\":\"winapi\",\"req\":\"^0.3.9\",\"target\":\"cfg(windows)\"}],\"features\":{}}",
"uname_0.1.1": "{\"dependencies\":[{\"name\":\"libc\",\"req\":\"^0.2\"}],\"features\":{}}",
@@ -952,7 +1166,6 @@
"unicode-linebreak_0.1.5": "{\"dependencies\":[],\"features\":{}}",
"unicode-segmentation_1.12.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5\"},{\"kind\":\"dev\",\"name\":\"quickcheck\",\"req\":\"^0.7\"}],\"features\":{\"no_std\":[]}}",
"unicode-truncate_1.1.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5\"},{\"default_features\":false,\"name\":\"itertools\",\"req\":\"^0.13\"},{\"default_features\":false,\"name\":\"unicode-segmentation\",\"req\":\"^1\"},{\"name\":\"unicode-width\",\"req\":\"^0.1\"}],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"unicode-truncate_2.0.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5\"},{\"default_features\":false,\"name\":\"itertools\",\"req\":\"^0.13\"},{\"default_features\":false,\"name\":\"unicode-segmentation\",\"req\":\"^1\"},{\"name\":\"unicode-width\",\"req\":\"^0.2\"}],\"features\":{\"default\":[\"std\"],\"std\":[]}}",
"unicode-width_0.1.14": "{\"dependencies\":[{\"name\":\"compiler_builtins\",\"optional\":true,\"req\":\"^0.1\"},{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0\"},{\"name\":\"std\",\"optional\":true,\"package\":\"rustc-std-workspace-std\",\"req\":\"^1.0\"}],\"features\":{\"cjk\":[],\"default\":[\"cjk\"],\"no_std\":[],\"rustc-dep-of-std\":[\"std\",\"core\",\"compiler_builtins\"]}}",
"unicode-width_0.2.1": "{\"dependencies\":[{\"name\":\"core\",\"optional\":true,\"package\":\"rustc-std-workspace-core\",\"req\":\"^1.0\"},{\"name\":\"std\",\"optional\":true,\"package\":\"rustc-std-workspace-std\",\"req\":\"^1.0\"}],\"features\":{\"cjk\":[],\"default\":[\"cjk\"],\"no_std\":[],\"rustc-dep-of-std\":[\"std\",\"core\"]}}",
"unicode-xid_0.2.6": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.3\"}],\"features\":{\"bench\":[],\"default\":[],\"no_std\":[]}}",
@@ -993,6 +1206,7 @@
"web-time_1.1.0": "{\"dependencies\":[{\"default_features\":false,\"features\":[\"alloc\"],\"kind\":\"dev\",\"name\":\"futures-channel\",\"req\":\"^0.3\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", target_feature = \\\"atomics\\\"))\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"futures-util\",\"req\":\"^0.3\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", target_feature = \\\"atomics\\\"))\"},{\"features\":[\"js\"],\"kind\":\"dev\",\"name\":\"getrandom\",\"req\":\"^0.2\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"},{\"name\":\"js-sys\",\"req\":\"^0.3.20\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", target_os = \\\"unknown\\\"))\"},{\"features\":[\"macro\"],\"kind\":\"dev\",\"name\":\"pollster\",\"req\":\"^0.3\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.8\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"},{\"name\":\"serde\",\"optional\":true,\"req\":\"^1\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", target_os = \\\"unknown\\\"))\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"},{\"kind\":\"dev\",\"name\":\"static_assertions\",\"req\":\"^1\"},{\"default_features\":false,\"name\":\"wasm-bindgen\",\"req\":\"^0.2.70\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", target_os = \\\"unknown\\\"))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-futures\",\"req\":\"^0.4\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"},{\"features\":[\"WorkerGlobalScope\"],\"kind\":\"dev\",\"name\":\"web-sys\",\"req\":\"^0.3\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", target_feature = \\\"atomics\\\"))\"},{\"features\":[\"CssStyleDeclaration\",\"Document\",\"Element\",\"HtmlTableElement\",\"HtmlTableRowElement\",\"Performance\",\"Window\"],\"kind\":\"dev\",\"name\":\"web-sys\",\"req\":\"^0.3\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"}],\"features\":{\"serde\":[\"dep:serde\"]}}",
"webbrowser_1.0.6": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"actix-files\",\"req\":\"^0.6\"},{\"kind\":\"dev\",\"name\":\"actix-web\",\"req\":\"^4\"},{\"name\":\"core-foundation\",\"req\":\"^0.10\",\"target\":\"cfg(target_os = \\\"macos\\\")\"},{\"kind\":\"dev\",\"name\":\"crossbeam-channel\",\"req\":\"^0.5\"},{\"kind\":\"dev\",\"name\":\"env_logger\",\"req\":\"^0.9.0\"},{\"name\":\"jni\",\"req\":\"^0.21\",\"target\":\"cfg(target_os = \\\"android\\\")\"},{\"name\":\"log\",\"req\":\"^0.4\"},{\"name\":\"ndk-context\",\"req\":\"^0.1\",\"target\":\"cfg(target_os = \\\"android\\\")\"},{\"kind\":\"dev\",\"name\":\"ndk-glue\",\"req\":\">=0.3, <=0.7\",\"target\":\"cfg(target_os = \\\"android\\\")\"},{\"name\":\"objc2\",\"req\":\"^0.6\",\"target\":\"cfg(any(target_os = \\\"ios\\\", target_os = \\\"tvos\\\", target_os = \\\"visionos\\\"))\"},{\"default_features\":false,\"features\":[\"std\",\"NSDictionary\",\"NSString\",\"NSURL\"],\"name\":\"objc2-foundation\",\"req\":\"^0.3\",\"target\":\"cfg(any(target_os = \\\"ios\\\", target_os = \\\"tvos\\\", target_os = \\\"visionos\\\"))\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.8\"},{\"kind\":\"dev\",\"name\":\"serial_test\",\"req\":\"^0.10\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1\"},{\"name\":\"url\",\"req\":\"^2\"},{\"kind\":\"dev\",\"name\":\"urlencoding\",\"req\":\"^2.1\"},{\"features\":[\"Window\"],\"name\":\"web-sys\",\"req\":\"^0.3\",\"target\":\"cfg(target_family = \\\"wasm\\\")\"}],\"features\":{\"disable-wsl\":[],\"hardened\":[],\"wasm-console\":[\"web-sys/console\"]}}",
"webpki-root-certs_1.0.4": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"hex\",\"req\":\"^0.4.3\"},{\"kind\":\"dev\",\"name\":\"percent-encoding\",\"req\":\"^2.3\"},{\"default_features\":false,\"name\":\"pki-types\",\"package\":\"rustls-pki-types\",\"req\":\"^1.8\"},{\"kind\":\"dev\",\"name\":\"ring\",\"req\":\"^0.17.0\"},{\"features\":[\"macros\",\"rt-multi-thread\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1\"},{\"features\":[\"alloc\"],\"kind\":\"dev\",\"name\":\"webpki\",\"package\":\"rustls-webpki\",\"req\":\"^0.103\"},{\"kind\":\"dev\",\"name\":\"x509-parser\",\"req\":\"^0.17.0\"}],\"features\":{}}",
"webpki-roots_0.26.11": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"hex\",\"req\":\"^0.4.3\"},{\"name\":\"parent\",\"package\":\"webpki-roots\",\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"percent-encoding\",\"req\":\"^2.3\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"pki-types\",\"package\":\"rustls-pki-types\",\"req\":\"^1.8\"},{\"kind\":\"dev\",\"name\":\"rcgen\",\"req\":\"^0.13\"},{\"kind\":\"dev\",\"name\":\"ring\",\"req\":\"^0.17.0\"},{\"kind\":\"dev\",\"name\":\"rustls\",\"req\":\"^0.23\"},{\"features\":[\"macros\",\"rt-multi-thread\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1\"},{\"features\":[\"alloc\"],\"kind\":\"dev\",\"name\":\"webpki\",\"package\":\"rustls-webpki\",\"req\":\"^0.102\"},{\"kind\":\"dev\",\"name\":\"x509-parser\",\"req\":\"^0.17.0\"},{\"kind\":\"dev\",\"name\":\"yasna\",\"req\":\"^0.5.2\"}],\"features\":{}}",
"webpki-roots_1.0.2": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"hex\",\"req\":\"^0.4.3\"},{\"kind\":\"dev\",\"name\":\"percent-encoding\",\"req\":\"^2.3\"},{\"default_features\":false,\"name\":\"pki-types\",\"package\":\"rustls-pki-types\",\"req\":\"^1.8\"},{\"kind\":\"dev\",\"name\":\"rcgen\",\"req\":\"^0.14\"},{\"kind\":\"dev\",\"name\":\"ring\",\"req\":\"^0.17.0\"},{\"kind\":\"dev\",\"name\":\"rustls\",\"req\":\"^0.23\"},{\"features\":[\"macros\",\"rt-multi-thread\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1\"},{\"features\":[\"alloc\"],\"kind\":\"dev\",\"name\":\"webpki\",\"package\":\"rustls-webpki\",\"req\":\"^0.103\"},{\"kind\":\"dev\",\"name\":\"x509-parser\",\"req\":\"^0.17.0\"},{\"kind\":\"dev\",\"name\":\"yasna\",\"req\":\"^0.5.2\"}],\"features\":{}}",
"weezl_0.1.10": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.3.1\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"futures\",\"optional\":true,\"req\":\"^0.3.12\"},{\"default_features\":false,\"features\":[\"macros\",\"io-util\",\"net\",\"rt\",\"rt-multi-thread\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1\"},{\"default_features\":false,\"features\":[\"compat\"],\"kind\":\"dev\",\"name\":\"tokio-util\",\"req\":\"^0.6.2\"}],\"features\":{\"alloc\":[],\"async\":[\"futures\",\"std\"],\"default\":[\"std\"],\"std\":[\"alloc\"]}}",
"which_8.0.0": "{\"dependencies\":[{\"name\":\"env_home\",\"optional\":true,\"req\":\"^0.1.0\",\"target\":\"cfg(any(windows, unix, target_os = \\\"redox\\\"))\"},{\"name\":\"regex\",\"optional\":true,\"req\":\"^1.10.2\"},{\"default_features\":false,\"features\":[\"fs\",\"std\"],\"name\":\"rustix\",\"optional\":true,\"req\":\"^1.0.5\",\"target\":\"cfg(any(unix, target_os = \\\"wasi\\\", target_os = \\\"redox\\\"))\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.9.0\"},{\"default_features\":false,\"name\":\"tracing\",\"optional\":true,\"req\":\"^0.1.40\"},{\"features\":[\"kernel\"],\"name\":\"winsafe\",\"optional\":true,\"req\":\"^0.0.19\",\"target\":\"cfg(windows)\"}],\"features\":{\"default\":[\"real-sys\"],\"real-sys\":[\"dep:env_home\",\"dep:rustix\",\"dep:winsafe\"],\"regex\":[\"dep:regex\"],\"tracing\":[\"dep:tracing\"]}}",


@@ -1,4 +1,4 @@
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install --cask codex</code></p>
<p align="center"><code>curl -fsSL https://raw.githubusercontent.com/openai/codex/main/installer/install.sh | bash</code><br />or <code>brew install --cask codex</code><br />or <code>npm i -g @openai/codex</code></p>
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
<p align="center">
<img src="./.github/codex-cli-splash.png" alt="Codex CLI splash" width="80%" />
@@ -15,6 +15,13 @@ If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href="http
Install globally with your preferred package manager:
```shell
# Install using curl
curl -fsSL https://raw.githubusercontent.com/openai/codex/main/installer/install.sh | bash
```
If you want to install somewhere other than `~/.codex`, set `CODEX_HOME` first.
```shell
# Install using npm
npm install -g @openai/codex

codex-rs/Cargo.lock (generated, 1031 changed lines): file diff suppressed because it is too large.


@@ -27,6 +27,7 @@ members = [
"login",
"mcp-server",
"mcp-types",
"network-proxy",
"ollama",
"process-hardening",
"protocol",
@@ -35,7 +36,6 @@ members = [
"stdio-to-uds",
"otel",
"tui",
"tui2",
"utils/absolute-path",
"utils/cargo-bin",
"utils/git",
@@ -93,7 +93,6 @@ codex-responses-api-proxy = { path = "responses-api-proxy" }
codex-rmcp-client = { path = "rmcp-client" }
codex-stdio-to-uds = { path = "stdio-to-uds" }
codex-tui = { path = "tui" }
codex-tui2 = { path = "tui2" }
codex-utils-absolute-path = { path = "utils/absolute-path" }
codex-utils-cache = { path = "utils/cache" }
codex-utils-cargo-bin = { path = "utils/cargo-bin" }
@@ -138,6 +137,7 @@ env-flags = "0.1.1"
env_logger = "0.11.5"
eventsource-stream = "0.2.3"
futures = { version = "0.3", default-features = false }
globset = "0.4"
http = "1.3.1"
icu_decimal = "2.1"
icu_locale_core = "2.1"
@@ -178,17 +178,17 @@ pretty_assertions = "1.4.1"
pulldown-cmark = "0.10"
rand = "0.9"
ratatui = "0.29.0"
ratatui-core = "0.1.0"
ratatui-macros = "0.6.0"
regex = "1.12.2"
regex-lite = "0.1.8"
reqwest = { version = "0.12", features = ["rustls-tls-webpki-roots", "rustls-tls-native-roots"] }
reqwest = "0.12"
rmcp = { version = "0.12.0", default-features = false }
schemars = "0.8.22"
seccompiler = "0.5.0"
sentry = "0.46.0"
serde = "1"
serde_json = "1"
serde_path_to_error = "0.1.20"
serde_with = "3.16"
serde_yaml = "0.9"
serial_test = "3.2.0"
@@ -212,7 +212,7 @@ tiny_http = "0.12"
tokio = "1"
tokio-stream = "0.1.18"
tokio-test = "0.4"
tokio-tungstenite = { version = "0.28.0", features = ["rustls-tls-webpki-roots", "rustls-tls-native-roots", "proxy"] }
tokio-tungstenite = { version = "0.28.0", features = ["proxy", "rustls-tls-native-roots"] }
tokio-util = "0.7.18"
toml = "0.9.5"
toml_edit = "0.24.0"
@@ -225,7 +225,6 @@ tree-sitter-bash = "0.25"
zstd = "0.13"
tree-sitter-highlight = "0.25.10"
ts-rs = "11"
tui-scrollbar = "0.2.2"
uds_windows = "1.1.0"
unicode-segmentation = "1.12.0"
unicode-width = "0.2"


@@ -129,10 +129,18 @@ client_request_definitions! {
params: v2::ThreadLoadedListParams,
response: v2::ThreadLoadedListResponse,
},
ThreadRead => "thread/read" {
params: v2::ThreadReadParams,
response: v2::ThreadReadResponse,
},
SkillsList => "skills/list" {
params: v2::SkillsListParams,
response: v2::SkillsListResponse,
},
AppsList => "app/list" {
params: v2::AppsListParams,
response: v2::AppsListResponse,
},
SkillsConfigWrite => "skills/config/write" {
params: v2::SkillsConfigWriteParams,
response: v2::SkillsConfigWriteResponse,


@@ -5,7 +5,9 @@ use crate::protocol::common::AuthMode;
use codex_protocol::account::PlanType;
use codex_protocol::approvals::ExecPolicyAmendment as CoreExecPolicyAmendment;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode as CoreSandboxMode;
use codex_protocol::config_types::Verbosity;
@@ -29,6 +31,7 @@ use codex_protocol::protocol::SkillErrorInfo as CoreSkillErrorInfo;
use codex_protocol::protocol::SkillInterface as CoreSkillInterface;
use codex_protocol::protocol::SkillMetadata as CoreSkillMetadata;
use codex_protocol::protocol::SkillScope as CoreSkillScope;
use codex_protocol::protocol::SubAgentSource as CoreSubAgentSource;
use codex_protocol::protocol::TokenUsage as CoreTokenUsage;
use codex_protocol::protocol::TokenUsageInfo as CoreTokenUsageInfo;
use codex_protocol::user_input::ByteRange as CoreByteRange;
@@ -217,7 +220,6 @@ v2_enum_from_core!(
}
);
// TODO(mbolin): Support in-repo layer.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
@@ -393,6 +395,8 @@ pub struct ConfigLayer {
pub name: ConfigLayerSource,
pub version: String,
pub config: JsonValue,
#[serde(skip_serializing_if = "Option::is_none")]
pub disabled_reason: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
@@ -449,6 +453,10 @@ pub enum ConfigWriteErrorCode {
pub struct ConfigReadParams {
#[serde(default)]
pub include_layers: bool,
/// Optional working directory to resolve project config layers. If specified,
/// return the effective config as seen from that directory (i.e., including any
/// project layers between `cwd` and the project/repo root).
pub cwd: Option<String>,
}
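
With the new `cwd` field, the config-read request can return the effective config as seen from a working directory, so project `.codex` layers between that directory and the repo root are included. A minimal sketch of the wire shape this implies, using a local mirror struct rather than the real `codex-app-server-protocol` type and assuming the same `camelCase` renaming as the neighboring structs:

```rust
use serde::Serialize;

// Hypothetical mirror of ConfigReadParams, for illustration only.
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct ConfigReadParamsSketch {
    include_layers: bool,
    cwd: Option<String>,
}

fn main() {
    // Ask for the effective config as seen from a nested project directory.
    let params = ConfigReadParamsSketch {
        include_layers: true,
        cwd: Some("/Users/me/project/subcrate".to_string()),
    };
    // Prints: {"includeLayers":true,"cwd":"/Users/me/project/subcrate"}
    println!("{}", serde_json::to_string(&params).unwrap());
}
```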
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -694,6 +702,7 @@ pub enum SessionSource {
VsCode,
Exec,
AppServer,
SubAgent(CoreSubAgentSource),
#[serde(other)]
Unknown,
}
@@ -705,7 +714,7 @@ impl From<CoreSessionSource> for SessionSource {
CoreSessionSource::VSCode => SessionSource::VsCode,
CoreSessionSource::Exec => SessionSource::Exec,
CoreSessionSource::Mcp => SessionSource::AppServer,
CoreSessionSource::SubAgent(_) => SessionSource::Unknown,
CoreSessionSource::SubAgent(sub) => SessionSource::SubAgent(sub),
CoreSessionSource::Unknown => SessionSource::Unknown,
}
}
@@ -718,6 +727,7 @@ impl From<SessionSource> for CoreSessionSource {
SessionSource::VsCode => CoreSessionSource::VSCode,
SessionSource::Exec => CoreSessionSource::Exec,
SessionSource::AppServer => CoreSessionSource::Mcp,
SessionSource::SubAgent(sub) => CoreSessionSource::SubAgent(sub),
SessionSource::Unknown => CoreSessionSource::Unknown,
}
}
@@ -928,7 +938,7 @@ pub struct CollaborationModeListParams {}
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CollaborationModeListResponse {
pub data: Vec<CollaborationMode>,
pub data: Vec<CollaborationModeMask>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -962,6 +972,39 @@ pub struct ListMcpServerStatusResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppsListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppInfo {
pub id: String,
pub name: String,
pub description: Option<String>,
pub logo_url: Option<String>,
pub install_url: Option<String>,
#[serde(default)]
pub is_accessible: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppsListResponse {
pub data: Vec<AppInfo>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
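
Since the response is cursor-paginated, a client keeps calling until `nextCursor` comes back empty. A hedged sketch of that loop, with the JSON-RPC transport abstracted behind a caller-supplied closure (the `app/list` method name and the parameter/response field names come from the definitions above; everything else is illustrative):

```rust
use serde_json::{Value, json};

/// Collects every app by following `nextCursor` until it is exhausted.
/// `send` stands in for a JSON-RPC transport: it takes a method name and
/// params and returns the `result` object from the response.
fn list_all_apps(mut send: impl FnMut(&str, Value) -> Value) -> Vec<Value> {
    let mut apps = Vec::new();
    let mut cursor: Option<String> = None;
    loop {
        let result = send("app/list", json!({ "cursor": cursor, "limit": 50 }));
        if let Some(page) = result.get("data").and_then(Value::as_array) {
            apps.extend(page.iter().cloned());
        }
        // `nextCursor` is an opaque token; null or absent means the last page.
        match result.get("nextCursor").and_then(Value::as_str) {
            Some(next) => cursor = Some(next.to_string()),
            None => break,
        }
    }
    apps
}
```

In the integration tests later in this diff, `McpProcess::send_apps_list_request` plays the `send` role, serializing `AppsListParams` the same way.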
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1043,6 +1086,8 @@ pub struct ThreadStartParams {
pub config: Option<HashMap<String, JsonValue>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub personality: Option<Personality>,
pub ephemeral: Option<bool>,
/// If true, opt into emitting raw response items on the event stream.
///
/// This is for internal use only (e.g. Codex Cloud).
@@ -1097,6 +1142,7 @@ pub struct ThreadResumeParams {
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub personality: Option<Personality>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1202,6 +1248,9 @@ pub struct ThreadListParams {
/// Optional provider filter; when set, only sessions recorded under these
/// providers are returned. When present but empty, includes all providers.
pub model_providers: Option<Vec<String>>,
/// Optional archived filter; when set to true, only archived threads are returned.
/// If false or null, only non-archived threads are returned.
pub archived: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, JsonSchema, TS)]
@@ -1243,6 +1292,23 @@ pub struct ThreadLoadedListResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadReadParams {
pub thread_id: String,
/// When true, include turns and their items from rollout history.
#[serde(default)]
pub include_turns: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadReadResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1405,7 +1471,7 @@ pub struct Thread {
#[ts(type = "number")]
pub updated_at: i64,
/// [UNSTABLE] Path to the thread on disk.
pub path: PathBuf,
pub path: Option<PathBuf>,
/// Working directory captured for the thread.
pub cwd: PathBuf,
/// Version of the CLI that created the thread.
@@ -1414,7 +1480,8 @@ pub struct Thread {
pub source: SessionSource,
/// Optional Git metadata captured when the thread was created.
pub git_info: Option<GitInfo>,
/// Only populated on `thread/resume`, `thread/rollback`, `thread/fork` responses.
/// Only populated on `thread/resume`, `thread/rollback`, `thread/fork`, and `thread/read`
/// (when `includeTurns` is true) responses.
/// For all other responses and notifications returning a Thread,
/// the turns field will be an empty list.
pub turns: Vec<Turn>,
@@ -1551,6 +1618,8 @@ pub struct TurnStartParams {
pub effort: Option<ReasoningEffort>,
/// Override the reasoning summary for this turn and subsequent turns.
pub summary: Option<ReasoningSummary>,
/// Override the personality for this turn and subsequent turns.
pub personality: Option<Personality>,
/// Optional JSON Schema used to constrain the final assistant message for this turn.
pub output_schema: Option<JsonValue>,
@@ -2260,6 +2329,18 @@ pub struct CommandExecutionRequestApprovalParams {
pub item_id: String,
/// Optional explanatory reason (e.g. request for network access).
pub reason: Option<String>,
/// The command to be executed.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub command: Option<String>,
/// The command's working directory.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cwd: Option<PathBuf>,
/// Best-effort parsed command actions for friendly display.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub command_actions: Option<Vec<CommandAction>>,
/// Optional proposed execpolicy amendment to allow similar commands without prompting.
pub proposed_execpolicy_amendment: Option<ExecPolicyAmendment>,
}
@@ -2327,8 +2408,7 @@ pub struct ToolRequestUserInputParams {
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Captures a user's answer to a request_user_input question.
pub struct ToolRequestUserInputAnswer {
pub selected: Vec<String>,
pub other: Option<String>,
pub answers: Vec<String>,
}
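
With the `selected`/`other` split collapsed into a single list, a client reply is just an array of strings, one per question asked. An illustrative payload (the field name comes from the struct above; the answer text is invented for the example):

```rust
use serde_json::json;

fn main() {
    // One entry per question the tool asked; free-form text and picked
    // options both travel in the same `answers` list now.
    let answer = json!({
        "answers": ["Use the staging database", "Yes, run the migration"]
    });
    println!("{answer}");
}
```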
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -2428,6 +2508,24 @@ pub struct DeprecationNoticeNotification {
pub details: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextPosition {
/// 1-based line number.
pub line: usize,
/// 1-based column number (in Unicode scalar values).
pub column: usize,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextRange {
pub start: TextPosition,
pub end: TextPosition,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2436,6 +2534,14 @@ pub struct ConfigWarningNotification {
pub summary: String,
/// Optional extra guidance or error details.
pub details: Option<String>,
/// Optional path to the config file that triggered the warning.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub path: Option<String>,
/// Optional range for the error location inside the config file.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub range: Option<TextRange>,
}
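
Together with the new `TextPosition`/`TextRange` types above, a warning can now point at the offending location in a config file. An illustrative payload under those definitions (`line` and `column` are 1-based per the doc comments; the file path and details text are invented for the example):

```rust
use serde_json::json;

fn main() {
    // Illustrative config-warning payload: the optional `path` and `range`
    // locate the problem inside the config file that triggered it.
    let notification = json!({
        "summary": "Error parsing rules; custom rules not applied.",
        "details": "unexpected token",
        "path": "/Users/me/.codex/rules.policy",
        "range": {
            "start": { "line": 3, "column": 5 },
            "end": { "line": 3, "column": 12 }
        }
    });
    println!("{notification}");
}
```

The `exec_policy_warning_location` helper added in `lib.rs` later in this diff produces a path/range pair of exactly this shape.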
#[cfg(test)]


@@ -842,6 +842,9 @@ impl CodexClient {
turn_id,
item_id,
reason,
command,
cwd,
command_actions,
proposed_execpolicy_amendment,
} = params;
@@ -851,6 +854,17 @@ impl CodexClient {
if let Some(reason) = reason.as_deref() {
println!("< reason: {reason}");
}
if let Some(command) = command.as_deref() {
println!("< command: {command}");
}
if let Some(cwd) = cwd.as_ref() {
println!("< cwd: {}", cwd.display());
}
if let Some(command_actions) = command_actions.as_ref()
&& !command_actions.is_empty()
{
println!("< command actions: {command_actions:?}");
}
if let Some(execpolicy_amendment) = proposed_execpolicy_amendment.as_ref() {
println!("< proposed execpolicy amendment: {execpolicy_amendment:?}");
}


@@ -22,6 +22,7 @@ codex-common = { workspace = true, features = ["cli"] }
codex-core = { workspace = true }
codex-backend-client = { workspace = true }
codex-file-search = { workspace = true }
codex-chatgpt = { workspace = true }
codex-login = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
@@ -34,6 +35,7 @@ serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
mcp-types = { workspace = true }
tempfile = { workspace = true }
time = { workspace = true }
toml = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
@@ -48,11 +50,20 @@ uuid = { workspace = true, features = ["serde", "v7"] }
[dev-dependencies]
app_test_support = { workspace = true }
axum = { workspace = true, default-features = false, features = [
"http1",
"json",
"tokio",
] }
base64 = { workspace = true }
core_test_support = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
pretty_assertions = { workspace = true }
rmcp = { workspace = true, default-features = false, features = [
"server",
"transport-streamable-http-server",
] }
serial_test = { workspace = true }
wiremock = { workspace = true }
shlex = { workspace = true }


@@ -79,6 +79,7 @@ Example (from OpenAI's official VSCode extension):
- `thread/fork` — fork an existing thread into a new thread id by copying the stored history; emits `thread/started` and auto-subscribes you to turn/item events for the new thread.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders` filtering.
- `thread/loaded/list` — list the thread ids currently loaded in memory.
- `thread/read` — read a stored thread by id without resuming it; optionally include turns via `includeTurns`.
- `thread/archive` — move a thread's rollout file into the archived directory; returns `{}` on success.
- `thread/rollback` — drop the last N turns from the agent's in-memory context and persist a rollback marker in the rollout so future resumes see the pruned history; returns the updated `thread` (with `turns` populated) on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
@@ -88,6 +89,7 @@ Example (from OpenAI's official VSCode extension):
- `model/list` — list available models (with reasoning effort options).
- `collaborationMode/list` — list available collaboration mode presets (experimental, no pagination).
- `skills/list` — list skills for one or more `cwd` values (optional `forceReload`).
- `app/list` — list available apps.
- `skills/config/write` — write user-level skill config by path.
- `mcpServer/oauth/login` — start an OAuth login for a configured MCP server; returns an `authorization_url` and later emits `mcpServer/oauthLogin/completed` once the browser flow finishes.
- `tool/requestUserInput` — prompt the user with 1–3 short questions for a tool call and return their answers (experimental).
@@ -112,6 +114,7 @@ Start a fresh thread when you need a new Codex conversation.
"cwd": "/Users/me/project",
"approvalPolicy": "never",
"sandbox": "workspaceWrite",
"personality": "friendly"
} }
{ "id": 10, "result": {
"thread": {
@@ -124,10 +127,13 @@ Start a fresh thread when you need a new Codex conversation.
{ "method": "thread/started", "params": { "thread": { } } }
```
To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted:
To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted. You can also pass the same configuration overrides supported by `thread/start`, such as `personality`:
```json
{ "method": "thread/resume", "id": 11, "params": { "threadId": "thr_123" } }
{ "method": "thread/resume", "id": 11, "params": {
"threadId": "thr_123",
"personality": "friendly"
} }
{ "id": 11, "result": { "thread": { "id": "thr_123", } } }
```
@@ -147,6 +153,7 @@ To branch from a stored session, call `thread/fork` with the `thread.id`. This c
- `limit` — server defaults to a reasonable page size if unset.
- `sortKey` — `created_at` (default) or `updated_at`.
- `modelProviders` — restrict results to specific providers; unset, null, or an empty array will include all providers.
- `archived` — when `true`, list archived threads only. When `false` or `null`, list non-archived threads (default).
Example:
@@ -178,6 +185,20 @@ When `nextCursor` is `null`, you've reached the final page.
} }
```
### Example: Read a thread
Use `thread/read` to fetch a stored thread by id without resuming it. Pass `includeTurns` when you want the rollout history loaded into `thread.turns`.
```json
{ "method": "thread/read", "id": 22, "params": { "threadId": "thr_123" } }
{ "id": 22, "result": { "thread": { "id": "thr_123", "turns": [] } } }
```
```json
{ "method": "thread/read", "id": 23, "params": { "threadId": "thr_123", "includeTurns": true } }
{ "id": 23, "result": { "thread": { "id": "thr_123", "turns": [ ... ] } } }
```
### Example: Archive a thread
Use `thread/archive` to move the persisted rollout (stored as a JSONL file on disk) into the archived sessions directory.
@@ -187,7 +208,7 @@ Use `thread/archive` to move the persisted rollout (stored as a JSONL file on di
{ "id": 21, "result": {} }
```
An archived thread will not appear in future calls to `thread/list`.
An archived thread will not appear in `thread/list` unless `archived` is set to `true`.
### Example: Start a turn (send user input)
@@ -214,6 +235,7 @@ You can optionally specify config overrides on the new turn. If specified, these
"model": "gpt-5.1-codex",
"effort": "medium",
"summary": "concise",
"personality": "friendly",
// Optional JSON Schema to constrain the final assistant message for this turn.
"outputSchema": {
"type": "object",
@@ -445,7 +467,7 @@ Certain actions (shell commands or modifying files) may require explicit user ap
Order of messages:
1. `item/started` — shows the pending `commandExecution` item with `command`, `cwd`, and other fields so you can render the proposed action.
2. `item/commandExecution/requestApproval` (request) — carries the same `itemId`, `threadId`, `turnId`, optionally `reason` or `risk`, plus `parsedCmd` for friendly display.
2. `item/commandExecution/requestApproval` (request) — carries the same `itemId`, `threadId`, `turnId`, optionally `reason`, plus `command`, `cwd`, and `commandActions` for friendly display.
3. Client response — `{ "decision": "accept", "acceptSettings": { "forSession": false } }` or `{ "decision": "decline" }`.
4. `item/completed` — final `commandExecution` item with `status: "completed" | "failed" | "declined"` and execution output. Render this as the authoritative result.
@@ -504,7 +526,19 @@ Use `skills/list` to fetch the available skills (optionally scoped by `cwds`, wi
"data": [{
"cwd": "/Users/me/project",
"skills": [
{ "name": "skill-creator", "description": "Create or update a Codex skill", "enabled": true }
{
"name": "skill-creator",
"description": "Create or update a Codex skill",
"enabled": true,
"interface": {
"displayName": "Skill Creator",
"shortDescription": "Create or update a Codex skill",
"iconSmall": "icon.svg",
"iconLarge": "icon-large.svg",
"brandColor": "#111111",
"defaultPrompt": "Add a new skill for triaging flaky CI."
}
}
],
"errors": []
}]


@@ -241,6 +241,9 @@ pub(crate) async fn apply_bespoke_event_handling(
// and emit the corresponding EventMsg, we repurpose the call_id as the item_id.
item_id: item_id.clone(),
reason,
command: Some(command_string.clone()),
cwd: Some(cwd.clone()),
command_actions: Some(command_actions.clone()),
proposed_execpolicy_amendment: proposed_execpolicy_amendment_v2,
};
let rx = outgoing
@@ -1003,7 +1006,15 @@ pub(crate) async fn apply_bespoke_event_handling(
};
if let Some(request_id) = pending {
let rollout_path = conversation.rollout_path();
let Some(rollout_path) = conversation.rollout_path() else {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "thread has no persisted rollout".to_string(),
data: None,
};
outgoing.send_error(request_id, error).await;
return;
};
let response = match read_summary_from_rollout(
rollout_path.as_path(),
fallback_model_provider.as_str(),
@@ -1445,8 +1456,7 @@ async fn on_request_user_input_response(
(
id,
CoreRequestUserInputAnswer {
selected: answer.selected,
other: answer.other,
answers: answer.answers,
},
)
})

File diff suppressed because it is too large.


@@ -3,6 +3,7 @@
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigBuilder;
use codex_core::config_loader::ConfigLayerStackOrdering;
use codex_core::config_loader::LoaderOverrides;
use std::io::ErrorKind;
use std::io::Result as IoResult;
@@ -11,9 +12,15 @@ use std::path::PathBuf;
use crate::message_processor::MessageProcessor;
use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::ConfigLayerSource;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::TextPosition as AppTextPosition;
use codex_app_server_protocol::TextRange as AppTextRange;
use codex_core::ExecPolicyError;
use codex_core::check_execpolicy_for_warnings;
use codex_core::config_loader::ConfigLoadError;
use codex_core::config_loader::TextRange as CoreTextRange;
use codex_feedback::CodexFeedback;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
@@ -44,6 +51,112 @@ mod outgoing_message;
/// plenty for an interactive CLI.
const CHANNEL_CAPACITY: usize = 128;
fn config_warning_from_error(
summary: impl Into<String>,
err: &std::io::Error,
) -> ConfigWarningNotification {
let (path, range) = match config_error_location(err) {
Some((path, range)) => (Some(path), Some(range)),
None => (None, None),
};
ConfigWarningNotification {
summary: summary.into(),
details: Some(err.to_string()),
path,
range,
}
}
fn config_error_location(err: &std::io::Error) -> Option<(String, AppTextRange)> {
err.get_ref()
.and_then(|err| err.downcast_ref::<ConfigLoadError>())
.map(|err| {
let config_error = err.config_error();
(
config_error.path.to_string_lossy().to_string(),
app_text_range(&config_error.range),
)
})
}
fn exec_policy_warning_location(err: &ExecPolicyError) -> (Option<String>, Option<AppTextRange>) {
match err {
ExecPolicyError::ParsePolicy { path, source } => {
if let Some(location) = source.location() {
let range = AppTextRange {
start: AppTextPosition {
line: location.range.start.line,
column: location.range.start.column,
},
end: AppTextPosition {
line: location.range.end.line,
column: location.range.end.column,
},
};
return (Some(location.path), Some(range));
}
(Some(path.clone()), None)
}
_ => (None, None),
}
}
fn app_text_range(range: &CoreTextRange) -> AppTextRange {
AppTextRange {
start: AppTextPosition {
line: range.start.line,
column: range.start.column,
},
end: AppTextPosition {
line: range.end.line,
column: range.end.column,
},
}
}
fn project_config_warning(config: &Config) -> Option<ConfigWarningNotification> {
let mut disabled_folders = Vec::new();
for layer in config
.config_layer_stack
.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst, true)
{
if !matches!(layer.name, ConfigLayerSource::Project { .. })
|| layer.disabled_reason.is_none()
{
continue;
}
if let ConfigLayerSource::Project { dot_codex_folder } = &layer.name {
disabled_folders.push((
dot_codex_folder.as_path().display().to_string(),
layer
.disabled_reason
.as_ref()
.map(ToString::to_string)
.unwrap_or_else(|| "Config folder disabled.".to_string()),
));
}
}
if disabled_folders.is_empty() {
return None;
}
let mut message = "The following config folders are disabled:\n".to_string();
for (index, (folder, reason)) in disabled_folders.iter().enumerate() {
let display_index = index + 1;
message.push_str(&format!(" {display_index}. {folder}\n"));
message.push_str(&format!(" {reason}\n"));
}
Some(ConfigWarningNotification {
summary: message,
details: None,
path: None,
range: None,
})
}
pub async fn run_main(
codex_linux_sandbox_exe: Option<PathBuf>,
cli_config_overrides: CliConfigOverrides,
@@ -95,10 +208,7 @@ pub async fn run_main(
{
Ok(config) => config,
Err(err) => {
let message = ConfigWarningNotification {
summary: "Invalid configuration; using defaults.".to_string(),
details: Some(err.to_string()),
};
let message = config_warning_from_error("Invalid configuration; using defaults.", &err);
config_warnings.push(message);
Config::load_default_with_cli_overrides(cli_kv_overrides.clone()).map_err(|e| {
std::io::Error::new(
@@ -112,13 +222,20 @@ pub async fn run_main(
if let Ok(Some(err)) =
check_execpolicy_for_warnings(&config.features, &config.config_layer_stack).await
{
let (path, range) = exec_policy_warning_location(&err);
let message = ConfigWarningNotification {
summary: "Error parsing rules; custom rules not applied.".to_string(),
details: Some(err.to_string()),
path,
range,
};
config_warnings.push(message);
}
if let Some(warning) = project_config_warning(&config) {
config_warnings.push(warning);
}
let feedback = CodexFeedback::new();
let otel = codex_core::otel_init::build_provider(


@@ -286,6 +286,8 @@ mod tests {
let notification = ServerNotification::ConfigWarning(ConfigWarningNotification {
summary: "Config error: using defaults".to_string(),
details: Some("error loading config: bad config".to_string()),
path: None,
range: None,
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);


@@ -49,6 +49,16 @@ impl ChatGptAuthFixture {
self
}
pub fn chatgpt_user_id(mut self, chatgpt_user_id: impl Into<String>) -> Self {
self.claims.chatgpt_user_id = Some(chatgpt_user_id.into());
self
}
pub fn chatgpt_account_id(mut self, chatgpt_account_id: impl Into<String>) -> Self {
self.claims.chatgpt_account_id = Some(chatgpt_account_id.into());
self
}
pub fn email(mut self, email: impl Into<String>) -> Self {
self.claims.email = Some(email.into());
self
@@ -69,6 +79,8 @@ impl ChatGptAuthFixture {
pub struct ChatGptIdTokenClaims {
pub email: Option<String>,
pub plan_type: Option<String>,
pub chatgpt_user_id: Option<String>,
pub chatgpt_account_id: Option<String>,
}
impl ChatGptIdTokenClaims {
@@ -85,6 +97,16 @@ impl ChatGptIdTokenClaims {
self.plan_type = Some(plan_type.into());
self
}
pub fn chatgpt_user_id(mut self, chatgpt_user_id: impl Into<String>) -> Self {
self.chatgpt_user_id = Some(chatgpt_user_id.into());
self
}
pub fn chatgpt_account_id(mut self, chatgpt_account_id: impl Into<String>) -> Self {
self.chatgpt_account_id = Some(chatgpt_account_id.into());
self
}
}
pub fn encode_id_token(claims: &ChatGptIdTokenClaims) -> Result<String> {
@@ -93,10 +115,20 @@ pub fn encode_id_token(claims: &ChatGptIdTokenClaims) -> Result<String> {
if let Some(email) = &claims.email {
payload.insert("email".to_string(), json!(email));
}
let mut auth_payload = serde_json::Map::new();
if let Some(plan_type) = &claims.plan_type {
auth_payload.insert("chatgpt_plan_type".to_string(), json!(plan_type));
}
if let Some(chatgpt_user_id) = &claims.chatgpt_user_id {
auth_payload.insert("chatgpt_user_id".to_string(), json!(chatgpt_user_id));
}
if let Some(chatgpt_account_id) = &claims.chatgpt_account_id {
auth_payload.insert("chatgpt_account_id".to_string(), json!(chatgpt_account_id));
}
if !auth_payload.is_empty() {
payload.insert(
"https://api.openai.com/auth".to_string(),
json!({ "chatgpt_plan_type": plan_type }),
serde_json::Value::Object(auth_payload),
);
}
let payload = serde_json::Value::Object(payload);


@@ -12,6 +12,7 @@ use tokio::process::ChildStdout;
use anyhow::Context;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AppsListParams;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginChatGptParams;
@@ -48,11 +49,13 @@ use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadForkParams;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadLoadedListParams;
use codex_app_server_protocol::ThreadReadParams;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadRollbackParams;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnStartParams;
use codex_core::default_client::CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR;
use tokio::process::Command;
pub struct McpProcess {
@@ -92,6 +95,7 @@ impl McpProcess {
cmd.stderr(Stdio::piped());
cmd.env("CODEX_HOME", codex_home);
cmd.env("RUST_LOG", "debug");
cmd.env_remove(CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR);
for (k, v) in env_overrides {
match v {
@@ -388,6 +392,15 @@ impl McpProcess {
self.send_request("thread/loaded/list", params).await
}
/// Send a `thread/read` JSON-RPC request.
pub async fn send_thread_read_request(
&mut self,
params: ThreadReadParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/read", params).await
}
/// Send a `model/list` JSON-RPC request.
pub async fn send_list_models_request(
&mut self,
@@ -397,6 +410,12 @@ impl McpProcess {
self.send_request("model/list", params).await
}
/// Send an `app/list` JSON-RPC request.
pub async fn send_apps_list_request(&mut self, params: AppsListParams) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("app/list", params).await
}
/// Send a `collaborationMode/list` JSON-RPC request.
pub async fn send_list_collaboration_modes_request(
&mut self,


@@ -27,6 +27,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
priority,
upgrade: preset.upgrade.as_ref().map(|u| u.into()),
base_instructions: "base instructions".to_string(),
model_instructions_template: None,
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,


@@ -307,6 +307,7 @@ async fn test_list_and_resume_conversations() -> Result<()> {
content: vec![ContentItem::InputText {
text: fork_history_text.to_string(),
}],
end_turn: None,
}];
let resume_with_history_req_id = mcp
.send_resume_conversation_request(ResumeConversationParams {


@@ -0,0 +1,381 @@
use std::borrow::Cow;
use std::sync::Arc;
use std::time::Duration;
use anyhow::Result;
use app_test_support::ChatGptAuthFixture;
use app_test_support::McpProcess;
use app_test_support::to_response;
use app_test_support::write_chatgpt_auth;
use axum::Json;
use axum::Router;
use axum::extract::State;
use axum::http::HeaderMap;
use axum::http::StatusCode;
use axum::http::header::AUTHORIZATION;
use axum::routing::post;
use codex_app_server_protocol::AppInfo;
use codex_app_server_protocol::AppsListParams;
use codex_app_server_protocol::AppsListResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::connectors::ConnectorInfo;
use pretty_assertions::assert_eq;
use rmcp::handler::server::ServerHandler;
use rmcp::model::JsonObject;
use rmcp::model::ListToolsResult;
use rmcp::model::Meta;
use rmcp::model::ServerCapabilities;
use rmcp::model::ServerInfo;
use rmcp::model::Tool;
use rmcp::model::ToolAnnotations;
use rmcp::transport::StreamableHttpServerConfig;
use rmcp::transport::StreamableHttpService;
use rmcp::transport::streamable_http_server::session::local::LocalSessionManager;
use serde_json::json;
use tempfile::TempDir;
use tokio::net::TcpListener;
use tokio::task::JoinHandle;
use tokio::time::timeout;
const DEFAULT_TIMEOUT: Duration = Duration::from_secs(10);
#[tokio::test]
async fn list_apps_returns_empty_when_connectors_disabled() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_apps_list_request(AppsListParams {
limit: Some(50),
cursor: None,
})
.await?;
let response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let AppsListResponse { data, next_cursor } = to_response(response)?;
assert!(data.is_empty());
assert!(next_cursor.is_none());
Ok(())
}
#[tokio::test]
async fn list_apps_returns_connectors_with_accessible_flags() -> Result<()> {
let connectors = vec![
ConnectorInfo {
connector_id: "alpha".to_string(),
connector_name: "Alpha".to_string(),
connector_description: Some("Alpha connector".to_string()),
logo_url: Some("https://example.com/alpha.png".to_string()),
install_url: None,
is_accessible: false,
},
ConnectorInfo {
connector_id: "beta".to_string(),
connector_name: "beta".to_string(),
connector_description: None,
logo_url: None,
install_url: None,
is_accessible: false,
},
];
let tools = vec![connector_tool("beta", "Beta App")?];
let (server_url, server_handle) = start_apps_server(connectors.clone(), tools).await?;
let codex_home = TempDir::new()?;
write_connectors_config(codex_home.path(), &server_url)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("chatgpt-token")
.account_id("account-123")
.chatgpt_user_id("user-123")
.chatgpt_account_id("account-123"),
AuthCredentialsStoreMode::File,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_apps_list_request(AppsListParams {
limit: None,
cursor: None,
})
.await?;
let response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let AppsListResponse { data, next_cursor } = to_response(response)?;
let expected = vec![
AppInfo {
id: "beta".to_string(),
name: "Beta App".to_string(),
description: None,
logo_url: None,
install_url: Some("https://chatgpt.com/apps/beta/beta".to_string()),
is_accessible: true,
},
AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: Some("https://example.com/alpha.png".to_string()),
install_url: Some("https://chatgpt.com/apps/alpha/alpha".to_string()),
is_accessible: false,
},
];
assert_eq!(data, expected);
assert!(next_cursor.is_none());
server_handle.abort();
Ok(())
}
#[tokio::test]
async fn list_apps_paginates_results() -> Result<()> {
let connectors = vec![
ConnectorInfo {
connector_id: "alpha".to_string(),
connector_name: "Alpha".to_string(),
connector_description: Some("Alpha connector".to_string()),
logo_url: None,
install_url: None,
is_accessible: false,
},
ConnectorInfo {
connector_id: "beta".to_string(),
connector_name: "beta".to_string(),
connector_description: None,
logo_url: None,
install_url: None,
is_accessible: false,
},
];
let tools = vec![connector_tool("beta", "Beta App")?];
let (server_url, server_handle) = start_apps_server(connectors.clone(), tools).await?;
let codex_home = TempDir::new()?;
write_connectors_config(codex_home.path(), &server_url)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("chatgpt-token")
.account_id("account-123")
.chatgpt_user_id("user-123")
.chatgpt_account_id("account-123"),
AuthCredentialsStoreMode::File,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let first_request = mcp
.send_apps_list_request(AppsListParams {
limit: Some(1),
cursor: None,
})
.await?;
let first_response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_request)),
)
.await??;
let AppsListResponse {
data: first_page,
next_cursor: first_cursor,
} = to_response(first_response)?;
let expected_first = vec![AppInfo {
id: "beta".to_string(),
name: "Beta App".to_string(),
description: None,
logo_url: None,
install_url: Some("https://chatgpt.com/apps/beta/beta".to_string()),
is_accessible: true,
}];
assert_eq!(first_page, expected_first);
let next_cursor = first_cursor.ok_or_else(|| anyhow::anyhow!("missing cursor"))?;
let second_request = mcp
.send_apps_list_request(AppsListParams {
limit: Some(1),
cursor: Some(next_cursor),
})
.await?;
let second_response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_request)),
)
.await??;
let AppsListResponse {
data: second_page,
next_cursor: second_cursor,
} = to_response(second_response)?;
let expected_second = vec![AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: None,
install_url: Some("https://chatgpt.com/apps/alpha/alpha".to_string()),
is_accessible: false,
}];
assert_eq!(second_page, expected_second);
assert!(second_cursor.is_none());
server_handle.abort();
Ok(())
}
#[derive(Clone)]
struct AppsServerState {
expected_bearer: String,
expected_account_id: String,
response: serde_json::Value,
}
#[derive(Clone)]
struct AppListMcpServer {
tools: Arc<Vec<Tool>>,
}
impl AppListMcpServer {
fn new(tools: Arc<Vec<Tool>>) -> Self {
Self { tools }
}
}
impl ServerHandler for AppListMcpServer {
fn get_info(&self) -> ServerInfo {
ServerInfo {
capabilities: ServerCapabilities::builder().enable_tools().build(),
..ServerInfo::default()
}
}
fn list_tools(
&self,
_request: Option<rmcp::model::PaginatedRequestParam>,
_context: rmcp::service::RequestContext<rmcp::service::RoleServer>,
) -> impl std::future::Future<Output = Result<ListToolsResult, rmcp::ErrorData>> + Send + '_
{
let tools = self.tools.clone();
async move {
Ok(ListToolsResult {
tools: (*tools).clone(),
next_cursor: None,
meta: None,
})
}
}
}
async fn start_apps_server(
connectors: Vec<ConnectorInfo>,
tools: Vec<Tool>,
) -> Result<(String, JoinHandle<()>)> {
let state = AppsServerState {
expected_bearer: "Bearer chatgpt-token".to_string(),
expected_account_id: "account-123".to_string(),
response: json!({ "connectors": connectors }),
};
let state = Arc::new(state);
let tools = Arc::new(tools);
let listener = TcpListener::bind("127.0.0.1:0").await?;
let addr = listener.local_addr()?;
let mcp_service = StreamableHttpService::new(
{
let tools = tools.clone();
move || Ok(AppListMcpServer::new(tools.clone()))
},
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
);
let router = Router::new()
.route("/aip/connectors/list_accessible", post(list_connectors))
.with_state(state)
.nest_service("/api/codex/apps", mcp_service);
let handle = tokio::spawn(async move {
let _ = axum::serve(listener, router).await;
});
Ok((format!("http://{addr}"), handle))
}
async fn list_connectors(
State(state): State<Arc<AppsServerState>>,
headers: HeaderMap,
) -> Result<impl axum::response::IntoResponse, StatusCode> {
let bearer_ok = headers
.get(AUTHORIZATION)
.and_then(|value| value.to_str().ok())
.is_some_and(|value| value == state.expected_bearer);
let account_ok = headers
.get("chatgpt-account-id")
.and_then(|value| value.to_str().ok())
.is_some_and(|value| value == state.expected_account_id);
if bearer_ok && account_ok {
Ok(Json(state.response.clone()))
} else {
Err(StatusCode::UNAUTHORIZED)
}
}
fn connector_tool(connector_id: &str, connector_name: &str) -> Result<Tool> {
let schema: JsonObject = serde_json::from_value(json!({
"type": "object",
"additionalProperties": false
}))?;
let mut tool = Tool::new(
Cow::Owned(format!("connector_{connector_id}")),
Cow::Borrowed("Connector test tool"),
Arc::new(schema),
);
tool.annotations = Some(ToolAnnotations::new().read_only(true));
let mut meta = Meta::new();
meta.0
.insert("connector_id".to_string(), json!(connector_id));
meta.0
.insert("connector_name".to_string(), json!(connector_name));
tool.meta = Some(meta);
Ok(tool)
}
fn write_connectors_config(codex_home: &std::path::Path, base_url: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
chatgpt_base_url = "{base_url}"
[features]
connectors = true
"#
),
)
}

View File

@@ -1,7 +1,7 @@
//! Validates that the collaboration mode list endpoint returns the expected default presets.
//!
//! The test drives the app server through the MCP harness and asserts that the list response
//! includes the plan, pair programming, and execute modes with their default model and reasoning
//! includes the plan, coding, pair programming, and execute modes with their default model and reasoning
//! effort settings, which keeps the API contract visible in one place.
#![allow(clippy::unwrap_used)]
@@ -16,7 +16,8 @@ use codex_app_server_protocol::CollaborationModeListResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_core::models_manager::test_builtin_collaboration_mode_presets;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ModeKind;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -44,8 +45,23 @@ async fn list_collaboration_modes_returns_presets() -> Result<()> {
let CollaborationModeListResponse { data: items } =
to_response::<CollaborationModeListResponse>(response)?;
let expected = vec![plan_preset(), pair_programming_preset(), execute_preset()];
assert_eq!(expected, items);
let expected = [
plan_preset(),
code_preset(),
pair_programming_preset(),
execute_preset(),
];
assert_eq!(expected.len(), items.len());
for (expected_mask, actual_mask) in expected.iter().zip(items.iter()) {
assert_eq!(expected_mask.name, actual_mask.name);
assert_eq!(expected_mask.mode, actual_mask.mode);
assert_eq!(expected_mask.model, actual_mask.model);
assert_eq!(expected_mask.reasoning_effort, actual_mask.reasoning_effort);
assert_eq!(
expected_mask.developer_instructions,
actual_mask.developer_instructions
);
}
Ok(())
}
@@ -53,11 +69,11 @@ async fn list_collaboration_modes_returns_presets() -> Result<()> {
///
/// If the defaults change in the app server, this helper should be updated alongside the
/// contract, or the test will fail in ways that imply a regression in the API.
fn plan_preset() -> CollaborationMode {
fn plan_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| matches!(p, CollaborationMode::Plan(_)))
.find(|p| p.mode == Some(ModeKind::Plan))
.unwrap()
}
@@ -65,11 +81,20 @@ fn plan_preset() -> CollaborationMode {
///
/// The helper keeps the expected model and reasoning defaults co-located with the test
/// so that mismatches point directly at the API contract being exercised.
fn pair_programming_preset() -> CollaborationMode {
fn pair_programming_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| matches!(p, CollaborationMode::PairProgramming(_)))
.find(|p| p.mode == Some(ModeKind::PairProgramming))
.unwrap()
}
/// Builds the code preset that the list response is expected to return.
fn code_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Code))
.unwrap()
}
@@ -77,10 +102,10 @@ fn pair_programming_preset() -> CollaborationMode {
///
/// The execute preset uses a different reasoning effort to capture the higher-effort
/// execution contract the server currently exposes.
fn execute_preset() -> CollaborationMode {
fn execute_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| matches!(p, CollaborationMode::Execute(_)))
.find(|p| p.mode == Some(ModeKind::Execute))
.unwrap()
}
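The rewritten helpers above select presets by `ModeKind` on `CollaborationModeMask` values instead of matching `CollaborationMode` enum variants, and the test now compares the masks field by field. A rough sketch of that lookup pattern with hypothetical stand-in types (not the real `codex_protocol::config_types` definitions; field types are guesses inferred from the assertions):

```rust
// Hypothetical stand-ins for illustration only.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ModeKind { Plan, Code, PairProgramming, Execute }

#[derive(Clone, Debug, PartialEq)]
struct CollaborationModeMask {
    name: String,                          // guessed type
    mode: Option<ModeKind>,                // compared as `p.mode == Some(ModeKind::Plan)` above
    model: Option<String>,                 // guessed type
    reasoning_effort: Option<String>,      // the real field is likely a ReasoningEffort enum
    developer_instructions: Option<String>,
}

// Mirrors the plan_preset()/code_preset()/pair_programming_preset()/execute_preset() lookups.
fn preset_for(presets: Vec<CollaborationModeMask>, kind: ModeKind) -> Option<CollaborationModeMask> {
    presets.into_iter().find(|p| p.mode == Some(kind))
}

fn main() {
    let presets = vec![CollaborationModeMask {
        name: "Plan".to_string(),
        mode: Some(ModeKind::Plan),
        model: None,
        reasoning_effort: None,
        developer_instructions: None,
    }];
    assert!(preset_for(presets, ModeKind::Plan).is_some());
}
```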

View File

@@ -18,7 +18,10 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxMode;
use codex_app_server_protocol::ToolsV2;
use codex_app_server_protocol::WriteStatus;
use codex_core::config::set_project_trust_level;
use codex_core::config_loader::SYSTEM_CONFIG_TOML_FILE_UNIX;
use codex_protocol::config_types::TrustLevel;
use codex_protocol::openai_models::ReasoningEffort;
use codex_utils_absolute_path::AbsolutePathBuf;
use pretty_assertions::assert_eq;
use serde_json::json;
@@ -53,6 +56,7 @@ sandbox_mode = "workspace-write"
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -101,6 +105,7 @@ view_image = false
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -141,6 +146,52 @@ view_image = false
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn config_read_includes_project_layers_for_cwd() -> Result<()> {
let codex_home = TempDir::new()?;
write_config(&codex_home, r#"model = "gpt-user""#)?;
let workspace = TempDir::new()?;
let project_config_dir = workspace.path().join(".codex");
std::fs::create_dir_all(&project_config_dir)?;
std::fs::write(
project_config_dir.join("config.toml"),
r#"
model_reasoning_effort = "high"
"#,
)?;
set_project_trust_level(codex_home.path(), workspace.path(), TrustLevel::Trusted)?;
let project_config = AbsolutePathBuf::try_from(project_config_dir)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: Some(workspace.path().to_string_lossy().into_owned()),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let ConfigReadResponse {
config, origins, ..
} = to_response(resp)?;
assert_eq!(config.model_reasoning_effort, Some(ReasoningEffort::High));
assert_eq!(
origins.get("model_reasoning_effort").expect("origin").name,
ConfigLayerSource::Project {
dot_codex_folder: project_config
}
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn config_read_includes_system_layer_and_overrides() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -195,6 +246,7 @@ writable_roots = [{}]
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -281,6 +333,7 @@ model = "gpt-old"
let read_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
@@ -315,6 +368,7 @@ model = "gpt-old"
let verify_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let verify_resp: JSONRPCResponse = timeout(
@@ -411,6 +465,7 @@ async fn config_batch_write_applies_multiple_edits() -> Result<()> {
let read_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let read_resp: JSONRPCResponse = timeout(

View File

@@ -1,5 +1,6 @@
mod account;
mod analytics;
mod app_list;
mod collaboration_mode_list;
mod config_rpc;
mod initialize;
@@ -12,6 +13,7 @@ mod thread_archive;
mod thread_fork;
mod thread_list;
mod thread_loaded_list;
mod thread_read;
mod thread_resume;
mod thread_rollback;
mod thread_start;

View File

@@ -13,6 +13,7 @@ use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Settings;
use codex_protocol::openai_models::ReasoningEffort;
use tokio::time::timeout;
@@ -54,11 +55,14 @@ async fn request_user_input_round_trip() -> Result<()> {
}],
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
collaboration_mode: Some(CollaborationMode::Plan(Settings {
model: "mock-model".to_string(),
reasoning_effort: Some(ReasoningEffort::Medium),
developer_instructions: None,
})),
collaboration_mode: Some(CollaborationMode {
mode: ModeKind::Plan,
settings: Settings {
model: "mock-model".to_string(),
reasoning_effort: Some(ReasoningEffort::Medium),
developer_instructions: None,
},
}),
..Default::default()
})
.await?;
@@ -87,7 +91,7 @@ async fn request_user_input_round_trip() -> Result<()> {
request_id,
serde_json::json!({
"answers": {
"confirm_path": { "selected": ["yes"], "other": serde_json::Value::Null }
"confirm_path": { "answers": ["yes"] }
}
}),
)

View File

@@ -77,8 +77,9 @@ async fn thread_fork_creates_new_thread_and_emits_started() -> Result<()> {
assert_ne!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.is_absolute());
assert_ne!(thread.path, original_path);
let thread_path = thread.path.clone().expect("thread path");
assert!(thread_path.is_absolute());
assert_ne!(thread_path, original_path);
assert!(thread.cwd.is_absolute());
assert_eq!(thread.source, SessionSource::VsCode);

View File

@@ -12,9 +12,11 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadListResponse;
use codex_app_server_protocol::ThreadSortKey;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_protocol::protocol::GitInfo as CoreGitInfo;
use pretty_assertions::assert_eq;
use std::cmp::Reverse;
use std::fs;
use std::fs::FileTimes;
use std::fs::OpenOptions;
use std::path::Path;
@@ -36,8 +38,9 @@ async fn list_threads(
cursor: Option<String>,
limit: Option<u32>,
providers: Option<Vec<String>>,
archived: Option<bool>,
) -> Result<ThreadListResponse> {
list_threads_with_sort(mcp, cursor, limit, providers, None).await
list_threads_with_sort(mcp, cursor, limit, providers, None, archived).await
}
async fn list_threads_with_sort(
@@ -46,6 +49,7 @@ async fn list_threads_with_sort(
limit: Option<u32>,
providers: Option<Vec<String>>,
sort_key: Option<ThreadSortKey>,
archived: Option<bool>,
) -> Result<ThreadListResponse> {
let request_id = mcp
.send_thread_list_request(codex_app_server_protocol::ThreadListParams {
@@ -53,6 +57,7 @@ async fn list_threads_with_sort(
limit,
sort_key,
model_providers: providers,
archived,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -125,6 +130,7 @@ async fn thread_list_basic_empty() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert!(data.is_empty());
@@ -187,6 +193,7 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
None,
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert_eq!(data1.len(), 2);
@@ -211,6 +218,7 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
Some(cursor1),
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert!(data2.len() <= 2);
@@ -260,6 +268,7 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
None,
Some(10),
Some(vec!["other_provider".to_string()]),
None,
)
.await?;
assert_eq!(data.len(), 1);
@@ -309,6 +318,7 @@ async fn thread_list_fetches_until_limit_or_exhausted() -> Result<()> {
None,
Some(8),
Some(vec!["target_provider".to_string()]),
None,
)
.await?;
assert_eq!(
@@ -353,6 +363,7 @@ async fn thread_list_enforces_max_limit() -> Result<()> {
None,
Some(200),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert_eq!(
@@ -398,6 +409,7 @@ async fn thread_list_stops_when_not_enough_filtered_results_exist() -> Result<()
None,
Some(10),
Some(vec!["target_provider".to_string()]),
None,
)
.await?;
assert_eq!(
@@ -444,6 +456,7 @@ async fn thread_list_includes_git_info() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
let thread = data
@@ -502,6 +515,7 @@ async fn thread_list_default_sorts_by_created_at() -> Result<()> {
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
@@ -562,6 +576,7 @@ async fn thread_list_sort_updated_at_orders_by_mtime() -> Result<()> {
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
@@ -625,6 +640,7 @@ async fn thread_list_updated_at_paginates_with_cursor() -> Result<()> {
Some(2),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids_page1: Vec<_> = page1.iter().map(|thread| thread.id.as_str()).collect();
@@ -640,6 +656,7 @@ async fn thread_list_updated_at_paginates_with_cursor() -> Result<()> {
Some(2),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids_page2: Vec<_> = page2.iter().map(|thread| thread.id.as_str()).collect();
@@ -678,6 +695,7 @@ async fn thread_list_created_at_tie_breaks_by_uuid() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
@@ -730,6 +748,7 @@ async fn thread_list_updated_at_tie_breaks_by_uuid() -> Result<()> {
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
@@ -769,6 +788,7 @@ async fn thread_list_updated_at_uses_mtime() -> Result<()> {
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
@@ -786,6 +806,65 @@ async fn thread_list_updated_at_uses_mtime() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_list_archived_filter() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let active_id = create_fake_rollout(
codex_home.path(),
"2025-03-01T10-00-00",
"2025-03-01T10:00:00Z",
"Active",
Some("mock_provider"),
None,
)?;
let archived_id = create_fake_rollout(
codex_home.path(),
"2025-03-01T09-00-00",
"2025-03-01T09:00:00Z",
"Archived",
Some("mock_provider"),
None,
)?;
let archived_dir = codex_home.path().join(ARCHIVED_SESSIONS_SUBDIR);
fs::create_dir_all(&archived_dir)?;
let archived_source = rollout_path(codex_home.path(), "2025-03-01T09-00-00", &archived_id);
let archived_dest = archived_dir.join(
archived_source
.file_name()
.expect("archived rollout should have a file name"),
);
fs::rename(&archived_source, &archived_dest)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert_eq!(data.len(), 1);
assert_eq!(data[0].id, active_id);
let ThreadListResponse { data, .. } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(true),
)
.await?;
assert_eq!(data.len(), 1);
assert_eq!(data[0].id, archived_id);
Ok(())
}
#[tokio::test]
async fn thread_list_invalid_cursor_returns_error() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -799,6 +878,7 @@ async fn thread_list_invalid_cursor_returns_error() -> Result<()> {
limit: Some(2),
sort_key: None,
model_providers: Some(vec!["mock_provider".to_string()]),
archived: None,
})
.await?;
let error: JSONRPCError = timeout(

View File

@@ -0,0 +1,159 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout_with_text_elements;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadReadParams;
use codex_app_server_protocol::ThreadReadResponse;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use pretty_assertions::assert_eq;
use std::path::Path;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_read_returns_summary_without_turns() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let text_elements = [TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".into()),
)];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let read_id = mcp
.send_thread_read_request(ThreadReadParams {
thread_id: conversation_id.clone(),
include_turns: false,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(read_id)),
)
.await??;
let ThreadReadResponse { thread } = to_response::<ThreadReadResponse>(read_resp)?;
assert_eq!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.as_ref().expect("thread path").is_absolute());
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
assert_eq!(thread.git_info, None);
assert_eq!(thread.turns.len(), 0);
Ok(())
}
#[tokio::test]
async fn thread_read_can_include_turns() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let text_elements = vec![TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".into()),
)];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let read_id = mcp
.send_thread_read_request(ThreadReadParams {
thread_id: conversation_id.clone(),
include_turns: true,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(read_id)),
)
.await??;
let ThreadReadResponse { thread } = to_response::<ThreadReadResponse>(read_resp)?;
assert_eq!(thread.turns.len(), 1);
let turn = &thread.turns[0];
assert_eq!(turn.status, TurnStatus::Completed);
assert_eq!(turn.items.len(), 1, "expected user message item");
match &turn.items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string(),
text_elements: text_elements.clone().into_iter().map(Into::into).collect(),
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -11,18 +11,23 @@ use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use codex_protocol::config_types::Personality;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const DEFAULT_BASE_INSTRUCTIONS: &str = "You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.";
#[tokio::test]
async fn thread_resume_returns_original_thread() -> Result<()> {
@@ -112,7 +117,7 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
assert_eq!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.is_absolute());
assert!(thread.path.as_ref().expect("thread path").is_absolute());
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
@@ -164,7 +169,7 @@ async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
let thread_path = thread.path.clone();
let thread_path = thread.path.clone().expect("thread path");
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: "not-a-valid-thread-id".to_string(),
@@ -218,6 +223,7 @@ async fn thread_resume_supports_history_and_overrides() -> Result<()> {
content: vec![ContentItem::InputText {
text: history_text.to_string(),
}],
end_turn: None,
}];
// Resume with explicit history and override the model.
@@ -247,6 +253,91 @@ async fn thread_resume_supports_history_and_overrides() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_resume_accepts_personality_override_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id.clone(),
model: Some("gpt-5.2-codex".to_string()),
personality: Some(Personality::Friendly),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let _resume: ThreadResumeResponse = to_response::<ThreadResumeResponse>(resume_resp)?;
let turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id,
input: vec![UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_id)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let request = response_mock.single_request();
let developer_texts = request.message_input_texts("developer");
assert!(
!developer_texts
.iter()
.any(|text| text.contains("<personality_spec>")),
"did not expect a personality update message in developer input, got {developer_texts:?}"
);
let instructions_text = request.instructions_text();
assert!(
instructions_text.contains(DEFAULT_BASE_INSTRUCTIONS),
"expected default base instructions from history, got {instructions_text:?}"
);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
@@ -260,6 +351,9 @@ sandbox_mode = "read-only"
model_provider = "mock_provider"
[features]
remote_models = false
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"

View File

@@ -8,6 +8,9 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStartedNotification;
use codex_core::config::set_project_trust_level;
use codex_protocol::config_types::TrustLevel;
use codex_protocol::openai_models::ReasoningEffort;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -69,6 +72,47 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_start_respects_project_config_from_cwd() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let workspace = TempDir::new()?;
let project_config_dir = workspace.path().join(".codex");
std::fs::create_dir_all(&project_config_dir)?;
std::fs::write(
project_config_dir.join("config.toml"),
r#"
model_reasoning_effort = "high"
"#,
)?;
set_project_trust_level(codex_home.path(), workspace.path(), TrustLevel::Trusted)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
cwd: Some(workspace.path().to_string_lossy().into_owned()),
..Default::default()
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let ThreadStartResponse {
reasoning_effort, ..
} = to_response::<ThreadStartResponse>(resp)?;
assert_eq!(reasoning_effort, Some(ReasoningEffort::High));
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");

View File

@@ -34,13 +34,18 @@ use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStartedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_core::features::FEATURES;
use codex_core::features::Feature;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::Settings;
use codex_protocol::openai_models::ReasoningEffort;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -54,7 +59,12 @@ async fn turn_start_sends_originator_header() -> Result<()> {
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(
@@ -124,7 +134,12 @@ async fn turn_start_emits_user_message_item_with_text_elements() -> Result<()> {
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -211,7 +226,12 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -321,13 +341,19 @@ async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
@@ -338,11 +364,14 @@ async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let collaboration_mode = CollaborationMode::Custom(Settings {
model: "mock-model-collab".to_string(),
reasoning_effort: Some(ReasoningEffort::High),
developer_instructions: None,
});
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model: "mock-model-collab".to_string(),
reasoning_effort: Some(ReasoningEffort::High),
developer_instructions: None,
},
};
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
@@ -379,6 +408,81 @@ async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_personality_override_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
personality: Some(Personality::Friendly),
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let _turn: TurnStartResponse = to_response::<TurnStartResponse>(turn_resp)?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let request = response_mock.single_request();
let developer_texts = request.message_input_texts("developer");
if developer_texts.is_empty() {
eprintln!("request body: {}", request.body_json());
}
assert!(
developer_texts
.iter()
.any(|text| text.contains("<personality_spec>")),
"expected personality update message in developer input, got {developer_texts:?}"
);
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_local_image_input() -> Result<()> {
// Two Codex turns hit the mock model (session start + turn/start).
@@ -391,7 +495,12 @@ async fn turn_start_accepts_local_image_input() -> Result<()> {
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -466,7 +575,12 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
];
let server = create_mock_responses_server_sequence(responses).await;
// Default approval is untrusted to force elicitation on first turn.
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
create_config_toml(
codex_home.as_path(),
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -591,7 +705,12 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
create_final_assistant_message_sse_response("done")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
create_config_toml(
codex_home.as_path(),
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -738,7 +857,12 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
create_final_assistant_message_sse_response("done second")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -776,6 +900,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
personality: None,
output_schema: None,
collaboration_mode: None,
})
@@ -806,6 +931,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
personality: None,
output_schema: None,
collaboration_mode: None,
})
@@ -876,7 +1002,12 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
create_final_assistant_message_sse_response("patch applied")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1053,7 +1184,12 @@ async fn turn_start_file_change_approval_accept_for_session_persists_v2() -> Res
create_final_assistant_message_sse_response("patch 2 applied")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1229,7 +1365,12 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
create_final_assistant_message_sse_response("patch declined")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1369,16 +1510,12 @@ async fn command_execution_notifications_include_process_id() -> Result<()> {
];
let server = create_mock_responses_server_sequence(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let config_toml = codex_home.path().join("config.toml");
let mut config_contents = std::fs::read_to_string(&config_toml)?;
config_contents.push_str(
r#"
[features]
unified_exec = true
"#,
);
std::fs::write(&config_toml, config_contents)?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::from([(Feature::UnifiedExec, true)]),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1502,7 +1639,24 @@ fn create_config_toml(
codex_home: &Path,
server_uri: &str,
approval_policy: &str,
feature_flags: &BTreeMap<Feature, bool>,
) -> std::io::Result<()> {
let mut features = BTreeMap::from([(Feature::RemoteModels, false)]);
for (feature, enabled) in feature_flags {
features.insert(*feature, *enabled);
}
let feature_entries = features
.into_iter()
.map(|(feature, enabled)| {
let key = FEATURES
.iter()
.find(|spec| spec.id == feature)
.map(|spec| spec.key)
.unwrap_or_else(|| panic!("missing feature key for {feature:?}"));
format!("{key} = {enabled}")
})
.collect::<Vec<_>>()
.join("\n");
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
@@ -1514,6 +1668,9 @@ sandbox_mode = "read-only"
model_provider = "mock_provider"
[features]
{feature_entries}
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"

View File

@@ -12,7 +12,7 @@ path = "src/lib.rs"
anyhow = "1"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls-webpki-roots", "rustls-tls-native-roots"] }
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
codex-backend-openapi-models = { path = "../codex-backend-openapi-models" }
codex-protocol = { workspace = true }
codex-core = { workspace = true }

View File

@@ -5,6 +5,7 @@ use crate::chatgpt_token::get_chatgpt_token_data;
use crate::chatgpt_token::init_chatgpt_token_from_auth;
use anyhow::Context;
use serde::Serialize;
use serde::de::DeserializeOwned;
/// Make a GET request to the ChatGPT backend API.
@@ -48,3 +49,37 @@ pub(crate) async fn chatgpt_get_request<T: DeserializeOwned>(
anyhow::bail!("Request failed with status {status}: {body}")
}
}
pub(crate) async fn chatgpt_post_request<T: DeserializeOwned, P: Serialize>(
config: &Config,
access_token: &str,
account_id: &str,
path: &str,
payload: &P,
) -> anyhow::Result<T> {
let chatgpt_base_url = &config.chatgpt_base_url;
let client = create_client();
let url = format!("{chatgpt_base_url}{path}");
let response = client
.post(&url)
.bearer_auth(access_token)
.header("chatgpt-account-id", account_id)
.header("Content-Type", "application/json")
.json(payload)
.send()
.await
.context("Failed to send request")?;
if response.status().is_success() {
let result: T = response
.json()
.await
.context("Failed to parse JSON response")?;
Ok(result)
} else {
let status = response.status();
let body = response.text().await.unwrap_or_default();
anyhow::bail!("Request failed with status {status}: {body}")
}
}

View File

@@ -0,0 +1,125 @@
use codex_core::config::Config;
use codex_core::features::Feature;
use serde::Deserialize;
use serde::Serialize;
use crate::chatgpt_client::chatgpt_post_request;
use crate::chatgpt_token::get_chatgpt_token_data;
use crate::chatgpt_token::init_chatgpt_token_from_auth;
pub use codex_core::connectors::ConnectorInfo;
pub use codex_core::connectors::connector_display_label;
use codex_core::connectors::connector_install_url;
pub use codex_core::connectors::list_accessible_connectors_from_mcp_tools;
use codex_core::connectors::merge_connectors;
#[derive(Debug, Serialize)]
struct ListConnectorsRequest {
principals: Vec<Principal>,
}
#[derive(Debug, Serialize)]
struct Principal {
#[serde(rename = "type")]
principal_type: PrincipalType,
id: String,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "SCREAMING_SNAKE_CASE")]
enum PrincipalType {
User,
}
#[derive(Debug, Deserialize)]
struct ListConnectorsResponse {
connectors: Vec<ConnectorInfo>,
}
pub async fn list_connectors(config: &Config) -> anyhow::Result<Vec<ConnectorInfo>> {
if !config.features.enabled(Feature::Connectors) {
return Ok(Vec::new());
}
let (connectors_result, accessible_result) = tokio::join!(
list_all_connectors(config),
list_accessible_connectors_from_mcp_tools(config),
);
let connectors = connectors_result?;
let accessible = accessible_result?;
Ok(merge_connectors(connectors, accessible))
}
pub async fn list_all_connectors(config: &Config) -> anyhow::Result<Vec<ConnectorInfo>> {
if !config.features.enabled(Feature::Connectors) {
return Ok(Vec::new());
}
init_chatgpt_token_from_auth(&config.codex_home, config.cli_auth_credentials_store_mode)
.await?;
let token_data =
get_chatgpt_token_data().ok_or_else(|| anyhow::anyhow!("ChatGPT token not available"))?;
let user_id = token_data
.id_token
.chatgpt_user_id
.as_deref()
.ok_or_else(|| {
anyhow::anyhow!("ChatGPT user ID not available, please re-run `codex login`")
})?;
let account_id = token_data
.id_token
.chatgpt_account_id
.as_deref()
.ok_or_else(|| {
anyhow::anyhow!("ChatGPT account ID not available, please re-run `codex login`")
})?;
let principal_id = format!("{user_id}__{account_id}");
let request = ListConnectorsRequest {
principals: vec![Principal {
principal_type: PrincipalType::User,
id: principal_id,
}],
};
let response: ListConnectorsResponse = chatgpt_post_request(
config,
token_data.access_token.as_str(),
account_id,
"/aip/connectors/list_accessible?skip_actions=true&external_logos=true",
&request,
)
.await?;
let mut connectors = response.connectors;
for connector in &mut connectors {
let install_url = match connector.install_url.take() {
Some(install_url) => install_url,
None => connector_install_url(&connector.connector_name, &connector.connector_id),
};
connector.connector_name =
normalize_connector_name(&connector.connector_name, &connector.connector_id);
connector.connector_description =
normalize_connector_value(connector.connector_description.as_deref());
connector.install_url = Some(install_url);
connector.is_accessible = false;
}
connectors.sort_by(|left, right| {
left.connector_name
.cmp(&right.connector_name)
.then_with(|| left.connector_id.cmp(&right.connector_id))
});
Ok(connectors)
}
fn normalize_connector_name(name: &str, connector_id: &str) -> String {
let trimmed = name.trim();
if trimmed.is_empty() {
connector_id.to_string()
} else {
trimmed.to_string()
}
}
fn normalize_connector_value(value: Option<&str>) -> Option<String> {
value
.map(str::trim)
.filter(|value| !value.is_empty())
.map(str::to_string)
}
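As a quick illustration of the normalization behavior above: blank or whitespace-only connector names fall back to the connector ID, and empty descriptions collapse to `None`. A hypothetical unit test in the same module (the helpers are private; `gmail-connector` is an invented ID):

```rust
#[test]
fn normalization_examples() {
    // Whitespace is trimmed; an all-whitespace name falls back to the connector ID.
    assert_eq!(normalize_connector_name("  Gmail  ", "gmail-connector"), "Gmail");
    assert_eq!(normalize_connector_name("   ", "gmail-connector"), "gmail-connector");
    // Empty or whitespace-only descriptions are dropped entirely.
    assert_eq!(normalize_connector_value(Some("   ")), None);
    assert_eq!(
        normalize_connector_value(Some(" Syncs email ")),
        Some("Syncs email".to_string())
    );
}
```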

View File

@@ -1,4 +1,5 @@
pub mod apply_command;
mod chatgpt_client;
mod chatgpt_token;
pub mod connectors;
pub mod get_task;

View File

@@ -35,8 +35,6 @@ codex-responses-api-proxy = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-stdio-to-uds = { workspace = true }
codex-tui = { workspace = true }
codex-tui2 = { workspace = true }
codex-utils-absolute-path = { workspace = true }
libc = { workspace = true }
owo-colors = { workspace = true }
regex-lite = { workspace = true }

View File

@@ -26,7 +26,6 @@ use codex_tui::AppExitInfo;
use codex_tui::Cli as TuiCli;
use codex_tui::ExitReason;
use codex_tui::update_action::UpdateAction;
use codex_tui2 as tui2;
use owo_colors::OwoColorize;
use std::io::IsTerminal;
use std::path::PathBuf;
@@ -40,14 +39,8 @@ use crate::mcp_cmd::McpCli;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::find_codex_home;
use codex_core::config::load_config_as_toml_with_cli_overrides;
use codex_core::features::Feature;
use codex_core::features::FeatureOverrides;
use codex_core::features::Features;
use codex_core::features::is_known_feature_key;
use codex_core::terminal::TerminalName;
use codex_utils_absolute_path::AbsolutePathBuf;
/// Codex CLI
///
@@ -703,11 +696,20 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
overrides,
)
.await?;
let mut rows = Vec::with_capacity(codex_core::features::FEATURES.len());
let mut name_width = 0;
let mut stage_width = 0;
for def in codex_core::features::FEATURES.iter() {
let name = def.key;
let stage = stage_str(def.stage);
let enabled = config.features.enabled(def.id);
println!("{name}\t{stage}\t{enabled}");
name_width = name_width.max(name.len());
stage_width = stage_width.max(stage.len());
rows.push((name, stage, enabled));
}
for (name, stage, enabled) in rows {
println!("{name:<name_width$} {stage:<stage_width$} {enabled}");
}
}
},
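The replacement above pads each column with Rust's runtime width specifiers instead of tab characters, so the feature listing stays aligned regardless of key length. A minimal standalone illustration of that formatting (feature keys taken from the config snippets in this PR; the stage strings and enabled flags are invented):

```rust
fn main() {
    // (feature key, stage, enabled) rows; the stage values here are placeholders.
    let rows = [("unified_exec", "beta", true), ("connectors", "experimental", false)];
    let name_width = rows.iter().map(|(n, _, _)| n.len()).max().unwrap_or(0);
    let stage_width = rows.iter().map(|(_, s, _)| s.len()).max().unwrap_or(0);
    for (name, stage, enabled) in rows {
        // `{name:<name_width$}` left-aligns `name` in a field `name_width` characters wide.
        println!("{name:<name_width$} {stage:<stage_width$} {enabled}");
    }
}
```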
@@ -727,8 +729,6 @@ fn prepend_config_flags(
.splice(0..0, cli_config_overrides.raw_overrides);
}
/// Run the interactive Codex TUI, dispatching to either the legacy implementation or the
/// experimental TUI v2 shim based on feature flags resolved from config.
async fn run_interactive_tui(
mut interactive: TuiCli,
codex_linux_sandbox_exe: Option<PathBuf>,
@@ -756,12 +756,7 @@ async fn run_interactive_tui(
}
}
if is_tui2_enabled(&interactive).await? {
let result = tui2::run_main(interactive.into(), codex_linux_sandbox_exe).await?;
Ok(result.into())
} else {
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await
}
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await
}
fn confirm(prompt: &str) -> std::io::Result<bool> {
@@ -773,32 +768,6 @@ fn confirm(prompt: &str) -> std::io::Result<bool> {
Ok(answer.eq_ignore_ascii_case("y") || answer.eq_ignore_ascii_case("yes"))
}
/// Returns `Ok(true)` when the resolved configuration enables the `tui2` feature flag.
///
/// This performs a lightweight config load (honoring the same precedence as the lower-level TUI
/// bootstrap: `$CODEX_HOME`, config.toml, profile, and CLI `-c` overrides) solely to decide which
/// TUI frontend to launch. The full configuration is still loaded later by the interactive TUI.
async fn is_tui2_enabled(cli: &TuiCli) -> std::io::Result<bool> {
let raw_overrides = cli.config_overrides.raw_overrides.clone();
let overrides_cli = codex_common::CliConfigOverrides { raw_overrides };
let cli_kv_overrides = overrides_cli
.parse_overrides()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidInput, e))?;
let codex_home = find_codex_home()?;
let cwd = cli.cwd.clone();
let config_cwd = match cwd.as_deref() {
Some(path) => AbsolutePathBuf::from_absolute_path(path)?,
None => AbsolutePathBuf::current_dir()?,
};
let config_toml =
load_config_as_toml_with_cli_overrides(&codex_home, &config_cwd, cli_kv_overrides).await?;
let config_profile = config_toml.get_config_profile(cli.config_profile.clone())?;
let overrides = FeatureOverrides::default();
let features = Features::from_config(&config_toml, &config_profile, overrides);
Ok(features.enabled(Feature::Tui2))
}
/// Build the final `TuiCli` for a `codex resume` invocation.
fn finalize_resume_interactive(
mut interactive: TuiCli,

View File

@@ -20,11 +20,12 @@ use codex_rmcp_client::perform_oauth_login;
use codex_rmcp_client::supports_oauth_login;
/// Subcommands:
/// - `serve` — run the MCP server on stdio
/// - `list` — list configured servers (with `--json`)
/// - `get` — show a single server (with `--json`)
/// - `add` — add a server launcher entry to `~/.codex/config.toml`
/// - `remove` — delete a server entry
/// - `login` — authenticate with MCP server using OAuth
/// - `logout` — remove OAuth credentials for MCP server
#[derive(Debug, clap::Parser)]
pub struct McpCli {
#[clap(flatten)]

View File

@@ -193,6 +193,7 @@ impl Stream for AggregatedStream {
content: vec![ContentItem::OutputText {
text: std::mem::take(&mut this.cumulative),
}],
end_turn: None,
};
this.pending
.push_back(ResponseEvent::OutputItemDone(aggregated_message));

View File

@@ -24,6 +24,8 @@ use tokio_tungstenite::tungstenite::Error as WsError;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::tungstenite::client::IntoClientRequest;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing::trace;
use url::Url;
@@ -151,15 +153,30 @@ async fn connect_websocket(
headers: HeaderMap,
turn_state: Option<Arc<OnceLock<String>>>,
) -> Result<(WsStream, bool), ApiError> {
info!("connecting to websocket: {url}");
let mut request = url
.as_str()
.into_client_request()
.map_err(|err| ApiError::Stream(format!("failed to build websocket request: {err}")))?;
request.headers_mut().extend(headers);
let (stream, response) = tokio_tungstenite::connect_async(request)
.await
.map_err(|err| map_ws_error(err, &url))?;
let response = tokio_tungstenite::connect_async(request).await;
let (stream, response) = match response {
Ok((stream, response)) => {
info!(
"successfully connected to websocket: {url}, headers: {:?}",
response.headers()
);
(stream, response)
}
Err(err) => {
error!("failed to connect to websocket: {err}, url: {url}");
return Err(map_ws_error(err, &url));
}
};
let reasoning_included = response.headers().contains_key(X_REASONING_INCLUDED_HEADER);
if let Some(turn_state) = turn_state
&& let Some(header_value) = response
@@ -271,7 +288,7 @@ async fn run_websocket_response_stream(
Message::Pong(_) => {}
Message::Close(_) => {
return Err(ApiError::Stream(
"websocket closed before response.completed".into(),
"websocket closed by server before response.completed".into(),
));
}
_ => {}

View File

@@ -386,6 +386,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "hi".to_string(),
}],
end_turn: None,
}];
let req = ChatRequestBuilder::new("gpt-test", "inst", &prompt_input, &[])
.conversation_id(Some("conv-1".into()))
@@ -412,6 +413,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "read these".to_string(),
}],
end_turn: None,
},
ResponseItem::FunctionCall {
id: None,

View File

@@ -15,13 +15,12 @@ pub(crate) fn subagent_header(source: &Option<SessionSource>) -> Option<String>
return None;
};
match sub {
codex_protocol::protocol::SubAgentSource::Review => Some("review".to_string()),
codex_protocol::protocol::SubAgentSource::Compact => Some("compact".to_string()),
codex_protocol::protocol::SubAgentSource::ThreadSpawn { .. } => {
Some("collab_spawn".to_string())
}
codex_protocol::protocol::SubAgentSource::Other(label) => Some(label.clone()),
other => Some(
serde_json::to_value(other)
.ok()
.and_then(|v| v.as_str().map(std::string::ToString::to_string))
.unwrap_or_else(|| "other".to_string()),
),
}
}

View File

@@ -223,11 +223,13 @@ mod tests {
id: Some("m1".into()),
role: "assistant".into(),
content: Vec::new(),
end_turn: None,
},
ResponseItem::Message {
id: None,
role: "assistant".into(),
content: Vec::new(),
end_turn: None,
},
];

View File

@@ -330,6 +330,7 @@ async fn append_assistant_text(
id: None,
role: "assistant".to_string(),
content: vec![],
end_turn: None,
};
*assistant_item = Some(item.clone());
let _ = tx_event

View File

@@ -308,6 +308,7 @@ async fn streaming_client_retries_on_transport_error() -> Result<()> {
content: vec![ContentItem::InputText {
text: "hi".to_string(),
}],
end_turn: None,
}],
tools: Vec::<Value>::new(),
parallel_tool_calls: false,

View File

@@ -77,6 +77,7 @@ async fn models_client_hits_models_endpoint() {
priority: 1,
upgrade: None,
base_instructions: "base instructions".to_string(),
model_instructions_template: None,
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,

View File

@@ -24,21 +24,21 @@ pub fn builtin_approval_presets() -> Vec<ApprovalPreset> {
ApprovalPreset {
id: "read-only",
label: "Read Only",
description: "Requires approval to edit files and run commands.",
description: "Codex can read files in the current workspace. Approval is required to edit files or access the internet.",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::ReadOnly,
},
ApprovalPreset {
id: "auto",
label: "Agent",
description: "Read and edit files, and run commands.",
label: "Default",
description: "Codex can read and edit files in the current workspace, and run commands. Approval is required to access the internet or edit other files. (Identical to Agent mode)",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::new_workspace_write_policy(),
},
ApprovalPreset {
id: "full-access",
label: "Agent (full access)",
description: "Codex can edit files outside this workspace and run commands with network access. Exercise caution when using.",
label: "Full Access",
description: "Codex can edit files outside this workspace and access the internet without asking for approval. Exercise caution when using.",
approval: AskForApproval::Never,
sandbox: SandboxPolicy::DangerFullAccess,
},

View File

@@ -64,6 +64,7 @@ reqwest = { workspace = true, features = ["json", "stream"] }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
serde_path_to_error = { workspace = true }
serde_yaml = { workspace = true }
sha1 = { workspace = true }
sha2 = { workspace = true }

File diff suppressed because it is too large Load Diff

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,5 @@
use crate::agent::AgentStatus;
use crate::agent::guards::Guards;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::thread_manager::ThreadManagerState;
@@ -12,18 +13,25 @@ use tokio::sync::watch;
/// Control-plane handle for multi-agent operations.
/// `AgentControl` is held by each session (via `SessionServices`). It provides the capability to
/// spawn new agents and the inter-agent communication layer.
/// An `AgentControl` instance is shared per "user session", which means the same `AgentControl`
/// is used for every sub-agent spawned by Codex. This keeps the guards
/// scoped to a user session.
#[derive(Clone, Default)]
pub(crate) struct AgentControl {
/// Weak handle back to the global thread registry/state.
/// This is `Weak` to avoid reference cycles and shadow persistence of the form
/// `ThreadManagerState -> CodexThread -> Session -> SessionServices -> ThreadManagerState`.
manager: Weak<ThreadManagerState>,
state: Arc<Guards>,
}
impl AgentControl {
/// Construct a new `AgentControl` that can spawn/message agents via the given manager state.
pub(crate) fn new(manager: Weak<ThreadManagerState>) -> Self {
Self { manager }
Self {
manager,
..Default::default()
}
}
/// Spawn a new agent thread and submit the initial prompt.
@@ -31,9 +39,21 @@ impl AgentControl {
&self,
config: crate::config::Config,
prompt: String,
session_source: Option<codex_protocol::protocol::SessionSource>,
) -> CodexResult<ThreadId> {
let state = self.upgrade()?;
let new_thread = state.spawn_new_thread(config, self.clone()).await?;
let reservation = self.state.reserve_spawn_slot(config.agent_max_threads)?;
// The same `AgentControl` is passed along when spawning the thread, so the guards stay shared.
let new_thread = match session_source {
Some(session_source) => {
state
.spawn_new_thread_with_source(config, self.clone(), session_source)
.await?
}
None => state.spawn_new_thread(config, self.clone()).await?,
};
reservation.commit(new_thread.thread_id);
// Notify that a new thread has been created. This notification will be processed by clients
// to subscribe to or drain this newly created thread.
@@ -67,6 +87,7 @@ impl AgentControl {
.await;
if matches!(result, Err(CodexErr::InternalAgentDied)) {
let _ = state.remove_thread(&agent_id).await;
self.state.release_spawned_thread(agent_id);
}
result
}
@@ -82,6 +103,7 @@ impl AgentControl {
let state = self.upgrade()?;
let result = state.send_op(agent_id, Op::Shutdown {}).await;
let _ = state.remove_thread(&agent_id).await;
self.state.release_spawned_thread(agent_id);
result
}
@@ -132,17 +154,25 @@ mod tests {
use codex_protocol::protocol::TurnStartedEvent;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use toml::Value as TomlValue;
async fn test_config() -> (TempDir, Config) {
async fn test_config_with_cli_overrides(
cli_overrides: Vec<(String, TomlValue)>,
) -> (TempDir, Config) {
let home = TempDir::new().expect("create temp dir");
let config = ConfigBuilder::default()
.codex_home(home.path().to_path_buf())
.cli_overrides(cli_overrides)
.build()
.await
.expect("load default test config");
(home, config)
}
async fn test_config() -> (TempDir, Config) {
test_config_with_cli_overrides(Vec::new()).await
}
struct AgentControlHarness {
_home: TempDir,
config: Config,
@@ -246,7 +276,7 @@ mod tests {
let control = AgentControl::default();
let (_home, config) = test_config().await;
let err = control
.spawn_agent(config, "hello".to_string())
.spawn_agent(config, "hello".to_string(), None)
.await
.expect_err("spawn_agent should fail without a manager");
assert_eq!(
@@ -348,7 +378,7 @@ mod tests {
let harness = AgentControlHarness::new().await;
let thread_id = harness
.control
.spawn_agent(harness.config.clone(), "spawned".to_string())
.spawn_agent(harness.config.clone(), "spawned".to_string(), None)
.await
.expect("spawn_agent should succeed");
let _thread = harness
@@ -373,4 +403,117 @@ mod tests {
.find(|entry| *entry == expected);
assert_eq!(captured, Some(expected));
}
#[tokio::test]
async fn spawn_agent_respects_max_threads_limit() {
let max_threads = 1usize;
let (_home, config) = test_config_with_cli_overrides(vec![(
"agents.max_threads".to_string(),
TomlValue::Integer(max_threads as i64),
)])
.await;
let manager = ThreadManager::with_models_provider_and_home(
CodexAuth::from_api_key("dummy"),
config.model_provider.clone(),
config.codex_home.clone(),
);
let control = manager.agent_control();
let _ = manager
.start_thread(config.clone())
.await
.expect("start thread");
let first_agent_id = control
.spawn_agent(config.clone(), "hello".to_string(), None)
.await
.expect("spawn_agent should succeed");
let err = control
.spawn_agent(config, "hello again".to_string(), None)
.await
.expect_err("spawn_agent should respect max threads");
let CodexErr::AgentLimitReached {
max_threads: seen_max_threads,
} = err
else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(seen_max_threads, max_threads);
let _ = control
.shutdown_agent(first_agent_id)
.await
.expect("shutdown agent");
}
#[tokio::test]
async fn spawn_agent_releases_slot_after_shutdown() {
let max_threads = 1usize;
let (_home, config) = test_config_with_cli_overrides(vec![(
"agents.max_threads".to_string(),
TomlValue::Integer(max_threads as i64),
)])
.await;
let manager = ThreadManager::with_models_provider_and_home(
CodexAuth::from_api_key("dummy"),
config.model_provider.clone(),
config.codex_home.clone(),
);
let control = manager.agent_control();
let first_agent_id = control
.spawn_agent(config.clone(), "hello".to_string(), None)
.await
.expect("spawn_agent should succeed");
let _ = control
.shutdown_agent(first_agent_id)
.await
.expect("shutdown agent");
let second_agent_id = control
.spawn_agent(config.clone(), "hello again".to_string(), None)
.await
.expect("spawn_agent should succeed after shutdown");
let _ = control
.shutdown_agent(second_agent_id)
.await
.expect("shutdown agent");
}
#[tokio::test]
async fn spawn_agent_limit_shared_across_clones() {
let max_threads = 1usize;
let (_home, config) = test_config_with_cli_overrides(vec![(
"agents.max_threads".to_string(),
TomlValue::Integer(max_threads as i64),
)])
.await;
let manager = ThreadManager::with_models_provider_and_home(
CodexAuth::from_api_key("dummy"),
config.model_provider.clone(),
config.codex_home.clone(),
);
let control = manager.agent_control();
let cloned = control.clone();
let first_agent_id = cloned
.spawn_agent(config.clone(), "hello".to_string(), None)
.await
.expect("spawn_agent should succeed");
let err = control
.spawn_agent(config, "hello again".to_string(), None)
.await
.expect_err("spawn_agent should respect shared guard");
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
let _ = control
.shutdown_agent(first_agent_id)
.await
.expect("shutdown agent");
}
}

View File

@@ -0,0 +1,238 @@
use crate::error::CodexErr;
use crate::error::Result;
use codex_protocol::ThreadId;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SubAgentSource;
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
/// This structure places limits on Codex's multi-agent capabilities. In
/// the current implementation, it limits:
/// * The total number of sub-agents (i.e. threads) per user session
///
/// This structure is shared by all agents in the same user session (because the `AgentControl`
/// that owns it is).
#[derive(Default)]
pub(crate) struct Guards {
threads_set: Mutex<HashSet<ThreadId>>,
total_count: AtomicUsize,
}
/// The initial agent is at depth 0.
pub(crate) const MAX_THREAD_SPAWN_DEPTH: i32 = 1;
fn session_depth(session_source: &SessionSource) -> i32 {
match session_source {
SessionSource::SubAgent(SubAgentSource::ThreadSpawn { depth, .. }) => *depth,
SessionSource::SubAgent(_) => 0,
_ => 0,
}
}
pub(crate) fn next_thread_spawn_depth(session_source: &SessionSource) -> i32 {
session_depth(session_source).saturating_add(1)
}
pub(crate) fn exceeds_thread_spawn_depth_limit(depth: i32) -> bool {
depth > MAX_THREAD_SPAWN_DEPTH
}
impl Guards {
pub(crate) fn reserve_spawn_slot(
self: &Arc<Self>,
max_threads: Option<usize>,
) -> Result<SpawnReservation> {
if let Some(max_threads) = max_threads {
if !self.try_increment_spawned(max_threads) {
return Err(CodexErr::AgentLimitReached { max_threads });
}
} else {
self.total_count.fetch_add(1, Ordering::AcqRel);
}
Ok(SpawnReservation {
state: Arc::clone(self),
active: true,
})
}
pub(crate) fn release_spawned_thread(&self, thread_id: ThreadId) {
let removed = {
let mut threads = self
.threads_set
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
threads.remove(&thread_id)
};
if removed {
self.total_count.fetch_sub(1, Ordering::AcqRel);
}
}
fn register_spawned_thread(&self, thread_id: ThreadId) {
let mut threads = self
.threads_set
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
threads.insert(thread_id);
}
fn try_increment_spawned(&self, max_threads: usize) -> bool {
let mut current = self.total_count.load(Ordering::Acquire);
loop {
if current >= max_threads {
return false;
}
match self.total_count.compare_exchange_weak(
current,
current + 1,
Ordering::AcqRel,
Ordering::Acquire,
) {
Ok(_) => return true,
Err(updated) => current = updated,
}
}
}
}
pub(crate) struct SpawnReservation {
state: Arc<Guards>,
active: bool,
}
impl SpawnReservation {
pub(crate) fn commit(mut self, thread_id: ThreadId) {
self.state.register_spawned_thread(thread_id);
self.active = false;
}
}
impl Drop for SpawnReservation {
fn drop(&mut self) {
if self.active {
self.state.total_count.fetch_sub(1, Ordering::AcqRel);
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn session_depth_defaults_to_zero_for_root_sources() {
assert_eq!(session_depth(&SessionSource::Cli), 0);
}
#[test]
fn thread_spawn_depth_increments_and_enforces_limit() {
let session_source = SessionSource::SubAgent(SubAgentSource::ThreadSpawn {
parent_thread_id: ThreadId::new(),
depth: 1,
});
let child_depth = next_thread_spawn_depth(&session_source);
assert_eq!(child_depth, 2);
assert!(exceeds_thread_spawn_depth_limit(child_depth));
}
#[test]
fn non_thread_spawn_subagents_default_to_depth_zero() {
let session_source = SessionSource::SubAgent(SubAgentSource::Review);
assert_eq!(session_depth(&session_source), 0);
assert_eq!(next_thread_spawn_depth(&session_source), 1);
assert!(!exceeds_thread_spawn_depth_limit(1));
}
#[test]
fn reservation_drop_releases_slot() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
drop(reservation);
let reservation = guards.reserve_spawn_slot(Some(1)).expect("slot released");
drop(reservation);
}
#[test]
fn commit_holds_slot_until_release() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
let thread_id = ThreadId::new();
reservation.commit(thread_id);
let err = match guards.reserve_spawn_slot(Some(1)) {
Ok(_) => panic!("limit should be enforced"),
Err(err) => err,
};
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
guards.release_spawned_thread(thread_id);
let reservation = guards
.reserve_spawn_slot(Some(1))
.expect("slot released after thread removal");
drop(reservation);
}
#[test]
fn release_ignores_unknown_thread_id() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
let thread_id = ThreadId::new();
reservation.commit(thread_id);
guards.release_spawned_thread(ThreadId::new());
let err = match guards.reserve_spawn_slot(Some(1)) {
Ok(_) => panic!("limit should still be enforced"),
Err(err) => err,
};
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
guards.release_spawned_thread(thread_id);
let reservation = guards
.reserve_spawn_slot(Some(1))
.expect("slot released after real thread removal");
drop(reservation);
}
#[test]
fn release_is_idempotent_for_registered_threads() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
let first_id = ThreadId::new();
reservation.commit(first_id);
guards.release_spawned_thread(first_id);
let reservation = guards.reserve_spawn_slot(Some(1)).expect("slot reused");
let second_id = ThreadId::new();
reservation.commit(second_id);
guards.release_spawned_thread(first_id);
let err = match guards.reserve_spawn_slot(Some(1)) {
Ok(_) => panic!("limit should still be enforced"),
Err(err) => err,
};
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
guards.release_spawned_thread(second_id);
let reservation = guards
.reserve_spawn_slot(Some(1))
.expect("slot released after second thread removal");
drop(reservation);
}
}

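A minimal standalone sketch of the reservation pattern introduced above. `Limiter` and `Reservation` are simplified stand-ins for the crate's `Guards` and `SpawnReservation`, not the actual API: a slot is reserved before spawning, committed once the thread exists, released on shutdown, and returned automatically if the reservation is dropped before commit.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};

/// Simplified stand-in for `Guards`: a shared counter capped at `max`.
struct Limiter {
    active: AtomicUsize,
}

/// Simplified stand-in for `SpawnReservation`: holds a slot until committed or dropped.
struct Reservation {
    limiter: Arc<Limiter>,
    committed: bool,
}

impl Limiter {
    fn reserve(this: &Arc<Limiter>, max: usize) -> Option<Reservation> {
        let mut current = this.active.load(Ordering::Acquire);
        loop {
            if current >= max {
                return None; // analogous to CodexErr::AgentLimitReached
            }
            match this.active.compare_exchange_weak(
                current,
                current + 1,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => {
                    return Some(Reservation {
                        limiter: Arc::clone(this),
                        committed: false,
                    });
                }
                Err(seen) => current = seen,
            }
        }
    }

    fn release(&self) {
        // Called when a committed thread shuts down.
        self.active.fetch_sub(1, Ordering::AcqRel);
    }
}

impl Reservation {
    fn commit(mut self) {
        // The slot now belongs to a live thread; it is freed later via `release`.
        self.committed = true;
    }
}

impl Drop for Reservation {
    fn drop(&mut self) {
        if !self.committed {
            // Spawn failed before commit: return the slot automatically.
            self.limiter.release();
        }
    }
}

fn main() {
    let limiter = Arc::new(Limiter { active: AtomicUsize::new(0) });

    let first = Limiter::reserve(&limiter, 1).expect("first slot is free");
    assert!(Limiter::reserve(&limiter, 1).is_none()); // limit of one concurrent agent
    first.commit();

    limiter.release(); // e.g. after shutdown_agent
    assert!(Limiter::reserve(&limiter, 1).is_some()); // slot is reusable again
}
```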
View File

@@ -1,8 +1,12 @@
pub(crate) mod control;
mod guards;
pub(crate) mod role;
pub(crate) mod status;
pub(crate) use codex_protocol::protocol::AgentStatus;
pub(crate) use control::AgentControl;
pub(crate) use guards::MAX_THREAD_SPAWN_DEPTH;
pub(crate) use guards::exceeds_thread_spawn_depth_limit;
pub(crate) use guards::next_thread_spawn_depth;
pub(crate) use role::AgentRole;
pub(crate) use status::agent_status_from_event;

View File

@@ -996,6 +996,7 @@ mod tests {
id_token: IdTokenInfo {
email: Some("user@example.com".to_string()),
chatgpt_plan_type: Some(InternalPlanType::Known(InternalKnownPlan::Pro)),
chatgpt_user_id: Some("user-12345".to_string()),
chatgpt_account_id: None,
raw_jwt: fake_jwt,
},

View File

@@ -138,26 +138,12 @@ fn parse_plain_command_from_node(cmd: tree_sitter::Node, src: &str) -> Option<Ve
words.push(child.utf8_text(src.as_bytes()).ok()?.to_owned());
}
"string" => {
if child.child_count() == 3
&& child.child(0)?.kind() == "\""
&& child.child(1)?.kind() == "string_content"
&& child.child(2)?.kind() == "\""
{
words.push(child.child(1)?.utf8_text(src.as_bytes()).ok()?.to_owned());
} else {
return None;
}
let parsed = parse_double_quoted_string(child, src)?;
words.push(parsed);
}
"raw_string" => {
let raw_string = child.utf8_text(src.as_bytes()).ok()?;
let stripped = raw_string
.strip_prefix('\'')
.and_then(|s| s.strip_suffix('\''));
if let Some(s) = stripped {
words.push(s.to_owned());
} else {
return None;
}
let parsed = parse_raw_string(child, src)?;
words.push(parsed);
}
"concatenation" => {
// Handle concatenated arguments like -g"*.py"
@@ -170,28 +156,12 @@ fn parse_plain_command_from_node(cmd: tree_sitter::Node, src: &str) -> Option<Ve
.push_str(part.utf8_text(src.as_bytes()).ok()?.to_owned().as_str());
}
"string" => {
if part.child_count() == 3
&& part.child(0)?.kind() == "\""
&& part.child(1)?.kind() == "string_content"
&& part.child(2)?.kind() == "\""
{
concatenated.push_str(
part.child(1)?
.utf8_text(src.as_bytes())
.ok()?
.to_owned()
.as_str(),
);
} else {
return None;
}
let parsed = parse_double_quoted_string(part, src)?;
concatenated.push_str(&parsed);
}
"raw_string" => {
let raw_string = part.utf8_text(src.as_bytes()).ok()?;
let stripped = raw_string
.strip_prefix('\'')
.and_then(|s| s.strip_suffix('\''))?;
concatenated.push_str(stripped);
let parsed = parse_raw_string(part, src)?;
concatenated.push_str(&parsed);
}
_ => return None,
}
@@ -207,9 +177,40 @@ fn parse_plain_command_from_node(cmd: tree_sitter::Node, src: &str) -> Option<Ve
Some(words)
}
fn parse_double_quoted_string(node: Node, src: &str) -> Option<String> {
if node.kind() != "string" {
return None;
}
let mut cursor = node.walk();
for part in node.named_children(&mut cursor) {
if part.kind() != "string_content" {
return None;
}
}
let raw = node.utf8_text(src.as_bytes()).ok()?;
let stripped = raw
.strip_prefix('"')
.and_then(|text| text.strip_suffix('"'))?;
Some(stripped.to_string())
}
fn parse_raw_string(node: Node, src: &str) -> Option<String> {
if node.kind() != "raw_string" {
return None;
}
let raw_string = node.utf8_text(src.as_bytes()).ok()?;
let stripped = raw_string
.strip_prefix('\'')
.and_then(|s| s.strip_suffix('\''));
stripped.map(str::to_owned)
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
fn parse_seq(src: &str) -> Option<Vec<Vec<String>>> {
let tree = try_parse_shell(src)?;
@@ -250,6 +251,38 @@ mod tests {
);
}
#[test]
fn accepts_double_quoted_strings_with_newlines() {
let cmds = parse_seq("git commit -m \"line1\nline2\"").unwrap();
assert_eq!(
cmds,
vec![vec![
"git".to_string(),
"commit".to_string(),
"-m".to_string(),
"line1\nline2".to_string(),
]]
);
}
#[test]
fn accepts_mixed_quote_concatenation() {
assert_eq!(
parse_seq(r#"echo "/usr"'/'"local"/bin"#).unwrap(),
vec![vec!["echo".to_string(), "/usr/local/bin".to_string()]]
);
assert_eq!(
parse_seq(r#"echo '/usr'"/"'local'/bin"#).unwrap(),
vec![vec!["echo".to_string(), "/usr/local/bin".to_string()]]
);
}
#[test]
fn rejects_double_quoted_strings_with_expansions() {
assert!(parse_seq(r#"echo "hi ${USER}""#).is_none());
assert!(parse_seq(r#"echo "$HOME""#).is_none());
}
#[test]
fn accepts_numbers_as_words() {
let cmds = parse_seq("echo 123 456").unwrap();

View File

@@ -226,13 +226,11 @@ impl ModelClient {
let mut extra_headers = ApiHeaderMap::new();
if let SessionSource::SubAgent(sub) = &self.state.session_source {
let subagent = if let crate::protocol::SubAgentSource::Other(label) = sub {
label.clone()
} else {
serde_json::to_value(sub)
.ok()
.and_then(|v| v.as_str().map(std::string::ToString::to_string))
.unwrap_or_else(|| "other".to_string())
let subagent = match sub {
crate::protocol::SubAgentSource::Review => "review".to_string(),
crate::protocol::SubAgentSource::Compact => "compact".to_string(),
crate::protocol::SubAgentSource::ThreadSpawn { .. } => "collab_spawn".to_string(),
crate::protocol::SubAgentSource::Other(label) => label.clone(),
};
if let Ok(val) = HeaderValue::from_str(&subagent) {
extra_headers.insert("x-openai-subagent", val);

View File

@@ -1,4 +1,5 @@
use std::collections::HashMap;
use std::collections::HashSet;
use std::fmt::Debug;
use std::path::PathBuf;
use std::sync::Arc;
@@ -16,6 +17,7 @@ use crate::compact;
use crate::compact::run_inline_auto_compact_task;
use crate::compact::should_use_remote_compact_task;
use crate::compact_remote::run_inline_remote_auto_compact_task;
use crate::connectors;
use crate::exec_policy::ExecPolicyManager;
use crate::features::Feature;
use crate::features::Features;
@@ -33,6 +35,7 @@ use async_channel::Receiver;
use async_channel::Sender;
use codex_protocol::ThreadId;
use codex_protocol::approvals::ExecPolicyAmendment;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Settings;
use codex_protocol::config_types::WebSearchMode;
use codex_protocol::items::TurnItem;
@@ -86,6 +89,7 @@ use crate::client::ModelClient;
use crate::client::ModelClientSession;
use crate::client_common::Prompt;
use crate::client_common::ResponseEvent;
use crate::codex_thread::ThreadConfigSnapshot;
use crate::compact::collect_user_messages;
use crate::config::Config;
use crate::config::Constrained;
@@ -102,7 +106,10 @@ use crate::exec::StreamOutput;
use crate::exec_policy::ExecPolicyUpdateError;
use crate::feedback_tags;
use crate::instructions::UserInstructions;
use crate::mcp::CODEX_APPS_MCP_SERVER_NAME;
use crate::mcp::auth::compute_auth_statuses;
use crate::mcp::effective_mcp_servers;
use crate::mcp::with_codex_apps_mcp;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::model_provider_info::CHAT_WIRE_API_DEPRECATION_SUMMARY;
use crate::project_doc::get_user_instructions;
@@ -165,6 +172,7 @@ use crate::util::backoff;
use codex_async_utils::OrCancelExt;
use codex_otel::OtelManager;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::models::ContentItem;
use codex_protocol::models::DeveloperInstructions;
@@ -185,6 +193,7 @@ pub struct Codex {
pub(crate) rx_event: Receiver<Event>,
// Last known status of the agent.
pub(crate) agent_status: watch::Receiver<AgentStatus>,
pub(crate) session: Arc<Session>,
}
/// Wrapper returned by [`Codex::spawn`] containing the spawned [`Codex`],
@@ -285,21 +294,25 @@ impl Codex {
.base_instructions
.clone()
.or_else(|| conversation_history.get_base_instructions().map(|s| s.text))
.unwrap_or_else(|| model_info.base_instructions.clone());
.unwrap_or_else(|| model_info.get_model_instructions(config.model_personality));
// TODO (aibrahim): Consolidate config.model and config.model_reasoning_effort into config.collaboration_mode
// to avoid extracting these fields separately and constructing CollaborationMode here.
let collaboration_mode = CollaborationMode::Custom(Settings {
model: model.clone(),
reasoning_effort: config.model_reasoning_effort,
developer_instructions: None,
});
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model: model.clone(),
reasoning_effort: config.model_reasoning_effort,
developer_instructions: None,
},
};
let session_configuration = SessionConfiguration {
provider: config.model_provider.clone(),
collaboration_mode,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions,
personality: config.model_personality,
base_instructions,
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy.clone(),
@@ -334,12 +347,13 @@ impl Codex {
let thread_id = session.conversation_id;
// This task will run until Op::Shutdown is received.
tokio::spawn(submission_loop(session, config, rx_sub));
tokio::spawn(submission_loop(Arc::clone(&session), config, rx_sub));
let codex = Codex {
next_id: AtomicU64::new(0),
tx_sub,
rx_event,
agent_status: agent_status_rx,
session,
};
#[allow(deprecated)]
@@ -383,6 +397,11 @@ impl Codex {
pub(crate) async fn agent_status(&self) -> AgentStatus {
self.agent_status.borrow().clone()
}
pub(crate) async fn thread_config_snapshot(&self) -> ThreadConfigSnapshot {
let state = self.session.state.lock().await;
state.session_configuration.thread_config_snapshot()
}
}
/// Context for an initialized model agent
@@ -414,6 +433,7 @@ pub(crate) struct TurnContext {
pub(crate) developer_instructions: Option<String>,
pub(crate) compact_prompt: Option<String>,
pub(crate) user_instructions: Option<String>,
pub(crate) personality: Option<Personality>,
pub(crate) approval_policy: AskForApproval,
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) shell_environment_policy: ShellEnvironmentPolicy,
@@ -453,6 +473,9 @@ pub(crate) struct SessionConfiguration {
/// Model instructions that are appended to the base instructions.
user_instructions: Option<String>,
/// Personality preference for the model.
personality: Option<Personality>,
/// Base instructions for the session.
base_instructions: String,
@@ -480,6 +503,19 @@ pub(crate) struct SessionConfiguration {
}
impl SessionConfiguration {
fn thread_config_snapshot(&self) -> ThreadConfigSnapshot {
ThreadConfigSnapshot {
model: self.collaboration_mode.model().to_string(),
model_provider_id: self.original_config_do_not_use.model_provider_id.clone(),
approval_policy: self.approval_policy.value(),
sandbox_policy: self.sandbox_policy.get().clone(),
cwd: self.cwd.clone(),
reasoning_effort: self.collaboration_mode.reasoning_effort(),
personality: self.personality,
session_source: self.session_source.clone(),
}
}
pub(crate) fn apply(&self, updates: &SessionSettingsUpdate) -> ConstraintResult<Self> {
let mut next_configuration = self.clone();
if let Some(collaboration_mode) = updates.collaboration_mode.clone() {
@@ -488,6 +524,9 @@ impl SessionConfiguration {
if let Some(summary) = updates.reasoning_summary {
next_configuration.model_reasoning_summary = summary;
}
if let Some(personality) = updates.personality {
next_configuration.personality = Some(personality);
}
if let Some(approval_policy) = updates.approval_policy {
next_configuration.approval_policy.set(approval_policy)?;
}
@@ -509,6 +548,7 @@ pub(crate) struct SessionSettingsUpdate {
pub(crate) collaboration_mode: Option<CollaborationMode>,
pub(crate) reasoning_summary: Option<ReasoningSummaryConfig>,
pub(crate) final_output_json_schema: Option<Option<Value>>,
pub(crate) personality: Option<Personality>,
}
impl Session {
@@ -520,6 +560,7 @@ impl Session {
per_turn_config.model_reasoning_effort =
session_configuration.collaboration_mode.reasoning_effort();
per_turn_config.model_reasoning_summary = session_configuration.model_reasoning_summary;
per_turn_config.model_personality = session_configuration.personality;
per_turn_config.features = config.features.clone();
per_turn_config
}
@@ -565,6 +606,7 @@ impl Session {
developer_instructions: session_configuration.developer_instructions.clone(),
compact_prompt: session_configuration.compact_prompt.clone(),
user_instructions: session_configuration.user_instructions.clone(),
personality: session_configuration.personality,
approval_policy: session_configuration.approval_policy.value(),
sandbox_policy: session_configuration.sandbox_policy.get().clone(),
shell_environment_policy: per_turn_config.shell_environment_policy.clone(),
@@ -631,23 +673,44 @@ impl Session {
// - initialize RolloutRecorder with new or resumed session info
// - perform default shell discovery
// - load history metadata
let rollout_fut = RolloutRecorder::new(&config, rollout_params);
let rollout_fut = async {
if config.ephemeral {
Ok(None)
} else {
RolloutRecorder::new(&config, rollout_params)
.await
.map(Some)
}
};
let history_meta_fut = crate::message_history::history_metadata(&config);
let auth_statuses_fut = compute_auth_statuses(
config.mcp_servers.iter(),
config.mcp_oauth_credentials_store_mode,
);
let auth_manager_clone = Arc::clone(&auth_manager);
let config_for_mcp = Arc::clone(&config);
let auth_and_mcp_fut = async move {
let auth = auth_manager_clone.auth().await;
let mcp_servers = effective_mcp_servers(&config_for_mcp, auth.as_ref());
let auth_statuses = compute_auth_statuses(
mcp_servers.iter(),
config_for_mcp.mcp_oauth_credentials_store_mode,
)
.await;
(auth, mcp_servers, auth_statuses)
};
// Join all independent futures.
let (rollout_recorder, (history_log_id, history_entry_count), auth_statuses) =
tokio::join!(rollout_fut, history_meta_fut, auth_statuses_fut);
let (
rollout_recorder,
(history_log_id, history_entry_count),
(auth, mcp_servers, auth_statuses),
) = tokio::join!(rollout_fut, history_meta_fut, auth_and_mcp_fut);
let rollout_recorder = rollout_recorder.map_err(|e| {
error!("failed to initialize rollout recorder: {e:#}");
anyhow::Error::from(e)
})?;
let rollout_path = rollout_recorder.rollout_path.clone();
let rollout_path = rollout_recorder
.as_ref()
.map(|rec| rec.rollout_path.clone());
let mut post_session_configured_events = Vec::<Event>::new();
@@ -681,7 +744,6 @@ impl Session {
}
maybe_push_chat_wire_api_deprecation(&config, &mut post_session_configured_events);
let auth = auth_manager.auth().await;
let auth = auth.as_ref();
let otel_manager = OtelManager::new(
conversation_id,
@@ -716,25 +778,19 @@ impl Session {
config.model_auto_compact_token_limit,
config.approval_policy.value(),
config.sandbox_policy.get().clone(),
config.mcp_servers.keys().map(String::as_str).collect(),
mcp_servers.keys().map(String::as_str).collect(),
config.active_profile.clone(),
);
let mut default_shell = shell::default_user_shell();
// Create the mutable state for the Session.
if config.features.enabled(Feature::ShellSnapshot) {
let timer = otel_manager.start_timer("codex.shell_snapshot.duration_ms", &[]);
default_shell.shell_snapshot =
ShellSnapshot::try_new(&config.codex_home, conversation_id, &default_shell)
.await
.map(Arc::new);
let success = if default_shell.shell_snapshot.is_some() {
"true"
} else {
"false"
};
let _ = timer.map(|timer| timer.record(&[("success", success)]));
otel_manager.counter("codex.shell_snapshot", 1, &[("success", success)])
ShellSnapshot::start_snapshotting(
config.codex_home.clone(),
conversation_id,
&mut default_shell,
otel_manager.clone(),
);
}
let state = SessionState::new(session_configuration.clone());
@@ -743,7 +799,7 @@ impl Session {
mcp_startup_cancellation_token: Mutex::new(CancellationToken::new()),
unified_exec_manager: UnifiedExecProcessManager::default(),
notifier: UserNotifier::new(config.notify.clone()),
rollout: Mutex::new(Some(rollout_recorder)),
rollout: Mutex::new(rollout_recorder),
user_shell: Arc::new(default_shell),
show_raw_agent_reasoning: config.show_raw_agent_reasoning,
exec_policy,
@@ -806,7 +862,7 @@ impl Session {
.write()
.await
.initialize(
&config.mcp_servers,
&mcp_servers,
config.mcp_oauth_credentials_store_mode,
auth_statuses.clone(),
tx_event.clone(),
@@ -1067,6 +1123,11 @@ impl Session {
.await
}
pub(crate) async fn current_collaboration_mode(&self) -> CollaborationMode {
let state = self.state.lock().await;
state.session_configuration.collaboration_mode.clone()
}
fn build_environment_update_item(
&self,
previous: Option<&Arc<TurnContext>>,
@@ -1109,6 +1170,34 @@ impl Session {
)
}
fn build_personality_update_item(
&self,
previous: Option<&Arc<TurnContext>>,
next: &TurnContext,
) -> Option<ResponseItem> {
let personality = next.personality?;
if let Some(prev) = previous
&& prev.personality == Some(personality)
{
return None;
}
let model_info = next.client.get_model_info();
let personality_message = Self::personality_message_for(&model_info, personality);
personality_message.map(|personality_message| {
DeveloperInstructions::personality_spec_message(personality_message).into()
})
}
fn personality_message_for(model_info: &ModelInfo, personality: Personality) -> Option<String> {
model_info
.model_instructions_template
.as_ref()
.and_then(|template| template.personality_messages.as_ref())
.and_then(|messages| messages.0.get(&personality))
.cloned()
}
fn build_collaboration_mode_update_item(
&self,
previous_collaboration_mode: &CollaborationMode,
@@ -1150,6 +1239,11 @@ impl Session {
) {
update_items.push(collaboration_mode_item);
}
if let Some(personality_item) =
self.build_personality_update_item(previous_context, current_context)
{
update_items.push(personality_item);
}
update_items
}
@@ -1181,7 +1275,7 @@ impl Session {
let rollout_items = vec![RolloutItem::EventMsg(event.msg.clone())];
self.persist_rollout_items(&rollout_items).await;
if let Err(e) = self.tx_event.send(event).await {
error!("failed to send tool call event: {e}");
debug!("dropping event because channel is closed: {e}");
}
}
@@ -1199,7 +1293,7 @@ impl Session {
.await;
self.flush_rollout().await;
if let Err(e) = self.tx_event.send(event).await {
error!("failed to send tool call event: {e}");
debug!("dropping event because channel is closed: {e}");
}
}
@@ -1504,6 +1598,7 @@ impl Session {
content: vec![ContentItem::InputText {
text: format!("Warning: {}", message.into()),
}],
end_turn: None,
};
self.record_conversation_items(ctx, &[item]).await;
@@ -1941,6 +2036,14 @@ impl Session {
}
};
let auth = self.services.auth_manager.auth().await;
let config = self.get_config().await;
let mcp_servers = with_codex_apps_mcp(
mcp_servers,
self.features.enabled(Feature::Connectors),
auth.as_ref(),
config.as_ref(),
);
let auth_statuses = compute_auth_statuses(mcp_servers.iter(), store_mode).await;
let sandbox_state = SandboxState {
sandbox_policy: turn_context.sandbox_policy.clone(),
@@ -2013,6 +2116,7 @@ async fn submission_loop(sess: Arc<Session>, config: Arc<Config>, rx_sub: Receiv
effort,
summary,
collaboration_mode,
personality,
} => {
let collaboration_mode = if let Some(collab_mode) = collaboration_mode {
collab_mode
@@ -2033,6 +2137,7 @@ async fn submission_loop(sess: Arc<Session>, config: Arc<Config>, rx_sub: Receiv
sandbox_policy,
collaboration_mode: Some(collaboration_mode),
reasoning_summary: summary,
personality,
..Default::default()
},
)
@@ -2120,6 +2225,7 @@ mod handlers {
use crate::mcp::auth::compute_auth_statuses;
use crate::mcp::collect_mcp_snapshot_from_manager;
use crate::mcp::effective_mcp_servers;
use crate::review_prompts::resolve_review_request;
use crate::tasks::CompactTask;
use crate::tasks::RegularTask;
@@ -2144,6 +2250,7 @@ mod handlers {
use crate::context_manager::is_user_turn_boundary;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Settings;
use codex_protocol::user_input::UserInput;
use codex_rmcp_client::ElicitationAction;
@@ -2217,13 +2324,17 @@ mod handlers {
final_output_json_schema,
items,
collaboration_mode,
personality,
} => {
let collaboration_mode = collaboration_mode.or_else(|| {
Some(CollaborationMode::Custom(Settings {
model: model.clone(),
reasoning_effort: effort,
developer_instructions: None,
}))
Some(CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model: model.clone(),
reasoning_effort: effort,
developer_instructions: None,
},
})
});
(
items,
@@ -2234,6 +2345,7 @@ mod handlers {
collaboration_mode,
reasoning_summary: Some(summary),
final_output_json_schema: Some(final_output_json_schema),
personality,
},
)
}
@@ -2431,13 +2543,12 @@ mod handlers {
pub async fn list_mcp_tools(sess: &Session, config: &Arc<Config>, sub_id: String) {
let mcp_connection_manager = sess.services.mcp_connection_manager.read().await;
let auth = sess.services.auth_manager.auth().await;
let mcp_servers = effective_mcp_servers(config, auth.as_ref());
let snapshot = collect_mcp_snapshot_from_manager(
&mcp_connection_manager,
compute_auth_statuses(
config.mcp_servers.iter(),
config.mcp_oauth_credentials_store_mode,
)
.await,
compute_auth_statuses(mcp_servers.iter(), config.mcp_oauth_credentials_store_mode)
.await,
)
.await;
let event = Event {
@@ -2482,8 +2593,7 @@ mod handlers {
for cwd in cwds {
let outcome = skills_manager.skills_for_cwd(&cwd, force_reload).await;
let errors = super::errors_to_info(&outcome.errors);
let enabled_skills = outcome.enabled_skills();
let skills_metadata = super::skills_to_info(&enabled_skills);
let skills_metadata = super::skills_to_info(&outcome.skills, &outcome.disabled_paths);
skills.push(SkillsListEntry {
cwd,
skills: skills_metadata,
@@ -2576,7 +2686,7 @@ mod handlers {
.filter(|item| is_user_turn_boundary(item))
.count();
sess.services.otel_manager.counter(
"conversation.turn.count",
"codex.conversation.turn.count",
i64::try_from(turn_count).unwrap_or(0),
&[],
);
@@ -2708,6 +2818,7 @@ async fn spawn_review_thread(
developer_instructions: None,
user_instructions: None,
compact_prompt: parent_turn_context.compact_prompt.clone(),
personality: parent_turn_context.personality,
approval_policy: parent_turn_context.approval_policy,
sandbox_policy: parent_turn_context.sandbox_policy.clone(),
shell_environment_policy: parent_turn_context.shell_environment_policy.clone(),
@@ -2736,7 +2847,10 @@ async fn spawn_review_thread(
.await;
}
fn skills_to_info(skills: &[SkillMetadata]) -> Vec<ProtocolSkillMetadata> {
fn skills_to_info(
skills: &[SkillMetadata],
disabled_paths: &HashSet<PathBuf>,
) -> Vec<ProtocolSkillMetadata> {
skills
.iter()
.map(|skill| ProtocolSkillMetadata {
@@ -2756,6 +2870,7 @@ fn skills_to_info(skills: &[SkillMetadata]) -> Vec<ProtocolSkillMetadata> {
}),
path: skill.path.clone(),
scope: skill.scope,
enabled: !disabled_paths.contains(&skill.path),
})
.collect()
}
@@ -2947,6 +3062,60 @@ async fn run_auto_compact(sess: &Arc<Session>, turn_context: &Arc<TurnContext>)
}
}
fn filter_connectors_for_input(
connectors: Vec<connectors::ConnectorInfo>,
input: &[ResponseItem],
) -> Vec<connectors::ConnectorInfo> {
let user_messages = collect_user_messages(input);
if user_messages.is_empty() {
return Vec::new();
}
connectors
.into_iter()
.filter(|connector| connector_inserted_in_messages(connector, &user_messages))
.collect()
}
fn connector_inserted_in_messages(
connector: &connectors::ConnectorInfo,
user_messages: &[String],
) -> bool {
let label = connectors::connector_display_label(connector);
let needle = label.to_lowercase();
let legacy = format!("{label} connector").to_lowercase();
user_messages.iter().any(|message| {
let message = message.to_lowercase();
message.contains(&needle) || message.contains(&legacy)
})
}
fn filter_codex_apps_mcp_tools(
mut mcp_tools: HashMap<String, crate::mcp_connection_manager::ToolInfo>,
connectors: &[connectors::ConnectorInfo],
) -> HashMap<String, crate::mcp_connection_manager::ToolInfo> {
let allowed: HashSet<&str> = connectors
.iter()
.map(|connector| connector.connector_id.as_str())
.collect();
mcp_tools.retain(|_, tool| {
if tool.server_name != CODEX_APPS_MCP_SERVER_NAME {
return true;
}
let Some(connector_id) = codex_apps_connector_id(tool) else {
return false;
};
allowed.contains(connector_id)
});
mcp_tools
}
fn codex_apps_connector_id(tool: &crate::mcp_connection_manager::ToolInfo) -> Option<&str> {
tool.connector_id.as_deref()
}
#[instrument(level = "trace",
skip_all,
fields(
@@ -2963,7 +3132,7 @@ async fn run_sampling_request(
input: Vec<ResponseItem>,
cancellation_token: CancellationToken,
) -> CodexResult<SamplingRequestResult> {
let mcp_tools = sess
let mut mcp_tools = sess
.services
.mcp_connection_manager
.read()
@@ -2971,6 +3140,20 @@ async fn run_sampling_request(
.list_all_tools()
.or_cancel(&cancellation_token)
.await?;
let connectors_for_tools = if turn_context
.client
.config()
.features
.enabled(Feature::Connectors)
{
let connectors = connectors::accessible_connectors_from_mcp_tools(&mcp_tools);
Some(filter_connectors_for_input(connectors, &input))
} else {
None
};
if let Some(connectors) = connectors_for_tools.as_ref() {
mcp_tools = filter_codex_apps_mcp_tools(mcp_tools, connectors);
}
let router = Arc::new(ToolRouter::from_config(
&turn_context.tools_config,
Some(
@@ -2993,7 +3176,7 @@ async fn run_sampling_request(
tools: router.specs(),
parallel_tool_calls: model_supports_parallel,
base_instructions,
personality: None,
personality: turn_context.personality,
output_schema: turn_context.final_output_json_schema.clone(),
};
@@ -3102,11 +3285,18 @@ async fn try_run_sampling_request(
prompt: &Prompt,
cancellation_token: CancellationToken,
) -> CodexResult<SamplingRequestResult> {
// TODO: If we need to guarantee the persisted mode always matches the prompt used for this
// turn, capture it in TurnContext at creation time. Using SessionConfiguration here avoids
// duplicating model settings on TurnContext, but a later Op could update the session config
// before this write occurs.
let collaboration_mode = sess.current_collaboration_mode().await;
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
personality: turn_context.personality,
collaboration_mode: Some(collaboration_mode),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
user_instructions: turn_context.user_instructions.clone(),
@@ -3604,6 +3794,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "turn 1 user".to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -3611,6 +3802,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: "turn 1 assistant".to_string(),
}],
end_turn: None,
},
];
sess.record_into_history(&turn_1, tc.as_ref()).await;
@@ -3622,6 +3814,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "turn 2 user".to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -3629,6 +3822,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: "turn 2 assistant".to_string(),
}],
end_turn: None,
},
];
sess.record_into_history(&turn_2, tc.as_ref()).await;
@@ -3660,6 +3854,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "turn 1 user".to_string(),
}],
end_turn: None,
}];
sess.record_into_history(&turn_1, tc.as_ref()).await;
@@ -3722,21 +3917,25 @@ mod tests {
let model = ModelsManager::get_model_offline(config.model.as_deref());
let model_info = ModelsManager::construct_model_info_offline(model.as_str(), &config);
let reasoning_effort = config.model_reasoning_effort;
let collaboration_mode = CollaborationMode::Custom(Settings {
model,
reasoning_effort,
developer_instructions: None,
});
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model,
reasoning_effort,
developer_instructions: None,
},
};
let session_configuration = SessionConfiguration {
provider: config.model_provider.clone(),
collaboration_mode,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions: config.user_instructions.clone(),
personality: config.model_personality,
base_instructions: config
.base_instructions
.clone()
.unwrap_or_else(|| model_info.base_instructions.clone()),
.unwrap_or_else(|| model_info.get_model_instructions(config.model_personality)),
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy.clone(),
sandbox_policy: config.sandbox_policy.clone(),
@@ -3797,21 +3996,25 @@ mod tests {
let model = ModelsManager::get_model_offline(config.model.as_deref());
let model_info = ModelsManager::construct_model_info_offline(model.as_str(), &config);
let reasoning_effort = config.model_reasoning_effort;
let collaboration_mode = CollaborationMode::Custom(Settings {
model,
reasoning_effort,
developer_instructions: None,
});
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model,
reasoning_effort,
developer_instructions: None,
},
};
let session_configuration = SessionConfiguration {
provider: config.model_provider.clone(),
collaboration_mode,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions: config.user_instructions.clone(),
personality: config.model_personality,
base_instructions: config
.base_instructions
.clone()
.unwrap_or_else(|| model_info.base_instructions.clone()),
.unwrap_or_else(|| model_info.get_model_instructions(config.model_personality)),
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy.clone(),
sandbox_policy: config.sandbox_policy.clone(),
@@ -4056,21 +4259,25 @@ mod tests {
let model = ModelsManager::get_model_offline(config.model.as_deref());
let model_info = ModelsManager::construct_model_info_offline(model.as_str(), &config);
let reasoning_effort = config.model_reasoning_effort;
let collaboration_mode = CollaborationMode::Custom(Settings {
model,
reasoning_effort,
developer_instructions: None,
});
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model,
reasoning_effort,
developer_instructions: None,
},
};
let session_configuration = SessionConfiguration {
provider: config.model_provider.clone(),
collaboration_mode,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions: config.user_instructions.clone(),
personality: config.model_personality,
base_instructions: config
.base_instructions
.clone()
.unwrap_or_else(|| model_info.base_instructions.clone()),
.unwrap_or_else(|| model_info.get_model_instructions(config.model_personality)),
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy.clone(),
sandbox_policy: config.sandbox_policy.clone(),
@@ -4160,21 +4367,25 @@ mod tests {
let model = ModelsManager::get_model_offline(config.model.as_deref());
let model_info = ModelsManager::construct_model_info_offline(model.as_str(), &config);
let reasoning_effort = config.model_reasoning_effort;
let collaboration_mode = CollaborationMode::Custom(Settings {
model,
reasoning_effort,
developer_instructions: None,
});
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model,
reasoning_effort,
developer_instructions: None,
},
};
let session_configuration = SessionConfiguration {
provider: config.model_provider.clone(),
collaboration_mode,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions: config.user_instructions.clone(),
personality: config.model_personality,
base_instructions: config
.base_instructions
.clone()
.unwrap_or_else(|| model_info.base_instructions.clone()),
.unwrap_or_else(|| model_info.get_model_instructions(config.model_personality)),
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy.clone(),
sandbox_policy: config.sandbox_policy.clone(),
@@ -4552,6 +4763,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "first user".to_string(),
}],
end_turn: None,
};
live_history.record_items(std::iter::once(&user1), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(user1.clone()));
@@ -4562,6 +4774,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: "assistant reply one".to_string(),
}],
end_turn: None,
};
live_history.record_items(std::iter::once(&assistant1), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(assistant1.clone()));
@@ -4586,6 +4799,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "second user".to_string(),
}],
end_turn: None,
};
live_history.record_items(std::iter::once(&user2), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(user2.clone()));
@@ -4596,6 +4810,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: "assistant reply two".to_string(),
}],
end_turn: None,
};
live_history.record_items(std::iter::once(&assistant2), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(assistant2.clone()));
@@ -4620,6 +4835,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "third user".to_string(),
}],
end_turn: None,
};
live_history.record_items(std::iter::once(&user3), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(user3));
@@ -4630,6 +4846,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: "assistant reply three".to_string(),
}],
end_turn: None,
};
live_history.record_items(std::iter::once(&assistant3), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(assistant3));

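For the connector gating added to `run_sampling_request` above (`filter_connectors_for_input` and `filter_codex_apps_mcp_tools`), the matching rule is a case-insensitive substring check of each connector's display label (or the legacy "&lt;label&gt; connector" form) against the turn's user messages; only the matching connectors' codex_apps tools stay exposed. A simplified standalone sketch of that rule, using a hypothetical `Connector` type in place of the crate's `ConnectorInfo`:

```rust
/// Hypothetical stand-in for the crate's `ConnectorInfo`.
struct Connector {
    id: &'static str,
    display_label: &'static str,
}

/// Keep only connectors whose label appears in at least one user message.
fn filter_for_input<'a>(connectors: &'a [Connector], user_messages: &[String]) -> Vec<&'a Connector> {
    if user_messages.is_empty() {
        return Vec::new();
    }
    connectors
        .iter()
        .filter(|connector| {
            let needle = connector.display_label.to_lowercase();
            let legacy = format!("{} connector", connector.display_label).to_lowercase();
            user_messages.iter().any(|message| {
                let message = message.to_lowercase();
                message.contains(&needle) || message.contains(&legacy)
            })
        })
        .collect()
}

fn main() {
    let connectors = [
        Connector { id: "gdrive", display_label: "Google Drive" },
        Connector { id: "slack", display_label: "Slack" },
    ];
    let messages = vec!["please search my google drive for the Q3 notes".to_string()];

    let kept = filter_for_input(&connectors, &messages);
    let kept_ids: Vec<&str> = kept.iter().map(|c| c.id).collect();
    assert_eq!(kept_ids, vec!["gdrive"]); // only the mentioned connector survives the filter
}
```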
View File

@@ -92,6 +92,7 @@ pub(crate) async fn run_codex_thread_interactive(
tx_sub: tx_ops,
rx_event: rx_sub,
agent_status: codex.agent_status.clone(),
session: Arc::clone(&codex.session),
})
}
@@ -134,6 +135,7 @@ pub(crate) async fn run_codex_thread_one_shot(
let (tx_bridge, rx_bridge) = async_channel::bounded(SUBMISSION_CHANNEL_CAPACITY);
let ops_tx = io.tx_sub.clone();
let agent_status = io.agent_status.clone();
let session = Arc::clone(&io.session);
let io_for_bridge = io;
tokio::spawn(async move {
while let Ok(event) = io_for_bridge.next_event().await {
@@ -166,6 +168,7 @@ pub(crate) async fn run_codex_thread_one_shot(
rx_event: rx_bridge,
tx_sub: tx_closed,
agent_status,
session,
})
}
@@ -442,15 +445,15 @@ mod tests {
let (tx_events, rx_events) = bounded(1);
let (tx_sub, rx_sub) = bounded(SUBMISSION_CHANNEL_CAPACITY);
let (_agent_status_tx, agent_status) = watch::channel(AgentStatus::PendingInit);
let (session, ctx, _rx_evt) = crate::codex::make_session_and_context_with_rx().await;
let codex = Arc::new(Codex {
next_id: AtomicU64::new(0),
tx_sub,
rx_event: rx_events,
agent_status,
session: Arc::clone(&session),
});
let (session, ctx, _rx_evt) = crate::codex::make_session_and_context_with_rx().await;
let (tx_out, rx_out) = bounded(1);
tx_out
.send(Event {

View File

@@ -4,18 +4,35 @@ use crate::error::Result as CodexResult;
use crate::protocol::Event;
use crate::protocol::Op;
use crate::protocol::Submission;
use codex_protocol::config_types::Personality;
use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SessionSource;
use std::path::PathBuf;
use tokio::sync::watch;
#[derive(Clone, Debug)]
pub struct ThreadConfigSnapshot {
pub model: String,
pub model_provider_id: String,
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub cwd: PathBuf,
pub reasoning_effort: Option<ReasoningEffort>,
pub personality: Option<Personality>,
pub session_source: SessionSource,
}
pub struct CodexThread {
codex: Codex,
rollout_path: PathBuf,
rollout_path: Option<PathBuf>,
}
/// Conduit for the bidirectional stream of messages that compose a thread
/// (formerly called a conversation) in Codex.
impl CodexThread {
pub(crate) fn new(codex: Codex, rollout_path: PathBuf) -> Self {
pub(crate) fn new(codex: Codex, rollout_path: Option<PathBuf>) -> Self {
Self {
codex,
rollout_path,
@@ -43,7 +60,11 @@ impl CodexThread {
self.codex.agent_status.clone()
}
pub fn rollout_path(&self) -> PathBuf {
pub fn rollout_path(&self) -> Option<PathBuf> {
self.rollout_path.clone()
}
pub async fn config_snapshot(&self) -> ThreadConfigSnapshot {
self.codex.thread_config_snapshot().await
}
}

View File

@@ -84,11 +84,18 @@ async fn run_compact_task_inner(
let max_retries = turn_context.client.get_provider().stream_max_retries();
let mut retries = 0;
// TODO: If we need to guarantee the persisted mode always matches the prompt used for this
// turn, capture it in TurnContext at creation time. Using SessionConfiguration here avoids
// duplicating model settings on TurnContext, but an Op after turn start could update the
// session config before this write occurs.
let collaboration_mode = sess.current_collaboration_mode().await;
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
personality: turn_context.personality,
collaboration_mode: Some(collaboration_mode),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
user_instructions: turn_context.user_instructions.clone(),
@@ -105,6 +112,7 @@ async fn run_compact_task_inner(
let prompt = Prompt {
input: turn_input,
base_instructions: sess.get_base_instructions().await,
personality: turn_context.personality,
..Default::default()
};
let attempt_result = drain_to_completed(&sess, turn_context.as_ref(), &prompt).await;
@@ -299,6 +307,7 @@ fn build_compacted_history_with_limit(
content: vec![ContentItem::InputText {
text: message.clone(),
}],
end_turn: None,
});
}
@@ -312,6 +321,7 @@ fn build_compacted_history_with_limit(
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText { text: summary_text }],
end_turn: None,
});
history
@@ -400,6 +410,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: Some("user".to_string()),
@@ -407,6 +418,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "first".to_string(),
}],
end_turn: None,
},
ResponseItem::Other,
];
@@ -426,6 +438,7 @@ mod tests {
text: "# AGENTS.md instructions for project\n\n<INSTRUCTIONS>\ndo things\n</INSTRUCTIONS>"
.to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -433,6 +446,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "<ENVIRONMENT_CONTEXT>cwd=/tmp</ENVIRONMENT_CONTEXT>".to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -440,6 +454,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "real user message".to_string(),
}],
end_turn: None,
},
];
@@ -524,6 +539,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: marker.clone(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -531,6 +547,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "real user message".to_string(),
}],
end_turn: None,
},
];

View File

@@ -55,7 +55,7 @@ async fn run_remote_compact_task_inner_impl(
tools: vec![],
parallel_tool_calls: false,
base_instructions: sess.get_base_instructions().await,
personality: None,
personality: turn_context.personality,
output_schema: None,
};

View File

@@ -11,9 +11,7 @@ use crate::config::types::Notifications;
use crate::config::types::OtelConfig;
use crate::config::types::OtelConfigToml;
use crate::config::types::OtelExporterKind;
use crate::config::types::Personality;
use crate::config::types::SandboxWorkspaceWrite;
use crate::config::types::ScrollInputMode;
use crate::config::types::ShellEnvironmentPolicy;
use crate::config::types::ShellEnvironmentPolicyToml;
use crate::config::types::SkillsConfig;
@@ -44,6 +42,8 @@ use codex_app_server_protocol::Tools;
use codex_app_server_protocol::UserSavedConfig;
use codex_protocol::config_types::AltScreenMode;
use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::TrustLevel;
@@ -89,6 +89,7 @@ pub use codex_git::GhostSnapshotConfig;
/// files are *silently truncated* to this size so we do not take up too much of
/// the context window.
pub(crate) const PROJECT_DOC_MAX_BYTES: usize = 32 * 1024; // 32 KiB
pub(crate) const DEFAULT_AGENT_MAX_THREADS: Option<usize> = Some(6);
pub const CONFIG_TOML_FILE: &str = "config.toml";
@@ -199,57 +200,8 @@ pub struct Config {
/// Show startup tooltips in the TUI welcome screen.
pub show_tooltips: bool,
/// Override the events-per-wheel-tick factor for TUI2 scroll normalization.
///
/// This is the same `tui.scroll_events_per_tick` value from `config.toml`, plumbed through the
/// merged [`Config`] object (see [`Tui`]) so TUI2 can normalize scroll event density per
/// terminal.
pub tui_scroll_events_per_tick: Option<u16>,
/// Override the number of lines applied per wheel tick in TUI2.
///
/// This is the same `tui.scroll_wheel_lines` value from `config.toml` (see [`Tui`]). TUI2
/// applies it to wheel-like scroll streams. Trackpad-like scrolling uses a separate
/// `tui.scroll_trackpad_lines` setting.
pub tui_scroll_wheel_lines: Option<u16>,
/// Override the number of lines per tick-equivalent used for trackpad scrolling in TUI2.
///
/// This is the same `tui.scroll_trackpad_lines` value from `config.toml` (see [`Tui`]).
pub tui_scroll_trackpad_lines: Option<u16>,
/// Trackpad acceleration: approximate number of events required to gain +1x speed in TUI2.
///
/// This is the same `tui.scroll_trackpad_accel_events` value from `config.toml` (see [`Tui`]).
pub tui_scroll_trackpad_accel_events: Option<u16>,
/// Trackpad acceleration: maximum multiplier applied to trackpad-like streams in TUI2.
///
/// This is the same `tui.scroll_trackpad_accel_max` value from `config.toml` (see [`Tui`]).
pub tui_scroll_trackpad_accel_max: Option<u16>,
/// Control how TUI2 interprets mouse scroll input (wheel vs trackpad).
///
/// This is the same `tui.scroll_mode` value from `config.toml` (see [`Tui`]).
pub tui_scroll_mode: ScrollInputMode,
/// Override the wheel tick detection threshold (ms) for TUI2 auto scroll mode.
///
/// This is the same `tui.scroll_wheel_tick_detect_max_ms` value from `config.toml` (see
/// [`Tui`]).
pub tui_scroll_wheel_tick_detect_max_ms: Option<u64>,
/// Override the wheel-like end-of-stream threshold (ms) for TUI2 auto scroll mode.
///
/// This is the same `tui.scroll_wheel_like_max_duration_ms` value from `config.toml` (see
/// [`Tui`]).
pub tui_scroll_wheel_like_max_duration_ms: Option<u64>,
/// Invert mouse scroll direction for TUI2.
///
/// This is the same `tui.scroll_invert` value from `config.toml` (see [`Tui`]) and is applied
/// consistently to both mouse wheels and trackpads.
pub tui_scroll_invert: bool,
/// Start the TUI in the specified collaboration mode (plan/execute/etc.).
pub experimental_mode: Option<ModeKind>,
/// Controls whether the TUI uses the terminal's alternate screen buffer.
///
@@ -299,6 +251,9 @@ pub struct Config {
/// Token budget applied when storing tool/function outputs in the context manager.
pub tool_output_token_limit: Option<usize>,
/// Maximum number of agent threads that can be open concurrently.
pub agent_max_threads: Option<usize>,
/// Directory containing all Codex state (defaults to `~/.codex` but can be
/// overridden by the `CODEX_HOME` environment variable).
pub codex_home: PathBuf,
@@ -306,6 +261,9 @@ pub struct Config {
/// Settings that govern if and what will be written to `~/.codex/history.jsonl`.
pub history: History,
/// When true, the session is not persisted on disk. Defaults to `false`.
pub ephemeral: bool,
/// Optional URI-based file opener. If set, citations to files in the model
/// output will be hyperlinked using the specified URI scheme.
pub file_opener: UriBasedFileOpener,
@@ -455,9 +413,21 @@ impl ConfigBuilder {
// relative paths to absolute paths based on the parent folder of the
// respective config file, so we should be safe to deserialize without
// AbsolutePathBufGuard here.
let config_toml: ConfigToml = merged_toml
.try_into()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?;
let config_toml: ConfigToml = match merged_toml.try_into() {
Ok(config_toml) => config_toml,
Err(err) => {
if let Some(config_error) =
crate::config_loader::first_layer_config_error(&config_layer_stack).await
{
return Err(crate::config_loader::io_error_from_config_error(
std::io::ErrorKind::InvalidData,
config_error,
Some(err),
));
}
return Err(std::io::Error::new(std::io::ErrorKind::InvalidData, err));
}
};
Config::load_config_with_layer_stack(
config_toml,
harness_overrides,
@@ -924,6 +894,9 @@ pub struct ConfigToml {
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
/// Agent-related settings (thread limits, etc.).
pub agents: Option<AgentsToml>,
/// User-level skill config entries keyed by SKILL.md path.
pub skills: Option<SkillsConfig>,
@@ -1033,6 +1006,15 @@ pub struct ToolsToml {
pub view_image: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, Default, PartialEq, Eq, JsonSchema)]
#[schemars(deny_unknown_fields)]
pub struct AgentsToml {
/// Maximum number of agent threads that can be open concurrently.
/// When unset, the default limit (`DEFAULT_AGENT_MAX_THREADS`) applies.
#[schemars(range(min = 1))]
pub max_threads: Option<usize>,
}
impl From<ToolsToml> for Tools {
fn from(tools_toml: ToolsToml) -> Self {
Self {
@@ -1174,10 +1156,12 @@ pub struct ConfigOverrides {
pub codex_linux_sandbox_exe: Option<PathBuf>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub model_personality: Option<Personality>,
pub compact_prompt: Option<String>,
pub include_apply_patch_tool: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
pub ephemeral: Option<bool>,
/// Additional directories that should be treated as writable roots for this session.
pub additional_writable_roots: Vec<PathBuf>,
}
@@ -1261,10 +1245,12 @@ impl Config {
codex_linux_sandbox_exe,
base_instructions,
developer_instructions,
model_personality,
compact_prompt,
include_apply_patch_tool: include_apply_patch_tool_override,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
ephemeral,
additional_writable_roots,
} = overrides;
@@ -1392,6 +1378,18 @@ impl Config {
let history = cfg.history.unwrap_or_default();
let agent_max_threads = cfg
.agents
.as_ref()
.and_then(|agents| agents.max_threads)
.or(DEFAULT_AGENT_MAX_THREADS);
if agent_max_threads == Some(0) {
return Err(std::io::Error::new(
std::io::ErrorKind::InvalidInput,
"agents.max_threads must be at least 1",
));
}
let ghost_snapshot = {
let mut config = GhostSnapshotConfig::default();
if let Some(ghost_snapshot) = cfg.ghost_snapshot.as_ref()
@@ -1454,6 +1452,9 @@ impl Config {
Self::try_read_non_empty_file(model_instructions_path, "model instructions file")?;
let base_instructions = base_instructions.or(file_base_instructions);
let developer_instructions = developer_instructions.or(cfg.developer_instructions);
let model_personality = model_personality
.or(config_profile.model_personality)
.or(cfg.model_personality);
let experimental_compact_prompt_path = config_profile
.experimental_compact_prompt_file
@@ -1503,7 +1504,7 @@ impl Config {
notify: cfg.notify,
user_instructions,
base_instructions,
model_personality: config_profile.model_personality.or(cfg.model_personality),
model_personality,
developer_instructions,
compact_prompt,
// The config.toml omits "_mode" because it's a config file. However, "_mode"
@@ -1530,9 +1531,11 @@ impl Config {
})
.collect(),
tool_output_token_limit: cfg.tool_output_token_limit,
agent_max_threads,
codex_home,
config_layer_stack,
history,
ephemeral: ephemeral.unwrap_or_default(),
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
codex_linux_sandbox_exe,
@@ -1584,27 +1587,7 @@ impl Config {
.unwrap_or_default(),
animations: cfg.tui.as_ref().map(|t| t.animations).unwrap_or(true),
show_tooltips: cfg.tui.as_ref().map(|t| t.show_tooltips).unwrap_or(true),
tui_scroll_events_per_tick: cfg.tui.as_ref().and_then(|t| t.scroll_events_per_tick),
tui_scroll_wheel_lines: cfg.tui.as_ref().and_then(|t| t.scroll_wheel_lines),
tui_scroll_trackpad_lines: cfg.tui.as_ref().and_then(|t| t.scroll_trackpad_lines),
tui_scroll_trackpad_accel_events: cfg
.tui
.as_ref()
.and_then(|t| t.scroll_trackpad_accel_events),
tui_scroll_trackpad_accel_max: cfg
.tui
.as_ref()
.and_then(|t| t.scroll_trackpad_accel_max),
tui_scroll_mode: cfg.tui.as_ref().map(|t| t.scroll_mode).unwrap_or_default(),
tui_scroll_wheel_tick_detect_max_ms: cfg
.tui
.as_ref()
.and_then(|t| t.scroll_wheel_tick_detect_max_ms),
tui_scroll_wheel_like_max_duration_ms: cfg
.tui
.as_ref()
.and_then(|t| t.scroll_wheel_like_max_duration_ms),
tui_scroll_invert: cfg.tui.as_ref().map(|t| t.scroll_invert).unwrap_or(false),
experimental_mode: cfg.tui.as_ref().and_then(|t| t.experimental_mode),
tui_alternate_screen: cfg
.tui
.as_ref()
@@ -1857,15 +1840,7 @@ persistence = "none"
notifications: Notifications::Enabled(true),
animations: true,
show_tooltips: true,
scroll_events_per_tick: None,
scroll_wheel_lines: None,
scroll_trackpad_lines: None,
scroll_trackpad_accel_events: None,
scroll_trackpad_accel_max: None,
scroll_mode: ScrollInputMode::Auto,
scroll_wheel_tick_detect_max_ms: None,
scroll_wheel_like_max_duration_ms: None,
scroll_invert: false,
experimental_mode: None,
alternate_screen: AltScreenMode::Auto,
}
);
@@ -3718,9 +3693,11 @@ model_verbosity = "high"
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
agent_max_threads: DEFAULT_AGENT_MAX_THREADS,
codex_home: fixture.codex_home(),
config_layer_stack: Default::default(),
history: History::default(),
ephemeral: false,
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
@@ -3750,17 +3727,9 @@ model_verbosity = "high"
tui_notifications: Default::default(),
animations: true,
show_tooltips: true,
experimental_mode: None,
analytics_enabled: Some(true),
feedback_enabled: true,
tui_scroll_events_per_tick: None,
tui_scroll_wheel_lines: None,
tui_scroll_trackpad_lines: None,
tui_scroll_trackpad_accel_events: None,
tui_scroll_trackpad_accel_max: None,
tui_scroll_mode: ScrollInputMode::Auto,
tui_scroll_wheel_tick_detect_max_ms: None,
tui_scroll_wheel_like_max_duration_ms: None,
tui_scroll_invert: false,
tui_alternate_screen: AltScreenMode::Auto,
otel: OtelConfig::default(),
},
@@ -3806,9 +3775,11 @@ model_verbosity = "high"
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
agent_max_threads: DEFAULT_AGENT_MAX_THREADS,
codex_home: fixture.codex_home(),
config_layer_stack: Default::default(),
history: History::default(),
ephemeral: false,
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
@@ -3838,17 +3809,9 @@ model_verbosity = "high"
tui_notifications: Default::default(),
animations: true,
show_tooltips: true,
experimental_mode: None,
analytics_enabled: Some(true),
feedback_enabled: true,
tui_scroll_events_per_tick: None,
tui_scroll_wheel_lines: None,
tui_scroll_trackpad_lines: None,
tui_scroll_trackpad_accel_events: None,
tui_scroll_trackpad_accel_max: None,
tui_scroll_mode: ScrollInputMode::Auto,
tui_scroll_wheel_tick_detect_max_ms: None,
tui_scroll_wheel_like_max_duration_ms: None,
tui_scroll_invert: false,
tui_alternate_screen: AltScreenMode::Auto,
otel: OtelConfig::default(),
};
@@ -3909,9 +3872,11 @@ model_verbosity = "high"
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
agent_max_threads: DEFAULT_AGENT_MAX_THREADS,
codex_home: fixture.codex_home(),
config_layer_stack: Default::default(),
history: History::default(),
ephemeral: false,
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
@@ -3941,17 +3906,9 @@ model_verbosity = "high"
tui_notifications: Default::default(),
animations: true,
show_tooltips: true,
experimental_mode: None,
analytics_enabled: Some(false),
feedback_enabled: true,
tui_scroll_events_per_tick: None,
tui_scroll_wheel_lines: None,
tui_scroll_trackpad_lines: None,
tui_scroll_trackpad_accel_events: None,
tui_scroll_trackpad_accel_max: None,
tui_scroll_mode: ScrollInputMode::Auto,
tui_scroll_wheel_tick_detect_max_ms: None,
tui_scroll_wheel_like_max_duration_ms: None,
tui_scroll_invert: false,
tui_alternate_screen: AltScreenMode::Auto,
otel: OtelConfig::default(),
};
@@ -3998,9 +3955,11 @@ model_verbosity = "high"
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
agent_max_threads: DEFAULT_AGENT_MAX_THREADS,
codex_home: fixture.codex_home(),
config_layer_stack: Default::default(),
history: History::default(),
ephemeral: false,
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
@@ -4030,17 +3989,9 @@ model_verbosity = "high"
tui_notifications: Default::default(),
animations: true,
show_tooltips: true,
experimental_mode: None,
analytics_enabled: Some(true),
feedback_enabled: true,
tui_scroll_events_per_tick: None,
tui_scroll_wheel_lines: None,
tui_scroll_trackpad_lines: None,
tui_scroll_trackpad_accel_events: None,
tui_scroll_trackpad_accel_max: None,
tui_scroll_mode: ScrollInputMode::Auto,
tui_scroll_wheel_tick_detect_max_ms: None,
tui_scroll_wheel_like_max_duration_ms: None,
tui_scroll_invert: false,
tui_alternate_screen: AltScreenMode::Auto,
otel: OtelConfig::default(),
};
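
To recap the `agents.max_threads` handling added above, here is a minimal standalone sketch of the resolution rule (fall back to `Some(6)`, reject an explicit `0`); it mirrors the logic in the diff but is not the crate's code:

```rust
// Standalone sketch of the resolution rule shown in the diff above:
// an unset value falls back to DEFAULT_AGENT_MAX_THREADS (Some(6)),
// and an explicit 0 is rejected as invalid input.
const DEFAULT_AGENT_MAX_THREADS: Option<usize> = Some(6);

fn resolve_agent_max_threads(configured: Option<usize>) -> std::io::Result<Option<usize>> {
    let resolved = configured.or(DEFAULT_AGENT_MAX_THREADS);
    if resolved == Some(0) {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidInput,
            "agents.max_threads must be at least 1",
        ));
    }
    Ok(resolved)
}

fn main() -> std::io::Result<()> {
    assert_eq!(resolve_agent_max_threads(None)?, Some(6)); // default applies
    assert_eq!(resolve_agent_max_threads(Some(2))?, Some(2)); // explicit limit
    assert!(resolve_agent_max_threads(Some(0)).is_err()); // rejected
    Ok(())
}
```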

View File

@@ -8,6 +8,8 @@ use schemars::schema::ObjectValidation;
use schemars::schema::RootSchema;
use schemars::schema::Schema;
use schemars::schema::SchemaObject;
use serde_json::Map;
use serde_json::Value;
use std::path::Path;
/// Schema for the `[features]` map with known + legacy keys only.
@@ -60,10 +62,29 @@ pub fn config_schema() -> RootSchema {
.into_root_schema_for::<ConfigToml>()
}
/// Canonicalize a JSON value by recursively sorting object keys.
fn canonicalize(value: &Value) -> Value {
match value {
Value::Array(items) => Value::Array(items.iter().map(canonicalize).collect()),
Value::Object(map) => {
let mut entries: Vec<_> = map.iter().collect();
entries.sort_by(|(left, _), (right, _)| left.cmp(right));
let mut sorted = Map::with_capacity(map.len());
for (key, child) in entries {
sorted.insert(key.clone(), canonicalize(child));
}
Value::Object(sorted)
}
_ => value.clone(),
}
}
/// Render the config schema as pretty-printed JSON.
pub fn config_schema_json() -> anyhow::Result<Vec<u8>> {
let schema = config_schema();
let json = serde_json::to_vec_pretty(&schema)?;
let value = serde_json::to_value(schema)?;
let value = canonicalize(&value);
let json = serde_json::to_vec_pretty(&value)?;
Ok(json)
}
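
For context, a self-contained sketch of why the canonicalization pass matters (the helper is copied verbatim from above; the demo inputs are made up):

```rust
// Demo of the canonicalization step above: two values that differ only in key
// order render to identical pretty JSON after recursive key sorting. With
// serde_json's default (BTreeMap-backed) Map the keys are already ordered;
// the pass matters when the `preserve_order` feature keeps insertion order.
use serde_json::Map;
use serde_json::Value;
use serde_json::json;

fn canonicalize(value: &Value) -> Value {
    match value {
        Value::Array(items) => Value::Array(items.iter().map(canonicalize).collect()),
        Value::Object(map) => {
            let mut entries: Vec<_> = map.iter().collect();
            entries.sort_by(|(left, _), (right, _)| left.cmp(right));
            let mut sorted = Map::with_capacity(map.len());
            for (key, child) in entries {
                sorted.insert(key.clone(), canonicalize(child));
            }
            Value::Object(sorted)
        }
        _ => value.clone(),
    }
}

fn main() {
    let a = json!({ "b": 1, "a": { "y": [2, 3], "x": true } });
    let b = json!({ "a": { "x": true, "y": [2, 3] }, "b": 1 });
    let render =
        |value: &Value| serde_json::to_string_pretty(&canonicalize(value)).expect("serialize");
    assert_eq!(render(&a), render(&b));
}
```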
@@ -76,26 +97,10 @@ pub fn write_config_schema(out_path: &Path) -> anyhow::Result<()> {
#[cfg(test)]
mod tests {
use super::canonicalize;
use super::config_schema_json;
use serde_json::Map;
use serde_json::Value;
use similar::TextDiff;
fn canonicalize(value: &Value) -> Value {
match value {
Value::Array(items) => Value::Array(items.iter().map(canonicalize).collect()),
Value::Object(map) => {
let mut entries: Vec<_> = map.iter().collect();
entries.sort_by(|(left, _), (right, _)| left.cmp(right));
let mut sorted = Map::with_capacity(map.len());
for (key, child) in entries {
sorted.insert(key.clone(), canonicalize(child));
}
Value::Object(sorted)
}
_ => value.clone(),
}
}
use similar::TextDiff;
#[test]
fn config_schema_matches_fixture() {

View File

@@ -4,6 +4,7 @@ use crate::config::edit::ConfigEdit;
use crate::config::edit::ConfigEditsBuilder;
use crate::config_loader::ConfigLayerEntry;
use crate::config_loader::ConfigLayerStack;
use crate::config_loader::ConfigLayerStackOrdering;
use crate::config_loader::ConfigRequirementsToml;
use crate::config_loader::LoaderOverrides;
use crate::config_loader::load_config_layers_state;
@@ -135,10 +136,27 @@ impl ConfigService {
&self,
params: ConfigReadParams,
) -> Result<ConfigReadResponse, ConfigServiceError> {
let layers = self
.load_thread_agnostic_config()
.await
.map_err(|err| ConfigServiceError::io("failed to read configuration layers", err))?;
let layers = match params.cwd.as_deref() {
Some(cwd) => {
let cwd = AbsolutePathBuf::try_from(PathBuf::from(cwd)).map_err(|err| {
ConfigServiceError::io("failed to resolve config cwd to an absolute path", err)
})?;
crate::config::ConfigBuilder::default()
.codex_home(self.codex_home.clone())
.cli_overrides(self.cli_overrides.clone())
.loader_overrides(self.loader_overrides.clone())
.fallback_cwd(Some(cwd.to_path_buf()))
.build()
.await
.map_err(|err| {
ConfigServiceError::io("failed to read configuration layers", err)
})?
.config_layer_stack
}
None => self.load_thread_agnostic_config().await.map_err(|err| {
ConfigServiceError::io("failed to read configuration layers", err)
})?,
};
let effective = layers.effective_config();
validate_config(&effective)
@@ -154,7 +172,7 @@ impl ConfigService {
origins: layers.origins(),
layers: params.include_layers.then(|| {
layers
.layers_high_to_low()
.get_layers(ConfigLayerStackOrdering::HighestPrecedenceFirst, true)
.iter()
.map(|layer| layer.as_layer())
.collect()
@@ -800,6 +818,7 @@ remote_compaction = true
let response = service
.read(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await
.expect("response");
@@ -892,6 +911,7 @@ remote_compaction = true
let read_after = service
.read(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await
.expect("read");
@@ -1032,6 +1052,7 @@ remote_compaction = true
let response = service
.read(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await
.expect("response");

View File

@@ -5,6 +5,8 @@
use crate::config_loader::RequirementSource;
pub use codex_protocol::config_types::AltScreenMode;
pub use codex_protocol::config_types::ModeKind;
pub use codex_protocol::config_types::Personality;
pub use codex_protocol::config_types::WebSearchMode;
use codex_utils_absolute_path::AbsolutePathBuf;
use std::collections::BTreeMap;
@@ -418,23 +420,6 @@ impl Default for Notifications {
}
}
/// How TUI2 should interpret mouse scroll events.
///
/// Terminals generally encode both mouse wheels and trackpads as the same "scroll up/down" mouse
/// button events, without a magnitude. This setting controls whether Codex uses a heuristic to
/// infer wheel vs trackpad per stream, or forces a specific behavior.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, Default)]
#[serde(rename_all = "snake_case")]
pub enum ScrollInputMode {
/// Infer wheel vs trackpad behavior per scroll stream.
#[default]
Auto,
/// Always treat scroll events as mouse-wheel input (fixed lines per tick).
Wheel,
/// Always treat scroll events as trackpad input (fractional accumulation).
Trackpad,
}
/// Collection of settings that are specific to the TUI.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema)]
#[schemars(deny_unknown_fields)]
@@ -454,108 +439,10 @@ pub struct Tui {
#[serde(default = "default_true")]
pub show_tooltips: bool,
/// Override the *wheel* event density used to normalize TUI2 scrolling.
///
/// Terminals generally deliver both mouse wheels and trackpads as discrete `scroll up/down`
/// mouse events with direction but no magnitude. Unfortunately, the *number* of raw events
/// per physical wheel notch varies by terminal (commonly 1, 3, or 9+). TUI2 uses this value
/// to normalize that raw event density into consistent "wheel tick" behavior.
///
/// Wheel math (conceptually):
///
/// - A single event contributes `1 / scroll_events_per_tick` tick-equivalents.
/// - Wheel-like streams then scale that by `scroll_wheel_lines` so one physical notch scrolls
/// a fixed number of lines.
///
/// Trackpad math is intentionally *not* fully tied to this value: in trackpad-like mode, TUI2
/// uses `min(scroll_events_per_tick, 3)` as the divisor so terminals with dense wheel ticks
/// (e.g. 9 events per notch) do not make trackpads feel artificially slow.
///
/// Defaults are derived per terminal from [`crate::terminal::TerminalInfo`] when TUI2 starts.
/// See `codex-rs/tui2/docs/scroll_input_model.md` for the probe data and rationale.
pub scroll_events_per_tick: Option<u16>,
/// Override how many transcript lines one physical *wheel notch* should scroll in TUI2.
///
/// This is the "classic feel" knob. Defaults to 3.
///
/// Wheel-like per-event contribution is `scroll_wheel_lines / scroll_events_per_tick`. For
/// example, in a terminal that emits 9 events per notch, the default `3 / 9` yields 1/3 of a
/// line per event and totals 3 lines once the full notch burst arrives.
///
/// See `codex-rs/tui2/docs/scroll_input_model.md` for details on the stream model and the
/// wheel/trackpad heuristic.
pub scroll_wheel_lines: Option<u16>,
/// Override baseline trackpad scroll sensitivity in TUI2.
///
/// Trackpads do not have discrete notches, but terminals still emit discrete `scroll up/down`
/// events. In trackpad-like mode, TUI2 accumulates fractional scroll and only applies whole
/// lines to the viewport.
///
/// Trackpad per-event contribution is:
///
/// - `scroll_trackpad_lines / min(scroll_events_per_tick, 3)`
///
/// (plus optional bounded acceleration; see `scroll_trackpad_accel_*`). The `min(..., 3)`
/// divisor is deliberate: `scroll_events_per_tick` is calibrated from *wheel* behavior and
/// can be much larger than trackpad event density, which would otherwise make trackpads feel
/// too slow in dense-wheel terminals.
///
/// Defaults to 1, meaning one tick-equivalent maps to one transcript line.
pub scroll_trackpad_lines: Option<u16>,
/// Trackpad acceleration: approximate number of events required to gain +1x speed in TUI2.
///
/// This keeps small swipes precise while allowing large/faster swipes to cover more content.
/// Defaults are chosen to address terminals where trackpad event density is comparatively low.
///
/// Concretely, TUI2 computes an acceleration multiplier for trackpad-like streams:
///
/// - `multiplier = clamp(1 + abs(events) / scroll_trackpad_accel_events, 1..scroll_trackpad_accel_max)`
///
/// The multiplier is applied to the stream's computed line delta (including any carried
/// fractional remainder).
pub scroll_trackpad_accel_events: Option<u16>,
/// Trackpad acceleration: maximum multiplier applied to trackpad-like streams.
///
/// Set to 1 to effectively disable trackpad acceleration.
///
/// See [`Tui::scroll_trackpad_accel_events`] for the exact multiplier formula.
pub scroll_trackpad_accel_max: Option<u16>,
/// Select how TUI2 interprets mouse scroll input.
///
/// - `auto` (default): infer wheel vs trackpad per scroll stream.
/// - `wheel`: always use wheel behavior (fixed lines per wheel notch).
/// - `trackpad`: always use trackpad behavior (fractional accumulation; wheel may feel slow).
/// Start the TUI in the specified collaboration mode (plan/execute/etc.).
/// Defaults to unset.
#[serde(default)]
pub scroll_mode: ScrollInputMode,
/// Auto-mode threshold: maximum time (ms) for the first tick-worth of events to arrive.
///
/// In `scroll_mode = "auto"`, TUI2 starts a stream as trackpad-like (to avoid overshoot) and
/// promotes it to wheel-like if `scroll_events_per_tick` events arrive "quickly enough". This
/// threshold controls what "quickly enough" means.
///
/// Most users should leave this unset; it is primarily for terminals that emit wheel ticks
/// batched over longer time spans.
pub scroll_wheel_tick_detect_max_ms: Option<u64>,
/// Auto-mode fallback: maximum duration (ms) that a very small stream is still treated as wheel-like.
///
/// This is only used when `scroll_events_per_tick` is effectively 1 (one event per wheel
/// notch). In that case, we cannot observe a "tick completion time", so TUI2 treats a
/// short-lived, small stream (<= 2 events) as wheel-like to preserve classic wheel behavior.
pub scroll_wheel_like_max_duration_ms: Option<u64>,
/// Invert mouse scroll direction in TUI2.
///
/// This flips the scroll sign after terminal detection. It is applied consistently to both
/// wheel and trackpad input.
#[serde(default)]
pub scroll_invert: bool,
pub experimental_mode: Option<ModeKind>,
/// Controls whether the TUI uses the terminal's alternate screen buffer.
///
@@ -748,14 +635,6 @@ impl Default for ShellEnvironmentPolicy {
}
}
#[derive(Default, Debug, Clone, Eq, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "kebab-case")]
pub enum Personality {
Friendly,
#[default]
Pragmatic,
}
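
The enum now comes from `codex_protocol::config_types` (see the `pub use` above); a hedged sketch of the expected behavior, assuming the relocated type keeps the same serde representation and default as the removed definition:

```rust
// Sketch only: assumes codex_protocol::config_types::Personality preserves the
// kebab-case serde names and the Pragmatic default from the removed enum above.
use codex_protocol::config_types::Personality;

fn main() {
    let json = serde_json::to_string(&Personality::Friendly).expect("serialize");
    assert_eq!(json, "\"friendly\"");
    assert!(matches!(Personality::default(), Personality::Pragmatic));
}
```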
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -16,7 +16,7 @@ Exported from `codex_core::config_loader`:
- `origins() -> HashMap<String, ConfigLayerMetadata>`
- `layers_high_to_low() -> Vec<ConfigLayer>`
- `with_user_config(user_config) -> ConfigLayerStack`
- `ConfigLayerEntry` (one layer's `{name, config, version}`; `name` carries source metadata)
- `ConfigLayerEntry` (one layer's `{name, config, version, disabled_reason}`; `name` carries source metadata)
- `LoaderOverrides` (test/override hooks for managed config sources)
- `merge_toml_values(base, overlay)` (public helper used elsewhere)
@@ -29,7 +29,9 @@ Precedence is **top overrides bottom**:
3. **Session flags** (CLI overrides, applied as dotted-path TOML writes)
4. **User** config (`config.toml`)
This is what `ConfigLayerStack::effective_config()` implements.
Layers with a `disabled_reason` are still surfaced for UI, but are ignored when
computing the effective config and origins metadata. This is what
`ConfigLayerStack::effective_config()` implements.
## Typical usage
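
A hedged usage sketch (module paths and signatures are taken from the code and tests in this PR and may not match the public API exactly):

```rust
// Sketch only: loads the layer stack, reads the merged config (disabled layers
// excluded), then lists every layer, including disabled ones, for display.
use codex_core::config_loader::ConfigLayerStackOrdering;
use codex_core::config_loader::LoaderOverrides;
use codex_core::config_loader::load_config_layers_state;
use codex_utils_absolute_path::AbsolutePathBuf;
use std::path::Path;
use toml::Value as TomlValue;

async fn dump_layers(codex_home: &Path, cwd: AbsolutePathBuf) -> std::io::Result<()> {
    let stack = load_config_layers_state(
        codex_home,
        Some(cwd),
        &[] as &[(String, TomlValue)],
        LoaderOverrides::default(),
    )
    .await?;

    // Disabled (e.g. untrusted project) layers do not contribute here...
    let _effective = stack.effective_config();

    // ...but can still be surfaced by passing `include_disabled = true`.
    for layer in stack.get_layers(ConfigLayerStackOrdering::HighestPrecedenceFirst, true) {
        println!("version={} disabled_reason={:?}", layer.version, layer.disabled_reason);
    }
    Ok(())
}
```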

View File

@@ -0,0 +1,388 @@
//! Helpers for mapping config parse/validation failures to file locations and
//! rendering them in a user-friendly way.
use crate::config::CONFIG_TOML_FILE;
use crate::config::ConfigToml;
use codex_app_server_protocol::ConfigLayerSource;
use codex_utils_absolute_path::AbsolutePathBufGuard;
use serde_path_to_error::Path as SerdePath;
use serde_path_to_error::Segment as SerdeSegment;
use std::fmt;
use std::fmt::Write;
use std::io;
use std::path::Path;
use std::path::PathBuf;
use toml_edit::Document;
use toml_edit::Item;
use toml_edit::Table;
use toml_edit::Value;
use super::ConfigLayerEntry;
use super::ConfigLayerStack;
use super::ConfigLayerStackOrdering;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct TextPosition {
pub line: usize,
pub column: usize,
}
/// Text range in 1-based line/column coordinates.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct TextRange {
pub start: TextPosition,
pub end: TextPosition,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ConfigError {
pub path: PathBuf,
pub range: TextRange,
pub message: String,
}
impl ConfigError {
pub fn new(path: PathBuf, range: TextRange, message: impl Into<String>) -> Self {
Self {
path,
range,
message: message.into(),
}
}
}
#[derive(Debug)]
pub struct ConfigLoadError {
error: ConfigError,
source: Option<toml::de::Error>,
}
impl ConfigLoadError {
pub fn new(error: ConfigError, source: Option<toml::de::Error>) -> Self {
Self { error, source }
}
pub fn config_error(&self) -> &ConfigError {
&self.error
}
}
impl fmt::Display for ConfigLoadError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"{}:{}:{}: {}",
self.error.path.display(),
self.error.range.start.line,
self.error.range.start.column,
self.error.message
)
}
}
impl std::error::Error for ConfigLoadError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
self.source
.as_ref()
.map(|err| err as &dyn std::error::Error)
}
}
pub(crate) fn io_error_from_config_error(
kind: io::ErrorKind,
error: ConfigError,
source: Option<toml::de::Error>,
) -> io::Error {
io::Error::new(kind, ConfigLoadError::new(error, source))
}
pub(crate) fn config_error_from_toml(
path: impl AsRef<Path>,
contents: &str,
err: toml::de::Error,
) -> ConfigError {
let range = err
.span()
.map(|span| text_range_from_span(contents, span))
.unwrap_or_else(default_range);
ConfigError::new(path.as_ref().to_path_buf(), range, err.message())
}
pub(crate) fn config_error_from_config_toml(
path: impl AsRef<Path>,
contents: &str,
) -> Option<ConfigError> {
let deserializer = match toml::de::Deserializer::parse(contents) {
Ok(deserializer) => deserializer,
Err(err) => return Some(config_error_from_toml(path, contents, err)),
};
let result: Result<ConfigToml, _> = serde_path_to_error::deserialize(deserializer);
match result {
Ok(_) => None,
Err(err) => {
let path_hint = err.path().clone();
let toml_err: toml::de::Error = err.into_inner();
let range = span_for_config_path(contents, &path_hint)
.or_else(|| toml_err.span())
.map(|span| text_range_from_span(contents, span))
.unwrap_or_else(default_range);
Some(ConfigError::new(
path.as_ref().to_path_buf(),
range,
toml_err.message(),
))
}
}
}
pub(crate) async fn first_layer_config_error(layers: &ConfigLayerStack) -> Option<ConfigError> {
// When the merged config fails schema validation, we surface the first concrete
// per-file error to point users at a specific file and range rather than an
// opaque merged-layer failure.
first_layer_config_error_for_entries(
layers.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst, false),
)
.await
}
pub(crate) async fn first_layer_config_error_from_entries(
layers: &[ConfigLayerEntry],
) -> Option<ConfigError> {
first_layer_config_error_for_entries(layers.iter()).await
}
async fn first_layer_config_error_for_entries<'a, I>(layers: I) -> Option<ConfigError>
where
I: IntoIterator<Item = &'a ConfigLayerEntry>,
{
for layer in layers {
let Some(path) = config_path_for_layer(layer) else {
continue;
};
let contents = match tokio::fs::read_to_string(&path).await {
Ok(contents) => contents,
Err(err) if err.kind() == io::ErrorKind::NotFound => continue,
Err(err) => {
tracing::debug!("Failed to read config file {}: {err}", path.display());
continue;
}
};
let Some(parent) = path.parent() else {
tracing::debug!("Config file {} has no parent directory", path.display());
continue;
};
let _guard = AbsolutePathBufGuard::new(parent);
if let Some(error) = config_error_from_config_toml(&path, &contents) {
return Some(error);
}
}
None
}
fn config_path_for_layer(layer: &ConfigLayerEntry) -> Option<PathBuf> {
match &layer.name {
ConfigLayerSource::System { file } => Some(file.to_path_buf()),
ConfigLayerSource::User { file } => Some(file.to_path_buf()),
ConfigLayerSource::Project { dot_codex_folder } => {
Some(dot_codex_folder.as_path().join(CONFIG_TOML_FILE))
}
ConfigLayerSource::LegacyManagedConfigTomlFromFile { file } => Some(file.to_path_buf()),
ConfigLayerSource::Mdm { .. }
| ConfigLayerSource::SessionFlags
| ConfigLayerSource::LegacyManagedConfigTomlFromMdm => None,
}
}
fn text_range_from_span(contents: &str, span: std::ops::Range<usize>) -> TextRange {
let start = position_for_offset(contents, span.start);
let end_index = if span.end > span.start {
span.end - 1
} else {
span.end
};
let end = position_for_offset(contents, end_index);
TextRange { start, end }
}
pub fn format_config_error(error: &ConfigError, contents: &str) -> String {
let mut output = String::new();
let start = error.range.start;
let _ = writeln!(
output,
"{}:{}:{}: {}",
error.path.display(),
start.line,
start.column,
error.message
);
let line_index = start.line.saturating_sub(1);
let line = match contents.lines().nth(line_index) {
Some(line) => line.trim_end_matches('\r'),
None => return output.trim_end().to_string(),
};
let line_number = start.line;
let gutter = line_number.to_string().len();
let _ = writeln!(output, "{:width$} |", "", width = gutter);
let _ = writeln!(output, "{line_number:>gutter$} | {line}");
let highlight_len = if error.range.end.line == error.range.start.line
&& error.range.end.column >= error.range.start.column
{
error.range.end.column - error.range.start.column + 1
} else {
1
};
let spaces = " ".repeat(start.column.saturating_sub(1));
let carets = "^".repeat(highlight_len.max(1));
let _ = writeln!(output, "{:width$} | {spaces}{carets}", "", width = gutter);
output.trim_end().to_string()
}
pub fn format_config_error_with_source(error: &ConfigError) -> String {
match std::fs::read_to_string(&error.path) {
Ok(contents) => format_config_error(error, &contents),
Err(_) => format_config_error(error, ""),
}
}
fn position_for_offset(contents: &str, index: usize) -> TextPosition {
let bytes = contents.as_bytes();
if bytes.is_empty() {
return TextPosition { line: 1, column: 1 };
}
let safe_index = index.min(bytes.len().saturating_sub(1));
let column_offset = index.saturating_sub(safe_index);
let index = safe_index;
let line_start = bytes[..index]
.iter()
.rposition(|byte| *byte == b'\n')
.map(|pos| pos + 1)
.unwrap_or(0);
let line = bytes[..line_start]
.iter()
.filter(|byte| **byte == b'\n')
.count();
let column = std::str::from_utf8(&bytes[line_start..=index])
.map(|slice| slice.chars().count().saturating_sub(1))
.unwrap_or_else(|_| index - line_start);
let column = column + column_offset;
TextPosition {
line: line + 1,
column: column + 1,
}
}
fn default_range() -> TextRange {
let position = TextPosition { line: 1, column: 1 };
TextRange {
start: position,
end: position,
}
}
enum TomlNode<'a> {
Item(&'a Item),
Table(&'a Table),
Value(&'a Value),
}
fn span_for_path(contents: &str, path: &SerdePath) -> Option<std::ops::Range<usize>> {
let doc = contents.parse::<Document<String>>().ok()?;
let node = node_for_path(doc.as_item(), path)?;
match node {
TomlNode::Item(item) => item.span(),
TomlNode::Table(table) => table.span(),
TomlNode::Value(value) => value.span(),
}
}
fn span_for_config_path(contents: &str, path: &SerdePath) -> Option<std::ops::Range<usize>> {
if is_features_table_path(path)
&& let Some(span) = span_for_features_value(contents)
{
return Some(span);
}
span_for_path(contents, path)
}
fn is_features_table_path(path: &SerdePath) -> bool {
let mut segments = path.iter();
matches!(segments.next(), Some(SerdeSegment::Map { key }) if key == "features")
&& segments.next().is_none()
}
fn span_for_features_value(contents: &str) -> Option<std::ops::Range<usize>> {
let doc = contents.parse::<Document<String>>().ok()?;
let root = doc.as_item().as_table_like()?;
let features_item = root.get("features")?;
let features_table = features_item.as_table_like()?;
for (_, item) in features_table.iter() {
match item {
Item::Value(Value::Boolean(_)) => continue,
Item::Value(value) => return value.span(),
Item::Table(table) => return table.span(),
Item::ArrayOfTables(array) => return array.span(),
Item::None => continue,
}
}
None
}
fn node_for_path<'a>(item: &'a Item, path: &SerdePath) -> Option<TomlNode<'a>> {
let segments: Vec<_> = path.iter().cloned().collect();
let mut node = TomlNode::Item(item);
let mut index = 0;
while index < segments.len() {
match &segments[index] {
SerdeSegment::Map { key } | SerdeSegment::Enum { variant: key } => {
if let Some(next) = map_child(&node, key) {
node = next;
index += 1;
continue;
}
if index + 1 < segments.len() {
index += 1;
continue;
}
return None;
}
SerdeSegment::Seq { index: seq_index } => {
node = seq_child(&node, *seq_index)?;
index += 1;
}
SerdeSegment::Unknown => return None,
}
}
Some(node)
}
fn map_child<'a>(node: &TomlNode<'a>, key: &str) -> Option<TomlNode<'a>> {
match node {
TomlNode::Item(item) => {
let table = item.as_table_like()?;
table.get(key).map(TomlNode::Item)
}
TomlNode::Table(table) => table.get(key).map(TomlNode::Item),
TomlNode::Value(Value::InlineTable(table)) => table.get(key).map(TomlNode::Value),
_ => None,
}
}
fn seq_child<'a>(node: &TomlNode<'a>, index: usize) -> Option<TomlNode<'a>> {
match node {
TomlNode::Item(Item::Value(Value::Array(array))) => array.get(index).map(TomlNode::Value),
TomlNode::Item(Item::ArrayOfTables(array)) => array.get(index).map(TomlNode::Table),
TomlNode::Value(Value::Array(array)) => array.get(index).map(TomlNode::Value),
_ => None,
}
}
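
To show what the rendering above produces, a small hedged example (crate paths assume the re-exports added in `config_loader/mod.rs` later in this diff; the error text is fabricated):

```rust
// Sketch only: builds a ConfigError by hand and renders it with
// format_config_error. Positions are 1-based, matching TextPosition.
use codex_core::config_loader::ConfigError;
use codex_core::config_loader::TextPosition;
use codex_core::config_loader::TextRange;
use codex_core::config_loader::format_config_error;
use std::path::PathBuf;

fn main() {
    let contents = "model = \"gpt-4\"\ninvalid = [";
    let error = ConfigError::new(
        PathBuf::from("/home/user/.codex/config.toml"),
        TextRange {
            start: TextPosition { line: 2, column: 11 },
            end: TextPosition { line: 2, column: 11 },
        },
        "expected `]`",
    );
    println!("{}", format_config_error(&error, contents));
    // Expected shape (caret alignment approximate):
    // /home/user/.codex/config.toml:2:11: expected `]`
    //   |
    // 2 | invalid = [
    //   |           ^
}
```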

View File

@@ -1,4 +1,6 @@
use super::LoaderOverrides;
use super::diagnostics::config_error_from_toml;
use super::diagnostics::io_error_from_config_error;
#[cfg(target_os = "macos")]
use super::macos::load_managed_admin_config_layer;
use codex_utils_absolute_path::AbsolutePathBuf;
@@ -75,7 +77,12 @@ pub(super) async fn read_config_from_path(
Ok(value) => Ok(Some(value)),
Err(err) => {
tracing::error!("Failed to parse {}: {err}", path.as_ref().display());
Err(io::Error::new(io::ErrorKind::InvalidData, err))
let config_error = config_error_from_toml(path.as_ref(), &contents, err.clone());
Err(io_error_from_config_error(
io::ErrorKind::InvalidData,
config_error,
Some(err),
))
}
},
Err(err) if err.kind() == io::ErrorKind::NotFound => {

View File

@@ -1,4 +1,5 @@
mod config_requirements;
mod diagnostics;
mod fingerprint;
mod layer_io;
#[cfg(target_os = "macos")]
@@ -34,6 +35,16 @@ pub use config_requirements::McpServerRequirement;
pub use config_requirements::RequirementSource;
pub use config_requirements::SandboxModeRequirement;
pub use config_requirements::Sourced;
pub use diagnostics::ConfigError;
pub use diagnostics::ConfigLoadError;
pub use diagnostics::TextPosition;
pub use diagnostics::TextRange;
pub(crate) use diagnostics::config_error_from_toml;
pub(crate) use diagnostics::first_layer_config_error;
pub(crate) use diagnostics::first_layer_config_error_from_entries;
pub use diagnostics::format_config_error;
pub use diagnostics::format_config_error_with_source;
pub(crate) use diagnostics::io_error_from_config_error;
pub use merge::merge_toml_values;
pub(crate) use overrides::build_cli_overrides_layer;
pub use state::ConfigLayerEntry;
@@ -67,9 +78,9 @@ const DEFAULT_PROJECT_ROOT_MARKERS: &[&str] = &[".git"];
/// - admin: managed preferences (*)
/// - system `/etc/codex/config.toml`
/// - user `${CODEX_HOME}/config.toml`
/// - cwd `${PWD}/config.toml` (only when the directory is trusted)
/// - tree parent directories up to root looking for `./.codex/config.toml` (trusted only)
/// - repo `$(git rev-parse --show-toplevel)/.codex/config.toml` (trusted only)
/// - cwd `${PWD}/config.toml` (loaded but disabled when the directory is untrusted)
/// - tree parent directories up to root looking for `./.codex/config.toml` (loaded but disabled when untrusted)
/// - repo `$(git rev-parse --show-toplevel)/.codex/config.toml` (loaded but disabled when untrusted)
/// - runtime e.g., --config flags, model selector in UI
///
/// (*) Only available on macOS via managed device profiles.
@@ -171,14 +182,51 @@ pub async fn load_config_layers_state(
merge_toml_values(&mut merged_so_far, cli_overrides_layer);
}
let project_root_markers = project_root_markers_from_config(&merged_so_far)?
.unwrap_or_else(default_project_root_markers);
if let Some(project_root) =
trusted_project_root(&merged_so_far, &cwd, &project_root_markers, codex_home).await?
let project_root_markers = match project_root_markers_from_config(&merged_so_far) {
Ok(markers) => markers.unwrap_or_else(default_project_root_markers),
Err(err) => {
if let Some(config_error) = first_layer_config_error_from_entries(&layers).await {
return Err(io_error_from_config_error(
io::ErrorKind::InvalidData,
config_error,
None,
));
}
return Err(err);
}
};
let project_trust_context = match project_trust_context(
&merged_so_far,
&cwd,
&project_root_markers,
codex_home,
&user_file,
)
.await
{
let project_layers = load_project_layers(&cwd, &project_root).await?;
layers.extend(project_layers);
}
Ok(context) => context,
Err(err) => {
let source = err
.get_ref()
.and_then(|err| err.downcast_ref::<toml::de::Error>())
.cloned();
if let Some(config_error) = first_layer_config_error_from_entries(&layers).await {
return Err(io_error_from_config_error(
io::ErrorKind::InvalidData,
config_error,
source,
));
}
return Err(err);
}
};
let project_layers = load_project_layers(
&cwd,
&project_trust_context.project_root,
&project_trust_context,
)
.await?;
layers.extend(project_layers);
}
// Add a layer for runtime overrides from the CLI or UI, if any exist.
@@ -243,11 +291,9 @@ async fn load_config_toml_for_required_layer(
let toml_file = config_toml.as_ref();
let toml_value = match tokio::fs::read_to_string(toml_file).await {
Ok(contents) => {
let config: TomlValue = toml::from_str(&contents).map_err(|e| {
io::Error::new(
io::ErrorKind::InvalidData,
format!("Error parsing config file {}: {e}", toml_file.display()),
)
let config: TomlValue = toml::from_str(&contents).map_err(|err| {
let config_error = config_error_from_toml(toml_file, &contents, err.clone());
io_error_from_config_error(io::ErrorKind::InvalidData, config_error, Some(err))
})?;
let config_parent = toml_file.parent().ok_or_else(|| {
io::Error::new(
@@ -402,42 +448,132 @@ fn default_project_root_markers() -> Vec<String> {
.collect()
}
async fn trusted_project_root(
struct ProjectTrustContext {
project_root: AbsolutePathBuf,
project_root_key: String,
repo_root_key: Option<String>,
projects_trust: std::collections::HashMap<String, TrustLevel>,
user_config_file: AbsolutePathBuf,
}
struct ProjectTrustDecision {
trust_level: Option<TrustLevel>,
trust_key: String,
}
impl ProjectTrustDecision {
fn is_trusted(&self) -> bool {
matches!(self.trust_level, Some(TrustLevel::Trusted))
}
}
impl ProjectTrustContext {
fn decision_for_dir(&self, dir: &AbsolutePathBuf) -> ProjectTrustDecision {
let dir_key = dir.as_path().to_string_lossy().to_string();
if let Some(trust_level) = self.projects_trust.get(&dir_key).copied() {
return ProjectTrustDecision {
trust_level: Some(trust_level),
trust_key: dir_key,
};
}
if let Some(trust_level) = self.projects_trust.get(&self.project_root_key).copied() {
return ProjectTrustDecision {
trust_level: Some(trust_level),
trust_key: self.project_root_key.clone(),
};
}
if let Some(repo_root_key) = self.repo_root_key.as_ref()
&& let Some(trust_level) = self.projects_trust.get(repo_root_key).copied()
{
return ProjectTrustDecision {
trust_level: Some(trust_level),
trust_key: repo_root_key.clone(),
};
}
ProjectTrustDecision {
trust_level: None,
trust_key: self
.repo_root_key
.clone()
.unwrap_or_else(|| self.project_root_key.clone()),
}
}
fn disabled_reason_for_dir(&self, dir: &AbsolutePathBuf) -> Option<String> {
let decision = self.decision_for_dir(dir);
if decision.is_trusted() {
return None;
}
let trust_key = decision.trust_key.as_str();
let user_config_file = self.user_config_file.as_path().display();
match decision.trust_level {
Some(TrustLevel::Untrusted) => Some(format!(
"{trust_key} is marked as untrusted in {user_config_file}. Mark it trusted to enable project config folders."
)),
_ => Some(format!(
"Add {trust_key} as a trusted project in {user_config_file}."
)),
}
}
}
fn project_layer_entry(
trust_context: &ProjectTrustContext,
dot_codex_folder: &AbsolutePathBuf,
layer_dir: &AbsolutePathBuf,
config: TomlValue,
) -> ConfigLayerEntry {
match trust_context.disabled_reason_for_dir(layer_dir) {
Some(reason) => ConfigLayerEntry::new_disabled(
ConfigLayerSource::Project {
dot_codex_folder: dot_codex_folder.clone(),
},
config,
reason,
),
None => ConfigLayerEntry::new(
ConfigLayerSource::Project {
dot_codex_folder: dot_codex_folder.clone(),
},
config,
),
}
}
async fn project_trust_context(
merged_config: &TomlValue,
cwd: &AbsolutePathBuf,
project_root_markers: &[String],
config_base_dir: &Path,
) -> io::Result<Option<AbsolutePathBuf>> {
user_config_file: &AbsolutePathBuf,
) -> io::Result<ProjectTrustContext> {
let config_toml = deserialize_config_toml_with_base(merged_config.clone(), config_base_dir)?;
let project_root = find_project_root(cwd, project_root_markers).await?;
let projects = config_toml.projects.unwrap_or_default();
let cwd_key = cwd.as_path().to_string_lossy().to_string();
let project_root_key = project_root.as_path().to_string_lossy().to_string();
let repo_root_key = resolve_root_git_project_for_trust(cwd.as_path())
let repo_root = resolve_root_git_project_for_trust(cwd.as_path());
let repo_root_key = repo_root
.as_ref()
.map(|root| root.to_string_lossy().to_string());
let trust_level = projects
.get(&cwd_key)
.and_then(|project| project.trust_level)
.or_else(|| {
projects
.get(&project_root_key)
.and_then(|project| project.trust_level)
})
.or_else(|| {
repo_root_key
.as_ref()
.and_then(|root| projects.get(root))
.and_then(|project| project.trust_level)
});
let projects_trust = projects
.into_iter()
.filter_map(|(key, project)| project.trust_level.map(|trust_level| (key, trust_level)))
.collect();
if matches!(trust_level, Some(TrustLevel::Trusted)) {
Ok(Some(project_root))
} else {
Ok(None)
}
Ok(ProjectTrustContext {
project_root,
project_root_key,
repo_root_key,
projects_trust,
user_config_file: user_config_file.clone(),
})
}
/// Takes a `toml::Value` parsed from a config.toml file and walks through it,
@@ -527,6 +663,7 @@ async fn find_project_root(
async fn load_project_layers(
cwd: &AbsolutePathBuf,
project_root: &AbsolutePathBuf,
trust_context: &ProjectTrustContext,
) -> io::Result<Vec<ConfigLayerEntry>> {
let mut dirs = cwd
.as_path()
@@ -555,46 +692,54 @@ async fn load_project_layers(
continue;
}
let layer_dir = AbsolutePathBuf::from_absolute_path(dir)?;
let decision = trust_context.decision_for_dir(&layer_dir);
let dot_codex_abs = AbsolutePathBuf::from_absolute_path(&dot_codex)?;
let config_file = dot_codex_abs.join(CONFIG_TOML_FILE)?;
match tokio::fs::read_to_string(&config_file).await {
Ok(contents) => {
let config: TomlValue = toml::from_str(&contents).map_err(|e| {
io::Error::new(
io::ErrorKind::InvalidData,
format!(
"Error parsing project config file {}: {e}",
config_file.as_path().display(),
),
)
})?;
let config: TomlValue = match toml::from_str(&contents) {
Ok(config) => config,
Err(e) => {
if decision.is_trusted() {
let config_file_display = config_file.as_path().display();
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"Error parsing project config file {config_file_display}: {e}"
),
));
}
layers.push(project_layer_entry(
trust_context,
&dot_codex_abs,
&layer_dir,
TomlValue::Table(toml::map::Map::new()),
));
continue;
}
};
let config =
resolve_relative_paths_in_config_toml(config, dot_codex_abs.as_path())?;
layers.push(ConfigLayerEntry::new(
ConfigLayerSource::Project {
dot_codex_folder: dot_codex_abs,
},
config,
));
let entry = project_layer_entry(trust_context, &dot_codex_abs, &layer_dir, config);
layers.push(entry);
}
Err(err) => {
if err.kind() == io::ErrorKind::NotFound {
// If there is no config.toml file, record an empty entry
// for this project layer, as this may still have subfolders
// that are significant in the overall ConfigLayerStack.
layers.push(ConfigLayerEntry::new(
ConfigLayerSource::Project {
dot_codex_folder: dot_codex_abs,
},
layers.push(project_layer_entry(
trust_context,
&dot_codex_abs,
&layer_dir,
TomlValue::Table(toml::map::Map::new()),
));
} else {
let config_file_display = config_file.as_path().display();
return Err(io::Error::new(
err.kind(),
format!(
"Failed to read project config file {}: {err}",
config_file.as_path().display(),
),
format!("Failed to read project config file {config_file_display}: {err}"),
));
}
}
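
To make the trust lookup order above easier to follow, here is a distilled, self-contained illustration (not the crate's code) of how a per-directory entry wins over the project-root and repo-root entries:

```rust
// Distilled illustration of the fallback order in `decision_for_dir`:
// exact directory key first, then project root, then git repo root.
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq)]
enum Trust {
    Trusted,
    Untrusted,
}

fn trust_for_dir(
    dir: &str,
    project_root: &str,
    repo_root: Option<&str>,
    trust: &HashMap<String, Trust>,
) -> Option<Trust> {
    trust
        .get(dir)
        .or_else(|| trust.get(project_root))
        .or_else(|| repo_root.and_then(|root| trust.get(root)))
        .copied()
}

fn main() {
    let mut trust = HashMap::new();
    trust.insert("/repo".to_string(), Trust::Trusted);
    trust.insert("/repo/vendor".to_string(), Trust::Untrusted);

    // The subdirectory's explicit entry wins over the trusted repo root...
    assert_eq!(
        trust_for_dir("/repo/vendor", "/repo", Some("/repo"), &trust),
        Some(Trust::Untrusted)
    );
    // ...while other directories inherit the project/repo-root decision.
    assert_eq!(
        trust_for_dir("/repo/src", "/repo", Some("/repo"), &trust),
        Some(Trust::Trusted)
    );
}
```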

View File

@@ -28,6 +28,7 @@ pub struct ConfigLayerEntry {
pub name: ConfigLayerSource,
pub config: TomlValue,
pub version: String,
pub disabled_reason: Option<String>,
}
impl ConfigLayerEntry {
@@ -37,9 +38,28 @@ impl ConfigLayerEntry {
name,
config,
version,
disabled_reason: None,
}
}
pub fn new_disabled(
name: ConfigLayerSource,
config: TomlValue,
disabled_reason: impl Into<String>,
) -> Self {
let version = version_for_toml(&config);
Self {
name,
config,
version,
disabled_reason: Some(disabled_reason.into()),
}
}
pub fn is_disabled(&self) -> bool {
self.disabled_reason.is_some()
}
pub fn metadata(&self) -> ConfigLayerMetadata {
ConfigLayerMetadata {
name: self.name.clone(),
@@ -52,6 +72,7 @@ impl ConfigLayerEntry {
name: self.name.clone(),
version: self.version.clone(),
config: serde_json::to_value(&self.config).unwrap_or(JsonValue::Null),
disabled_reason: self.disabled_reason.clone(),
}
}
@@ -172,7 +193,7 @@ impl ConfigLayerStack {
pub fn effective_config(&self) -> TomlValue {
let mut merged = TomlValue::Table(toml::map::Map::new());
for layer in &self.layers {
for layer in self.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst, false) {
merge_toml_values(&mut merged, &layer.config);
}
merged
@@ -182,7 +203,7 @@ impl ConfigLayerStack {
let mut origins = HashMap::new();
let mut path = Vec::new();
for layer in &self.layers {
for layer in self.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst, false) {
record_origins(&layer.config, &layer.metadata(), &mut path, &mut origins);
}
@@ -192,16 +213,25 @@ impl ConfigLayerStack {
/// Returns the highest-precedence to lowest-precedence layers, so
/// `ConfigLayerSource::SessionFlags` would be first, if present.
pub fn layers_high_to_low(&self) -> Vec<&ConfigLayerEntry> {
self.get_layers(ConfigLayerStackOrdering::HighestPrecedenceFirst)
self.get_layers(ConfigLayerStackOrdering::HighestPrecedenceFirst, false)
}
/// Returns layers in the requested precedence ordering; disabled layers are
/// included only when `include_disabled` is true.
pub fn get_layers(&self, ordering: ConfigLayerStackOrdering) -> Vec<&ConfigLayerEntry> {
match ordering {
ConfigLayerStackOrdering::HighestPrecedenceFirst => self.layers.iter().rev().collect(),
ConfigLayerStackOrdering::LowestPrecedenceFirst => self.layers.iter().collect(),
pub fn get_layers(
&self,
ordering: ConfigLayerStackOrdering,
include_disabled: bool,
) -> Vec<&ConfigLayerEntry> {
let mut layers: Vec<&ConfigLayerEntry> = self
.layers
.iter()
.filter(|layer| include_disabled || !layer.is_disabled())
.collect();
if ordering == ConfigLayerStackOrdering::HighestPrecedenceFirst {
layers.reverse();
}
layers
}
}

View File

@@ -6,6 +6,7 @@ use crate::config::ConfigOverrides;
use crate::config::ConfigToml;
use crate::config::ProjectConfig;
use crate::config_loader::ConfigLayerEntry;
use crate::config_loader::ConfigLoadError;
use crate::config_loader::ConfigRequirements;
use crate::config_loader::config_requirements::ConfigRequirementsWithSources;
use crate::config_loader::fingerprint::version_for_toml;
@@ -21,6 +22,13 @@ use std::path::Path;
use tempfile::tempdir;
use toml::Value as TomlValue;
fn config_error_from_io(err: &std::io::Error) -> &super::ConfigError {
err.get_ref()
.and_then(|err| err.downcast_ref::<ConfigLoadError>())
.map(ConfigLoadError::config_error)
.expect("expected ConfigLoadError")
}
async fn make_config_for_test(
codex_home: &Path,
project_path: &Path,
@@ -44,6 +52,98 @@ async fn make_config_for_test(
.await
}
#[tokio::test]
async fn returns_config_error_for_invalid_user_config_toml() {
let tmp = tempdir().expect("tempdir");
let contents = "model = \"gpt-4\"\ninvalid = [";
let config_path = tmp.path().join(CONFIG_TOML_FILE);
std::fs::write(&config_path, contents).expect("write config");
let cwd = AbsolutePathBuf::try_from(tmp.path()).expect("cwd");
let err = load_config_layers_state(
tmp.path(),
Some(cwd),
&[] as &[(String, TomlValue)],
LoaderOverrides::default(),
)
.await
.expect_err("expected error");
let config_error = config_error_from_io(&err);
let expected_toml_error = toml::from_str::<TomlValue>(contents).expect_err("parse error");
let expected_config_error =
super::config_error_from_toml(&config_path, contents, expected_toml_error);
assert_eq!(config_error, &expected_config_error);
}
#[tokio::test]
async fn returns_config_error_for_invalid_managed_config_toml() {
let tmp = tempdir().expect("tempdir");
let managed_path = tmp.path().join("managed_config.toml");
let contents = "model = \"gpt-4\"\ninvalid = [";
std::fs::write(&managed_path, contents).expect("write managed config");
let overrides = LoaderOverrides {
managed_config_path: Some(managed_path.clone()),
..Default::default()
};
let cwd = AbsolutePathBuf::try_from(tmp.path()).expect("cwd");
let err = load_config_layers_state(
tmp.path(),
Some(cwd),
&[] as &[(String, TomlValue)],
overrides,
)
.await
.expect_err("expected error");
let config_error = config_error_from_io(&err);
let expected_toml_error = toml::from_str::<TomlValue>(contents).expect_err("parse error");
let expected_config_error =
super::config_error_from_toml(&managed_path, contents, expected_toml_error);
assert_eq!(config_error, &expected_config_error);
}
#[tokio::test]
async fn returns_config_error_for_schema_error_in_user_config() {
let tmp = tempdir().expect("tempdir");
let contents = "model_context_window = \"not_a_number\"";
let config_path = tmp.path().join(CONFIG_TOML_FILE);
std::fs::write(&config_path, contents).expect("write config");
let err = ConfigBuilder::default()
.codex_home(tmp.path().to_path_buf())
.fallback_cwd(Some(tmp.path().to_path_buf()))
.build()
.await
.expect_err("expected error");
let config_error = config_error_from_io(&err);
let _guard = codex_utils_absolute_path::AbsolutePathBufGuard::new(tmp.path());
let expected_config_error =
super::diagnostics::config_error_from_config_toml(&config_path, contents)
.expect("schema error");
assert_eq!(config_error, &expected_config_error);
}
#[test]
fn schema_error_points_to_feature_value() {
let tmp = tempdir().expect("tempdir");
let contents = "[features]\ncollaboration_modes = \"true\"";
let config_path = tmp.path().join(CONFIG_TOML_FILE);
std::fs::write(&config_path, contents).expect("write config");
let _guard = codex_utils_absolute_path::AbsolutePathBufGuard::new(tmp.path());
let error = super::diagnostics::config_error_from_config_toml(&config_path, contents)
.expect("schema error");
let value_line = contents.lines().nth(1).expect("value line");
let value_column = value_line.find("\"true\"").expect("value") + 1;
assert_eq!(error.range.start.line, 2);
assert_eq!(error.range.start.column, value_column);
}
#[tokio::test]
async fn merges_managed_config_layer_on_top() {
let tmp = tempdir().expect("tempdir");
@@ -132,6 +232,7 @@ async fn returns_empty_when_all_layers_missing() {
},
config: TomlValue::Table(toml::map::Map::new()),
version: version_for_toml(&TomlValue::Table(toml::map::Map::new())),
disabled_reason: None,
},
user_layer,
);
@@ -477,6 +578,42 @@ model_instructions_file = "child.txt"
Ok(())
}
#[tokio::test]
async fn cli_override_model_instructions_file_sets_base_instructions() -> std::io::Result<()> {
let tmp = tempdir()?;
let codex_home = tmp.path().join("home");
tokio::fs::create_dir_all(&codex_home).await?;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), "").await?;
let cwd = tmp.path().join("work");
tokio::fs::create_dir_all(&cwd).await?;
let instructions_path = tmp.path().join("instr.md");
tokio::fs::write(&instructions_path, "cli override instructions").await?;
let cli_overrides = vec![(
"model_instructions_file".to_string(),
TomlValue::String(instructions_path.to_string_lossy().to_string()),
)];
let config = ConfigBuilder::default()
.codex_home(codex_home)
.cli_overrides(cli_overrides)
.harness_overrides(ConfigOverrides {
cwd: Some(cwd),
..ConfigOverrides::default()
})
.build()
.await?;
assert_eq!(
config.base_instructions.as_deref(),
Some("cli override instructions")
);
Ok(())
}
#[tokio::test]
async fn project_layer_is_added_when_dot_codex_exists_without_config_toml() -> std::io::Result<()> {
let tmp = tempdir()?;
@@ -510,6 +647,7 @@ async fn project_layer_is_added_when_dot_codex_exists_without_config_toml() -> s
},
config: TomlValue::Table(toml::map::Map::new()),
version: version_for_toml(&TomlValue::Table(toml::map::Map::new())),
disabled_reason: None,
}],
project_layers
);
@@ -518,7 +656,7 @@ async fn project_layer_is_added_when_dot_codex_exists_without_config_toml() -> s
}
#[tokio::test]
async fn project_layers_skipped_when_untrusted_or_unknown() -> std::io::Result<()> {
async fn project_layers_disabled_when_untrusted_or_unknown() -> std::io::Result<()> {
let tmp = tempdir()?;
let project_root = tmp.path().join("project");
let nested = project_root.join("child");
@@ -540,6 +678,13 @@ async fn project_layers_skipped_when_untrusted_or_unknown() -> std::io::Result<(
None,
)
.await?;
let untrusted_config_path = codex_home_untrusted.join(CONFIG_TOML_FILE);
let untrusted_config_contents = tokio::fs::read_to_string(&untrusted_config_path).await?;
tokio::fs::write(
&untrusted_config_path,
format!("foo = \"user\"\n{untrusted_config_contents}"),
)
.await?;
let layers_untrusted = load_config_layers_state(
&codex_home_untrusted,
@@ -548,16 +693,35 @@ async fn project_layers_skipped_when_untrusted_or_unknown() -> std::io::Result<(
LoaderOverrides::default(),
)
.await?;
let project_layers_untrusted = layers_untrusted
.layers_high_to_low()
let project_layers_untrusted: Vec<_> = layers_untrusted
.get_layers(
super::ConfigLayerStackOrdering::HighestPrecedenceFirst,
true,
)
.into_iter()
.filter(|layer| matches!(layer.name, super::ConfigLayerSource::Project { .. }))
.count();
assert_eq!(project_layers_untrusted, 0);
assert_eq!(layers_untrusted.effective_config().get("foo"), None);
.collect();
assert_eq!(project_layers_untrusted.len(), 1);
assert!(
project_layers_untrusted[0].disabled_reason.is_some(),
"expected untrusted project layer to be disabled"
);
assert_eq!(
project_layers_untrusted[0].config.get("foo"),
Some(&TomlValue::String("child".to_string()))
);
assert_eq!(
layers_untrusted.effective_config().get("foo"),
Some(&TomlValue::String("user".to_string()))
);
let codex_home_unknown = tmp.path().join("home_unknown");
tokio::fs::create_dir_all(&codex_home_unknown).await?;
tokio::fs::write(
codex_home_unknown.join(CONFIG_TOML_FILE),
"foo = \"user\"\n",
)
.await?;
let layers_unknown = load_config_layers_state(
&codex_home_unknown,
@@ -566,13 +730,92 @@ async fn project_layers_skipped_when_untrusted_or_unknown() -> std::io::Result<(
LoaderOverrides::default(),
)
.await?;
let project_layers_unknown = layers_unknown
.layers_high_to_low()
let project_layers_unknown: Vec<_> = layers_unknown
.get_layers(
super::ConfigLayerStackOrdering::HighestPrecedenceFirst,
true,
)
.into_iter()
.filter(|layer| matches!(layer.name, super::ConfigLayerSource::Project { .. }))
.count();
assert_eq!(project_layers_unknown, 0);
assert_eq!(layers_unknown.effective_config().get("foo"), None);
.collect();
assert_eq!(project_layers_unknown.len(), 1);
assert!(
project_layers_unknown[0].disabled_reason.is_some(),
"expected unknown-trust project layer to be disabled"
);
assert_eq!(
project_layers_unknown[0].config.get("foo"),
Some(&TomlValue::String("child".to_string()))
);
assert_eq!(
layers_unknown.effective_config().get("foo"),
Some(&TomlValue::String("user".to_string()))
);
Ok(())
}
#[tokio::test]
async fn invalid_project_config_ignored_when_untrusted_or_unknown() -> std::io::Result<()> {
let tmp = tempdir()?;
let project_root = tmp.path().join("project");
let nested = project_root.join("child");
tokio::fs::create_dir_all(nested.join(".codex")).await?;
tokio::fs::write(project_root.join(".git"), "gitdir: here").await?;
tokio::fs::write(nested.join(".codex").join(CONFIG_TOML_FILE), "foo =").await?;
let cwd = AbsolutePathBuf::from_absolute_path(&nested)?;
let cases = [
("untrusted", Some(TrustLevel::Untrusted)),
("unknown", None),
];
for (name, trust_level) in cases {
let codex_home = tmp.path().join(format!("home_{name}"));
tokio::fs::create_dir_all(&codex_home).await?;
let config_path = codex_home.join(CONFIG_TOML_FILE);
if let Some(trust_level) = trust_level {
make_config_for_test(&codex_home, &project_root, trust_level, None).await?;
let config_contents = tokio::fs::read_to_string(&config_path).await?;
tokio::fs::write(&config_path, format!("foo = \"user\"\n{config_contents}")).await?;
} else {
tokio::fs::write(&config_path, "foo = \"user\"\n").await?;
}
let layers = load_config_layers_state(
&codex_home,
Some(cwd.clone()),
&[] as &[(String, TomlValue)],
LoaderOverrides::default(),
)
.await?;
let project_layers: Vec<_> = layers
.get_layers(
super::ConfigLayerStackOrdering::HighestPrecedenceFirst,
true,
)
.into_iter()
.filter(|layer| matches!(layer.name, super::ConfigLayerSource::Project { .. }))
.collect();
assert_eq!(
project_layers.len(),
1,
"expected one project layer for {name}"
);
assert!(
project_layers[0].disabled_reason.is_some(),
"expected {name} project layer to be disabled"
);
assert_eq!(
project_layers[0].config,
TomlValue::Table(toml::map::Map::new())
);
assert_eq!(
layers.effective_config().get("foo"),
Some(&TomlValue::String("user".to_string()))
);
}
Ok(())
}

View File

@@ -0,0 +1,227 @@
use std::collections::HashMap;
use std::env;
use std::path::PathBuf;
use async_channel::unbounded;
use codex_protocol::protocol::SandboxPolicy;
use serde::Deserialize;
use serde::Serialize;
use tokio_util::sync::CancellationToken;
use crate::AuthManager;
use crate::SandboxState;
use crate::config::Config;
use crate::features::Feature;
use crate::mcp::CODEX_APPS_MCP_SERVER_NAME;
use crate::mcp::auth::compute_auth_statuses;
use crate::mcp::with_codex_apps_mcp;
use crate::mcp_connection_manager::McpConnectionManager;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ConnectorInfo {
#[serde(rename = "id")]
pub connector_id: String,
#[serde(rename = "name")]
pub connector_name: String,
#[serde(default, rename = "description")]
pub connector_description: Option<String>,
#[serde(default, rename = "logo_url")]
pub logo_url: Option<String>,
#[serde(default, rename = "install_url")]
pub install_url: Option<String>,
#[serde(default)]
pub is_accessible: bool,
}
pub async fn list_accessible_connectors_from_mcp_tools(
config: &Config,
) -> anyhow::Result<Vec<ConnectorInfo>> {
if !config.features.enabled(Feature::Connectors) {
return Ok(Vec::new());
}
let auth_manager = auth_manager_from_config(config);
let auth = auth_manager.auth().await;
let mcp_servers = with_codex_apps_mcp(HashMap::new(), true, auth.as_ref(), config);
if mcp_servers.is_empty() {
return Ok(Vec::new());
}
let auth_status_entries =
compute_auth_statuses(mcp_servers.iter(), config.mcp_oauth_credentials_store_mode).await;
let mut mcp_connection_manager = McpConnectionManager::default();
let (tx_event, rx_event) = unbounded();
drop(rx_event);
let cancel_token = CancellationToken::new();
let sandbox_state = SandboxState {
sandbox_policy: SandboxPolicy::ReadOnly,
codex_linux_sandbox_exe: config.codex_linux_sandbox_exe.clone(),
sandbox_cwd: env::current_dir().unwrap_or_else(|_| PathBuf::from("/")),
};
mcp_connection_manager
.initialize(
&mcp_servers,
config.mcp_oauth_credentials_store_mode,
auth_status_entries,
tx_event,
cancel_token.clone(),
sandbox_state,
)
.await;
let tools = mcp_connection_manager.list_all_tools().await;
cancel_token.cancel();
Ok(accessible_connectors_from_mcp_tools(&tools))
}
fn auth_manager_from_config(config: &Config) -> std::sync::Arc<AuthManager> {
AuthManager::shared(
config.codex_home.clone(),
false,
config.cli_auth_credentials_store_mode,
)
}
pub fn connector_display_label(connector: &ConnectorInfo) -> String {
format_connector_label(&connector.connector_name, &connector.connector_id)
}
pub(crate) fn accessible_connectors_from_mcp_tools(
mcp_tools: &HashMap<String, crate::mcp_connection_manager::ToolInfo>,
) -> Vec<ConnectorInfo> {
let tools = mcp_tools.values().filter_map(|tool| {
if tool.server_name != CODEX_APPS_MCP_SERVER_NAME {
return None;
}
let connector_id = tool.connector_id.as_deref()?;
let connector_name = normalize_connector_value(tool.connector_name.as_deref());
Some((connector_id.to_string(), connector_name))
});
collect_accessible_connectors(tools)
}
pub fn merge_connectors(
connectors: Vec<ConnectorInfo>,
accessible_connectors: Vec<ConnectorInfo>,
) -> Vec<ConnectorInfo> {
let mut merged: HashMap<String, ConnectorInfo> = connectors
.into_iter()
.map(|mut connector| {
connector.is_accessible = false;
(connector.connector_id.clone(), connector)
})
.collect();
for mut connector in accessible_connectors {
connector.is_accessible = true;
let connector_id = connector.connector_id.clone();
if let Some(existing) = merged.get_mut(&connector_id) {
existing.is_accessible = true;
if existing.connector_name == existing.connector_id
&& connector.connector_name != connector.connector_id
{
existing.connector_name = connector.connector_name;
}
if existing.connector_description.is_none() && connector.connector_description.is_some()
{
existing.connector_description = connector.connector_description;
}
if existing.logo_url.is_none() && connector.logo_url.is_some() {
existing.logo_url = connector.logo_url;
}
} else {
merged.insert(connector_id, connector);
}
}
let mut merged = merged.into_values().collect::<Vec<_>>();
for connector in &mut merged {
if connector.install_url.is_none() {
connector.install_url = Some(connector_install_url(
&connector.connector_name,
&connector.connector_id,
));
}
}
merged.sort_by(|left, right| {
right
.is_accessible
.cmp(&left.is_accessible)
.then_with(|| left.connector_name.cmp(&right.connector_name))
.then_with(|| left.connector_id.cmp(&right.connector_id))
});
merged
}
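// Illustrative sketch, not part of the diff: given a configured connector whose name still
// equals its id (e.g. a hypothetical "gdrive") and an accessible connector for the same id
// named "Google Drive", the merge keeps a single entry, upgrades the placeholder name to
// "Google Drive", marks it accessible, and fills a missing install_url via
// connector_install_url. Accessible connectors sort ahead of inaccessible ones, then by
// name, then by id.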
fn collect_accessible_connectors<I>(tools: I) -> Vec<ConnectorInfo>
where
I: IntoIterator<Item = (String, Option<String>)>,
{
let mut connectors: HashMap<String, String> = HashMap::new();
for (connector_id, connector_name) in tools {
let connector_name = connector_name.unwrap_or_else(|| connector_id.clone());
if let Some(existing_name) = connectors.get_mut(&connector_id) {
if existing_name == &connector_id && connector_name != connector_id {
*existing_name = connector_name;
}
} else {
connectors.insert(connector_id, connector_name);
}
}
let mut accessible: Vec<ConnectorInfo> = connectors
.into_iter()
.map(|(connector_id, connector_name)| ConnectorInfo {
install_url: Some(connector_install_url(&connector_name, &connector_id)),
connector_id,
connector_name,
connector_description: None,
logo_url: None,
is_accessible: true,
})
.collect();
accessible.sort_by(|left, right| {
right
.is_accessible
.cmp(&left.is_accessible)
.then_with(|| left.connector_name.cmp(&right.connector_name))
.then_with(|| left.connector_id.cmp(&right.connector_id))
});
accessible
}
fn normalize_connector_value(value: Option<&str>) -> Option<String> {
value
.map(str::trim)
.filter(|value| !value.is_empty())
.map(str::to_string)
}
pub fn connector_install_url(name: &str, connector_id: &str) -> String {
let slug = connector_name_slug(name);
format!("https://chatgpt.com/apps/{slug}/{connector_id}")
}
fn connector_name_slug(name: &str) -> String {
let mut normalized = String::with_capacity(name.len());
for character in name.chars() {
if character.is_ascii_alphanumeric() {
normalized.push(character.to_ascii_lowercase());
} else {
normalized.push('-');
}
}
let normalized = normalized.trim_matches('-');
if normalized.is_empty() {
"app".to_string()
} else {
normalized.to_string()
}
}
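// Illustrative sketch, not part of the diff: slug derivation feeding connector_install_url.
// The connector name and id below are hypothetical; only the URL shape and the "app"
// fallback come from the functions above.
#[cfg(test)]
mod connector_slug_sketch {
    use super::connector_install_url;
    use super::connector_name_slug;

    #[test]
    fn slugs_and_install_urls() {
        // Non-alphanumeric characters become '-' and letters are lowercased.
        assert_eq!(connector_name_slug("Google Drive"), "google-drive");
        // Names with no alphanumeric characters fall back to "app".
        assert_eq!(connector_name_slug("***"), "app");
        assert_eq!(
            connector_install_url("Google Drive", "abc123"),
            "https://chatgpt.com/apps/google-drive/abc123"
        );
    }
}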
fn format_connector_label(name: &str, _id: &str) -> String {
name.to_string()
}

View File

@@ -86,8 +86,11 @@ impl ContextManager {
// This is a coarse lower bound, not a tokenizer-accurate count.
pub(crate) fn estimate_token_count(&self, turn_context: &TurnContext) -> Option<i64> {
let model_info = turn_context.client.get_model_info();
let base_instructions = model_info.base_instructions.as_str();
let base_tokens = i64::try_from(approx_token_count(base_instructions)).unwrap_or(i64::MAX);
let personality = turn_context
.personality
.or(turn_context.client.config().model_personality);
let base_instructions = model_info.get_model_instructions(personality);
let base_tokens = i64::try_from(approx_token_count(&base_instructions)).unwrap_or(i64::MAX);
let items_tokens = self.items.iter().fold(0i64, |acc, item| {
acc + match item {

View File

@@ -23,6 +23,7 @@ fn assistant_msg(text: &str) -> ResponseItem {
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
end_turn: None,
}
}
@@ -41,6 +42,7 @@ fn user_msg(text: &str) -> ResponseItem {
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
end_turn: None,
}
}
@@ -51,6 +53,7 @@ fn user_input_text_msg(text: &str) -> ResponseItem {
content: vec![ContentItem::InputText {
text: text.to_string(),
}],
end_turn: None,
}
}
@@ -93,6 +96,7 @@ fn filters_non_api_messages() {
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
end_turn: None,
};
let reasoning = reasoning_msg("thinking...");
h.record_items([&system, &reasoning, &ResponseItem::Other], policy);
@@ -121,14 +125,16 @@ fn filters_non_api_messages() {
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: "hi".to_string()
}]
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "hello".to_string()
}]
}],
end_turn: None,
}
]
);
@@ -255,6 +261,7 @@ fn replace_last_turn_images_does_not_touch_user_images() {
content: vec![ContentItem::InputImage {
image_url: "data:image/png;base64,AAA".to_string(),
}],
end_turn: None,
}];
let mut history = create_history_with_items(items.clone());

View File

@@ -169,8 +169,7 @@ pub fn build_reqwest_client() -> reqwest::Client {
let mut builder = reqwest::Client::builder()
// Set UA via dedicated helper to avoid header validation pitfalls
.user_agent(ua)
.default_headers(headers)
.use_rustls_tls();
.default_headers(headers);
if is_sandboxed() {
builder = builder.no_proxy();
}

View File

@@ -79,6 +79,7 @@ impl From<EnvironmentContext> for ResponseItem {
content: vec![ContentItem::InputText {
text: ec.serialize_to_xml(),
}],
end_turn: None,
}
}
}
@@ -95,7 +96,7 @@ mod tests {
Shell {
shell_type: ShellType::Bash,
shell_path: PathBuf::from("/bin/bash"),
shell_snapshot: None,
shell_snapshot: crate::shell::empty_shell_snapshot_receiver(),
}
}
@@ -189,7 +190,7 @@ mod tests {
Shell {
shell_type: ShellType::Bash,
shell_path: "/bin/bash".into(),
shell_snapshot: None,
shell_snapshot: crate::shell::empty_shell_snapshot_receiver(),
},
);
let context2 = EnvironmentContext::new(
@@ -197,7 +198,7 @@ mod tests {
Shell {
shell_type: ShellType::Zsh,
shell_path: "/bin/zsh".into(),
shell_snapshot: None,
shell_snapshot: crate::shell::empty_shell_snapshot_receiver(),
},
);

View File

@@ -78,6 +78,9 @@ pub enum CodexErr {
#[error("no thread with id: {0}")]
ThreadNotFound(ThreadId),
#[error("agent thread limit reached (max {max_threads})")]
AgentLimitReached { max_threads: usize },
#[error("session configured event was not the first event in the stream")]
SessionConfiguredNotFirstEvent,
@@ -199,6 +202,7 @@ impl CodexErr {
| CodexErr::RetryLimit(_)
| CodexErr::ContextWindowExceeded
| CodexErr::ThreadNotFound(_)
| CodexErr::AgentLimitReached { .. }
| CodexErr::Spawn
| CodexErr::SessionConfiguredNotFirstEvent
| CodexErr::UsageLimitReached(_) => false,
@@ -497,9 +501,9 @@ impl CodexErr {
CodexErr::SessionConfiguredNotFirstEvent
| CodexErr::InternalServerError
| CodexErr::InternalAgentDied => CodexErrorInfo::InternalServerError,
CodexErr::UnsupportedOperation(_) | CodexErr::ThreadNotFound(_) => {
CodexErrorInfo::BadRequest
}
CodexErr::UnsupportedOperation(_)
| CodexErr::ThreadNotFound(_)
| CodexErr::AgentLimitReached { .. } => CodexErrorInfo::BadRequest,
CodexErr::Sandbox(_) => CodexErrorInfo::SandboxError,
_ => CodexErrorInfo::Other,
}

View File

@@ -89,7 +89,9 @@ fn parse_agent_message(id: Option<&String>, message: &[ContentItem]) -> AgentMes
pub fn parse_turn_item(item: &ResponseItem) -> Option<TurnItem> {
match item {
ResponseItem::Message { role, content, id } => match role.as_str() {
ResponseItem::Message {
role, content, id, ..
} => match role.as_str() {
"user" => parse_user_message(content).map(TurnItem::UserMessage),
"assistant" => Some(TurnItem::AgentMessage(parse_agent_message(
id.as_ref(),
@@ -169,6 +171,7 @@ mod tests {
image_url: img2.clone(),
},
],
end_turn: None,
};
let turn_item = parse_turn_item(&item).expect("expected user message turn item");
@@ -210,6 +213,7 @@ mod tests {
text: user_text.clone(),
},
],
end_turn: None,
};
let turn_item = parse_turn_item(&item).expect("expected user message turn item");
@@ -250,6 +254,7 @@ mod tests {
text: user_text.clone(),
},
],
end_turn: None,
};
let turn_item = parse_turn_item(&item).expect("expected user message turn item");
@@ -278,6 +283,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "<user_instructions>test_text</user_instructions>".to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -285,6 +291,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "<environment_context>test_text</environment_context>".to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -292,6 +299,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "# AGENTS.md instructions for test_directory\n\n<INSTRUCTIONS>\ntest_text\n</INSTRUCTIONS>".to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -300,6 +308,7 @@ mod tests {
text: "<skill>\n<name>demo</name>\n<path>skills/demo/SKILL.md</path>\nbody\n</skill>"
.to_string(),
}],
end_turn: None,
},
ResponseItem::Message {
id: None,
@@ -307,6 +316,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "<user_shell_command>echo 42</user_shell_command>".to_string(),
}],
end_turn: None,
},
];
@@ -324,6 +334,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: "Hello from Codex".to_string(),
}],
end_turn: None,
};
let turn_item = parse_turn_item(&item).expect("expected agent message turn item");

View File

@@ -227,7 +227,9 @@ pub async fn load_exec_policy(config_stack: &ConfigLayerStack) -> Result<Policy,
// from each layer, so that higher-precedence layers can override
// rules defined in lower-precedence ones.
let mut policy_paths = Vec::new();
for layer in config_stack.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst) {
// Include disabled project layers so rules under .codex/rules still apply
// when the project's config.toml is trust-disabled.
for layer in config_stack.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst, true) {
if let Some(config_folder) = layer.config_folder() {
#[expect(clippy::expect_used)]
let policy_dir = config_folder.join(RULES_DIR_NAME).expect("safe join");
@@ -625,6 +627,47 @@ mod tests {
);
}
#[tokio::test]
async fn loads_rules_from_disabled_project_layers() -> anyhow::Result<()> {
let project_dir = tempdir()?;
let policy_dir = project_dir.path().join(RULES_DIR_NAME);
fs::create_dir_all(&policy_dir)?;
fs::write(
policy_dir.join("disabled.rules"),
r#"prefix_rule(pattern=["ls"], decision="forbidden")"#,
)?;
let project_dot_codex_folder = AbsolutePathBuf::from_absolute_path(project_dir.path())?;
let layers = vec![ConfigLayerEntry::new_disabled(
ConfigLayerSource::Project {
dot_codex_folder: project_dot_codex_folder,
},
TomlValue::Table(Default::default()),
"trust disabled",
)];
let config_stack = ConfigLayerStack::new(
layers,
ConfigRequirements::default(),
ConfigRequirementsToml::default(),
)?;
let policy = load_exec_policy(&config_stack).await?;
assert_eq!(
Evaluation {
decision: Decision::Forbidden,
matched_rules: vec![RuleMatch::PrefixRuleMatch {
matched_prefix: vec!["ls".to_string()],
decision: Decision::Forbidden,
justification: None,
}],
},
policy.check_multiple([vec!["ls".to_string()]].iter(), &|_| Decision::Allow)
);
Ok(())
}
#[tokio::test]
async fn loads_policies_from_multiple_config_layers() -> anyhow::Result<()> {
let user_dir = tempdir()?;

View File

@@ -95,17 +95,17 @@ pub enum Feature {
ShellSnapshot,
/// Append additional AGENTS.md guidance to user instructions.
ChildAgentsMd,
/// Experimental TUI v2 (viewport) implementation.
Tui2,
/// Enforce UTF-8 output in PowerShell.
PowershellUtf8,
/// Compress request bodies (zstd) when sending streaming requests to codex-backend.
EnableRequestCompression,
/// Enable collab tools.
Collab,
/// Enable connectors (apps).
Connectors,
/// Steer feature flag - when enabled, Enter submits immediately instead of queuing.
Steer,
/// Enable collaboration modes (Plan, Pair Programming, Execute).
/// Enable collaboration modes (Plan, Code, Pair Programming, Execute).
CollaborationModes,
/// Use the Responses API WebSocket transport for OpenAI by default.
ResponsesWebsockets,
@@ -434,16 +434,12 @@ pub const FEATURES: &[FeatureSpec] = &[
FeatureSpec {
id: Feature::Collab,
key: "collab",
stage: Stage::Experimental {
name: "Multi-agents",
menu_description: "Allow Codex to spawn and collaborate with other agents on request (formerly named `collab`).",
announcement: "NEW! Codex can now spawn other agents and work with them to solve your problems. Enable in /experimental!",
},
stage: Stage::Beta,
default_enabled: false,
},
FeatureSpec {
id: Feature::Tui2,
key: "tui2",
id: Feature::Connectors,
key: "connectors",
stage: Stage::Beta,
default_enabled: false,
},

View File

@@ -38,6 +38,7 @@ impl From<UserInstructions> for ResponseItem {
contents = ui.text
),
}],
end_turn: None,
}
}
}
@@ -71,6 +72,7 @@ impl From<SkillInstructions> for ResponseItem {
si.name, si.path, si.contents
),
}],
end_turn: None,
}
}
}

View File

@@ -15,11 +15,13 @@ pub mod codex;
mod codex_thread;
mod compact_remote;
pub use codex_thread::CodexThread;
pub use codex_thread::ThreadConfigSnapshot;
mod agent;
mod codex_delegate;
mod command_safety;
pub mod config;
pub mod config_loader;
pub mod connectors;
mod context_manager;
pub mod custom_prompts;
pub mod env;

View File

@@ -9,16 +9,135 @@ use codex_protocol::protocol::SandboxPolicy;
use mcp_types::Tool as McpTool;
use tokio_util::sync::CancellationToken;
use crate::AuthManager;
use crate::CodexAuth;
use crate::config::Config;
use crate::config::types::McpServerConfig;
use crate::config::types::McpServerTransportConfig;
use crate::features::Feature;
use crate::mcp::auth::compute_auth_statuses;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::mcp_connection_manager::SandboxState;
const MCP_TOOL_NAME_PREFIX: &str = "mcp";
const MCP_TOOL_NAME_DELIMITER: &str = "__";
pub(crate) const CODEX_APPS_MCP_SERVER_NAME: &str = "codex_apps_mcp";
const CODEX_CONNECTORS_TOKEN_ENV_VAR: &str = "CODEX_CONNECTORS_TOKEN";
fn codex_apps_mcp_bearer_token_env_var() -> Option<String> {
match env::var(CODEX_CONNECTORS_TOKEN_ENV_VAR) {
Ok(value) if !value.trim().is_empty() => Some(CODEX_CONNECTORS_TOKEN_ENV_VAR.to_string()),
Ok(_) => None,
Err(env::VarError::NotPresent) => None,
Err(env::VarError::NotUnicode(_)) => Some(CODEX_CONNECTORS_TOKEN_ENV_VAR.to_string()),
}
}
fn codex_apps_mcp_bearer_token(auth: Option<&CodexAuth>) -> Option<String> {
let token = auth.and_then(|auth| auth.get_token().ok())?;
let token = token.trim();
if token.is_empty() {
None
} else {
Some(token.to_string())
}
}
fn codex_apps_mcp_http_headers(auth: Option<&CodexAuth>) -> Option<HashMap<String, String>> {
let mut headers = HashMap::new();
if let Some(token) = codex_apps_mcp_bearer_token(auth) {
headers.insert("Authorization".to_string(), format!("Bearer {token}"));
}
if let Some(account_id) = auth.and_then(CodexAuth::get_account_id) {
headers.insert("ChatGPT-Account-ID".to_string(), account_id);
}
if headers.is_empty() {
None
} else {
Some(headers)
}
}
fn codex_apps_mcp_url(base_url: &str) -> String {
let mut base_url = base_url.trim_end_matches('/').to_string();
if (base_url.starts_with("https://chatgpt.com")
|| base_url.starts_with("https://chat.openai.com"))
&& !base_url.contains("/backend-api")
{
base_url = format!("{base_url}/backend-api");
}
if base_url.contains("/backend-api") {
format!("{base_url}/wham/apps")
} else if base_url.contains("/api/codex") {
format!("{base_url}/apps")
} else {
format!("{base_url}/api/codex/apps")
}
}
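// Illustrative sketch, not part of the diff above: how codex_apps_mcp_url resolves the
// apps MCP endpoint for a few base URLs. The example.com hosts are hypothetical; only
// the chatgpt.com rewrite is taken from the branches in the function itself.
#[cfg(test)]
mod codex_apps_mcp_url_sketch {
    use super::codex_apps_mcp_url;

    #[test]
    fn maps_base_urls_to_apps_endpoints() {
        // ChatGPT hosts gain a /backend-api segment before the /wham/apps suffix.
        assert_eq!(
            codex_apps_mcp_url("https://chatgpt.com/"),
            "https://chatgpt.com/backend-api/wham/apps"
        );
        // Bases that already include /backend-api only get the /wham/apps suffix.
        assert_eq!(
            codex_apps_mcp_url("https://example.com/backend-api"),
            "https://example.com/backend-api/wham/apps"
        );
        // Codex API bases get /apps appended directly.
        assert_eq!(
            codex_apps_mcp_url("https://example.com/api/codex"),
            "https://example.com/api/codex/apps"
        );
        // Anything else falls back to the full /api/codex/apps path.
        assert_eq!(
            codex_apps_mcp_url("https://example.com"),
            "https://example.com/api/codex/apps"
        );
    }
}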
fn codex_apps_mcp_server_config(config: &Config, auth: Option<&CodexAuth>) -> McpServerConfig {
let bearer_token_env_var = codex_apps_mcp_bearer_token_env_var();
let http_headers = if bearer_token_env_var.is_some() {
None
} else {
codex_apps_mcp_http_headers(auth)
};
let url = codex_apps_mcp_url(&config.chatgpt_base_url);
McpServerConfig {
transport: McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
http_headers,
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
disabled_tools: None,
}
}
pub(crate) fn with_codex_apps_mcp(
mut servers: HashMap<String, McpServerConfig>,
connectors_enabled: bool,
auth: Option<&CodexAuth>,
config: &Config,
) -> HashMap<String, McpServerConfig> {
if connectors_enabled {
servers.insert(
CODEX_APPS_MCP_SERVER_NAME.to_string(),
codex_apps_mcp_server_config(config, auth),
);
} else {
servers.remove(CODEX_APPS_MCP_SERVER_NAME);
}
servers
}
pub(crate) fn effective_mcp_servers(
config: &Config,
auth: Option<&CodexAuth>,
) -> HashMap<String, McpServerConfig> {
with_codex_apps_mcp(
config.mcp_servers.get().clone(),
config.features.enabled(Feature::Connectors),
auth,
config,
)
}
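// Note (not part of the diff): collect_mcp_snapshot below now builds its server list from
// effective_mcp_servers rather than config.mcp_servers directly, so when Feature::Connectors
// is enabled the codex_apps_mcp server is included in the snapshot even if the user has
// configured no MCP servers of their own.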
pub async fn collect_mcp_snapshot(config: &Config) -> McpListToolsResponseEvent {
if config.mcp_servers.is_empty() {
let auth_manager = AuthManager::shared(
config.codex_home.clone(),
false,
config.cli_auth_credentials_store_mode,
);
let auth = auth_manager.auth().await;
let mcp_servers = effective_mcp_servers(config, auth.as_ref());
if mcp_servers.is_empty() {
return McpListToolsResponseEvent {
tools: HashMap::new(),
resources: HashMap::new(),
@@ -27,11 +146,8 @@ pub async fn collect_mcp_snapshot(config: &Config) -> McpListToolsResponseEvent
};
}
let auth_status_entries = compute_auth_statuses(
config.mcp_servers.iter(),
config.mcp_oauth_credentials_store_mode,
)
.await;
let auth_status_entries =
compute_auth_statuses(mcp_servers.iter(), config.mcp_oauth_credentials_store_mode).await;
let mut mcp_connection_manager = McpConnectionManager::default();
let (tx_event, rx_event) = unbounded();
@@ -47,7 +163,7 @@ pub async fn collect_mcp_snapshot(config: &Config) -> McpListToolsResponseEvent
mcp_connection_manager
.initialize(
&config.mcp_servers,
&mcp_servers,
config.mcp_oauth_credentials_store_mode,
auth_status_entries.clone(),
tx_event,

View File

@@ -153,6 +153,8 @@ pub(crate) struct ToolInfo {
pub(crate) server_name: String,
pub(crate) tool_name: String,
pub(crate) tool: Tool,
pub(crate) connector_id: Option<String>,
pub(crate) connector_name: Option<String>,
}
type ResponderMap = HashMap<(String, RequestId), oneshot::Sender<ElicitationResponse>>;
@@ -899,14 +901,16 @@ async fn list_tools_for_client(
client: &Arc<RmcpClient>,
timeout: Option<Duration>,
) -> Result<Vec<ToolInfo>> {
let resp = client.list_tools(None, timeout).await?;
let resp = client.list_tools_with_connector_ids(None, timeout).await?;
Ok(resp
.tools
.into_iter()
.map(|tool| ToolInfo {
server_name: server_name.to_owned(),
tool_name: tool.name.clone(),
tool,
tool_name: tool.tool.name.clone(),
tool: tool.tool,
connector_id: tool.connector_id,
connector_name: tool.connector_name,
})
.collect())
}
@@ -1004,6 +1008,8 @@ mod tests {
output_schema: None,
title: None,
},
connector_id: None,
connector_name: None,
}
}

View File

@@ -1,42 +1,64 @@
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::Settings;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ModeKind;
use codex_protocol::openai_models::ReasoningEffort;
const COLLABORATION_MODE_PLAN: &str = include_str!("../../templates/collaboration_mode/plan.md");
const COLLABORATION_MODE_CODE: &str = include_str!("../../templates/collaboration_mode/code.md");
const COLLABORATION_MODE_PAIR_PROGRAMMING: &str =
include_str!("../../templates/collaboration_mode/pair_programming.md");
const COLLABORATION_MODE_EXECUTE: &str =
include_str!("../../templates/collaboration_mode/execute.md");
pub(super) fn builtin_collaboration_mode_presets() -> Vec<CollaborationMode> {
vec![plan_preset(), pair_programming_preset(), execute_preset()]
pub(super) fn builtin_collaboration_mode_presets() -> Vec<CollaborationModeMask> {
vec![
plan_preset(),
code_preset(),
pair_programming_preset(),
execute_preset(),
]
}
#[cfg(any(test, feature = "test-support"))]
pub fn test_builtin_collaboration_mode_presets() -> Vec<CollaborationMode> {
pub fn test_builtin_collaboration_mode_presets() -> Vec<CollaborationModeMask> {
builtin_collaboration_mode_presets()
}
fn plan_preset() -> CollaborationMode {
CollaborationMode::Plan(Settings {
model: "gpt-5.2-codex".to_string(),
reasoning_effort: Some(ReasoningEffort::Medium),
developer_instructions: Some(COLLABORATION_MODE_PLAN.to_string()),
})
fn plan_preset() -> CollaborationModeMask {
CollaborationModeMask {
name: "Plan".to_string(),
mode: Some(ModeKind::Plan),
model: Some("gpt-5.2-codex".to_string()),
reasoning_effort: Some(Some(ReasoningEffort::High)),
developer_instructions: Some(Some(COLLABORATION_MODE_PLAN.to_string())),
}
}
fn pair_programming_preset() -> CollaborationMode {
CollaborationMode::PairProgramming(Settings {
model: "gpt-5.2-codex".to_string(),
reasoning_effort: Some(ReasoningEffort::Medium),
developer_instructions: Some(COLLABORATION_MODE_PAIR_PROGRAMMING.to_string()),
})
fn code_preset() -> CollaborationModeMask {
CollaborationModeMask {
name: "Code".to_string(),
mode: Some(ModeKind::Code),
model: Some("gpt-5.2-codex".to_string()),
reasoning_effort: Some(Some(ReasoningEffort::Medium)),
developer_instructions: Some(Some(COLLABORATION_MODE_CODE.to_string())),
}
}
fn execute_preset() -> CollaborationMode {
CollaborationMode::Execute(Settings {
model: "gpt-5.2-codex".to_string(),
reasoning_effort: Some(ReasoningEffort::XHigh),
developer_instructions: Some(COLLABORATION_MODE_EXECUTE.to_string()),
})
fn pair_programming_preset() -> CollaborationModeMask {
CollaborationModeMask {
name: "Pair Programming".to_string(),
mode: Some(ModeKind::PairProgramming),
model: Some("gpt-5.2-codex".to_string()),
reasoning_effort: Some(Some(ReasoningEffort::Medium)),
developer_instructions: Some(Some(COLLABORATION_MODE_PAIR_PROGRAMMING.to_string())),
}
}
fn execute_preset() -> CollaborationModeMask {
CollaborationModeMask {
name: "Execute".to_string(),
mode: Some(ModeKind::Execute),
model: Some("gpt-5.2-codex".to_string()),
reasoning_effort: Some(Some(ReasoningEffort::High)),
developer_instructions: Some(Some(COLLABORATION_MODE_EXECUTE.to_string())),
}
}

View File

@@ -14,7 +14,7 @@ use crate::models_manager::model_presets::builtin_model_presets;
use codex_api::ModelsClient;
use codex_api::ReqwestTransport;
use codex_app_server_protocol::AuthMode;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ModelsResponse;
@@ -30,9 +30,6 @@ use tracing::error;
const MODEL_CACHE_FILE: &str = "models_cache.json";
const DEFAULT_MODEL_CACHE_TTL: Duration = Duration::from_secs(300);
const MODELS_REFRESH_TIMEOUT: Duration = Duration::from_secs(5);
const OPENAI_DEFAULT_API_MODEL: &str = "gpt-5.2-codex";
const OPENAI_DEFAULT_CHATGPT_MODEL: &str = "gpt-5.2-codex";
const CODEX_AUTO_BALANCED_MODEL: &str = "codex-auto-balanced";
/// Strategy for refreshing available models.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
@@ -94,7 +91,7 @@ impl ModelsManager {
/// List collaboration mode presets.
///
/// Returns a static set of presets seeded with the configured model.
pub fn list_collaboration_modes(&self) -> Vec<CollaborationMode> {
pub fn list_collaboration_modes(&self) -> Vec<CollaborationModeMask> {
builtin_collaboration_mode_presets()
}
@@ -110,7 +107,7 @@ impl ModelsManager {
/// Get the model identifier to use, refreshing according to the specified strategy.
///
/// If `model` is provided, returns it directly. Otherwise selects the default based on
/// auth mode and available models (prefers `codex-auto-balanced` for ChatGPT auth).
/// auth mode and available models.
pub async fn get_default_model(
&self,
model: &Option<String>,
@@ -126,20 +123,14 @@ impl ModelsManager {
{
error!("failed to refresh available models: {err}");
}
// if codex-auto-balanced exists & signed in with chatgpt mode, return it, otherwise return the default model
let auth_mode = self.auth_manager.get_auth_mode();
let remote_models = self.get_remote_models(config).await;
if auth_mode == Some(AuthMode::ChatGPT) {
let has_auto_balanced = self
.build_available_models(remote_models)
.iter()
.any(|model| model.model == CODEX_AUTO_BALANCED_MODEL && model.show_in_picker);
if has_auto_balanced {
return CODEX_AUTO_BALANCED_MODEL.to_string();
}
return OPENAI_DEFAULT_CHATGPT_MODEL.to_string();
}
OPENAI_DEFAULT_API_MODEL.to_string()
let available = self.build_available_models(remote_models);
available
.iter()
.find(|model| model.is_default)
.or_else(|| available.first())
.map(|model| model.model.clone())
.unwrap_or_default()
}
// todo(aibrahim): look if we can tighten it to pub(crate)
@@ -336,7 +327,16 @@ impl ModelsManager {
#[cfg(any(test, feature = "test-support"))]
/// Get model identifier without consulting remote state or cache.
pub fn get_model_offline(model: Option<&str>) -> String {
model.unwrap_or(OPENAI_DEFAULT_CHATGPT_MODEL).to_string()
if let Some(model) = model {
return model.to_string();
}
let presets = builtin_model_presets(None);
presets
.iter()
.find(|preset| preset.show_in_picker)
.or_else(|| presets.first())
.map(|preset| preset.model.clone())
.unwrap_or_default()
}
#[cfg(any(test, feature = "test-support"))]

View File

@@ -1,15 +1,19 @@
use std::collections::BTreeMap;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::Verbosity;
use codex_protocol::openai_models::ApplyPatchToolType;
use codex_protocol::openai_models::ConfigShellToolType;
use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ModelInstructionsTemplate;
use codex_protocol::openai_models::ModelVisibility;
use codex_protocol::openai_models::PersonalityMessages;
use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::openai_models::ReasoningEffortPreset;
use codex_protocol::openai_models::TruncationMode;
use codex_protocol::openai_models::TruncationPolicyConfig;
use crate::config::Config;
use crate::config::types::Personality;
use crate::truncate::approx_bytes_for_tokens;
use tracing::warn;
@@ -21,7 +25,12 @@ const GPT_5_CODEX_INSTRUCTIONS: &str = include_str!("../../gpt_5_codex_prompt.md
const GPT_5_1_INSTRUCTIONS: &str = include_str!("../../gpt_5_1_prompt.md");
const GPT_5_2_INSTRUCTIONS: &str = include_str!("../../gpt_5_2_prompt.md");
const GPT_5_1_CODEX_MAX_INSTRUCTIONS: &str = include_str!("../../gpt-5.1-codex-max_prompt.md");
const GPT_5_2_CODEX_INSTRUCTIONS: &str = include_str!("../../gpt-5.2-codex_prompt.md");
const GPT_5_2_CODEX_INSTRUCTIONS_TEMPLATE: &str =
include_str!("../../templates/model_instructions/gpt-5.2-codex_instructions_template.md");
const PERSONALITY_FRIENDLY: &str = include_str!("../../templates/personalities/friendly.md");
const PERSONALITY_PRAGMATIC: &str = include_str!("../../templates/personalities/pragmatic.md");
pub(crate) const CONTEXT_WINDOW_272K: i64 = 272_000;
@@ -45,6 +54,7 @@ macro_rules! model_info {
priority: 99,
upgrade: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
model_instructions_template: None,
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,
@@ -87,22 +97,10 @@ pub(crate) fn with_config_overrides(mut model: ModelInfo, config: &Config) -> Mo
}
};
}
if let Some(base_instructions) = &config.base_instructions {
model.base_instructions = base_instructions.clone();
} else if model.slug.contains("gpt-5.2-codex")
&& let Some(personality) = &config.model_personality
{
let template = include_str!(
"../../templates/model_instructions/gpt-5.2-codex_instructions_template.md"
);
let personality_message = match personality {
Personality::Friendly => include_str!("../../templates/personalities/friendly.md"),
Personality::Pragmatic => {
include_str!("../../templates/personalities/pragmatic.md")
}
};
let instructions = template.replace("{{ personality_message }}", personality_message);
model.base_instructions = instructions;
model.model_instructions_template = None;
}
model
}
@@ -205,6 +203,16 @@ pub(crate) fn find_model_info_for_slug(slug: &str) -> ModelInfo {
truncation_policy: TruncationPolicyConfig::tokens(10_000),
context_window: Some(CONTEXT_WINDOW_272K),
supported_reasoning_levels: supported_reasoning_level_low_medium_high_xhigh(),
model_instructions_template: Some(ModelInstructionsTemplate {
template: GPT_5_2_CODEX_INSTRUCTIONS_TEMPLATE.to_string(),
personality_messages: Some(PersonalityMessages(BTreeMap::from([(
Personality::Friendly,
PERSONALITY_FRIENDLY.to_string(),
), (
Personality::Pragmatic,
PERSONALITY_PRAGMATIC.to_string(),
)]))),
}),
)
} else if slug.starts_with("gpt-5.1-codex-max") {
model_info!(

View File

@@ -72,6 +72,7 @@ struct HeadTailSummary {
/// Hard cap to bound worst-case work per request.
const MAX_SCAN_FILES: usize = 10000;
const HEAD_RECORD_LIMIT: usize = 10;
const USER_EVENT_SCAN_LIMIT: usize = 200;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ThreadSortKey {
@@ -79,6 +80,19 @@ pub enum ThreadSortKey {
UpdatedAt,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum ThreadListLayout {
NestedByDate,
Flat,
}
pub(crate) struct ThreadListConfig<'a> {
pub(crate) allowed_sources: &'a [SessionSource],
pub(crate) model_providers: Option<&'a [String]>,
pub(crate) default_provider: &'a str,
pub(crate) layout: ThreadListLayout,
}
/// Pagination cursor identifying a file by timestamp and UUID.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Cursor {
@@ -259,9 +273,29 @@ pub(crate) async fn get_threads(
model_providers: Option<&[String]>,
default_provider: &str,
) -> io::Result<ThreadsPage> {
let mut root = codex_home.to_path_buf();
root.push(SESSIONS_SUBDIR);
let root = codex_home.join(SESSIONS_SUBDIR);
get_threads_in_root(
root,
page_size,
cursor,
sort_key,
ThreadListConfig {
allowed_sources,
model_providers,
default_provider,
layout: ThreadListLayout::NestedByDate,
},
)
.await
}
pub(crate) async fn get_threads_in_root(
root: PathBuf,
page_size: usize,
cursor: Option<&Cursor>,
sort_key: ThreadSortKey,
config: ThreadListConfig<'_>,
) -> io::Result<ThreadsPage> {
if !root.exists() {
return Ok(ThreadsPage {
items: Vec::new(),
@@ -273,18 +307,34 @@ pub(crate) async fn get_threads(
let anchor = cursor.cloned();
let provider_matcher =
model_providers.and_then(|filters| ProviderMatcher::new(filters, default_provider));
let provider_matcher = config
.model_providers
.and_then(|filters| ProviderMatcher::new(filters, config.default_provider));
let result = traverse_directories_for_paths(
root.clone(),
page_size,
anchor,
sort_key,
allowed_sources,
provider_matcher.as_ref(),
)
.await?;
let result = match config.layout {
ThreadListLayout::NestedByDate => {
traverse_directories_for_paths(
root.clone(),
page_size,
anchor,
sort_key,
config.allowed_sources,
provider_matcher.as_ref(),
)
.await?
}
ThreadListLayout::Flat => {
traverse_flat_paths(
root.clone(),
page_size,
anchor,
sort_key,
config.allowed_sources,
provider_matcher.as_ref(),
)
.await?
}
};
Ok(result)
}
@@ -324,6 +374,26 @@ async fn traverse_directories_for_paths(
}
}
async fn traverse_flat_paths(
root: PathBuf,
page_size: usize,
anchor: Option<Cursor>,
sort_key: ThreadSortKey,
allowed_sources: &[SessionSource],
provider_matcher: Option<&ProviderMatcher<'_>>,
) -> io::Result<ThreadsPage> {
match sort_key {
ThreadSortKey::CreatedAt => {
traverse_flat_paths_created(root, page_size, anchor, allowed_sources, provider_matcher)
.await
}
ThreadSortKey::UpdatedAt => {
traverse_flat_paths_updated(root, page_size, anchor, allowed_sources, provider_matcher)
.await
}
}
}
/// Walk the rollout directory tree in reverse chronological order and
/// collect items until the page fills or the scan cap is hit.
///
@@ -437,6 +507,116 @@ async fn traverse_directories_for_paths_updated(
})
}
async fn traverse_flat_paths_created(
root: PathBuf,
page_size: usize,
anchor: Option<Cursor>,
allowed_sources: &[SessionSource],
provider_matcher: Option<&ProviderMatcher<'_>>,
) -> io::Result<ThreadsPage> {
let mut items: Vec<ThreadItem> = Vec::with_capacity(page_size);
let mut scanned_files = 0usize;
let mut anchor_state = AnchorState::new(anchor);
let mut more_matches_available = false;
let files = collect_flat_rollout_files(&root, &mut scanned_files).await?;
for (ts, id, path) in files.into_iter() {
if anchor_state.should_skip(ts, id) {
continue;
}
if items.len() == page_size {
more_matches_available = true;
break;
}
let updated_at = file_modified_time(&path)
.await
.unwrap_or(None)
.and_then(format_rfc3339);
if let Some(item) =
build_thread_item(path, allowed_sources, provider_matcher, updated_at).await
{
items.push(item);
}
}
let reached_scan_cap = scanned_files >= MAX_SCAN_FILES;
if reached_scan_cap && !items.is_empty() {
more_matches_available = true;
}
let next = if more_matches_available {
build_next_cursor(&items, ThreadSortKey::CreatedAt)
} else {
None
};
Ok(ThreadsPage {
items,
next_cursor: next,
num_scanned_files: scanned_files,
reached_scan_cap,
})
}
async fn traverse_flat_paths_updated(
root: PathBuf,
page_size: usize,
anchor: Option<Cursor>,
allowed_sources: &[SessionSource],
provider_matcher: Option<&ProviderMatcher<'_>>,
) -> io::Result<ThreadsPage> {
let mut items: Vec<ThreadItem> = Vec::with_capacity(page_size);
let mut scanned_files = 0usize;
let mut anchor_state = AnchorState::new(anchor);
let mut more_matches_available = false;
let candidates = collect_flat_files_by_updated_at(&root, &mut scanned_files).await?;
let mut candidates = candidates;
candidates.sort_by_key(|candidate| {
let ts = candidate.updated_at.unwrap_or(OffsetDateTime::UNIX_EPOCH);
(Reverse(ts), Reverse(candidate.id))
});
for candidate in candidates.into_iter() {
let ts = candidate.updated_at.unwrap_or(OffsetDateTime::UNIX_EPOCH);
if anchor_state.should_skip(ts, candidate.id) {
continue;
}
if items.len() == page_size {
more_matches_available = true;
break;
}
let updated_at_fallback = candidate.updated_at.and_then(format_rfc3339);
if let Some(item) = build_thread_item(
candidate.path,
allowed_sources,
provider_matcher,
updated_at_fallback,
)
.await
{
items.push(item);
}
}
let reached_scan_cap = scanned_files >= MAX_SCAN_FILES;
if reached_scan_cap && !items.is_empty() {
more_matches_available = true;
}
let next = if more_matches_available {
build_next_cursor(&items, ThreadSortKey::UpdatedAt)
} else {
None
};
Ok(ThreadsPage {
items,
next_cursor: next,
num_scanned_files: scanned_files,
reached_scan_cap,
})
}
/// Pagination cursor token format: "<ts>|<uuid>" where `ts` uses
/// YYYY-MM-DDThh-mm-ss (UTC, second precision).
/// The cursor orders files by the requested sort key (timestamp desc, then UUID desc).
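/// Example (illustrative, not from the diff): a rollout recorded at 2025-05-01 10:30:00 UTC
/// with UUID 67e55044-10b1-426f-9247-bb680e5fe0c8 yields the cursor token
/// "2025-05-01T10-30-00|67e55044-10b1-426f-9247-bb680e5fe0c8".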
@@ -558,6 +738,44 @@ where
Ok(collected)
}
async fn collect_flat_rollout_files(
root: &Path,
scanned_files: &mut usize,
) -> io::Result<Vec<(OffsetDateTime, Uuid, PathBuf)>> {
let mut dir = tokio::fs::read_dir(root).await?;
let mut collected = Vec::new();
while let Some(entry) = dir.next_entry().await? {
if *scanned_files >= MAX_SCAN_FILES {
break;
}
if !entry
.file_type()
.await
.map(|ft| ft.is_file())
.unwrap_or(false)
{
continue;
}
let file_name = entry.file_name();
let Some(name_str) = file_name.to_str() else {
continue;
};
if !name_str.starts_with("rollout-") || !name_str.ends_with(".jsonl") {
continue;
}
let Some((ts, id)) = parse_timestamp_uuid_from_filename(name_str) else {
continue;
};
*scanned_files += 1;
if *scanned_files > MAX_SCAN_FILES {
break;
}
collected.push((ts, id, entry.path()));
}
collected.sort_by_key(|(ts, sid, _path)| (Reverse(*ts), Reverse(*sid)));
Ok(collected)
}
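// Sketch (not from the diff): the filter above accepts files matching the
// rollout-<timestamp>-<uuid>.jsonl naming used by the recorder, e.g.
//   rollout-2025-05-01T10-30-00-67e55044-10b1-426f-9247-bb680e5fe0c8.jsonl
// Non-matching entries are skipped without counting toward MAX_SCAN_FILES.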
async fn collect_rollout_day_files(
day_path: &Path,
) -> io::Result<Vec<(OffsetDateTime, Uuid, PathBuf)>> {
@@ -610,6 +828,49 @@ async fn collect_files_by_updated_at(
Ok(candidates)
}
async fn collect_flat_files_by_updated_at(
root: &Path,
scanned_files: &mut usize,
) -> io::Result<Vec<ThreadCandidate>> {
let mut candidates = Vec::new();
let mut dir = tokio::fs::read_dir(root).await?;
while let Some(entry) = dir.next_entry().await? {
if *scanned_files >= MAX_SCAN_FILES {
break;
}
if !entry
.file_type()
.await
.map(|ft| ft.is_file())
.unwrap_or(false)
{
continue;
}
let file_name = entry.file_name();
let Some(name_str) = file_name.to_str() else {
continue;
};
if !name_str.starts_with("rollout-") || !name_str.ends_with(".jsonl") {
continue;
}
let Some((_ts, id)) = parse_timestamp_uuid_from_filename(name_str) else {
continue;
};
*scanned_files += 1;
if *scanned_files > MAX_SCAN_FILES {
break;
}
let updated_at = file_modified_time(&entry.path()).await.unwrap_or(None);
candidates.push(ThreadCandidate {
path: entry.path(),
id,
updated_at,
});
}
Ok(candidates)
}
async fn walk_rollout_files(
root: &Path,
scanned_files: &mut usize,
@@ -683,14 +944,20 @@ async fn read_head_summary(path: &Path, head_limit: usize) -> io::Result<HeadTai
let reader = tokio::io::BufReader::new(file);
let mut lines = reader.lines();
let mut summary = HeadTailSummary::default();
let mut lines_scanned = 0usize;
while summary.head.len() < head_limit {
while lines_scanned < head_limit
|| (summary.saw_session_meta
&& !summary.saw_user_event
&& lines_scanned < head_limit + USER_EVENT_SCAN_LIMIT)
{
let line_opt = lines.next_line().await?;
let Some(line) = line_opt else { break };
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
lines_scanned += 1;
let parsed: Result<RolloutLine, _> = serde_json::from_str(trimmed);
let Ok(rollout_line) = parsed else { continue };
@@ -703,9 +970,11 @@ async fn read_head_summary(path: &Path, head_limit: usize) -> io::Result<HeadTai
.created_at
.clone()
.or_else(|| Some(rollout_line.timestamp.clone()));
if let Ok(val) = serde_json::to_value(session_meta_line) {
summary.saw_session_meta = true;
if summary.head.len() < head_limit
&& let Ok(val) = serde_json::to_value(session_meta_line)
{
summary.head.push(val);
summary.saw_session_meta = true;
}
}
RolloutItem::ResponseItem(item) => {
@@ -713,7 +982,9 @@ async fn read_head_summary(path: &Path, head_limit: usize) -> io::Result<HeadTai
.created_at
.clone()
.or_else(|| Some(rollout_line.timestamp.clone()));
if let Ok(val) = serde_json::to_value(item) {
if summary.head.len() < head_limit
&& let Ok(val) = serde_json::to_value(item)
{
summary.head.push(val);
}
}

View File

@@ -19,11 +19,15 @@ use tokio::sync::oneshot;
use tracing::info;
use tracing::warn;
use super::ARCHIVED_SESSIONS_SUBDIR;
use super::SESSIONS_SUBDIR;
use super::list::Cursor;
use super::list::ThreadListConfig;
use super::list::ThreadListLayout;
use super::list::ThreadSortKey;
use super::list::ThreadsPage;
use super::list::get_threads;
use super::list::get_threads_in_root;
use super::policy::is_persisted_response_item;
use crate::config::Config;
use crate::default_client::originator;
@@ -119,6 +123,32 @@ impl RolloutRecorder {
.await
}
/// List archived threads (rollout files) under the archived sessions directory.
pub async fn list_archived_threads(
codex_home: &Path,
page_size: usize,
cursor: Option<&Cursor>,
sort_key: ThreadSortKey,
allowed_sources: &[SessionSource],
model_providers: Option<&[String]>,
default_provider: &str,
) -> std::io::Result<ThreadsPage> {
let root = codex_home.join(ARCHIVED_SESSIONS_SUBDIR);
get_threads_in_root(
root,
page_size,
cursor,
sort_key,
ThreadListConfig {
allowed_sources,
model_providers,
default_provider,
layout: ThreadListLayout::Flat,
},
)
.await
}
/// Find the newest recorded thread path, optionally filtering to a matching cwd.
#[allow(clippy::too_many_arguments)]
pub async fn find_latest_thread_path(

View File

@@ -131,6 +131,63 @@ fn write_session_file_with_provider(
Ok((dt, uuid))
}
fn write_session_file_with_delayed_user_event(
root: &Path,
ts_str: &str,
uuid: Uuid,
meta_lines_before_user: usize,
) -> std::io::Result<()> {
let format: &[FormatItem] =
format_description!("[year]-[month]-[day]T[hour]-[minute]-[second]");
let dt = PrimitiveDateTime::parse(ts_str, format)
.unwrap()
.assume_utc();
let dir = root
.join("sessions")
.join(format!("{:04}", dt.year()))
.join(format!("{:02}", u8::from(dt.month())))
.join(format!("{:02}", dt.day()));
fs::create_dir_all(&dir)?;
let filename = format!("rollout-{ts_str}-{uuid}.jsonl");
let file_path = dir.join(filename);
let mut file = File::create(file_path)?;
for i in 0..meta_lines_before_user {
let id = if i == 0 {
uuid
} else {
Uuid::from_u128(100 + i as u128)
};
let payload = serde_json::json!({
"id": id,
"timestamp": ts_str,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version",
"source": "vscode",
"model_provider": "test-provider",
});
let meta = serde_json::json!({
"timestamp": ts_str,
"type": "session_meta",
"payload": payload,
});
writeln!(file, "{meta}")?;
}
let user_event = serde_json::json!({
"timestamp": ts_str,
"type": "event_msg",
"payload": {"type": "user_message", "message": "Hello from user", "kind": "plain"}
});
writeln!(file, "{user_event}")?;
let times = FileTimes::new().set_modified(dt.into());
file.set_times(times)?;
Ok(())
}
fn write_session_file_with_meta_payload(
root: &Path,
ts_str: &str,
@@ -539,6 +596,31 @@ async fn test_pagination_cursor() {
assert_eq!(page3, expected_page3);
}
#[tokio::test]
async fn test_list_threads_scans_past_head_for_user_event() {
let temp = TempDir::new().unwrap();
let home = temp.path();
let uuid = Uuid::from_u128(99);
let ts = "2025-05-01T10-30-00";
write_session_file_with_delayed_user_event(home, ts, uuid, 12).unwrap();
let provider_filter = provider_vec(&[TEST_PROVIDER]);
let page = get_threads(
home,
10,
None,
ThreadSortKey::CreatedAt,
INTERACTIVE_SESSION_SOURCES,
Some(provider_filter.as_slice()),
TEST_PROVIDER,
)
.await
.unwrap();
assert_eq!(page.items.len(), 1);
}
#[tokio::test]
async fn test_get_thread_contents() {
let temp = TempDir::new().unwrap();
@@ -806,6 +888,7 @@ async fn test_updated_at_uses_file_mtime() -> Result<()> {
content: vec![ContentItem::OutputText {
text: format!("reply-{idx}"),
}],
end_turn: None,
}),
};
writeln!(file, "{}", serde_json::to_string(&response_line)?)?;

View File

@@ -85,6 +85,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
end_turn: None,
}
}
@@ -95,6 +96,7 @@ mod tests {
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
end_turn: None,
}
}

View File

@@ -1,9 +1,9 @@
use crate::shell_snapshot::ShellSnapshot;
use serde::Deserialize;
use serde::Serialize;
use std::path::PathBuf;
use std::sync::Arc;
use crate::shell_snapshot::ShellSnapshot;
use tokio::sync::watch;
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub enum ShellType {
@@ -14,12 +14,16 @@ pub enum ShellType {
Cmd,
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Shell {
pub(crate) shell_type: ShellType,
pub(crate) shell_path: PathBuf,
#[serde(skip_serializing, skip_deserializing, default)]
pub(crate) shell_snapshot: Option<Arc<ShellSnapshot>>,
#[serde(
skip_serializing,
skip_deserializing,
default = "empty_shell_snapshot_receiver"
)]
pub(crate) shell_snapshot: watch::Receiver<Option<Arc<ShellSnapshot>>>,
}
impl Shell {
@@ -63,8 +67,26 @@ impl Shell {
}
}
}
/// Return the shell snapshot, if one exists.
pub fn shell_snapshot(&self) -> Option<Arc<ShellSnapshot>> {
self.shell_snapshot.borrow().clone()
}
}
pub(crate) fn empty_shell_snapshot_receiver() -> watch::Receiver<Option<Arc<ShellSnapshot>>> {
let (_tx, rx) = watch::channel(None);
rx
}
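// Sketch (not part of the diff): the watch channel lets snapshotting finish asynchronously
// while Shell stays cheaply cloneable; receivers cloned along with a live Shell observe the
// eventual Some(Arc<ShellSnapshot>) value, whereas deserialized or fallback Shells use this
// empty receiver and always report None from shell_snapshot().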
impl PartialEq for Shell {
fn eq(&self, other: &Self) -> bool {
self.shell_type == other.shell_type && self.shell_path == other.shell_path
}
}
impl Eq for Shell {}
#[cfg(unix)]
fn get_user_shell_path() -> Option<PathBuf> {
use libc::getpwuid;
@@ -139,7 +161,7 @@ fn get_zsh_shell(path: Option<&PathBuf>) -> Option<Shell> {
shell_path.map(|shell_path| Shell {
shell_type: ShellType::Zsh,
shell_path,
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
})
}
@@ -149,7 +171,7 @@ fn get_bash_shell(path: Option<&PathBuf>) -> Option<Shell> {
shell_path.map(|shell_path| Shell {
shell_type: ShellType::Bash,
shell_path,
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
})
}
@@ -159,7 +181,7 @@ fn get_sh_shell(path: Option<&PathBuf>) -> Option<Shell> {
shell_path.map(|shell_path| Shell {
shell_type: ShellType::Sh,
shell_path,
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
})
}
@@ -175,7 +197,7 @@ fn get_powershell_shell(path: Option<&PathBuf>) -> Option<Shell> {
shell_path.map(|shell_path| Shell {
shell_type: ShellType::PowerShell,
shell_path,
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
})
}
@@ -185,7 +207,7 @@ fn get_cmd_shell(path: Option<&PathBuf>) -> Option<Shell> {
shell_path.map(|shell_path| Shell {
shell_type: ShellType::Cmd,
shell_path,
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
})
}
@@ -194,13 +216,13 @@ fn ultimate_fallback_shell() -> Shell {
Shell {
shell_type: ShellType::Cmd,
shell_path: PathBuf::from("cmd.exe"),
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
}
} else {
Shell {
shell_type: ShellType::Sh,
shell_path: PathBuf::from("/bin/sh"),
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
}
}
}
@@ -426,7 +448,7 @@ mod tests {
let test_bash_shell = Shell {
shell_type: ShellType::Bash,
shell_path: PathBuf::from("/bin/bash"),
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
};
assert_eq!(
test_bash_shell.derive_exec_args("echo hello", false),
@@ -440,7 +462,7 @@ mod tests {
let test_zsh_shell = Shell {
shell_type: ShellType::Zsh,
shell_path: PathBuf::from("/bin/zsh"),
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
};
assert_eq!(
test_zsh_shell.derive_exec_args("echo hello", false),
@@ -454,7 +476,7 @@ mod tests {
let test_powershell_shell = Shell {
shell_type: ShellType::PowerShell,
shell_path: PathBuf::from("pwsh.exe"),
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
};
assert_eq!(
test_powershell_shell.derive_exec_args("echo hello", false),
@@ -481,7 +503,7 @@ mod tests {
Shell {
shell_type: ShellType::Zsh,
shell_path: PathBuf::from(shell_path),
shell_snapshot: None,
shell_snapshot: empty_shell_snapshot_receiver(),
}
);
}

View File

@@ -1,6 +1,7 @@
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use std::time::SystemTime;
@@ -12,9 +13,11 @@ use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use codex_otel::OtelManager;
use codex_protocol::ThreadId;
use tokio::fs;
use tokio::process::Command;
use tokio::sync::watch;
use tokio::time::timeout;
#[derive(Clone, Debug, PartialEq, Eq)]
@@ -25,9 +28,34 @@ pub struct ShellSnapshot {
const SNAPSHOT_TIMEOUT: Duration = Duration::from_secs(10);
const SNAPSHOT_RETENTION: Duration = Duration::from_secs(60 * 60 * 24 * 7); // 7 days retention.
const SNAPSHOT_DIR: &str = "shell_snapshots";
const EXCLUDED_EXPORT_VARS: &[&str] = &["PWD", "OLDPWD"];
impl ShellSnapshot {
pub async fn try_new(codex_home: &Path, session_id: ThreadId, shell: &Shell) -> Option<Self> {
pub fn start_snapshotting(
codex_home: PathBuf,
session_id: ThreadId,
shell: &mut Shell,
otel_manager: OtelManager,
) {
let (shell_snapshot_tx, shell_snapshot_rx) = watch::channel(None);
shell.shell_snapshot = shell_snapshot_rx;
let snapshot_shell = shell.clone();
let snapshot_session_id = session_id;
tokio::spawn(async move {
let timer = otel_manager.start_timer("codex.shell_snapshot.duration_ms", &[]);
let snapshot =
ShellSnapshot::try_new(&codex_home, snapshot_session_id, &snapshot_shell)
.await
.map(Arc::new);
let success = if snapshot.is_some() { "true" } else { "false" };
let _ = timer.map(|timer| timer.record(&[("success", success)]));
otel_manager.counter("codex.shell_snapshot", 1, &[("success", success)]);
let _ = shell_snapshot_tx.send(snapshot);
});
}
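// Sketch (not from the diff): callers keep the Shell (or a clone of it) and read the result
// later via Shell::shell_snapshot(), which returns Some(Arc<ShellSnapshot>) once the spawned
// task succeeds and None while it is still running, if snapshotting is unsupported for the
// shell type, or if capture/validation failed.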
async fn try_new(codex_home: &Path, session_id: ThreadId, shell: &Shell) -> Option<Self> {
// File to store the snapshot
let extension = match shell.shell_type {
ShellType::PowerShell => "ps1",
@@ -47,7 +75,7 @@ impl ShellSnapshot {
});
// Make the new snapshot.
match write_shell_snapshot(shell.shell_type.clone(), &path).await {
let snapshot = match write_shell_snapshot(shell.shell_type.clone(), &path).await {
Ok(path) => {
tracing::info!("Shell snapshot successfully created: {}", path.display());
Some(Self { path })
@@ -59,7 +87,16 @@ impl ShellSnapshot {
);
None
}
};
if let Some(snapshot) = snapshot.as_ref()
&& let Err(err) = validate_snapshot(shell, &snapshot.path).await
{
tracing::error!("Shell snapshot validation failed: {err:?}");
return None;
}
snapshot
}
}
@@ -74,7 +111,7 @@ impl Drop for ShellSnapshot {
}
}
pub async fn write_shell_snapshot(shell_type: ShellType, output_path: &Path) -> Result<PathBuf> {
async fn write_shell_snapshot(shell_type: ShellType, output_path: &Path) -> Result<PathBuf> {
if shell_type == ShellType::PowerShell || shell_type == ShellType::Cmd {
bail!("Shell snapshot not supported yet for {shell_type:?}");
}
@@ -102,9 +139,9 @@ pub async fn write_shell_snapshot(shell_type: ShellType, output_path: &Path) ->
async fn capture_snapshot(shell: &Shell) -> Result<String> {
let shell_type = shell.shell_type.clone();
match shell_type {
ShellType::Zsh => run_shell_script(shell, zsh_snapshot_script()).await,
ShellType::Bash => run_shell_script(shell, bash_snapshot_script()).await,
ShellType::Sh => run_shell_script(shell, sh_snapshot_script()).await,
ShellType::Zsh => run_shell_script(shell, &zsh_snapshot_script()).await,
ShellType::Bash => run_shell_script(shell, &bash_snapshot_script()).await,
ShellType::Sh => run_shell_script(shell, &sh_snapshot_script()).await,
ShellType::PowerShell => run_shell_script(shell, powershell_snapshot_script()).await,
ShellType::Cmd => bail!("Shell snapshotting is not yet supported for {shell_type:?}"),
}
@@ -119,16 +156,25 @@ fn strip_snapshot_preamble(snapshot: &str) -> Result<String> {
Ok(snapshot[start..].to_string())
}
async fn run_shell_script(shell: &Shell, script: &str) -> Result<String> {
run_shell_script_with_timeout(shell, script, SNAPSHOT_TIMEOUT).await
async fn validate_snapshot(shell: &Shell, snapshot_path: &Path) -> Result<()> {
let snapshot_path_display = snapshot_path.display();
let script = format!("set -e; . \"{snapshot_path_display}\"");
run_script_with_timeout(shell, &script, SNAPSHOT_TIMEOUT, false)
.await
.map(|_| ())
}
async fn run_shell_script_with_timeout(
async fn run_shell_script(shell: &Shell, script: &str) -> Result<String> {
run_script_with_timeout(shell, script, SNAPSHOT_TIMEOUT, true).await
}
async fn run_script_with_timeout(
shell: &Shell,
script: &str,
snapshot_timeout: Duration,
use_login_shell: bool,
) -> Result<String> {
let args = shell.derive_exec_args(script, true);
let args = shell.derive_exec_args(script, use_login_shell);
let shell_name = shell.name();
// Handler is kept as guard to control the drop. The `mut` pattern is required because .args()
@@ -157,8 +203,13 @@ async fn run_shell_script_with_timeout(
Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}
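The new validation step sources the freshly written snapshot under `set -e` in a non-login shell, treating a non-zero exit or a hang past the deadline as failure. A standalone sketch of that check with plain `tokio::process::Command` (Unix-only example; the paths and the 10-second timeout are illustrative, not the crate's configuration):

```rust
use std::path::Path;
use std::time::Duration;

use anyhow::{bail, Context, Result};
use tokio::process::Command;
use tokio::time::timeout;

// Source the snapshot in a non-login shell; a non-zero exit status or a hang
// past the deadline both count as a validation failure.
async fn validate_snapshot(shell_path: &str, snapshot: &Path) -> Result<()> {
    let script = format!("set -e; . \"{}\"", snapshot.display());
    let output = timeout(
        Duration::from_secs(10),
        Command::new(shell_path).arg("-c").arg(script).output(),
    )
    .await
    .context("snapshot validation timed out")??;
    if !output.status.success() {
        bail!(
            "snapshot failed to source: {}",
            String::from_utf8_lossy(&output.stderr)
        );
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    // Unix-only demo: write a trivial snapshot, then validate it.
    std::fs::write("/tmp/snapshot.sh", "export EXAMPLE=1\n")?;
    validate_snapshot("/bin/bash", Path::new("/tmp/snapshot.sh")).await
}
```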
fn zsh_snapshot_script() -> &'static str {
r##"if [[ -n "$ZDOTDIR" ]]; then
fn excluded_exports_regex() -> String {
EXCLUDED_EXPORT_VARS.join("|")
}
fn zsh_snapshot_script() -> String {
let excluded = excluded_exports_regex();
let script = r##"if [[ -n "$ZDOTDIR" ]]; then
rc="$ZDOTDIR/.zshrc"
else
rc="$HOME/.zshrc"
@@ -178,14 +229,31 @@ alias_count=$(alias -L | wc -l | tr -d ' ')
print "# aliases $alias_count"
alias -L
print ''
export_count=$(export -p | wc -l | tr -d ' ')
export_lines=$(export -p | awk '
/^(export|declare -x|typeset -x) / {
line=$0
name=line
sub(/^(export|declare -x|typeset -x) /, "", name)
sub(/=.*/, "", name)
if (name ~ /^(EXCLUDED_EXPORTS)$/) {
next
}
if (name ~ /^[A-Za-z_][A-Za-z0-9_]*$/) {
print line
}
}')
export_count=$(printf '%s\n' "$export_lines" | sed '/^$/d' | wc -l | tr -d ' ')
print "# exports $export_count"
export -p
"##
if [[ -n "$export_lines" ]]; then
print -r -- "$export_lines"
fi
"##;
script.replace("EXCLUDED_EXPORTS", &excluded)
}
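The `EXCLUDED_EXPORTS` token inside the embedded awk program is replaced at runtime with an alternation built from `EXCLUDED_EXPORT_VARS`. A tiny self-contained check of what that substitution produces (the constant and helper mirror this diff; the check itself is only illustrative):

```rust
const EXCLUDED_EXPORT_VARS: &[&str] = &["PWD", "OLDPWD"];

fn excluded_exports_regex() -> String {
    EXCLUDED_EXPORT_VARS.join("|")
}

fn main() {
    // The awk filter in each snapshot script skips any export whose name
    // matches this alternation once the placeholder is substituted.
    let template = r#"if (name ~ /^(EXCLUDED_EXPORTS)$/) { next }"#;
    let rendered = template.replace("EXCLUDED_EXPORTS", &excluded_exports_regex());
    assert_eq!(rendered, r#"if (name ~ /^(PWD|OLDPWD)$/) { next }"#);
    println!("{rendered}");
}
```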
fn bash_snapshot_script() -> &'static str {
r##"if [ -z "$BASH_ENV" ] && [ -r "$HOME/.bashrc" ]; then
fn bash_snapshot_script() -> String {
let excluded = excluded_exports_regex();
let script = r##"if [ -z "$BASH_ENV" ] && [ -r "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
echo '# Snapshot file'
@@ -205,14 +273,31 @@ alias_count=$(alias -p | wc -l | tr -d ' ')
echo "# aliases $alias_count"
alias -p
echo ''
export_count=$(export -p | wc -l | tr -d ' ')
export_lines=$(export -p | awk '
/^(export|declare -x|typeset -x) / {
line=$0
name=line
sub(/^(export|declare -x|typeset -x) /, "", name)
sub(/=.*/, "", name)
if (name ~ /^(EXCLUDED_EXPORTS)$/) {
next
}
if (name ~ /^[A-Za-z_][A-Za-z0-9_]*$/) {
print line
}
}')
export_count=$(printf '%s\n' "$export_lines" | sed '/^$/d' | wc -l | tr -d ' ')
echo "# exports $export_count"
export -p
"##
if [ -n "$export_lines" ]; then
printf '%s\n' "$export_lines"
fi
"##;
script.replace("EXCLUDED_EXPORTS", &excluded)
}
fn sh_snapshot_script() -> &'static str {
r##"if [ -n "$ENV" ] && [ -r "$ENV" ]; then
fn sh_snapshot_script() -> String {
let excluded = excluded_exports_regex();
let script = r##"if [ -n "$ENV" ] && [ -r "$ENV" ]; then
. "$ENV"
fi
echo '# Snapshot file'
@@ -245,18 +330,37 @@ else
echo '# aliases 0'
fi
if export -p >/dev/null 2>&1; then
export_count=$(export -p | wc -l | tr -d ' ')
export_lines=$(export -p | awk '
/^(export|declare -x|typeset -x) / {
line=$0
name=line
sub(/^(export|declare -x|typeset -x) /, "", name)
sub(/=.*/, "", name)
if (name ~ /^(EXCLUDED_EXPORTS)$/) {
next
}
if (name ~ /^[A-Za-z_][A-Za-z0-9_]*$/) {
print line
}
}')
export_count=$(printf '%s\n' "$export_lines" | sed '/^$/d' | wc -l | tr -d ' ')
echo "# exports $export_count"
export -p
if [ -n "$export_lines" ]; then
printf '%s\n' "$export_lines"
fi
else
export_count=$(env | wc -l | tr -d ' ')
export_count=$(env | sort | awk -F= '$1 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ { count++ } END { print count }')
echo "# exports $export_count"
env | sort | while IFS='=' read -r key value; do
case "$key" in
""|[0-9]*|*[!A-Za-z0-9_]*|EXCLUDED_EXPORTS) continue ;;
esac
escaped=$(printf "%s" "$value" | sed "s/'/'\"'\"'/g")
printf "export %s='%s'\n" "$key" "$escaped"
done
fi
"##
"##;
script.replace("EXCLUDED_EXPORTS", &excluded)
}
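The densest line in the POSIX `env` fallback is the single-quote escaping done with `sed`. The same rule rendered in Rust, purely for illustration (not code from this diff), looks like this:

```rust
// POSIX shells cannot escape a single quote inside single quotes, so the
// usual workaround is to close the quote, emit an escaped quote, and reopen:
// ' becomes '\''. The sed call above uses the equivalent '"'"' spelling.
fn shell_single_quote(value: &str) -> String {
    format!("'{}'", value.replace('\'', r#"'\''"#))
}

fn main() {
    let value = "it's complicated";
    // Prints: export EXAMPLE='it'\''s complicated'
    println!("export EXAMPLE={}", shell_single_quote(value));
}
```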
fn powershell_snapshot_script() -> &'static str {
@@ -362,6 +466,8 @@ mod tests {
use std::os::unix::ffi::OsStrExt;
#[cfg(target_os = "linux")]
use std::os::unix::fs::PermissionsExt;
#[cfg(unix)]
use std::process::Command;
#[cfg(target_os = "linux")]
use std::process::Command as StdCommand;
@@ -400,6 +506,30 @@ mod tests {
assert!(result.is_err());
}
#[cfg(unix)]
#[test]
fn bash_snapshot_filters_invalid_exports() -> Result<()> {
let output = Command::new("/bin/bash")
.arg("-c")
.arg(bash_snapshot_script())
.env("BASH_ENV", "/dev/null")
.env("VALID_NAME", "ok")
.env("PWD", "/tmp/stale")
.env("NEXTEST_BIN_EXE_codex-write-config-schema", "/path/to/bin")
.env("BAD-NAME", "broken")
.output()?;
assert!(output.status.success());
let stdout = String::from_utf8_lossy(&output.stdout);
assert!(stdout.contains("VALID_NAME"));
assert!(!stdout.contains("PWD=/tmp/stale"));
assert!(!stdout.contains("NEXTEST_BIN_EXE_codex-write-config-schema"));
assert!(!stdout.contains("BAD-NAME"));
Ok(())
}
#[cfg(unix)]
#[tokio::test]
async fn try_new_creates_and_deletes_snapshot_file() -> Result<()> {
@@ -407,7 +537,7 @@ mod tests {
let shell = Shell {
shell_type: ShellType::Bash,
shell_path: PathBuf::from("/bin/bash"),
shell_snapshot: None,
shell_snapshot: crate::shell::empty_shell_snapshot_receiver(),
};
let snapshot = ShellSnapshot::try_new(dir.path(), ThreadId::new(), &shell)
@@ -449,10 +579,10 @@ mod tests {
let shell = Shell {
shell_type: ShellType::Sh,
shell_path,
shell_snapshot: None,
shell_snapshot: crate::shell::empty_shell_snapshot_receiver(),
};
let err = run_shell_script_with_timeout(&shell, "ignored", Duration::from_millis(500))
let err = run_script_with_timeout(&shell, "ignored", Duration::from_millis(500), true)
.await
.expect_err("snapshot shell should time out");
assert!(

View File

@@ -1,5 +1,6 @@
use crate::config::Config;
use crate::config_loader::ConfigLayerStack;
use crate::config_loader::ConfigLayerStackOrdering;
use crate::skills::model::SkillError;
use crate::skills::model::SkillInterface;
use crate::skills::model::SkillLoadOutcome;
@@ -133,7 +134,9 @@ where
fn skill_roots_from_layer_stack_inner(config_layer_stack: &ConfigLayerStack) -> Vec<SkillRoot> {
let mut roots = Vec::new();
for layer in config_layer_stack.layers_high_to_low() {
for layer in
config_layer_stack.get_layers(ConfigLayerStackOrdering::HighestPrecedenceFirst, true)
{
let Some(config_folder) = layer.config_folder() else {
continue;
};
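Skill discovery now asks the layer stack for layers in highest-precedence-first order while including disabled layers (the trailing `true`). The accessor's real implementation is not in this excerpt; one plausible shape, inferred purely from the call site and the new test below, is sketched here with simplified stand-in types:

```rust
// Every type and field here is a stand-in inferred from the call site; the
// real ConfigLayerStack API may differ.
#[allow(dead_code)] // only one ordering is exercised in this sketch
#[derive(Clone, Copy)]
enum ConfigLayerStackOrdering {
    HighestPrecedenceFirst,
    LowestPrecedenceFirst,
}

struct ConfigLayerEntry {
    name: String,
    disabled: bool,
}

struct ConfigLayerStack {
    // Stored lowest precedence first in this sketch.
    layers: Vec<ConfigLayerEntry>,
}

impl ConfigLayerStack {
    // `include_disabled` lets callers such as skill discovery still see layers
    // that were disabled (e.g. an untrusted project's `.codex` folder).
    fn get_layers(
        &self,
        ordering: ConfigLayerStackOrdering,
        include_disabled: bool,
    ) -> Vec<&ConfigLayerEntry> {
        let iter = self
            .layers
            .iter()
            .filter(move |layer| include_disabled || !layer.disabled);
        match ordering {
            ConfigLayerStackOrdering::LowestPrecedenceFirst => iter.collect(),
            ConfigLayerStackOrdering::HighestPrecedenceFirst => {
                let mut layers: Vec<_> = iter.collect();
                layers.reverse();
                layers
            }
        }
    }
}

fn main() {
    let stack = ConfigLayerStack {
        layers: vec![
            ConfigLayerEntry { name: "user".into(), disabled: false },
            ConfigLayerEntry { name: "project".into(), disabled: true },
        ],
    };
    for layer in stack.get_layers(ConfigLayerStackOrdering::HighestPrecedenceFirst, true) {
        println!("{} (disabled: {})", layer.name, layer.disabled);
    }
}
```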
@@ -575,11 +578,17 @@ mod tests {
}
async fn make_config_for_cwd(codex_home: &TempDir, cwd: PathBuf) -> Config {
let trust_root = cwd
.ancestors()
.find(|ancestor| ancestor.join(".git").exists())
.map(Path::to_path_buf)
.unwrap_or_else(|| cwd.clone());
fs::write(
codex_home.path().join(CONFIG_TOML_FILE),
toml::to_string(&ConfigToml {
projects: Some(HashMap::from([(
cwd.to_string_lossy().to_string(),
trust_root.to_string_lossy().to_string(),
ProjectConfig {
trust_level: Some(TrustLevel::Trusted),
},
@@ -663,6 +672,59 @@ mod tests {
Ok(())
}
#[test]
fn skill_roots_from_layer_stack_includes_disabled_project_layers() -> anyhow::Result<()> {
let tmp = tempfile::tempdir()?;
let user_folder = tmp.path().join("home/codex");
fs::create_dir_all(&user_folder)?;
let project_root = tmp.path().join("repo");
let dot_codex = project_root.join(".codex");
fs::create_dir_all(&dot_codex)?;
let user_file = AbsolutePathBuf::from_absolute_path(user_folder.join("config.toml"))?;
let project_dot_codex = AbsolutePathBuf::from_absolute_path(&dot_codex)?;
let layers = vec![
ConfigLayerEntry::new(
ConfigLayerSource::User { file: user_file },
TomlValue::Table(toml::map::Map::new()),
),
ConfigLayerEntry::new_disabled(
ConfigLayerSource::Project {
dot_codex_folder: project_dot_codex,
},
TomlValue::Table(toml::map::Map::new()),
"marked untrusted",
),
];
let stack = ConfigLayerStack::new(
layers,
ConfigRequirements::default(),
ConfigRequirementsToml::default(),
)?;
let got = skill_roots_from_layer_stack(&stack)
.into_iter()
.map(|root| (root.scope, root.path))
.collect::<Vec<_>>();
assert_eq!(
got,
vec![
(SkillScope::Repo, dot_codex.join("skills")),
(SkillScope::User, user_folder.join("skills")),
(
SkillScope::System,
user_folder.join("skills").join(".system")
),
]
);
Ok(())
}
fn write_skill(codex_home: &TempDir, dir: &str, name: &str, description: &str) -> PathBuf {
write_skill_at(&codex_home.path().join("skills"), dir, name, description)
}

Some files were not shown because too many files have changed in this diff.