Compare commits

...

198 Commits

Author SHA1 Message Date
kevin zhao
876ade0445 serializing noMatch as an object 2025-11-19 21:34:39 +00:00
kevin zhao
2b8cdc7be3 renaming things 2025-11-19 21:19:01 +00:00
kevin zhao
20a4f95136 delete unnecessary test 2025-11-19 20:41:28 +00:00
kevin zhao
b6cd0a5f02 calling it run() 2025-11-19 20:37:44 +00:00
kevin zhao
9ec2873084 commonizing cli logic 2025-11-19 20:30:20 +00:00
zhao-oai
745e2a6790 Add execpolicycheck subcommand 2025-11-19 12:05:40 -08:00
kevin zhao
119b1855f3 another note on readme 2025-11-19 10:40:56 -08:00
kevin zhao
4f2ee5f94c improving readme 2025-11-19 10:39:53 -08:00
kevin zhao
3cb8a0068d update readmes pt.2 2025-11-19 10:29:27 -08:00
kevin zhao
de9b3fd75d update READMEs 2025-11-19 10:29:13 -08:00
kevin zhao
408cc0b0f2 Merge branch 'dev/zhao/execpolicy2-core-integration' of github.com:openai/codex into dev/zhao/execpolicy2-core-integration 2025-11-19 09:47:06 -08:00
kevin zhao
5a1e6defd9 do not ensure dir and default.codexpolicy 2025-11-19 09:46:12 -08:00
zhao-oai
db36ccbe35 /// for docstring
Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-11-19 09:04:37 -08:00
kevin zhao
9bb7589a36 with_escalated_permissions -> enum 2025-11-18 15:09:36 -08:00
kevin zhao
25bf30661b adding docstrings for ApprovalRequirement 2025-11-18 12:38:32 -08:00
kevin zhao
89271eccc5 using async 2025-11-18 12:31:05 -08:00
kevin zhao
a34b9fc259 load execpolicy from codex_home/policy and default to empty policy 2025-11-18 12:31:03 -08:00
zhao-oai
345050e1be running test single threaded
Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-11-18 12:30:45 -08:00
kevin zhao
b5241d7f38 enabling execpolicy by default 2025-11-18 12:30:44 -08:00
kevin zhao
48a2db1a5a update debug message 2025-11-18 12:30:44 -08:00
kevin zhao
9cbe84748e addressing comments 2025-11-18 12:30:44 -08:00
kevin zhao
cda6857fff remove unused import 2025-11-18 12:30:44 -08:00
kevin zhao
7ac303b051 fix rebase error 2025-11-18 12:30:44 -08:00
kevin zhao
c9a34cd493 undo diff 2025-11-18 12:30:44 -08:00
kevin zhao
f69b225f44 exec_policy 2025-11-18 12:30:44 -08:00
kevin zhao
f8dc20279b rename test file 2025-11-18 12:30:44 -08:00
kevin zhao
a13b81adea update comment 2025-11-18 12:30:44 -08:00
kevin zhao
512a6c3386 update tracing:: message 2025-11-18 12:30:43 -08:00
kevin zhao
3cd5c23910 execpolicy2 -> execpolicy 2025-11-18 12:30:41 -08:00
kevin zhao
c6c03aed22 execpolicy2 core integration 2025-11-18 12:30:19 -08:00
kevin zhao
3990d90e10 precompute approval_requirement 2025-11-18 12:30:19 -08:00
kevin zhao
f18fdc97b3 execpolicy2 core integration
fix PR

undo keyring store
2025-11-18 12:30:17 -08:00
iceweasel-oai
4bada5a84d Prompt to turn on windows sandbox when auto mode selected. (#6618)
- stop prompting users to install WSL
- prompt users to turn on the Windows sandbox when auto mode is requested.

<img width="1660" height="195" alt="Screenshot 2025-11-17 110612"
src="https://github.com/user-attachments/assets/c67fc239-a227-417e-94bb-599a8ed8f11e"
/>
<img width="1684" height="168" alt="Screenshot 2025-11-17 110637"
src="https://github.com/user-attachments/assets/d18c3370-830d-4971-8746-04757ae2f709"
/>
<img width="1655" height="293" alt="Screenshot 2025-11-17 110719"
src="https://github.com/user-attachments/assets/d21f6ce9-c23e-4842-baf6-8938b77c16db"
/>
2025-11-18 11:38:18 -08:00
Ahmed Ibrahim
3de8790714 Add the utility to truncate by tokens (#6746)
- This PR puts us on the path to truncating by tokens. That path will initially be used by unified exec and the context manager (mainly responsible for MCP calls).
- We are exposing a new config, `calls_output_max_tokens`.
- Use `tokens` as the main budget unit, but truncate based on the model family by introducing `TruncationPolicy`.
- Introduce `truncate_text` as a router for truncation based on the mode (sketched below).

In next PRs:
- remove truncate_with_line_bytes_budget
- Add the ability to the model to override the token budget.
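
As a rough illustration of the routing idea, here is a minimal sketch; the real `TruncationPolicy` and `truncate_text` in codex-core will differ in detail, and the whitespace tokenizer below is only a stand-in for the model family's tokenizer:

```rust
/// Budget for truncating tool output; a sketch, not the codex-core type.
enum TruncationPolicy {
    /// Legacy behavior: budget expressed in bytes.
    Bytes(usize),
    /// New behavior: budget expressed in model tokens.
    Tokens(usize),
}

/// Route to the right truncation strategy based on the policy mode.
fn truncate_text(text: &str, policy: &TruncationPolicy) -> String {
    match policy {
        TruncationPolicy::Bytes(max_bytes) => {
            // Cut at or below the byte budget, on a char boundary.
            let mut end = (*max_bytes).min(text.len());
            while !text.is_char_boundary(end) {
                end -= 1;
            }
            text[..end].to_string()
        }
        TruncationPolicy::Tokens(max_tokens) => {
            // Stand-in tokenizer: whitespace words. The real router would
            // pick a tokenizer by model family.
            text.split_whitespace()
                .take(*max_tokens)
                .collect::<Vec<_>>()
                .join(" ")
        }
    }
}
```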
2025-11-18 11:36:23 -08:00
Alejandro Peña
b035c604b0 Update faq.md section on supported models (#6832)
Update faq.md to recommend usage of GPT-5.1 Codex, the latest Codex
model from OpenAI.
2025-11-18 09:38:45 -08:00
zhao-oai
e9e644a119 fixing localshell tool calls (#6823)
- Local-shell tool responses were always tagged as `ExecCommandSource::UserShell` because the handler would call `run_exec_like` with `is_user_shell_cmd` set to true.
- Treat `ToolPayload::LocalShell` the same as other model-generated shell tool calls by deleting `is_user_shell_cmd` from `run_exec_like` (actual user shell commands follow a separate code path).
2025-11-18 17:28:26 +00:00
jif-oai
f5d9939cda feat: enable parallel tool calls (#6796) 2025-11-18 17:10:14 +00:00
jif-oai
838531d3e4 feat: remote compaction (#6795)
Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-11-18 16:51:16 +00:00
jif-oai
0eb2e6f9ee nit: app server (#6830) 2025-11-18 16:34:13 +00:00
jif-oai
c20df79a38 nit: mark ghost commit as stable (#6833) 2025-11-18 16:05:49 +00:00
jif-oai
fc55fd7a81 feat: git branch tooling (#6831) 2025-11-18 15:26:09 +00:00
Lael
f3d4e210d8 🐛 fix(rmcp-client): refresh OAuth tokens using expires_at (#6574)
## Summary
- persist OAuth credential expiry timestamps and rehydrate `expires_in`
- proactively refresh rmcp OAuth tokens when `expires_at` is near, then
persist

## Testing
- just fmt
- just fix -p codex-rmcp-client
- cargo test -p codex-rmcp-client

Fixes #6572
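
A minimal sketch of the expiry check this relies on, assuming the persisted `expires_at` is a timestamp; the names are illustrative rather than the actual codex-rmcp-client API:

```rust
use std::time::{Duration, SystemTime};

/// Illustrative shape of a persisted credential.
struct StoredToken {
    access_token: String,
    refresh_token: String,
    expires_at: SystemTime,
}

/// Refresh proactively when the token is within `skew` of expiring,
/// so a request never goes out with a token that dies mid-flight.
fn needs_refresh(token: &StoredToken, skew: Duration) -> bool {
    match token.expires_at.duration_since(SystemTime::now()) {
        Ok(remaining) => remaining <= skew,
        Err(_) => true, // already past expiry
    }
}
```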
2025-11-18 02:16:58 -05:00
Dylan Hurd
28ebe1c97a fix(windows) shell_command on windows, minor parsing (#6811)
## Summary
Enables shell_command for Windows users and starts adding some basic command parsing here, at minimum removing PowerShell prefixes. We'll follow this up with fuller command parsing, but I wanted to land this change separately with some basic UX.

**NOTE**: This implementation parses bash and powershell on both
platforms. In theory this is possible, since you can use git bash on
windows or powershell on linux. In practice, this may not be worth the
complexity of supporting, so I don't feel strongly about the current
approach vs. platform-specific branching.

## Testing
- [x] Added a bunch of tests 
- [x] Ran on both Windows and OS X
2025-11-17 22:23:53 -08:00
Dylan Hurd
2b7378ac77 chore(core) Add shell_serialization coverage (#6810)
## Summary
Similar to #6545, this PR updates the shell_serialization test suite to
cover the various `shell` tool invocations we have. Note that this does
not cover unified_exec, which has its own suite of tests. This should
provide some test coverage for when we eventually consolidate
serialization logic.

## Testing
- [x] These are tests
2025-11-17 19:10:56 -08:00
Ahmed Ibrahim
ddcc60a085 Update defaults to gpt-5.1 (#6652)
## Summary
- update documentation, example configs, and automation defaults to
reference gpt-5.1 / gpt-5.1-codex
- bump the CLI and core configuration defaults, model presets, and error
messaging to the new models while keeping the model-family/tool coverage
for legacy slugs
- refresh tests, fixtures, and TUI snapshots so they expect the upgraded
defaults

## Testing
- `cargo test -p codex-core
config::tests::test_precedence_fixture_with_gpt5_profile`


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6916c5b3c2b08321ace04ee38604fc6b)
2025-11-17 17:40:11 -08:00
cassirer-openai
8465f1f2f4 Demote function call payload log to debug to avoid noisy error-level stderr (#6808) 2025-11-18 01:16:11 +00:00
zhao-oai
7ab45487dd execpolicy2 extension (#6627)
- enabling the execpolicy2 parser to parse multiple policy files and build a combined `Policy` (useful if codex detects many `.codexpolicy` files)
- adding functionality to `Policy` to allow evaluating multiple commands at once (useful when we have chained commands)
2025-11-17 16:44:41 -08:00
Owen Lin
cecbd5b021 [app-server] feat: add v2 command execution approval flow (#6758)
This PR adds the API V2 version of the command‑execution approval flow
for the shell tool.

This PR wires the new RPC (`item/commandExecution/requestApproval`, V2
only) and related events (`item/started`, `item/completed`, and
`item/commandExecution/delta`, which are emitted in both V1 and V2)
through the app-server
protocol. The new approval RPC is only sent when the user initiates a
turn with the new `turn/start` API so we don't break backwards
compatibility with VSCE.

The approach I took was to make as few changes to the Codex core as
possible, leveraging existing `EventMsg` core events, and translating
those in app-server. I did have to add additional fields to
`EventMsg::ExecCommandEndEvent` to capture the command's input so that
app-server can statelessly transform these events to a
`ThreadItem::CommandExecution` item for the `item/completed` event.

Once we stabilize the API and it's complete enough for our partners, we
can work on migrating the core to be aware of command execution items as
a first-class concept.

**Note**: We'll need follow-up work to make sure these APIs work for the unified exec tool, but we'll wait until that's stable and landed before doing a pass on app-server.

Example payloads below:
```
{
  "method": "item/started",
  "params": {
    "item": {
      "aggregatedOutput": null,
      "command": "/bin/zsh -lc 'touch /tmp/should-trigger-approval'",
      "cwd": "/Users/owen/repos/codex/codex-rs",
      "durationMs": null,
      "exitCode": null,
      "id": "call_lNWWsbXl1e47qNaYjFRs0dyU",
      "parsedCmd": [
        {
          "cmd": "touch /tmp/should-trigger-approval",
          "type": "unknown"
        }
      ],
      "status": "inProgress",
      "type": "commandExecution"
    }
  }
}
```

```
{
  "id": 0,
  "method": "item/commandExecution/requestApproval",
  "params": {
    "itemId": "call_lNWWsbXl1e47qNaYjFRs0dyU",
    "parsedCmd": [
      {
        "cmd": "touch /tmp/should-trigger-approval",
        "type": "unknown"
      }
    ],
    "reason": "Need to create file in /tmp which is outside workspace sandbox",
    "risk": null,
    "threadId": "019a93e8-0a52-7fe3-9808-b6bc40c0989a",
    "turnId": "1"
  }
}
```

```
{
  "id": 0,
  "result": {
    "acceptSettings": {
      "forSession": false
    },
    "decision": "accept"
  }
}
```

```
{
  "params": {
    "item": {
      "aggregatedOutput": null,
      "command": "/bin/zsh -lc 'touch /tmp/should-trigger-approval'",
      "cwd": "/Users/owen/repos/codex/codex-rs",
      "durationMs": 224,
      "exitCode": 0,
      "id": "call_lNWWsbXl1e47qNaYjFRs0dyU",
      "parsedCmd": [
        {
          "cmd": "touch /tmp/should-trigger-approval",
          "type": "unknown"
        }
      ],
      "status": "completed",
      "type": "commandExecution"
    }
  }
}
```
2025-11-18 00:23:54 +00:00
zhao-oai
4000e26303 background rate limits fetch (#6789)
fetching rate limits every minute asynchronously
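
Conceptually this is a detached polling task; a sketch assuming a tokio runtime, with `fetch_rate_limits` standing in for the real client call:

```rust
use std::time::Duration;

async fn fetch_rate_limits() -> Result<(), std::io::Error> {
    Ok(()) // placeholder for the real HTTP call
}

/// Poll once a minute in the background; a failed fetch just waits
/// for the next tick instead of killing the loop.
async fn rate_limit_loop() {
    let mut interval = tokio::time::interval(Duration::from_secs(60));
    loop {
        interval.tick().await;
        if let Err(err) = fetch_rate_limits().await {
            tracing::debug!("rate limit fetch failed: {err}");
        }
    }
}
```

The loop would be started with something like `tokio::spawn(rate_limit_loop());` so it never blocks the main turn processing.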
2025-11-17 16:06:26 -08:00
iceweasel-oai
e032d338f2 move cap_sid file into ~/.codex so the sandbox cannot overwrite it (#6798)
The `cap_sid` file contains the IDs of the two custom SIDs that the
Windows sandbox creates/manages to implement read-only and
workspace-write sandbox policies.

It previously lived in `<cwd>/.codex` which means that the sandbox could
write to it, which could degrade the efficacy of the sandbox. This
change moves it to `~/.codex/` (or wherever `CODEX_HOME` points to) so
that it is outside the workspace.
2025-11-17 15:49:41 -08:00
Eric Traut
8bebe86a47 Fix TUI issues with Alt-Gr on Windows (#6799)
This PR fixes keyboard handling for the Right Alt (aka "Alt-Gr") key on
Windows. This key appears on keyboards in Central and Eastern Europe.
Codex has effectively never worked for Windows users in these regions
because the code didn't properly handle this key, which is used for
typing common symbols like `\` and `@`.

A few days ago, I merged a [community-authored PR](https://github.com/openai/codex/pull/6720) that supplied a partial fix for this issue. Upon closer inspection, that PR was 1) too broad (not scoped to Windows only) and 2) incomplete (it didn't fix all relevant code paths, so paste was still broken).

This improvement is based on another [community-provided
PR](https://github.com/openai/codex/pull/3241) by @marektomas-cz. He
submitted it back in September and later closed it because it didn't
receive any attention.

This fix addresses the following bugs: #5922, #3046, #3092, #3519,
#5684, #5843.
2025-11-17 15:18:16 -08:00
Jeremy Rose
ab2e7499f8 core: add a feature to disable the shell tool (#6481)
`--disable shell_tool` disables the built-in shell tool. This is useful
for MCP-only operation.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-11-17 22:56:19 +00:00
Dylan Hurd
daf77b8452 chore(core) Update shell instructions (#6679)
## Summary
Consolidates `shell` and `shell_command` tool instructions.
## Testing 
- [x] Updated tests, tested locally
2025-11-17 13:05:15 -08:00
Owen Lin
03a6e853c0 fix: annotate all app server v2 types with camelCase (#6791) 2025-11-17 12:02:52 -08:00
rugvedS07
837bc98a1d LM Studio OSS Support (#2312)
## Overview

Adds LM Studio OSS support. Closes #1883


### Changes
This PR enhances the behavior of the `--oss` flag to support LM Studio as a provider. Additionally, it introduces a new flag, `--local-provider`, which can take `lmstudio` or `ollama` as values if the user wants to explicitly choose which one to use.

If no provider is specified `codex --oss` will auto-select the provider
based on whichever is running.

#### Additional enhancements
The default can be set using `oss_provider` in config like:

```
oss_provider = "lmstudio"
```

Non-interactive users will need to either pass the provider as an arg or set it in their `config.toml`.

### Notes
For best performance, [set the default context
length](https://lmstudio.ai/docs/app/advanced/per-model) for gpt-oss to
the maximum your machine can support

---------

Co-authored-by: Matt Clayton <matt@lmstudio.ai>
Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-17 11:49:09 -08:00
Celia Chen
842a1b7fe7 [app-server] add events to readme (#6690)
add table of contents, lifecycle and events to readme.
2025-11-17 19:28:05 +00:00
Jeremy Rose
03ffe4d595 core/tui: non-blocking MCP startup (#6334)
This makes MCP startup not block TUI startup. Messages sent while MCPs
are booting will be queued.


https://github.com/user-attachments/assets/96e1d234-5d8f-4932-a935-a675d35c05e0


Fixes #6317

---------

Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-11-17 11:26:11 -08:00
Owen Lin
ae2a084fae chore: delete chatwidget::tests::binary_size_transcript_snapshot tui test (#6759)
We're running into quite a bit of drag maintaining this test, since
every time we add fields to an EventMsg that happened to be dumped into
the `binary-size-log.jsonl` fixture, this test starts to fail. The fix
is usually to either manually update the `binary-size-log.jsonl` fixture
file, or update the `upgrade_event_payload_for_tests` function to map
the data in that file into something workable.

Eason says it's fine to delete this test, so let's just delete it
2025-11-17 11:11:41 -08:00
zhao-oai
a941ae7632 feat: execpolicy v2 (#6467)
## Summary
- Introduces the `codex-execpolicy2` crate.
- This PR covers only the prefix-rule subset of the planned execpolicy
v2 language; a richer language will follow.

## Policy
- Policy language centers on `prefix_rule(pattern=[...], decision?,
match?, not_match?)`, where `pattern` is an ordered list of tokens; any
element may be a list to denote alternatives. `decision` defaults to
`allow`; valid values are `allow`, `prompt`, and `forbidden`. `match` /
`not_match` hold example commands that are tokenized and validated at
load time (think of these as unit tests).

## Policy shapes
- Prefix rules use Starlark syntax:
```starlark
prefix_rule(
    pattern = ["cmd", ["alt1", "alt2"]], # ordered tokens; list entries denote alternatives
    decision = "prompt",                # allow | prompt | forbidden; defaults to allow
    match = [["cmd", "alt1"]],          # examples that must match this rule (enforced at compile time)
    not_match = [["cmd", "oops"]],      # examples that must not match this rule (enforced at compile time)
)
```

## Response shapes
- Match:

```json
{
  "match": {
    "decision": "allow|prompt|forbidden",
    "matchedRules": [
      {
        "prefixRuleMatch": {
          "matchedPrefix": ["<token>", "..."],
          "decision": "allow|prompt|forbidden"
        }
      }
    ]
  }
}
```

- No match:

```json
"noMatch"
```

- `matchedRules` lists every rule whose prefix matched the command;
`matchedPrefix` is the exact prefix that matched.
- The effective `decision` is the strictest severity across all matches
(`forbidden` > `prompt` > `allow`).
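
Deriving `Ord` with the variants in ascending severity makes the strictest-wins rule a one-liner; this is a sketch, not the actual codex-execpolicy2 types:

```rust
/// Variants declared in ascending severity so `max()` picks the
/// strictest decision across matched rules.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Decision {
    Allow,
    Prompt,
    Forbidden,
}

/// `None` corresponds to the `noMatch` shape above.
fn effective_decision(matched: &[Decision]) -> Option<Decision> {
    matched.iter().copied().max()
}
```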

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-11-17 10:15:45 -08:00
jif-oai
2c665fb1dd nit: personal git ignore (#6787) 2025-11-17 17:45:52 +00:00
jif-oai
98a90a3bb2 tmp: drop sccache for windows 2 (#6775) 2025-11-17 16:39:15 +00:00
jif-oai
7c8d333980 feat: placeholder for image that can't be decoded to prevent 400 (#6773) 2025-11-17 16:10:53 +00:00
Dylan Hurd
497fb4a19c fix(core) serialize shell_command (#6744)
## Summary
Ensures we're serializing calls to `shell_command`

## Testing
- [x] Added unit test
2025-11-16 23:16:51 -08:00
Xiao-Yong Jin
5860481bc4 Fix FreeBSD/OpenBSD builds: target-specific keyring features and BSD hardening (#6680)
## Summary
Builds on FreeBSD and OpenBSD were failing due to globally enabled Linux-specific keyring features and hardening code paths not gated by OS. This PR scopes keyring native backends to the appropriate targets, disables default features at the workspace root, and adds a BSD-specific hardening function. Linux/macOS/Windows behavior remains unchanged, while FreeBSD/OpenBSD now build and run with a supported backend.

## Key Changes

- Keyring features:
  - Disable keyring default features at the workspace root to avoid pulling Linux backends on non-Linux.
  - Move native backend features into target-specific sections in the affected crates:
    - Linux: linux-native-async-persistent
    - macOS: apple-native
    - Windows: windows-native
    - FreeBSD/OpenBSD: sync-secret-service
- Process hardening:
  - Add pre_main_hardening_bsd() for FreeBSD/OpenBSD, applying:
    - Set RLIMIT_CORE to 0
    - Clear LD_* environment variables
  - Simplify process-hardening Cargo deps to unconditional libc (avoid conflicting OS fragments).
- No changes to CODEX_SANDBOX_* behavior.

## Rationale

- Previously, enabling keyring native backends globally pulled
Linux-only features on BSD, causing build errors.
- Hardening logic was tailored for Linux/macOS; BSD builds lacked a
gated path with equivalent safeguards.
- Target-scoped features and BSD hardening make the crates portable
across these OSes without affecting existing behavior elsewhere.

## Impact by Platform

- Linux: No functional change; backends now selected via target cfg.
- macOS: No functional change; explicit apple-native mapping.
- Windows: No functional change; explicit windows-native mapping.
- FreeBSD/OpenBSD: Builds succeed using sync-secret-service; BSD hardening applied during startup.

## Testing

- Verified compilation across affected crates with target-specific
features.
- Smoke-checked that Linux/macOS/Windows feature sets remain identical
functionally after scoping.
- On BSD, confirmed keyring resolves to sync-secret-service and
hardening compiles.

## Risks / Compatibility

- Minimal risk: only feature scoping and OS-gated additions.
- No public API changes in the crates; runtime behavior on non-BSD platforms is preserved.
- On BSD, the new hardening clears LD_*; this is consistent with security posture on other Unix platforms.

## Reviewer Notes

- Pay attention to target-specific sections for keyring in the affected
Cargo.toml files.
- Confirm pre_main_hardening_bsd() mirrors the safe subset of
Linux/macOS hardening without introducing Linux-only calls.
- Confirm no references to CODEX_SANDBOX_ENV_VAR or
CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR were added/modified.

## Checklist

- Disable keyring default features at workspace root.
- Target-specific keyring features mapped per OS (Linux/macOS/Windows/BSD).
- Add BSD hardening (RLIMIT_CORE=0, clear LD_*).
- Simplify process-hardening dependencies to unconditional libc.
- No changes to sandbox env var code.
- Formatting and linting: just fmt + just fix -p for changed crates.
- Project tests pass for changed crates; broader suite unchanged.

---------

Co-authored-by: celia-oai <celia@openai.com>
2025-11-17 05:07:34 +00:00
Eric Traut
a52cf4d2b4 Exempt the "codex" github user from signing the CLA (#6724)
This fixes bug #6697
2025-11-16 20:49:31 -08:00
dependabot[bot]
e70c52a3af chore(deps): bump actions/github-script from 7 to 8 (#6755)
Bumps [actions/github-script](https://github.com/actions/github-script) from 7 to 8. v8.0.0 updates Node.js runtime support to 24.x and requires Actions Runner v2.327.1 or newer.

Full changelog: https://github.com/actions/github-script/compare/v7...v8

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-16 19:53:19 -08:00
dulikaifazr
de1768d3ba Fix: Claude models return incomplete responses due to empty finish_reason handling (#6728)
## Summary
Fixes a streaming issue where Claude models return only 1-4 characters instead of full responses when used through certain API providers/proxies.

## Environment
- **OS**: Windows
- **Models affected**: Claude models (e.g., claude-haiku-4-5-20251001)
- **API Provider**: AAAI API proxy (https://api.aaai.vip/v1)
- **Working models**: GLM, Google models work correctly

## Problem
When using Claude models in both TUI and exec modes, only 1-4 characters
are displayed despite the backend receiving the full response. Debug
logs revealed that some API providers send SSE chunks with an empty
string finish_reason during active streaming, rather than null or
omitting the field entirely.

The current code treats any non-null finish_reason as a termination
signal, causing the stream to exit prematurely after the first chunk.
The problematic chunks contain finish_reason with an empty string
instead of null.

## Solution
Fix empty finish_reason handling in chat_completions.rs by adding a
check to only process non-empty finish_reason values. This ensures empty
strings are ignored and streaming continues normally.
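
The guard amounts to treating an empty string like a missing field; a sketch with a hypothetical chunk type, not the actual chat_completions.rs code:

```rust
/// Hypothetical slice of an SSE chunk's choice object.
struct ChoiceDelta {
    finish_reason: Option<String>,
}

/// Only a non-empty finish_reason terminates the stream; `None` and
/// `Some("")` both mean "still streaming".
fn is_stream_finished(choice: &ChoiceDelta) -> bool {
    matches!(&choice.finish_reason, Some(reason) if !reason.is_empty())
}
```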

## Testing
- Tested on Windows with Claude Haiku model via AAAI API proxy
- Full responses now received and displayed correctly in both TUI and
exec modes
- Other models (GLM, Google) continue to work as expected
- No regression in existing functionality

## Impact
- Improves compatibility with API providers that send empty
finish_reason during streaming
- Enables Claude models to work correctly in Windows environment
- No breaking changes to existing functionality

## Related Issues
This fix resolves the issue where Claude models appeared to return
incomplete responses. The root cause was identified as a compatibility
issue in parsing SSE responses from certain API providers/proxies,
rather than a model-specific problem. This change improves overall
robustness when working with various API endpoints.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-16 19:50:36 -08:00
Akrelion45
702238f004 Fix AltGr/backslash input on Windows Codex terminal (#6720)
### Summary

- Treat AltGr chords (Ctrl+Alt) as literal character input in the Codex TUI textarea so Windows terminals that report backslash and other characters via AltGr insert correctly.
- Add regression test altgr_ctrl_alt_char_inserts_literal to ensure Ctrl+Alt char events append the character and advance the cursor.

### Motivation

On US/UK keyboard layouts, backslash is produced by a plain key, so Ctrl+Alt handling is never exercised and the bug isn't visible. On many non-US layouts (e.g., German), backslash and other symbols require AltGr, which terminals report as Ctrl+Alt+<char>. Our textarea previously filtered these chords like navigation bindings, so AltGr input was dropped on affected layouts. This change treats AltGr chords as literal input so backslash and similar symbols work on Windows terminals.

This fixes multiple reported issues where the `\` symbol got cut off, e.g. `C:\Users\Admin` became `C:UsersAdmin`.
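
The check amounts to recognizing the Ctrl+Alt combination on a character event; a crossterm-based sketch of the idea, not the actual textarea code:

```rust
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

/// Windows terminals report AltGr as Ctrl+Alt, so a char event carrying
/// both modifiers is literal input, not a navigation chord.
fn is_altgr_literal(event: &KeyEvent) -> bool {
    matches!(event.code, KeyCode::Char(_))
        && event.modifiers.contains(KeyModifiers::CONTROL | KeyModifiers::ALT)
}
```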

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-16 19:15:06 -08:00
Eric Traut
fa5f6e76c9 Revert "tmp: drop sccache for windows (#6673)" (#6751)
This reverts commit 4719cba19a
2025-11-16 18:37:12 -08:00
Joonsoo Lee
f828cd2897 fix: resolve Windows MCP server execution for script-based tools (#3828)
## What?

Fixes MCP server initialization failures on Windows when using
script-based tools like `npx`, `pnpm`, and `yarn` that rely on
`.cmd`/`.bat` files rather than `.exe` binaries.

Fixes #2945

## Why?

Windows users encounter "program not found" errors when configuring MCP
servers with commands like `npx` in their `~/.codex/config.toml`. This
happens because:

- Tools like `npx` are batch scripts (`npx.cmd`) on Windows, not
executable binaries
- Rust's `std::process::Command` bypasses the shell and cannot execute
these scripts directly
- The Windows shell normally handles this by checking `PATHEXT` for
executable extensions

Without this fix, Windows users must specify full paths or add `.cmd`
extensions manually, which breaks cross-platform compatibility.

## How?

Added platform-specific program resolution using the `which` crate to
find the correct executable path:

- **Windows**: Resolves programs through PATH/PATHEXT to find
`.cmd`/`.bat` scripts
- **Unix**: Returns the program unchanged (no-op, as Unix handles
scripts natively)
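
With the `which` crate (added as `which = "6"` per this PR) the resolution can be a small helper; a sketch of the idea, with the helper name being illustrative:

```rust
use std::path::PathBuf;

/// Resolve a configured program to something Command::new can spawn.
fn resolve_program(program: &str) -> PathBuf {
    if cfg!(windows) {
        // `which` walks PATH honoring PATHEXT, so `npx` resolves to
        // e.g. `npx.cmd`; fall back to the raw name if lookup fails.
        which::which(program).unwrap_or_else(|_| PathBuf::from(program))
    } else {
        // Unix executes scripts natively; pass the name through.
        PathBuf::from(program)
    }
}
```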

### Changes

- Added `which = "6"` dependency to `mcp-client/Cargo.toml`
- Implemented `program_resolver` module in `mcp_client.rs` with
platform-specific resolution
- Added comprehensive tests for both Windows and Unix behavior

### Testing

Added platform-specific tests to verify:
- Unix systems execute scripts without extensions
- Windows fails without proper extensions
- Windows succeeds with explicit extensions
- Cross-platform resolution enables successful execution

**Tested on:**
- Windows 11 (NT 10.0.26100.0 x64)
- PowerShell 5.1 & 7+, CMD, Git Bash
- MCP servers: playwright, context7, supabase
- WSL (verified no regression)

**Local checks passed:**
```bash
cargo test && cargo clippy --tests && cargo fmt -- --config imports_granularity=Item
```

### Results

**Before:**
```
🖐 MCP client for `playwright` failed to start: program not found
```

**After:**
```
🖐 MCP client for `playwright` failed to start: request timed out
```

Windows users can now use simple commands like `npx` in their config
without specifying full paths or extensions. The timeout issue is a
separate concern that will be addressed in a follow-up PR.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-16 13:41:10 -08:00
Abkari Mohammed Sayeem
326c1e0a7e Fix documentation errors for Custom Prompts named arguments and add canonical examples (#5910)
The Custom Prompts documentation (docs/prompts.md) was incomplete for
named arguments:

1. **Documentation for custom prompts was incomplete** - named argument
usage was mentioned briefly but lacked comprehensive canonical examples
showing proper syntax and behavior.

2. **Fixed by adding canonical, tested syntax and examples:**
   - Example 1: Basic named arguments with TICKET_ID and TICKET_TITLE
   - Example 2: Mixed positional and named arguments with FILE and FOCUS
   - Example 3: Using positional arguments
- Example 4: Updated draftpr example to use proper $FEATURE_NAME syntax
   - Added clear usage examples showing KEY=value syntax
   - Added expanded prompt examples showing the result
   - Documented error handling and validation requirements

3. **Added Implementation Reference section** that references the
relevant feature implementation from the codebase (PRs #4470 and #4474
for initial implementation, #5332 and #5403 for clarifications).

This addresses issue #5039 by providing complete, accurate documentation
for named argument usage in custom prompts.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-15 09:25:46 -08:00
Ahmed Ibrahim
3f1c4b9add Tighten panic on double truncation (#6701) 2025-11-15 07:28:59 +00:00
Ahmed Ibrahim
0b28e72b66 Improve compact (#6692)
This PR does the following:
- Add a compact prefix to the summary
- Change the compaction prompt
- Allow multiple compactions for long-running tasks
- Filter out summary messages on the following compaction

Considerations:
- Filtering out the summary message isn't the cleanest approach.
- Theoretically, we could end up in an infinite compaction loop if the user messages exceed the compaction limit. However, that's not possible in today's code because we have a hard cap on user messages.
- We need to address having multiple user messages, because it confuses the model.

Testing:
- Making sure that after compaction we always end up with one user message (task) and one summary, even after multiple compactions.
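
The summary-filtering step might look like this sketch, with entirely hypothetical history item types:

```rust
#[derive(Clone)]
enum HistoryItem {
    UserTask(String),
    Summary(String),
    Other(String),
}

/// Drop earlier summaries so a follow-up compaction sees the original
/// task plus fresh conversation, never stacked summaries.
fn next_compaction_input(history: &[HistoryItem]) -> Vec<HistoryItem> {
    history
        .iter()
        .filter(|item| !matches!(item, HistoryItem::Summary(_)))
        .cloned()
        .collect()
}
```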
2025-11-15 07:17:51 +00:00
Ahmed Ibrahim
94dfb211af Refactor truncation helpers into its own file (#6683)
This centralizes truncation in one place. The next step would be to make only two methods public: one for bytes/lines and one for tokens.
2025-11-15 06:44:23 +00:00
Ahmed Ibrahim
b560c5cef1 Revert "templates and build step for validating/submitting winget package" (#6696)
Reverts openai/codex#6485
2025-11-15 03:47:58 +00:00
Josh McKinney
4ae986967c ci: only run CLA assistant for openai org repos (#6687)
This prevents notifications coming from PRs on forked repos
2025-11-14 17:34:14 -08:00
Vinicius da Motta
89ecc00b79 Handle "Don't Trust" directory selection in onboarding (#4941)
Fixes #4940
Fixes #4892

When selecting "No, ask me to approve edits and commands" during
onboarding, the code wasn't applying the correct approval policy,
causing Codex to block all write operations instead of requesting
approval.

This PR fixes the issue by persisting the "DontTrust" decision in
config.toml as `trust_level = "untrusted"` and handling it in the
sandbox and approval policy logic, so Codex correctly asks for approval
before making changes.

## Before (bug)
<img width="709" height="500" alt="bef"
src="https://github.com/user-attachments/assets/5aced26d-d810-4754-879a-89d9e4e0073b"
/>

## After (fixed)
<img width="713" height="359" alt="aft"
src="https://github.com/user-attachments/assets/9887bbcb-a9a5-4e54-8e76-9125a782226b"
/>

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-14 15:23:35 -08:00
pakrym-oai
018a2d2e50 Ignore unified_exec_respects_workdir_override (#6693) 2025-11-14 15:00:31 -08:00
pakrym-oai
cfcc87a953 Order outputs before inputs (#6691)
For better caching performance, all output items should be rendered in the order they were produced, before all new input items (for example, all function_call items before all function_call_output items).
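
A stable sort by item kind achieves exactly this ordering; a sketch with illustrative item kinds:

```rust
enum Kind {
    Output, // e.g. function_call
    Input,  // e.g. function_call_output
}

struct Item {
    kind: Kind,
}

/// sort_by_key is stable, so within each group the original
/// production order is preserved.
fn order_for_request(items: &mut [Item]) {
    items.sort_by_key(|item| match item.kind {
        Kind::Output => 0,
        Kind::Input => 1,
    });
}
```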
2025-11-14 14:54:11 -08:00
Owen Lin
c3951e505d feat: add app-server-test-client crate for internal use (#5391)
For app-server development it's been helpful to be able to trigger some
test flows end-to-end and print the JSON-RPC messages sent between
client and server.
2025-11-14 12:39:58 -08:00
iceweasel-oai
abb7b79701 fix codex detection, add new security-focused smoketests. (#6682)
Fix 'codex' detection to look for debug build, then release build, then
installed.

Adds more smoketests around security from @viyatb-oai
2025-11-14 12:08:59 -08:00
Ryan Lopopolo
936650001f feat(ts-sdk): allow overriding CLI environment (#6648)
## Summary
- add an `env` option for the TypeScript Codex client and plumb it into
`CodexExec` so the CLI can run without inheriting `process.env`
- extend the test spy to capture spawn environments, add coverage for
the new option, and document how to use it

## Testing
- `pnpm test` *(fails: corepack cannot download pnpm because outbound
network access is blocked in the sandbox)*

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6916b2d7c7548322a72d61d91a2dac85)
2025-11-14 19:44:19 +00:00
iceweasel-oai
37fba28ac3 templates and build step for validating/submitting winget package (#6485) 2025-11-14 11:06:44 -08:00
pakrym-oai
4ba562d2dd Add test timeout (#6612)
Add an overall test timeout of 30s.
2025-11-14 09:30:37 -08:00
Jeremy Rose
799364de87 Enable TUI notifications by default (#6633)
## Summary
- default the `tui.notifications` setting to enabled so desktop
notifications work out of the box
- update configuration tests and documentation to reflect the new
default

## Testing
- `cargo test -p codex-core` *(fails:
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
is flaky in this sandbox because the spawned grandchild process stays
alive)*
- `cargo test -p codex-core
exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
*(fails: same sandbox limitation as above)*

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69166f811144832c9e8aaf8ee2642373)
2025-11-14 09:28:09 -08:00
jif-oai
4719cba19a tmp: drop sccache for windows (#6673) 2025-11-14 17:29:05 +01:00
Celia Chen
526777c9b4 [App server] add mcp tool call item started/completed events (#6642)
this PR does two things:
1. refactor `apply_bespoke_event_handling` into a separate file as it's
getting kind of long;
2. add mcp tool call `item/started` and `item/completed` events. To roll
out app server events asap we didn't properly migrate mcp core events to
use TurnItem for mcp tool calls - this will be a follow-up PR.

real events generated in log:
```
{
  "method": "codex/event/mcp_tool_call_end",
  "params": {
    "conversationId": "019a8021-26af-7c20-83db-21ca81e44d68",
    "id": "0",
    "msg": {
      "call_id": "call_7EjRQkD9HnfyMWf7tGrT9FKA",
      "duration": {
        "nanos": 92708,
        "secs": 0
      },
      "invocation": {
        "arguments": {
          "server": ""
        },
        "server": "codex",
        "tool": "list_mcp_resources"
      },
      "result": {
        "Ok": {
          "content": [
            {
              "text": "{\"resources\":[]}",
              "type": "text"
            }
          ],
          "isError": false
        }
      },
      "type": "mcp_tool_call_end"
    }
  }
}

{
  "method": "item/completed",
  "params": {
    "item": {
      "arguments": {
        "server": ""
      },
      "error": null,
      "id": "call_7EjRQkD9HnfyMWf7tGrT9FKA",
      "result": {
        "content": [
          {
            "text": "{\"resources\":[]}",
            "type": "text"
          }
        ],
        "structuredContent": null
      },
      "server": "codex",
      "status": "completed",
      "tool": "list_mcp_resources",
      "type": "mcpToolCall"
    }
  }
}
```
2025-11-14 08:08:43 -08:00
jif-oai
f17b392470 feat: cache tokenizer (#6609) 2025-11-14 17:05:00 +01:00
jif-oai
63c8c01f40 feat: better UI for unified_exec (#6515)
<img width="376" height="132" alt="Screenshot 2025-11-12 at 17 36 22"
src="https://github.com/user-attachments/assets/ce693f0d-5ca0-462e-b170-c20811dcc8d5"
/>
2025-11-14 16:31:12 +01:00
jif-oai
4788fb179a feat: add resume logs when doing /new (#6660)
<img width="769" height="803" alt="Screenshot 2025-11-14 at 10 25 49"
src="https://github.com/user-attachments/assets/12fbc21e-cab9-4d0a-a484-1aeb60219f96"
/>
2025-11-14 11:42:16 +01:00
pakrym-oai
6c384eb9c6 tests: replace mount_sse_once_match with mount_sse_once for SSE mocking (#6640) 2025-11-13 18:04:05 -08:00
Ahmed Ibrahim
2a6e9b20df Promote shared helpers for suite tests (#6460)
## Summary
- add `TestCodex::submit_turn_with_policies` and extend the response
helpers with reusable tool-call utilities
- update the grep_files, read_file, list_dir, shell_serialization, and
tools suites to rely on the shared helpers instead of local copies
- make the list_dir helper return `anyhow::Result` so clippy no longer
warns about `expect`

## Testing
- `just fix -p codex-core`
- `cargo test -p codex-core --test all
suite::grep_files::grep_files_tool_collects_matches`
- `cargo test -p codex-core
suite::grep_files::grep_files_tool_collects_matches -- --ignored`
(filter requests ignored tests so nothing runs, but the build stays
clean)


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69112d53abac83219813cab4d7cb6446)
2025-11-13 17:12:10 -08:00
Ahmed Ibrahim
f3c6b1334b Use shared network gating helper in chat completion tests (#6461)
## Summary
- replace the bespoke network check in the chat completion payload and
SSE tests with the existing `skip_if_no_network!` helper so they follow
the same gating convention as the rest of the suite

## Testing
- `just fmt`


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69112d4cb9f08321ba773e8ccf39778e)
2025-11-13 17:11:43 -08:00
Ahmed Ibrahim
9890ceb939 Avoid double truncation (#6631)
1. Avoid double truncation by giving 10% above the tool default constant
2. Add tests that fail when const = 1
2025-11-13 16:59:31 -08:00
pakrym-oai
7b027e7536 Revert "Revert "Overhaul shell detection and centralize command generation for unified exec"" (#6607)
Reverts openai/codex#6606
2025-11-13 16:45:17 -08:00
Owen Lin
db2aa57d73 [app-server] small fixes for JSON schema export and one-of types (#6614)
A partner is consuming our generated JSON schema bundle for app-server
and identified a few issues:
- not all polymorphic / one-of types have a type discriminator
- `"$ref": "#/definitions/v2/SandboxPolicy"` is missing
- "Option<>" is an invalid schema name, and also unnecessary

This PR:
- adds the type discriminator to the various types that are missing it, except for `SessionSource` and `SubAgentSource`, because they are serialized to disk (adding this would break backwards compat for resume) and they should not be necessary to consume for an integration with app-server.
- removes the special handling in `export.rs` of various types like
SandboxPolicy, which turned out to be unnecessary and incorrect
- filters out `Option<>` which was auto-generated for request params
that don't need a body

For context, we currently pull in way more types than we need through the `EventMsg` god object, which we are **not** planning to expose in API
v2 (this is how I suspect `SessionSource` and `SubAgentSource` are being
pulled in). But until we have all the necessary v2 notifications in
place that will allow us to remove `EventMsg`, we will keep exporting it
for now.
2025-11-13 16:25:17 -08:00
Celia Chen
b8ec97c0ef [App-server] add new v2 events:item/reasoning/delta, item/agentMessage/delta & item/reasoning/summaryPartAdded (#6559)
core event to app server event mapping:
1. `codex/event/reasoning_content_delta` ->
`item/reasoning/summaryTextDelta`.
2. `codex/event/reasoning_raw_content_delta` ->
`item/reasoning/textDelta`
3. `codex/event/agent_message_content_delta` →
`item/agentMessage/delta`.
4. `codex/event/agent_reasoning_section_break` ->
`item/reasoning/summaryPartAdded`.

Also added a change in core to pass down content index, summary index
and item id from events.

Tested with the `git checkout owen/app_server_test_client && cargo run
-p codex-app-server-test-client -- send-message-v2 "hello"` and verified
that new events are emitted correctly.
2025-11-14 00:25:01 +00:00
Dylan Hurd
2c1b693da4 chore(core) Consolidate apply_patch tests (#6545)
## Summary
Consolidates our apply_patch tests into one suite, and ensures each test
case tests the various ways the harness supports apply_patch:
1. Freeform custom tool call
2. JSON function tool
3. Simple shell call
4. Heredoc shell call

There are a few test cases that are specific to a particular variant,
I've left those alone.

## Testing
- [x] This adds a significant number of tests
2025-11-13 15:52:39 -08:00
pakrym-oai
547be54ee8 Only list failed tests (#6619)
Makes output easier to parse
2025-11-13 13:50:33 -08:00
Dan Hernandez
b4a53aef47 feat: Add support for --add-dir to exec and TypeScript SDK (#6565)
## Summary

Adds support for specifying additional directories in the TypeScript SDK
through a new `additionalDirectories` option in `ThreadOptions`.

## Changes

- Added `additionalDirectories` parameter to `ThreadOptions` interface
- Updated `CodexExec` to accept and pass through additional directories
via the `--config` flag for `sandbox_workspace_write.writable_roots`
- Added comprehensive test coverage for the new functionality

## Test plan

- Added test case that verifies `additionalDirectories` is correctly
passed as repeated flags
- Existing tests continue to pass

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-13 13:47:10 -08:00
Dan Hernandez
439bc5dbbe Add AbortSignal support to TypeScript SDK (#6378)
## Summary
Adds AbortSignal support to the TypeScript SDK for canceling thread
execution using AbortController.

## Changes
- Add `signal?: AbortSignal` property to `TurnOptions` type
- Pass signal through Thread class methods to exec layer  
- Add signal parameter to `CodexExecArgs`
- Leverage Node.js native `spawn()` signal support for automatic
cancellation
- Add comprehensive test coverage (6 tests covering all abort scenarios)

## Implementation
The implementation uses Node.js's built-in AbortSignal support in
`spawn()` (available since Node v15, SDK requires >=18), which
automatically handles:
- Checking if already aborted before starting
- Killing the child process when abort is triggered
- Emitting appropriate error events
- All cleanup operations

This is a one-line change to the core implementation (`signal:
args.signal` passed to spawn), making it simple, reliable, and
maintainable.

## Usage Example
```typescript
import { Codex } from '@openai/codex-sdk';

const codex = new Codex({ apiKey: 'your-api-key' });
const thread = codex.startThread();

// Create AbortController
const controller = new AbortController();

// Run with abort signal
const resultPromise = thread.run("Your prompt here", {
  signal: controller.signal
});

// Cancel anytime
controller.abort('User requested cancellation');
```

## Testing
All tests pass (23 total across SDK):
-  Aborts when signal is already aborted (both run and runStreamed)
-  Aborts during execution/iteration
-  Completes normally when not aborted
-  Backward compatible (signal is optional)

Tests verified to fail correctly when signal support is removed (no
false positives).

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-11-13 13:35:42 -08:00
pakrym-oai
c95bd345ea Enable close-stale-contributor-prs.yml workflow (#6615)
Tested on https://github.com/openai/codex/pull/3036
2025-11-13 11:50:54 -08:00
pakrym-oai
0792a7953d Update default yield time (#6610)
10s for exec and 250ms for write_stdin
2025-11-13 10:24:41 -08:00
pakrym-oai
6cda3de3a4 Close stale PRs workflow (#6594)
Closes stale PRs from OpenAI employees.
2025-11-13 10:01:51 -08:00
pakrym-oai
041d6ad902 Migrate prompt caching tests to test_codex (#6605)
To hopefully fix the flakiness
2025-11-13 09:19:38 -08:00
pakrym-oai
e6995174c1 Revert "Overhaul shell detection and centralize command generation for unified exec" (#6606)
Reverts openai/codex#6577
2025-11-13 08:43:00 -08:00
pakrym-oai
d28e912214 Overhaul shell detection and centralize command generation for unified exec (#6577)
This fixes command display for unified exec. All `cd`s and `ls`es are
now parsed.

<img width="452" height="237" alt="image"
src="https://github.com/user-attachments/assets/ce92d81f-f74c-485a-9b34-1eaa29290ec6"
/>

Deletes a ton of tests from shell.rs that were doing nothing.

---------

Co-authored-by: Pavel Krymets <pavel@krymets.com>
2025-11-13 08:28:09 -08:00
Ahmed Ibrahim
ba74cee6f7 fix model picker wrapping (#6589)
Previously the popup measured rows using the full content width while the renderer drew them with 2 columns of padding, so at certain widths the layout allocated too little vertical space and hid the third option. Now both desired_height and render call a shared helper that subtracts the padding before measuring, so the height we reserve always matches what we draw and the menu doesn't drop entries.
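
The shape of the fix is a single source of truth for the usable width; the constant and helper names here are illustrative:

```rust
/// Columns of padding the renderer draws on each row.
const H_PADDING: u16 = 2;

/// Both desired_height and render measure against this width, so the
/// reserved height always matches what is drawn.
fn content_width(area_width: u16) -> u16 {
    area_width.saturating_sub(H_PADDING)
}
```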


https://github.com/user-attachments/assets/59058fd9-1e34-4325-b5fe-fc888dfcb6bc
2025-11-13 08:09:13 -08:00
jif-oai
2a417c47ac feat: proxy context left after compaction (#6597) 2025-11-13 16:54:03 +01:00
Dylan Hurd
8dcbd29edd chore(core) Update prompt for gpt-5.1 (#6588)
## Summary
Updates the prompt for GPT-5.1
2025-11-13 07:51:28 -08:00
pakrym-oai
34621166d5 Default to explicit medium reasoning for 5.1 (#6593) 2025-11-13 07:58:42 +00:00
pakrym-oai
e3dd362c94 Reasoning level update (#6586)
Automatically update reasoning levels when migrating between models
2025-11-13 06:24:36 +00:00
Ahmed Ibrahim
305fe73d83 copy for model migration nudge (#6585) 2025-11-13 05:56:30 +00:00
Ahmed Ibrahim
e3aaee00c8 feat: show gpt mini (#6583) 2025-11-13 05:21:00 +00:00
Ahmed Ibrahim
b1979b70a8 remove porcupine model slug (#6580) 2025-11-13 04:43:31 +00:00
Eric Traut
73ed30d7e5 Avoid hang when tool's process spawns grandchild that shares stderr/stdout (#6575)
We've received many reports of codex hanging when calling certain tools.
[Here](https://github.com/openai/codex/issues/3204) is one example. This
is likely a major cause. The problem occurs when `consume_truncated_output` waits for `stdout` and `stderr` to be closed once the child process terminates. This normally works fine, but it doesn't handle the case where the child has spawned grandchild processes that inherit `stdout` and `stderr`.

The fix was originally written by @md-oai in [this
PR](https://github.com/openai/codex/pull/1852), which has gone stale.
I've copied the original fix (which looks sound to me) and added an
integration test to prevent future regressions.
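
The essence of the approach: once the child has exited, drain what remains in the pipes under a deadline instead of awaiting an EOF that a long-lived grandchild may never deliver. A tokio-based sketch with an illustrative grace period, not the actual fix:

```rust
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncReadExt};

/// Read whatever is already buffered, but give up after the deadline
/// if a grandchild still holds the pipe's write end open.
async fn drain_after_exit<R: AsyncRead + Unpin>(pipe: &mut R) -> Vec<u8> {
    let mut buf = Vec::new();
    let _ = tokio::time::timeout(
        Duration::from_millis(250), // illustrative grace period
        pipe.read_to_end(&mut buf),
    )
    .await;
    buf
}
```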
2025-11-12 20:08:12 -08:00
Ahmed Ibrahim
ad7eaa80f9 Change model picker to include gpt5.1 (#6569)
- Change the presets
- Change the tests that make sure we keep the list of tools updated
- Filter out deprecated models
2025-11-12 19:44:53 -08:00
Ahmed Ibrahim
966d71c02a Update subtitle of model picker as part of the nux (#6567) 2025-11-12 18:30:43 -08:00
pakrym-oai
f97874093e Set verbosity to low for 5.1 (#6568)
And improve test coverage
2025-11-13 01:40:52 +00:00
Ahmed Ibrahim
e63ab0dd65 NUX for gpt5.1 (#6561)
- Introducing a screen to inform users of model changes. 
- The config name is passed in so this component can be reused for future
models
2025-11-13 01:24:21 +00:00
Owen Lin
964220ac94 [app-server] feat: thread/resume supports history, path, and overrides (#6483)
This updates `thread/resume` to be at parity with v1's
`ResumeConversationParams`. Turns out history is useful for codex cloud
and path is useful for the VSCode extension. And config overrides are
always useful.
2025-11-12 22:02:43 +00:00
pakrym-oai
2f58e69997 Do not double encode request bodies in logging (#6558) 2025-11-12 21:28:42 +00:00
pakrym-oai
ec69a4a810 Add gpt-5.1 model definitions (#6551) 2025-11-12 12:44:36 -08:00
Eric Traut
ad09c138b9 Fixed status output to use auth information from AuthManager (#6529)
This PR addresses https://github.com/openai/codex/issues/6360. The root
problem is that the TUI was directly loading the `auth.json` file to
access the auth information. It should instead be using the AuthManager,
which records the current auth information. The `auth.json` file can be
overwritten at any time by other instances of the CLI or extension, so
its information can be out of sync with the current instance. The
`/status` command should always report the auth information associated
with the current instance.

An alternative fix for this bug was submitted by @chojs23 in [this
PR](https://github.com/openai/codex/pull/6495). That approach was only a
partial fix.
2025-11-12 10:26:50 -08:00
jif-oai
e00eb50db3 feat: only wait for mutating tools for ghost commit (#6534) 2025-11-12 18:16:32 +00:00
pakrym-oai
7d9ad3effd Fix otel tests (#6541)
Mount responses only once, remove unneeded retries, and add a final
assistant message to complete the turn.
2025-11-12 16:35:34 +00:00
Michael Bolin
c3a710ee14 chore: verify boolean values can be parsed as config overrides (#6516)
This is important to ensure that this:

```
codex --enable unified_exec
```

and this:

```
codex --config features.unified_exec=true
```

are equivalent. Also that when it is passed programmatically:


807e2c27f0/codex-rs/app-server-protocol/src/protocol/v1.rs (L55)

then this should work for `config`:

```json
{"features": {"shell_command_tool": true}}
```

though I believe this should also work:

```json
{"features.shell_command_tool": true}
```
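
A minimal sketch of the equivalence being verified, assuming the `toml` crate (the helper name is illustrative): the value side of `-c key=value` is parsed as a TOML value, so a bare `true` becomes a boolean rather than the string `"true"`.

```rust
// Parse "features.unified_exec=true" into (key, TOML boolean true).
fn parse_override(raw: &str) -> Option<(String, toml::Value)> {
    let (key, value) = raw.split_once('=')?;
    // Wrap the raw value so the toml crate can parse it as a document.
    let mut table: toml::Table = format!("v = {value}").parse().ok()?;
    Some((key.to_string(), table.remove("v")?))
}
```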
2025-11-12 08:19:16 -08:00
Michael Bolin
29364f3a9b feat: shell_command tool (#6510)
This adds support for a new variant of the shell tool behind a flag. To
test, run `codex` with `--enable shell_command_tool`, which will
register the tool with Codex under the name `shell_command` that accepts
the following shape:

```python
{
  command: str
  workdir: str | None,
  timeout_ms: int | None,
  with_escalated_permissions: bool | None,
  justification: str | None,
}
```

This is comparable to the existing tool registered under
`shell`/`container.exec`. The primary difference is that it accepts
`command` as a `str` instead of a `str[]`. The `shell_command` tool
executes by running `execvp(["bash", "-lc", command])`, though the exact
arguments to `execvp(3)` depend on the user's default shell.

The hypothesis is that this will simplify things for the model. For
example, on Windows, instead of generating:

```json
{"command": ["pwsh.exe", "-NoLogo", "-Command", "ls -Name"]}
```

The model could simply generate:

```json
{"command": "ls -Name"}
```

As part of this change, I extracted some logic out of `user_shell.rs` as
`Shell::derive_exec_args()` so that it can be reused in
`codex-rs/core/src/tools/handlers/shell.rs`. Note the original code
generated exec arg lists like:

```javascript
["bash", "-lc", command]
["zsh", "-lc", command]
["pwsh.exe", "-NoProfile", "-Command", command]
```

Using `-l` for Bash and Zsh, but then specifying `-NoProfile` for
PowerShell seemed inconsistent to me, so I changed this in the new
implementation while also adding a `use_login_shell: bool` option to
make this explicit. If we decide to add a `login: bool` to
`ShellCommandToolCallParams` like we have for unified exec:


807e2c27f0/codex-rs/core/src/tools/handlers/unified_exec.rs (L33-L34)

Then this should make it straightforward to support.
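
A hedged sketch of the `Shell::derive_exec_args()` shape described above; the enum and flags here are illustrative assumptions, not the exact implementation.

```rust
enum Shell {
    Bash,
    Zsh,
    PowerShell,
}

// Build the exec arg list for running `command` under the user's shell,
// with login-shell behavior made explicit rather than implied per shell.
fn derive_exec_args(shell: &Shell, command: &str, use_login_shell: bool) -> Vec<String> {
    let flag = if use_login_shell { "-lc" } else { "-c" };
    match shell {
        Shell::Bash => vec!["bash".into(), flag.into(), command.into()],
        Shell::Zsh => vec!["zsh".into(), flag.into(), command.into()],
        Shell::PowerShell => vec!["pwsh.exe".into(), "-Command".into(), command.into()],
    }
}
```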
2025-11-12 08:18:57 -08:00
jif-oai
530db0ad73 feat: warning switch model on resume (#6507)
<img width="1259" height="40" alt="Screenshot 2025-11-11 at 14 01 41"
src="https://github.com/user-attachments/assets/48ead3d2-d89c-4d8a-a578-82d9663dbd88"
/>
2025-11-12 11:13:37 +00:00
Gabriel Peal
424bfecd0b Re-add prettier log-level=warn to generate-ts (#6528)
I added it in https://github.com/openai/codex/pull/6342 but it was
removed in
https://github.com/openai/codex/pull/5063/files#diff-e2aa6dad1e886b7765158a27aefd1be5de99baa71b44f6bc5ce3fe462b9ae5d3R135
as a result of a bad diamond merge
2025-11-11 21:30:01 -05:00
Lionel Cheng
eb1c651c00 Update full-auto description with on-request (#6523)
This PR fixes #6522 by correcting the comment for `full-auto` in both
`codex-rs/exec/src/cli.rs` and `codex-rs/tui/src/cli.rs` from `-a
on-failure` to `-a on-request` to make it coherent with
`codex-rs/tui/src/lib.rs:97-105`:

```rust
pub async fn run_main(
    mut cli: Cli,
    codex_linux_sandbox_exe: Option<PathBuf>,
) -> std::io::Result<AppExitInfo> {
    let (sandbox_mode, approval_policy) = if cli.full_auto {
        (
            Some(SandboxMode::WorkspaceWrite),
            Some(AskForApproval::OnRequest),
        )
```

Running `just codex --help` or `just codex exec --help` should now yield
the correct description of the `full-auto` CLI argument.

Signed-off-by: lionelchg <lionel.cheng@hotmail.fr>
2025-11-11 15:59:20 -08:00
Celia Chen
e357fc723d [app-server] add item started/completed events for turn items (#6517)
This one should be quite straightforward: it's just a translation of the
TurnItem events we already emit into the ThreadItem events that app-server
exposes to customers.

To test, cp my change to owen/app_server_test_client and do the
following:
```
cargo build -p codex-cli
RUST_LOG=codex_app_server=info CODEX_BIN=target/debug/codex cargo run -p codex-app-server-test-client -- send-message-v2 "hello"
```

example event before (still kept there for backward compatibility):
```
{
  "method": "codex/event/item_completed",
  "params": {
    "conversationId": "019a74cc-fad9-7ab3-83a3-f42827b7b074",
    "id": "0",
    "msg": {
      "item": {
        "Reasoning": {
          "id": "rs_03d183492e07e20a016913a936eb8c81a1a7671a103fee8afc",
          "raw_content": [],
          "summary_text": [
            "Hey! What would you like to work on? I can explore the repo, run specific tests, or implement a change. Let's keep it short and straightforward. There's no need for a lengthy introduction or elaborate planning, just a friendly greeting and an open offer to help. I want to make sure the user feels welcomed and understood right from the start. It's all about keeping the tone friendly and concise!"
          ]
        }
      },
      "thread_id": "019a74cc-fad9-7ab3-83a3-f42827b7b074",
      "turn_id": "0",
      "type": "item_completed"
    }
  }
}
```

after (v2):
```
{
  "method": "item/completed",
  "params": {
    "item": {
      "id": "rs_03d183492e07e20a016913a936eb8c81a1a7671a103fee8afc",
      "text": "Hey! What would you like to work on? I can explore the repo, run specific tests, or implement a change. Let's keep it short and straightforward. There's no need for a lengthy introduction or elaborate planning, just a friendly greeting and an open offer to help. I want to make sure the user feels welcomed and understood right from the start. It's all about keeping the tone friendly and concise!",
      "type": "reasoning"
    }
  }
}
```
2025-11-11 22:43:24 +00:00
pakrym-oai
807e2c27f0 Add unified exec escalation handling and tests (#6492)
Similar implementation to the shell tool
2025-11-11 08:19:35 -08:00
jif-oai
ad279eacdc nit: logs to trace (#6503) 2025-11-11 13:37:06 +00:00
jif-oai
052b052832 Enable ghost_commit feature by default (#6041)
## Summary
- enable the ghost_commit feature flag by default

## Testing
- just fmt

------
https://chatgpt.com/codex/tasks/task_i_6904ce2d0370832dbb3c2c09a90fb188
2025-11-11 09:20:46 +00:00
Celia Chen
6951872776 [hygiene][app-server] have a helper function for duplicate code in turn APIs (#6488)
turn_start and turn_interrupt share some logic; add a helper function for it.
2025-11-11 02:44:47 +00:00
pakrym-oai
bb7b0213a8 Colocate more of bash parsing (#6489)
Move a few callsites that were detecting `bash -lc` into a shared
helper.
2025-11-11 02:38:36 +00:00
pakrym-oai
6c36318bd8 Use codex-linux-sandbox in unified exec (#6480)
Unified exec isn't working on Linux because we don't provide the correct
arg0.

The library we use for pty management doesn't allow setting arg0
separately from executable. Use the same aliasing strategy we use for
`apply_patch` for `codex-linux-sandbox`.

Use `#[ctor]` hack to dispatch codex-linux-sandbox calls.


Addresses https://github.com/openai/codex/issues/6450
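
A hedged sketch of the aliasing dispatch, assuming the `ctor` crate; `run_sandbox_main` is a placeholder for the real entry point.

```rust
// Runs before main(): if this binary was invoked under the
// codex-linux-sandbox alias (arg0), divert into the sandbox entry point
// instead of the normal CLI.
#[ctor::ctor]
fn maybe_dispatch_linux_sandbox() {
    let arg0 = std::env::args().next().unwrap_or_default();
    if arg0.ends_with("codex-linux-sandbox") {
        // run_sandbox_main(); // placeholder for the real entry point
        std::process::exit(0);
    }
}
```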
2025-11-10 17:17:09 -08:00
zhao-oai
930f81a17b flip rate limit status bar (#6482)
flipping rate limit status bar to match chatgpt.com/codex/settings/usage

<img width="848" height="420" alt="Screenshot 2025-11-10 at 4 53 41 PM"
src="https://github.com/user-attachments/assets/e326db3f-4405-412d-9e62-337282ec9a35"
/>
2025-11-11 01:13:10 +00:00
iceweasel-oai
9aff64e017 upload Windows .exe file artifacts for CLI releases (#6478)
This PR is to unlock future WinGet installation. WinGet struggles to
create command aliases when installing from nested ZIPs on some clients,
so adding raw Windows x64/Arm64 executables lets the manifest use
InstallerType: portable with direct EXEs, which reliably registers the
codex alias. This makes “winget install → codex” work out of the box
without PATH changes.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-11-10 23:31:06 +00:00
Owen Lin
3838d6739c [app-server] update macro to make renaming methods less boilerplate-y (#6470)
We already do this for notification definitions and it's really nice.

Verified there are no changes to actual exported files by diff'ing
before and after this change.
2025-11-10 15:15:08 -08:00
Josh McKinney
60deb6773a refactor(tui): job-control for Ctrl-Z handling (#6477)
- Moved the unix-only suspend/resume logic into a dedicated job_control
module housing SuspendContext, replacing scattered cfg-gated fields and
helpers in tui.rs.
- Tui now holds a single suspend_context (Arc-backed) instead of
multiple atomics, and the event stream uses it directly for Ctrl-Z
handling.
- Added detailed docs around the suspend/resume flow, cursor tracking,
and the Arc/atomic ownership model for the 'static event stream.
- Renamed the process-level SIGTSTP helper to suspend_process and the
cursor tracker to set_cursor_y to better reflect their roles.
2025-11-10 23:13:43 +00:00
Jeremy Rose
0271c20d8f add codex debug seatbelt --log-denials (#4098)
This adds a debugging tool for analyzing why certain commands fail to
execute under the sandbox.

Example output:

```
$ codex debug seatbelt --log-denials bash -lc "(echo foo > ~/foo.txt)"
bash: /Users/nornagon/foo.txt: Operation not permitted

=== Sandbox denials ===
(bash) file-write-data /dev/tty
(bash) file-write-data /dev/ttys001
(bash) sysctl-read kern.ngroups
(bash) file-write-create /Users/nornagon/foo.txt
```

It operates by:

1. spawning `log stream` to watch system logs, and
2. tracking all descendant PIDs using kqueue + proc_listchildpids.

This is a "best-effort" technique, as `log stream` may drop logs(?), and
kqueue + proc_listchildpids isn't atomic and can end up missing very
short-lived processes. But it works well enough in my testing to be
useful :)
2025-11-10 22:48:14 +00:00
George Nesterenok
52e97b9b6b Fix wayland image paste error (#4824)
## Summary
- log and surface clipboard failures instead of silently ignoring them
when `Ctrl+V` pastes an image (`paste_image_to_temp_png()` now feeds an
error history cell)
- enable `arboard`’s `wayland-data-control` feature so native Wayland
sessions can deliver image data without XWayland
- keep the success path unchanged: valid images still attach and show
the `[image …]` placeholder as before

Fixes #4818
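
A minimal sketch of the surfaced-error path, assuming the `arboard` API (with the `wayland-data-control` feature enabled in Cargo.toml):

```rust
// Read an image from the clipboard, mapping failures into a message the
// TUI can show as an error history cell instead of silently dropping them.
fn paste_image() -> Result<arboard::ImageData<'static>, String> {
    let mut clipboard = arboard::Clipboard::new().map_err(|e| e.to_string())?;
    clipboard
        .get_image()
        .map_err(|e| format!("failed to read image from clipboard: {e}"))
}
```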

---------

Co-authored-by: Eric Traut <etraut@openai.com>
Co-authored-by: Jeremy Rose <172423086+nornagon-openai@users.noreply.github.com>
2025-11-10 14:35:30 -08:00
Owen Lin
2ac49fea58 [app-server] chore: move initialize out of deprecated API section (#6468)
Self-explanatory - `initialize` is not a deprecated API and works
equally well with the v2 APIs.
2025-11-10 20:24:36 +00:00
jif-oai
f01f2ec9ee feat: add workdir to unified_exec (#6466) 2025-11-10 19:53:36 +00:00
zhao-oai
980886498c Add user command event types (#6246)
Adds a new user command event and logic in the TUI to render user command
events.
2025-11-10 19:18:45 +00:00
Ahmed Ibrahim
e743d251a7 Add opt-out for rate limit model nudge (#6433)
## Summary
- add a `hide_rate_limit_model_nudge` notice flag plus config edit
plumbing so the rate limit reminder preference is persisted and
documented
- extend the chat widget prompt with a "never show again" option, and
wire new app events so selecting it hides future nudges immediately and
writes the config
- add unit coverage and refresh the snapshot for the three-option prompt

## Testing
- `just fmt`
- `just fix -p codex-tui`
- `just fix -p codex-core`
- `cargo test -p codex-tui`
- `cargo test -p codex-core` *(fails at
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`:
grandchild process still alive)*

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6910d7f407748321b2661fc355416994)
2025-11-10 09:21:53 -08:00
Shijie Rao
788badd221 fix: update brew auto update version check (#6238)
### Summary
* Use
`https://github.com/Homebrew/homebrew-cask/blob/main/Casks/c/codex.rb`
to get the latest available version for brew usage.
2025-11-10 09:05:00 -08:00
Owen Lin
fbdedd9a06 [app-server] feat: add command to generate json schema (#6406)
Add a `codex generate-json-schema` command for generating a JSON schema
bundle of app-server types, analogous to the existing `codex
generate-ts` command for Typescript.
2025-11-10 16:59:14 +00:00
Eric Traut
5916153157 Don't lock PRs that have been closed without merging (#6422)
The CLA action is designed to automatically lock a PR when it is closed.
This preserves the CLA agreement statements, preventing the contributor
from deleting them after the fact. However, this action is currently
locking PRs that are closed without merging. I'd like to keep such PRs
open so the contributor can respond with additional comments. I'm
currently manually unlocking PRs that I close, but I'd like to eliminate
this manual step.
2025-11-10 08:46:11 -08:00
Eric Traut
b46012e483 Support exiting from the login menu (#6419)
I recently fixed a bug in [this
PR](https://github.com/openai/codex/pull/6285) that prevented Ctrl+C
from dismissing the login menu in the TUI and leaving the user unauthed.

A [user pointed out](https://github.com/openai/codex/issues/6418) that
this means Ctrl+C can no longer be used to exit the app. This PR changes
the behavior so we exit the app rather than ignoring the Ctrl+C.
2025-11-10 08:43:11 -08:00
Owen Lin
42683dadfb fix: use generate_ts from app_server_protocol (#6407)
Update `codex generate-ts` to use the TS export code from
`app-server-protocol/src/export.rs`.

I realized there were two duplicate implementations of Typescript export
code:
- `app-server-protocol/src/export.rs`
- the `codex-protocol-ts` crate

The `codex-protocol-ts` crate that `codex generate-ts` uses is out of
date now since it doesn't handle the V2 namespace from:
https://github.com/openai/codex/pull/6212.
2025-11-10 08:08:12 -08:00
Eric Traut
65cb1a1b77 Updated docs to reflect recent changes in web_search configuration (#6376)
This is a simplified version of [a
PR](https://github.com/openai/codex/pull/6134) supplied by a community
member.

It updates the docs to reflect a recent config deprecation.
2025-11-10 07:57:56 -08:00
jif-oai
50a77dc138 Move compact (#6454) 2025-11-10 11:59:48 +00:00
Raduan A.
557ac63094 Fix config documentation: correct TOML parsing description (#6424)
The CLI help text and inline comments incorrectly stated that the `-c
key=value` flag parses values as JSON, when the implementation actually
uses TOML parsing via parse_toml_value(). This caused confusion when
users attempted to configure MCP servers using JSON syntax based on the
documentation.

Changes:
- Updated help text to correctly state TOML parsing instead of JSON

Fixes #4531
2025-11-09 22:58:32 -08:00
Andrew Nikolin
131c384361 Fix warning message phrasing (#6446)
Small fix for sentence phrasing in the warning message

Co-authored-by: AndrewNikolin <877163+AndrewNikolin@users.noreply.github.com>
2025-11-09 22:12:28 -08:00
dependabot[bot]
e2598f5094 chore(deps): bump zeroize from 1.8.1 to 1.8.2 in /codex-rs (#6444)
Bumps [zeroize](https://github.com/RustCrypto/utils) from 1.8.1 to 1.8.2.

Commits:
- `c100874` zeroize v1.8.2 (#1229)
- `3940ccb` Switch from `doc_auto_cfg` to `doc_cfg` (#1228)
- `c68a520` Fix Nightly warnings (#1080)
- `b15cc6c` cargo: point `repository` metadata to clonable URLs (#1079)
- `3db6690` zeroize: fix `homepage`/`repository` in Cargo.toml (#1076)

Full diff: https://github.com/RustCrypto/utils/compare/zeroize-v1.8.1...zeroize-v1.8.2
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-09 22:03:36 -08:00
dependabot[bot]
78b2aeea55 chore(deps): bump askama from 0.12.1 to 0.14.0 in /codex-rs (#6443)
Bumps [askama](https://github.com/askama-rs/askama) from 0.12.1 to 0.14.0.

Release highlights:
- v0.14.0 added features: `Values` implemented on tuples (askama-rs/askama#391), more reliable variable passing to sub-templates (#397), `first`/`blank` arguments for `|indent` (#401), named arguments for builtin filters (#403), and a new `unique` filter (#405).
- v0.14.0 fixes and consistency: `askama_derive` no longer accidentally exposed as a feature (#384), config files tracked (#385), reference handling fixed for new local variables (#392) and question-mark initialization (#400), `CARGO_MANIFEST_DIR` made part of `ConfigKey` (#395), the `unique` filter implementation fixed (#417), and `|titlecase` added as an alias for `|title` (#416).
- v0.13.0 merged the rinja fork back into the main project; compared to v0.12.1 it spans over 1000 commits, changes in over 500 files, and over 40k additions.

Full changelog: https://github.com/askama-rs/askama/compare/0.12.1...v0.14.0
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-09 22:02:26 -08:00
dependabot[bot]
082d2fa19a chore(deps): bump taiki-e/install-action from 2.60.0 to 2.62.49 (#6438)
Bumps [taiki-e/install-action](https://github.com/taiki-e/install-action) from 2.60.0 to 2.62.49.

Release highlights:
- 2.62.49: cargo-binstall 1.15.11, cargo-auditable 0.7.2, vacuum 0.20.2.
- 2.62.48: mise 2025.11.3, cargo-audit 0.22.0, vacuum 0.20.1, uv 0.9.8, cargo-udeps 0.1.60, zizmor 1.16.3.
- 2.62.47: vacuum 0.20.0, cargo-nextest 0.9.111, cargo-shear 1.6.2.
- 2.62.46: vacuum 0.19.5, syft 1.37.0, mise 2025.11.2, knope 0.21.5.
- 2.62.45: zizmor 1.16.2, cargo-binstall 1.15.10, ubi 0.8.4, mise 2025.11.1, cargo-semver-checks 0.45.0.
- 2.62.44: mise 2025.11.0.

Full changelog: https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-09 22:00:08 -08:00
dependabot[bot]
7c7c7567d5 chore(deps): bump codespell-project/actions-codespell from 2.1 to 2.2 (#6437)
Bumps [codespell-project/actions-codespell](https://github.com/codespell-project/actions-codespell) from 2.1 to 2.2.

Release highlights (v2.2):
- Add the config file option and tests (codespell-project/actions-codespell#80)
- Use `pip install` with `--no-cache-dir` in the Dockerfile (#89)
- Upgrade to Python 3.13 (#82)
- Add checkout action and problem matcher to README (#32)

Full changelog: https://github.com/codespell-project/actions-codespell/compare/v2...v2.2

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-09 21:58:51 -08:00
iceweasel-oai
625f2208c4 For npm upgrade on Windows, go through cmd.exe to get path traversal working (#6387)
On Windows, `npm` by itself does not resolve under `std::process::Command`,
which does not consult PATHEXT to resolve it to `npm.cmd` on the PATH.
Running the npm upgrade command via cmd.exe gives us proper path semantics
so it actually works.
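
A hedged sketch of the workaround (the function name is illustrative): route through `cmd.exe /C` on Windows so PATHEXT resolution kicks in, and invoke `npm` directly elsewhere.

```rust
use std::process::Command;

fn npm_upgrade_command(args: &[&str]) -> Command {
    if cfg!(windows) {
        // cmd.exe consults PATHEXT, so `npm` resolves to `npm.cmd`.
        let mut cmd = Command::new("cmd.exe");
        cmd.arg("/C").arg("npm").args(args);
        cmd
    } else {
        let mut cmd = Command::new("npm");
        cmd.args(args);
        cmd
    }
}
```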
2025-11-09 21:07:44 -08:00
kinopeee
5f1fab0e7c fix(cloud-tasks): respect cli_auth_credentials_store config (#5856)
## Problem

`codex cloud` always instantiated `AuthManager` with `File` mode,
ignoring the user's actual `cli_auth_credentials_store` setting. This
caused users with `cli_auth_credentials_store = "keyring"` (or `"auto"`)
to see "Not signed in" errors even when they had valid credentials
stored in the system keyring.

## Root cause

The code called `Config::load_from_base_config_with_overrides()` with an
empty `ConfigToml::default()`, which always returned `File` as the
default store mode instead of loading the actual user configuration.

## Solution

- **Added `util::load_cli_auth_manager()` helper**  
Properly loads user config via
`load_config_as_toml_with_cli_overrides()` and extracts the
`cli_auth_credentials_store` setting before creating `AuthManager`.

- **Updated callers**  
  - `init_backend()` - used when starting cloud tasks UI
  - `build_chatgpt_headers()` - used for API requests

## Testing

-  `just fmt`
-  `just fix -p codex-cloud-tasks`
-  `cargo test -p codex-cloud-tasks`

## Files changed

- `codex-rs/cloud-tasks/src/lib.rs`
- `codex-rs/cloud-tasks/src/util.rs`

## Verification

Users with keyring-based auth can now run `codex cloud` successfully
without "Not signed in" errors.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
Co-authored-by: celia-oai <celia@openai.com>
2025-11-10 00:35:08 +00:00
Oliver Mannion
c07461e6f3 fix(seatbelt): Allow reading hw.physicalcpu (#6421)
Allow reading `hw.physicalcpu` so numpy can be imported when running in
the sandbox.
 
resolves #6420
2025-11-09 08:53:36 -08:00
Raduan A.
8b80a0a269 Fix SDK documentation: replace 'file diffs' with 'file change notifications' (#6425)
The TypeScript SDK's README incorrectly claimed that runStreamed() emits
"file diffs". However, the FileChangeItem type only contains metadata
(path, kind, status) without actual diff content.

Updated line 36 to accurately describe the SDK as providing "file change
notifications" instead of "file diffs" to match the actual
implementation in items.ts.

Fixes #5850
2025-11-09 08:37:16 -08:00
iceweasel-oai
a47181e471 more world-writable warning improvements (#6389)
3 improvements:
1. show up to 3 actual paths that are world-writable
2. do the scan/warning for Read-Only mode too, because it also applies
there
3. remove the "Cancel" option since it doesn't always apply (like on
startup)
2025-11-08 11:35:43 -08:00
Raduan A.
5beb6167c8 feat(tui): Display keyboard shortcuts inline for approval options (#5889)
Shows single-key shortcuts (y, a, n) next to approval options to make
them more discoverable. Previously these shortcuts worked but were
hidden, making the feature hard to discover.

Changes:
- "Yes, proceed" now shows "y" shortcut
- "Yes, and don't ask again" now shows "a" shortcut
- "No, and tell Codex..." continues to show "esc" shortcut

This improves UX by surfacing the quick keyboard shortcuts that were
already functional but undiscoverable in the UI.

---

Update:

added parentheses for better visual clarity 
<img width="1540" height="486" alt="CleanShot 2025-11-05 at 11 47 07@2x"
src="https://github.com/user-attachments/assets/f951c34a-9ec8-4b81-b151-7b2ccba94658"
/>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-08 09:08:42 -08:00
iceweasel-oai
917f39ec12 Improve world-writable scan (#6381)
1. scan many more directories since it's much faster than the original
implementation
2. limit overall scan time to 2s
3. skip some directories that are noisy - ApplicationData, Installer,
etc.
2025-11-07 21:28:55 -08:00
Luca King
a2fdfce02a Kill shell tool process groups on timeout (#5258)
## Summary
- launch shell tool processes in their own process group so Codex owns
the full tree
- on timeout or ctrl-c, send SIGKILL to the process group before
terminating the tracked child
- document that the default shell/unified_exec timeout remains 1000 ms

## Original Bug
Long-lived shell tool commands hang indefinitely because the timeout
handler only terminated the direct child process; any grandchildren it
spawned kept running and held the PTY open, preventing Codex from
regaining control.

## Repro Original Bug
Install next.js and run `next dev` (which is a long-running shell
process with children). On openai:main, it will cause the agent to
permanently get stuck here until human intervention. On this branch,
this command will be terminated successfully after timeout_ms which will
unblock the agent. This is a critical fix for unmonitored / lightly
monitored agents that don't have immediate human observation to unblock
them.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-11-07 17:54:35 -08:00
pakrym-oai
91b16b8682 Don't request approval for safe commands in unified exec (#6380) 2025-11-07 16:36:04 -08:00
Alexander Smirnov
183fc8e01a core: replace Cloudflare 403 HTML with friendly message (#6252)
### Motivation

When Codex is launched from a region where Cloudflare blocks access (for
example, Russia), the CLI currently dumps Cloudflare’s entire HTML error
page. This isn’t actionable and makes it hard for users to understand
what happened. We want to detect the Cloudflare block and show a
concise, user-friendly explanation instead.

### What Changed

- Added CLOUDFLARE_BLOCKED_MESSAGE and a friendly_message() helper to
UnexpectedResponseError. Whenever we see a 403 whose body contains the
Cloudflare block notice, we now emit a single-line message (Access
blocked by Cloudflare…) while preserving the HTTP status and request id.
All other responses keep the original behaviour.
- Added two focused unit tests:
- unexpected_status_cloudflare_html_is_simplified ensures the Cloudflare
HTML case yields the friendly message.
- unexpected_status_non_html_is_unchanged confirms plain-text 403s still
return the raw body.
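
A minimal sketch of the detection described above; the marker check and helper shape are assumptions for illustration, not the exact implementation.

```rust
const CLOUDFLARE_BLOCKED_MESSAGE: &str =
    "Access blocked by Cloudflare. Codex may not be available in your region.";

// Collapse Cloudflare's 403 HTML block page into a one-line message while
// leaving every other response body untouched.
fn friendly_message(status: u16, body: &str) -> String {
    if status == 403 && body.contains("Cloudflare") && body.contains("blocked") {
        CLOUDFLARE_BLOCKED_MESSAGE.to_string()
    } else {
        body.to_string()
    }
}
```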

### Testing

- cargo build -p codex-cli
- cargo test -p codex-core
- just fix -p codex-core
- cargo test --all-features

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-07 15:55:16 -08:00
Josh McKinney
9fba811764 refactor(terminal): cleanup deprecated flush logic (#6373)
Removes flush logic that was leftover to test against ratatui's flush
Cleaned up the flush logic so it's a bit more intent revealing.
DrawCommand now owns the Cells that it draws as this works around a
borrow checker problem.
2025-11-07 15:54:07 -08:00
Celia Chen
db408b9e62 [App-server] add initialization to doc (#6377)
Address comments in #6353.
2025-11-07 23:52:20 +00:00
Jakob Malmo
2eecc1a2e4 fix(wsl): normalize Windows paths during update (#6086) (#6097)
When running under WSL, the update command could receive Windows-style
absolute paths (e.g., `C:\...`) and pass them to Linux processes
unchanged, which fails because WSL expects those paths in
`/mnt/<drive>/...` form.

This patch adds a tiny helper in the CLI (`cli/src/wsl_paths.rs`) that:
- Detects WSL (`WSL_DISTRO_NAME` or `"microsoft"` in `/proc/version`)  
- Converts `X:\...` → `/mnt/x/...`  

`run_update_action` now normalizes the package-manager command and
arguments under WSL before spawning.
Non-WSL platforms are unaffected.  

Includes small unit tests for the converter.  
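
A minimal sketch of the converter under the assumptions stated above (the function name is illustrative):

```rust
// Convert a Windows-style absolute path like C:\Users\me into the
// /mnt/c/Users/me form WSL expects; return None for anything else.
fn to_wsl_path(path: &str) -> Option<String> {
    let mut chars = path.chars();
    let drive = chars.next()?;
    if !drive.is_ascii_alphabetic() || chars.next()? != ':' || chars.next()? != '\\' {
        return None;
    }
    let rest: String = chars.map(|c| if c == '\\' { '/' } else { c }).collect();
    Some(format!("/mnt/{}/{}", drive.to_ascii_lowercase(), rest))
}
```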

**Fixes:** #6086, #6084

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-07 14:49:17 -08:00
Dan Hernandez
c76528ca1f [SDK] Add network_access and web_search options to TypeScript SDK (#6367)
## Summary

This PR adds two new optional boolean fields to `ThreadOptions` in the
TypeScript SDK:

- **`networkAccess`**: Enables network access in the sandbox by setting
`sandbox_workspace_write.network_access` config
- **`webSearch`**: Enables the web search tool by setting
`tools.web_search` config

These options map to existing Codex configuration options and are
properly threaded through the SDK layers:
1. `ThreadOptions` (threadOptions.ts) - User-facing API
2. `CodexExecArgs` (exec.ts) - Internal execution args  
3. CLI flags via `--config` in the `codex exec` command

## Changes

- `sdk/typescript/src/threadOptions.ts`: Added `networkAccess` and
`webSearch` fields to `ThreadOptions` type
- `sdk/typescript/src/exec.ts`: Added fields to `CodexExecArgs` and CLI
flag generation
- `sdk/typescript/src/thread.ts`: Pass options through to exec layer

## Test Plan

- [x] Build succeeds (`pnpm build`)
- [x] Linter passes (`pnpm lint`)
- [x] Type definitions are properly exported
- [ ] Manual testing with sample code (to be done by reviewer)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-07 13:19:34 -08:00
Michael Bolin
bb47f2226f feat: add --promote-alpha option to create_github_release script (#6370)
Historically, running `create_github_release --publish-release` would
always publish a new release from latest `main`, which isn't always the
best idea. We should really publish an alpha, let it bake, and then
promote it.

This PR introduces a new flag, `--promote-alpha`, which does exactly
that. It also works with `--dry-run`, so you can sanity check the commit
it will use as the base commit for the new release before running it for
real.

```shell
$ ./codex-rs/scripts/create_github_release --dry-run --promote-alpha 0.56.0-alpha.2
Publishing version 0.56.0
Running gh api GET /repos/openai/codex/git/refs/tags/rust-v0.56.0-alpha.2
Running gh api GET /repos/openai/codex/git/tags/7d4ef77bc35b011aa0c76c5cbe6cd7d3e53f1dfe
Running gh api GET /repos/openai/codex/compare/main...8b49211e67d3c863df5ecc13fc5f88516a20fa69
Would publish version 0.56.0 using base commit 62474a30e8 derived from rust-v0.56.0-alpha.2.
```
2025-11-07 20:05:22 +00:00
Jeremy Rose
c6ab92bc50 tui: add comments to tui.rs (#6369) 2025-11-07 18:17:52 +00:00
pakrym-oai
4c1a6f0ee0 Promote shell config tool to model family config (#6351) 2025-11-07 10:11:11 -08:00
Owen Lin
361d43b969 [app-server] doc: update README for threads and turns (#6368)
Self explanatory!
2025-11-07 17:02:49 +00:00
Celia Chen
2e81f1900d [App-server] Add auth v2 doc & update codex mcp interface auth section (#6353)
Added doc for auth v2 endpoints. Updated the auth section in Codex MCP
interface doc too.
2025-11-07 08:17:19 -08:00
Owen Lin
2030b28083 [app-server] feat: expose additional fields on Thread (#6338)
Add the following fields to Thread:

```
    pub preview: String,
    pub model_provider: String,
    pub created_at: i64,
```

Will prob need another PR once this lands:
https://github.com/openai/codex/pull/6337
2025-11-07 04:08:45 +00:00
Celia Chen
e84e39940b [App-server] Implement account/read endpoint (#6336)
This PR does two things:
1. add a new function in core that maps the core-internal plan type to
the external plan type;
2. implement account/read that gets account status (v2 of
`getAuthStatus`).
2025-11-06 19:43:13 -08:00
pakrym-oai
e8905f6d20 Prefer wait_for_event over wait_for_event_with_timeout (#6349) 2025-11-06 18:11:11 -08:00
Shane Vitarana
316352be94 Fix apply_patch rename move path resolution (#5486)
Fixes https://github.com/openai/codex/issues/5485.

Fixed rename hunks so `apply_patch` resolves the destination path using
the verifier’s effective cwd, ensuring patches that run under `cd
<worktree> && apply_patch` stay inside the worktree.

Added a regression test
(`test_apply_patch_resolves_move_path_with_effective_cwd`) that
reproduced the old behavior (dest path resolved in the main repo) and
now passes.

Related to https://github.com/openai/codex/issues/5483.
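
A hedged sketch of the resolution rule the fix enforces (names are illustrative): rename destinations join against the effective cwd of the patch run rather than the process cwd.

```rust
use std::path::{Path, PathBuf};

fn resolve_move_dest(effective_cwd: &Path, dest: &Path) -> PathBuf {
    if dest.is_absolute() {
        dest.to_path_buf()
    } else {
        // Join against the cwd the patch actually runs under, so
        // `cd <worktree> && apply_patch` stays inside the worktree.
        effective_cwd.join(dest)
    }
}
```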

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-06 17:02:09 -08:00
pakrym-oai
f8b30af6dc Prefer wait_for_event over wait_for_event_with_timeout. (#6346)
No need to specify the timeout in most cases.
2025-11-06 16:14:43 -08:00
Eric Traut
039a4b070e Updated the AI labeler rules to match the most recent issue tracker labels (#6347)
This PR updates the AI prompt used for the workflow that adds automated
labels to incoming issues. I've been updating and refining the list of
labels as I work through the issue backlog, and the old prompt was
becoming somewhat outdated.
2025-11-06 16:02:12 -08:00
pakrym-oai
c368c6aeea Remove shell tool when unified exec is enabled (#6345)
Also drops the streamable shell, which was just an alias for unified exec.
2025-11-06 15:46:24 -08:00
Eric Traut
0c647bc566 Don't retry "insufficient_quota" errors (#6340)
This PR makes an "insufficient quota" error fatal so we don't attempt to
retry it multiple times in the agent loop.

We have multiple bug reports from users about intermittent retry
behaviors, and this could explain some of them. With this change, we'll
eliminate the retries and surface a clear error message.

The PR is a nearly identical copy of [this
PR](https://github.com/openai/codex/pull/4837) contributed by
@abimaelmartell. The original PR has gone stale. Rather than wait for
the contributor to resolve merge conflicts, I wanted to get this change
in.
2025-11-06 15:12:01 -08:00
Ejaz Ahmed
e30f65118d feat: Enable CTRL-n and CTRL-p for navigating slash commands, files, history (#1994)
Adds CTRL-n and CTRL-p navigation for slash commands, files, and
history.
Closes #1992

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-06 14:58:18 -08:00
Jeremy Rose
1bd2d7a659 tui: fix backtracking past /status (#6335)
Fixes https://github.com/openai/codex/issues/4722

Supersedes https://github.com/openai/codex/pull/5058

Ideally we'd have a clearer way of separating history per-session than
by detecting a specific history cell type, but this is a fairly minimal
fix for now.
2025-11-06 14:50:07 -08:00
Gabriel Peal
65d53fd4b1 Make generate_ts prettier output warn-only (#6342)
Before, every file was printed along with the time Prettier spent
formatting it, which made downstream scripts far too noisy.
2025-11-06 17:45:51 -05:00
pakrym-oai
b5349202e9 Freeform unified exec output formatting (#6233) 2025-11-06 22:14:27 +00:00
Gabriel Peal
1b8cc8b625 [App Server] Add more session metadata to listConversations (#6337)
This unlocks a few new product experiences for app-server consumers
2025-11-06 17:13:24 -05:00
Jeremy Rose
8501b0b768 core: widen sandbox to allow certificate ops when network is enabled (#5980)
This allows `gh api` to work in the workspace-write sandbox w/ network
enabled. Without this we see e.g.

```
$ codex debug seatbelt --full-auto gh api repos/openai/codex/pulls --paginate -X GET -F state=all
Get "https://api.github.com/repos/openai/codex/pulls?per_page=100&state=all": tls: failed to verify certificate: x509: OSStatus -26276
```
2025-11-06 12:47:20 -08:00
Eric Traut
fe7eb18104 Updated contributing guidelines and PR template to request link to bug report in PR notes (#6332)
Some PRs are being submitted without reference to existing bug reports
or feature requests. This updates the PR template and contributing
guidelines to request that all PRs from the community contain such a
link. This provides additional context and helps prioritize, track, and
assess PRs.
2025-11-06 12:02:39 -08:00
Thibault Sottiaux
8c75ed39d5 feat: clarify that gpt-5-codex should not amend commits unless requested (#6333) 2025-11-06 11:42:47 -08:00
Owen Lin
fdb9fa301e chore: move relevant tests to app-server/tests/suite/v2 (#6289)
These are technically app-server v2 APIs, so move them to the same
directory as the others.
2025-11-06 10:53:17 -08:00
iceweasel-oai
871d442b8e Windows Sandbox: Show Everyone-writable directory warning (#6283)
Show a warning when Auto Sandbox mode becomes enabled, if we detect
Everyone-writable directories, since they cannot be protected by the
current implementation of the Sandbox.

This PR also changes how we detect Everyone-writable directories,
making detection *much* faster
2025-11-06 10:44:42 -08:00
316 changed files with 23610 additions and 32666 deletions

View File

@@ -1,3 +1,3 @@
model = "gpt-5"
model = "gpt-5.1"
# Consider setting [mcp_servers] here!

View File

@@ -4,3 +4,5 @@ Before opening this Pull Request, please read the dedicated "Contributing" markd
https://github.com/openai/codex/blob/main/docs/contributing.md
If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes.
Include a link to a bug report or enhancement request.

View File

@@ -13,17 +13,39 @@ permissions:
jobs:
cla:
# Only run the CLA assistant for the canonical openai repo so forks are not blocked
# and contributors who signed previously do not receive duplicate CLA notifications.
if: ${{ github.repository_owner == 'openai' }}
runs-on: ubuntu-latest
steps:
- uses: contributor-assistant/github-action@v2.6.1
# Run on close only if the PR was merged. This will lock the PR to preserve
# the CLA agreement. We don't want to lock PRs that have been closed without
# merging because the contributor may want to respond with additional comments.
# This action has a "lock-pullrequest-aftermerge" option that can be set to false,
# but that would unconditionally skip locking even in cases where the PR was merged.
if: |
github.event_name == 'pull_request_target' ||
github.event.comment.body == 'recheck' ||
github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA'
(
github.event_name == 'pull_request_target' &&
(
github.event.action == 'opened' ||
github.event.action == 'synchronize' ||
(github.event.action == 'closed' && github.event.pull_request.merged == true)
)
) ||
(
github.event_name == 'issue_comment' &&
(
github.event.comment.body == 'recheck' ||
github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA'
)
)
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
path-to-document: https://github.com/openai/codex/blob/main/docs/CLA.md
path-to-signatures: signatures/cla.json
branch: cla-signatures
allowlist: dependabot[bot]
allowlist: |
codex
dependabot[bot]

View File

@@ -0,0 +1,105 @@
name: Close stale contributor PRs
on:
workflow_dispatch:
schedule:
- cron: "0 6 * * *"
permissions:
contents: read
issues: write
pull-requests: write
jobs:
close-stale-contributor-prs:
runs-on: ubuntu-latest
steps:
- name: Close inactive PRs from contributors
uses: actions/github-script@v8
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const DAYS_INACTIVE = 14;
const cutoff = new Date(Date.now() - DAYS_INACTIVE * 24 * 60 * 60 * 1000);
const { owner, repo } = context.repo;
const dryRun = false;
const stalePrs = [];
core.info(`Dry run mode: ${dryRun}`);
const prs = await github.paginate(github.rest.pulls.list, {
owner,
repo,
state: "open",
per_page: 100,
sort: "updated",
direction: "asc",
});
for (const pr of prs) {
const lastUpdated = new Date(pr.updated_at);
if (lastUpdated > cutoff) {
core.info(`PR ${pr.number} is fresh`);
continue;
}
if (!pr.user || pr.user.type !== "User") {
core.info(`PR ${pr.number} wasn't created by a user`);
continue;
}
let permission;
try {
const permissionResponse = await github.rest.repos.getCollaboratorPermissionLevel({
owner,
repo,
username: pr.user.login,
});
permission = permissionResponse.data.permission;
} catch (error) {
if (error.status === 404) {
core.info(`Author ${pr.user.login} is not a collaborator; skipping #${pr.number}`);
continue;
}
throw error;
}
const hasContributorAccess = ["admin", "maintain", "write"].includes(permission);
if (!hasContributorAccess) {
core.info(`Author ${pr.user.login} has ${permission} access; skipping #${pr.number}`);
continue;
}
stalePrs.push(pr);
}
if (!stalePrs.length) {
core.info("No stale contributor pull requests found.");
return;
}
for (const pr of stalePrs) {
const issue_number = pr.number;
const closeComment = `Closing this pull request because it has had no updates for more than ${DAYS_INACTIVE} days. If you plan to continue working on it, feel free to reopen or open a new PR.`;
if (dryRun) {
core.info(`[dry-run] Would close contributor PR #${issue_number} from ${pr.user.login}`);
continue;
}
await github.rest.issues.createComment({
owner,
repo,
issue_number,
body: closeComment,
});
await github.rest.pulls.update({
owner,
repo,
pull_number: issue_number,
state: "closed",
});
core.info(`Closed contributor PR #${issue_number} from ${pr.user.login}`);
}

View File

@@ -22,6 +22,6 @@ jobs:
- name: Annotate locations with typos
uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1
- name: Codespell
uses: codespell-project/actions-codespell@406322ec52dd7b488e48c1c4b82e2a8b3a1bf630 # v2.1
uses: codespell-project/actions-codespell@8f01853be192eb0f849a5c7d721450e7a467c579 # v2.2
with:
ignore_words_file: .codespellignore

View File

@@ -46,7 +46,7 @@ jobs:
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"
model: gpt-5
model: gpt-5.1
prompt: |
You are an assistant that triages new GitHub issues by identifying potential duplicates.

View File

@@ -26,21 +26,36 @@ jobs:
prompt: |
You are an assistant that reviews GitHub issues for the repository.
Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
Your job is to choose the most appropriate labels for the issue described later in this prompt.
Follow these rules:
- Only pick labels out of the list below.
- Prefer a small set of precise labels over many broad ones.
Labels to apply:
- Add one (and only one) of the following three labels to distinguish the type of issue. Default to "bug" if unsure.
1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
3. extension — VS Code (or other IDE) extension-specific issues.
4. windows-os — Bugs or friction specific to Windows environments (always when PowerShell is mentioned, path handling, copy/paste, OS-specific auth or tooling failures).
5. mcp — Topics involving Model Context Protocol servers/clients.
6. codex-web — Issues targeting the Codex web UI/Cloud experience.
8. azure — Problems or requests tied to Azure OpenAI deployments.
9. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
3. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
- If applicable, add one of the following labels to specify which sub-product or product surface the issue relates to.
1. CLI — the Codex command line interface.
2. extension — VS Code (or other IDE) extension-specific issues.
3. codex-web — Issues targeting the Codex web UI/Cloud experience.
4. github-action — Issues with the Codex GitHub action.
5. iOS — Issues with the Codex iOS app.
- Additionally add zero or more of the following labels that are relevant to the issue content. Prefer a small set of precise labels over many broad ones.
1. windows-os — Bugs or friction specific to Windows environments (always when PowerShell is mentioned, path handling, copy/paste, OS-specific auth or tooling failures).
2. mcp — Topics involving Model Context Protocol servers/clients.
3. mcp-server — Problems related to the codex mcp-server command, where codex runs as an MCP server.
4. azure — Problems or requests tied to Azure OpenAI deployments.
5. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
6. code-review — Issues related to the code review feature or functionality.
7. auth - Problems related to authentication, login, or access tokens.
8. codex-exec - Problems related to the "codex exec" command or functionality.
9. context-management - Problems related to compaction, context windows, or available context reporting.
10. custom-model - Problems that involve using custom model providers, local models, or OSS models.
11. rate-limits - Problems related to token limits, rate limits, or token usage reporting.
12. sandbox - Issues related to local sandbox environments or tool call approvals to override sandbox restrictions.
13. tool-calls - Problems related to specific tool call invocations including unexpected errors, failures, or hangs.
14. TUI - Problems with the terminal user interface (TUI) including keyboard shortcuts, copy & pasting, menus, or screen update issues.
Issue number: ${{ github.event.issue.number }}

View File

@@ -76,7 +76,7 @@ jobs:
steps:
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.90
- uses: taiki-e/install-action@0c5db7f7f897c03b771660e91d065338615679f4 # v2
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
version: 1.5.1
@@ -95,8 +95,8 @@ jobs:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects.
RUSTC_WRAPPER: sccache
# Speed up repeated builds across CI runs by caching compiled objects (non-Windows).
USE_SCCACHE: ${{ startsWith(matrix.runner, 'windows') && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
@@ -170,12 +170,14 @@ jobs:
# Install and restore sccache cache
- name: Install sccache
uses: taiki-e/install-action@0c5db7f7f897c03b771660e91d065338615679f4 # v2
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
@@ -188,8 +190,13 @@ jobs:
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.SCCACHE_GHA_ENABLED != 'true' }}
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@v4
with:
@@ -228,7 +235,7 @@ jobs:
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
uses: taiki-e/install-action@0c5db7f7f897c03b771660e91d065338615679f4 # v2
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-chef
version: 0.1.71
@@ -274,7 +281,7 @@ jobs:
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}-${{ hashFiles('codex-rs/rust-toolchain.toml') }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.SCCACHE_GHA_ENABLED != 'true'
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@v4
with:
@@ -282,12 +289,12 @@ jobs:
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}-${{ github.run_id }}
- name: sccache stats
if: always()
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always()
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
@@ -326,7 +333,8 @@ jobs:
run:
working-directory: codex-rs
env:
RUSTC_WRAPPER: sccache
# Speed up repeated builds across CI runs by caching compiled objects (non-Windows).
USE_SCCACHE: ${{ startsWith(matrix.runner, 'windows') && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
@@ -370,12 +378,14 @@ jobs:
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- name: Install sccache
uses: taiki-e/install-action@0c5db7f7f897c03b771660e91d065338615679f4 # v2
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
@@ -388,8 +398,13 @@ jobs:
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.SCCACHE_GHA_ENABLED != 'true' }}
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@v4
with:
@@ -399,17 +414,17 @@ jobs:
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- uses: taiki-e/install-action@0c5db7f7f897c03b771660e91d065338615679f4 # v2
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: nextest
version: 0.9.103
- name: tests
id: test
continue-on-error: true
run: cargo nextest run --all-features --no-fail-fast --target ${{ matrix.target }} --cargo-profile ci-test
env:
RUST_BACKTRACE: 1
NEXTEST_STATUS_LEVEL: leak
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
@@ -424,7 +439,7 @@ jobs:
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}-${{ hashFiles('codex-rs/rust-toolchain.toml') }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.SCCACHE_GHA_ENABLED != 'true'
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@v4
with:
@@ -432,12 +447,12 @@ jobs:
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}-${{ github.run_id }}
- name: sccache stats
if: always()
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always()
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{

View File

@@ -295,6 +295,15 @@ jobs:
# ${{ matrix.target }}
dest="dist/${{ matrix.target }}"
# We want to ship the raw Windows executables in the GitHub Release
# in addition to the compressed archives. Keep the originals for
# Windows targets; remove them elsewhere to limit the number of
# artifacts that end up in the GitHub Release.
keep_originals=false
if [[ "${{ matrix.runner }}" == windows* ]]; then
keep_originals=true
fi
# For compatibility with environments that lack the `zstd` tool we
# additionally create a `.tar.gz` for all platforms and `.zip` for
# Windows alongside every single binary that we publish. The end result is:
@@ -324,7 +333,11 @@ jobs:
# Also create .zst (existing behaviour) *and* remove the original
# uncompressed binary to keep the directory small.
zstd -T0 -19 --rm "$dest/$base"
zstd_args=(-T0 -19)
if [[ "${keep_originals}" == false ]]; then
zstd_args+=(--rm)
fi
zstd "${zstd_args[@]}" "$dest/$base"
done
- name: Remove signing keychain

3
.gitignore vendored
View File

@@ -64,6 +64,9 @@ apply_patch/
# coverage
coverage/
# personal files
personal/
# os
.DS_Store
Thumbs.db

View File

@@ -84,6 +84,8 @@ If you don't have the tool:
- Use `ResponseMock::single_request()` when a test should only issue one POST, or `ResponseMock::requests()` to inspect every captured `ResponsesRequest`.
- `ResponsesRequest` exposes helpers (`body_json`, `input`, `function_call_output`, `custom_tool_call_output`, `call_output`, `header`, `path`, `query_param`) so assertions can target structured payloads instead of manual JSON digging.
- Build SSE payloads with the provided `ev_*` constructors and the `sse(...)` helper.
- Prefer `wait_for_event` over `wait_for_event_with_timeout`.
- Prefer `mount_sse_once` over `mount_sse_once_match` or `mount_sse_sequence`.
- Typical pattern:

View File

@@ -69,6 +69,39 @@ Codex can access MCP servers. To configure them, refer to the [config docs](./do
Codex CLI supports a rich set of configuration options, with preferences stored in `~/.codex/config.toml`. For full configuration options, see [Configuration](./docs/config.md).
### Execpolicy quickstart
Codex can enforce your own rules-based execution policy before it runs shell commands.
1. Create a policy directory: `mkdir -p ~/.codex/policy`.
2. Create one or more `.codexpolicy` files in that folder. Codex automatically loads every `.codexpolicy` file from it on startup.
3. Write `prefix_rule` entries to describe the commands you want to allow, prompt, or block:
```starlark
prefix_rule(
pattern = ["git", ["push", "fetch"]],
decision = "prompt", # allow | prompt | forbidden
match = [["git", "push", "origin", "main"]], # examples that must match
not_match = [["git", "status"]], # examples that must not match
)
```
- `pattern` is a list of shell tokens, evaluated from left to right; wrap tokens in a nested list to express alternatives (e.g., match both `push` and `fetch`).
- `decision` sets the severity; Codex picks the strictest decision when multiple rules match.
- `match` and `not_match` act as (optional) unit tests. Codex validates them when it loads your policy, so you get feedback if an example has unexpected behavior.
With this example rule, if Codex wants to run a command that starts with `git push` or `git fetch`, it will first ask for user approval.
Note: if a command matches multiple rules, Codex uses the strictest decision among the matched rules (forbidden > prompt > allow).
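As an illustration of that precedence, a stricter companion rule (a sketch reusing the syntax above; the specific command tokens are just an example) can make force-pushes always blocked, even though such commands also match the `prompt` rule:
```starlark
prefix_rule(
    pattern = ["git", "push", "--force"],
    decision = "forbidden",  # wins over "prompt" when both rules match
    match = [["git", "push", "--force", "origin", "main"]],
    not_match = [["git", "push", "origin", "main"]],
)
```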
Use the `codex execpolicycheck` subcommand to preview decisions before you save a rule:
```shell
codex execpolicycheck --policy ~/.codex/policy/default.codexpolicy git push origin main
```
Pass multiple `--policy` flags to test how several files combine, and use `--pretty` for formatted JSON output. See the [`codex-rs/execpolicy2` README](./codex-rs/execpolicy2/README.md) for a more detailed walkthrough of the available syntax.
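For example, a combined check might look like this (the file names here are hypothetical):
```shell
codex execpolicycheck \
  --policy ~/.codex/policy/default.codexpolicy \
  --policy ~/.codex/policy/git.codexpolicy \
  --pretty \
  git fetch origin main
```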
---
### Docs & FAQ

View File

@@ -0,0 +1,9 @@
[profile.default]
# Do not increase, fix your test instead
slow-timeout = { period = "15s", terminate-after = 2 }
[[profile.default.overrides]]
# Do not add new tests here
filter = 'test(rmcp_client) | test(humanlike_typing_1000_chars_appears_live_no_placeholder)'
slow-timeout = { period = "1m", terminate-after = 4 }

300
codex-rs/Cargo.lock generated
View File

@@ -211,6 +211,7 @@ dependencies = [
"parking_lot",
"percent-encoding",
"windows-sys 0.59.0",
"wl-clipboard-rs",
"x11rb",
]
@@ -237,46 +238,44 @@ dependencies = [
[[package]]
name = "askama"
version = "0.12.1"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b79091df18a97caea757e28cd2d5fda49c6cd4bd01ddffd7ff01ace0c0ad2c28"
checksum = "f75363874b771be265f4ffe307ca705ef6f3baa19011c149da8674a87f1b75c4"
dependencies = [
"askama_derive",
"askama_escape",
"humansize",
"num-traits",
"itoa",
"percent-encoding",
"serde",
"serde_json",
]
[[package]]
name = "askama_derive"
version = "0.12.5"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19fe8d6cb13c4714962c072ea496f3392015f0989b1a2847bb4b2d9effd71d83"
checksum = "129397200fe83088e8a68407a8e2b1f826cf0086b21ccdb866a722c8bcd3a94f"
dependencies = [
"askama_parser",
"basic-toml",
"mime",
"mime_guess",
"memchr",
"proc-macro2",
"quote",
"rustc-hash 2.1.1",
"serde",
"serde_derive",
"syn 2.0.104",
]
[[package]]
name = "askama_escape"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "619743e34b5ba4e9703bba34deac3427c72507c7159f5fd030aea8cac0cfe341"
[[package]]
name = "askama_parser"
version = "0.2.1"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "acb1161c6b64d1c3d83108213c2a2533a342ac225aabd0bda218278c2ddb00c0"
checksum = "d6ab5630b3d5eaf232620167977f95eb51f3432fc76852328774afbd242d4358"
dependencies = [
"nom",
"memchr",
"serde",
"serde_derive",
"winnow",
]
[[package]]
@@ -851,6 +850,7 @@ dependencies = [
"codex-protocol",
"codex-utils-json-to-toml",
"core_test_support",
"mcp-types",
"opentelemetry-appender-tracing",
"os_info",
"pretty_assertions",
@@ -874,7 +874,6 @@ dependencies = [
"clap",
"codex-protocol",
"mcp-types",
"paste",
"pretty_assertions",
"schemars 0.8.22",
"serde",
@@ -884,6 +883,19 @@ dependencies = [
"uuid",
]
[[package]]
name = "codex-app-server-test-client"
version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"codex-app-server-protocol",
"codex-protocol",
"serde",
"serde_json",
"uuid",
]
[[package]]
name = "codex-apply-patch"
version = "0.0.0"
@@ -977,25 +989,28 @@ dependencies = [
"codex-common",
"codex-core",
"codex-exec",
"codex-execpolicy2",
"codex-login",
"codex-mcp-server",
"codex-process-hardening",
"codex-protocol",
"codex-protocol-ts",
"codex-responses-api-proxy",
"codex-rmcp-client",
"codex-stdio-to-uds",
"codex-tui",
"codex-windows-sandbox",
"ctor 0.5.0",
"libc",
"owo-colors",
"predicates",
"pretty_assertions",
"regex-lite",
"serde_json",
"supports-color",
"tempfile",
"tokio",
"toml",
"tracing",
]
[[package]]
@@ -1046,7 +1061,10 @@ dependencies = [
"clap",
"codex-app-server-protocol",
"codex-core",
"codex-lmstudio",
"codex-ollama",
"codex-protocol",
"once_cell",
"serde",
"toml",
]
@@ -1066,7 +1084,9 @@ dependencies = [
"chrono",
"codex-app-server-protocol",
"codex-apply-patch",
"codex-arg0",
"codex-async-utils",
"codex-execpolicy2",
"codex-file-search",
"codex-git",
"codex-keyring-store",
@@ -1080,6 +1100,7 @@ dependencies = [
"codex-windows-sandbox",
"core-foundation 0.9.4",
"core_test_support",
"ctor 0.5.0",
"dirs",
"dunce",
"env-flags",
@@ -1111,6 +1132,7 @@ dependencies = [
"similar",
"strum_macros 0.27.2",
"tempfile",
"test-case",
"test-log",
"thiserror 2.0.17",
"time",
@@ -1140,7 +1162,6 @@ dependencies = [
"codex-arg0",
"codex-common",
"codex-core",
"codex-ollama",
"codex-protocol",
"core_test_support",
"libc",
@@ -1183,6 +1204,21 @@ dependencies = [
"tempfile",
]
[[package]]
name = "codex-execpolicy2"
version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"multimap",
"pretty_assertions",
"serde",
"serde_json",
"shlex",
"starlark",
"thiserror 2.0.17",
]
[[package]]
name = "codex-feedback"
version = "0.0.0"
@@ -1244,6 +1280,19 @@ dependencies = [
"tokio",
]
[[package]]
name = "codex-lmstudio"
version = "0.0.0"
dependencies = [
"codex-core",
"reqwest",
"serde_json",
"tokio",
"tracing",
"which",
"wiremock",
]
[[package]]
name = "codex-login"
version = "0.0.0"
@@ -1365,16 +1414,6 @@ dependencies = [
"uuid",
]
[[package]]
name = "codex-protocol-ts"
version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"codex-app-server-protocol",
"ts-rs",
]
[[package]]
name = "codex-responses-api-proxy"
version = "0.0.0"
@@ -1418,6 +1457,7 @@ dependencies = [
"tracing",
"urlencoding",
"webbrowser",
"which",
]
[[package]]
@@ -1445,15 +1485,17 @@ dependencies = [
"codex-ansi-escape",
"codex-app-server-protocol",
"codex-arg0",
"codex-backend-client",
"codex-common",
"codex-core",
"codex-feedback",
"codex-file-search",
"codex-login",
"codex-ollama",
"codex-protocol",
"codex-windows-sandbox",
"color-eyre",
"crossterm",
"derive_more 2.0.1",
"diffy",
"dirs",
"dunce",
@@ -1471,6 +1513,7 @@ dependencies = [
"ratatui",
"ratatui-macros",
"regex-lite",
"reqwest",
"serde",
"serde_json",
"serial_test",
@@ -1553,9 +1596,11 @@ name = "codex-utils-tokenizer"
version = "0.0.0"
dependencies = [
"anyhow",
"codex-utils-cache",
"pretty_assertions",
"thiserror 2.0.17",
"tiktoken-rs",
"tokio",
]
[[package]]
@@ -1564,6 +1609,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"dirs-next",
"dunce",
"rand 0.8.5",
"serde",
"serde_json",
@@ -1657,6 +1703,15 @@ dependencies = [
"unicode-segmentation",
]
[[package]]
name = "convert_case"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bb402b8d4c85569410425650ce3eddc7d698ed96d39a73f941b08fb63082f1e7"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "core-foundation"
version = "0.9.4"
@@ -2002,7 +2057,7 @@ version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cb7330aeadfbe296029522e6c40f315320aba36fc43a5b3632f3795348f3bd22"
dependencies = [
"convert_case",
"convert_case 0.6.0",
"proc-macro2",
"quote",
"syn 2.0.104",
@@ -2015,6 +2070,7 @@ version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bda628edc44c4bb645fbe0f758797143e4e07926f7ebf4e9bdfbd3d2ce621df3"
dependencies = [
"convert_case 0.7.1",
"proc-macro2",
"quote",
"syn 2.0.104",
@@ -2878,15 +2934,6 @@ version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
[[package]]
name = "humansize"
version = "2.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6cb51c9a029ddc91b07a787f1d86b53ccfa49b0e86688c946ebe8d3555685dd7"
dependencies = [
"libm",
]
[[package]]
name = "hyper"
version = "1.7.0"
@@ -3530,12 +3577,6 @@ dependencies = [
"pkg-config",
]
[[package]]
name = "libm"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de"
[[package]]
name = "libredox"
version = "0.1.6"
@@ -4302,6 +4343,16 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "os_pipe"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db335f4760b14ead6290116f2427bf33a14d4f0617d49f78a246de10c1831224"
dependencies = [
"libc",
"windows-sys 0.59.0",
]
[[package]]
name = "owo-colors"
version = "4.2.2"
@@ -4449,7 +4500,7 @@ checksum = "3af6b589e163c5a788fab00ce0c0366f6efbb9959c2f9874b224936af7fce7e1"
dependencies = [
"base64",
"indexmap 2.12.0",
"quick-xml",
"quick-xml 0.38.0",
"serde",
"time",
]
@@ -4678,6 +4729,15 @@ version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a993555f31e5a609f617c12db6250dedcac1b0a85076912c436e6fc9b2c8e6a3"
[[package]]
name = "quick-xml"
version = "0.37.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "331e97a1af0bf59823e6eadffe373d7b27f485be8748f71471c662c1f269b7fb"
dependencies = [
"memchr",
]
[[package]]
name = "quick-xml"
version = "0.38.0"
@@ -6150,6 +6210,39 @@ version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f50febec83f5ee1df3015341d8bd429f2d1cc62bcba7ea2076759d315084683"
[[package]]
name = "test-case"
version = "3.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb2550dd13afcd286853192af8601920d959b14c401fcece38071d53bf0768a8"
dependencies = [
"test-case-macros",
]
[[package]]
name = "test-case-core"
version = "3.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "adcb7fd841cd518e279be3d5a3eb0636409487998a4aff22f3de87b81e88384f"
dependencies = [
"cfg-if",
"proc-macro2",
"quote",
"syn 2.0.104",
]
[[package]]
name = "test-case-macros"
version = "3.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c89e72a01ed4c579669add59014b9a524d609c0c88c6a585ce37485879f6ffb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.104",
"test-case-core",
]
[[package]]
name = "test-log"
version = "0.2.18"
@@ -6257,9 +6350,9 @@ dependencies = [
[[package]]
name = "tiktoken-rs"
version = "0.7.0"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25563eeba904d770acf527e8b370fe9a5547bacd20ff84a0b6c3bc41288e5625"
checksum = "3a19830747d9034cd9da43a60eaa8e552dfda7712424aebf187b7a60126bae0d"
dependencies = [
"anyhow",
"base64",
@@ -6715,6 +6808,18 @@ version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c4013970217383f67b18aef68f6fb2e8d409bc5755227092d32efb0422ba24b8"
[[package]]
name = "tree_magic_mini"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f943391d896cdfe8eec03a04d7110332d445be7df856db382dd96a730667562c"
dependencies = [
"memchr",
"nom",
"once_cell",
"petgraph",
]
[[package]]
name = "try-lock"
version = "0.2.5"
@@ -7052,6 +7157,76 @@ dependencies = [
"web-sys",
]
[[package]]
name = "wayland-backend"
version = "0.3.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "673a33c33048a5ade91a6b139580fa174e19fb0d23f396dca9fa15f2e1e49b35"
dependencies = [
"cc",
"downcast-rs",
"rustix 1.0.8",
"smallvec",
"wayland-sys",
]
[[package]]
name = "wayland-client"
version = "0.31.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c66a47e840dc20793f2264eb4b3e4ecb4b75d91c0dd4af04b456128e0bdd449d"
dependencies = [
"bitflags 2.10.0",
"rustix 1.0.8",
"wayland-backend",
"wayland-scanner",
]
[[package]]
name = "wayland-protocols"
version = "0.32.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efa790ed75fbfd71283bd2521a1cfdc022aabcc28bdcff00851f9e4ae88d9901"
dependencies = [
"bitflags 2.10.0",
"wayland-backend",
"wayland-client",
"wayland-scanner",
]
[[package]]
name = "wayland-protocols-wlr"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efd94963ed43cf9938a090ca4f7da58eb55325ec8200c3848963e98dc25b78ec"
dependencies = [
"bitflags 2.10.0",
"wayland-backend",
"wayland-client",
"wayland-protocols",
"wayland-scanner",
]
[[package]]
name = "wayland-scanner"
version = "0.31.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "54cb1e9dc49da91950bdfd8b848c49330536d9d1fb03d4bfec8cae50caa50ae3"
dependencies = [
"proc-macro2",
"quick-xml 0.37.5",
"quote",
]
[[package]]
name = "wayland-sys"
version = "0.31.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34949b42822155826b41db8e5d0c1be3a2bd296c747577a43a3e6daefc296142"
dependencies = [
"pkg-config",
]
[[package]]
name = "web-sys"
version = "0.3.77"
@@ -7623,6 +7798,25 @@ dependencies = [
"bitflags 2.10.0",
]
[[package]]
name = "wl-clipboard-rs"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e5ff8d0e60065f549fafd9d6cb626203ea64a798186c80d8e7df4f8af56baeb"
dependencies = [
"libc",
"log",
"os_pipe",
"rustix 0.38.44",
"tempfile",
"thiserror 2.0.17",
"tree_magic_mini",
"wayland-backend",
"wayland-client",
"wayland-protocols",
"wayland-protocols-wlr",
]
[[package]]
name = "writeable"
version = "0.6.2"
@@ -7791,9 +7985,9 @@ dependencies = [
[[package]]
name = "zeroize"
version = "1.8.1"
version = "1.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde"
checksum = "b97154e67e32c85465826e8bcc1c59429aaaf107c1e4a9e53c8d8ccd5eff88d0"
dependencies = [
"zeroize_derive",
]

View File

@@ -5,6 +5,7 @@ members = [
"async-utils",
"app-server",
"app-server-protocol",
"app-server-test-client",
"apply-patch",
"arg0",
"feedback",
@@ -16,16 +17,17 @@ members = [
"core",
"exec",
"execpolicy",
"execpolicy2",
"keyring-store",
"file-search",
"linux-sandbox",
"lmstudio",
"login",
"mcp-server",
"mcp-types",
"ollama",
"process-hardening",
"protocol",
"protocol-ts",
"rmcp-client",
"responses-api-proxy",
"stdio-to-uds",
@@ -64,18 +66,19 @@ codex-chatgpt = { path = "chatgpt" }
codex-common = { path = "common" }
codex-core = { path = "core" }
codex-exec = { path = "exec" }
codex-execpolicy2 = { path = "execpolicy2" }
codex-feedback = { path = "feedback" }
codex-file-search = { path = "file-search" }
codex-git = { path = "utils/git" }
codex-keyring-store = { path = "keyring-store" }
codex-linux-sandbox = { path = "linux-sandbox" }
codex-lmstudio = { path = "lmstudio" }
codex-login = { path = "login" }
codex-mcp-server = { path = "mcp-server" }
codex-ollama = { path = "ollama" }
codex-otel = { path = "otel" }
codex-process-hardening = { path = "process-hardening" }
codex-protocol = { path = "protocol" }
codex-protocol-ts = { path = "protocol-ts" }
codex-responses-api-proxy = { path = "responses-api-proxy" }
codex-rmcp-client = { path = "rmcp-client" }
codex-stdio-to-uds = { path = "stdio-to-uds" }
@@ -87,7 +90,7 @@ codex-utils-pty = { path = "utils/pty" }
codex-utils-readiness = { path = "utils/readiness" }
codex-utils-string = { path = "utils/string" }
codex-utils-tokenizer = { path = "utils/tokenizer" }
codex-windows-sandbox = { path = "windows-sandbox" }
codex-windows-sandbox = { path = "windows-sandbox-rs" }
core_test_support = { path = "core/tests/common" }
mcp-types = { path = "mcp-types" }
mcp_test_support = { path = "mcp-server/tests/common" }
@@ -96,8 +99,8 @@ mcp_test_support = { path = "mcp-server/tests/common" }
allocative = "0.3.3"
ansi-to-tui = "7.0.0"
anyhow = "1"
arboard = "3"
askama = "0.12"
arboard = { version = "3", features = ["wayland-data-control"] }
askama = "0.14"
assert_cmd = "2"
assert_matches = "1.5.0"
async-channel = "2.3.1"
@@ -124,14 +127,14 @@ eventsource-stream = "0.2.3"
futures = { version = "0.3", default-features = false }
http = "1.3.1"
icu_decimal = "2.1"
icu_provider = { version = "2.1", features = ["sync"] }
icu_locale_core = "2.1"
icu_provider = { version = "2.1", features = ["sync"] }
ignore = "0.4.23"
image = { version = "^0.25.8", default-features = false }
indexmap = "2.12.0"
insta = "1.43.2"
itertools = "0.14.0"
keyring = "3.6"
keyring = { version = "3.6", default-features = false }
landlock = "0.4.1"
lazy_static = "1"
libc = "0.2.175"
@@ -142,6 +145,7 @@ mime_guess = "2.0.5"
multimap = "0.10.0"
notify = "8.2.0"
nucleo-matcher = "0.3.1"
once_cell = "1"
openssl-sys = "*"
opentelemetry = "0.30.0"
opentelemetry-appender-tracing = "0.30.0"
@@ -150,7 +154,6 @@ opentelemetry-semantic-conventions = "0.30.0"
opentelemetry_sdk = "0.30.0"
os_info = "3.12.0"
owo-colors = "4.2.0"
paste = "1.0.15"
path-absolutize = "3.1.1"
pathdiff = "0.2"
portable-pty = "0.9.0"
@@ -183,6 +186,7 @@ tempfile = "3.23.0"
test-log = "0.2.18"
textwrap = "0.16.2"
thiserror = "2.0.17"
tiktoken-rs = "0.9"
time = "0.3"
tiny_http = "0.12"
tokio = "1"
@@ -213,7 +217,7 @@ which = "6"
wildmatch = "2.5.0"
wiremock = "0.6"
zeroize = "1.8.1"
zeroize = "1.8.2"
[workspace.lints]
rust = {}
@@ -280,8 +284,8 @@ opt-level = 0
[patch.crates-io]
# Uncomment to debug local changes.
# ratatui = { path = "../../ratatui" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }
crossterm = { git = "https://github.com/nornagon/crossterm", branch = "nornagon/color-query" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }
# Uncomment to debug local changes.
# rmcp = { path = "../../rust-sdk/crates/rmcp" }

View File

@@ -58,7 +58,7 @@ To test to see what happens when a command is run under the sandbox provided by
```
# macOS
codex sandbox macos [--full-auto] [COMMAND]...
codex sandbox macos [--full-auto] [--log-denials] [COMMAND]...
# Linux
codex sandbox linux [--full-auto] [COMMAND]...
@@ -67,7 +67,7 @@ codex sandbox linux [--full-auto] [COMMAND]...
codex sandbox windows [--full-auto] [COMMAND]...
# Legacy aliases
codex debug seatbelt [--full-auto] [COMMAND]...
codex debug seatbelt [--full-auto] [--log-denials] [COMMAND]...
codex debug landlock [--full-auto] [COMMAND]...
```

View File

@@ -15,7 +15,6 @@ anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-protocol = { workspace = true }
mcp-types = { workspace = true }
paste = { workspace = true }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }

View File

@@ -13,10 +13,7 @@ use crate::export_server_responses;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::SandboxPolicy;
use schemars::JsonSchema;
use schemars::schema_for;
use serde::Serialize;
@@ -92,6 +89,8 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
{
let status = Command::new(prettier_bin)
.arg("--write")
.arg("--log-level")
.arg("warn")
.args(ts_files.iter().map(|p| p.as_os_str()))
.status()
.with_context(|| format!("Failed to invoke Prettier at {}", prettier_bin.display()))?;
@@ -118,10 +117,6 @@ pub fn generate_json(out_dir: &Path) -> Result<()> {
|d| write_json_schema_with_return::<crate::ClientNotification>(d, "ClientNotification"),
|d| write_json_schema_with_return::<crate::ServerNotification>(d, "ServerNotification"),
|d| write_json_schema_with_return::<EventMsg>(d, "EventMsg"),
|d| write_json_schema_with_return::<FileChange>(d, "FileChange"),
|d| write_json_schema_with_return::<crate::protocol::v1::InputItem>(d, "InputItem"),
|d| write_json_schema_with_return::<ParsedCommand>(d, "ParsedCommand"),
|d| write_json_schema_with_return::<SandboxPolicy>(d, "SandboxPolicy"),
];
let mut schemas: Vec<GeneratedSchema> = Vec::new();
@@ -150,13 +145,10 @@ fn build_schema_bundle(schemas: Vec<GeneratedSchema>) -> Result<Value> {
"ClientNotification",
"ClientRequest",
"EventMsg",
"FileChange",
"InputItem",
"ParsedCommand",
"SandboxPolicy",
"ServerNotification",
"ServerRequest",
];
const IGNORED_DEFINITIONS: &[&str] = &["Option<()>"];
let namespaced_types = collect_namespaced_types(&schemas);
let mut definitions = Map::new();
@@ -169,6 +161,10 @@ fn build_schema_bundle(schemas: Vec<GeneratedSchema>) -> Result<Value> {
in_v1_dir,
} = schema;
if IGNORED_DEFINITIONS.contains(&logical_name.as_str()) {
continue;
}
if let Some(ref ns) = namespace {
rewrite_refs_to_namespace(&mut value, ns);
}
@@ -179,6 +175,9 @@ fn build_schema_bundle(schemas: Vec<GeneratedSchema>) -> Result<Value> {
&& let Value::Object(defs_obj) = defs
{
for (def_name, mut def_schema) in defs_obj {
if IGNORED_DEFINITIONS.contains(&def_name.as_str()) {
continue;
}
if SPECIAL_DEFINITIONS.contains(&def_name.as_str()) {
continue;
}
@@ -384,14 +383,6 @@ fn variant_definition_name(base: &str, variant: &Value) -> Option<String> {
});
}
if let Some(mode_literal) = literal_from_property(props, "mode") {
let pascal = to_pascal_case(mode_literal);
return Some(match base {
"SandboxPolicy" => format!("{pascal}SandboxPolicy"),
_ => format!("{pascal}{base}"),
});
}
if props.len() == 1
&& let Some(key) = props.keys().next()
{
@@ -666,6 +657,8 @@ fn ts_files_in_recursive(dir: &Path) -> Result<Vec<PathBuf>> {
Ok(files)
}
/// Generate an index.ts file that re-exports all generated types.
/// This allows consumers to import all types from a single file.
fn generate_index_ts(out_dir: &Path) -> Result<PathBuf> {
let mut entries: Vec<String> = Vec::new();
let mut stems: Vec<String> = ts_files_in(out_dir)?
@@ -715,6 +708,7 @@ mod tests {
use uuid::Uuid;
#[test]
#[ignore = "timing out"]
fn generated_ts_has_no_optional_nullable_fields() -> Result<()> {
// Assert that there are no types of the form "?: T | null" in the generated TS files.
let output_dir = std::env::temp_dir().join(format!("codex_ts_types_{}", Uuid::now_v7()));

View File

@@ -1,6 +1,4 @@
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
@@ -9,12 +7,6 @@ use crate::export::GeneratedSchema;
use crate::export::write_json_schema;
use crate::protocol::v1;
use crate::protocol::v2;
use codex_protocol::ConversationId;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxCommandAssessment;
use paste::paste;
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
@@ -46,7 +38,7 @@ macro_rules! client_request_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident {
$variant:ident $(=> $wire:literal)? {
params: $(#[$params_meta:meta])* $params:ty,
response: $response:ty,
}
@@ -58,6 +50,7 @@ macro_rules! client_request_definitions {
pub enum ClientRequest {
$(
$(#[$variant_meta])*
$(#[serde(rename = $wire)] #[ts(rename = $wire)])?
$variant {
#[serde(rename = "id")]
request_id: RequestId,
@@ -101,105 +94,78 @@ macro_rules! client_request_definitions {
}
client_request_definitions! {
/// NEW APIs
// Thread lifecycle
#[serde(rename = "thread/start")]
#[ts(rename = "thread/start")]
ThreadStart {
params: v2::ThreadStartParams,
response: v2::ThreadStartResponse,
},
#[serde(rename = "thread/resume")]
#[ts(rename = "thread/resume")]
ThreadResume {
params: v2::ThreadResumeParams,
response: v2::ThreadResumeResponse,
},
#[serde(rename = "thread/archive")]
#[ts(rename = "thread/archive")]
ThreadArchive {
params: v2::ThreadArchiveParams,
response: v2::ThreadArchiveResponse,
},
#[serde(rename = "thread/list")]
#[ts(rename = "thread/list")]
ThreadList {
params: v2::ThreadListParams,
response: v2::ThreadListResponse,
},
#[serde(rename = "thread/compact")]
#[ts(rename = "thread/compact")]
ThreadCompact {
params: v2::ThreadCompactParams,
response: v2::ThreadCompactResponse,
},
#[serde(rename = "turn/start")]
#[ts(rename = "turn/start")]
TurnStart {
params: v2::TurnStartParams,
response: v2::TurnStartResponse,
},
#[serde(rename = "turn/interrupt")]
#[ts(rename = "turn/interrupt")]
TurnInterrupt {
params: v2::TurnInterruptParams,
response: v2::TurnInterruptResponse,
},
#[serde(rename = "model/list")]
#[ts(rename = "model/list")]
ModelList {
params: v2::ModelListParams,
response: v2::ModelListResponse,
},
#[serde(rename = "account/login/start")]
#[ts(rename = "account/login/start")]
LoginAccount {
params: v2::LoginAccountParams,
response: v2::LoginAccountResponse,
},
#[serde(rename = "account/login/cancel")]
#[ts(rename = "account/login/cancel")]
CancelLoginAccount {
params: v2::CancelLoginAccountParams,
response: v2::CancelLoginAccountResponse,
},
#[serde(rename = "account/logout")]
#[ts(rename = "account/logout")]
LogoutAccount {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::LogoutAccountResponse,
},
#[serde(rename = "account/rateLimits/read")]
#[ts(rename = "account/rateLimits/read")]
GetAccountRateLimits {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::GetAccountRateLimitsResponse,
},
#[serde(rename = "feedback/upload")]
#[ts(rename = "feedback/upload")]
FeedbackUpload {
params: v2::FeedbackUploadParams,
response: v2::FeedbackUploadResponse,
},
#[serde(rename = "account/read")]
#[ts(rename = "account/read")]
GetAccount {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::GetAccountResponse,
},
/// DEPRECATED APIs below
Initialize {
params: v1::InitializeParams,
response: v1::InitializeResponse,
},
/// NEW APIs
// Thread lifecycle
ThreadStart => "thread/start" {
params: v2::ThreadStartParams,
response: v2::ThreadStartResponse,
},
ThreadResume => "thread/resume" {
params: v2::ThreadResumeParams,
response: v2::ThreadResumeResponse,
},
ThreadArchive => "thread/archive" {
params: v2::ThreadArchiveParams,
response: v2::ThreadArchiveResponse,
},
ThreadList => "thread/list" {
params: v2::ThreadListParams,
response: v2::ThreadListResponse,
},
ThreadCompact => "thread/compact" {
params: v2::ThreadCompactParams,
response: v2::ThreadCompactResponse,
},
TurnStart => "turn/start" {
params: v2::TurnStartParams,
response: v2::TurnStartResponse,
},
TurnInterrupt => "turn/interrupt" {
params: v2::TurnInterruptParams,
response: v2::TurnInterruptResponse,
},
ModelList => "model/list" {
params: v2::ModelListParams,
response: v2::ModelListResponse,
},
LoginAccount => "account/login/start" {
params: v2::LoginAccountParams,
response: v2::LoginAccountResponse,
},
CancelLoginAccount => "account/login/cancel" {
params: v2::CancelLoginAccountParams,
response: v2::CancelLoginAccountResponse,
},
LogoutAccount => "account/logout" {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::LogoutAccountResponse,
},
GetAccountRateLimits => "account/rateLimits/read" {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::GetAccountRateLimitsResponse,
},
FeedbackUpload => "feedback/upload" {
params: v2::FeedbackUploadParams,
response: v2::FeedbackUploadResponse,
},
GetAccount => "account/read" {
params: v2::GetAccountParams,
response: v2::GetAccountResponse,
},
/// DEPRECATED APIs below
NewConversation {
params: v1::NewConversationParams,
response: v1::NewConversationResponse,
@@ -263,6 +229,7 @@ client_request_definitions! {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v1::LogoutChatGptResponse,
},
/// DEPRECATED in favor of GetAccount
GetAuthStatus {
params: v1::GetAuthStatusParams,
response: v1::GetAuthStatusResponse,
@@ -302,34 +269,36 @@ macro_rules! server_request_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident
$variant:ident $(=> $wire:literal)? {
params: $params:ty,
response: $response:ty,
}
),* $(,)?
) => {
paste! {
/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
$(
$(#[$variant_meta])*
$variant {
#[serde(rename = "id")]
request_id: RequestId,
params: [<$variant Params>],
},
)*
}
/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
$(
$(#[$variant_meta])*
$(#[serde(rename = $wire)] #[ts(rename = $wire)])?
$variant {
#[serde(rename = "id")]
request_id: RequestId,
params: $params,
},
)*
}
#[derive(Debug, Clone, PartialEq, JsonSchema)]
pub enum ServerRequestPayload {
$( $variant([<$variant Params>]), )*
}
#[derive(Debug, Clone, PartialEq, JsonSchema)]
pub enum ServerRequestPayload {
$( $variant($params), )*
}
impl ServerRequestPayload {
pub fn request_with_id(self, request_id: RequestId) -> ServerRequest {
match self {
$(Self::$variant(params) => ServerRequest::$variant { request_id, params },)*
}
impl ServerRequestPayload {
pub fn request_with_id(self, request_id: RequestId) -> ServerRequest {
match self {
$(Self::$variant(params) => ServerRequest::$variant { request_id, params },)*
}
}
}
@@ -337,9 +306,9 @@ macro_rules! server_request_definitions {
pub fn export_server_responses(
out_dir: &::std::path::Path,
) -> ::std::result::Result<(), ::ts_rs::ExportError> {
paste! {
$(<[<$variant Response>] as ::ts_rs::TS>::export_all_to(out_dir)?;)*
}
$(
<$response as ::ts_rs::TS>::export_all_to(out_dir)?;
)*
Ok(())
}
@@ -348,9 +317,12 @@ macro_rules! server_request_definitions {
out_dir: &Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
paste! {
$(schemas.push(crate::export::write_json_schema::<[<$variant Response>]>(out_dir, stringify!([<$variant Response>]))?);)*
}
$(
schemas.push(crate::export::write_json_schema::<$response>(
out_dir,
concat!(stringify!($variant), "Response"),
)?);
)*
Ok(schemas)
}
@@ -359,9 +331,12 @@ macro_rules! server_request_definitions {
out_dir: &Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
paste! {
$(schemas.push(crate::export::write_json_schema::<[<$variant Params>]>(out_dir, stringify!([<$variant Params>]))?);)*
}
$(
schemas.push(crate::export::write_json_schema::<$params>(
out_dir,
concat!(stringify!($variant), "Params"),
)?);
)*
Ok(schemas)
}
};
@@ -451,49 +426,27 @@ impl TryFrom<JSONRPCRequest> for ServerRequest {
}
server_request_definitions! {
/// NEW APIs
/// Sent when approval is requested for a specific command execution.
/// This request is used for Turns started via turn/start.
CommandExecutionRequestApproval => "item/commandExecution/requestApproval" {
params: v2::CommandExecutionRequestApprovalParams,
response: v2::CommandExecutionRequestApprovalResponse,
},
/// DEPRECATED APIs below
/// Request to approve a patch.
ApplyPatchApproval,
/// This request is used for Turns started via the legacy APIs (i.e. SendUserTurn, SendUserMessage).
ApplyPatchApproval {
params: v1::ApplyPatchApprovalParams,
response: v1::ApplyPatchApprovalResponse,
},
/// Request to exec a command.
ExecCommandApproval,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
/// and [codex_core::protocol::PatchApplyEndEvent].
pub call_id: String,
pub file_changes: HashMap<PathBuf, FileChange>,
/// Optional explanatory reason (e.g. request for extra write access).
pub reason: Option<String>,
/// When set, the agent is asking the user to allow writes under this root
/// for the remainder of the session (unclear if this is honored today).
pub grant_root: Option<PathBuf>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
/// and [codex_core::protocol::ExecCommandEndEvent].
pub call_id: String,
pub command: Vec<String>,
pub cwd: PathBuf,
pub reason: Option<String>,
pub risk: Option<SandboxCommandAssessment>,
pub parsed_cmd: Vec<ParsedCommand>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
pub struct ExecCommandApprovalResponse {
pub decision: ReviewDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
pub struct ApplyPatchApprovalResponse {
pub decision: ReviewDecision,
/// This request is used for Turns started via the legacy APIs (i.e. SendUserTurn, SendUserMessage).
ExecCommandApproval {
params: v1::ExecCommandApprovalParams,
response: v1::ExecCommandApprovalResponse,
},
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -533,6 +486,9 @@ server_notification_definitions! {
McpToolCallProgress => "item/mcpToolCall/progress" (v2::McpToolCallProgressNotification),
AccountUpdated => "account/updated" (v2::AccountUpdatedNotification),
AccountRateLimitsUpdated => "account/rateLimits/updated" (v2::AccountRateLimitsUpdatedNotification),
ReasoningSummaryTextDelta => "item/reasoning/summaryTextDelta" (v2::ReasoningSummaryTextDeltaNotification),
ReasoningSummaryPartAdded => "item/reasoning/summaryPartAdded" (v2::ReasoningSummaryPartAddedNotification),
ReasoningTextDelta => "item/reasoning/textDelta" (v2::ReasoningTextDeltaNotification),
#[serde(rename = "account/login/completed")]
#[ts(rename = "account/login/completed")]
@@ -555,17 +511,20 @@ client_notification_definitions! {
mod tests {
use super::*;
use anyhow::Result;
use codex_protocol::ConversationId;
use codex_protocol::account::PlanType;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::AskForApproval;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::path::PathBuf;
#[test]
fn serialize_new_conversation() -> Result<()> {
let request = ClientRequest::NewConversation {
request_id: RequestId::Integer(42),
params: v1::NewConversationParams {
model: Some("gpt-5-codex".to_string()),
model: Some("gpt-5.1-codex".to_string()),
model_provider: None,
profile: None,
cwd: None,
@@ -583,7 +542,7 @@ mod tests {
"method": "newConversation",
"id": 42,
"params": {
"model": "gpt-5-codex",
"model": "gpt-5.1-codex",
"modelProvider": null,
"profile": null,
"cwd": null,
@@ -638,7 +597,7 @@ mod tests {
#[test]
fn serialize_server_request() -> Result<()> {
let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let params = ExecCommandApprovalParams {
let params = v1::ExecCommandApprovalParams {
conversation_id,
call_id: "call-42".to_string(),
command: vec!["echo".to_string(), "hello".to_string()],
@@ -758,12 +717,17 @@ mod tests {
fn serialize_get_account() -> Result<()> {
let request = ClientRequest::GetAccount {
request_id: RequestId::Integer(5),
params: None,
params: v2::GetAccountParams {
refresh_token: false,
},
};
assert_eq!(
json!({
"method": "account/read",
"id": 5,
"params": {
"refreshToken": false
}
}),
serde_json::to_value(&request)?,
);
@@ -772,19 +736,16 @@ mod tests {
#[test]
fn account_serializes_fields_in_camel_case() -> Result<()> {
let api_key = v2::Account::ApiKey {
api_key: "secret".to_string(),
};
let api_key = v2::Account::ApiKey {};
assert_eq!(
json!({
"type": "apiKey",
"apiKey": "secret",
}),
serde_json::to_value(&api_key)?,
);
let chatgpt = v2::Account::Chatgpt {
email: Some("user@example.com".to_string()),
email: "user@example.com".to_string(),
plan_type: PlanType::Plus,
};
assert_eq!(

View File

@@ -8,9 +8,14 @@ use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::models::ResponseItem;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxCommandAssessment;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::TurnAbortReason;
use schemars::JsonSchema;
use serde::Deserialize;
@@ -113,6 +118,18 @@ pub struct ConversationSummary {
pub preview: String,
pub timestamp: Option<String>,
pub model_provider: String,
pub cwd: PathBuf,
pub cli_version: String,
pub source: SessionSource,
pub git_info: Option<ConversationGitInfo>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
pub struct ConversationGitInfo {
pub sha: Option<String>,
pub branch: Option<String>,
pub origin_url: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -178,6 +195,46 @@ pub struct GitDiffToRemoteResponse {
pub diff: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
/// and [codex_core::protocol::PatchApplyEndEvent].
pub call_id: String,
pub file_changes: HashMap<PathBuf, FileChange>,
/// Optional explanatory reason (e.g. request for extra write access).
pub reason: Option<String>,
/// When set, the agent is asking the user to allow writes under this root
/// for the remainder of the session (unclear if this is honored today).
pub grant_root: Option<PathBuf>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalResponse {
pub decision: ReviewDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
/// and [codex_core::protocol::ExecCommandEndEvent].
pub call_id: String,
pub command: Vec<String>,
pub cwd: PathBuf,
pub reason: Option<String>,
pub risk: Option<SandboxCommandAssessment>,
pub parsed_cmd: Vec<ParsedCommand>,
}
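// Illustrative wire shape for the params above (hypothetical values), given
// the camelCase rename:
// {
//   "conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
//   "callId": "call-42",
//   "command": ["echo", "hello"],
//   "cwd": "/tmp",
//   "reason": null,
//   "risk": null,
//   "parsedCmd": []
// }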
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
pub struct ExecCommandApprovalResponse {
pub decision: ReviewDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptParams {

View File

@@ -4,8 +4,13 @@ use std::path::PathBuf;
use crate::protocol::common::AuthMode;
use codex_protocol::ConversationId;
use codex_protocol::account::PlanType;
use codex_protocol::approvals::SandboxCommandAssessment as CoreSandboxCommandAssessment;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::items::AgentMessageContent as CoreAgentMessageContent;
use codex_protocol::items::TurnItem as CoreTurnItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::parse_command::ParsedCommand as CoreParsedCommand;
use codex_protocol::protocol::RateLimitSnapshot as CoreRateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow as CoreRateLimitWindow;
use codex_protocol::user_input::UserInput as CoreUserInput;
@@ -17,7 +22,7 @@ use serde_json::Value as JsonValue;
use ts_rs::TS;
// Macro to declare a camelCased API v2 enum mirroring a core enum which
// tends to use kebab-case.
// tends to use either snake_case or kebab-case.
macro_rules! v2_enum_from_core {
(
pub enum $Name:ident from $Src:path { $( $Variant:ident ),+ $(,)? }
@@ -53,13 +58,32 @@ v2_enum_from_core!(
}
);
v2_enum_from_core!(
pub enum CommandRiskLevel from codex_protocol::approvals::SandboxRiskLevel {
Low,
Medium,
High
}
);
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "mode", rename_all = "camelCase")]
#[ts(tag = "mode")]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum ApprovalDecision {
Accept,
Decline,
Cancel,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum SandboxPolicy {
DangerFullAccess,
ReadOnly,
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
WorkspaceWrite {
#[serde(default)]
writable_roots: Vec<PathBuf>,
@@ -116,6 +140,98 @@ impl From<codex_protocol::protocol::SandboxPolicy> for SandboxPolicy {
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SandboxCommandAssessment {
pub description: String,
pub risk_level: CommandRiskLevel,
}
impl SandboxCommandAssessment {
pub fn into_core(self) -> CoreSandboxCommandAssessment {
CoreSandboxCommandAssessment {
description: self.description,
risk_level: self.risk_level.to_core(),
}
}
}
impl From<CoreSandboxCommandAssessment> for SandboxCommandAssessment {
fn from(value: CoreSandboxCommandAssessment) -> Self {
Self {
description: value.description,
risk_level: CommandRiskLevel::from(value.risk_level),
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum CommandAction {
Read {
command: String,
name: String,
path: PathBuf,
},
ListFiles {
command: String,
path: Option<String>,
},
Search {
command: String,
query: Option<String>,
path: Option<String>,
},
Unknown {
command: String,
},
}
impl CommandAction {
pub fn into_core(self) -> CoreParsedCommand {
match self {
CommandAction::Read {
command: cmd,
name,
path,
} => CoreParsedCommand::Read { cmd, name, path },
CommandAction::ListFiles { command: cmd, path } => {
CoreParsedCommand::ListFiles { cmd, path }
}
CommandAction::Search {
command: cmd,
query,
path,
} => CoreParsedCommand::Search { cmd, query, path },
CommandAction::Unknown { command: cmd } => CoreParsedCommand::Unknown { cmd },
}
}
}
impl From<CoreParsedCommand> for CommandAction {
fn from(value: CoreParsedCommand) -> Self {
match value {
CoreParsedCommand::Read { cmd, name, path } => CommandAction::Read {
command: cmd,
name,
path,
},
CoreParsedCommand::ListFiles { cmd, path } => {
CommandAction::ListFiles { command: cmd, path }
}
CoreParsedCommand::Search { cmd, query, path } => CommandAction::Search {
command: cmd,
query,
path,
},
CoreParsedCommand::Unknown { cmd } => CommandAction::Unknown { command: cmd },
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
@@ -123,14 +239,11 @@ impl From<codex_protocol::protocol::SandboxPolicy> for SandboxPolicy {
pub enum Account {
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
ApiKey { api_key: String },
ApiKey {},
#[serde(rename = "chatgpt", rename_all = "camelCase")]
#[ts(rename = "chatgpt", rename_all = "camelCase")]
Chatgpt {
email: Option<String>,
plan_type: PlanType,
},
Chatgpt { email: String, plan_type: PlanType },
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -193,11 +306,20 @@ pub struct GetAccountRateLimitsResponse {
pub rate_limits: RateLimitSnapshot,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct GetAccountParams {
#[serde(default)]
pub refresh_token: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct GetAccountResponse {
pub account: Account,
pub account: Option<Account>,
pub requires_openai_auth: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
@@ -270,7 +392,7 @@ pub struct ThreadStartParams {
pub cwd: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub sandbox: Option<SandboxMode>,
pub config: Option<HashMap<String, serde_json::Value>>,
pub config: Option<HashMap<String, JsonValue>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
}
@@ -282,11 +404,39 @@ pub struct ThreadStartResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// There are three ways to resume a thread:
/// 1. By thread_id: load the thread from disk by thread_id and resume it.
/// 2. By history: instantiate the thread from memory and resume it.
/// 3. By path: load the thread from disk by path and resume it.
///
/// The precedence is: history > path > thread_id.
/// If using history or path, the thread_id param will be ignored.
///
/// Prefer using thread_id whenever possible.
pub struct ThreadResumeParams {
pub thread_id: String,
/// [UNSTABLE] FOR CODEX CLOUD - DO NOT USE.
/// If specified, the thread will be resumed with the provided history
/// instead of loaded from disk.
pub history: Option<Vec<ResponseItem>>,
/// [UNSTABLE] Specify the rollout path to resume from.
/// If specified, the thread_id param will be ignored.
pub path: Option<PathBuf>,
/// Configuration overrides for the resumed thread, if any.
pub model: Option<String>,
pub model_provider: Option<String>,
pub cwd: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub sandbox: Option<SandboxMode>,
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
}
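// Illustrative construction (hypothetical values): the common case resumes by
// thread_id alone, leaving the unstable `history`/`path` escape hatches unset.
//
// let params = ThreadResumeParams {
//     thread_id: "thr_123".to_string(),
//     ..Default::default()
// };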
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -348,6 +498,13 @@ pub struct ThreadCompactResponse {}
#[ts(export_to = "v2/")]
pub struct Thread {
pub id: String,
/// Usually the first user message in the thread, if available.
pub preview: String,
pub model_provider: String,
/// Unix timestamp (in seconds) when the thread was created.
pub created_at: i64,
/// [UNSTABLE] Path to the thread on disk.
pub path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -446,36 +603,66 @@ impl UserInput {
}
}
impl From<CoreUserInput> for UserInput {
fn from(value: CoreUserInput) -> Self {
match value {
CoreUserInput::Text { text } => UserInput::Text { text },
CoreUserInput::Image { image_url } => UserInput::Image { url: image_url },
CoreUserInput::LocalImage { path } => UserInput::LocalImage { path },
_ => unreachable!("unsupported user input variant"),
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum ThreadItem {
UserMessage {
id: String,
content: Vec<UserInput>,
},
AgentMessage {
id: String,
text: String,
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
UserMessage { id: String, content: Vec<UserInput> },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
AgentMessage { id: String, text: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
Reasoning {
id: String,
text: String,
#[serde(default)]
summary: Vec<String>,
#[serde(default)]
content: Vec<String>,
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
CommandExecution {
id: String,
/// The command to be executed.
command: String,
aggregated_output: String,
exit_code: Option<i32>,
/// The command's working directory.
cwd: PathBuf,
status: CommandExecutionStatus,
/// A best-effort parsing of the command to understand the action(s) it will perform.
/// This returns a list of CommandAction objects because a single shell command may
/// be composed of many commands piped together.
command_actions: Vec<CommandAction>,
/// The command's output, aggregated from stdout and stderr.
aggregated_output: Option<String>,
/// The command's exit code.
exit_code: Option<i32>,
/// The duration of the command execution in milliseconds.
duration_ms: Option<i64>,
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
FileChange {
id: String,
changes: Vec<FileUpdateChange>,
status: PatchApplyStatus,
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
McpToolCall {
id: String,
server: String,
@@ -485,22 +672,48 @@ pub enum ThreadItem {
result: Option<McpToolCallResult>,
error: Option<McpToolCallError>,
},
WebSearch {
id: String,
query: String,
},
TodoList {
id: String,
items: Vec<TodoItem>,
},
ImageView {
id: String,
path: String,
},
CodeReview {
id: String,
review: String,
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
WebSearch { id: String, query: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
TodoList { id: String, items: Vec<TodoItem> },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
ImageView { id: String, path: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
CodeReview { id: String, review: String },
}
impl From<CoreTurnItem> for ThreadItem {
fn from(value: CoreTurnItem) -> Self {
match value {
CoreTurnItem::UserMessage(user) => ThreadItem::UserMessage {
id: user.id,
content: user.content.into_iter().map(UserInput::from).collect(),
},
CoreTurnItem::AgentMessage(agent) => {
let text = agent
.content
.into_iter()
.map(|entry| match entry {
CoreAgentMessageContent::Text { text } => text,
})
.collect::<String>();
ThreadItem::AgentMessage { id: agent.id, text }
}
CoreTurnItem::Reasoning(reasoning) => ThreadItem::Reasoning {
id: reasoning.id,
summary: reasoning.summary_text,
content: reasoning.raw_content,
},
CoreTurnItem::WebSearch(search) => ThreadItem::WebSearch {
id: search.id,
query: search.query,
},
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -552,7 +765,7 @@ pub enum McpToolCallStatus {
#[ts(export_to = "v2/")]
pub struct McpToolCallResult {
pub content: Vec<McpContentBlock>,
pub structured_content: JsonValue,
pub structured_content: Option<JsonValue>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -628,6 +841,32 @@ pub struct AgentMessageDeltaNotification {
pub delta: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ReasoningSummaryTextDeltaNotification {
pub item_id: String,
pub delta: String,
pub summary_index: i64,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ReasoningSummaryPartAddedNotification {
pub item_id: String,
pub summary_index: i64,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ReasoningTextDeltaNotification {
pub item_id: String,
pub delta: String,
pub content_index: i64,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -644,6 +883,39 @@ pub struct McpToolCallProgressNotification {
pub message: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CommandExecutionRequestApprovalParams {
pub thread_id: String,
pub turn_id: String,
pub item_id: String,
/// Optional explanatory reason (e.g. request for network access).
pub reason: Option<String>,
/// Optional model-provided risk assessment describing the blocked command.
pub risk: Option<SandboxCommandAssessment>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CommandExecutionRequestAcceptSettings {
/// If true, automatically approve this command for the duration of the session.
#[serde(default)]
pub for_session: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CommandExecutionRequestApprovalResponse {
pub decision: ApprovalDecision,
/// Optional approval settings for when the decision is `accept`.
/// Ignored if the decision is `decline` or `cancel`.
#[serde(default)]
pub accept_settings: Option<CommandExecutionRequestAcceptSettings>,
}
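// Illustrative accept-for-session response (hypothetical values):
// { "decision": "accept", "acceptSettings": { "forSession": true } }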
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -697,3 +969,101 @@ pub struct AccountLoginCompletedNotification {
pub success: bool,
pub error: Option<String>,
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::items::AgentMessageContent;
use codex_protocol::items::AgentMessageItem;
use codex_protocol::items::ReasoningItem;
use codex_protocol::items::TurnItem;
use codex_protocol::items::UserMessageItem;
use codex_protocol::items::WebSearchItem;
use codex_protocol::user_input::UserInput as CoreUserInput;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
#[test]
fn core_turn_item_into_thread_item_converts_supported_variants() {
let user_item = TurnItem::UserMessage(UserMessageItem {
id: "user-1".to_string(),
content: vec![
CoreUserInput::Text {
text: "hello".to_string(),
},
CoreUserInput::Image {
image_url: "https://example.com/image.png".to_string(),
},
CoreUserInput::LocalImage {
path: PathBuf::from("local/image.png"),
},
],
});
assert_eq!(
ThreadItem::from(user_item),
ThreadItem::UserMessage {
id: "user-1".to_string(),
content: vec![
UserInput::Text {
text: "hello".to_string(),
},
UserInput::Image {
url: "https://example.com/image.png".to_string(),
},
UserInput::LocalImage {
path: PathBuf::from("local/image.png"),
},
],
}
);
let agent_item = TurnItem::AgentMessage(AgentMessageItem {
id: "agent-1".to_string(),
content: vec![
AgentMessageContent::Text {
text: "Hello ".to_string(),
},
AgentMessageContent::Text {
text: "world".to_string(),
},
],
});
assert_eq!(
ThreadItem::from(agent_item),
ThreadItem::AgentMessage {
id: "agent-1".to_string(),
text: "Hello world".to_string(),
}
);
let reasoning_item = TurnItem::Reasoning(ReasoningItem {
id: "reasoning-1".to_string(),
summary_text: vec!["line one".to_string(), "line two".to_string()],
raw_content: vec![],
});
assert_eq!(
ThreadItem::from(reasoning_item),
ThreadItem::Reasoning {
id: "reasoning-1".to_string(),
summary: vec!["line one".to_string(), "line two".to_string()],
content: vec![],
}
);
let search_item = TurnItem::WebSearch(WebSearchItem {
id: "search-1".to_string(),
query: "docs".to_string(),
});
assert_eq!(
ThreadItem::from(search_item),
ThreadItem::WebSearch {
id: "search-1".to_string(),
query: "docs".to_string(),
}
);
}
}

codex-rs/app-server-test-client/Cargo.lock (generated)

File diff suppressed because it is too large

View File

@@ -0,0 +1,16 @@
[package]
name = "codex-app-server-test-client"
version = { workspace = true }
edition = "2024"
[lints]
workspace = true
[dependencies]
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive", "env"] }
codex-app-server-protocol = { workspace = true }
codex-protocol = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
uuid = { workspace = true, features = ["v4"] }

View File

@@ -0,0 +1,2 @@
# App Server Test Client
Exercises simple `codex app-server` flows end-to-end, logging JSON-RPC messages sent between client and server to stdout.

View File

@@ -0,0 +1,691 @@
use std::collections::VecDeque;
use std::io::BufRead;
use std::io::BufReader;
use std::io::Write;
use std::process::Child;
use std::process::ChildStdin;
use std::process::ChildStdout;
use std::process::Command;
use std::process::Stdio;
use std::thread;
use std::time::Duration;
use anyhow::Context;
use anyhow::Result;
use anyhow::bail;
use clap::Parser;
use clap::Subcommand;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::AskForApproval;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::GetAccountRateLimitsResponse;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxPolicy;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_protocol::ConversationId;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use serde::de::DeserializeOwned;
use serde_json::Value;
use uuid::Uuid;
/// Minimal launcher that initializes the Codex app-server and logs the handshake.
#[derive(Parser)]
#[command(author = "Codex", version, about = "Bootstrap Codex app-server", long_about = None)]
struct Cli {
/// Path to the `codex` CLI binary.
#[arg(long, env = "CODEX_BIN", default_value = "codex")]
codex_bin: String,
#[command(subcommand)]
command: CliCommand,
}
#[derive(Subcommand)]
enum CliCommand {
/// Send a user message through the Codex app-server.
SendMessage {
/// User message to send to Codex.
#[arg()]
user_message: String,
},
/// Send a user message through the app-server V2 thread/turn APIs.
SendMessageV2 {
/// User message to send to Codex.
#[arg()]
user_message: String,
},
/// Start a V2 turn that elicits an ExecCommand approval.
#[command(name = "trigger-cmd-approval")]
TriggerCmdApproval {
/// Optional prompt; defaults to a simple python command.
#[arg()]
user_message: Option<String>,
},
/// Start a V2 turn that elicits an ApplyPatch approval.
#[command(name = "trigger-patch-approval")]
TriggerPatchApproval {
/// Optional prompt; defaults to creating a file via apply_patch.
#[arg()]
user_message: Option<String>,
},
/// Start a V2 turn that should not elicit an ExecCommand approval.
#[command(name = "no-trigger-cmd-approval")]
NoTriggerCmdApproval,
/// Trigger the ChatGPT login flow and wait for completion.
TestLogin,
/// Fetch the current account rate limits from the Codex app-server.
GetAccountRateLimits,
}
fn main() -> Result<()> {
let Cli { codex_bin, command } = Cli::parse();
match command {
CliCommand::SendMessage { user_message } => send_message(codex_bin, user_message),
CliCommand::SendMessageV2 { user_message } => send_message_v2(codex_bin, user_message),
CliCommand::TriggerCmdApproval { user_message } => {
trigger_cmd_approval(codex_bin, user_message)
}
CliCommand::TriggerPatchApproval { user_message } => {
trigger_patch_approval(codex_bin, user_message)
}
CliCommand::NoTriggerCmdApproval => no_trigger_cmd_approval(codex_bin),
CliCommand::TestLogin => test_login(codex_bin),
CliCommand::GetAccountRateLimits => get_account_rate_limits(codex_bin),
}
}
fn send_message(codex_bin: String, user_message: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let conversation = client.new_conversation()?;
println!("< newConversation response: {conversation:?}");
let subscription = client.add_conversation_listener(&conversation.conversation_id)?;
println!("< addConversationListener response: {subscription:?}");
let send_response = client.send_user_message(&conversation.conversation_id, &user_message)?;
println!("< sendUserMessage response: {send_response:?}");
client.stream_conversation(&conversation.conversation_id)?;
client.remove_conversation_listener(subscription.subscription_id)?;
Ok(())
}
fn send_message_v2(codex_bin: String, user_message: String) -> Result<()> {
send_message_v2_with_policies(codex_bin, user_message, None, None)
}
fn trigger_cmd_approval(codex_bin: String, user_message: Option<String>) -> Result<()> {
let default_prompt =
"Run `touch /tmp/should-trigger-approval` so I can confirm the file exists.";
let message = user_message.unwrap_or_else(|| default_prompt.to_string());
send_message_v2_with_policies(
codex_bin,
message,
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::ReadOnly),
)
}
fn trigger_patch_approval(codex_bin: String, user_message: Option<String>) -> Result<()> {
let default_prompt =
"Create a file named APPROVAL_DEMO.txt containing a short hello message using apply_patch.";
let message = user_message.unwrap_or_else(|| default_prompt.to_string());
send_message_v2_with_policies(
codex_bin,
message,
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::ReadOnly),
)
}
fn no_trigger_cmd_approval(codex_bin: String) -> Result<()> {
let prompt = "Run `touch should_not_trigger_approval.txt`";
send_message_v2_with_policies(codex_bin, prompt.to_string(), None, None)
}
fn send_message_v2_with_policies(
codex_bin: String,
user_message: String,
approval_policy: Option<AskForApproval>,
sandbox_policy: Option<SandboxPolicy>,
) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let thread_response = client.thread_start(ThreadStartParams::default())?;
println!("< thread/start response: {thread_response:?}");
let mut turn_params = TurnStartParams {
thread_id: thread_response.thread.id.clone(),
input: vec![V2UserInput::Text { text: user_message }],
..Default::default()
};
turn_params.approval_policy = approval_policy;
turn_params.sandbox_policy = sandbox_policy;
let turn_response = client.turn_start(turn_params)?;
println!("< turn/start response: {turn_response:?}");
client.stream_turn(&thread_response.thread.id, &turn_response.turn.id)?;
Ok(())
}
fn test_login(codex_bin: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let login_response = client.login_chat_gpt()?;
println!("< loginChatGpt response: {login_response:?}");
println!(
"Open the following URL in your browser to continue:\n{}",
login_response.auth_url
);
let completion = client.wait_for_login_completion(&login_response.login_id)?;
println!("< loginChatGptComplete notification: {completion:?}");
if completion.success {
println!("Login succeeded.");
Ok(())
} else {
bail!(
"login failed: {}",
completion
.error
.as_deref()
.unwrap_or("unknown error from loginChatGptComplete")
);
}
}
fn get_account_rate_limits(codex_bin: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let response = client.get_account_rate_limits()?;
println!("< account/rateLimits/read response: {response:?}");
Ok(())
}
struct CodexClient {
child: Child,
stdin: Option<ChildStdin>,
stdout: BufReader<ChildStdout>,
pending_notifications: VecDeque<JSONRPCNotification>,
}
impl CodexClient {
fn spawn(codex_bin: String) -> Result<Self> {
let mut codex_app_server = Command::new(&codex_bin)
.arg("app-server")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::inherit())
.spawn()
.with_context(|| format!("failed to start `{codex_bin}` app-server"))?;
let stdin = codex_app_server
.stdin
.take()
.context("codex app-server stdin unavailable")?;
let stdout = codex_app_server
.stdout
.take()
.context("codex app-server stdout unavailable")?;
Ok(Self {
child: codex_app_server,
stdin: Some(stdin),
stdout: BufReader::new(stdout),
pending_notifications: VecDeque::new(),
})
}
fn initialize(&mut self) -> Result<InitializeResponse> {
let request_id = self.request_id();
let request = ClientRequest::Initialize {
request_id: request_id.clone(),
params: InitializeParams {
client_info: ClientInfo {
name: "codex-toy-app-server".to_string(),
title: Some("Codex Toy App Server".to_string()),
version: env!("CARGO_PKG_VERSION").to_string(),
},
},
};
self.send_request(request, request_id, "initialize")
}
fn new_conversation(&mut self) -> Result<NewConversationResponse> {
let request_id = self.request_id();
let request = ClientRequest::NewConversation {
request_id: request_id.clone(),
params: NewConversationParams::default(),
};
self.send_request(request, request_id, "newConversation")
}
fn add_conversation_listener(
&mut self,
conversation_id: &ConversationId,
) -> Result<AddConversationSubscriptionResponse> {
let request_id = self.request_id();
let request = ClientRequest::AddConversationListener {
request_id: request_id.clone(),
params: AddConversationListenerParams {
conversation_id: *conversation_id,
experimental_raw_events: false,
},
};
self.send_request(request, request_id, "addConversationListener")
}
fn remove_conversation_listener(&mut self, subscription_id: Uuid) -> Result<()> {
let request_id = self.request_id();
let request = ClientRequest::RemoveConversationListener {
request_id: request_id.clone(),
params: codex_app_server_protocol::RemoveConversationListenerParams { subscription_id },
};
self.send_request::<codex_app_server_protocol::RemoveConversationSubscriptionResponse>(
request,
request_id,
"removeConversationListener",
)?;
Ok(())
}
fn send_user_message(
&mut self,
conversation_id: &ConversationId,
message: &str,
) -> Result<SendUserMessageResponse> {
let request_id = self.request_id();
let request = ClientRequest::SendUserMessage {
request_id: request_id.clone(),
params: SendUserMessageParams {
conversation_id: *conversation_id,
items: vec![InputItem::Text {
text: message.to_string(),
}],
},
};
self.send_request(request, request_id, "sendUserMessage")
}
fn thread_start(&mut self, params: ThreadStartParams) -> Result<ThreadStartResponse> {
let request_id = self.request_id();
let request = ClientRequest::ThreadStart {
request_id: request_id.clone(),
params,
};
self.send_request(request, request_id, "thread/start")
}
fn turn_start(&mut self, params: TurnStartParams) -> Result<TurnStartResponse> {
let request_id = self.request_id();
let request = ClientRequest::TurnStart {
request_id: request_id.clone(),
params,
};
self.send_request(request, request_id, "turn/start")
}
fn login_chat_gpt(&mut self) -> Result<LoginChatGptResponse> {
let request_id = self.request_id();
let request = ClientRequest::LoginChatGpt {
request_id: request_id.clone(),
params: None,
};
self.send_request(request, request_id, "loginChatGpt")
}
fn get_account_rate_limits(&mut self) -> Result<GetAccountRateLimitsResponse> {
let request_id = self.request_id();
let request = ClientRequest::GetAccountRateLimits {
request_id: request_id.clone(),
params: None,
};
self.send_request(request, request_id, "account/rateLimits/read")
}
fn stream_conversation(&mut self, conversation_id: &ConversationId) -> Result<()> {
loop {
let notification = self.next_notification()?;
if !notification.method.starts_with("codex/event/") {
continue;
}
if let Some(event) = self.extract_event(notification, conversation_id)? {
match &event.msg {
EventMsg::AgentMessage(event) => {
println!("{}", event.message);
}
EventMsg::AgentMessageDelta(event) => {
print!("{}", event.delta);
std::io::stdout().flush().ok();
}
EventMsg::TaskComplete(event) => {
println!("\n[task complete: {event:?}]");
break;
}
EventMsg::TurnAborted(event) => {
println!("\n[turn aborted: {:?}]", event.reason);
break;
}
EventMsg::Error(event) => {
println!("[error] {event:?}");
}
_ => {
println!("[UNKNOWN EVENT] {:?}", event.msg);
}
}
}
}
Ok(())
}
fn wait_for_login_completion(
&mut self,
expected_login_id: &Uuid,
) -> Result<LoginChatGptCompleteNotification> {
loop {
let notification = self.next_notification()?;
if let Ok(server_notification) = ServerNotification::try_from(notification) {
match server_notification {
ServerNotification::LoginChatGptComplete(completion) => {
if &completion.login_id == expected_login_id {
return Ok(completion);
}
println!(
"[ignoring loginChatGptComplete for unexpected login_id: {}]",
completion.login_id
);
}
ServerNotification::AuthStatusChange(status) => {
println!("< authStatusChange notification: {status:?}");
}
ServerNotification::AccountRateLimitsUpdated(snapshot) => {
println!("< accountRateLimitsUpdated notification: {snapshot:?}");
}
ServerNotification::SessionConfigured(_) => {
// SessionConfigured notifications are unrelated to login; skip.
}
_ => {}
}
}
// Not a server notification (likely a conversation event); keep waiting.
}
}
fn stream_turn(&mut self, thread_id: &str, turn_id: &str) -> Result<()> {
loop {
let notification = self.next_notification()?;
let Ok(server_notification) = ServerNotification::try_from(notification) else {
continue;
};
match server_notification {
ServerNotification::ThreadStarted(payload) => {
if payload.thread.id == thread_id {
println!("< thread/started notification: {:?}", payload.thread);
}
}
ServerNotification::TurnStarted(payload) => {
if payload.turn.id == turn_id {
println!("< turn/started notification: {:?}", payload.turn.status);
}
}
ServerNotification::AgentMessageDelta(delta) => {
print!("{}", delta.delta);
std::io::stdout().flush().ok();
}
ServerNotification::CommandExecutionOutputDelta(delta) => {
print!("{}", delta.delta);
std::io::stdout().flush().ok();
}
ServerNotification::ItemStarted(payload) => {
println!("\n< item started: {:?}", payload.item);
}
ServerNotification::ItemCompleted(payload) => {
println!("< item completed: {:?}", payload.item);
}
ServerNotification::TurnCompleted(payload) => {
if payload.turn.id == turn_id {
println!("\n< turn/completed notification: {:?}", payload.turn.status);
if let Some(error) = payload.turn.error {
println!("[turn error] {}", error.message);
}
println!("< usage: {:?}", payload.usage);
break;
}
}
ServerNotification::McpToolCallProgress(payload) => {
println!("< MCP tool progress: {}", payload.message);
}
_ => {
println!("[UNKNOWN SERVER NOTIFICATION] {server_notification:?}");
}
}
}
Ok(())
}
fn extract_event(
&self,
notification: JSONRPCNotification,
conversation_id: &ConversationId,
) -> Result<Option<Event>> {
let params = notification
.params
.context("event notification missing params")?;
let mut map = match params {
Value::Object(map) => map,
other => bail!("unexpected params shape: {other:?}"),
};
let conversation_value = map
.remove("conversationId")
.context("event missing conversationId")?;
let notification_conversation: ConversationId = serde_json::from_value(conversation_value)
.context("conversationId was not a valid UUID")?;
if &notification_conversation != conversation_id {
return Ok(None);
}
let event_value = Value::Object(map);
let event: Event =
serde_json::from_value(event_value).context("failed to decode event payload")?;
Ok(Some(event))
}
fn send_request<T>(
&mut self,
request: ClientRequest,
request_id: RequestId,
method: &str,
) -> Result<T>
where
T: DeserializeOwned,
{
self.write_request(&request)?;
self.wait_for_response(request_id, method)
}
fn write_request(&mut self, request: &ClientRequest) -> Result<()> {
let request_json = serde_json::to_string(request)?;
let request_pretty = serde_json::to_string_pretty(request)?;
print_multiline_with_prefix("> ", &request_pretty);
if let Some(stdin) = self.stdin.as_mut() {
writeln!(stdin, "{request_json}")?;
stdin
.flush()
.context("failed to flush request to codex app-server")?;
} else {
bail!("codex app-server stdin closed");
}
Ok(())
}
fn wait_for_response<T>(&mut self, request_id: RequestId, method: &str) -> Result<T>
where
T: DeserializeOwned,
{
loop {
let message = self.read_jsonrpc_message()?;
match message {
JSONRPCMessage::Response(JSONRPCResponse { id, result }) => {
if id == request_id {
return serde_json::from_value(result)
.with_context(|| format!("{method} response missing payload"));
}
}
JSONRPCMessage::Error(err) => {
if err.id == request_id {
bail!("{method} failed: {err:?}");
}
}
JSONRPCMessage::Notification(notification) => {
self.pending_notifications.push_back(notification);
}
JSONRPCMessage::Request(_) => {
bail!("unexpected request from codex app-server");
}
}
}
}
fn next_notification(&mut self) -> Result<JSONRPCNotification> {
if let Some(notification) = self.pending_notifications.pop_front() {
return Ok(notification);
}
loop {
let message = self.read_jsonrpc_message()?;
match message {
JSONRPCMessage::Notification(notification) => return Ok(notification),
JSONRPCMessage::Response(_) | JSONRPCMessage::Error(_) => {
// No outstanding requests, so ignore stray responses/errors for now.
continue;
}
JSONRPCMessage::Request(_) => {
bail!("unexpected request from codex app-server");
}
}
}
}
fn read_jsonrpc_message(&mut self) -> Result<JSONRPCMessage> {
loop {
let mut response_line = String::new();
let bytes = self
.stdout
.read_line(&mut response_line)
.context("failed to read from codex app-server")?;
if bytes == 0 {
bail!("codex app-server closed stdout");
}
let trimmed = response_line.trim();
if trimmed.is_empty() {
continue;
}
let parsed: Value =
serde_json::from_str(trimmed).context("response was not valid JSON-RPC")?;
let pretty = serde_json::to_string_pretty(&parsed)?;
print_multiline_with_prefix("< ", &pretty);
let message: JSONRPCMessage = serde_json::from_value(parsed)
.context("response was not a valid JSON-RPC message")?;
return Ok(message);
}
}
fn request_id(&self) -> RequestId {
RequestId::String(Uuid::new_v4().to_string())
}
}
fn print_multiline_with_prefix(prefix: &str, payload: &str) {
for line in payload.lines() {
println!("{prefix}{line}");
}
}
impl Drop for CodexClient {
fn drop(&mut self) {
// Close stdin first so the app-server sees EOF and can exit on its own.
let _ = self.stdin.take();
if let Ok(Some(status)) = self.child.try_wait() {
println!("[codex app-server exited: {status}]");
return;
}
// Brief grace period before escalating to kill.
thread::sleep(Duration::from_millis(100));
if let Ok(Some(status)) = self.child.try_wait() {
println!("[codex app-server exited: {status}]");
return;
}
let _ = self.child.kill();
let _ = self.child.wait();
}
}

View File

@@ -46,6 +46,7 @@ app_test_support = { workspace = true }
assert_cmd = { workspace = true }
base64 = { workspace = true }
core_test_support = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
pretty_assertions = { workspace = true }
serial_test = { workspace = true }

View File

@@ -1,6 +1,16 @@
# codex-app-server
`codex app-server` is the harness Codex uses to power rich interfaces such as the [Codex VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.
`codex app-server` is the interface Codex uses to power rich interfaces such as the [Codex VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.
## Table of Contents
- [Protocol](#protocol)
- [Message Schema](#message-schema)
- [Lifecycle Overview](#lifecycle-overview)
- [Initialization](#initialization)
- [Core primitives](#core-primitives)
- [Thread & turn endpoints](#thread--turn-endpoints)
- [Auth endpoints](#auth-endpoints)
- [Events (work-in-progress)](#events-work-in-progress)
## Protocol
@@ -8,8 +18,291 @@ Similar to [MCP](https://modelcontextprotocol.io/), `codex app-server` supports
## Message Schema
Currently, you can dump a TypeScript version of the schema using `codex generate-ts`. It is specific to the version of Codex you used to run `generate-ts`, so the two are guaranteed to be compatible.
Currently, you can dump a TypeScript version of the schema using `codex app-server generate-ts`, or a JSON Schema bundle via `codex app-server generate-json-schema`. Each output is specific to the version of Codex you used to run the command, so the generated artifacts are guaranteed to match that version.
```
codex generate-ts --out DIR
codex app-server generate-ts --out DIR
codex app-server generate-json-schema --out DIR
```
## Lifecycle Overview
- Initialize once: Immediately after launching the `codex app-server` process, send an `initialize` request with your client metadata, then emit an `initialized` notification. Any other request sent before this handshake is rejected.
- Start (or resume) a thread: Call `thread/start` to open a fresh conversation. The response returns the thread object and you'll also get a `thread/started` notification. If you're continuing an existing conversation, call `thread/resume` with its ID instead.
- Begin a turn: To send user input, call `turn/start` with the target `threadId` and the user's input. Optional fields let you override model, cwd, sandbox policy, etc. This immediately returns the new turn object and triggers a `turn/started` notification.
- Stream events: After `turn/start`, keep reading JSON-RPC notifications on stdout. You'll see `item/started`, `item/completed`, deltas like `item/agentMessage/delta`, tool progress, etc. These represent streaming model output plus any side effects (commands, tool calls, reasoning notes).
- Finish the turn: When the model is done (or the turn is interrupted via `turn/interrupt`), the server sends `turn/completed` with the final turn state and token usage.
## Initialization
Clients must send a single `initialize` request before invoking any other method, then acknowledge with an `initialized` notification. The server returns the user agent string it will present to upstream services; subsequent requests issued before initialization receive a `"Not initialized"` error, and repeated `initialize` calls receive an `"Already initialized"` error.
Example:
```json
{ "method": "initialize", "id": 0, "params": {
"clientInfo": { "name": "codex-vscode", "title": "Codex VS Code Extension", "version": "0.1.0" }
} }
{ "id": 0, "result": { "userAgent": "codex-app-server/0.1.0 codex-vscode/0.1.0" } }
{ "method": "initialized" }
```
## Core primitives
We have 3 top level primitives:
- Thread - a conversation between the Codex agent and a user. Each thread contains multiple turns.
- Turn - one turn of the conversation, typically starting with a user message and finishing with an agent message. Each turn contains multiple items.
- Item - represents user inputs and agent outputs as part of the turn, persisted and used as the context for future conversations.
## Thread & turn endpoints
The JSON-RPC API exposes dedicated methods for managing Codex conversations. Threads store long-lived conversation metadata, and turns store the per-message exchange (input → Codex output, including streamed items). Use the thread APIs to create, list, or archive sessions, then drive the conversation with turn APIs and notifications.
### Quick reference
- `thread/start` — create a new thread; emits `thread/started` and auto-subscribes you to turn/item events for that thread.
- `thread/resume` — reopen an existing thread by id so subsequent `turn/start` calls append to it.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders` filtering.
- `thread/archive` — move a thread's rollout file into the archived directory; returns `{}` on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
### 1) Start or resume a thread
Start a fresh thread when you need a new Codex conversation.
```json
{ "method": "thread/start", "id": 10, "params": {
// Optionally set config settings. If not specified, will use the user's
// current config settings.
"model": "gpt-5.1-codex",
"cwd": "/Users/me/project",
"approvalPolicy": "never",
"sandbox": "workspaceWrite",
} }
{ "id": 10, "result": {
"thread": {
"id": "thr_123",
"preview": "",
"modelProvider": "openai",
"createdAt": 1730910000
}
} }
{ "method": "thread/started", "params": { "thread": { } } }
```
To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted:
```json
{ "method": "thread/resume", "id": 11, "params": { "threadId": "thr_123" } }
{ "id": 11, "result": { "thread": { "id": "thr_123", } } }
```
### 2) List threads (pagination & filters)
`thread/list` lets you render a history UI. Pass any combination of:
- `cursor` — opaque string from a prior response; omit for the first page.
- `limit` — server defaults to a reasonable page size if unset.
- `modelProviders` — restrict results to specific providers; unset, null, or an empty array will include all providers.
Example:
```json
{ "method": "thread/list", "id": 20, "params": {
"cursor": null,
"limit": 25,
} }
{ "id": 20, "result": {
"data": [
{ "id": "thr_a", "preview": "Create a TUI", "modelProvider": "openai", "createdAt": 1730831111 },
{ "id": "thr_b", "preview": "Fix tests", "modelProvider": "openai", "createdAt": 1730750000 }
],
"nextCursor": "opaque-token-or-null"
} }
```
When `nextCursor` is `null`, you've reached the final page.
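For a client driving this over JSON-RPC, the pagination loop is mechanical. A minimal sketch, assuming a hypothetical `rpc` helper that sends one request and returns its `result` value:

```rust
use serde_json::{Value, json};

/// Collect every thread summary by following `nextCursor` until it is null.
/// `rpc` is a hypothetical helper: send one JSON-RPC request, return `result`.
fn list_all_threads(mut rpc: impl FnMut(&str, Value) -> Value) -> Vec<Value> {
    let mut threads = Vec::new();
    let mut cursor: Option<String> = None;
    loop {
        let result = rpc("thread/list", json!({ "cursor": cursor, "limit": 25 }));
        if let Some(page) = result.get("data").and_then(Value::as_array) {
            threads.extend(page.iter().cloned());
        }
        match result.get("nextCursor").and_then(Value::as_str) {
            Some(next) => cursor = Some(next.to_string()),
            None => break, // a null (or absent) nextCursor means the final page
        }
    }
    threads
}
```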
### 3) Archive a thread
Use `thread/archive` to move the persisted rollout (stored as a JSONL file on disk) into the archived sessions directory.
```json
{ "method": "thread/archive", "id": 21, "params": { "threadId": "thr_b" } }
{ "id": 21, "result": {} }
```
An archived thread will not appear in future calls to `thread/list`.
### 4) Start a turn (send user input)
Turns attach user input (text or images) to a thread and trigger Codex generation. The `input` field is a list of discriminated unions:
- `{"type":"text","text":"Explain this diff"}`
- `{"type":"image","url":"https://…png"}`
- `{"type":"localImage","path":"/tmp/screenshot.png"}`
You can optionally specify config overrides on the new turn. If specified, these settings become the default for subsequent turns on the same thread.
```json
{ "method": "turn/start", "id": 30, "params": {
"threadId": "thr_123",
"input": [ { "type": "text", "text": "Run tests" } ],
// Below are optional config overrides
"cwd": "/Users/me/project",
"approvalPolicy": "unlessTrusted",
"sandboxPolicy": {
"mode": "workspaceWrite",
"writableRoots": ["/Users/me/project"],
"networkAccess": true
},
"model": "gpt-5.1-codex",
"effort": "medium",
"summary": "concise"
} }
{ "id": 30, "result": { "turn": {
"id": "turn_456",
"status": "inProgress",
"items": [],
"error": null
} } }
```
### 5) Interrupt an active turn
You can cancel a running turn with `turn/interrupt`.
```json
{ "method": "turn/interrupt", "id": 31, "params": {
"threadId": "thr_123",
"turnId": "turn_456"
} }
{ "id": 31, "result": {} }
```
The server requests cancellation of any running subprocesses, then emits a `turn/completed` event with `status: "interrupted"`. Rely on the `turn/completed` notification to know when Codex-side cleanup is done.
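Because interruption is asynchronous, treat the empty `{}` response as an acknowledgement only. A sketch of the client-side wait, reusing the hypothetical `rpc` helper from above plus a hypothetical `next_notification` source of decoded notifications:

```rust
use serde_json::{Value, json};

/// Request an interrupt, then drain notifications until the matching
/// `turn/completed` arrives. `next_notification` is a hypothetical source of
/// (method, params) pairs read off the app-server's stdout.
fn interrupt_and_wait(
    mut rpc: impl FnMut(&str, Value) -> Value,
    mut next_notification: impl FnMut() -> (String, Value),
    thread_id: &str,
    turn_id: &str,
) {
    // Acknowledged with `{}`; server-side cleanup continues after this returns.
    rpc("turn/interrupt", json!({ "threadId": thread_id, "turnId": turn_id }));
    loop {
        let (method, params) = next_notification();
        if method == "turn/completed" && params["turn"]["id"].as_str() == Some(turn_id) {
            // Expect `status: "interrupted"` on the final turn object.
            println!("final turn status: {}", params["turn"]["status"]);
            break;
        }
    }
}
```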
## Auth endpoints
The JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no `id`). Use these to determine auth state, start or cancel logins, logout, and inspect ChatGPT rate limits.
### Quick reference
- `account/read` — fetch current account info; optionally refresh tokens.
- `account/login/start` — begin login (`apiKey` or `chatgpt`).
- `account/login/completed` (notify) — emitted when a login attempt finishes (success or error).
- `account/login/cancel` — cancel a pending ChatGPT login by `loginId`.
- `account/logout` — sign out; triggers `account/updated`.
- `account/updated` (notify) — emitted whenever auth mode changes (`authMode`: `apikey`, `chatgpt`, or `null`).
- `account/rateLimits/read` — fetch ChatGPT rate limits; updates arrive via `account/rateLimits/updated` (notify).
### 1) Check auth state
Request:
```json
{ "method": "account/read", "id": 1, "params": { "refreshToken": false } }
```
Response examples:
```json
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": false } } // No OpenAI auth needed (e.g., OSS/local models)
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": true } } // OpenAI auth required (typical for OpenAI-hosted models)
{ "id": 1, "result": { "account": { "type": "apiKey" }, "requiresOpenaiAuth": true } }
{ "id": 1, "result": { "account": { "type": "chatgpt", "email": "user@example.com", "planType": "pro" }, "requiresOpenaiAuth": true } }
```
Field notes:
- `refreshToken` (bool): set `true` to force a token refresh.
- `requiresOpenaiAuth` reflects the active provider; when `false`, Codex can run without OpenAI credentials.
### 2) Log in with an API key
1. Send:
```json
{ "method": "account/login/start", "id": 2, "params": { "type": "apiKey", "apiKey": "sk-…" } }
```
2. Expect:
```json
{ "id": 2, "result": { "type": "apiKey" } }
```
3. Notifications:
```json
{ "method": "account/login/completed", "params": { "loginId": null, "success": true, "error": null } }
{ "method": "account/updated", "params": { "authMode": "apikey" } }
```
### 3) Log in with ChatGPT (browser flow)
1. Start:
```json
{ "method": "account/login/start", "id": 3, "params": { "type": "chatgpt" } }
{ "id": 3, "result": { "type": "chatgpt", "loginId": "<uuid>", "authUrl": "https://chatgpt.com/…&redirect_uri=http%3A%2F%2Flocalhost%3A<port>%2Fauth%2Fcallback" } }
```
2. Open `authUrl` in a browser; the app-server hosts the local callback.
3. Wait for notifications:
```json
{ "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": true, "error": null } }
{ "method": "account/updated", "params": { "authMode": "chatgpt" } }
```
### 4) Cancel a ChatGPT login
```json
{ "method": "account/login/cancel", "id": 4, "params": { "loginId": "<uuid>" } }
{ "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": false, "error": "…" } }
```
### 5) Logout
```json
{ "method": "account/logout", "id": 5 }
{ "id": 5, "result": {} }
{ "method": "account/updated", "params": { "authMode": null } }
```
### 6) Rate limits (ChatGPT)
```json
{ "method": "account/rateLimits/read", "id": 6 }
{ "id": 6, "result": { "rateLimits": { "primary": { "usedPercent": 25, "windowDurationMins": 15, "resetsAt": 1730947200 }, "secondary": null } } }
{ "method": "account/rateLimits/updated", "params": { "rateLimits": { … } } }
```
Field notes:
- `usedPercent` is current usage within the OpenAI quota window.
- `windowDurationMins` is the quota window length.
- `resetsAt` is a Unix timestamp (seconds) for the next reset.
### Dev notes
- `codex app-server generate-ts --out <dir>` emits v2 types under `v2/`.
- `codex app-server generate-json-schema --out <dir>` outputs `codex_app_server_protocol.schemas.json`.
- See [“Authentication and authorization” in the config docs](../../docs/config.md#authentication-and-authorization) for configuration knobs.
## Events (work-in-progress)
Event notifications are the server-initiated event stream for thread lifecycles, turn lifecycles, and the items within them. After you start or resume a thread, keep reading stdout for `thread/started`, `turn/*`, and `item/*` notifications.
### Turn events
The app-server streams JSON-RPC notifications while a turn is running. Each turn starts with `turn/started` (carrying the initial `turn`) and ends with `turn/completed` (the final `turn` plus token `usage`). Clients subscribe to the events they care about and render each item incrementally as updates arrive. The per-item lifecycle is always: `item/started` → zero or more item-specific deltas → `item/completed`.
#### Thread items
`ThreadItem` is the tagged union carried in turn responses and `item/*` notifications. Currently we support events for the following items:
- `userMessage` — `{id, content}` where `content` is a list of user inputs (`text`, `image`, or `localImage`).
- `agentMessage` — `{id, text}` containing the accumulated agent reply.
- `reasoning` — `{id, summary, content}` where `summary` holds streamed reasoning summaries (applicable to most OpenAI models) and `content` holds raw reasoning blocks (used by, e.g., open-source models).
- `mcpToolCall` — `{id, server, tool, status, arguments, result?, error?}` describing MCP calls; `status` is `inProgress`, `completed`, or `failed`.
- `webSearch` — `{id, query}` for a web search request issued by the agent.
All items emit two shared lifecycle events:
- `item/started` — emits the full `item` when a new unit of work begins so the UI can render it immediately; the `item.id` in this payload matches the `itemId` used by deltas.
- `item/completed` — sends the final `item` once that work finishes (e.g., after a tool call or message completes); treat this as the authoritative state.
There are additional item-specific events:
#### agentMessage
- `item/agentMessage/delta` — appends streamed text for the agent message; concatenate `delta` values for the same `itemId` in order to reconstruct the full reply (see the sketch below).
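Since deltas for a given item arrive in order, reconstruction is a keyed string append. A minimal sketch (type and method names here are illustrative, not part of the schema):

```rust
use std::collections::HashMap;

/// Accumulates streamed agent-message text per itemId.
#[derive(Default)]
struct AgentMessageBuffers {
    by_item: HashMap<String, String>,
}

impl AgentMessageBuffers {
    /// Apply one `item/agentMessage/delta` notification.
    fn on_delta(&mut self, item_id: &str, delta: &str) {
        self.by_item
            .entry(item_id.to_string())
            .or_default()
            .push_str(delta);
    }

    /// `item/completed` carries the authoritative final item; prefer its text.
    fn on_completed(&mut self, item_id: &str, final_text: String) {
        self.by_item.insert(item_id.to_string(), final_text);
    }
}
```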
#### reasoning
- `item/reasoning/summaryTextDelta` — streams readable reasoning summaries; `summaryIndex` increments when a new summary section opens.
- `item/reasoning/summaryPartAdded` — marks the boundary between reasoning summary sections for an `itemId`; subsequent `summaryTextDelta` entries share the same `summaryIndex`.
- `item/reasoning/textDelta` — streams raw reasoning text (only applicable to, e.g., open-source models); use `contentIndex` to group deltas that belong together before showing them in the UI (a grouping sketch follows).
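The same buffering pattern extends to reasoning, with one buffer per section. A sketch that groups summary deltas by `summaryIndex` (names illustrative):

```rust
use std::collections::BTreeMap;

/// One accumulated text buffer per reasoning summary section.
#[derive(Default)]
struct ReasoningSummaries {
    sections: BTreeMap<i64, String>,
}

impl ReasoningSummaries {
    /// `item/reasoning/summaryPartAdded`: a new section boundary opened.
    fn on_part_added(&mut self, summary_index: i64) {
        self.sections.entry(summary_index).or_default();
    }

    /// `item/reasoning/summaryTextDelta`: append to that section's buffer.
    fn on_text_delta(&mut self, summary_index: i64, delta: &str) {
        self.sections
            .entry(summary_index)
            .or_default()
            .push_str(delta);
    }
}
```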

View File

@@ -0,0 +1,628 @@
use crate::codex_message_processor::ApiVersion;
use crate::codex_message_processor::PendingInterrupts;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
use codex_app_server_protocol::AgentMessageDeltaNotification;
use codex_app_server_protocol::ApplyPatchApprovalParams;
use codex_app_server_protocol::ApplyPatchApprovalResponse;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::CommandAction as V2ParsedCommand;
use codex_app_server_protocol::CommandExecutionOutputDeltaNotification;
use codex_app_server_protocol::CommandExecutionRequestApprovalParams;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
use codex_app_server_protocol::CommandExecutionStatus;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
use codex_app_server_protocol::McpToolCallError;
use codex_app_server_protocol::McpToolCallResult;
use codex_app_server_protocol::McpToolCallStatus;
use codex_app_server_protocol::ReasoningSummaryPartAddedNotification;
use codex_app_server_protocol::ReasoningSummaryTextDeltaNotification;
use codex_app_server_protocol::ReasoningTextDeltaNotification;
use codex_app_server_protocol::SandboxCommandAssessment as V2SandboxCommandAssessment;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::TurnInterruptResponse;
use codex_core::CodexConversation;
use codex_core::parse_command::shlex_join;
use codex_core::protocol::ApplyPatchApprovalRequestEvent;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecApprovalRequestEvent;
use codex_core::protocol::ExecCommandEndEvent;
use codex_core::protocol::McpToolCallBeginEvent;
use codex_core::protocol::McpToolCallEndEvent;
use codex_core::protocol::Op;
use codex_core::protocol::ReviewDecision;
use codex_protocol::ConversationId;
use std::convert::TryFrom;
use std::sync::Arc;
use tokio::sync::oneshot;
use tracing::error;
type JsonValue = serde_json::Value;
pub(crate) async fn apply_bespoke_event_handling(
event: Event,
conversation_id: ConversationId,
conversation: Arc<CodexConversation>,
outgoing: Arc<OutgoingMessageSender>,
pending_interrupts: PendingInterrupts,
api_version: ApiVersion,
) {
let Event { id: event_id, msg } = event;
match msg {
EventMsg::ApplyPatchApprovalRequest(ApplyPatchApprovalRequestEvent {
call_id,
changes,
reason,
grant_root,
}) => {
let params = ApplyPatchApprovalParams {
conversation_id,
call_id,
file_changes: changes,
reason,
grant_root,
};
let rx = outgoing
.send_request(ServerRequestPayload::ApplyPatchApproval(params))
.await;
tokio::spawn(async move {
on_patch_approval_response(event_id, rx, conversation).await;
});
}
EventMsg::ExecApprovalRequest(ExecApprovalRequestEvent {
call_id,
turn_id,
command,
cwd,
reason,
risk,
parsed_cmd,
}) => match api_version {
ApiVersion::V1 => {
let params = ExecCommandApprovalParams {
conversation_id,
call_id,
command,
cwd,
reason,
risk,
parsed_cmd,
};
let rx = outgoing
.send_request(ServerRequestPayload::ExecCommandApproval(params))
.await;
tokio::spawn(async move {
on_exec_approval_response(event_id, rx, conversation).await;
});
}
ApiVersion::V2 => {
let params = CommandExecutionRequestApprovalParams {
thread_id: conversation_id.to_string(),
turn_id: turn_id.clone(),
// Until we migrate the core to be aware of a first-class CommandExecutionItem
// and emit the corresponding EventMsg, we repurpose the call_id as the item_id.
item_id: call_id.clone(),
reason,
risk: risk.map(V2SandboxCommandAssessment::from),
};
let rx = outgoing
.send_request(ServerRequestPayload::CommandExecutionRequestApproval(
params,
))
.await;
tokio::spawn(async move {
on_command_execution_request_approval_response(event_id, rx, conversation)
.await;
});
}
},
// TODO(celia): properly construct McpToolCall TurnItem in core.
EventMsg::McpToolCallBegin(begin_event) => {
let notification = construct_mcp_tool_call_notification(begin_event).await;
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::McpToolCallEnd(end_event) => {
let notification = construct_mcp_tool_call_end_notification(end_event).await;
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::AgentMessageContentDelta(event) => {
let notification = AgentMessageDeltaNotification {
item_id: event.item_id,
delta: event.delta,
};
outgoing
.send_server_notification(ServerNotification::AgentMessageDelta(notification))
.await;
}
EventMsg::ReasoningContentDelta(event) => {
let notification = ReasoningSummaryTextDeltaNotification {
item_id: event.item_id,
delta: event.delta,
summary_index: event.summary_index,
};
outgoing
.send_server_notification(ServerNotification::ReasoningSummaryTextDelta(
notification,
))
.await;
}
EventMsg::ReasoningRawContentDelta(event) => {
let notification = ReasoningTextDeltaNotification {
item_id: event.item_id,
delta: event.delta,
content_index: event.content_index,
};
outgoing
.send_server_notification(ServerNotification::ReasoningTextDelta(notification))
.await;
}
EventMsg::AgentReasoningSectionBreak(event) => {
let notification = ReasoningSummaryPartAddedNotification {
item_id: event.item_id,
summary_index: event.summary_index,
};
outgoing
.send_server_notification(ServerNotification::ReasoningSummaryPartAdded(
notification,
))
.await;
}
EventMsg::TokenCount(token_count_event) => {
if let Some(rate_limits) = token_count_event.rate_limits {
outgoing
.send_server_notification(ServerNotification::AccountRateLimitsUpdated(
AccountRateLimitsUpdatedNotification {
rate_limits: rate_limits.into(),
},
))
.await;
}
}
EventMsg::ItemStarted(item_started_event) => {
let item: ThreadItem = item_started_event.item.clone().into();
let notification = ItemStartedNotification { item };
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::ItemCompleted(item_completed_event) => {
let item: ThreadItem = item_completed_event.item.clone().into();
let notification = ItemCompletedNotification { item };
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::ExecCommandBegin(exec_command_begin_event) => {
let item = ThreadItem::CommandExecution {
id: exec_command_begin_event.call_id.clone(),
command: shlex_join(&exec_command_begin_event.command),
cwd: exec_command_begin_event.cwd,
status: CommandExecutionStatus::InProgress,
command_actions: exec_command_begin_event
.parsed_cmd
.into_iter()
.map(V2ParsedCommand::from)
.collect(),
aggregated_output: None,
exit_code: None,
duration_ms: None,
};
let notification = ItemStartedNotification { item };
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::ExecCommandOutputDelta(exec_command_output_delta_event) => {
let notification = CommandExecutionOutputDeltaNotification {
item_id: exec_command_output_delta_event.call_id.clone(),
delta: String::from_utf8_lossy(&exec_command_output_delta_event.chunk).to_string(),
};
outgoing
.send_server_notification(ServerNotification::CommandExecutionOutputDelta(
notification,
))
.await;
}
EventMsg::ExecCommandEnd(exec_command_end_event) => {
let ExecCommandEndEvent {
call_id,
command,
cwd,
parsed_cmd,
aggregated_output,
exit_code,
duration,
..
} = exec_command_end_event;
let status = if exit_code == 0 {
CommandExecutionStatus::Completed
} else {
CommandExecutionStatus::Failed
};
let aggregated_output = if aggregated_output.is_empty() {
None
} else {
Some(aggregated_output)
};
let duration_ms = i64::try_from(duration.as_millis()).unwrap_or(i64::MAX);
let item = ThreadItem::CommandExecution {
id: call_id,
command: shlex_join(&command),
cwd,
status,
command_actions: parsed_cmd.into_iter().map(V2ParsedCommand::from).collect(),
aggregated_output,
exit_code: Some(exit_code),
duration_ms: Some(duration_ms),
};
let notification = ItemCompletedNotification { item };
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
// If this is a TurnAborted, reply to any pending interrupt requests.
EventMsg::TurnAborted(turn_aborted_event) => {
let pending = {
let mut map = pending_interrupts.lock().await;
map.remove(&conversation_id).unwrap_or_default()
};
if !pending.is_empty() {
for (rid, ver) in pending {
match ver {
ApiVersion::V1 => {
let response = InterruptConversationResponse {
abort_reason: turn_aborted_event.reason.clone(),
};
outgoing.send_response(rid, response).await;
}
ApiVersion::V2 => {
let response = TurnInterruptResponse {};
outgoing.send_response(rid, response).await;
}
}
}
}
}
_ => {}
}
}
async fn on_patch_approval_response(
event_id: String,
receiver: oneshot::Receiver<JsonValue>,
codex: Arc<CodexConversation>,
) {
let response = receiver.await;
let value = match response {
Ok(value) => value,
Err(err) => {
error!("request failed: {err:?}");
if let Err(submit_err) = codex
.submit(Op::PatchApproval {
id: event_id.clone(),
decision: ReviewDecision::Denied,
})
.await
{
error!("failed to submit denied PatchApproval after request failure: {submit_err}");
}
return;
}
};
let response =
serde_json::from_value::<ApplyPatchApprovalResponse>(value).unwrap_or_else(|err| {
error!("failed to deserialize ApplyPatchApprovalResponse: {err}");
ApplyPatchApprovalResponse {
decision: ReviewDecision::Denied,
}
});
if let Err(err) = codex
.submit(Op::PatchApproval {
id: event_id,
decision: response.decision,
})
.await
{
error!("failed to submit PatchApproval: {err}");
}
}
async fn on_exec_approval_response(
event_id: String,
receiver: oneshot::Receiver<JsonValue>,
conversation: Arc<CodexConversation>,
) {
let response = receiver.await;
let value = match response {
Ok(value) => value,
Err(err) => {
error!("request failed: {err:?}");
return;
}
};
// Try to deserialize `value` and then make the appropriate call to `conversation`.
let response =
serde_json::from_value::<ExecCommandApprovalResponse>(value).unwrap_or_else(|err| {
error!("failed to deserialize ExecCommandApprovalResponse: {err}");
// If we cannot deserialize the response, we deny the request to be
// conservative.
ExecCommandApprovalResponse {
decision: ReviewDecision::Denied,
}
});
if let Err(err) = conversation
.submit(Op::ExecApproval {
id: event_id,
decision: response.decision,
})
.await
{
error!("failed to submit ExecApproval: {err}");
}
}
async fn on_command_execution_request_approval_response(
event_id: String,
receiver: oneshot::Receiver<JsonValue>,
conversation: Arc<CodexConversation>,
) {
let response = receiver.await;
let value = match response {
Ok(value) => value,
Err(err) => {
error!("request failed: {err:?}");
return;
}
};
let response = serde_json::from_value::<CommandExecutionRequestApprovalResponse>(value)
.unwrap_or_else(|err| {
error!("failed to deserialize CommandExecutionRequestApprovalResponse: {err}");
CommandExecutionRequestApprovalResponse {
decision: ApprovalDecision::Decline,
accept_settings: None,
}
});
let CommandExecutionRequestApprovalResponse {
decision,
accept_settings,
} = response;
let decision = match (decision, accept_settings) {
(ApprovalDecision::Accept, Some(settings)) if settings.for_session => {
ReviewDecision::ApprovedForSession
}
(ApprovalDecision::Accept, _) => ReviewDecision::Approved,
(ApprovalDecision::Decline, _) => ReviewDecision::Denied,
(ApprovalDecision::Cancel, _) => ReviewDecision::Abort,
};
if let Err(err) = conversation
.submit(Op::ExecApproval {
id: event_id,
decision,
})
.await
{
error!("failed to submit ExecApproval: {err}");
}
}
/// similar to handle_mcp_tool_call_begin in exec
async fn construct_mcp_tool_call_notification(
begin_event: McpToolCallBeginEvent,
) -> ItemStartedNotification {
let item = ThreadItem::McpToolCall {
id: begin_event.call_id,
server: begin_event.invocation.server,
tool: begin_event.invocation.tool,
status: McpToolCallStatus::InProgress,
arguments: begin_event.invocation.arguments.unwrap_or(JsonValue::Null),
result: None,
error: None,
};
ItemStartedNotification { item }
}
/// similar to handle_mcp_tool_call_end in exec
async fn construct_mcp_tool_call_end_notification(
end_event: McpToolCallEndEvent,
) -> ItemCompletedNotification {
let status = if end_event.is_success() {
McpToolCallStatus::Completed
} else {
McpToolCallStatus::Failed
};
let (result, error) = match &end_event.result {
Ok(value) => (
Some(McpToolCallResult {
content: value.content.clone(),
structured_content: value.structured_content.clone(),
}),
None,
),
Err(message) => (
None,
Some(McpToolCallError {
message: message.clone(),
}),
),
};
let item = ThreadItem::McpToolCall {
id: end_event.call_id,
server: end_event.invocation.server,
tool: end_event.invocation.tool,
status,
arguments: end_event.invocation.arguments.unwrap_or(JsonValue::Null),
result,
error,
};
ItemCompletedNotification { item }
}
#[cfg(test)]
mod tests {
use super::*;
use codex_core::protocol::McpInvocation;
use mcp_types::CallToolResult;
use mcp_types::ContentBlock;
use mcp_types::TextContent;
use pretty_assertions::assert_eq;
use serde_json::Value as JsonValue;
use std::time::Duration;
#[tokio::test]
async fn test_construct_mcp_tool_call_begin_notification_with_args() {
let begin_event = McpToolCallBeginEvent {
call_id: "call_123".to_string(),
invocation: McpInvocation {
server: "codex".to_string(),
tool: "list_mcp_resources".to_string(),
arguments: Some(serde_json::json!({"server": ""})),
},
};
let notification = construct_mcp_tool_call_notification(begin_event.clone()).await;
let expected = ItemStartedNotification {
item: ThreadItem::McpToolCall {
id: begin_event.call_id,
server: begin_event.invocation.server,
tool: begin_event.invocation.tool,
status: McpToolCallStatus::InProgress,
arguments: serde_json::json!({"server": ""}),
result: None,
error: None,
},
};
assert_eq!(notification, expected);
}
#[tokio::test]
async fn test_construct_mcp_tool_call_begin_notification_without_args() {
let begin_event = McpToolCallBeginEvent {
call_id: "call_456".to_string(),
invocation: McpInvocation {
server: "codex".to_string(),
tool: "list_mcp_resources".to_string(),
arguments: None,
},
};
let notification = construct_mcp_tool_call_notification(begin_event.clone()).await;
let expected = ItemStartedNotification {
item: ThreadItem::McpToolCall {
id: begin_event.call_id,
server: begin_event.invocation.server,
tool: begin_event.invocation.tool,
status: McpToolCallStatus::InProgress,
arguments: JsonValue::Null,
result: None,
error: None,
},
};
assert_eq!(notification, expected);
}
#[tokio::test]
async fn test_construct_mcp_tool_call_end_notification_success() {
let content = vec![ContentBlock::TextContent(TextContent {
annotations: None,
text: "{\"resources\":[]}".to_string(),
r#type: "text".to_string(),
})];
let result = CallToolResult {
content: content.clone(),
is_error: Some(false),
structured_content: None,
};
let end_event = McpToolCallEndEvent {
call_id: "call_789".to_string(),
invocation: McpInvocation {
server: "codex".to_string(),
tool: "list_mcp_resources".to_string(),
arguments: Some(serde_json::json!({"server": ""})),
},
duration: Duration::from_nanos(92708),
result: Ok(result),
};
let notification = construct_mcp_tool_call_end_notification(end_event.clone()).await;
let expected = ItemCompletedNotification {
item: ThreadItem::McpToolCall {
id: end_event.call_id,
server: end_event.invocation.server,
tool: end_event.invocation.tool,
status: McpToolCallStatus::Completed,
arguments: serde_json::json!({"server": ""}),
result: Some(McpToolCallResult {
content,
structured_content: None,
}),
error: None,
},
};
assert_eq!(notification, expected);
}
#[tokio::test]
async fn test_construct_mcp_tool_call_end_notification_error() {
let end_event = McpToolCallEndEvent {
call_id: "call_err".to_string(),
invocation: McpInvocation {
server: "codex".to_string(),
tool: "list_mcp_resources".to_string(),
arguments: None,
},
duration: Duration::from_millis(1),
result: Err("boom".to_string()),
};
let notification = construct_mcp_tool_call_end_notification(end_event.clone()).await;
let expected = ItemCompletedNotification {
item: ThreadItem::McpToolCall {
id: end_event.call_id,
server: end_event.invocation.server,
tool: end_event.invocation.tool,
status: McpToolCallStatus::Failed,
arguments: JsonValue::Null,
result: None,
error: Some(McpToolCallError {
message: "boom".to_string(),
}),
},
};
assert_eq!(notification, expected);
}
}
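For reference, a hedged sketch of the client responses that `on_command_execution_request_approval_response` above maps onto `ReviewDecision` values; the field names come from the structs in this file, but the exact wire casing of the enum values is an assumption:

```rust
use serde_json::json;

fn main() {
    // decision accept + accept_settings.for_session -> ReviewDecision::ApprovedForSession
    let _approve_for_session = json!({
        "decision": "accept",
        "accept_settings": { "for_session": true }
    });
    // decision accept without settings -> ReviewDecision::Approved
    let _approve_once = json!({ "decision": "accept" });
    // decision decline -> ReviewDecision::Denied; cancel -> ReviewDecision::Abort
    let _decline = json!({ "decision": "decline" });
}
```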

File diff suppressed because it is too large

View File

@@ -19,6 +19,10 @@ pub(crate) async fn run_fuzzy_file_search(
roots: Vec<String>,
cancellation_flag: Arc<AtomicBool>,
) -> Vec<FuzzyFileSearchResult> {
if roots.is_empty() {
return Vec::new();
}
#[expect(clippy::expect_used)]
let limit_per_root =
NonZero::new(LIMIT_PER_ROOT).expect("LIMIT_PER_ROOT should be a valid non-zero usize");

View File

@@ -28,6 +28,7 @@ use tracing_subscriber::filter::Targets;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
mod bespoke_event_handling;
mod codex_message_processor;
mod error_code;
mod fuzzy_file_search;
@@ -46,7 +47,7 @@ pub async fn run_main(
) -> IoResult<()> {
// Set up channels.
let (incoming_tx, mut incoming_rx) = mpsc::channel::<JSONRPCMessage>(CHANNEL_CAPACITY);
let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(CHANNEL_CAPACITY);
// Task: read from stdin, push to `incoming_tx`.
let stdin_reader_handle = tokio::spawn({

View File

@@ -19,12 +19,12 @@ use crate::error_code::INTERNAL_ERROR_CODE;
/// Sends messages to the client and manages request callbacks.
pub(crate) struct OutgoingMessageSender {
next_request_id: AtomicI64,
sender: mpsc::UnboundedSender<OutgoingMessage>,
sender: mpsc::Sender<OutgoingMessage>,
request_id_to_callback: Mutex<HashMap<RequestId, oneshot::Sender<Result>>>,
}
impl OutgoingMessageSender {
pub(crate) fn new(sender: mpsc::UnboundedSender<OutgoingMessage>) -> Self {
pub(crate) fn new(sender: mpsc::Sender<OutgoingMessage>) -> Self {
Self {
next_request_id: AtomicI64::new(0),
sender,
@@ -45,8 +45,12 @@ impl OutgoingMessageSender {
}
let outgoing_message =
OutgoingMessage::Request(request.request_with_id(outgoing_message_id));
let _ = self.sender.send(outgoing_message);
OutgoingMessage::Request(request.request_with_id(outgoing_message_id.clone()));
if let Err(err) = self.sender.send(outgoing_message).await {
warn!("failed to send request {outgoing_message_id:?} to client: {err:?}");
let mut request_id_to_callback = self.request_id_to_callback.lock().await;
request_id_to_callback.remove(&outgoing_message_id);
}
rx_approve
}
@@ -72,7 +76,9 @@ impl OutgoingMessageSender {
match serde_json::to_value(response) {
Ok(result) => {
let outgoing_message = OutgoingMessage::Response(OutgoingResponse { id, result });
let _ = self.sender.send(outgoing_message);
if let Err(err) = self.sender.send(outgoing_message).await {
warn!("failed to send response to client: {err:?}");
}
}
Err(err) => {
self.send_error(
@@ -89,21 +95,29 @@ impl OutgoingMessageSender {
}
pub(crate) async fn send_server_notification(&self, notification: ServerNotification) {
let _ = self
if let Err(err) = self
.sender
.send(OutgoingMessage::AppServerNotification(notification));
.send(OutgoingMessage::AppServerNotification(notification))
.await
{
warn!("failed to send server notification to client: {err:?}");
}
}
/// All notifications should be migrated to [`ServerNotification`] and
/// [`OutgoingMessage::Notification`] should be removed.
pub(crate) async fn send_notification(&self, notification: OutgoingNotification) {
let outgoing_message = OutgoingMessage::Notification(notification);
let _ = self.sender.send(outgoing_message);
if let Err(err) = self.sender.send(outgoing_message).await {
warn!("failed to send notification to client: {err:?}");
}
}
pub(crate) async fn send_error(&self, id: RequestId, error: JSONRPCErrorError) {
let outgoing_message = OutgoingMessage::Error(OutgoingError { id, error });
let _ = self.sender.send(outgoing_message);
if let Err(err) = self.sender.send(outgoing_message).await {
warn!("failed to send error to client: {err:?}");
}
}
}
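A minimal sketch of the bounded-channel pattern this diff adopts: `Sender::send` is async, applies backpressure when the channel is full, and returns an error that can be logged rather than silently discarded; the capacity and message type here are illustrative:

```rust
use tokio::sync::mpsc;
use tracing::warn;

#[derive(Debug)]
struct OutgoingMessage(String); // stand-in for the real enum

async fn send_or_warn(tx: &mpsc::Sender<OutgoingMessage>, msg: OutgoingMessage) {
    // With a bounded channel, `send` waits until capacity is available and
    // only fails once the receiving side has been dropped.
    if let Err(err) = tx.send(msg).await {
        warn!("failed to send message to client: {err:?}");
    }
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<OutgoingMessage>(64);
    send_or_warn(&tx, OutgoingMessage("hello".to_string())).await;
    assert!(rx.recv().await.is_some());
}
```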

View File

@@ -19,6 +19,7 @@ use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::FeedbackUploadParams;
use codex_app_server_protocol::GetAccountParams;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InterruptConversationParams;
@@ -249,6 +250,15 @@ impl McpProcess {
self.send_request("account/rateLimits/read", None).await
}
/// Send an `account/read` JSON-RPC request.
pub async fn send_get_account_request(
&mut self,
params: GetAccountParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("account/read", params).await
}
/// Send a `feedback/upload` JSON-RPC request.
pub async fn send_feedback_upload_request(
&mut self,

View File

@@ -27,7 +27,7 @@ fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
std::fs::write(
config_toml,
r#"
model = "gpt-5-codex"
model = "gpt-5.1-codex"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
model_reasoning_summary = "detailed"
@@ -87,7 +87,7 @@ async fn get_config_toml_parses_all_fields() -> Result<()> {
}),
forced_chatgpt_workspace_id: Some("12345678-0000-0000-0000-000000000000".into()),
forced_login_method: Some(ForcedLoginMethod::Chatgpt),
model: Some("gpt-5-codex".into()),
model: Some("gpt-5.1-codex".into()),
model_reasoning_effort: Some(ReasoningEffort::High),
model_reasoning_summary: Some(ReasoningSummary::Detailed),
model_verbosity: Some(Verbosity::Medium),

View File

@@ -7,8 +7,6 @@ mod fuzzy_file_search;
mod interrupt;
mod list_resume;
mod login;
mod model_list;
mod rate_limits;
mod send_message;
mod set_default_model;
mod user_agent;

View File

@@ -57,7 +57,7 @@ fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
std::fs::write(
config_toml,
r#"
model = "gpt-5-codex"
model = "gpt-5.1-codex"
model_reasoning_effort = "medium"
"#,
)

View File

@@ -2,11 +2,15 @@ use anyhow::Result;
use anyhow::bail;
use app_test_support::McpProcess;
use app_test_support::to_response;
use app_test_support::ChatGptAuthFixture;
use app_test_support::write_chatgpt_auth;
use codex_app_server_protocol::Account;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginAccountResponse;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::GetAccountParams;
use codex_app_server_protocol::GetAccountResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginAccountResponse;
@@ -15,6 +19,7 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_login::login_with_api_key;
use codex_protocol::account::PlanType as AccountPlanType;
use pretty_assertions::assert_eq;
use serial_test::serial;
use std::path::Path;
@@ -25,22 +30,30 @@ use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Helper to create a minimal config.toml for the app server
fn create_config_toml(
codex_home: &Path,
forced_method: Option<&str>,
forced_workspace_id: Option<&str>,
) -> std::io::Result<()> {
#[derive(Default)]
struct CreateConfigTomlParams {
forced_method: Option<String>,
forced_workspace_id: Option<String>,
requires_openai_auth: Option<bool>,
}
fn create_config_toml(codex_home: &Path, params: CreateConfigTomlParams) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
let forced_line = if let Some(method) = forced_method {
let forced_line = if let Some(method) = params.forced_method {
format!("forced_login_method = \"{method}\"\n")
} else {
String::new()
};
let forced_workspace_line = if let Some(ws) = forced_workspace_id {
let forced_workspace_line = if let Some(ws) = params.forced_workspace_id {
format!("forced_chatgpt_workspace_id = \"{ws}\"\n")
} else {
String::new()
};
let requires_line = match params.requires_openai_auth {
Some(true) => "requires_openai_auth = true\n".to_string(),
Some(false) => String::new(),
None => String::new(),
};
let contents = format!(
r#"
model = "mock-model"
@@ -57,6 +70,7 @@ base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
{requires_line}
"#
);
std::fs::write(config_toml, contents)
@@ -65,7 +79,7 @@ stream_max_retries = 0
#[tokio::test]
async fn logout_account_removes_auth_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
create_config_toml(codex_home.path(), CreateConfigTomlParams::default())?;
login_with_api_key(
codex_home.path(),
@@ -104,27 +118,25 @@ async fn logout_account_removes_auth_and_notifies() -> Result<()> {
"auth.json should be deleted"
);
let status_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
refresh_token: Some(false),
let get_id = mcp
.send_get_account_request(GetAccountParams {
refresh_token: false,
})
.await?;
let status_resp: JSONRPCResponse = timeout(
let get_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(status_id)),
mcp.read_stream_until_response_message(RequestId::Integer(get_id)),
)
.await??;
let status: GetAuthStatusResponse = to_response(status_resp)?;
assert_eq!(status.auth_method, None);
assert_eq!(status.auth_token, None);
let account: GetAccountResponse = to_response(get_resp)?;
assert_eq!(account.account, None);
Ok(())
}
#[tokio::test]
async fn login_account_api_key_succeeds_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
create_config_toml(codex_home.path(), CreateConfigTomlParams::default())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -171,7 +183,13 @@ async fn login_account_api_key_succeeds_and_notifies() -> Result<()> {
#[tokio::test]
async fn login_account_api_key_rejected_when_forced_chatgpt() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), Some("chatgpt"), None)?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
forced_method: Some("chatgpt".to_string()),
..Default::default()
},
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -195,7 +213,13 @@ async fn login_account_api_key_rejected_when_forced_chatgpt() -> Result<()> {
#[tokio::test]
async fn login_account_chatgpt_rejected_when_forced_api() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), Some("api"), None)?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
forced_method: Some("api".to_string()),
..Default::default()
},
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -219,7 +243,7 @@ async fn login_account_chatgpt_rejected_when_forced_api() -> Result<()> {
#[serial(login_port)]
async fn login_account_chatgpt_start() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
create_config_toml(codex_home.path(), CreateConfigTomlParams::default())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -285,7 +309,13 @@ async fn login_account_chatgpt_start() -> Result<()> {
#[serial(login_port)]
async fn login_account_chatgpt_includes_forced_workspace_query_param() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, Some("ws-forced"))?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
forced_workspace_id: Some("ws-forced".to_string()),
..Default::default()
},
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -307,3 +337,156 @@ async fn login_account_chatgpt_includes_forced_workspace_query_param() -> Result
);
Ok(())
}
#[tokio::test]
async fn get_account_no_auth() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
..Default::default()
},
)?;
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let params = GetAccountParams {
refresh_token: false,
};
let request_id = mcp.send_get_account_request(params).await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let account: GetAccountResponse = to_response(resp)?;
assert_eq!(account.account, None, "expected no account");
assert_eq!(account.requires_openai_auth, true);
Ok(())
}
#[tokio::test]
async fn get_account_with_api_key() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
..Default::default()
},
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let req_id = mcp
.send_login_account_api_key_request("sk-test-key")
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let _login_ok = to_response::<LoginAccountResponse>(resp)?;
let params = GetAccountParams {
refresh_token: false,
};
let request_id = mcp.send_get_account_request(params).await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let received: GetAccountResponse = to_response(resp)?;
let expected = GetAccountResponse {
account: Some(Account::ApiKey {}),
requires_openai_auth: true,
};
assert_eq!(received, expected);
Ok(())
}
#[tokio::test]
async fn get_account_when_auth_not_required() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(false),
..Default::default()
},
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let params = GetAccountParams {
refresh_token: false,
};
let request_id = mcp.send_get_account_request(params).await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let received: GetAccountResponse = to_response(resp)?;
let expected = GetAccountResponse {
account: None,
requires_openai_auth: false,
};
assert_eq!(received, expected);
Ok(())
}
#[tokio::test]
async fn get_account_with_chatgpt() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
..Default::default()
},
)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("access-chatgpt")
.email("user@example.com")
.plan_type("pro"),
AuthCredentialsStoreMode::File,
)?;
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let params = GetAccountParams {
refresh_token: false,
};
let request_id = mcp.send_get_account_request(params).await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let received: GetAccountResponse = to_response(resp)?;
let expected = GetAccountResponse {
account: Some(Account::Chatgpt {
email: "user@example.com".to_string(),
plan_type: AccountPlanType::Pro,
}),
requires_openai_auth: true,
};
assert_eq!(received, expected);
Ok(())
}

View File

@@ -1,4 +1,6 @@
mod account;
mod model_list;
mod rate_limits;
mod thread_archive;
mod thread_list;
mod thread_resume;

View File

@@ -19,7 +19,7 @@ use tokio::time::timeout;
const DEFAULT_TIMEOUT: Duration = Duration::from_secs(10);
const INVALID_REQUEST_ERROR_CODE: i64 = -32600;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[tokio::test]
async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
@@ -46,9 +46,9 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
let expected_models = vec![
Model {
id: "gpt-5-codex".to_string(),
model: "gpt-5-codex".to_string(),
display_name: "gpt-5-codex".to_string(),
id: "gpt-5.1-codex".to_string(),
model: "gpt-5.1-codex".to_string(),
display_name: "gpt-5.1-codex".to_string(),
description: "Optimized for codex.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
@@ -69,15 +69,30 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
is_default: true,
},
Model {
id: "gpt-5".to_string(),
model: "gpt-5".to_string(),
display_name: "gpt-5".to_string(),
description: "Broad world knowledge with strong general reasoning.".to_string(),
id: "gpt-5.1-codex-mini".to_string(),
model: "gpt-5.1-codex-mini".to_string(),
display_name: "gpt-5.1-codex-mini".to_string(),
description: "Optimized for codex. Cheaper, faster, but less capable.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Minimal,
description: "Fastest responses with little reasoning".to_string(),
reasoning_effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task".to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems"
.to_string(),
},
],
default_reasoning_effort: ReasoningEffort::Medium,
is_default: false,
},
Model {
id: "gpt-5.1".to_string(),
model: "gpt-5.1".to_string(),
display_name: "gpt-5.1".to_string(),
description: "Broad world knowledge with strong general reasoning.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Low,
description: "Balances speed with some reasoning; useful for straightforward \
@@ -106,7 +121,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[tokio::test]
async fn list_models_pagination_works() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
@@ -132,7 +147,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(first_response)?;
assert_eq!(first_items.len(), 1);
assert_eq!(first_items[0].id, "gpt-5-codex");
assert_eq!(first_items[0].id, "gpt-5.1-codex");
let next_cursor = first_cursor.ok_or_else(|| anyhow!("cursor for second page"))?;
let second_request = mcp
@@ -154,12 +169,34 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(second_response)?;
assert_eq!(second_items.len(), 1);
assert_eq!(second_items[0].id, "gpt-5");
assert!(second_cursor.is_none());
assert_eq!(second_items[0].id, "gpt-5.1-codex-mini");
let third_cursor = second_cursor.ok_or_else(|| anyhow!("cursor for third page"))?;
let third_request = mcp
.send_list_models_request(ModelListParams {
limit: Some(1),
cursor: Some(third_cursor.clone()),
})
.await?;
let third_response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(third_request)),
)
.await??;
let ModelListResponse {
data: third_items,
next_cursor: third_cursor,
} = to_response::<ModelListResponse>(third_response)?;
assert_eq!(third_items.len(), 1);
assert_eq!(third_items[0].id, "gpt-5.1");
assert!(third_cursor.is_none());
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[tokio::test]
async fn list_models_rejects_invalid_cursor() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
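The pagination tests above exercise the usual cursor contract: each page returns up to `limit` items plus a cursor for the next page, and the last page returns no cursor. A self-contained sketch of that contract (the model ids match the expected list above; the pager itself is illustrative):

```rust
// Return one page of `items` starting at `cursor`, plus the next cursor.
fn page<'a>(items: &'a [&'a str], limit: usize, cursor: Option<usize>) -> (&'a [&'a str], Option<usize>) {
    let start = cursor.unwrap_or(0);
    let end = (start + limit).min(items.len());
    let next = if end < items.len() { Some(end) } else { None };
    (&items[start..end], next)
}

fn main() {
    let models = ["gpt-5.1-codex", "gpt-5.1-codex-mini", "gpt-5.1"];
    let (first, c1) = page(&models, 1, None);
    assert_eq!(first, ["gpt-5.1-codex"]);
    let (second, c2) = page(&models, 1, c1);
    assert_eq!(second, ["gpt-5.1-codex-mini"]);
    let (third, c3) = page(&models, 1, c2);
    assert_eq!(third, ["gpt-5.1"]);
    assert!(c3.is_none(), "last page must not return a cursor");
}
```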

View File

@@ -26,7 +26,7 @@ use wiremock::matchers::path;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const INVALID_REQUEST_ERROR_CODE: i64 = -32600;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[tokio::test]
async fn get_account_rate_limits_requires_auth() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -51,7 +51,7 @@ async fn get_account_rate_limits_requires_auth() -> Result<()> {
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[tokio::test]
async fn get_account_rate_limits_requires_chatgpt_auth() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -78,7 +78,7 @@ async fn get_account_rate_limits_requires_chatgpt_auth() -> Result<()> {
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[tokio::test]
async fn get_account_rate_limits_returns_snapshot() -> Result<()> {
let codex_home = TempDir::new()?;
write_chatgpt_auth(

View File

@@ -6,10 +6,8 @@ use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadListResponse;
use serde_json::json;
use tempfile::TempDir;
use tokio::time::timeout;
use uuid::Uuid;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
@@ -102,6 +100,11 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
next_cursor: cursor1,
} = to_response::<ThreadListResponse>(page1_resp)?;
assert_eq!(data1.len(), 2);
for thread in &data1 {
assert_eq!(thread.preview, "Hello");
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.created_at > 0);
}
let cursor1 = cursor1.expect("expected nextCursor on first page");
// Page 2: with cursor → expect next_cursor None when no more results.
@@ -122,6 +125,11 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
next_cursor: cursor2,
} = to_response::<ThreadListResponse>(page2_resp)?;
assert!(data2.len() <= 2);
for thread in &data2 {
assert_eq!(thread.preview, "Hello");
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.created_at > 0);
}
assert_eq!(cursor2, None, "expected nextCursor to be null on last page");
Ok(())
@@ -140,46 +148,13 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
"X",
Some("mock_provider"),
)?; // mock_provider
// one with a different provider
let uuid = Uuid::new_v4();
let dir = codex_home
.path()
.join("sessions")
.join("2025")
.join("01")
.join("02");
std::fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-2025-01-02T11-00-00-{uuid}.jsonl"));
let lines = [
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type": "session_meta",
"payload": {
"id": uuid,
"timestamp": "2025-01-02T11:00:00Z",
"cwd": "/",
"originator": "codex",
"cli_version": "0.0.0",
"instructions": null,
"source": "vscode",
"model_provider": "other_provider"
}
})
.to_string(),
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type":"response_item",
"payload": {"type":"message","role":"user","content":[{"type":"input_text","text":"X"}]}
})
.to_string(),
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type":"event_msg",
"payload": {"type":"user_message","message":"X","kind":"plain"}
})
.to_string(),
];
std::fs::write(file_path, lines.join("\n") + "\n")?;
let _b = create_fake_rollout(
codex_home.path(),
"2025-01-02T11-00-00",
"2025-01-02T11:00:00Z",
"X",
Some("other_provider"),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -200,6 +175,11 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
let ThreadListResponse { data, next_cursor } = to_response::<ThreadListResponse>(resp)?;
assert_eq!(data.len(), 1);
assert_eq!(next_cursor, None);
let thread = &data[0];
assert_eq!(thread.preview, "X");
assert_eq!(thread.model_provider, "other_provider");
let expected_ts = chrono::DateTime::parse_from_rfc3339("2025-01-02T11:00:00Z")?.timestamp();
assert_eq!(thread.created_at, expected_ts);
Ok(())
}

View File

@@ -8,13 +8,15 @@ use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_resume_returns_existing_thread() -> Result<()> {
async fn thread_resume_returns_original_thread() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -25,7 +27,7 @@ async fn thread_resume_returns_existing_thread() -> Result<()> {
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5-codex".to_string()),
model: Some("gpt-5.1-codex".to_string()),
..Default::default()
})
.await?;
@@ -40,6 +42,7 @@ async fn thread_resume_returns_existing_thread() -> Result<()> {
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id.clone(),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
@@ -49,7 +52,106 @@ async fn thread_resume_returns_existing_thread() -> Result<()> {
.await??;
let ThreadResumeResponse { thread: resumed } =
to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(resumed.id, thread.id);
assert_eq!(resumed, thread);
Ok(())
}
#[tokio::test]
async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.1-codex".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
let thread_path = thread.path.clone();
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: "not-a-valid-thread-id".to_string(),
path: Some(thread_path),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread: resumed } =
to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(resumed, thread);
Ok(())
}
#[tokio::test]
async fn thread_resume_supports_history_and_overrides() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.1-codex".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
let history_text = "Hello from history";
let history = vec![ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: history_text.to_string(),
}],
}];
// Resume with explicit history and override the model.
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id,
history: Some(history),
model: Some("mock-model".to_string()),
model_provider: Some("mock_provider".to_string()),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread: resumed } =
to_response::<ThreadResumeResponse>(resume_resp)?;
assert!(!resumed.id.is_empty());
assert_eq!(resumed.model_provider, "mock_provider");
assert_eq!(resumed.preview, history_text);
Ok(())
}

View File

@@ -29,7 +29,7 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
// Start a v2 thread with an explicit model override.
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5".to_string()),
model: Some("gpt-5.1".to_string()),
..Default::default()
})
.await?;
@@ -42,6 +42,15 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(resp)?;
assert!(!thread.id.is_empty(), "thread id should not be empty");
assert!(
thread.preview.is_empty(),
"new threads should start with an empty preview"
);
assert_eq!(thread.model_provider, "mock_provider");
assert!(
thread.created_at > 0,
"created_at should be a positive UNIX timestamp"
);
// A corresponding thread/started notification should arrive.
let notif: JSONRPCNotification = timeout(
@@ -51,7 +60,7 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
.await??;
let started: ThreadStartedNotification =
serde_json::from_value(notif.params.expect("params must be present"))?;
assert_eq!(started.thread.id, thread.id);
assert_eq!(started.thread, thread);
Ok(())
}

View File

@@ -5,10 +5,13 @@ use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_chat_completions_server_unchecked;
use app_test_support::create_shell_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::CommandExecutionStatus;
use codex_app_server_protocol::ItemStartedNotification;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
@@ -17,9 +20,6 @@ use codex_app_server_protocol::TurnStartedNotification;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::path::Path;
@@ -235,7 +235,7 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
// turn/start — expect ExecCommandApproval request from server
// turn/start — expect CommandExecutionRequestApproval request from server
let first_turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
@@ -258,16 +258,10 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ExecCommandApproval { request_id, params } = server_req else {
panic!("expected ExecCommandApproval request");
let ServerRequest::CommandExecutionRequestApproval { request_id, params } = server_req else {
panic!("expected CommandExecutionRequestApproval request");
};
assert_eq!(params.call_id, "call1");
assert_eq!(
params.parsed_cmd,
vec![ParsedCommand::Unknown {
cmd: "python3 -c 'print(42)'".to_string()
}]
);
assert_eq!(params.item_id, "call1");
// Approve and wait for task completion
mcp.send_response(
@@ -302,7 +296,7 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
)
.await??;
// Ensure we do NOT receive an ExecCommandApproval request before task completes
// Ensure we do NOT receive a CommandExecutionRequestApproval request before task completes
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
@@ -314,8 +308,6 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
#[tokio::test]
async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
// When returning Result from a test, pass an Ok(()) to the skip macro
// so the early return type matches. The no-arg form returns unit.
skip_if_no_network!(Ok(()));
let tmp = TempDir::new()?;
@@ -424,29 +416,35 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
)
.await??;
let exec_begin_notification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/exec_command_begin"),
)
let command_exec_item = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let item_started_notification = mcp
.read_stream_until_notification_message("item/started")
.await?;
let params = item_started_notification
.params
.clone()
.expect("item/started params");
let item_started: ItemStartedNotification =
serde_json::from_value(params).expect("deserialize item/started notification");
if matches!(item_started.item, ThreadItem::CommandExecution { .. }) {
return Ok::<ThreadItem, anyhow::Error>(item_started.item);
}
}
})
.await??;
let params = exec_begin_notification
.params
.clone()
.expect("exec_command_begin params");
let event: Event = serde_json::from_value(params).expect("deserialize exec begin event");
let exec_begin = match event.msg {
EventMsg::ExecCommandBegin(exec_begin) => exec_begin,
other => panic!("expected ExecCommandBegin event, got {other:?}"),
let ThreadItem::CommandExecution {
cwd,
command,
status,
..
} = command_exec_item
else {
unreachable!("loop ensures we break on command execution items");
};
assert_eq!(exec_begin.cwd, second_cwd);
assert_eq!(
exec_begin.command,
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string()
]
);
assert_eq!(cwd, second_cwd);
assert_eq!(command, "bash -lc 'echo second turn'");
assert_eq!(status, CommandExecutionStatus::InProgress);
timeout(
DEFAULT_READ_TIMEOUT,

View File

@@ -288,7 +288,7 @@ pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApp
path,
ApplyPatchFileChange::Update {
unified_diff,
move_path: move_path.map(|p| cwd.join(p)),
move_path: move_path.map(|p| effective_cwd.join(p)),
new_content: contents,
},
);
@@ -1603,6 +1603,53 @@ g
);
}
#[test]
fn test_apply_patch_resolves_move_path_with_effective_cwd() {
let session_dir = tempdir().unwrap();
let worktree_rel = "alt";
let worktree_dir = session_dir.path().join(worktree_rel);
fs::create_dir_all(&worktree_dir).unwrap();
let source_name = "old.txt";
let dest_name = "renamed.txt";
let source_path = worktree_dir.join(source_name);
fs::write(&source_path, "before\n").unwrap();
let patch = wrap_patch(&format!(
r#"*** Update File: {source_name}
*** Move to: {dest_name}
@@
-before
+after"#
));
let shell_script = format!("cd {worktree_rel} && apply_patch <<'PATCH'\n{patch}\nPATCH");
let argv = vec!["bash".into(), "-lc".into(), shell_script];
let result = maybe_parse_apply_patch_verified(&argv, session_dir.path());
let action = match result {
MaybeApplyPatchVerified::Body(action) => action,
other => panic!("expected verified body, got {other:?}"),
};
assert_eq!(action.cwd, worktree_dir);
let change = action
.changes()
.get(&worktree_dir.join(source_name))
.expect("source file change present");
match change {
ApplyPatchFileChange::Update { move_path, .. } => {
assert_eq!(
move_path.as_deref(),
Some(worktree_dir.join(dest_name).as_path())
);
}
other => panic!("expected update change, got {other:?}"),
}
}
#[test]
fn test_apply_patch_fails_on_write_error() {
let dir = tempdir().unwrap();

View File

@@ -11,32 +11,7 @@ const LINUX_SANDBOX_ARG0: &str = "codex-linux-sandbox";
const APPLY_PATCH_ARG0: &str = "apply_patch";
const MISSPELLED_APPLY_PATCH_ARG0: &str = "applypatch";
/// While we want to deploy the Codex CLI as a single executable for simplicity,
/// we also want to expose some of its functionality as distinct CLIs, so we use
/// the "arg0 trick" to determine which CLI to dispatch. This effectively allows
/// us to simulate deploying multiple executables as a single binary on Mac and
/// Linux (but not Windows).
///
/// When the current executable is invoked through the hard-link or alias named
/// `codex-linux-sandbox` we *directly* execute
/// [`codex_linux_sandbox::run_main`] (which never returns). Otherwise we:
///
/// 1. Load `.env` values from `~/.codex/.env` before creating any threads.
/// 2. Construct a Tokio multi-thread runtime.
/// 3. Derive the path to the current executable (so children can re-invoke the
/// sandbox) when running on Linux.
/// 4. Execute the provided async `main_fn` inside that runtime, forwarding any
/// error. Note that `main_fn` receives `codex_linux_sandbox_exe:
/// Option<PathBuf>`, as an argument, which is generally needed as part of
/// constructing [`codex_core::config::Config`].
///
/// This function should be used to wrap any `main()` function in binary crates
/// in this workspace that depends on these helper CLIs.
pub fn arg0_dispatch_or_else<F, Fut>(main_fn: F) -> anyhow::Result<()>
where
F: FnOnce(Option<PathBuf>) -> Fut,
Fut: Future<Output = anyhow::Result<()>>,
{
pub fn arg0_dispatch() -> Option<TempDir> {
// Determine if we were invoked via the special alias.
let mut args = std::env::args_os();
let argv0 = args.next().unwrap_or_default();
@@ -76,10 +51,7 @@ where
// before creating any threads/the Tokio runtime.
load_dotenv();
// Retain the TempDir so it exists for the lifetime of the invocation of
// this executable. Admittedly, we could invoke `keep()` on it, but it
// would be nice to avoid leaving temporary directories behind, if possible.
let _path_entry = match prepend_path_entry_for_apply_patch() {
match prepend_path_entry_for_codex_aliases() {
Ok(path_entry) => Some(path_entry),
Err(err) => {
// It is possible that Codex will proceed successfully even if
@@ -87,7 +59,39 @@ where
eprintln!("WARNING: proceeding, even though we could not update PATH: {err}");
None
}
};
}
}
/// While we want to deploy the Codex CLI as a single executable for simplicity,
/// we also want to expose some of its functionality as distinct CLIs, so we use
/// the "arg0 trick" to determine which CLI to dispatch. This effectively allows
/// us to simulate deploying multiple executables as a single binary on Mac and
/// Linux (but not Windows).
///
/// When the current executable is invoked through the hard-link or alias named
/// `codex-linux-sandbox` we *directly* execute
/// [`codex_linux_sandbox::run_main`] (which never returns). Otherwise we:
///
/// 1. Load `.env` values from `~/.codex/.env` before creating any threads.
/// 2. Construct a Tokio multi-thread runtime.
/// 3. Derive the path to the current executable (so children can re-invoke the
/// sandbox) when running on Linux.
/// 4. Execute the provided async `main_fn` inside that runtime, forwarding any
/// error. Note that `main_fn` receives `codex_linux_sandbox_exe:
/// Option<PathBuf>`, as an argument, which is generally needed as part of
/// constructing [`codex_core::config::Config`].
///
/// This function should be used to wrap any `main()` function in binary crates
/// in this workspace that depends on these helper CLIs.
pub fn arg0_dispatch_or_else<F, Fut>(main_fn: F) -> anyhow::Result<()>
where
F: FnOnce(Option<PathBuf>) -> Fut,
Fut: Future<Output = anyhow::Result<()>>,
{
// Retain the TempDir so it exists for the lifetime of the invocation of
// this executable. Admittedly, we could invoke `keep()` on it, but it
// would be nice to avoid leaving temporary directories behind, if possible.
let _path_entry = arg0_dispatch();
// Regular invocation create a Tokio runtime and execute the provided
// async entry-point.
@@ -144,11 +148,16 @@ where
///
/// IMPORTANT: This function modifies the PATH environment variable, so it MUST
/// be called before multiple threads are spawned.
fn prepend_path_entry_for_apply_patch() -> std::io::Result<TempDir> {
pub fn prepend_path_entry_for_codex_aliases() -> std::io::Result<TempDir> {
let temp_dir = TempDir::new()?;
let path = temp_dir.path();
for filename in &[APPLY_PATCH_ARG0, MISSPELLED_APPLY_PATCH_ARG0] {
for filename in &[
APPLY_PATCH_ARG0,
MISSPELLED_APPLY_PATCH_ARG0,
#[cfg(target_os = "linux")]
LINUX_SANDBOX_ARG0,
] {
let exe = std::env::current_exe()?;
#[cfg(unix)]

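A simplified sketch of the arg0 trick the doc comment above describes; the match arms mirror the `*_ARG0` constants in this file, while the arm bodies are placeholders rather than the real dispatch logic:

```rust
use std::path::Path;

fn main() {
    // argv[0] reveals which alias the binary was invoked through, even though
    // all aliases are hard links to the same executable.
    let argv0 = std::env::args_os().next().unwrap_or_default();
    let name = Path::new(&argv0)
        .file_name()
        .and_then(|s| s.to_str())
        .unwrap_or("");
    match name {
        "codex-linux-sandbox" => { /* would run the sandbox entry point (never returns) */ }
        "apply_patch" | "applypatch" => { /* would run the apply_patch CLI */ }
        _ => { /* fall through to the regular Codex CLI */ }
    }
}
```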
View File

@@ -26,17 +26,19 @@ codex-cloud-tasks = { path = "../cloud-tasks" }
codex-common = { workspace = true, features = ["cli"] }
codex-core = { workspace = true }
codex-exec = { workspace = true }
codex-execpolicy2 = { workspace = true }
codex-login = { workspace = true }
codex-mcp-server = { workspace = true }
codex-process-hardening = { workspace = true }
codex-protocol = { workspace = true }
codex-protocol-ts = { workspace = true }
codex-responses-api-proxy = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-stdio-to-uds = { workspace = true }
codex-tui = { workspace = true }
ctor = { workspace = true }
libc = { workspace = true }
owo-colors = { workspace = true }
regex-lite = { workspace = true }
serde_json = { workspace = true }
supports-color = { workspace = true }
toml = { workspace = true }
@@ -47,6 +49,7 @@ tokio = { workspace = true, features = [
"rt-multi-thread",
"signal",
] }
tracing = { workspace = true }
[target.'cfg(target_os = "windows")'.dependencies]
codex_windows_sandbox = { package = "codex-windows-sandbox", path = "../windows-sandbox-rs" }

View File

@@ -1,3 +1,8 @@
#[cfg(target_os = "macos")]
mod pid_tracker;
#[cfg(target_os = "macos")]
mod seatbelt;
use std::path::PathBuf;
use codex_common::CliConfigOverrides;
@@ -5,6 +10,7 @@ use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::exec_env::create_env;
use codex_core::landlock::spawn_command_under_linux_sandbox;
#[cfg(target_os = "macos")]
use codex_core::seatbelt::spawn_command_under_seatbelt;
use codex_core::spawn::StdioPolicy;
use codex_protocol::config_types::SandboxMode;
@@ -14,12 +20,17 @@ use crate::SeatbeltCommand;
use crate::WindowsCommand;
use crate::exit_status::handle_exit_status;
#[cfg(target_os = "macos")]
use seatbelt::DenialLogger;
#[cfg(target_os = "macos")]
pub async fn run_command_under_seatbelt(
command: SeatbeltCommand,
codex_linux_sandbox_exe: Option<PathBuf>,
) -> anyhow::Result<()> {
let SeatbeltCommand {
full_auto,
log_denials,
config_overrides,
command,
} = command;
@@ -29,10 +40,19 @@ pub async fn run_command_under_seatbelt(
config_overrides,
codex_linux_sandbox_exe,
SandboxType::Seatbelt,
log_denials,
)
.await
}
#[cfg(not(target_os = "macos"))]
pub async fn run_command_under_seatbelt(
_command: SeatbeltCommand,
_codex_linux_sandbox_exe: Option<PathBuf>,
) -> anyhow::Result<()> {
anyhow::bail!("Seatbelt sandbox is only available on macOS");
}
pub async fn run_command_under_landlock(
command: LandlockCommand,
codex_linux_sandbox_exe: Option<PathBuf>,
@@ -48,6 +68,7 @@ pub async fn run_command_under_landlock(
config_overrides,
codex_linux_sandbox_exe,
SandboxType::Landlock,
false,
)
.await
}
@@ -67,11 +88,13 @@ pub async fn run_command_under_windows(
config_overrides,
codex_linux_sandbox_exe,
SandboxType::Windows,
false,
)
.await
}
enum SandboxType {
#[cfg(target_os = "macos")]
Seatbelt,
Landlock,
Windows,
@@ -83,6 +106,7 @@ async fn run_command_under_sandbox(
config_overrides: CliConfigOverrides,
codex_linux_sandbox_exe: Option<PathBuf>,
sandbox_type: SandboxType,
log_denials: bool,
) -> anyhow::Result<()> {
let sandbox_mode = create_sandbox_mode(full_auto);
let config = Config::load_with_cli_overrides(
@@ -125,15 +149,17 @@ async fn run_command_under_sandbox(
let env_map = env.clone();
let command_vec = command.clone();
let base_dir = config.codex_home.clone();
// Preflight audit is invoked elsewhere at the appropriate times.
let res = tokio::task::spawn_blocking(move || {
run_windows_sandbox_capture(
policy_str,
&sandbox_cwd,
base_dir.as_path(),
command_vec,
&cwd_clone,
env_map,
None,
Some(base_dir.as_path()),
)
})
.await;
@@ -167,7 +193,13 @@ async fn run_command_under_sandbox(
}
}
#[cfg(target_os = "macos")]
let mut denial_logger = log_denials.then(DenialLogger::new).flatten();
#[cfg(not(target_os = "macos"))]
let _ = log_denials;
let mut child = match sandbox_type {
#[cfg(target_os = "macos")]
SandboxType::Seatbelt => {
spawn_command_under_seatbelt(
command,
@@ -199,8 +231,27 @@ async fn run_command_under_sandbox(
unreachable!("Windows sandbox should have been handled above");
}
};
#[cfg(target_os = "macos")]
if let Some(denial_logger) = &mut denial_logger {
denial_logger.on_child_spawn(&child);
}
let status = child.wait().await?;
#[cfg(target_os = "macos")]
if let Some(denial_logger) = denial_logger {
let denials = denial_logger.finish().await;
eprintln!("\n=== Sandbox denials ===");
if denials.is_empty() {
eprintln!("None found.");
} else {
for seatbelt::SandboxDenial { name, capability } in denials {
eprintln!("({name}) {capability}");
}
}
}
handle_exit_status(status);
}

View File

@@ -0,0 +1,372 @@
use std::collections::HashSet;
use tokio::task::JoinHandle;
use tracing::warn;
/// Tracks the (recursive) descendants of a process by using `kqueue` to watch for fork events, and
/// `proc_listchildpids` to list the children of a process.
pub(crate) struct PidTracker {
kq: libc::c_int,
handle: JoinHandle<HashSet<i32>>,
}
impl PidTracker {
pub(crate) fn new(root_pid: i32) -> Option<Self> {
if root_pid <= 0 {
return None;
}
let kq = unsafe { libc::kqueue() };
let handle = tokio::task::spawn_blocking(move || track_descendants(kq, root_pid));
Some(Self { kq, handle })
}
pub(crate) async fn stop(self) -> HashSet<i32> {
trigger_stop_event(self.kq);
self.handle.await.unwrap_or_default()
}
}
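// A minimal usage sketch (not part of this change; the function name is
// hypothetical): watch a freshly spawned tokio child, wait for it to exit,
// then collect every descendant pid the tracker observed.
async fn collect_descendants(child: &mut tokio::process::Child) -> HashSet<i32> {
    let Some(tracker) = child.id().and_then(|pid| PidTracker::new(pid as i32)) else {
        return HashSet::new();
    };
    let _ = child.wait().await;
    tracker.stop().await
}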
unsafe extern "C" {
fn proc_listchildpids(
ppid: libc::c_int,
buffer: *mut libc::c_void,
buffersize: libc::c_int,
) -> libc::c_int;
}
/// Wrap proc_listchildpids.
fn list_child_pids(parent: i32) -> Vec<i32> {
unsafe {
let mut capacity: usize = 16;
loop {
let mut buf: Vec<i32> = vec![0; capacity];
let count = proc_listchildpids(
parent as libc::c_int,
buf.as_mut_ptr() as *mut libc::c_void,
(buf.len() * std::mem::size_of::<i32>()) as libc::c_int,
);
if count <= 0 {
return Vec::new();
}
let returned = count as usize;
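// A partially filled buffer means every child fit; a completely full
// buffer may have been truncated, so grow it and query again.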
if returned < capacity {
buf.truncate(returned);
return buf;
}
capacity = capacity.saturating_mul(2).max(returned + 16);
}
}
}
fn pid_is_alive(pid: i32) -> bool {
if pid <= 0 {
return false;
}
let res = unsafe { libc::kill(pid as libc::pid_t, 0) };
if res == 0 {
true
} else {
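// kill(pid, 0) failing with EPERM still means the process exists; we just
// lack permission to signal it. Any other errno (e.g. ESRCH) means it is gone.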
matches!(
std::io::Error::last_os_error().raw_os_error(),
Some(libc::EPERM)
)
}
}
enum WatchPidError {
ProcessGone,
Other(std::io::Error),
}
/// Add `pid` to the watch list in `kq`.
fn watch_pid(kq: libc::c_int, pid: i32) -> Result<(), WatchPidError> {
if pid <= 0 {
return Err(WatchPidError::ProcessGone);
}
let kev = libc::kevent {
ident: pid as libc::uintptr_t,
filter: libc::EVFILT_PROC,
flags: libc::EV_ADD | libc::EV_CLEAR,
fflags: libc::NOTE_FORK | libc::NOTE_EXEC | libc::NOTE_EXIT,
data: 0,
udata: std::ptr::null_mut(),
};
let res = unsafe { libc::kevent(kq, &kev, 1, std::ptr::null_mut(), 0, std::ptr::null()) };
if res < 0 {
let err = std::io::Error::last_os_error();
if err.raw_os_error() == Some(libc::ESRCH) {
Err(WatchPidError::ProcessGone)
} else {
Err(WatchPidError::Other(err))
}
} else {
Ok(())
}
}
fn watch_children(
kq: libc::c_int,
parent: i32,
seen: &mut HashSet<i32>,
active: &mut HashSet<i32>,
) {
for child_pid in list_child_pids(parent) {
add_pid_watch(kq, child_pid, seen, active);
}
}
/// Watch `pid` and its children, updating `seen` and `active` sets.
fn add_pid_watch(kq: libc::c_int, pid: i32, seen: &mut HashSet<i32>, active: &mut HashSet<i32>) {
if pid <= 0 {
return;
}
let newly_seen = seen.insert(pid);
let mut should_recurse = newly_seen;
if active.insert(pid) {
match watch_pid(kq, pid) {
Ok(()) => {
should_recurse = true;
}
Err(WatchPidError::ProcessGone) => {
active.remove(&pid);
return;
}
Err(WatchPidError::Other(err)) => {
warn!("failed to watch pid {pid}: {err}");
active.remove(&pid);
return;
}
}
}
if should_recurse {
watch_children(kq, pid, seen, active);
}
}
const STOP_IDENT: libc::uintptr_t = 1;
fn register_stop_event(kq: libc::c_int) -> bool {
let kev = libc::kevent {
ident: STOP_IDENT,
filter: libc::EVFILT_USER,
flags: libc::EV_ADD | libc::EV_CLEAR,
fflags: 0,
data: 0,
udata: std::ptr::null_mut(),
};
let res = unsafe { libc::kevent(kq, &kev, 1, std::ptr::null_mut(), 0, std::ptr::null()) };
res >= 0
}
fn trigger_stop_event(kq: libc::c_int) {
if kq < 0 {
return;
}
let kev = libc::kevent {
ident: STOP_IDENT,
filter: libc::EVFILT_USER,
flags: 0,
fflags: libc::NOTE_TRIGGER,
data: 0,
udata: std::ptr::null_mut(),
};
let _ = unsafe { libc::kevent(kq, &kev, 1, std::ptr::null_mut(), 0, std::ptr::null()) };
}
/// Put all of the above together to track all the descendants of `root_pid`.
fn track_descendants(kq: libc::c_int, root_pid: i32) -> HashSet<i32> {
if kq < 0 {
let mut seen = HashSet::new();
seen.insert(root_pid);
return seen;
}
if !register_stop_event(kq) {
let mut seen = HashSet::new();
seen.insert(root_pid);
let _ = unsafe { libc::close(kq) };
return seen;
}
let mut seen: HashSet<i32> = HashSet::new();
let mut active: HashSet<i32> = HashSet::new();
add_pid_watch(kq, root_pid, &mut seen, &mut active);
const EVENTS_CAP: usize = 32;
let mut events: [libc::kevent; EVENTS_CAP] =
unsafe { std::mem::MaybeUninit::zeroed().assume_init() };
let mut stop_requested = false;
loop {
if active.is_empty() {
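// Every watch was dropped (e.g. all watched pids exited). If the root is
// still alive, re-register it so tracking continues; otherwise we are done.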
if !pid_is_alive(root_pid) {
break;
}
add_pid_watch(kq, root_pid, &mut seen, &mut active);
if active.is_empty() {
continue;
}
}
let nev = unsafe {
libc::kevent(
kq,
std::ptr::null::<libc::kevent>(),
0,
events.as_mut_ptr(),
EVENTS_CAP as libc::c_int,
std::ptr::null(),
)
};
if nev < 0 {
let err = std::io::Error::last_os_error();
if err.kind() == std::io::ErrorKind::Interrupted {
continue;
}
break;
}
if nev == 0 {
continue;
}
for ev in events.iter().take(nev as usize) {
let pid = ev.ident as i32;
if ev.filter == libc::EVFILT_USER && ev.ident == STOP_IDENT {
stop_requested = true;
break;
}
if (ev.flags & libc::EV_ERROR) != 0 {
if ev.data == libc::ESRCH as isize {
active.remove(&pid);
}
continue;
}
if (ev.fflags & libc::NOTE_FORK) != 0 {
watch_children(kq, pid, &mut seen, &mut active);
}
if (ev.fflags & libc::NOTE_EXIT) != 0 {
active.remove(&pid);
}
}
if stop_requested {
break;
}
}
let _ = unsafe { libc::close(kq) };
seen
}
#[cfg(test)]
mod tests {
use super::*;
use std::process::Command;
use std::process::Stdio;
use std::time::Duration;
#[test]
fn pid_is_alive_detects_current_process() {
let pid = std::process::id() as i32;
assert!(pid_is_alive(pid));
}
#[cfg(target_os = "macos")]
#[test]
fn list_child_pids_includes_spawned_child() {
let mut child = Command::new("/bin/sleep")
.arg("5")
.stdin(Stdio::null())
.spawn()
.expect("failed to spawn child process");
let child_pid = child.id() as i32;
let parent_pid = std::process::id() as i32;
let mut found = false;
for _ in 0..100 {
if list_child_pids(parent_pid).contains(&child_pid) {
found = true;
break;
}
std::thread::sleep(Duration::from_millis(10));
}
let _ = child.kill();
let _ = child.wait();
assert!(found, "expected to find child pid {child_pid} in list");
}
#[cfg(target_os = "macos")]
#[tokio::test]
async fn pid_tracker_collects_spawned_children() {
let tracker = PidTracker::new(std::process::id() as i32).expect("failed to create tracker");
let mut child = Command::new("/bin/sleep")
.arg("0.1")
.stdin(Stdio::null())
.spawn()
.expect("failed to spawn child process");
let child_pid = child.id() as i32;
let parent_pid = std::process::id() as i32;
let _ = child.wait();
let seen = tracker.stop().await;
assert!(
seen.contains(&parent_pid),
"expected tracker to include parent pid {parent_pid}"
);
assert!(
seen.contains(&child_pid),
"expected tracker to include child pid {child_pid}"
);
}
#[cfg(target_os = "macos")]
#[tokio::test]
async fn pid_tracker_collects_bash_subshell_descendants() {
let tracker = PidTracker::new(std::process::id() as i32).expect("failed to create tracker");
let child = Command::new("/bin/bash")
.arg("-c")
.arg("(sleep 0.1 & echo $!; wait)")
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null())
.spawn()
.expect("failed to spawn bash");
let output = child.wait_with_output().unwrap().stdout;
let subshell_pid = String::from_utf8_lossy(&output)
.trim()
.parse::<i32>()
.expect("failed to parse subshell pid");
let seen = tracker.stop().await;
assert!(
seen.contains(&subshell_pid),
"expected tracker to include subshell pid {subshell_pid}"
);
}
}

View File

@@ -0,0 +1,114 @@
use std::collections::HashSet;
use tokio::io::AsyncBufReadExt;
use tokio::process::Child;
use tokio::task::JoinHandle;
use super::pid_tracker::PidTracker;
pub struct SandboxDenial {
pub name: String,
pub capability: String,
}
pub struct DenialLogger {
log_stream: Child,
pid_tracker: Option<PidTracker>,
log_reader: Option<JoinHandle<Vec<u8>>>,
}
impl DenialLogger {
pub(crate) fn new() -> Option<Self> {
let mut log_stream = start_log_stream()?;
let stdout = log_stream.stdout.take()?;
let log_reader = tokio::spawn(async move {
let mut reader = tokio::io::BufReader::new(stdout);
let mut logs = Vec::new();
let mut chunk = Vec::new();
loop {
match reader.read_until(b'\n', &mut chunk).await {
Ok(0) | Err(_) => break,
Ok(_) => {
logs.extend_from_slice(&chunk);
chunk.clear();
}
}
}
logs
});
Some(Self {
log_stream,
pid_tracker: None,
log_reader: Some(log_reader),
})
}
pub(crate) fn on_child_spawn(&mut self, child: &Child) {
if let Some(root_pid) = child.id() {
self.pid_tracker = PidTracker::new(root_pid as i32);
}
}
pub(crate) async fn finish(mut self) -> Vec<SandboxDenial> {
let pid_set = match self.pid_tracker {
Some(tracker) => tracker.stop().await,
None => Default::default(),
};
if pid_set.is_empty() {
return Vec::new();
}
let _ = self.log_stream.kill().await;
let _ = self.log_stream.wait().await;
let logs_bytes = match self.log_reader.take() {
Some(handle) => handle.await.unwrap_or_default(),
None => Vec::new(),
};
let logs = String::from_utf8_lossy(&logs_bytes);
let mut seen: HashSet<(String, String)> = HashSet::new();
let mut denials: Vec<SandboxDenial> = Vec::new();
for line in logs.lines() {
if let Ok(json) = serde_json::from_str::<serde_json::Value>(line)
&& let Some(msg) = json.get("eventMessage").and_then(|v| v.as_str())
&& let Some((pid, name, capability)) = parse_message(msg)
&& pid_set.contains(&pid)
&& seen.insert((name.clone(), capability.clone()))
{
denials.push(SandboxDenial { name, capability });
}
}
denials
}
}
fn start_log_stream() -> Option<Child> {
use std::process::Stdio;
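// Matches kernel Sandbox messages (senderImagePath contains "/Sandbox")
// plus the userspace com.apple.sandbox.reporting subsystem.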
const PREDICATE: &str = r#"(((processID == 0) AND (senderImagePath CONTAINS "/Sandbox")) OR (subsystem == "com.apple.sandbox.reporting"))"#;
tokio::process::Command::new("log")
.args(["stream", "--style", "ndjson", "--predicate", PREDICATE])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null())
.kill_on_drop(true)
.spawn()
.ok()
}
fn parse_message(msg: &str) -> Option<(i32, String, String)> {
// Example message:
// Sandbox: processname(1234) deny(1) capability-name args...
static RE: std::sync::OnceLock<regex_lite::Regex> = std::sync::OnceLock::new();
let re = RE.get_or_init(|| {
#[expect(clippy::unwrap_used)]
regex_lite::Regex::new(r"^Sandbox:\s*(.+?)\((\d+)\)\s+deny\(.*?\)\s*(.+)$").unwrap()
});
let (_, [name, pid_str, capability]) = re.captures(msg)?.extract();
let pid = pid_str.trim().parse::<i32>().ok()?;
Some((pid, name.to_string(), capability.to_string()))
}
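// A sanity-check sketch (not part of this change) showing what
// `parse_message` extracts from a typical denial line.
#[cfg(test)]
mod parse_message_example {
    use super::parse_message;

    #[test]
    fn extracts_pid_name_and_capability() {
        let msg = "Sandbox: curl(1234) deny(1) network-outbound";
        let (pid, name, capability) = parse_message(msg).expect("should match");
        assert_eq!(pid, 1234);
        assert_eq!(name, "curl");
        assert_eq!(capability, "network-outbound");
    }
}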

View File

@@ -11,6 +11,10 @@ pub struct SeatbeltCommand {
#[arg(long = "full-auto", default_value_t = false)]
pub full_auto: bool,
/// While the command runs, capture macOS sandbox denials via `log stream` and print them after exit
#[arg(long = "log-denials", default_value_t = false)]
pub log_denials: bool,
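// Hypothetical invocation (the `codex debug seatbelt` path is an assumption;
// only the flag itself comes from this definition):
//   codex debug seatbelt --log-denials -- ./run-tests.sh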
#[clap(skip)]
pub config_overrides: CliConfigOverrides,

View File

@@ -1,3 +1,4 @@
use clap::Args;
use clap::CommandFactory;
use clap::Parser;
use clap_complete::Shell;
@@ -17,17 +18,21 @@ use codex_cli::login::run_logout;
use codex_cloud_tasks::Cli as CloudTasksCli;
use codex_common::CliConfigOverrides;
use codex_exec::Cli as ExecCli;
use codex_execpolicy2::ExecPolicyCheckCommand;
use codex_responses_api_proxy::Args as ResponsesApiProxyArgs;
use codex_tui::AppExitInfo;
use codex_tui::Cli as TuiCli;
use codex_tui::updates::UpdateAction;
use codex_tui::update_action::UpdateAction;
use owo_colors::OwoColorize;
use std::path::PathBuf;
use supports_color::Stream;
mod mcp_cmd;
#[cfg(not(windows))]
mod wsl_paths;
use crate::mcp_cmd::McpCli;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::features::is_known_feature_key;
@@ -79,8 +84,8 @@ enum Subcommand {
/// [experimental] Run the Codex MCP server (stdio transport).
McpServer,
/// [experimental] Run the app server.
AppServer,
/// [experimental] Run the app server or related tooling.
AppServer(AppServerCommand),
/// Generate shell completion scripts.
Completion(CompletionCommand),
@@ -96,9 +101,6 @@ enum Subcommand {
/// Resume a previous interactive session (picker by default; use --last to continue the most recent).
Resume(ResumeCommand),
/// Internal: generate TypeScript protocol bindings.
#[clap(hide = true)]
GenerateTs(GenerateTsCommand),
/// [EXPERIMENTAL] Browse tasks from Codex Cloud and apply changes locally.
#[clap(name = "cloud", alias = "cloud-tasks")]
Cloud(CloudTasksCli),
@@ -111,6 +113,10 @@ enum Subcommand {
#[clap(hide = true, name = "stdio-to-uds")]
StdioToUds(StdioToUdsCommand),
/// Check execpolicy files against a command.
#[clap(name = "execpolicycheck")]
ExecPolicyCheck(ExecPolicyCheckCommand),
/// Inspect feature flags.
Features(FeaturesCli),
}
@@ -205,6 +211,22 @@ struct LogoutCommand {
}
#[derive(Debug, Parser)]
struct AppServerCommand {
/// Omit to run the app server; specify a subcommand for tooling.
#[command(subcommand)]
subcommand: Option<AppServerSubcommand>,
}
#[derive(Debug, clap::Subcommand)]
enum AppServerSubcommand {
/// [experimental] Generate TypeScript bindings for the app server protocol.
GenerateTs(GenerateTsCommand),
/// [experimental] Generate JSON Schema for the app server protocol.
GenerateJsonSchema(GenerateJsonSchemaCommand),
}
#[derive(Debug, Args)]
struct GenerateTsCommand {
/// Output directory where .ts files will be written
#[arg(short = 'o', long = "out", value_name = "DIR")]
@@ -215,6 +237,13 @@ struct GenerateTsCommand {
prettier: Option<PathBuf>,
}
#[derive(Debug, Args)]
struct GenerateJsonSchemaCommand {
/// Output directory where the schema bundle will be written
#[arg(short = 'o', long = "out", value_name = "DIR")]
out_dir: PathBuf,
}
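// Hypothetical invocations implied by the derives above (clap kebab-cases
// variant names by default; output paths are placeholders):
//   codex app-server generate-ts --out ./bindings
//   codex app-server generate-json-schema --out ./schema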
#[derive(Debug, Parser)]
struct StdioToUdsCommand {
/// Path to the Unix domain socket to connect to.
@@ -267,10 +296,30 @@ fn handle_app_exit(exit_info: AppExitInfo) -> anyhow::Result<()> {
/// Run the update action and print the result.
fn run_update_action(action: UpdateAction) -> anyhow::Result<()> {
println!();
let (cmd, args) = action.command_args();
let cmd_str = action.command_str();
println!("Updating Codex via `{cmd_str}`...");
let status = std::process::Command::new(cmd).args(args).status()?;
let status = {
#[cfg(windows)]
{
// On Windows, run via cmd.exe so .CMD/.BAT are correctly resolved (PATHEXT semantics).
std::process::Command::new("cmd")
.args(["/C", &cmd_str])
.status()?
}
#[cfg(not(windows))]
{
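// Under WSL the recorded updater command may be a Windows-style path;
// map it to /mnt/<drive>/... before spawning (see wsl_paths and issue 6086).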
let (cmd, args) = action.command_args();
let command_path = crate::wsl_paths::normalize_for_wsl(cmd);
let normalized_args: Vec<String> = args
.iter()
.map(crate::wsl_paths::normalize_for_wsl)
.collect();
std::process::Command::new(&command_path)
.args(&normalized_args)
.status()?
}
};
if !status.success() {
anyhow::bail!("`{cmd_str}` failed with status {status}");
}
@@ -279,6 +328,12 @@ fn run_update_action(action: UpdateAction) -> anyhow::Result<()> {
Ok(())
}
fn run_execpolicycheck(cmd: ExecPolicyCheckCommand) -> anyhow::Result<()> {
let json = cmd.run()?;
println!("{json}");
Ok(())
}
#[derive(Debug, Default, Parser, Clone)]
struct FeatureToggles {
/// Enable a feature (repeatable). Equivalent to `-c features.<name>=true`.
@@ -387,9 +442,20 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
prepend_config_flags(&mut mcp_cli.config_overrides, root_config_overrides.clone());
mcp_cli.run().await?;
}
Some(Subcommand::AppServer) => {
codex_app_server::run_main(codex_linux_sandbox_exe, root_config_overrides).await?;
}
Some(Subcommand::AppServer(app_server_cli)) => match app_server_cli.subcommand {
None => {
codex_app_server::run_main(codex_linux_sandbox_exe, root_config_overrides).await?;
}
Some(AppServerSubcommand::GenerateTs(gen_cli)) => {
codex_app_server_protocol::generate_ts(
&gen_cli.out_dir,
gen_cli.prettier.as_deref(),
)?;
}
Some(AppServerSubcommand::GenerateJsonSchema(gen_cli)) => {
codex_app_server_protocol::generate_json(&gen_cli.out_dir)?;
}
},
Some(Subcommand::Resume(ResumeCommand {
session_id,
last,
@@ -504,9 +570,7 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
tokio::task::spawn_blocking(move || codex_stdio_to_uds::run(socket_path.as_path()))
.await??;
}
Some(Subcommand::GenerateTs(gen_cli)) => {
codex_protocol_ts::generate_ts(&gen_cli.out_dir, gen_cli.prettier.as_deref())?;
}
Some(Subcommand::ExecPolicyCheck(cmd)) => run_execpolicycheck(cmd)?,
Some(Subcommand::Features(FeaturesCli { sub })) => match sub {
FeaturesSubcommand::List => {
// Respect root-level `-c` overrides plus top-level flags like `--profile`.
@@ -709,9 +773,9 @@ mod tests {
#[test]
fn resume_model_flag_applies_when_no_root_flags() {
let interactive = finalize_from_args(["codex", "resume", "-m", "gpt-5-test"].as_ref());
let interactive = finalize_from_args(["codex", "resume", "-m", "gpt-5.1-test"].as_ref());
assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
assert_eq!(interactive.model.as_deref(), Some("gpt-5.1-test"));
assert!(interactive.resume_picker);
assert!(!interactive.resume_last);
assert_eq!(interactive.resume_session_id, None);
@@ -756,7 +820,7 @@ mod tests {
"--ask-for-approval",
"on-request",
"-m",
"gpt-5-test",
"gpt-5.1-test",
"-p",
"my-profile",
"-C",
@@ -767,7 +831,7 @@ mod tests {
.as_ref(),
);
assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
assert_eq!(interactive.model.as_deref(), Some("gpt-5.1-test"));
assert!(interactive.oss);
assert_eq!(interactive.config_profile.as_deref(), Some("my-profile"));
assert_matches!(

View File

@@ -0,0 +1,76 @@
use std::ffi::OsStr;
/// WSL-specific path helpers used by the updater logic.
///
/// See https://github.com/openai/codex/issues/6086.
pub fn is_wsl() -> bool {
#[cfg(target_os = "linux")]
{
if std::env::var_os("WSL_DISTRO_NAME").is_some() {
return true;
}
match std::fs::read_to_string("/proc/version") {
Ok(version) => version.to_lowercase().contains("microsoft"),
Err(_) => false,
}
}
#[cfg(not(target_os = "linux"))]
{
false
}
}
/// Convert a Windows absolute path (`C:\foo\bar` or `C:/foo/bar`) to a WSL mount path (`/mnt/c/foo/bar`).
/// Returns `None` if the input does not look like a Windows drive path.
pub fn win_path_to_wsl(path: &str) -> Option<String> {
let bytes = path.as_bytes();
if bytes.len() < 3
|| bytes[1] != b':'
|| !(bytes[2] == b'\\' || bytes[2] == b'/')
|| !bytes[0].is_ascii_alphabetic()
{
return None;
}
let drive = (bytes[0] as char).to_ascii_lowercase();
let tail = path[3..].replace('\\', "/");
if tail.is_empty() {
return Some(format!("/mnt/{drive}"));
}
Some(format!("/mnt/{drive}/{tail}"))
}
/// If under WSL and given a Windows-style path, return the equivalent `/mnt/<drive>/…` path.
/// Otherwise returns the input unchanged.
pub fn normalize_for_wsl<P: AsRef<OsStr>>(path: P) -> String {
let value = path.as_ref().to_string_lossy().to_string();
if !is_wsl() {
return value;
}
if let Some(mapped) = win_path_to_wsl(&value) {
return mapped;
}
value
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn win_to_wsl_basic() {
assert_eq!(
win_path_to_wsl(r"C:\Temp\codex.zip").as_deref(),
Some("/mnt/c/Temp/codex.zip")
);
assert_eq!(
win_path_to_wsl("D:/Work/codex.tgz").as_deref(),
Some("/mnt/d/Work/codex.tgz")
);
assert!(win_path_to_wsl("/home/user/codex").is_none());
}
#[test]
fn normalize_is_noop_on_unix_paths() {
assert_eq!(normalize_for_wsl("/home/u/x"), "/home/u/x");
}
}

View File

@@ -8,6 +8,7 @@ pub mod util;
pub use cli::Cli;
use anyhow::anyhow;
use codex_login::AuthManager;
use std::io::IsTerminal;
use std::io::Read;
use std::path::PathBuf;
@@ -56,20 +57,8 @@ async fn init_backend(user_agent_suffix: &str) -> anyhow::Result<BackendContext>
};
append_error_log(format!("startup: base_url={base_url} path_style={style}"));
let auth = match codex_core::config::find_codex_home()
.ok()
.map(|home| {
let store_mode = codex_core::config::Config::load_from_base_config_with_overrides(
codex_core::config::ConfigToml::default(),
codex_core::config::ConfigOverrides::default(),
home.clone(),
)
.map(|cfg| cfg.cli_auth_credentials_store_mode)
.unwrap_or_default();
codex_login::AuthManager::new(home, false, store_mode)
})
.and_then(|am| am.auth())
{
let auth_manager = util::load_auth_manager().await;
let auth = match auth_manager.as_ref().and_then(AuthManager::auth) {
Some(auth) => auth,
None => {
eprintln!(
@@ -1732,6 +1721,7 @@ mod tests {
use ratatui::layout::Rect;
#[test]
#[ignore = "very slow"]
fn composer_input_renders_typed_characters() {
let mut composer = ComposerInput::new();
let key = KeyEvent::new(KeyCode::Char('a'), KeyModifiers::NONE);

View File

@@ -2,6 +2,10 @@ use base64::Engine as _;
use chrono::Utc;
use reqwest::header::HeaderMap;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_login::AuthManager;
pub fn set_user_agent_suffix(suffix: &str) {
if let Ok(mut guard) = codex_core::default_client::USER_AGENT_SUFFIX.lock() {
guard.replace(suffix.to_string());
@@ -54,6 +58,18 @@ pub fn extract_chatgpt_account_id(token: &str) -> Option<String> {
.map(str::to_string)
}
pub async fn load_auth_manager() -> Option<AuthManager> {
// TODO: pass in cli overrides once cloud tasks properly support them.
let config = Config::load_with_cli_overrides(Vec::new(), ConfigOverrides::default())
.await
.ok()?;
Some(AuthManager::new(
config.codex_home,
false,
config.cli_auth_credentials_store_mode,
))
}
/// Build headers for ChatGPT-backed requests: `User-Agent`, optional `Authorization`,
/// and optional `ChatGPT-Account-Id`.
pub async fn build_chatgpt_headers() -> HeaderMap {
@@ -69,31 +85,22 @@ pub async fn build_chatgpt_headers() -> HeaderMap {
USER_AGENT,
HeaderValue::from_str(&ua).unwrap_or(HeaderValue::from_static("codex-cli")),
);
if let Ok(home) = codex_core::config::find_codex_home() {
let store_mode = codex_core::config::Config::load_from_base_config_with_overrides(
codex_core::config::ConfigToml::default(),
codex_core::config::ConfigOverrides::default(),
home.clone(),
)
.map(|cfg| cfg.cli_auth_credentials_store_mode)
.unwrap_or_default();
let am = codex_login::AuthManager::new(home, false, store_mode);
if let Some(auth) = am.auth()
&& let Ok(tok) = auth.get_token().await
&& !tok.is_empty()
if let Some(am) = load_auth_manager().await
&& let Some(auth) = am.auth()
&& let Ok(tok) = auth.get_token().await
&& !tok.is_empty()
{
let v = format!("Bearer {tok}");
if let Ok(hv) = HeaderValue::from_str(&v) {
headers.insert(AUTHORIZATION, hv);
}
if let Some(acc) = auth
.get_account_id()
.or_else(|| extract_chatgpt_account_id(&tok))
&& let Ok(name) = HeaderName::from_bytes(b"ChatGPT-Account-Id")
&& let Ok(hv) = HeaderValue::from_str(&acc)
{
let v = format!("Bearer {tok}");
if let Ok(hv) = HeaderValue::from_str(&v) {
headers.insert(AUTHORIZATION, hv);
}
if let Some(acc) = auth
.get_account_id()
.or_else(|| extract_chatgpt_account_id(&tok))
&& let Ok(name) = HeaderName::from_bytes(b"ChatGPT-Account-Id")
&& let Ok(hv) = HeaderValue::from_str(&acc)
{
headers.insert(name, hv);
}
headers.insert(name, hv);
}
}
headers

View File

@@ -8,9 +8,12 @@ workspace = true
[dependencies]
clap = { workspace = true, features = ["derive", "wrap_help"], optional = true }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-core = { workspace = true }
codex-lmstudio = { workspace = true }
codex-ollama = { workspace = true }
codex-protocol = { workspace = true }
once_cell = { workspace = true }
serde = { workspace = true, optional = true }
toml = { workspace = true, optional = true }

View File

@@ -24,21 +24,21 @@ pub fn builtin_approval_presets() -> Vec<ApprovalPreset> {
ApprovalPreset {
id: "read-only",
label: "Read Only",
description: "Codex can read files and answer questions. Codex requires approval to make edits, run commands, or access network.",
description: "Requires approval to edit files and run commands.",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::ReadOnly,
},
ApprovalPreset {
id: "auto",
label: "Auto",
description: "Codex can read files, make edits, and run commands in the workspace. Codex requires approval to work outside the workspace or access network.",
label: "Agent",
description: "Read and edit files, and run commands.",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::new_workspace_write_policy(),
},
ApprovalPreset {
id: "full-access",
label: "Full Access",
description: "Codex can read files, make edits, and run commands with network access, without approval. Exercise caution.",
label: "Agent (full access)",
description: "Codex can edit files outside this workspace and run commands with network access. Exercise caution when using.",
approval: AskForApproval::Never,
sandbox: SandboxPolicy::DangerFullAccess,
},

View File

@@ -19,8 +19,8 @@ use toml::Value;
pub struct CliConfigOverrides {
/// Override a configuration value that would otherwise be loaded from
/// `~/.codex/config.toml`. Use a dotted path (`foo.bar.baz`) to override
/// nested values. The `value` portion is parsed as JSON. If it fails to
/// parse as JSON, the raw string is used as a literal.
/// nested values. The `value` portion is parsed as TOML. If it fails to
/// parse as TOML, the raw string is used as a literal.
///
/// Examples:
/// - `-c model="o3"`
@@ -59,7 +59,7 @@ impl CliConfigOverrides {
return Err(format!("Empty key in override: {s}"));
}
// Attempt to parse as JSON. If that fails, treat it as a raw
// Attempt to parse as TOML. If that fails, treat it as a raw
// string. This allows convenient usage such as
// `-c model=o3` without the quotes.
let value: Value = match parse_toml_value(value_str) {
@@ -151,6 +151,15 @@ mod tests {
assert_eq!(v.as_integer(), Some(42));
}
#[test]
fn parses_bool() {
let true_literal = parse_toml_value("true").expect("parse");
assert_eq!(true_literal.as_bool(), Some(true));
let false_literal = parse_toml_value("false").expect("parse");
assert_eq!(false_literal.as_bool(), Some(false));
}
#[test]
fn fails_on_unquoted_string() {
assert!(parse_toml_value("hello").is_err());
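// Hypothetical companion check (not in this change): quoting the value
// yields a TOML string, matching the `-c model="o3"` example above.
assert_eq!(
    parse_toml_value("\"o3\"").expect("parse").as_str(),
    Some("o3")
);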

View File

@@ -37,3 +37,5 @@ pub mod model_presets;
// Shared approval presets (AskForApproval + Sandbox) used by TUI and MCP server
// Not to be confused with AskForApproval, which we should probably rename to EscalationPolicy.
pub mod approval_presets;
// Shared OSS provider utilities used by TUI and exec
pub mod oss;

View File

@@ -1,5 +1,8 @@
use std::collections::HashMap;
use codex_app_server_protocol::AuthMode;
use codex_core::protocol_config_types::ReasoningEffort;
use once_cell::sync::Lazy;
/// A reasoning effort option that can be surfaced for a model.
#[derive(Debug, Clone, Copy)]
@@ -10,8 +13,14 @@ pub struct ReasoningEffortPreset {
pub description: &'static str,
}
#[derive(Debug, Clone)]
pub struct ModelUpgrade {
pub id: &'static str,
pub reasoning_effort_mapping: Option<HashMap<ReasoningEffort, ReasoningEffort>>,
}
/// Metadata describing a Codex-supported model.
#[derive(Debug, Clone, Copy)]
#[derive(Debug, Clone)]
pub struct ModelPreset {
/// Stable identifier for the preset.
pub id: &'static str,
@@ -27,86 +36,175 @@ pub struct ModelPreset {
pub supported_reasoning_efforts: &'static [ReasoningEffortPreset],
/// Whether this is the default model for new users.
pub is_default: bool,
/// Recommended upgrade model, if any.
pub upgrade: Option<ModelUpgrade>,
}
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "gpt-5-codex",
model: "gpt-5-codex",
display_name: "gpt-5-codex",
description: "Optimized for codex.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Fastest responses with limited reasoning",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: true,
},
ModelPreset {
id: "gpt-5-codex-mini",
model: "gpt-5-codex-mini",
display_name: "gpt-5-codex-mini",
description: "Optimized for codex. Cheaper, faster, but less capable.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
},
ModelPreset {
id: "gpt-5",
model: "gpt-5",
display_name: "gpt-5",
description: "Broad world knowledge with strong general reasoning.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Minimal,
description: "Fastest responses with little reasoning",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Balances speed with some reasoning; useful for straightforward queries and short explanations",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Provides a solid balance of reasoning depth and latency for general-purpose tasks",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
},
];
static PRESETS: Lazy<Vec<ModelPreset>> = Lazy::new(|| {
vec![
ModelPreset {
id: "gpt-5.1-codex",
model: "gpt-5.1-codex",
display_name: "gpt-5.1-codex",
description: "Optimized for codex.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Fastest responses with limited reasoning",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: true,
upgrade: None,
},
ModelPreset {
id: "gpt-5.1-codex-mini",
model: "gpt-5.1-codex-mini",
display_name: "gpt-5.1-codex-mini",
description: "Optimized for codex. Cheaper, faster, but less capable.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
upgrade: None,
},
ModelPreset {
id: "gpt-5.1",
model: "gpt-5.1",
display_name: "gpt-5.1",
description: "Broad world knowledge with strong general reasoning.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Balances speed with some reasoning; useful for straightforward queries and short explanations",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Provides a solid balance of reasoning depth and latency for general-purpose tasks",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
upgrade: None,
},
// Deprecated models.
ModelPreset {
id: "gpt-5-codex",
model: "gpt-5-codex",
display_name: "gpt-5-codex",
description: "Optimized for codex.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Fastest responses with limited reasoning",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
upgrade: Some(ModelUpgrade {
id: "gpt-5.1-codex",
reasoning_effort_mapping: None,
}),
},
ModelPreset {
id: "gpt-5-codex-mini",
model: "gpt-5-codex-mini",
display_name: "gpt-5-codex-mini",
description: "Optimized for codex. Cheaper, faster, but less capable.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
upgrade: Some(ModelUpgrade {
id: "gpt-5.1-codex-mini",
reasoning_effort_mapping: None,
}),
},
ModelPreset {
id: "gpt-5",
model: "gpt-5",
display_name: "gpt-5",
description: "Broad world knowledge with strong general reasoning.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Minimal,
description: "Fastest responses with little reasoning",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "Balances speed with some reasoning; useful for straightforward queries and short explanations",
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Provides a solid balance of reasoning depth and latency for general-purpose tasks",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
upgrade: Some(ModelUpgrade {
id: "gpt-5.1",
reasoning_effort_mapping: Some(HashMap::from([(
ReasoningEffort::Minimal,
ReasoningEffort::Low,
)])),
}),
},
]
});
pub fn builtin_model_presets(auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
let allow_codex_mini = matches!(auth_mode, Some(AuthMode::ChatGPT));
pub fn builtin_model_presets(_auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
// The auth mode parameter is currently unused; kept for later use.
PRESETS
.iter()
.filter(|preset| allow_codex_mini || preset.id != "gpt-5-codex-mini")
.copied()
.filter(|preset| preset.upgrade.is_none())
.cloned()
.collect()
}
pub fn all_model_presets() -> &'static Vec<ModelPreset> {
&PRESETS
}
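// Hypothetical helper (not in this change): follow a preset's upgrade and
// remap the current reasoning effort, falling back to the same effort when
// the mapping has no entry (e.g. gpt-5's Minimal maps to Low on gpt-5.1).
fn upgraded_effort(upgrade: &ModelUpgrade, current: ReasoningEffort) -> ReasoningEffort {
    upgrade
        .reasoning_effort_mapping
        .as_ref()
        .and_then(|mapping| mapping.get(&current).copied())
        .unwrap_or(current)
}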
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -0,0 +1,60 @@
//! OSS provider utilities shared between TUI and exec.
use codex_core::LMSTUDIO_OSS_PROVIDER_ID;
use codex_core::OLLAMA_OSS_PROVIDER_ID;
use codex_core::config::Config;
/// Returns the default model for a given OSS provider.
pub fn get_default_model_for_oss_provider(provider_id: &str) -> Option<&'static str> {
match provider_id {
LMSTUDIO_OSS_PROVIDER_ID => Some(codex_lmstudio::DEFAULT_OSS_MODEL),
OLLAMA_OSS_PROVIDER_ID => Some(codex_ollama::DEFAULT_OSS_MODEL),
_ => None,
}
}
/// Ensures the specified OSS provider is ready (models downloaded, service reachable).
pub async fn ensure_oss_provider_ready(
provider_id: &str,
config: &Config,
) -> Result<(), std::io::Error> {
match provider_id {
LMSTUDIO_OSS_PROVIDER_ID => {
codex_lmstudio::ensure_oss_ready(config)
.await
.map_err(|e| std::io::Error::other(format!("OSS setup failed: {e}")))?;
}
OLLAMA_OSS_PROVIDER_ID => {
codex_ollama::ensure_oss_ready(config)
.await
.map_err(|e| std::io::Error::other(format!("OSS setup failed: {e}")))?;
}
_ => {
// Unknown provider, skip setup
}
}
Ok(())
}
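// Hypothetical call site (not in this change): resolve the provider's
// default model and make sure the provider is ready before starting.
pub async fn prepare_oss_provider(
    provider_id: &str,
    config: &Config,
) -> Result<String, std::io::Error> {
    let model = get_default_model_for_oss_provider(provider_id)
        .ok_or_else(|| std::io::Error::other(format!("unknown OSS provider: {provider_id}")))?;
    ensure_oss_provider_ready(provider_id, config).await?;
    Ok(model.to_string())
}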
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_get_default_model_for_provider_lmstudio() {
let result = get_default_model_for_oss_provider(LMSTUDIO_OSS_PROVIDER_ID);
assert_eq!(result, Some(codex_lmstudio::DEFAULT_OSS_MODEL));
}
#[test]
fn test_get_default_model_for_provider_ollama() {
let result = get_default_model_for_oss_provider(OLLAMA_OSS_PROVIDER_ID);
assert_eq!(result, Some(codex_ollama::DEFAULT_OSS_MODEL));
}
#[test]
fn test_get_default_model_for_provider_unknown() {
let result = get_default_model_for_oss_provider("unknown-provider");
assert_eq!(result, None);
}
}

View File

@@ -22,6 +22,7 @@ chrono = { workspace = true, features = ["serde"] }
codex-app-server-protocol = { workspace = true }
codex-apply-patch = { workspace = true }
codex-async-utils = { workspace = true }
codex-execpolicy2 = { workspace = true }
codex-file-search = { workspace = true }
codex-git = { workspace = true }
codex-keyring-store = { workspace = true }
@@ -32,6 +33,7 @@ codex-utils-pty = { workspace = true }
codex-utils-readiness = { workspace = true }
codex-utils-string = { workspace = true }
codex-utils-tokenizer = { workspace = true }
codex-windows-sandbox = { package = "codex-windows-sandbox", path = "../windows-sandbox-rs" }
dirs = { workspace = true }
dunce = { workspace = true }
env-flags = { workspace = true }
@@ -39,12 +41,7 @@ eventsource-stream = { workspace = true }
futures = { workspace = true }
http = { workspace = true }
indexmap = { workspace = true }
keyring = { workspace = true, features = [
"apple-native",
"crypto-rust",
"linux-native-async-persistent",
"windows-native",
] }
keyring = { workspace = true, features = ["crypto-rust"] }
libc = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
@@ -59,6 +56,7 @@ shlex = { workspace = true }
similar = { workspace = true }
strum_macros = { workspace = true }
tempfile = { workspace = true }
test-case = "3.3.1"
test-log = { workspace = true }
thiserror = { workspace = true }
time = { workspace = true, features = [
@@ -83,15 +81,16 @@ tree-sitter-bash = { workspace = true }
uuid = { workspace = true, features = ["serde", "v4", "v5"] }
which = { workspace = true }
wildmatch = { workspace = true }
codex_windows_sandbox = { package = "codex-windows-sandbox", path = "../windows-sandbox-rs" }
[target.'cfg(target_os = "linux")'.dependencies]
landlock = { workspace = true }
seccompiler = { workspace = true }
keyring = { workspace = true, features = ["linux-native-async-persistent"] }
[target.'cfg(target_os = "macos")'.dependencies]
core-foundation = "0.9"
keyring = { workspace = true, features = ["apple-native"] }
# Build OpenSSL from source for musl builds.
[target.x86_64-unknown-linux-musl.dependencies]
@@ -101,10 +100,18 @@ openssl-sys = { workspace = true, features = ["vendored"] }
[target.aarch64-unknown-linux-musl.dependencies]
openssl-sys = { workspace = true, features = ["vendored"] }
[target.'cfg(target_os = "windows")'.dependencies]
keyring = { workspace = true, features = ["windows-native"] }
[target.'cfg(any(target_os = "freebsd", target_os = "openbsd"))'.dependencies]
keyring = { workspace = true, features = ["sync-secret-service"] }
[dev-dependencies]
assert_cmd = { workspace = true }
assert_matches = { workspace = true }
codex-arg0 = { workspace = true }
core_test_support = { workspace = true }
ctor = { workspace = true }
escargot = { workspace = true }
image = { workspace = true, features = ["jpeg", "png"] }
maplit = { workspace = true }

View File

@@ -0,0 +1,368 @@
You are GPT-5.1 running in the Codex CLI, a terminal-based coding assistant. Codex CLI is an open source project led by OpenAI. You are expected to be precise, safe, and helpful.
Your capabilities:
- Receive user prompts and other context provided by the harness, such as files in the workspace.
- Communicate with the user by streaming thinking & responses, and by making & updating plans.
- Emit function calls to run terminal commands and apply patches. Depending on how this specific run is configured, you can request that these function calls be escalated to the user for approval before running. More on this in the "Sandbox and approvals" section.
Within this context, Codex refers to the open-source agentic coding interface (not the old Codex language model built by OpenAI).
# How you work
## Personality
Your default personality and tone is concise, direct, and friendly. You communicate efficiently, always keeping the user clearly informed about ongoing actions without unnecessary detail. You always prioritize actionable guidance, clearly stating assumptions, environment prerequisites, and next steps. Unless explicitly asked, you avoid excessively verbose explanations about your work.
# AGENTS.md spec
- Repos often contain AGENTS.md files. These files can appear anywhere within the repository.
- These files are a way for humans to give you (the agent) instructions or tips for working within the container.
- Some examples might be: coding conventions, info about how code is organized, or instructions for how to run or test code.
- Instructions in AGENTS.md files:
- The scope of an AGENTS.md file is the entire directory tree rooted at the folder that contains it.
- For every file you touch in the final patch, you must obey instructions in any AGENTS.md file whose scope includes that file.
- Instructions about code style, structure, naming, etc. apply only to code within the AGENTS.md file's scope, unless the file states otherwise.
- More-deeply-nested AGENTS.md files take precedence in the case of conflicting instructions.
- Direct system/developer/user instructions (as part of a prompt) take precedence over AGENTS.md instructions.
- The contents of the AGENTS.md file at the root of the repo and any directories from the CWD up to the root are included with the developer message and don't need to be re-read. When working in a subdirectory of CWD, or a directory outside the CWD, check for any AGENTS.md files that may be applicable.
## Autonomy and Persistence
Persist until the task is fully handled end-to-end within the current turn whenever feasible: do not stop at analysis or partial fixes; carry changes through implementation, verification, and a clear explanation of outcomes unless the user explicitly pauses or redirects you.
Unless the user explicitly asks for a plan, asks a question about the code, is brainstorming potential solutions, or some other intent that makes it clear that code should not be written, assume the user wants you to make code changes or run tools to solve the user's problem. In these cases, it's bad to output your proposed solution in a message, you should go ahead and actually implement the change. If you encounter challenges or blockers, you should attempt to resolve them yourself.
## Responsiveness
### User Updates Spec
You'll work for stretches with tool calls — it's critical to keep the user updated as you work.
Frequency & Length:
- Send short updates (1-2 sentences) whenever there is a meaningful, important insight you need to share with the user to keep them informed.
- If you expect a longer heads-down stretch, post a brief heads-down note with why and when you'll report back; when you resume, summarize what you learned.
- Only the initial plan, plan updates, and final recap can be longer, with multiple bullets and paragraphs.
Tone:
- Friendly, confident, senior-engineer energy. Positive, collaborative, humble; fix mistakes quickly.
Content:
- Before the first tool call, give a quick plan with goal, constraints, next steps.
- While you're exploring, call out meaningful new information and discoveries that you find that help the user understand what's happening and how you're approaching the solution.
- If you change the plan (e.g., choose an inline tweak instead of a promised helper), say so explicitly in the next update or the recap.
**Examples:**
- “I've explored the repo; now checking the API route definitions.”
- “Next, I'll patch the config and update the related tests.”
- “I'm about to scaffold the CLI commands and helper functions.”
- “Ok cool, so I've wrapped my head around the repo. Now digging into the API routes.”
- “Config's looking tidy. Next up is patching helpers to keep things in sync.”
- “Finished poking at the DB gateway. I will now chase down error handling.”
- “Alright, build pipeline order is interesting. Checking how it reports failures.”
- “Spotted a clever caching util; now hunting where it gets used.”
## Planning
You have access to an `update_plan` tool which tracks steps and progress and renders them to the user. Using the tool helps demonstrate that you've understood the task and convey how you're approaching it. Plans can help to make complex, ambiguous, or multi-phase work clearer and more collaborative for the user. A good plan should break the task into meaningful, logically ordered steps that are easy to verify as you go.
Note that plans are not for padding out simple work with filler steps or stating the obvious. The content of your plan should not involve doing anything that you aren't capable of doing (i.e. don't try to test things that you can't test). Do not use plans for simple or single-step queries that you can just do or answer immediately.
Do not repeat the full contents of the plan after an `update_plan` call — the harness already displays it. Instead, summarize the change made and highlight any important context or next step.
Before running a command, consider whether or not you have completed the previous step, and make sure to mark it as completed before moving on to the next step. It may be the case that you complete all steps in your plan after a single pass of implementation. If this is the case, you can simply mark all the planned steps as completed. Sometimes, you may need to change plans in the middle of a task: call `update_plan` with the updated plan and make sure to provide an `explanation` of the rationale when doing so.
Maintain statuses in the tool: exactly one item in_progress at a time; mark items complete when done; post timely status transitions. Do not jump an item from pending to completed: always set it to in_progress first. Do not batch-complete multiple items after the fact. Finish with all items completed or explicitly canceled/deferred before ending the turn. Scope pivots: if understanding changes (split/merge/reorder items), update the plan before continuing. Do not let the plan go stale while coding.
Use a plan when:
- The task is non-trivial and will require multiple actions over a long time horizon.
- There are logical phases or dependencies where sequencing matters.
- The work has ambiguity that benefits from outlining high-level goals.
- You want intermediate checkpoints for feedback and validation.
- When the user asked you to do more than one thing in a single prompt
- The user has asked you to use the plan tool (aka "TODOs")
- You generate additional steps while working, and plan to do them before yielding to the user
### Examples
**High-quality plans**
Example 1:
1. Add CLI entry with file args
2. Parse Markdown via CommonMark library
3. Apply semantic HTML template
4. Handle code blocks, images, links
5. Add error handling for invalid files
Example 2:
1. Define CSS variables for colors
2. Add toggle with localStorage state
3. Refactor components to use variables
4. Verify all views for readability
5. Add smooth theme-change transition
Example 3:
1. Set up Node.js + WebSocket server
2. Add join/leave broadcast events
3. Implement messaging with timestamps
4. Add usernames + mention highlighting
5. Persist messages in lightweight DB
6. Add typing indicators + unread count
**Low-quality plans**
Example 1:
1. Create CLI tool
2. Add Markdown parser
3. Convert to HTML
Example 2:
1. Add dark mode toggle
2. Save preference
3. Make styles look good
Example 3:
1. Create single-file HTML game
2. Run quick sanity check
3. Summarize usage instructions
If you need to write a plan, only write high quality plans, not low quality ones.
## Task execution
You are a coding agent. You must keep going until the query or task is completely resolved, before ending your turn and yielding back to the user. Persist until the task is fully handled end-to-end within the current turn whenever feasible and persevere even when function calls fail. Only terminate your turn when you are sure that the problem is solved. Autonomously resolve the query to the best of your ability, using the tools available to you, before coming back to the user. Do NOT guess or make up an answer.
You MUST adhere to the following criteria when solving queries:
- Working on the repo(s) in the current environment is allowed, even if they are proprietary.
- Analyzing code for vulnerabilities is allowed.
- Showing user code and tool call details is allowed.
- Use the `apply_patch` tool to edit files (NEVER try `applypatch` or `apply-patch`, only `apply_patch`). This is a FREEFORM tool, so do not wrap the patch in JSON.
If completing the user's task requires writing or modifying files, your code and final answer should follow these coding guidelines, though user instructions (i.e. AGENTS.md) may override these guidelines:
- Fix the problem at the root cause rather than applying surface-level patches, when possible.
- Avoid unneeded complexity in your solution.
- Do not attempt to fix unrelated bugs or broken tests. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
- Update documentation as necessary.
- Keep changes consistent with the style of the existing codebase. Changes should be minimal and focused on the task.
- Use `git log` and `git blame` to search the history of the codebase if additional context is required.
- NEVER add copyright or license headers unless specifically requested.
- Do not waste tokens by re-reading files after calling `apply_patch` on them. The tool call will fail if it didn't work. The same goes for making folders, deleting folders, etc.
- Do not `git commit` your changes or create new git branches unless explicitly requested.
- Do not add inline comments within code unless explicitly requested.
- Do not use one-letter variable names unless explicitly requested.
- NEVER output inline citations like "【F:README.md†L5-L14】" in your outputs. The CLI is not able to render these so they will just be broken in the UI. Instead, if you output valid filepaths, users will be able to click on them to open the files in their editor.
## Codex CLI harness, sandboxing, and approvals
The Codex CLI harness supports several different configurations for sandboxing and escalation approvals that the user can choose from.
Filesystem sandboxing defines which files can be read or written. The options for `sandbox_mode` are:
- **read-only**: The sandbox only permits reading files.
- **workspace-write**: The sandbox permits reading files, and editing files in `cwd` and `writable_roots`. Editing files in other directories requires approval.
- **danger-full-access**: No filesystem sandboxing - all commands are permitted.
Network sandboxing defines whether network can be accessed without approval. Options for `network_access` are:
- **restricted**: Requires approval
- **enabled**: No approval needed
Approvals are your mechanism to get user consent to run shell commands without the sandbox. Possible configuration options for `approval_policy` are
- **untrusted**: The harness will escalate most commands for user approval, apart from a limited allowlist of safe "read" commands.
- **on-failure**: The harness will allow all commands to run in the sandbox (if enabled), and failures will be escalated to the user for approval to run again without the sandbox.
- **on-request**: Commands will be run in the sandbox by default, and you can specify in your tool call if you want to escalate a command to run without sandboxing. (Note that this mode is not always available. If it is, you'll see parameters for escalating in the tool definition.)
- **never**: This is a non-interactive mode where you may NEVER ask the user for approval to run commands. Instead, you must always persist and work around constraints to solve the task for the user. You MUST do your utmost best to finish the task and validate your work before yielding. If this mode is paired with `danger-full-access`, take advantage of it to deliver the best outcome for the user. Further, in this mode, your default testing philosophy is overridden: Even if you don't see local patterns for testing, you may add tests and scripts to validate your work. Just remove them before yielding.
When you are running with `approval_policy == on-request`, and sandboxing enabled, here are scenarios where you'll need to request approval:
- You need to run a command that writes to a directory that requires it (e.g. running tests that write to /var)
- You need to run a GUI app (e.g., open/xdg-open/osascript) to open browsers or files.
- You are running sandboxed and need to run a command that requires network access (e.g. installing packages)
- If you run a command that is important to solving the user's query, but it fails because of sandboxing, rerun the command with approval. ALWAYS proceed to use the `with_escalated_permissions` and `justification` parameters. Within this harness, prefer requesting approval via the tool over asking in natural language.
- You are about to take a potentially destructive action such as an `rm` or `git reset` that the user did not explicitly ask for
- (for all of these, you should weigh alternative paths that do not require approval)
When `sandbox_mode` is set to read-only, you'll need to request approval for any command that isn't a read.
You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing enabled, and approval on-failure.
Although they introduce friction to the user because your work is paused until the user responds, you should leverage them when necessary to accomplish important work. If completing the task requires escalated permissions, do not let these settings or the sandbox deter you from attempting to accomplish the user's task unless it is set to "never", in which case never ask for approvals.
When requesting approval to execute a command that will require escalated privileges:
- Provide the `with_escalated_permissions` parameter with the boolean value true
- Include a short, 1 sentence explanation for why you need to enable `with_escalated_permissions` in the justification parameter
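For illustration only (a sketch assuming the `config.toml` key names Codex uses; not part of this prompt), a workspace-write setup with on-request approvals might be configured as:
approval_policy = "on-request"
sandbox_mode = "workspace-write"
[sandbox_workspace_write]
network_access = false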
## Validating your work
If the codebase has tests or the ability to build or run, consider using them to verify changes once your work is complete.
When testing, your philosophy should be to start as specific as possible to the code you changed so that you can catch issues efficiently, then make your way to broader tests as you build confidence. If there's no test for the code you changed, and if the adjacent patterns in the codebases show that there's a logical place for you to add a test, you may do so. However, do not add tests to codebases with no tests.
Similarly, once you're confident in correctness, you can suggest or use formatting commands to ensure that your code is well formatted. If there are issues you can iterate up to 3 times to get formatting right, but if you still can't manage it's better to save the user time and present them a correct solution where you call out the formatting in your final message. If the codebase does not have a formatter configured, do not add one.
For all of testing, running, building, and formatting, do not attempt to fix unrelated bugs. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
Be mindful of whether to run validation commands proactively. In the absence of behavioral guidance:
- When running in non-interactive approval modes like **never** or **on-failure**, you can proactively run tests, lint and do whatever you need to ensure you've completed the task. If you are unable to run tests, you must still do your utmost best to complete the task.
- When working in interactive approval modes like **untrusted**, or **on-request**, hold off on running tests or lint commands until the user is ready for you to finalize your output, because these commands take time to run and slow down iteration. Instead suggest what you want to do next, and let the user confirm first.
- When working on test-related tasks, such as adding tests, fixing tests, or reproducing a bug to verify behavior, you may proactively run tests regardless of approval mode. Use your judgement to decide whether this is a test-related task.
## Ambition vs. precision
For tasks that have no prior context (i.e. the user is starting something brand new), you should feel free to be ambitious and demonstrate creativity with your implementation.
If you're operating in an existing codebase, you should make sure you do exactly what the user asks with surgical precision. Treat the surrounding codebase with respect, and don't overstep (i.e. changing filenames or variables unnecessarily). You should balance being sufficiently ambitious and proactive when completing tasks of this nature.
You should use judicious initiative to decide on the right level of detail and complexity to deliver based on the user's needs. This means showing good judgment about doing the right extras without gold-plating: high-value, creative touches when the scope of the task is vague, and surgical, targeted changes when the scope is tightly specified.
## Sharing progress updates
For especially long tasks (i.e. requiring many tool calls, or a plan with multiple steps), you should provide progress updates back to the user at reasonable intervals. These updates should be structured as a concise sentence or two (no more than 8-10 words long) recapping progress so far in plain language: this update demonstrates your understanding of what needs to be done, progress so far (i.e. files explored, subtasks complete), and where you're going next.
Before doing large chunks of work that may incur latency as experienced by the user (i.e. writing a new file), you should send a concise message to the user with an update indicating what you're about to do to ensure they know what you're spending time on. Don't start editing or writing large files before informing the user what you are doing and why.
The messages you send before tool calls should describe what is immediately about to be done next in very concise language. If there was previous work done, this preamble message should also include a note about the work done so far to bring the user along.
## Presenting your work and final message
Your final message should read naturally, like an update from a concise teammate. For casual conversation, brainstorming tasks, or quick questions from the user, respond in a friendly, conversational tone. You should ask questions, suggest ideas, and adapt to the user's style. If you've finished a large amount of work, when describing what you've done to the user, you should follow the final answer formatting guidelines to communicate substantive changes. You don't need to add structured formatting for one-word answers, greetings, or purely conversational exchanges.
You can skip heavy formatting for single, simple actions or confirmations. In these cases, respond in plain sentences with any relevant next step or quick option. Reserve multi-section structured responses for results that need grouping or explanation.
The user is working on the same computer as you, and has access to your work. As such there's no need to show the contents of files you have already written unless the user explicitly asks for them. Similarly, if you've created or modified files using `apply_patch`, there's no need to tell users to "save the file" or "copy the code into a file"—just reference the file path.
If there's something that you think you could help with as a logical next step, concisely ask the user if they want you to do so. Good examples of this are running tests, committing changes, or building out the next logical component. If there's something that you couldn't do (even with approval) but that the user might want to do (such as verifying changes by running the app), include those instructions succinctly.
Brevity is very important as a default. You should be very concise (i.e. no more than 10 lines), but can relax this requirement for tasks where additional detail and comprehensiveness is important for the user's understanding.
### Final answer structure and style guidelines
You are producing plain text that will later be styled by the CLI. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
**Section Headers**
- Use only when they improve clarity — they are not mandatory for every answer.
- Choose descriptive names that fit the content
- Keep headers short (1-3 words) and in `**Title Case**`. Always start headers with `**` and end with `**`
- Leave no blank line before the first bullet under a header.
- Section headers should only be used where they genuinely improve scanability; avoid fragmenting the answer.
**Bullets**
- Use `-` followed by a space for every bullet.
- Merge related points when possible; avoid a bullet for every trivial detail.
- Keep bullets to one line unless breaking for clarity is unavoidable.
- Group into short lists (4-6 bullets) ordered by importance.
- Use consistent keyword phrasing and formatting across sections.
**Monospace**
- Wrap all commands, file paths, env vars, code identifiers, and code samples in backticks (`` `...` ``).
- Apply to inline examples and to bullet keywords if the keyword itself is a literal file/command.
- Never mix monospace and bold markers; choose one based on whether it's a keyword (`**`) or inline code/path (`` ` ``).
**File References**
When referencing files in your response, make sure to include the relevant start line and always follow the below rules:
* Use inline code to make file paths clickable.
* Each reference should have a standalone path, even if it's the same file.
* Accepted: absolute, workspace-relative, a/ or b/ diff prefixes, or bare filename/suffix.
* Line/column (1-based, optional): :line[:column] or #Lline[Ccolumn] (column defaults to 1).
* Do not use URIs like file://, vscode://, or https://.
* Do not provide range of lines
* Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\repo\project\main.rs:12:5
**Structure**
- Place related bullets together; don't mix unrelated concepts in the same section.
- Order sections from general → specific → supporting info.
- For subsections (e.g., “Binaries” under “Rust Workspace”), introduce with a bolded keyword bullet, then list items under it.
- Match structure to complexity:
- Multi-part or detailed results → use clear headers and grouped bullets.
- Simple results → minimal headers, possibly just a short list or paragraph.
**Tone**
- Keep the voice collaborative and natural, like a coding partner handing off work.
- Be concise and factual — no filler or conversational commentary, and avoid unnecessary repetition.
- Use present tense and active voice (e.g., “Runs tests” not “This will run tests”).
- Keep descriptions self-contained; don't refer to “above” or “below”.
- Use parallel structure in lists for consistency.
**Verbosity**
- Final answer compactness rules (enforced):
- Tiny/small single-file change (≤ ~10 lines): 2-5 sentences or ≤3 bullets. No headings. 0-1 short snippet (≤3 lines) only if essential.
- Medium change (single area or a few files): ≤6 bullets or 6-10 sentences. At most 1-2 short snippets total (≤8 lines each).
- Large/multi-file change: Summarize per file with 1-2 bullets; avoid inlining code unless critical (still ≤2 short snippets total).
- Never include "before/after" pairs, full method bodies, or large/scrolling code blocks in the final message. Prefer referencing file/symbol names instead.
**Don't**
- Don't use literal words “bold” or “monospace” in the content.
- Don't nest bullets or create deep hierarchies.
- Don't output ANSI escape codes directly — the CLI renderer applies them.
- Don't cram unrelated keywords into a single bullet; split for clarity.
- Don't let keyword lists run long — wrap or reformat for scanability.
Generally, ensure your final answers adapt their shape and depth to the request. For example, answers to code explanations should have a precise, structured explanation with code references that answer the question directly. For tasks with a simple implementation, lead with the outcome and supplement only with what's needed for clarity. Larger changes can be presented as a logical walkthrough of your approach, grouping related steps, explaining rationale where it adds value, and highlighting next actions to accelerate the user. Your answers should provide the right level of detail while being easily scannable.
For casual greetings, acknowledgements, or other one-off conversational messages that are not delivering substantive information or structured results, respond naturally without section headers or bullet formatting.
# Tool Guidelines
## Shell commands
When using the shell, you must adhere to the following guidelines:
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- Read files in chunks with a max chunk size of 250 lines. Do not use python scripts to attempt to output larger chunks of a file. Command line output will be truncated after 10 kilobytes or 256 lines of output, regardless of the command used.
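For instance, a content search and a filename search might look like:
```
rg "ReasoningContentDelta" --type rust    # search file contents
rg --files | rg "command_safety"          # search file names
```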
## apply_patch
Use the `apply_patch` tool to edit files. Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
*** Begin Patch
[ one or more file sections ]
*** End Patch
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
Example patch:
```
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
```
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
## `update_plan`
A tool named `update_plan` is available to you. You can use it to keep an up-to-date, step-by-step plan for the task.
To create a new plan, call `update_plan` with a short list of 1-sentence steps (no more than 5-7 words each) with a `status` for each step (`pending`, `in_progress`, or `completed`).
When steps have been completed, use `update_plan` to mark each finished step as `completed` and the next step you are working on as `in_progress`. There should always be exactly one `in_progress` step until everything is done. You can mark multiple items as complete in a single `update_plan` call.
If all steps are complete, ensure you call `update_plan` to mark all steps as `completed`.
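A sketch of a typical call (the exact argument envelope is illustrative; only the step text and `status` values are fixed by the contract above):
```
update_plan({
  "plan": [
    { "step": "Reproduce failing test", "status": "completed" },
    { "step": "Fix parser off-by-one", "status": "in_progress" },
    { "step": "Run full test suite", "status": "pending" }
  ]
})
```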

View File

@@ -2,8 +2,6 @@ You are Codex, based on GPT-5. You are running as a coding agent in the Codex CL
## General
- The arguments to `shell` will be passed to execvp(). Most terminal commands should be prefixed with ["bash", "-lc"].
- Always set the `workdir` param when using the shell function. Do not use `cd` unless absolutely necessary.
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
## Editing constraints
@@ -16,6 +14,7 @@ You are Codex, based on GPT-5. You are running as a coding agent in the Codex CL
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- Do not amend a commit unless explicitly requested to do so.
- While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
- **NEVER** use destructive commands like `git reset --hard` or `git checkout --` unless specifically requested or approved by the user.

View File

@@ -26,10 +26,12 @@ use crate::config::Config;
use crate::default_client::CodexHttpClient;
use crate::error::RefreshTokenFailedError;
use crate::error::RefreshTokenFailedReason;
use crate::token_data::PlanType;
use crate::token_data::KnownPlan as InternalKnownPlan;
use crate::token_data::PlanType as InternalPlanType;
use crate::token_data::TokenData;
use crate::token_data::parse_id_token;
use crate::util::try_parse_error_message;
use codex_protocol::account::PlanType as AccountPlanType;
use serde_json::Value;
use thiserror::Error;
@@ -202,7 +204,42 @@ impl CodexAuth {
self.get_current_token_data().and_then(|t| t.id_token.email)
}
pub(crate) fn get_plan_type(&self) -> Option<PlanType> {
/// Account-facing plan classification derived from the current token.
/// Returns a high-level `AccountPlanType` (e.g., Free/Plus/Pro/Team/…)
/// mapped from the ID token's internal plan value. Prefer this when you
/// need to make UI or product decisions based on the user's subscription.
pub fn account_plan_type(&self) -> Option<AccountPlanType> {
let map_known = |kp: &InternalKnownPlan| match kp {
InternalKnownPlan::Free => AccountPlanType::Free,
InternalKnownPlan::Plus => AccountPlanType::Plus,
InternalKnownPlan::Pro => AccountPlanType::Pro,
InternalKnownPlan::Team => AccountPlanType::Team,
InternalKnownPlan::Business => AccountPlanType::Business,
InternalKnownPlan::Enterprise => AccountPlanType::Enterprise,
InternalKnownPlan::Edu => AccountPlanType::Edu,
};
self.get_current_token_data()
.and_then(|t| t.id_token.chatgpt_plan_type)
.map(|pt| match pt {
InternalPlanType::Known(k) => map_known(&k),
InternalPlanType::Unknown(_) => AccountPlanType::Unknown,
})
}
/// Raw plan string from the ID token (including unknown/new plan types).
pub fn raw_plan_type(&self) -> Option<String> {
self.get_plan_type().map(|plan| match plan {
InternalPlanType::Known(k) => format!("{k:?}"),
InternalPlanType::Unknown(raw) => raw,
})
}
/// Raw internal plan value from the ID token.
/// Exposes the underlying `token_data::PlanType` without mapping it to the
/// public `AccountPlanType`. Use this when downstream code needs to inspect
/// internal/unknown plan strings exactly as issued in the token.
pub(crate) fn get_plan_type(&self) -> Option<InternalPlanType> {
self.get_current_token_data()
.and_then(|t| t.id_token.chatgpt_plan_type)
}
@@ -306,7 +343,10 @@ pub fn save_auth(
}
/// Load CLI auth data using the configured credential store backend.
/// Returns `None` when no credentials are stored.
/// Returns `None` when no credentials are stored. This function is
/// provided only for tests. Production code should not directly load
/// from the auth.json storage. It should use the AuthManager abstraction
/// instead.
pub fn load_auth_dot_json(
codex_home: &Path,
auth_credentials_store_mode: AuthCredentialsStoreMode,
@@ -609,8 +649,9 @@ mod tests {
use crate::config::ConfigOverrides;
use crate::config::ConfigToml;
use crate::token_data::IdTokenInfo;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use crate::token_data::KnownPlan as InternalKnownPlan;
use crate::token_data::PlanType as InternalPlanType;
use codex_protocol::account::PlanType as AccountPlanType;
use base64::Engine;
use codex_protocol::config_types::ForcedLoginMethod;
@@ -727,7 +768,7 @@ mod tests {
tokens: Some(TokenData {
id_token: IdTokenInfo {
email: Some("user@example.com".to_string()),
chatgpt_plan_type: Some(PlanType::Known(KnownPlan::Pro)),
chatgpt_plan_type: Some(InternalPlanType::Known(InternalKnownPlan::Pro)),
chatgpt_account_id: None,
raw_jwt: fake_jwt,
},
@@ -981,6 +1022,54 @@ mod tests {
.contains("ChatGPT login is required, but an API key is currently being used.")
);
}
#[test]
fn plan_type_maps_known_plan() {
let codex_home = tempdir().unwrap();
let _jwt = write_auth_file(
AuthFileParams {
openai_api_key: None,
chatgpt_plan_type: "pro".to_string(),
chatgpt_account_id: None,
},
codex_home.path(),
)
.expect("failed to write auth file");
let auth = super::load_auth(codex_home.path(), false, AuthCredentialsStoreMode::File)
.expect("load auth")
.expect("auth available");
pretty_assertions::assert_eq!(auth.account_plan_type(), Some(AccountPlanType::Pro));
pretty_assertions::assert_eq!(
auth.get_plan_type(),
Some(InternalPlanType::Known(InternalKnownPlan::Pro))
);
}
#[test]
fn plan_type_maps_unknown_to_unknown() {
let codex_home = tempdir().unwrap();
let _jwt = write_auth_file(
AuthFileParams {
openai_api_key: None,
chatgpt_plan_type: "mystery-tier".to_string(),
chatgpt_account_id: None,
},
codex_home.path(),
)
.expect("failed to write auth file");
let auth = super::load_auth(codex_home.path(), false, AuthCredentialsStoreMode::File)
.expect("load auth")
.expect("auth available");
pretty_assertions::assert_eq!(auth.account_plan_type(), Some(AccountPlanType::Unknown));
pretty_assertions::assert_eq!(
auth.get_plan_type(),
Some(InternalPlanType::Unknown("mystery-tier".to_string()))
);
}
}
/// Central manager providing a single source of truth for auth.json derived
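As a usage sketch (not part of this diff), downstream UI code might branch on the new account-facing classification like so, assuming only the `AccountPlanType` variants named above:
```rust
use codex_protocol::account::PlanType as AccountPlanType;

// Hypothetical display helper: map the account-facing plan to a label.
// Unknown/new plan strings remain inspectable via raw_plan_type().
fn plan_label(plan: Option<AccountPlanType>) -> &'static str {
    match plan {
        Some(AccountPlanType::Free) => "Free",
        Some(AccountPlanType::Plus) => "Plus",
        Some(AccountPlanType::Pro) => "Pro",
        Some(AccountPlanType::Team) => "Team",
        Some(AccountPlanType::Business) => "Business",
        Some(AccountPlanType::Enterprise) => "Enterprise",
        Some(AccountPlanType::Edu) => "Edu",
        Some(AccountPlanType::Unknown) | None => "Unknown",
    }
}
```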

View File

@@ -1,8 +1,13 @@
use std::path::PathBuf;
use tree_sitter::Node;
use tree_sitter::Parser;
use tree_sitter::Tree;
use tree_sitter_bash::LANGUAGE as BASH;
use crate::shell::ShellType;
use crate::shell::detect_shell_type;
/// Parse the provided bash source using tree-sitter-bash, returning a Tree on
/// success or None if parsing failed.
pub fn try_parse_shell(shell_lc_arg: &str) -> Option<Tree> {
@@ -88,17 +93,26 @@ pub fn try_parse_word_only_commands_sequence(tree: &Tree, src: &str) -> Option<V
Some(commands)
}
pub fn extract_bash_command(command: &[String]) -> Option<(&str, &str)> {
let [shell, flag, script] = command else {
return None;
};
if !matches!(flag.as_str(), "-lc" | "-c")
|| !matches!(
detect_shell_type(&PathBuf::from(shell)),
Some(ShellType::Zsh) | Some(ShellType::Bash)
)
{
return None;
}
Some((shell, script))
}
/// Returns the sequence of plain commands within a `bash -lc "..."` or
/// `zsh -lc "..."` invocation when the script only contains word-only commands
/// joined by safe operators.
pub fn parse_shell_lc_plain_commands(command: &[String]) -> Option<Vec<Vec<String>>> {
let [shell, flag, script] = command else {
return None;
};
if flag != "-lc" || !(shell == "bash" || shell == "zsh") {
return None;
}
let (_, script) = extract_bash_command(command)?;
let tree = try_parse_shell(script)?;
try_parse_word_only_commands_sequence(&tree, script)
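A quick sketch of how the split helpers compose (crate-internal; assumes the imports shown in this diff):
```rust
// extract_bash_command peels (shell, script) off the argv triple and now
// accepts zsh as well as bash; parse_shell_lc_plain_commands reuses it.
fn demo() {
    let argv = vec![
        "bash".to_string(),
        "-lc".to_string(),
        "ls && cat Cargo.toml".to_string(),
    ];
    if let Some(commands) = parse_shell_lc_plain_commands(&argv) {
        // Each inner Vec is one plain, word-only command from the script.
        assert_eq!(commands[0], vec!["ls".to_string()]);
        assert_eq!(commands[1], vec!["cat".to_string(), "Cargo.toml".to_string()]);
    }
}
```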

View File

@@ -81,6 +81,7 @@ pub(crate) async fn stream_chat_completions(
ResponseItem::CustomToolCallOutput { .. } => {}
ResponseItem::WebSearchCall { .. } => {}
ResponseItem::GhostSnapshot { .. } => {}
ResponseItem::CompactionSummary { .. } => {}
}
}
@@ -320,7 +321,8 @@ pub(crate) async fn stream_chat_completions(
}
ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::Other => {
| ResponseItem::Other
| ResponseItem::CompactionSummary { .. } => {
// Omit these items from the conversation history.
continue;
}
@@ -338,7 +340,7 @@ pub(crate) async fn stream_chat_completions(
debug!(
"POST to {}: {}",
provider.get_full_url(&None),
serde_json::to_string_pretty(&payload).unwrap_or_default()
payload.to_string()
);
let mut attempt = 0;
@@ -477,10 +479,14 @@ async fn append_reasoning_text(
..
}) = reasoning_item
{
let content_index = content.len() as i64;
content.push(ReasoningItemContent::ReasoningText { text: text.clone() });
let _ = tx_event
.send(Ok(ResponseEvent::ReasoningContentDelta(text.clone())))
.send(Ok(ResponseEvent::ReasoningContentDelta {
delta: text.clone(),
content_index,
}))
.await;
}
}
@@ -669,7 +675,9 @@ async fn process_chat_sse<S>(
}
// Emit end-of-turn when finish_reason signals completion.
if let Some(finish_reason) = choice.get("finish_reason").and_then(|v| v.as_str()) {
if let Some(finish_reason) = choice.get("finish_reason").and_then(|v| v.as_str())
&& !finish_reason.is_empty()
{
match finish_reason {
"tool_calls" if fn_call_state.active => {
// First, flush the terminal raw reasoning so UIs can finalize
@@ -898,20 +906,26 @@ where
continue;
}
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta(delta)))) => {
Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta {
delta,
content_index,
}))) => {
// Always accumulate reasoning deltas so we can emit a final Reasoning item at Completed.
this.cumulative_reasoning.push_str(&delta);
if matches!(this.mode, AggregateMode::Streaming) {
// In streaming mode, also forward the delta immediately.
return Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta(delta))));
return Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta {
delta,
content_index,
})));
} else {
continue;
}
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryDelta(_)))) => {
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryDelta { .. }))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded))) => {
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded { .. }))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::OutputItemAdded(item)))) => {

View File

@@ -26,6 +26,7 @@ use tokio::sync::mpsc;
use tokio::time::timeout;
use tokio_util::io::ReaderStream;
use tracing::debug;
use tracing::enabled;
use tracing::trace;
use tracing::warn;
@@ -35,10 +36,10 @@ use crate::auth::RefreshTokenError;
use crate::chat_completions::AggregateStreamExt;
use crate::chat_completions::stream_chat_completions;
use crate::client_common::Prompt;
use crate::client_common::Reasoning;
use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::client_common::ResponsesApiRequest;
use crate::client_common::create_reasoning_param_for_request;
use crate::client_common::create_text_param_for_request;
use crate::config::Config;
use crate::default_client::CodexHttpClient;
@@ -78,6 +79,18 @@ struct Error {
resets_at: Option<i64>,
}
#[derive(Debug, Serialize)]
struct CompactHistoryRequest<'a> {
model: &'a str,
input: &'a [ResponseItem],
instructions: &'a str,
}
#[derive(Debug, Deserialize)]
struct CompactHistoryResponse {
output: Vec<ResponseItem>,
}
#[derive(Debug, Clone)]
pub struct ModelClient {
config: Arc<Config>,
@@ -199,12 +212,18 @@ impl ModelClient {
let auth_manager = self.auth_manager.clone();
let full_instructions = prompt.get_full_instructions(&self.config.model_family);
let tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
let reasoning = create_reasoning_param_for_request(
&self.config.model_family,
self.effort,
self.summary,
);
let tools_json: Vec<Value> = create_tools_json_for_responses_api(&prompt.tools)?;
let reasoning = if self.config.model_family.supports_reasoning_summaries {
Some(Reasoning {
effort: self
.effort
.or(self.config.model_family.default_reasoning_effort),
summary: Some(self.summary),
})
} else {
None
};
let include: Vec<String> = if reasoning.is_some() {
vec!["reasoning.encrypted_content".to_string()]
@@ -215,7 +234,9 @@ impl ModelClient {
let input_with_instructions = prompt.get_formatted_input();
let verbosity = if self.config.model_family.support_verbosity {
self.config.model_verbosity
self.config
.model_verbosity
.or(self.config.model_family.default_verbosity)
} else {
if self.config.model_verbosity.is_some() {
warn!(
@@ -294,10 +315,9 @@ impl ModelClient {
let auth = auth_manager.as_ref().and_then(|m| m.auth());
trace!(
"POST to {}: {:?}",
"POST to {}: {}",
self.provider.get_full_url(&auth),
serde_json::to_string(payload_json)
.unwrap_or("<unable to serialize payload>".to_string())
payload_json.to_string()
);
let mut req_builder = self
@@ -447,6 +467,8 @@ impl ModelClient {
return Err(StreamAttemptError::Fatal(codex_err));
} else if error.r#type.as_deref() == Some("usage_not_included") {
return Err(StreamAttemptError::Fatal(CodexErr::UsageNotIncluded));
} else if is_quota_exceeded_error(&error) {
return Err(StreamAttemptError::Fatal(CodexErr::QuotaExceeded));
}
}
}
@@ -498,6 +520,70 @@ impl ModelClient {
pub fn get_auth_manager(&self) -> Option<Arc<AuthManager>> {
self.auth_manager.clone()
}
pub async fn compact_conversation_history(&self, prompt: &Prompt) -> Result<Vec<ResponseItem>> {
if prompt.input.is_empty() {
return Ok(Vec::new());
}
let auth_manager = self.auth_manager.clone();
let auth = auth_manager.as_ref().and_then(|m| m.auth());
let mut req_builder = self
.provider
.create_compact_request_builder(&self.client, &auth)
.await?;
if let SessionSource::SubAgent(sub) = &self.session_source {
let subagent = if let crate::protocol::SubAgentSource::Other(label) = sub {
label.clone()
} else {
serde_json::to_value(sub)
.ok()
.and_then(|v| v.as_str().map(std::string::ToString::to_string))
.unwrap_or_else(|| "other".to_string())
};
req_builder = req_builder.header("x-openai-subagent", subagent);
}
if let Some(auth) = auth.as_ref()
&& auth.mode == AuthMode::ChatGPT
&& let Some(account_id) = auth.get_account_id()
{
req_builder = req_builder.header("chatgpt-account-id", account_id);
}
let payload = CompactHistoryRequest {
model: &self.config.model,
input: &prompt.input,
instructions: &prompt.get_full_instructions(&self.config.model_family),
};
if enabled!(tracing::Level::TRACE) {
trace!(
"POST to {}: {}",
self.provider
.get_compact_url(&auth)
.unwrap_or("<none>".to_string()),
serde_json::to_value(&payload).unwrap_or_default()
);
}
let response = req_builder
.json(&payload)
.send()
.await
.map_err(|source| CodexErr::ConnectionFailed(ConnectionFailedError { source }))?;
let status = response.status();
let body = response
.text()
.await
.map_err(|source| CodexErr::ConnectionFailed(ConnectionFailedError { source }))?;
if !status.is_success() {
return Err(CodexErr::UnexpectedStatus(UnexpectedResponseError {
status,
body,
request_id: None,
}));
}
let CompactHistoryResponse { output } = serde_json::from_str(&body)?;
Ok(output)
}
}
enum StreamAttemptError {
@@ -551,6 +637,8 @@ struct SseEvent {
response: Option<Value>,
item: Option<Value>,
delta: Option<String>,
summary_index: Option<i64>,
content_index: Option<i64>,
}
#[derive(Debug, Deserialize)]
@@ -810,16 +898,22 @@ async fn process_sse<S>(
}
}
"response.reasoning_summary_text.delta" => {
if let Some(delta) = event.delta {
let event = ResponseEvent::ReasoningSummaryDelta(delta);
if let (Some(delta), Some(summary_index)) = (event.delta, event.summary_index) {
let event = ResponseEvent::ReasoningSummaryDelta {
delta,
summary_index,
};
if tx_event.send(Ok(event)).await.is_err() {
return;
}
}
}
"response.reasoning_text.delta" => {
if let Some(delta) = event.delta {
let event = ResponseEvent::ReasoningContentDelta(delta);
if let (Some(delta), Some(content_index)) = (event.delta, event.content_index) {
let event = ResponseEvent::ReasoningContentDelta {
delta,
content_index,
};
if tx_event.send(Ok(event)).await.is_err() {
return;
}
@@ -844,6 +938,8 @@ async fn process_sse<S>(
Ok(error) => {
if is_context_window_error(&error) {
response_error = Some(CodexErr::ContextWindowExceeded);
} else if is_quota_exceeded_error(&error) {
response_error = Some(CodexErr::QuotaExceeded);
} else {
let delay = try_parse_retry_after(&error);
let message = error.message.clone().unwrap_or_default();
@@ -894,10 +990,12 @@ async fn process_sse<S>(
}
}
"response.reasoning_summary_part.added" => {
// Boundary between reasoning summary sections (e.g., titles).
let event = ResponseEvent::ReasoningSummaryPartAdded;
if tx_event.send(Ok(event)).await.is_err() {
return;
if let Some(summary_index) = event.summary_index {
// Boundary between reasoning summary sections (e.g., titles).
let event = ResponseEvent::ReasoningSummaryPartAdded { summary_index };
if tx_event.send(Ok(event)).await.is_err() {
return;
}
}
}
"response.reasoning_summary_text.done" => {}
@@ -975,6 +1073,10 @@ fn is_context_window_error(error: &Error) -> bool {
error.code.as_deref() == Some("context_length_exceeded")
}
fn is_quota_exceeded_error(error: &Error) -> bool {
error.code.as_deref() == Some("insufficient_quota")
}
#[cfg(test)]
mod tests {
use super::*;
@@ -1200,7 +1302,7 @@ mod tests {
#[tokio::test]
async fn error_when_error_event() {
let raw_error = r#"{"type":"response.failed","sequence_number":3,"response":{"id":"resp_689bcf18d7f08194bf3440ba62fe05d803fee0cdac429894","object":"response","created_at":1755041560,"status":"failed","background":false,"error":{"code":"rate_limit_exceeded","message":"Rate limit reached for gpt-5 in organization org-AAA on tokens per min (TPM): Limit 30000, Used 22999, Requested 12528. Please try again in 11.054s. Visit https://platform.openai.com/account/rate-limits to learn more."}, "usage":null,"user":null,"metadata":{}}}"#;
let raw_error = r#"{"type":"response.failed","sequence_number":3,"response":{"id":"resp_689bcf18d7f08194bf3440ba62fe05d803fee0cdac429894","object":"response","created_at":1755041560,"status":"failed","background":false,"error":{"code":"rate_limit_exceeded","message":"Rate limit reached for gpt-5.1 in organization org-AAA on tokens per min (TPM): Limit 30000, Used 22999, Requested 12528. Please try again in 11.054s. Visit https://platform.openai.com/account/rate-limits to learn more."}, "usage":null,"user":null,"metadata":{}}}"#;
let sse1 = format!("event: response.failed\ndata: {raw_error}\n\n");
let provider = ModelProviderInfo {
@@ -1229,7 +1331,7 @@ mod tests {
Err(CodexErr::Stream(msg, delay)) => {
assert_eq!(
msg,
"Rate limit reached for gpt-5 in organization org-AAA on tokens per min (TPM): Limit 30000, Used 22999, Requested 12528. Please try again in 11.054s. Visit https://platform.openai.com/account/rate-limits to learn more."
"Rate limit reached for gpt-5.1 in organization org-AAA on tokens per min (TPM): Limit 30000, Used 22999, Requested 12528. Please try again in 11.054s. Visit https://platform.openai.com/account/rate-limits to learn more."
);
assert_eq!(*delay, Some(Duration::from_secs_f64(11.054)));
}
@@ -1307,6 +1409,41 @@ mod tests {
}
}
#[tokio::test]
async fn quota_exceeded_error_is_fatal() {
let raw_error = r#"{"type":"response.failed","sequence_number":3,"response":{"id":"resp_fatal_quota","object":"response","created_at":1759771626,"status":"failed","background":false,"error":{"code":"insufficient_quota","message":"You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors."},"incomplete_details":null}}"#;
let sse1 = format!("event: response.failed\ndata: {raw_error}\n\n");
let provider = ModelProviderInfo {
name: "test".to_string(),
base_url: Some("https://test.com".to_string()),
env_key: Some("TEST_API_KEY".to_string()),
env_key_instructions: None,
experimental_bearer_token: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(1000),
requires_openai_auth: false,
};
let otel_event_manager = otel_event_manager();
let events = collect_events(&[sse1.as_bytes()], provider, otel_event_manager).await;
assert_eq!(events.len(), 1);
match &events[0] {
Err(err @ CodexErr::QuotaExceeded) => {
assert_eq!(err.to_string(), CodexErr::QuotaExceeded.to_string());
}
other => panic!("unexpected quota exceeded event: {other:?}"),
}
}
// ────────────────────────────
// Table-driven test from `main`
// ────────────────────────────
@@ -1413,7 +1550,7 @@ mod tests {
fn test_try_parse_retry_after() {
let err = Error {
r#type: None,
message: Some("Rate limit reached for gpt-5 in organization org- on tokens per min (TPM): Limit 1, Used 1, Requested 19304. Please try again in 28ms. Visit https://platform.openai.com/account/rate-limits to learn more.".to_string()),
message: Some("Rate limit reached for gpt-5.1 in organization org- on tokens per min (TPM): Limit 1, Used 1, Requested 19304. Please try again in 28ms. Visit https://platform.openai.com/account/rate-limits to learn more.".to_string()),
code: Some("rate_limit_exceeded".to_string()),
plan_type: None,
resets_at: None
@@ -1427,7 +1564,7 @@ mod tests {
fn test_try_parse_retry_after_no_delay() {
let err = Error {
r#type: None,
message: Some("Rate limit reached for gpt-5 in organization <ORG> on tokens per min (TPM): Limit 30000, Used 6899, Requested 24050. Please try again in 1.898s. Visit https://platform.openai.com/account/rate-limits to learn more.".to_string()),
message: Some("Rate limit reached for gpt-5.1 in organization <ORG> on tokens per min (TPM): Limit 30000, Used 6899, Requested 24050. Please try again in 1.898s. Visit https://platform.openai.com/account/rate-limits to learn more.".to_string()),
code: Some("rate_limit_exceeded".to_string()),
plan_type: None,
resets_at: None
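A rough sketch of calling the new compaction endpoint (the `Prompt` fields mirror the literal used elsewhere in this diff; error type abbreviated):
```rust
// Sends CompactHistoryRequest { model, input, instructions } and returns the
// replacement history from CompactHistoryResponse.output.
async fn compact(client: &ModelClient, history: Vec<ResponseItem>) -> Result<Vec<ResponseItem>> {
    let prompt = Prompt {
        input: history,                    // prior conversation to compact
        tools: vec![],                     // compaction uses no tools
        parallel_tool_calls: false,
        base_instructions_override: None,
        output_schema: None,
    };
    client.compact_conversation_history(&prompt).await
}
```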

View File

@@ -136,7 +136,7 @@ fn reserialize_shell_outputs(items: &mut [ResponseItem]) {
}
fn is_shell_tool_name(name: &str) -> bool {
matches!(name, "shell" | "container.exec")
matches!(name, "shell" | "container.exec" | "shell_command")
}
#[derive(Deserialize)]
@@ -203,9 +203,17 @@ pub enum ResponseEvent {
token_usage: Option<TokenUsage>,
},
OutputTextDelta(String),
ReasoningSummaryDelta(String),
ReasoningContentDelta(String),
ReasoningSummaryPartAdded,
ReasoningSummaryDelta {
delta: String,
summary_index: i64,
},
ReasoningContentDelta {
delta: String,
content_index: i64,
},
ReasoningSummaryPartAdded {
summary_index: i64,
},
RateLimits(RateLimitSnapshot),
}
@@ -342,21 +350,6 @@ pub(crate) mod tools {
}
}
pub(crate) fn create_reasoning_param_for_request(
model_family: &ModelFamily,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
) -> Option<Reasoning> {
if !model_family.supports_reasoning_summaries {
return None;
}
Some(Reasoning {
effort,
summary: Some(summary),
})
}
pub(crate) fn create_text_param_for_request(
verbosity: Option<VerbosityConfig>,
output_schema: &Option<Value>,
@@ -421,6 +414,10 @@ mod tests {
slug: "gpt-5",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-5.1",
expects_apply_patch_instructions: false,
},
InstructionsTestCase {
slug: "codex-mini-latest",
expects_apply_patch_instructions: true,
@@ -430,7 +427,11 @@ mod tests {
expects_apply_patch_instructions: false,
},
InstructionsTestCase {
slug: "gpt-5-codex",
slug: "gpt-5.1-codex",
expects_apply_patch_instructions: false,
},
InstructionsTestCase {
slug: "gpt-5.1-codex",
expects_apply_patch_instructions: false,
},
];
@@ -456,7 +457,7 @@ mod tests {
let input: Vec<ResponseItem> = vec![];
let tools: Vec<serde_json::Value> = vec![];
let req = ResponsesApiRequest {
model: "gpt-5",
model: "gpt-5.1",
instructions: "i",
input: &input,
tools: &tools,
@@ -497,7 +498,7 @@ mod tests {
create_text_param_for_request(None, &Some(schema.clone())).expect("text controls");
let req = ResponsesApiRequest {
model: "gpt-5",
model: "gpt-5.1",
instructions: "i",
input: &input,
tools: &tools,
@@ -533,7 +534,7 @@ mod tests {
let input: Vec<ResponseItem> = vec![];
let tools: Vec<serde_json::Value> = vec![];
let req = ResponsesApiRequest {
model: "gpt-5",
model: "gpt-5.1",
instructions: "i",
input: &input,
tools: &tools,
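For consumers, the move from tuple variants to struct variants looks roughly like this (`ui` is a hypothetical renderer; only the variant shapes come from this diff):
```rust
// The new indices let a renderer route each delta to the right section.
match event {
    ResponseEvent::ReasoningSummaryDelta { delta, summary_index } => {
        ui.append_summary(summary_index, &delta); // `ui` is hypothetical
    }
    ResponseEvent::ReasoningContentDelta { delta, content_index } => {
        ui.append_reasoning(content_index, &delta);
    }
    ResponseEvent::ReasoningSummaryPartAdded { summary_index } => {
        ui.begin_summary_section(summary_index);
    }
    _ => {}
}
```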

File diff suppressed because it is too large

View File

@@ -1,4 +1,39 @@
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use crate::sandboxing::SandboxPermissions;
use crate::bash::parse_shell_lc_plain_commands;
use crate::is_safe_command::is_known_safe_command;
pub fn requires_initial_appoval(
policy: AskForApproval,
sandbox_policy: &SandboxPolicy,
command: &[String],
sandbox_permissions: SandboxPermissions,
) -> bool {
if is_known_safe_command(command) {
return false;
}
match policy {
AskForApproval::Never | AskForApproval::OnFailure => false,
AskForApproval::OnRequest => {
// In DangerFullAccess, only prompt if the command looks dangerous.
if matches!(sandbox_policy, SandboxPolicy::DangerFullAccess) {
return command_might_be_dangerous(command);
}
// In restricted sandboxes (ReadOnly/WorkspaceWrite), do not prompt for
// non-escalated, non-dangerous commands — let the sandbox enforce
// restrictions (e.g., block network/write) without a user prompt.
if sandbox_permissions.requires_escalated_permissions() {
return true;
}
command_might_be_dangerous(command)
}
AskForApproval::UnlessTrusted => !is_known_safe_command(command),
}
}
pub fn command_might_be_dangerous(command: &[String]) -> bool {
if is_dangerous_to_call_with_exec(command) {
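Behaviorally, the gate should come out roughly like this sketch (assuming `git reset --hard` is classified dangerous and that `SandboxPermissions` has a non-escalated default; the constructor is hypothetical):
```rust
let ls = vec!["ls".to_string()];
let reset = vec!["git".to_string(), "reset".to_string(), "--hard".to_string()];

// Known-safe commands never prompt, even in DangerFullAccess.
assert!(!requires_initial_appoval(
    AskForApproval::OnRequest,
    &SandboxPolicy::DangerFullAccess,
    &ls,
    SandboxPermissions::default(), // hypothetical non-escalated value
));
// Dangerous-looking commands prompt under OnRequest + DangerFullAccess.
assert!(requires_initial_appoval(
    AskForApproval::OnRequest,
    &SandboxPolicy::DangerFullAccess,
    &reset,
    SandboxPermissions::default(), // hypothetical non-escalated value
));
```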

View File

@@ -1,4 +1,5 @@
use crate::bash::parse_shell_lc_plain_commands;
use crate::command_safety::windows_safe_commands::is_safe_command_windows;
pub fn is_known_safe_command(command: &[String]) -> bool {
let command: Vec<String> = command
@@ -11,12 +12,9 @@ pub fn is_known_safe_command(command: &[String]) -> bool {
}
})
.collect();
#[cfg(target_os = "windows")]
{
use super::windows_safe_commands::is_safe_command_windows;
if is_safe_command_windows(&command) {
return true;
}
if is_safe_command_windows(&command) {
return true;
}
if is_safe_to_call_with_exec(&command) {

View File

@@ -1,4 +1,3 @@
pub mod is_dangerous_command;
pub mod is_safe_command;
#[cfg(target_os = "windows")]
pub mod windows_safe_commands;

View File

@@ -1,10 +1,10 @@
use std::sync::Arc;
use super::Session;
use super::TurnContext;
use super::get_last_assistant_message_from_turn;
use crate::Prompt;
use crate::client_common::ResponseEvent;
use crate::codex::Session;
use crate::codex::TurnContext;
use crate::codex::get_last_assistant_message_from_turn;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::protocol::AgentMessageEvent;
@@ -14,7 +14,9 @@ use crate::protocol::EventMsg;
use crate::protocol::TaskStartedEvent;
use crate::protocol::TurnContextItem;
use crate::protocol::WarningEvent;
use crate::truncate::truncate_middle;
use crate::truncate::TruncationPolicy;
use crate::truncate::approx_token_count;
use crate::truncate::truncate_text;
use crate::util::backoff;
use codex_protocol::items::TurnItem;
use codex_protocol::models::ContentItem;
@@ -25,7 +27,8 @@ use codex_protocol::user_input::UserInput;
use futures::prelude::*;
use tracing::error;
pub const SUMMARIZATION_PROMPT: &str = include_str!("../../templates/compact/prompt.md");
pub const SUMMARIZATION_PROMPT: &str = include_str!("../templates/compact/prompt.md");
pub const SUMMARY_PREFIX: &str = include_str!("../templates/compact/summary_prefix.md");
const COMPACT_USER_MESSAGE_MAX_TOKENS: usize = 20_000;
pub(crate) async fn run_inline_auto_compact_task(
@@ -58,7 +61,10 @@ async fn run_compact_task_inner(
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let mut history = sess.clone_history().await;
history.record_items(&[initial_input_for_turn.into()]);
history.record_items(
&[initial_input_for_turn.into()],
turn_context.truncation_policy,
);
let mut truncated_count = 0usize;
@@ -140,7 +146,9 @@ async fn run_compact_task_inner(
}
let history_snapshot = sess.clone_history().await.get_history();
let summary_text = get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let summary_suffix =
get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let summary_text = format!("{SUMMARY_PREFIX}\n{summary_suffix}");
let user_messages = collect_user_messages(&history_snapshot);
let initial_context = sess.build_initial_context(turn_context.as_ref());
@@ -153,8 +161,18 @@ async fn run_compact_task_inner(
new_history.extend(ghost_snapshots);
sess.replace_history(new_history).await;
if let Some(estimated_tokens) = sess
.clone_history()
.await
.estimate_token_count(&turn_context)
{
sess.override_last_token_usage_estimate(&turn_context, estimated_tokens)
.await;
}
let rollout_item = RolloutItem::Compacted(CompactedItem {
message: summary_text.clone(),
replacement_history: None,
});
sess.persist_rollout_items(&[rollout_item]).await;
@@ -164,7 +182,7 @@ async fn run_compact_task_inner(
sess.send_event(&turn_context, event).await;
let warning = EventMsg::Warning(WarningEvent {
message: "Heads up: Long conversations and multiple compactions can cause the model to be less accurate. Start new a new conversation when possible to keep conversations small and targeted.".to_string(),
message: "Heads up: Long conversations and multiple compactions can cause the model to be less accurate. Start a new conversation when possible to keep conversations small and targeted.".to_string(),
});
sess.send_event(&turn_context, warning).await;
}
@@ -192,12 +210,22 @@ pub(crate) fn collect_user_messages(items: &[ResponseItem]) -> Vec<String> {
items
.iter()
.filter_map(|item| match crate::event_mapping::parse_turn_item(item) {
Some(TurnItem::UserMessage(user)) => Some(user.message()),
Some(TurnItem::UserMessage(user)) => {
if is_summary_message(&user.message()) {
None
} else {
Some(user.message())
}
}
_ => None,
})
.collect()
}
pub(crate) fn is_summary_message(message: &str) -> bool {
message.starts_with(format!("{SUMMARY_PREFIX}\n").as_str())
}
pub(crate) fn build_compacted_history(
initial_context: Vec<ResponseItem>,
user_messages: &[String],
@@ -207,7 +235,7 @@ pub(crate) fn build_compacted_history(
initial_context,
user_messages,
summary_text,
COMPACT_USER_MESSAGE_MAX_TOKENS * 4,
COMPACT_USER_MESSAGE_MAX_TOKENS,
)
}
@@ -215,20 +243,21 @@ fn build_compacted_history_with_limit(
mut history: Vec<ResponseItem>,
user_messages: &[String],
summary_text: &str,
max_bytes: usize,
max_tokens: usize,
) -> Vec<ResponseItem> {
let mut selected_messages: Vec<String> = Vec::new();
if max_bytes > 0 {
let mut remaining = max_bytes;
if max_tokens > 0 {
let mut remaining = max_tokens;
for message in user_messages.iter().rev() {
if remaining == 0 {
break;
}
if message.len() <= remaining {
let tokens = approx_token_count(message);
if tokens <= remaining {
selected_messages.push(message.clone());
remaining = remaining.saturating_sub(message.len());
remaining = remaining.saturating_sub(tokens);
} else {
let (truncated, _) = truncate_middle(message, remaining);
let truncated = truncate_text(message, TruncationPolicy::Tokens(remaining));
selected_messages.push(truncated);
break;
}
@@ -277,7 +306,8 @@ async fn drain_to_completed(
};
match event {
Ok(ResponseEvent::OutputItemDone(item)) => {
sess.record_into_history(std::slice::from_ref(&item)).await;
sess.record_into_history(std::slice::from_ref(&item), turn_context)
.await;
}
Ok(ResponseEvent::RateLimits(snapshot)) => {
sess.update_rate_limits(turn_context, snapshot).await;
@@ -295,6 +325,7 @@ async fn drain_to_completed(
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
@@ -386,16 +417,16 @@ mod tests {
}
#[test]
fn build_compacted_history_truncates_overlong_user_messages() {
fn build_token_limited_compacted_history_truncates_overlong_user_messages() {
// Use a small truncation limit so the test remains fast while still validating
// that oversized user content is truncated.
let max_bytes = 128;
let big = "X".repeat(max_bytes + 50);
let max_tokens = 16;
let big = "word ".repeat(200);
let history = super::build_compacted_history_with_limit(
Vec::new(),
std::slice::from_ref(&big),
"SUMMARY",
max_bytes,
max_tokens,
);
assert_eq!(history.len(), 2);
@@ -428,7 +459,7 @@ mod tests {
}
#[test]
fn build_compacted_history_appends_summary_message() {
fn build_token_limited_compacted_history_appends_summary_message() {
let initial_context: Vec<ResponseItem> = Vec::new();
let user_messages = vec!["first user message".to_string()];
let summary_text = "summary text";
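The byte-to-token budget switch reduces to this selection loop (standalone sketch; the real code uses `approx_token_count` and `truncate_text(message, TruncationPolicy::Tokens(remaining))`, stubbed here with a ~4-chars-per-token heuristic):
```rust
// Walk user messages newest-first: keep whole messages while they fit the
// token budget, truncate the first overflowing one, then stop.
fn select_recent_messages(user_messages: &[String], max_tokens: usize) -> Vec<String> {
    let approx_tokens = |s: &str| s.len().div_ceil(4); // crude stand-in
    let mut selected: Vec<String> = Vec::new();
    let mut remaining = max_tokens;
    for message in user_messages.iter().rev() {
        if remaining == 0 {
            break;
        }
        let tokens = approx_tokens(message);
        if tokens <= remaining {
            selected.push(message.clone());
            remaining = remaining.saturating_sub(tokens);
        } else {
            // Real code: truncate_text(message, TruncationPolicy::Tokens(remaining))
            selected.push(message.chars().take(remaining * 4).collect());
            break;
        }
    }
    selected
}
```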

View File

@@ -0,0 +1,100 @@
use std::sync::Arc;
use crate::Prompt;
use crate::codex::Session;
use crate::codex::TurnContext;
use crate::error::Result as CodexResult;
use crate::protocol::AgentMessageEvent;
use crate::protocol::CompactedItem;
use crate::protocol::ErrorEvent;
use crate::protocol::EventMsg;
use crate::protocol::RolloutItem;
use crate::protocol::TaskStartedEvent;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::user_input::UserInput;
pub(crate) async fn run_remote_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
input: Vec<UserInput>,
) -> Option<String> {
let start_event = EventMsg::TaskStarted(TaskStartedEvent {
model_context_window: turn_context.client.get_model_context_window(),
});
sess.send_event(&turn_context, start_event).await;
match run_remote_compact_task_inner(&sess, &turn_context, input).await {
Ok(()) => {
let event = EventMsg::AgentMessage(AgentMessageEvent {
message: "Compact task completed".to_string(),
});
sess.send_event(&turn_context, event).await;
}
Err(err) => {
let event = EventMsg::Error(ErrorEvent {
message: err.to_string(),
});
sess.send_event(&turn_context, event).await;
}
}
None
}
async fn run_remote_compact_task_inner(
sess: &Arc<Session>,
turn_context: &Arc<TurnContext>,
input: Vec<UserInput>,
) -> CodexResult<()> {
let mut history = sess.clone_history().await;
if !input.is_empty() {
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
history.record_items(
&[initial_input_for_turn.into()],
turn_context.truncation_policy,
);
}
let prompt = Prompt {
input: history.get_history_for_prompt(),
tools: vec![],
parallel_tool_calls: false,
base_instructions_override: turn_context.base_instructions.clone(),
output_schema: None,
};
let mut new_history = turn_context
.client
.compact_conversation_history(&prompt)
.await?;
// Required to keep `/undo` available after compaction
let ghost_snapshots: Vec<ResponseItem> = history
.get_history()
.iter()
.filter(|item| matches!(item, ResponseItem::GhostSnapshot { .. }))
.cloned()
.collect();
if !ghost_snapshots.is_empty() {
new_history.extend(ghost_snapshots);
}
sess.replace_history(new_history.clone()).await;
if let Some(estimated_tokens) = sess
.clone_history()
.await
.estimate_token_count(turn_context.as_ref())
{
sess.override_last_token_usage_estimate(turn_context.as_ref(), estimated_tokens)
.await;
}
let compacted_item = CompactedItem {
message: String::new(),
replacement_history: Some(new_history),
};
sess.persist_rollout_items(&[RolloutItem::Compacted(compacted_item)])
.await;
Ok(())
}

View File

@@ -3,6 +3,8 @@ use crate::config::types::McpServerConfig;
use crate::config::types::Notice;
use anyhow::Context;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::TrustLevel;
use codex_utils_tokenizer::warm_model_cache;
use std::collections::BTreeMap;
use std::path::Path;
use std::path::PathBuf;
@@ -23,13 +25,19 @@ pub enum ConfigEdit {
},
/// Toggle the acknowledgement flag under `[notice]`.
SetNoticeHideFullAccessWarning(bool),
/// Toggle the Windows world-writable directories warning acknowledgement flag.
SetNoticeHideWorldWritableWarning(bool),
/// Toggle the rate limit model nudge acknowledgement flag.
SetNoticeHideRateLimitModelNudge(bool),
/// Toggle the Windows onboarding acknowledgement flag.
SetWindowsWslSetupAcknowledged(bool),
/// Toggle the model migration prompt acknowledgement flag.
SetNoticeHideModelMigrationPrompt(String, bool),
/// Replace the entire `[mcp_servers]` table.
ReplaceMcpServers(BTreeMap<String, McpServerConfig>),
/// Set trust_level = "trusted" under `[projects."<path>"]`,
/// Set trust_level under `[projects."<path>"]`,
/// migrating inline tables to explicit tables.
SetProjectTrusted(PathBuf),
SetProjectTrustLevel { path: PathBuf, level: TrustLevel },
/// Set the value stored at the exact dotted path.
SetPath {
segments: Vec<String>,
@@ -223,6 +231,9 @@ impl ConfigDocument {
fn apply(&mut self, edit: &ConfigEdit) -> anyhow::Result<bool> {
match edit {
ConfigEdit::SetModel { model, effort } => Ok({
if let Some(model) = &model {
warm_model_cache(model)
}
let mut mutated = false;
mutated |= self.write_profile_value(
&["model"],
@@ -239,6 +250,23 @@ impl ConfigDocument {
&[Notice::TABLE_KEY, "hide_full_access_warning"],
value(*acknowledged),
)),
ConfigEdit::SetNoticeHideWorldWritableWarning(acknowledged) => Ok(self.write_value(
Scope::Global,
&[Notice::TABLE_KEY, "hide_world_writable_warning"],
value(*acknowledged),
)),
ConfigEdit::SetNoticeHideRateLimitModelNudge(acknowledged) => Ok(self.write_value(
Scope::Global,
&[Notice::TABLE_KEY, "hide_rate_limit_model_nudge"],
value(*acknowledged),
)),
ConfigEdit::SetNoticeHideModelMigrationPrompt(migration_config, acknowledged) => {
Ok(self.write_value(
Scope::Global,
&[Notice::TABLE_KEY, migration_config.as_str()],
value(*acknowledged),
))
}
ConfigEdit::SetWindowsWslSetupAcknowledged(acknowledged) => Ok(self.write_value(
Scope::Global,
&["windows_wsl_setup_acknowledged"],
@@ -247,10 +275,14 @@ impl ConfigDocument {
ConfigEdit::ReplaceMcpServers(servers) => Ok(self.replace_mcp_servers(servers)),
ConfigEdit::SetPath { segments, value } => Ok(self.insert(segments, value.clone())),
ConfigEdit::ClearPath { segments } => Ok(self.clear_owned(segments)),
ConfigEdit::SetProjectTrusted(project_path) => {
ConfigEdit::SetProjectTrustLevel { path, level } => {
// Delegate to the existing, tested logic in config.rs to
// ensure tables are explicit and migration is preserved.
crate::config::set_project_trusted_inner(&mut self.doc, project_path.as_path())?;
crate::config::set_project_trust_level_inner(
&mut self.doc,
path.as_path(),
*level,
)?;
Ok(true)
}
}
@@ -473,6 +505,27 @@ impl ConfigEditsBuilder {
self
}
pub fn set_hide_world_writable_warning(mut self, acknowledged: bool) -> Self {
self.edits
.push(ConfigEdit::SetNoticeHideWorldWritableWarning(acknowledged));
self
}
pub fn set_hide_rate_limit_model_nudge(mut self, acknowledged: bool) -> Self {
self.edits
.push(ConfigEdit::SetNoticeHideRateLimitModelNudge(acknowledged));
self
}
pub fn set_hide_model_migration_prompt(mut self, model: &str, acknowledged: bool) -> Self {
self.edits
.push(ConfigEdit::SetNoticeHideModelMigrationPrompt(
model.to_string(),
acknowledged,
));
self
}
pub fn set_windows_wsl_setup_acknowledged(mut self, acknowledged: bool) -> Self {
self.edits
.push(ConfigEdit::SetWindowsWslSetupAcknowledged(acknowledged));
@@ -485,9 +538,24 @@ impl ConfigEditsBuilder {
self
}
pub fn set_project_trusted<P: Into<PathBuf>>(mut self, project_path: P) -> Self {
self.edits
.push(ConfigEdit::SetProjectTrusted(project_path.into()));
pub fn set_project_trust_level<P: Into<PathBuf>>(
mut self,
project_path: P,
trust_level: TrustLevel,
) -> Self {
self.edits.push(ConfigEdit::SetProjectTrustLevel {
path: project_path.into(),
level: trust_level,
});
self
}
/// Enable or disable a feature flag by key under the `[features]` table.
pub fn set_feature_enabled(mut self, key: &str, enabled: bool) -> Self {
self.edits.push(ConfigEdit::SetPath {
segments: vec!["features".to_string(), key.to_string()],
value: value(enabled),
});
self
}
@@ -525,7 +593,7 @@ mod tests {
codex_home,
None,
&[ConfigEdit::SetModel {
model: Some("gpt-5-codex".to_string()),
model: Some("gpt-5.1-codex".to_string()),
effort: Some(ReasoningEffort::High),
}],
)
@@ -533,7 +601,7 @@ mod tests {
let contents =
std::fs::read_to_string(codex_home.join(CONFIG_TOML_FILE)).expect("read config");
let expected = r#"model = "gpt-5-codex"
let expected = r#"model = "gpt-5.1-codex"
model_reasoning_effort = "high"
"#;
assert_eq!(contents, expected);
@@ -663,7 +731,7 @@ model = "o5-preview"
std::fs::write(
codex_home.join(CONFIG_TOML_FILE),
r#"[profiles."team a"]
model = "gpt-5-codex"
model = "gpt-5.1-codex"
"#,
)
.expect("seed");
@@ -720,6 +788,63 @@ hide_full_access_warning = true
assert_eq!(contents, expected);
}
#[test]
fn blocking_set_hide_rate_limit_model_nudge_preserves_table() {
let tmp = tempdir().expect("tmpdir");
let codex_home = tmp.path();
std::fs::write(
codex_home.join(CONFIG_TOML_FILE),
r#"[notice]
existing = "value"
"#,
)
.expect("seed");
apply_blocking(
codex_home,
None,
&[ConfigEdit::SetNoticeHideRateLimitModelNudge(true)],
)
.expect("persist");
let contents =
std::fs::read_to_string(codex_home.join(CONFIG_TOML_FILE)).expect("read config");
let expected = r#"[notice]
existing = "value"
hide_rate_limit_model_nudge = true
"#;
assert_eq!(contents, expected);
}
#[test]
fn blocking_set_hide_gpt5_1_migration_prompt_preserves_table() {
let tmp = tempdir().expect("tmpdir");
let codex_home = tmp.path();
std::fs::write(
codex_home.join(CONFIG_TOML_FILE),
r#"[notice]
existing = "value"
"#,
)
.expect("seed");
apply_blocking(
codex_home,
None,
&[ConfigEdit::SetNoticeHideModelMigrationPrompt(
"hide_gpt5_1_migration_prompt".to_string(),
true,
)],
)
.expect("persist");
let contents =
std::fs::read_to_string(codex_home.join(CONFIG_TOML_FILE)).expect("read config");
let expected = r#"[notice]
existing = "value"
hide_gpt5_1_migration_prompt = true
"#;
assert_eq!(contents, expected);
}
#[test]
fn blocking_replace_mcp_servers_round_trips() {
let tmp = tempdir().expect("tmpdir");
@@ -856,14 +981,14 @@ B = \"2\"
let codex_home = tmp.path().to_path_buf();
ConfigEditsBuilder::new(&codex_home)
.set_model(Some("gpt-5-codex"), Some(ReasoningEffort::High))
.set_model(Some("gpt-5.1-codex"), Some(ReasoningEffort::High))
.apply()
.await
.expect("persist");
let contents =
std::fs::read_to_string(codex_home.join(CONFIG_TOML_FILE)).expect("read config");
let expected = r#"model = "gpt-5-codex"
let expected = r#"model = "gpt-5.1-codex"
model_reasoning_effort = "high"
"#;
assert_eq!(contents, expected);
@@ -885,11 +1010,11 @@ model_reasoning_effort = "low"
std::fs::read_to_string(codex_home.join(CONFIG_TOML_FILE)).expect("read config");
assert_eq!(contents, initial_expected);
let updated_expected = r#"model = "gpt-5-codex"
let updated_expected = r#"model = "gpt-5.1-codex"
model_reasoning_effort = "high"
"#;
ConfigEditsBuilder::new(codex_home)
.set_model(Some("gpt-5-codex"), Some(ReasoningEffort::High))
.set_model(Some("gpt-5.1-codex"), Some(ReasoningEffort::High))
.apply_blocking()
.expect("persist update");
contents = std::fs::read_to_string(codex_home.join(CONFIG_TOML_FILE)).expect("read config");
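Callers can chain the new edits through the builder, roughly like this (path and feature key illustrative; the `TrustLevel::Trusted` variant name is assumed from the serialized "trusted" value):
```rust
// Persist a trust decision plus two notice flags in one apply.
fn persist(codex_home: &std::path::Path) -> anyhow::Result<()> {
    ConfigEditsBuilder::new(codex_home)
        .set_project_trust_level("/repo/project", TrustLevel::Trusted)
        .set_hide_rate_limit_model_nudge(true)
        .set_feature_enabled("example_feature", true) // hypothetical key
        .apply_blocking()
}
```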

View File

@@ -25,7 +25,9 @@ use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::derive_default_model_family;
use crate::model_family::find_family_for_model;
use crate::model_provider_info::LMSTUDIO_OSS_PROVIDER_ID;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::OLLAMA_OSS_PROVIDER_ID;
use crate::model_provider_info::built_in_model_providers;
use crate::openai_model_info::get_model_info;
use crate::project_doc::DEFAULT_PROJECT_DOC_FILENAME;
@@ -38,6 +40,7 @@ use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::TrustLevel;
use codex_protocol::config_types::Verbosity;
use codex_rmcp_client::OAuthCredentialsStoreMode;
use dirs::home_dir;
@@ -59,11 +62,11 @@ pub mod profile;
pub mod types;
#[cfg(target_os = "windows")]
pub const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
pub const OPENAI_DEFAULT_MODEL: &str = "gpt-5.1";
#[cfg(not(target_os = "windows"))]
pub const OPENAI_DEFAULT_MODEL: &str = "gpt-5-codex";
const OPENAI_DEFAULT_REVIEW_MODEL: &str = "gpt-5-codex";
pub const GPT_5_CODEX_MEDIUM_MODEL: &str = "gpt-5-codex";
pub const OPENAI_DEFAULT_MODEL: &str = "gpt-5.1-codex";
const OPENAI_DEFAULT_REVIEW_MODEL: &str = "gpt-5.1-codex";
pub const GPT_5_CODEX_MEDIUM_MODEL: &str = "gpt-5.1-codex";
/// Maximum number of bytes of the documentation that will be embedded. Larger
/// files are *silently truncated* to this size so we do not take up too much of
@@ -78,7 +81,7 @@ pub struct Config {
/// Optional override of model selection.
pub model: String,
/// Model used specifically for review sessions. Defaults to "gpt-5-codex".
/// Model used specifically for review sessions. Defaults to "gpt-5.1-codex".
pub review_model: String,
pub model_family: ModelFamily,
@@ -192,6 +195,9 @@ pub struct Config {
/// Additional filenames to try when looking for project-level docs.
pub project_doc_fallback_filenames: Vec<String>,
/// Token budget applied when storing tool/function outputs in the context manager.
pub tool_output_token_limit: Option<usize>,
/// Directory containing all Codex state (defaults to `~/.codex` but can be
/// overridden by the `CODEX_HOME` environment variable).
pub codex_home: PathBuf,
@@ -241,8 +247,6 @@ pub struct Config {
/// When `true`, run a model-based assessment for commands denied by the sandbox.
pub experimental_sandbox_command_assessment: bool,
pub use_experimental_streamable_shell_tool: bool,
/// If set to `true`, used only the experimental unified exec tool.
pub use_experimental_unified_exec_tool: bool,
@@ -384,15 +388,16 @@ fn ensure_no_inline_bearer_tokens(value: &TomlValue) -> std::io::Result<()> {
Ok(())
}
pub(crate) fn set_project_trusted_inner(
pub(crate) fn set_project_trust_level_inner(
doc: &mut DocumentMut,
project_path: &Path,
trust_level: TrustLevel,
) -> anyhow::Result<()> {
// Ensure we render a human-friendly structure:
//
// [projects]
// [projects."/path/to/project"]
// trust_level = "trusted"
// trust_level = "trusted" or "untrusted"
//
// rather than inline tables like:
//
@@ -448,20 +453,66 @@ pub(crate) fn set_project_trusted_inner(
return Err(anyhow::anyhow!("project table missing for {project_key}"));
};
proj_tbl.set_implicit(false);
proj_tbl["trust_level"] = toml_edit::value("trusted");
proj_tbl["trust_level"] = toml_edit::value(trust_level.to_string());
Ok(())
}
/// Patch `CODEX_HOME/config.toml` project state.
/// Patch `CODEX_HOME/config.toml` project state to set trust level.
/// Use with caution.
pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Result<()> {
pub fn set_project_trust_level(
codex_home: &Path,
project_path: &Path,
trust_level: TrustLevel,
) -> anyhow::Result<()> {
use crate::config::edit::ConfigEditsBuilder;
ConfigEditsBuilder::new(codex_home)
.set_project_trusted(project_path)
.set_project_trust_level(project_path, trust_level)
.apply_blocking()
}
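A minimal call sketch of the renamed entry point (paths are illustrative, not from this diff):

// Persist an "untrusted" marking for a project; the edit goes through
// ConfigEditsBuilder, so the rest of config.toml is left untouched.
set_project_trust_level(
    Path::new("/home/user/.codex"),
    Path::new("/work/repo"),
    TrustLevel::Untrusted,
)?;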
/// Save the default OSS provider preference to config.toml
pub fn set_default_oss_provider(codex_home: &Path, provider: &str) -> std::io::Result<()> {
// Validate that the provider is one of the known OSS providers
match provider {
LMSTUDIO_OSS_PROVIDER_ID | OLLAMA_OSS_PROVIDER_ID => {
// Valid provider, continue
}
_ => {
return Err(std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!(
"Invalid OSS provider '{provider}'. Must be one of: {LMSTUDIO_OSS_PROVIDER_ID}, {OLLAMA_OSS_PROVIDER_ID}"
),
));
}
}
let config_path = codex_home.join(CONFIG_TOML_FILE);
// Read the existing config, or start from an empty string if the file doesn't exist
let content = match std::fs::read_to_string(&config_path) {
Ok(content) => content,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => String::new(),
Err(e) => return Err(e),
};
// Parse as DocumentMut for editing while preserving structure
let mut doc = content.parse::<DocumentMut>().map_err(|e| {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
format!("failed to parse config.toml: {e}"),
)
})?;
// Set oss_provider at the root level
use toml_edit::value;
doc["oss_provider"] = value(provider);
// Write the modified document back
std::fs::write(&config_path, doc.to_string())?;
Ok(())
}
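For example, mirroring the test further down, persisting Ollama as the default local provider is a single call; an unknown provider name is rejected with `InvalidInput` before any file I/O happens:

// Writes `oss_provider = "ollama"` at the root of CODEX_HOME/config.toml,
// preserving any existing entries.
set_default_oss_provider(codex_home, OLLAMA_OSS_PROVIDER_ID)?;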
/// Apply a single dotted-path override onto a TOML value.
fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
use toml::value::Table;
@@ -588,6 +639,9 @@ pub struct ConfigToml {
/// Ordered list of fallback filenames to look for when AGENTS.md is missing.
pub project_doc_fallback_filenames: Option<Vec<String>>,
/// Token budget applied when storing tool/function outputs in the context manager.
pub tool_output_token_limit: Option<usize>,
/// Profile to use from the `profiles` map.
pub profile: Option<String>,
@@ -655,11 +709,12 @@ pub struct ConfigToml {
/// Legacy, now use features
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_compact_prompt_file: Option<PathBuf>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub experimental_sandbox_command_assessment: Option<bool>,
/// Preferred OSS provider for local models, e.g. "lmstudio" or "ollama".
pub oss_provider: Option<String>,
}
impl From<ConfigToml> for UserSavedConfig {
@@ -689,15 +744,16 @@ impl From<ConfigToml> for UserSavedConfig {
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
pub struct ProjectConfig {
pub trust_level: Option<String>,
pub trust_level: Option<TrustLevel>,
}
impl ProjectConfig {
pub fn is_trusted(&self) -> bool {
match &self.trust_level {
Some(trust_level) => trust_level == "trusted",
None => false,
}
matches!(self.trust_level, Some(TrustLevel::Trusted))
}
pub fn is_untrusted(&self) -> bool {
matches!(self.trust_level, Some(TrustLevel::Untrusted))
}
}
@@ -738,9 +794,9 @@ impl ConfigToml {
.or(profile_sandbox_mode)
.or(self.sandbox_mode)
.or_else(|| {
// if no sandbox_mode is set, but user has marked directory as trusted, use WorkspaceWrite
// if no sandbox_mode is set, but user has marked directory as trusted or untrusted, use WorkspaceWrite
self.get_active_project(resolved_cwd).and_then(|p| {
if p.is_trusted() {
if p.is_trusted() || p.is_untrusted() {
Some(SandboxMode::WorkspaceWrite)
} else {
None
@@ -847,6 +903,34 @@ pub struct ConfigOverrides {
pub additional_writable_roots: Vec<PathBuf>,
}
/// Resolves the OSS provider from CLI override, profile config, or global config.
/// Returns `None` if no provider is configured at any level.
pub fn resolve_oss_provider(
explicit_provider: Option<&str>,
config_toml: &ConfigToml,
config_profile: Option<String>,
) -> Option<String> {
if let Some(provider) = explicit_provider {
// Explicit provider specified (e.g., via --local-provider)
Some(provider.to_string())
} else {
// Check profile config first, then global config
let profile = config_toml.get_config_profile(config_profile).ok();
if let Some(profile) = &profile {
// Check if profile has an oss provider
if let Some(profile_oss_provider) = &profile.oss_provider {
Some(profile_oss_provider.clone())
}
// Otherwise fall back to the top-level oss_provider in the config
else {
config_toml.oss_provider.clone()
}
} else {
config_toml.oss_provider.clone()
}
}
}
impl Config {
/// Meant to be used exclusively for tests: `load_with_overrides()` should
/// be used in all other cases.
@@ -961,6 +1045,9 @@ impl Config {
if active_project.is_trusted() {
// If no explicit approval policy is set, but we trust cwd, default to OnRequest
AskForApproval::OnRequest
} else if active_project.is_untrusted() {
// If project is explicitly marked untrusted, require approval for non-safe commands
AskForApproval::UnlessTrusted
} else {
AskForApproval::default()
}
@@ -999,7 +1086,6 @@ impl Config {
let include_apply_patch_tool_flag = features.enabled(Feature::ApplyPatchFreeform);
let tools_web_search_request = features.enabled(Feature::WebSearchRequest);
let use_experimental_streamable_shell_tool = features.enabled(Feature::StreamableShell);
let use_experimental_unified_exec_tool = features.enabled(Feature::UnifiedExec);
let use_experimental_use_rmcp_client = features.enabled(Feature::RmcpClient);
let experimental_sandbox_command_assessment =
@@ -1129,6 +1215,7 @@ impl Config {
}
})
.collect(),
tool_output_token_limit: cfg.tool_output_token_limit,
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
@@ -1156,7 +1243,6 @@ impl Config {
include_apply_patch_tool: include_apply_patch_tool_flag,
tools_web_search_request,
experimental_sandbox_command_assessment,
use_experimental_streamable_shell_tool,
use_experimental_unified_exec_tool,
use_experimental_use_rmcp_client,
features,
@@ -1234,6 +1320,16 @@ impl Config {
Ok(Some(s))
}
}
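/// Globally enable or disable the Windows sandbox feature; disabling it also
/// records that any requested auto mode was downgraded on Windows.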
pub fn set_windows_sandbox_globally(&mut self, value: bool) {
crate::safety::set_windows_sandbox_enabled(value);
if value {
self.features.enable(Feature::WindowsSandbox);
} else {
self.features.disable(Feature::WindowsSandbox);
}
self.forced_auto_mode_downgraded_on_windows = !value;
}
}
fn default_model() -> String {
@@ -1328,7 +1424,7 @@ persistence = "none"
}
#[test]
fn tui_config_missing_notifications_field_defaults_to_disabled() {
fn tui_config_missing_notifications_field_defaults_to_enabled() {
let cfg = r#"
[tui]
"#;
@@ -1337,7 +1433,7 @@ persistence = "none"
.expect("TUI config without notifications should succeed");
let tui = parsed.tui.expect("config should include tui section");
assert_eq!(tui.notifications, Notifications::Enabled(false));
assert_eq!(tui.notifications, Notifications::Enabled(true));
}
#[test]
@@ -1715,7 +1811,6 @@ trust_level = "trusted"
fn legacy_toggles_map_to_features() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let cfg = ConfigToml {
experimental_use_exec_command_tool: Some(true),
experimental_use_unified_exec_tool: Some(true),
experimental_use_rmcp_client: Some(true),
experimental_use_freeform_apply_patch: Some(true),
@@ -1729,12 +1824,11 @@ trust_level = "trusted"
)?;
assert!(config.features.enabled(Feature::ApplyPatchFreeform));
assert!(config.features.enabled(Feature::StreamableShell));
assert!(config.features.enabled(Feature::UnifiedExec));
assert!(config.features.enabled(Feature::RmcpClient));
assert!(config.include_apply_patch_tool);
assert!(config.use_experimental_streamable_shell_tool);
assert!(config.use_experimental_unified_exec_tool);
assert!(config.use_experimental_use_rmcp_client);
@@ -2539,7 +2633,7 @@ url = "https://example.com/mcp"
let codex_home = TempDir::new()?;
ConfigEditsBuilder::new(codex_home.path())
.set_model(Some("gpt-5-codex"), Some(ReasoningEffort::High))
.set_model(Some("gpt-5.1-codex"), Some(ReasoningEffort::High))
.apply()
.await?;
@@ -2547,7 +2641,7 @@ url = "https://example.com/mcp"
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("gpt-5-codex"));
assert_eq!(parsed.model.as_deref(), Some("gpt-5.1-codex"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
Ok(())
@@ -2561,7 +2655,7 @@ url = "https://example.com/mcp"
tokio::fs::write(
&config_path,
r#"
model = "gpt-5-codex"
model = "gpt-5.1-codex"
model_reasoning_effort = "medium"
[profiles.dev]
@@ -2597,7 +2691,7 @@ model = "gpt-4.1"
ConfigEditsBuilder::new(codex_home.path())
.with_profile(Some("dev"))
.set_model(Some("gpt-5-codex"), Some(ReasoningEffort::Medium))
.set_model(Some("gpt-5.1-codex"), Some(ReasoningEffort::Medium))
.apply()
.await?;
@@ -2609,7 +2703,7 @@ model = "gpt-4.1"
.get("dev")
.expect("profile should be created");
assert_eq!(profile.model.as_deref(), Some("gpt-5-codex"));
assert_eq!(profile.model.as_deref(), Some("gpt-5.1-codex"));
assert_eq!(
profile.model_reasoning_effort,
Some(ReasoningEffort::Medium)
@@ -2631,7 +2725,7 @@ model = "gpt-4"
model_reasoning_effort = "medium"
[profiles.prod]
model = "gpt-5-codex"
model = "gpt-5.1-codex"
"#,
)
.await?;
@@ -2660,7 +2754,7 @@ model = "gpt-5-codex"
.profiles
.get("prod")
.and_then(|profile| profile.model.as_deref()),
Some("gpt-5-codex"),
Some("gpt-5.1-codex"),
);
Ok(())
@@ -2775,7 +2869,7 @@ model_provider = "openai"
approval_policy = "on-failure"
[profiles.gpt5]
model = "gpt-5"
model = "gpt-5.1"
model_provider = "openai"
approval_policy = "on-failure"
model_reasoning_effort = "high"
@@ -2884,6 +2978,7 @@ model_verbosity = "high"
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -2902,7 +2997,6 @@ model_verbosity = "high"
include_apply_patch_tool: false,
tools_web_search_request: false,
experimental_sandbox_command_assessment: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
features: Features::with_defaults(),
@@ -2956,6 +3050,7 @@ model_verbosity = "high"
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -2974,7 +3069,6 @@ model_verbosity = "high"
include_apply_patch_tool: false,
tools_web_search_request: false,
experimental_sandbox_command_assessment: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
features: Features::with_defaults(),
@@ -3043,6 +3137,7 @@ model_verbosity = "high"
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -3061,7 +3156,6 @@ model_verbosity = "high"
include_apply_patch_tool: false,
tools_web_search_request: false,
experimental_sandbox_command_assessment: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
features: Features::with_defaults(),
@@ -3094,9 +3188,9 @@ model_verbosity = "high"
fixture.codex_home(),
)?;
let expected_gpt5_profile_config = Config {
model: "gpt-5".to_string(),
model: "gpt-5.1".to_string(),
review_model: OPENAI_DEFAULT_REVIEW_MODEL.to_string(),
model_family: find_family_for_model("gpt-5").expect("known model slug"),
model_family: find_family_for_model("gpt-5.1").expect("known model slug"),
model_context_window: Some(272_000),
model_max_output_tokens: Some(128_000),
model_auto_compact_token_limit: Some(244_800),
@@ -3116,6 +3210,7 @@ model_verbosity = "high"
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
tool_output_token_limit: None,
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -3134,7 +3229,6 @@ model_verbosity = "high"
include_apply_patch_tool: false,
tools_web_search_request: false,
experimental_sandbox_command_assessment: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
features: Features::with_defaults(),
@@ -3175,7 +3269,7 @@ model_verbosity = "high"
let project_dir = Path::new("/some/path");
let mut doc = DocumentMut::new();
set_project_trusted_inner(&mut doc, project_dir)?;
set_project_trust_level_inner(&mut doc, project_dir, TrustLevel::Trusted)?;
let contents = doc.to_string();
@@ -3215,7 +3309,7 @@ trust_level = "trusted"
let mut doc = initial.parse::<DocumentMut>()?;
// Run the function; it should convert to explicit tables and set trusted
set_project_trusted_inner(&mut doc, project_dir)?;
set_project_trust_level_inner(&mut doc, project_dir, TrustLevel::Trusted)?;
let contents = doc.to_string();
@@ -3242,7 +3336,7 @@ model = "foo""#;
// Approve a new directory
let new_project = Path::new("/Users/mbolin/code/codex2");
set_project_trusted_inner(&mut doc, new_project)?;
set_project_trust_level_inner(&mut doc, new_project, TrustLevel::Trusted)?;
let contents = doc.to_string();
@@ -3265,6 +3359,201 @@ trust_level = "trusted"
Ok(())
}
#[test]
fn test_set_default_oss_provider() -> std::io::Result<()> {
let temp_dir = TempDir::new()?;
let codex_home = temp_dir.path();
let config_path = codex_home.join(CONFIG_TOML_FILE);
// Test setting valid provider on empty config
set_default_oss_provider(codex_home, OLLAMA_OSS_PROVIDER_ID)?;
let content = std::fs::read_to_string(&config_path)?;
assert!(content.contains("oss_provider = \"ollama\""));
// Test updating existing config
std::fs::write(&config_path, "model = \"gpt-4\"\n")?;
set_default_oss_provider(codex_home, LMSTUDIO_OSS_PROVIDER_ID)?;
let content = std::fs::read_to_string(&config_path)?;
assert!(content.contains("oss_provider = \"lmstudio\""));
assert!(content.contains("model = \"gpt-4\""));
// Test overwriting existing oss_provider
set_default_oss_provider(codex_home, OLLAMA_OSS_PROVIDER_ID)?;
let content = std::fs::read_to_string(&config_path)?;
assert!(content.contains("oss_provider = \"ollama\""));
assert!(!content.contains("oss_provider = \"lmstudio\""));
// Test invalid provider
let result = set_default_oss_provider(codex_home, "invalid_provider");
assert!(result.is_err());
let error = result.unwrap_err();
assert_eq!(error.kind(), std::io::ErrorKind::InvalidInput);
assert!(error.to_string().contains("Invalid OSS provider"));
assert!(error.to_string().contains("invalid_provider"));
Ok(())
}
#[test]
fn test_untrusted_project_gets_workspace_write_sandbox() -> anyhow::Result<()> {
let config_with_untrusted = r#"
[projects."/tmp/test"]
trust_level = "untrusted"
"#;
let cfg = toml::from_str::<ConfigToml>(config_with_untrusted)
.expect("TOML deserialization should succeed");
let resolution = cfg.derive_sandbox_policy(None, None, &PathBuf::from("/tmp/test"));
// Verify that untrusted projects get WorkspaceWrite (or ReadOnly on Windows due to downgrade)
if cfg!(target_os = "windows") {
assert!(
matches!(resolution.policy, SandboxPolicy::ReadOnly),
"Expected ReadOnly on Windows, got {:?}",
resolution.policy
);
} else {
assert!(
matches!(resolution.policy, SandboxPolicy::WorkspaceWrite { .. }),
"Expected WorkspaceWrite for untrusted project, got {:?}",
resolution.policy
);
}
Ok(())
}
#[test]
fn test_resolve_oss_provider_explicit_override() {
let config_toml = ConfigToml::default();
let result = resolve_oss_provider(Some("custom-provider"), &config_toml, None);
assert_eq!(result, Some("custom-provider".to_string()));
}
#[test]
fn test_resolve_oss_provider_from_profile() {
let mut profiles = std::collections::HashMap::new();
let profile = ConfigProfile {
oss_provider: Some("profile-provider".to_string()),
..Default::default()
};
profiles.insert("test-profile".to_string(), profile);
let config_toml = ConfigToml {
profiles,
..Default::default()
};
let result = resolve_oss_provider(None, &config_toml, Some("test-profile".to_string()));
assert_eq!(result, Some("profile-provider".to_string()));
}
#[test]
fn test_resolve_oss_provider_from_global_config() {
let config_toml = ConfigToml {
oss_provider: Some("global-provider".to_string()),
..Default::default()
};
let result = resolve_oss_provider(None, &config_toml, None);
assert_eq!(result, Some("global-provider".to_string()));
}
#[test]
fn test_resolve_oss_provider_profile_fallback_to_global() {
let mut profiles = std::collections::HashMap::new();
let profile = ConfigProfile::default(); // No oss_provider set
profiles.insert("test-profile".to_string(), profile);
let config_toml = ConfigToml {
oss_provider: Some("global-provider".to_string()),
profiles,
..Default::default()
};
let result = resolve_oss_provider(None, &config_toml, Some("test-profile".to_string()));
assert_eq!(result, Some("global-provider".to_string()));
}
#[test]
fn test_resolve_oss_provider_none_when_not_configured() {
let config_toml = ConfigToml::default();
let result = resolve_oss_provider(None, &config_toml, None);
assert_eq!(result, None);
}
#[test]
fn test_resolve_oss_provider_explicit_overrides_all() {
let mut profiles = std::collections::HashMap::new();
let profile = ConfigProfile {
oss_provider: Some("profile-provider".to_string()),
..Default::default()
};
profiles.insert("test-profile".to_string(), profile);
let config_toml = ConfigToml {
oss_provider: Some("global-provider".to_string()),
profiles,
..Default::default()
};
let result = resolve_oss_provider(
Some("explicit-provider"),
&config_toml,
Some("test-profile".to_string()),
);
assert_eq!(result, Some("explicit-provider".to_string()));
}
#[test]
fn test_untrusted_project_gets_unless_trusted_approval_policy() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let test_project_dir = TempDir::new()?;
let test_path = test_project_dir.path();
let mut projects = std::collections::HashMap::new();
projects.insert(
test_path.to_string_lossy().to_string(),
ProjectConfig {
trust_level: Some(TrustLevel::Untrusted),
},
);
let cfg = ConfigToml {
projects: Some(projects),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides {
cwd: Some(test_path.to_path_buf()),
..Default::default()
},
codex_home.path().to_path_buf(),
)?;
// Verify that untrusted projects get UnlessTrusted approval policy
assert_eq!(
config.approval_policy,
AskForApproval::UnlessTrusted,
"Expected UnlessTrusted approval policy for untrusted project"
);
// Verify that untrusted projects still get WorkspaceWrite sandbox (or ReadOnly on Windows)
if cfg!(target_os = "windows") {
assert!(
matches!(config.sandbox_policy, SandboxPolicy::ReadOnly),
"Expected ReadOnly on Windows"
);
} else {
assert!(
matches!(config.sandbox_policy, SandboxPolicy::WorkspaceWrite { .. }),
"Expected WorkspaceWrite sandbox for untrusted project"
);
}
Ok(())
}
}
#[cfg(test)]


@@ -25,7 +25,6 @@ pub struct ConfigProfile {
pub experimental_compact_prompt_file: Option<PathBuf>,
pub include_apply_patch_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub experimental_sandbox_command_assessment: Option<bool>,
@@ -34,6 +33,7 @@ pub struct ConfigProfile {
/// Optional feature toggles scoped to this profile.
#[serde(default)]
pub features: Option<crate::features::FeaturesToml>,
pub oss_provider: Option<String>,
}
impl From<ConfigProfile> for codex_app_server_protocol::Profile {


@@ -338,7 +338,7 @@ pub enum Notifications {
impl Default for Notifications {
fn default() -> Self {
Self::Enabled(false)
Self::Enabled(true)
}
}
@@ -346,7 +346,7 @@ impl Default for Notifications {
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct Tui {
/// Enable desktop notifications from the TUI when the terminal is unfocused.
/// Defaults to `false`.
/// Defaults to `true`.
#[serde(default)]
pub notifications: Notifications,
}
@@ -358,6 +358,12 @@ pub struct Tui {
pub struct Notice {
/// Tracks whether the user has acknowledged the full access warning prompt.
pub hide_full_access_warning: Option<bool>,
/// Tracks whether the user has acknowledged the Windows world-writable directories warning.
pub hide_world_writable_warning: Option<bool>,
/// Tracks whether the user opted out of the rate limit model switch reminder.
pub hide_rate_limit_model_nudge: Option<bool>,
/// Tracks whether the user has seen the model migration prompt.
pub hide_gpt5_1_migration_prompt: Option<bool>,
}
impl Notice {


@@ -1,13 +1,15 @@
use crate::codex::TurnContext;
use crate::context_manager::normalize;
use crate::truncate::TruncationPolicy;
use crate::truncate::truncate_function_output_items_with_policy;
use crate::truncate::truncate_text;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::TokenUsage;
use codex_protocol::protocol::TokenUsageInfo;
use codex_utils_tokenizer::Tokenizer;
use std::ops::Deref;
use crate::context_manager::normalize;
use crate::context_manager::truncate::format_output_for_model_body;
use crate::context_manager::truncate::globally_truncate_function_output_items;
/// Transcript of conversation history
#[derive(Debug, Clone, Default)]
pub(crate) struct ContextManager {
@@ -28,6 +30,10 @@ impl ContextManager {
self.token_info.clone()
}
pub(crate) fn set_token_info(&mut self, info: Option<TokenUsageInfo>) {
self.token_info = info;
}
pub(crate) fn set_token_usage_full(&mut self, context_window: i64) {
match &mut self.token_info {
Some(info) => info.fill_to_context_window(context_window),
@@ -38,7 +44,7 @@ impl ContextManager {
}
/// `items` is ordered from oldest to newest.
pub(crate) fn record_items<I>(&mut self, items: I)
pub(crate) fn record_items<I>(&mut self, items: I, policy: TruncationPolicy)
where
I: IntoIterator,
I::Item: std::ops::Deref<Target = ResponseItem>,
@@ -50,7 +56,7 @@ impl ContextManager {
continue;
}
let processed = Self::process_item(&item);
let processed = self.process_item(item_ref, policy);
self.items.push(processed);
}
}
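The new call shape, as exercised by the history tests later in this diff (the 10_000-token budget is just the fixture value used there):

// Record a turn's items under an explicit truncation budget; oversized
// tool outputs are stored with a "tokens truncated" marker instead.
history.record_items(items.iter(), TruncationPolicy::Tokens(10_000));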
@@ -68,6 +74,28 @@ impl ContextManager {
history
}
// Estimate the number of tokens in the history. Return None if no tokenizer
// is available. This does not consider the reasoning traces.
// /!\ The value is a lower bound estimate and does not represent the exact
// context length.
pub(crate) fn estimate_token_count(&self, turn_context: &TurnContext) -> Option<i64> {
let model = turn_context.client.get_model();
let tokenizer = Tokenizer::for_model(model.as_str()).ok()?;
let model_family = turn_context.client.get_model_family();
Some(
self.items
.iter()
.map(|item| {
serde_json::to_string(&item)
.map(|item| tokenizer.count(&item))
.unwrap_or_default()
})
.sum::<i64>()
+ tokenizer.count(model_family.base_instructions.as_str()),
)
}
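Because the estimate serializes each item to JSON and skips reasoning traces, it is only a floor on the real context length. A hypothetical consumer (the helper name and 90% threshold are illustrative, not from this diff):

// Hypothetical: report when the lower-bound estimate nears the context
// window; returns false when no tokenizer exists for the model.
fn nearing_context_limit(
    cm: &ContextManager,
    turn_context: &TurnContext,
    context_window: i64,
) -> bool {
    cm.estimate_token_count(turn_context)
        .is_some_and(|estimate| estimate >= context_window * 9 / 10)
}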
pub(crate) fn remove_first_item(&mut self) {
if !self.items.is_empty() {
// Remove the oldest item (front of the list). Items are ordered from
@@ -116,14 +144,14 @@ impl ContextManager {
items.retain(|item| !matches!(item, ResponseItem::GhostSnapshot { .. }));
}
fn process_item(item: &ResponseItem) -> ResponseItem {
fn process_item(&self, item: &ResponseItem, policy: TruncationPolicy) -> ResponseItem {
match item {
ResponseItem::FunctionCallOutput { call_id, output } => {
let truncated = format_output_for_model_body(output.content.as_str());
let truncated = truncate_text(output.content.as_str(), policy);
let truncated_items = output
.content_items
.as_ref()
.map(|items| globally_truncate_function_output_items(items));
.map(|items| truncate_function_output_items_with_policy(items, policy));
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
@@ -134,7 +162,7 @@ impl ContextManager {
}
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
let truncated = format_output_for_model_body(output);
let truncated = truncate_text(output, policy);
ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: truncated,
@@ -146,6 +174,7 @@ impl ContextManager {
| ResponseItem::FunctionCall { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CompactionSummary { .. }
| ResponseItem::GhostSnapshot { .. }
| ResponseItem::Other => item.clone(),
}
@@ -163,7 +192,8 @@ fn is_api_message(message: &ResponseItem) -> bool {
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. } => true,
| ResponseItem::WebSearchCall { .. }
| ResponseItem::CompactionSummary { .. } => true,
ResponseItem::GhostSnapshot { .. } => false,
ResponseItem::Other => false,
}


@@ -1,8 +1,8 @@
use super::*;
use crate::context_manager::truncate;
use crate::truncate;
use crate::truncate::TruncationPolicy;
use codex_git::GhostCommit;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::LocalShellAction;
use codex_protocol::models::LocalShellExecAction;
@@ -12,6 +12,9 @@ use codex_protocol::models::ReasoningItemReasoningSummary;
use pretty_assertions::assert_eq;
use regex_lite::Regex;
const EXEC_FORMAT_MAX_LINES: usize = 256;
const EXEC_FORMAT_MAX_BYTES: usize = 10_000;
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
@@ -24,7 +27,9 @@ fn assistant_msg(text: &str) -> ResponseItem {
fn create_history_with_items(items: Vec<ResponseItem>) -> ContextManager {
let mut h = ContextManager::new();
h.record_items(items.iter());
// Use a generous but fixed token budget; tests only rely on truncation
// behavior, not on a specific model's token limit.
h.record_items(items.iter(), TruncationPolicy::Tokens(10_000));
h
}
@@ -54,6 +59,7 @@ fn reasoning_msg(text: &str) -> ResponseItem {
#[test]
fn filters_non_api_messages() {
let mut h = ContextManager::default();
let policy = TruncationPolicy::Tokens(10_000);
// System message is not API messages; Other is ignored.
let system = ResponseItem::Message {
id: None,
@@ -63,12 +69,12 @@ fn filters_non_api_messages() {
}],
};
let reasoning = reasoning_msg("thinking...");
h.record_items([&system, &reasoning, &ResponseItem::Other]);
h.record_items([&system, &reasoning, &ResponseItem::Other], policy);
// User and assistant should be retained.
let u = user_msg("hi");
let a = assistant_msg("hello");
h.record_items([&u, &a]);
h.record_items([&u, &a], policy);
let items = h.contents();
assert_eq!(
@@ -236,6 +242,9 @@ fn normalization_retains_local_shell_outputs() {
#[test]
fn record_items_truncates_function_call_output_content() {
let mut history = ContextManager::new();
// Any reasonably small token budget works; the test only cares that
// truncation happens and the marker is present.
let policy = TruncationPolicy::Tokens(1_000);
let long_line = "a very long line to trigger truncation\n";
let long_output = long_line.repeat(2_500);
let item = ResponseItem::FunctionCallOutput {
@@ -247,15 +256,20 @@ fn record_items_truncates_function_call_output_content() {
},
};
history.record_items([&item]);
history.record_items([&item], policy);
assert_eq!(history.items.len(), 1);
match &history.items[0] {
ResponseItem::FunctionCallOutput { output, .. } => {
assert_ne!(output.content, long_output);
assert!(
output.content.starts_with("Total output lines:"),
"expected truncated summary, got {}",
output.content.contains("tokens truncated"),
"expected token-based truncation marker, got {}",
output.content
);
assert!(
output.content.contains("tokens truncated"),
"expected truncation marker, got {}",
output.content
);
}
@@ -266,6 +280,7 @@ fn record_items_truncates_function_call_output_content() {
#[test]
fn record_items_truncates_custom_tool_call_output_content() {
let mut history = ContextManager::new();
let policy = TruncationPolicy::Tokens(1_000);
let line = "custom output that is very long\n";
let long_output = line.repeat(2_500);
let item = ResponseItem::CustomToolCallOutput {
@@ -273,21 +288,48 @@ fn record_items_truncates_custom_tool_call_output_content() {
output: long_output.clone(),
};
history.record_items([&item]);
history.record_items([&item], policy);
assert_eq!(history.items.len(), 1);
match &history.items[0] {
ResponseItem::CustomToolCallOutput { output, .. } => {
assert_ne!(output, &long_output);
assert!(
output.starts_with("Total output lines:"),
"expected truncated summary, got {output}"
output.contains("tokens truncated"),
"expected token-based truncation marker, got {output}"
);
assert!(
output.contains("tokens truncated") || output.contains("bytes truncated"),
"expected truncation marker, got {output}"
);
}
other => panic!("unexpected history item: {other:?}"),
}
}
#[test]
fn record_items_respects_custom_token_limit() {
let mut history = ContextManager::new();
let policy = TruncationPolicy::Tokens(10);
let long_output = "tokenized content repeated many times ".repeat(200);
let item = ResponseItem::FunctionCallOutput {
call_id: "call-custom-limit".to_string(),
output: FunctionCallOutputPayload {
content: long_output,
success: Some(true),
..Default::default()
},
};
history.record_items([&item], policy);
let stored = match &history.items[0] {
ResponseItem::FunctionCallOutput { output, .. } => output,
other => panic!("unexpected history item: {other:?}"),
};
assert!(stored.content.contains("tokens truncated"));
}
fn assert_truncated_message_matches(message: &str, line: &str, total_lines: usize) {
let pattern = truncated_message_pattern(line, total_lines);
let regex = Regex::new(&pattern).unwrap_or_else(|err| {
@@ -301,21 +343,22 @@ fn assert_truncated_message_matches(message: &str, line: &str, total_lines: usiz
.expect("missing body capture")
.as_str();
assert!(
body.len() <= truncate::MODEL_FORMAT_MAX_BYTES,
body.len() <= EXEC_FORMAT_MAX_BYTES,
"body exceeds byte limit: {} bytes",
body.len()
);
}
fn truncated_message_pattern(line: &str, total_lines: usize) -> String {
let head_take = truncate::MODEL_FORMAT_HEAD_LINES.min(total_lines);
let tail_take = truncate::MODEL_FORMAT_TAIL_LINES.min(total_lines.saturating_sub(head_take));
let head_lines = EXEC_FORMAT_MAX_LINES / 2;
let tail_lines = EXEC_FORMAT_MAX_LINES - head_lines;
let head_take = head_lines.min(total_lines);
let tail_take = tail_lines.min(total_lines.saturating_sub(head_take));
let omitted = total_lines.saturating_sub(head_take + tail_take);
let escaped_line = regex_lite::escape(line);
if omitted == 0 {
return format!(
r"(?s)^Total output lines: {total_lines}\n\n(?P<body>{escaped_line}.*\n\[\.{{3}} output truncated to fit {max_bytes} bytes \.{{3}}]\n\n.*)$",
max_bytes = truncate::MODEL_FORMAT_MAX_BYTES,
r"(?s)^Total output lines: {total_lines}\n\n(?P<body>{escaped_line}.*\n\[\.{{3}} removed \d+ bytes to fit {EXEC_FORMAT_MAX_BYTES} byte limit \.{{3}}]\n\n.*)$",
);
}
format!(
@@ -328,7 +371,7 @@ fn format_exec_output_truncates_large_error() {
let line = "very long execution error line that should trigger truncation\n";
let large_error = line.repeat(2_500); // way beyond both byte and line limits
let truncated = truncate::format_output_for_model_body(&large_error);
let truncated = truncate::truncate_with_line_bytes_budget(&large_error, EXEC_FORMAT_MAX_BYTES);
let total_lines = large_error.lines().count();
assert_truncated_message_matches(&truncated, line, total_lines);
@@ -337,13 +380,13 @@ fn format_exec_output_truncates_large_error() {
#[test]
fn format_exec_output_marks_byte_truncation_without_omitted_lines() {
let long_line = "a".repeat(truncate::MODEL_FORMAT_MAX_BYTES + 50);
let truncated = truncate::format_output_for_model_body(&long_line);
let long_line = "a".repeat(EXEC_FORMAT_MAX_BYTES + 50);
let truncated = truncate::truncate_with_line_bytes_budget(&long_line, EXEC_FORMAT_MAX_BYTES);
assert_ne!(truncated, long_line);
let removed_bytes = long_line.len().saturating_sub(EXEC_FORMAT_MAX_BYTES);
let marker_line = format!(
"[... output truncated to fit {} bytes ...]",
truncate::MODEL_FORMAT_MAX_BYTES
"[... removed {removed_bytes} bytes to fit {EXEC_FORMAT_MAX_BYTES} byte limit ...]"
);
assert!(
truncated.contains(&marker_line),
@@ -359,18 +402,21 @@ fn format_exec_output_marks_byte_truncation_without_omitted_lines() {
fn format_exec_output_returns_original_when_within_limits() {
let content = "example output\n".repeat(10);
assert_eq!(truncate::format_output_for_model_body(&content), content);
assert_eq!(
truncate::truncate_with_line_bytes_budget(&content, EXEC_FORMAT_MAX_BYTES),
content
);
}
#[test]
fn format_exec_output_reports_omitted_lines_and_keeps_head_and_tail() {
let total_lines = truncate::MODEL_FORMAT_MAX_LINES + 100;
let total_lines = EXEC_FORMAT_MAX_LINES + 100;
let content: String = (0..total_lines)
.map(|idx| format!("line-{idx}\n"))
.collect();
let truncated = truncate::format_output_for_model_body(&content);
let omitted = total_lines - truncate::MODEL_FORMAT_MAX_LINES;
let truncated = truncate::truncate_with_line_bytes_budget(&content, EXEC_FORMAT_MAX_BYTES);
let omitted = total_lines - EXEC_FORMAT_MAX_LINES;
let expected_marker = format!("[... omitted {omitted} of {total_lines} lines ...]");
assert!(
@@ -391,99 +437,24 @@ fn format_exec_output_reports_omitted_lines_and_keeps_head_and_tail() {
#[test]
fn format_exec_output_prefers_line_marker_when_both_limits_exceeded() {
let total_lines = truncate::MODEL_FORMAT_MAX_LINES + 42;
let total_lines = EXEC_FORMAT_MAX_LINES + 42;
let long_line = "x".repeat(256);
let content: String = (0..total_lines)
.map(|idx| format!("line-{idx}-{long_line}\n"))
.collect();
let truncated = truncate::format_output_for_model_body(&content);
let truncated = truncate::truncate_with_line_bytes_budget(&content, EXEC_FORMAT_MAX_BYTES);
assert!(
truncated.contains("[... omitted 42 of 298 lines ...]"),
"expected omitted marker when line count exceeds limit: {truncated}"
);
assert!(
!truncated.contains("output truncated to fit"),
!truncated.contains("byte limit"),
"line omission marker should take precedence over byte marker: {truncated}"
);
}
#[test]
fn truncates_across_multiple_under_limit_texts_and_reports_omitted() {
// Arrange: several text items, none exceeding per-item limit, but total exceeds budget.
let budget = truncate::MODEL_FORMAT_MAX_BYTES;
let t1_len = (budget / 2).saturating_sub(10);
let t2_len = (budget / 2).saturating_sub(10);
let remaining_after_t1_t2 = budget.saturating_sub(t1_len + t2_len);
let t3_len = 50; // gets truncated to remaining_after_t1_t2
let t4_len = 5; // omitted
let t5_len = 7; // omitted
let t1 = "a".repeat(t1_len);
let t2 = "b".repeat(t2_len);
let t3 = "c".repeat(t3_len);
let t4 = "d".repeat(t4_len);
let t5 = "e".repeat(t5_len);
let item = ResponseItem::FunctionCallOutput {
call_id: "call-omit".to_string(),
output: FunctionCallOutputPayload {
content: "irrelevant".to_string(),
content_items: Some(vec![
FunctionCallOutputContentItem::InputText { text: t1 },
FunctionCallOutputContentItem::InputText { text: t2 },
FunctionCallOutputContentItem::InputImage {
image_url: "img:mid".to_string(),
},
FunctionCallOutputContentItem::InputText { text: t3 },
FunctionCallOutputContentItem::InputText { text: t4 },
FunctionCallOutputContentItem::InputText { text: t5 },
]),
success: Some(true),
},
};
let mut history = ContextManager::new();
history.record_items([&item]);
assert_eq!(history.items.len(), 1);
let json = serde_json::to_value(&history.items[0]).expect("serialize to json");
let output = json
.get("output")
.expect("output field")
.as_array()
.expect("array output");
// Expect: t1 (full), t2 (full), image, t3 (truncated), summary mentioning 2 omitted.
assert_eq!(output.len(), 5);
let first = output[0].as_object().expect("first obj");
assert_eq!(first.get("type").unwrap(), "input_text");
let first_text = first.get("text").unwrap().as_str().unwrap();
assert_eq!(first_text.len(), t1_len);
let second = output[1].as_object().expect("second obj");
assert_eq!(second.get("type").unwrap(), "input_text");
let second_text = second.get("text").unwrap().as_str().unwrap();
assert_eq!(second_text.len(), t2_len);
assert_eq!(
output[2],
serde_json::json!({"type": "input_image", "image_url": "img:mid"})
);
let fourth = output[3].as_object().expect("fourth obj");
assert_eq!(fourth.get("type").unwrap(), "input_text");
let fourth_text = fourth.get("text").unwrap().as_str().unwrap();
assert_eq!(fourth_text.len(), remaining_after_t1_t2);
let summary = output[4].as_object().expect("summary obj");
assert_eq!(summary.get("type").unwrap(), "input_text");
let summary_text = summary.get("text").unwrap().as_str().unwrap();
assert!(summary_text.contains("omitted 2 text items"));
}
// TODO(aibrahim): run CI in release mode.
#[cfg(not(debug_assertions))]
#[test]


@@ -1,6 +1,5 @@
mod history;
mod normalize;
mod truncate;
pub(crate) use crate::truncate::truncate_with_line_bytes_budget;
pub(crate) use history::ContextManager;
pub(crate) use truncate::format_output_for_model_body;


@@ -1,128 +0,0 @@
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_utils_string::take_bytes_at_char_boundary;
use codex_utils_string::take_last_bytes_at_char_boundary;
// Model-formatting limits: clients get full streams; only content sent to the model is truncated.
pub(crate) const MODEL_FORMAT_MAX_BYTES: usize = 10 * 1024; // 10 KiB
pub(crate) const MODEL_FORMAT_MAX_LINES: usize = 256; // lines
pub(crate) const MODEL_FORMAT_HEAD_LINES: usize = MODEL_FORMAT_MAX_LINES / 2;
pub(crate) const MODEL_FORMAT_TAIL_LINES: usize = MODEL_FORMAT_MAX_LINES - MODEL_FORMAT_HEAD_LINES; // 128
pub(crate) const MODEL_FORMAT_HEAD_BYTES: usize = MODEL_FORMAT_MAX_BYTES / 2;
pub(crate) fn globally_truncate_function_output_items(
items: &[FunctionCallOutputContentItem],
) -> Vec<FunctionCallOutputContentItem> {
let mut out: Vec<FunctionCallOutputContentItem> = Vec::with_capacity(items.len());
let mut remaining = MODEL_FORMAT_MAX_BYTES;
let mut omitted_text_items = 0usize;
for it in items {
match it {
FunctionCallOutputContentItem::InputText { text } => {
if remaining == 0 {
omitted_text_items += 1;
continue;
}
let len = text.len();
if len <= remaining {
out.push(FunctionCallOutputContentItem::InputText { text: text.clone() });
remaining -= len;
} else {
let slice = take_bytes_at_char_boundary(text, remaining);
if !slice.is_empty() {
out.push(FunctionCallOutputContentItem::InputText {
text: slice.to_string(),
});
}
remaining = 0;
}
}
// todo(aibrahim): handle input images; resize
FunctionCallOutputContentItem::InputImage { image_url } => {
out.push(FunctionCallOutputContentItem::InputImage {
image_url: image_url.clone(),
});
}
}
}
if omitted_text_items > 0 {
out.push(FunctionCallOutputContentItem::InputText {
text: format!("[omitted {omitted_text_items} text items ...]"),
});
}
out
}
pub(crate) fn format_output_for_model_body(content: &str) -> String {
// Head+tail truncation for the model: show the beginning and end with an elision.
// Clients still receive full streams; only this formatted summary is capped.
let total_lines = content.lines().count();
if content.len() <= MODEL_FORMAT_MAX_BYTES && total_lines <= MODEL_FORMAT_MAX_LINES {
return content.to_string();
}
let output = truncate_formatted_exec_output(content, total_lines);
format!("Total output lines: {total_lines}\n\n{output}")
}
fn truncate_formatted_exec_output(content: &str, total_lines: usize) -> String {
let segments: Vec<&str> = content.split_inclusive('\n').collect();
let head_take = MODEL_FORMAT_HEAD_LINES.min(segments.len());
let tail_take = MODEL_FORMAT_TAIL_LINES.min(segments.len().saturating_sub(head_take));
let omitted = segments.len().saturating_sub(head_take + tail_take);
let head_slice_end: usize = segments
.iter()
.take(head_take)
.map(|segment| segment.len())
.sum();
let tail_slice_start: usize = if tail_take == 0 {
content.len()
} else {
content.len()
- segments
.iter()
.rev()
.take(tail_take)
.map(|segment| segment.len())
.sum::<usize>()
};
let head_slice = &content[..head_slice_end];
let tail_slice = &content[tail_slice_start..];
let truncated_by_bytes = content.len() > MODEL_FORMAT_MAX_BYTES;
// this is a bit wrong. We are counting metadata lines and not just shell output lines.
let marker = if omitted > 0 {
Some(format!(
"\n[... omitted {omitted} of {total_lines} lines ...]\n\n"
))
} else if truncated_by_bytes {
Some(format!(
"\n[... output truncated to fit {MODEL_FORMAT_MAX_BYTES} bytes ...]\n\n"
))
} else {
None
};
let marker_len = marker.as_ref().map_or(0, String::len);
let base_head_budget = MODEL_FORMAT_HEAD_BYTES.min(MODEL_FORMAT_MAX_BYTES);
let head_budget = base_head_budget.min(MODEL_FORMAT_MAX_BYTES.saturating_sub(marker_len));
let head_part = take_bytes_at_char_boundary(head_slice, head_budget);
let mut result = String::with_capacity(MODEL_FORMAT_MAX_BYTES.min(content.len()));
result.push_str(head_part);
if let Some(marker_text) = marker.as_ref() {
result.push_str(marker_text);
}
let remaining = MODEL_FORMAT_MAX_BYTES.saturating_sub(result.len());
if remaining == 0 {
return result;
}
let tail_part = take_last_bytes_at_char_boundary(tail_slice, remaining);
result.push_str(tail_part);
result
}


@@ -329,7 +329,6 @@ mod tests {
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Bash(BashShell {
shell_path: "/bin/bash".into(),
bashrc_path: "/home/user/.bashrc".into(),
})),
);
let context2 = EnvironmentContext::new(
@@ -338,7 +337,6 @@ mod tests {
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Zsh(ZshShell {
shell_path: "/bin/zsh".into(),
zshrc_path: "/home/user/.zshrc".into(),
})),
);


@@ -2,7 +2,8 @@ use crate::codex::ProcessedResponseItem;
use crate::exec::ExecToolCallOutput;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use crate::truncate::truncate_middle;
use crate::truncate::TruncationPolicy;
use crate::truncate::truncate_text;
use chrono::DateTime;
use chrono::Datelike;
use chrono::Local;
@@ -109,6 +110,9 @@ pub enum CodexErr {
#[error("{0}")]
ConnectionFailed(ConnectionFailedError),
#[error("Quota exceeded. Check your plan and billing details.")]
QuotaExceeded,
#[error(
"To use Codex with your ChatGPT plan, upgrade to Plus: https://openai.com/chatgpt/pricing."
)]
@@ -235,18 +239,44 @@ pub struct UnexpectedResponseError {
pub request_id: Option<String>,
}
const CLOUDFLARE_BLOCKED_MESSAGE: &str =
"Access blocked by Cloudflare. This usually happens when connecting from a restricted region";
impl UnexpectedResponseError {
fn friendly_message(&self) -> Option<String> {
if self.status != StatusCode::FORBIDDEN {
return None;
}
if !self.body.contains("Cloudflare") || !self.body.contains("blocked") {
return None;
}
let mut message = format!("{CLOUDFLARE_BLOCKED_MESSAGE} (status {})", self.status);
if let Some(id) = &self.request_id {
message.push_str(&format!(", request id: {id}"));
}
Some(message)
}
}
impl std::fmt::Display for UnexpectedResponseError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"unexpected status {}: {}{}",
self.status,
self.body,
self.request_id
.as_ref()
.map(|id| format!(", request id: {id}"))
.unwrap_or_default()
)
if let Some(friendly) = self.friendly_message() {
write!(f, "{friendly}")
} else {
write!(
f,
"unexpected status {}: {}{}",
self.status,
self.body,
self.request_id
.as_ref()
.map(|id| format!(", request id: {id}"))
.unwrap_or_default()
)
}
}
}
@@ -432,7 +462,10 @@ pub fn get_error_message_ui(e: &CodexErr) -> String {
_ => e.to_string(),
};
truncate_middle(&message, ERROR_MESSAGE_UI_MAX_BYTES).0
truncate_text(
&message,
TruncationPolicy::Bytes(ERROR_MESSAGE_UI_MAX_BYTES),
)
}
#[cfg(test)]
@@ -662,6 +695,35 @@ mod tests {
});
}
#[test]
fn unexpected_status_cloudflare_html_is_simplified() {
let err = UnexpectedResponseError {
status: StatusCode::FORBIDDEN,
body: "<html><body>Cloudflare error: Sorry, you have been blocked</body></html>"
.to_string(),
request_id: Some("ray-id".to_string()),
};
let status = StatusCode::FORBIDDEN.to_string();
assert_eq!(
err.to_string(),
format!("{CLOUDFLARE_BLOCKED_MESSAGE} (status {status}), request id: ray-id")
);
}
#[test]
fn unexpected_status_non_html_is_unchanged() {
let err = UnexpectedResponseError {
status: StatusCode::FORBIDDEN,
body: "plain text error".to_string(),
request_id: None,
};
let status = StatusCode::FORBIDDEN.to_string();
assert_eq!(
err.to_string(),
format!("unexpected status {status}: plain text error")
);
}
#[test]
fn usage_limit_reached_includes_hours_and_minutes() {
let base = Utc.with_ymd_and_hms(2024, 1, 1, 0, 0, 0).unwrap();


@@ -14,6 +14,7 @@ use tracing::warn;
use uuid::Uuid;
use crate::user_instructions::UserInstructions;
use crate::user_shell_command::is_user_shell_command_text;
fn is_session_prefix(text: &str) -> bool {
let trimmed = text.trim_start();
@@ -31,7 +32,7 @@ fn parse_user_message(message: &[ContentItem]) -> Option<UserMessageItem> {
for content_item in message.iter() {
match content_item {
ContentItem::InputText { text } => {
if is_session_prefix(text) {
if is_session_prefix(text) || is_user_shell_command_text(text) {
return None;
}
content.push(UserInput::Text { text: text.clone() });
@@ -197,7 +198,14 @@ mod tests {
text: "# AGENTS.md instructions for test_directory\n\n<INSTRUCTIONS>\ntest_text\n</INSTRUCTIONS>".to_string(),
}],
},
];
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "<user_shell_command>echo 42</user_shell_command>".to_string(),
}],
},
];
for item in items {
let turn_item = parse_turn_item(&item);


@@ -189,16 +189,20 @@ async fn exec_windows_sandbox(
};
let sandbox_cwd = cwd.clone();
let logs_base_dir = find_codex_home().ok();
let codex_home = find_codex_home().map_err(|err| {
CodexErr::Io(io::Error::other(format!(
"windows sandbox: failed to resolve codex_home: {err}"
)))
})?;
let spawn_res = tokio::task::spawn_blocking(move || {
run_windows_sandbox_capture(
policy_str,
&sandbox_cwd,
codex_home.as_ref(),
command,
&cwd,
env,
timeout_ms,
logs_base_dir.as_deref(),
)
})
.await;
@@ -313,6 +317,10 @@ pub(crate) mod errors {
SandboxTransformError::MissingLinuxSandboxExecutable => {
CodexErr::LandlockSandboxExecutableNotProvided
}
#[cfg(not(target_os = "macos"))]
SandboxTransformError::SeatbeltUnavailable => CodexErr::UnsupportedOperation(
"seatbelt sandbox is only available on macOS".to_string(),
),
}
}
}
@@ -514,6 +522,7 @@ async fn consume_truncated_output(
}
Err(_) => {
// timeout
kill_child_process_group(&mut child)?;
child.start_kill()?;
// Debatable whether `child.wait().await` should be called here.
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE), true)
@@ -521,13 +530,58 @@ async fn consume_truncated_output(
}
}
_ = tokio::signal::ctrl_c() => {
kill_child_process_group(&mut child)?;
child.start_kill()?;
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE), false)
}
};
let stdout = stdout_handle.await??;
let stderr = stderr_handle.await??;
// Wait for the stdout/stderr collection tasks but guard against them
// hanging forever. In the normal case, both pipes are closed once the child
// terminates, so the tasks exit quickly. However, if the child process
// spawned grandchildren that inherited its stdout/stderr file descriptors,
// those pipes may stay open after we `kill` the direct child on timeout.
// That would cause the `read_capped` tasks to block on `read()`
// indefinitely, effectively hanging the whole agent.
const IO_DRAIN_TIMEOUT_MS: u64 = 2_000; // 2 s should be plenty for local pipes
// We need mutable bindings so we can `abort()` them on timeout.
use tokio::task::JoinHandle;
async fn await_with_timeout(
handle: &mut JoinHandle<std::io::Result<StreamOutput<Vec<u8>>>>,
timeout: Duration,
) -> std::io::Result<StreamOutput<Vec<u8>>> {
match tokio::time::timeout(timeout, &mut *handle).await {
Ok(join_res) => match join_res {
Ok(io_res) => io_res,
Err(join_err) => Err(std::io::Error::other(join_err)),
},
Err(_elapsed) => {
// Timeout: abort the task to avoid hanging on open pipes.
handle.abort();
Ok(StreamOutput {
text: Vec::new(),
truncated_after_lines: None,
})
}
}
}
let mut stdout_handle = stdout_handle;
let mut stderr_handle = stderr_handle;
let stdout = await_with_timeout(
&mut stdout_handle,
Duration::from_millis(IO_DRAIN_TIMEOUT_MS),
)
.await?;
let stderr = await_with_timeout(
&mut stderr_handle,
Duration::from_millis(IO_DRAIN_TIMEOUT_MS),
)
.await?;
drop(agg_tx);
@@ -617,6 +671,38 @@ fn synthetic_exit_status(code: i32) -> ExitStatus {
std::process::ExitStatus::from_raw(code as u32)
}
#[cfg(unix)]
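/// Best-effort SIGKILL of the child's entire process group so grandchildren
/// holding the inherited stdout/stderr pipes are terminated too; a NotFound
/// error means the group is already gone and is deliberately ignored.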
fn kill_child_process_group(child: &mut Child) -> io::Result<()> {
use std::io::ErrorKind;
if let Some(pid) = child.id() {
let pid = pid as libc::pid_t;
let pgid = unsafe { libc::getpgid(pid) };
if pgid == -1 {
let err = std::io::Error::last_os_error();
if err.kind() != ErrorKind::NotFound {
return Err(err);
}
return Ok(());
}
let result = unsafe { libc::killpg(pgid, libc::SIGKILL) };
if result == -1 {
let err = std::io::Error::last_os_error();
if err.kind() != ErrorKind::NotFound {
return Err(err);
}
}
}
Ok(())
}
#[cfg(not(unix))]
fn kill_child_process_group(_: &mut Child) -> io::Result<()> {
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
@@ -689,4 +775,51 @@ mod tests {
let output = make_exec_output(exit_code, "", "", "");
assert!(is_likely_sandbox_denied(SandboxType::LinuxSeccomp, &output));
}
#[cfg(unix)]
#[tokio::test]
async fn kill_child_process_group_kills_grandchildren_on_timeout() -> Result<()> {
let command = vec![
"/bin/bash".to_string(),
"-c".to_string(),
"sleep 60 & echo $!; sleep 60".to_string(),
];
let env: HashMap<String, String> = std::env::vars().collect();
let params = ExecParams {
command,
cwd: std::env::current_dir()?,
timeout_ms: Some(500),
env,
with_escalated_permissions: None,
justification: None,
arg0: None,
};
let output = exec(params, SandboxType::None, &SandboxPolicy::ReadOnly, None).await?;
assert!(output.timed_out);
let stdout = output.stdout.from_utf8_lossy().text;
let pid_line = stdout.lines().next().unwrap_or("").trim();
let pid: i32 = pid_line.parse().map_err(|error| {
io::Error::new(
io::ErrorKind::InvalidData,
format!("Failed to parse pid from stdout '{pid_line}': {error}"),
)
})?;
let mut killed = false;
for _ in 0..20 {
// Use kill(pid, 0) to check if the process is alive.
if unsafe { libc::kill(pid, 0) } == -1
&& let Some(libc::ESRCH) = std::io::Error::last_os_error().raw_os_error()
{
killed = true;
break;
}
tokio::time::sleep(Duration::from_millis(100)).await;
}
assert!(killed, "grandchild process with pid {pid} is still alive");
Ok(())
}
}


@@ -0,0 +1,365 @@
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use crate::command_safety::is_dangerous_command::requires_initial_appoval;
use codex_execpolicy2::Decision;
use codex_execpolicy2::Evaluation;
use codex_execpolicy2::Policy;
use codex_execpolicy2::PolicyParser;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use thiserror::Error;
use tokio::fs;
use crate::bash::parse_shell_lc_plain_commands;
use crate::features::Feature;
use crate::features::Features;
use crate::sandboxing::SandboxPermissions;
use crate::tools::sandboxing::ApprovalRequirement;
const FORBIDDEN_REASON: &str = "execpolicy forbids this command";
const PROMPT_REASON: &str = "execpolicy requires approval for this command";
const POLICY_DIR_NAME: &str = "policy";
const POLICY_EXTENSION: &str = "codexpolicy";
#[derive(Debug, Error)]
pub enum ExecPolicyError {
#[error("failed to read execpolicy files from {dir}: {source}")]
ReadDir {
dir: PathBuf,
source: std::io::Error,
},
#[error("failed to read execpolicy file {path}: {source}")]
ReadFile {
path: PathBuf,
source: std::io::Error,
},
#[error("failed to parse execpolicy file {path}: {source}")]
ParsePolicy {
path: String,
source: codex_execpolicy2::Error,
},
}
pub(crate) async fn exec_policy_for(
features: &Features,
codex_home: &Path,
) -> Result<Arc<Policy>, ExecPolicyError> {
if !features.enabled(Feature::ExecPolicy) {
return Ok(Arc::new(Policy::empty()));
}
let policy_dir = codex_home.join(POLICY_DIR_NAME);
let policy_paths = collect_policy_files(&policy_dir).await?;
let mut parser = PolicyParser::new();
for policy_path in &policy_paths {
let contents =
fs::read_to_string(policy_path)
.await
.map_err(|source| ExecPolicyError::ReadFile {
path: policy_path.clone(),
source,
})?;
let identifier = policy_path.to_string_lossy().to_string();
parser
.parse(&identifier, &contents)
.map_err(|source| ExecPolicyError::ParsePolicy {
path: identifier,
source,
})?;
}
let policy = Arc::new(parser.build());
tracing::debug!(
"loaded execpolicy from {} files in {}",
policy_paths.len(),
policy_dir.display()
);
Ok(policy)
}
fn evaluate_with_policy(
policy: &Policy,
command: &[String],
approval_policy: AskForApproval,
) -> Option<ApprovalRequirement> {
let commands = parse_shell_lc_plain_commands(command).unwrap_or_else(|| vec![command.to_vec()]);
let evaluation = policy.check_multiple(commands.iter());
match evaluation {
Evaluation::Match { decision, .. } => match decision {
Decision::Forbidden => Some(ApprovalRequirement::Forbidden {
reason: FORBIDDEN_REASON.to_string(),
}),
Decision::Prompt => {
let reason = PROMPT_REASON.to_string();
if matches!(approval_policy, AskForApproval::Never) {
Some(ApprovalRequirement::Forbidden { reason })
} else {
Some(ApprovalRequirement::NeedsApproval {
reason: Some(reason),
})
}
}
Decision::Allow => Some(ApprovalRequirement::Skip),
},
Evaluation::NoMatch { .. } => None,
}
}
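/// Precedence: an execpolicy match (allow, prompt, or forbidden) always wins;
/// only on `NoMatch` do the `requires_initial_appoval` heuristics decide.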
pub(crate) fn create_approval_requirement_for_command(
policy: &Policy,
command: &[String],
approval_policy: AskForApproval,
sandbox_policy: &SandboxPolicy,
sandbox_permissions: SandboxPermissions,
) -> ApprovalRequirement {
if let Some(requirement) = evaluate_with_policy(policy, command, approval_policy) {
return requirement;
}
if requires_initial_appoval(
approval_policy,
sandbox_policy,
command,
sandbox_permissions,
) {
ApprovalRequirement::NeedsApproval { reason: None }
} else {
ApprovalRequirement::Skip
}
}
async fn collect_policy_files(dir: &Path) -> Result<Vec<PathBuf>, ExecPolicyError> {
let mut read_dir = match fs::read_dir(dir).await {
Ok(read_dir) => read_dir,
Err(err) if err.kind() == ErrorKind::NotFound => return Ok(Vec::new()),
Err(source) => {
return Err(ExecPolicyError::ReadDir {
dir: dir.to_path_buf(),
source,
});
}
};
let mut policy_paths = Vec::new();
while let Some(entry) =
read_dir
.next_entry()
.await
.map_err(|source| ExecPolicyError::ReadDir {
dir: dir.to_path_buf(),
source,
})?
{
let path = entry.path();
let file_type = entry
.file_type()
.await
.map_err(|source| ExecPolicyError::ReadDir {
dir: dir.to_path_buf(),
source,
})?;
if path
.extension()
.and_then(|ext| ext.to_str())
.is_some_and(|ext| ext == POLICY_EXTENSION)
&& file_type.is_file()
{
policy_paths.push(path);
}
}
policy_paths.sort();
Ok(policy_paths)
}
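Tying it together: a policy is just one or more `prefix_rule` lines in any `*.codexpolicy` file under `$CODEX_HOME/policy/`. For instance, the fixture used by the tests below forbids `rm` outright:

prefix_rule(pattern=["rm"], decision="forbidden")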
#[cfg(test)]
mod tests {
use super::*;
use crate::features::Feature;
use crate::features::Features;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use pretty_assertions::assert_eq;
use std::fs;
use tempfile::tempdir;
#[tokio::test]
async fn returns_empty_policy_when_feature_disabled() {
let mut features = Features::with_defaults();
features.disable(Feature::ExecPolicy);
let temp_dir = tempdir().expect("create temp dir");
let policy = exec_policy_for(&features, temp_dir.path())
.await
.expect("policy result");
let commands = [vec!["rm".to_string()]];
assert!(matches!(
policy.check_multiple(commands.iter()),
Evaluation::NoMatch { .. }
));
assert!(!temp_dir.path().join(POLICY_DIR_NAME).exists());
}
#[tokio::test]
async fn collect_policy_files_returns_empty_when_dir_missing() {
let temp_dir = tempdir().expect("create temp dir");
let policy_dir = temp_dir.path().join(POLICY_DIR_NAME);
let files = collect_policy_files(&policy_dir)
.await
.expect("collect policy files");
assert!(files.is_empty());
}
#[tokio::test]
async fn loads_policies_from_policy_subdirectory() {
let temp_dir = tempdir().expect("create temp dir");
let policy_dir = temp_dir.path().join(POLICY_DIR_NAME);
fs::create_dir_all(&policy_dir).expect("create policy dir");
fs::write(
policy_dir.join("deny.codexpolicy"),
r#"prefix_rule(pattern=["rm"], decision="forbidden")"#,
)
.expect("write policy file");
let policy = exec_policy_for(&Features::with_defaults(), temp_dir.path())
.await
.expect("policy result");
let command = [vec!["rm".to_string()]];
assert!(matches!(
policy.check_multiple(command.iter()),
Evaluation::Match { .. }
));
}
#[tokio::test]
async fn ignores_policies_outside_policy_dir() {
let temp_dir = tempdir().expect("create temp dir");
fs::write(
temp_dir.path().join("root.codexpolicy"),
r#"prefix_rule(pattern=["ls"], decision="prompt")"#,
)
.expect("write policy file");
let policy = exec_policy_for(&Features::with_defaults(), temp_dir.path())
.await
.expect("policy result");
let command = [vec!["ls".to_string()]];
assert!(matches!(
policy.check_multiple(command.iter()),
Evaluation::NoMatch { .. }
));
}
#[test]
fn evaluates_bash_lc_inner_commands() {
let policy_src = r#"
prefix_rule(pattern=["rm"], decision="forbidden")
"#;
let mut parser = PolicyParser::new();
parser
.parse("test.codexpolicy", policy_src)
.expect("parse policy");
let policy = parser.build();
let forbidden_script = vec![
"bash".to_string(),
"-lc".to_string(),
"rm -rf /tmp".to_string(),
];
let requirement =
evaluate_with_policy(&policy, &forbidden_script, AskForApproval::OnRequest)
.expect("expected match for forbidden command");
assert_eq!(
requirement,
ApprovalRequirement::Forbidden {
reason: FORBIDDEN_REASON.to_string()
}
);
}
#[test]
fn approval_requirement_prefers_execpolicy_match() {
let policy_src = r#"prefix_rule(pattern=["rm"], decision="prompt")"#;
let mut parser = PolicyParser::new();
parser
.parse("test.codexpolicy", policy_src)
.expect("parse policy");
let policy = parser.build();
let command = vec!["rm".to_string()];
let requirement = create_approval_requirement_for_command(
&policy,
&command,
AskForApproval::OnRequest,
&SandboxPolicy::DangerFullAccess,
SandboxPermissions::UseDefault,
);
assert_eq!(
requirement,
ApprovalRequirement::NeedsApproval {
reason: Some(PROMPT_REASON.to_string())
}
);
}
#[test]
fn approval_requirement_respects_approval_policy() {
let policy_src = r#"prefix_rule(pattern=["rm"], decision="prompt")"#;
let mut parser = PolicyParser::new();
parser
.parse("test.codexpolicy", policy_src)
.expect("parse policy");
let policy = parser.build();
let command = vec!["rm".to_string()];
let requirement = create_approval_requirement_for_command(
&policy,
&command,
AskForApproval::Never,
&SandboxPolicy::DangerFullAccess,
SandboxPermissions::UseDefault,
);
assert_eq!(
requirement,
ApprovalRequirement::Forbidden {
reason: PROMPT_REASON.to_string()
}
);
}
#[test]
fn approval_requirement_falls_back_to_heuristics() {
let command = vec!["python".to_string()];
let empty_policy = Policy::empty();
let requirement = create_approval_requirement_for_command(
&empty_policy,
&command,
AskForApproval::UnlessTrusted,
&SandboxPolicy::ReadOnly,
SandboxPermissions::UseDefault,
);
assert_eq!(
requirement,
ApprovalRequirement::NeedsApproval { reason: None }
);
}
}
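Taken together, the loader above accepts any file in the policy directory whose extension matches POLICY_EXTENSION (the tests use `.codexpolicy`), parses each file with `PolicyParser`, and merges them into one `Policy`. A minimal sketch of the same parse-then-check flow, using only the API exercised by the tests in this file (the rule text and argv are illustrative):

    let mut parser = PolicyParser::new();
    parser
        .parse(
            "example.codexpolicy",
            r#"prefix_rule(pattern=["rm"], decision="prompt")"#,
        )
        .expect("parse policy");
    let policy = parser.build();
    // `check_multiple` takes an iterator over argv vectors; `evaluate_with_policy`
    // first expands `bash -lc` scripts via `parse_shell_lc_plain_commands`.
    let commands = [vec!["rm".to_string()]];
    match policy.check_multiple(commands.iter()) {
        Evaluation::Match { .. } => { /* a rule decided: allow, prompt, or forbidden */ }
        Evaluation::NoMatch { .. } => { /* fall through to the sandbox heuristics */ }
    }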


@@ -27,10 +27,13 @@ pub enum Stage {
/// Unique features toggled via configuration.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum Feature {
/// Create a ghost commit at each turn.
GhostCommit,
/// Use the single unified PTY-backed exec tool.
UnifiedExec,
/// Use the streamable exec-command/write-stdin tool pair.
StreamableShell,
/// Use the shell command tool that takes `command` as a single string of
/// shell instead of an array of args passed to `execvp(3)`.
ShellCommandTool,
/// Enable experimental RMCP features such as OAuth login.
RmcpClient,
/// Include the freeform apply_patch tool.
@@ -39,12 +42,18 @@ pub enum Feature {
ViewImageTool,
/// Allow the model to request web searches.
WebSearchRequest,
/// Gate the execpolicy enforcement for shell/unified exec.
ExecPolicy,
/// Enable the model-based risk assessments for sandboxed commands.
SandboxCommandAssessment,
/// Create a ghost commit at each turn.
GhostCommit,
/// Enable Windows sandbox (restricted token) on Windows.
WindowsSandbox,
/// Enable remote compaction (only for ChatGPT auth).
RemoteCompaction,
/// Enable the default shell tool.
ShellTool,
/// Allow the model to call multiple tools in parallel (only for models that support it).
ParallelToolCalls,
}
impl Feature {
@@ -118,8 +127,9 @@ impl Features {
self.enabled.contains(&f)
}
pub fn enable(&mut self, f: Feature) {
pub fn enable(&mut self, f: Feature) -> &mut Self {
self.enabled.insert(f);
self
}
pub fn disable(&mut self, f: Feature) -> &mut Self {
@@ -178,7 +188,6 @@ impl Features {
let base_legacy = LegacyFeatureToggles {
experimental_sandbox_command_assessment: cfg.experimental_sandbox_command_assessment,
experimental_use_freeform_apply_patch: cfg.experimental_use_freeform_apply_patch,
experimental_use_exec_command_tool: cfg.experimental_use_exec_command_tool,
experimental_use_unified_exec_tool: cfg.experimental_use_unified_exec_tool,
experimental_use_rmcp_client: cfg.experimental_use_rmcp_client,
tools_web_search: cfg.tools.as_ref().and_then(|t| t.web_search),
@@ -197,7 +206,7 @@ impl Features {
.experimental_sandbox_command_assessment,
experimental_use_freeform_apply_patch: config_profile
.experimental_use_freeform_apply_patch,
experimental_use_exec_command_tool: config_profile.experimental_use_exec_command_tool,
experimental_use_unified_exec_tool: config_profile.experimental_use_unified_exec_tool,
experimental_use_rmcp_client: config_profile.experimental_use_rmcp_client,
tools_web_search: config_profile.tools_web_search,
@@ -246,6 +255,14 @@ pub struct FeatureSpec {
}
pub const FEATURES: &[FeatureSpec] = &[
// Stable features.
FeatureSpec {
id: Feature::GhostCommit,
key: "undo",
stage: Stage::Stable,
default_enabled: true,
},
// Unstable features.
FeatureSpec {
id: Feature::UnifiedExec,
key: "unified_exec",
@@ -253,8 +270,8 @@ pub const FEATURES: &[FeatureSpec] = &[
default_enabled: false,
},
FeatureSpec {
id: Feature::StreamableShell,
key: "streamable_shell",
id: Feature::ShellCommandTool,
key: "shell_command_tool",
stage: Stage::Experimental,
default_enabled: false,
},
@@ -283,14 +300,14 @@ pub const FEATURES: &[FeatureSpec] = &[
default_enabled: false,
},
FeatureSpec {
id: Feature::SandboxCommandAssessment,
key: "experimental_sandbox_command_assessment",
id: Feature::ExecPolicy,
key: "exec_policy",
stage: Stage::Experimental,
default_enabled: false,
default_enabled: true,
},
FeatureSpec {
id: Feature::GhostCommit,
key: "ghost_commit",
id: Feature::SandboxCommandAssessment,
key: "experimental_sandbox_command_assessment",
stage: Stage::Experimental,
default_enabled: false,
},
@@ -300,4 +317,22 @@ pub const FEATURES: &[FeatureSpec] = &[
stage: Stage::Experimental,
default_enabled: false,
},
FeatureSpec {
id: Feature::RemoteCompaction,
key: "remote_compaction",
stage: Stage::Experimental,
default_enabled: false,
},
FeatureSpec {
id: Feature::ParallelToolCalls,
key: "parallel",
stage: Stage::Experimental,
default_enabled: false,
},
FeatureSpec {
id: Feature::ShellTool,
key: "shell_tool",
stage: Stage::Stable,
default_enabled: true,
},
];
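One API change in this hunk is easy to miss: `enable` now returns `&mut Self`, matching `disable`, so feature toggles chain. A small sketch with the types above:

    let mut features = Features::with_defaults();
    features
        .enable(Feature::ExecPolicy)
        .disable(Feature::GhostCommit);
    assert!(features.enabled(Feature::ExecPolicy));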


@@ -17,10 +17,6 @@ const ALIASES: &[Alias] = &[
legacy_key: "experimental_use_unified_exec_tool",
feature: Feature::UnifiedExec,
},
Alias {
legacy_key: "experimental_use_exec_command_tool",
feature: Feature::StreamableShell,
},
Alias {
legacy_key: "experimental_use_rmcp_client",
feature: Feature::RmcpClient,
@@ -54,7 +50,6 @@ pub struct LegacyFeatureToggles {
pub include_apply_patch_tool: Option<bool>,
pub experimental_sandbox_command_assessment: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub tools_web_search: Option<bool>,
@@ -81,12 +76,6 @@ impl LegacyFeatureToggles {
self.experimental_use_freeform_apply_patch,
"experimental_use_freeform_apply_patch",
);
set_if_some(
features,
Feature::StreamableShell,
self.experimental_use_exec_command_tool,
"experimental_use_exec_command_tool",
);
set_if_some(
features,
Feature::UnifiedExec,


@@ -13,6 +13,7 @@ mod client;
mod client_common;
pub mod codex;
mod codex_conversation;
mod compact_remote;
pub use codex_conversation::CodexConversation;
mod codex_delegate;
mod command_safety;
@@ -24,6 +25,7 @@ mod environment_context;
pub mod error;
pub mod exec;
pub mod exec_env;
mod exec_policy;
pub mod features;
mod flags;
pub mod git_info;
@@ -34,14 +36,18 @@ mod mcp_tool_call;
mod message_history;
mod model_provider_info;
pub mod parse_command;
pub mod powershell;
mod response_processing;
pub mod sandboxing;
pub mod token_data;
mod truncate;
mod unified_exec;
mod user_instructions;
pub use model_provider_info::BUILT_IN_OSS_MODEL_PROVIDER_ID;
pub use model_provider_info::DEFAULT_LMSTUDIO_PORT;
pub use model_provider_info::DEFAULT_OLLAMA_PORT;
pub use model_provider_info::LMSTUDIO_OSS_PROVIDER_ID;
pub use model_provider_info::ModelProviderInfo;
pub use model_provider_info::OLLAMA_OSS_PROVIDER_ID;
pub use model_provider_info::WireApi;
pub use model_provider_info::built_in_model_providers;
pub use model_provider_info::create_oss_provider_with_base_url;
@@ -81,6 +87,7 @@ mod function_tool;
mod state;
mod tasks;
mod user_notification;
mod user_shell_command;
pub mod util;
pub use apply_patch::CODEX_APPLY_PATCH_ARG1;
@@ -99,11 +106,12 @@ pub use client_common::Prompt;
pub use client_common::REVIEW_PROMPT;
pub use client_common::ResponseEvent;
pub use client_common::ResponseStream;
pub use codex::compact::content_items_to_text;
pub use codex_protocol::models::ContentItem;
pub use codex_protocol::models::LocalShellAction;
pub use codex_protocol::models::LocalShellExecAction;
pub use codex_protocol::models::LocalShellStatus;
pub use codex_protocol::models::ResponseItem;
pub use compact::content_items_to_text;
pub use event_mapping::parse_turn_item;
pub mod compact;
pub mod otel_init;

File diff suppressed because it is too large.


@@ -1,10 +1,17 @@
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::Verbosity;
use crate::config::types::ReasoningSummaryFormat;
use crate::tools::handlers::apply_patch::ApplyPatchToolType;
use crate::tools::spec::ConfigShellToolType;
use crate::truncate::TruncationPolicy;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
const GPT_5_CODEX_INSTRUCTIONS: &str = include_str!("../gpt_5_codex_prompt.md");
const GPT_5_1_INSTRUCTIONS: &str = include_str!("../gpt_5_1_prompt.md");
/// A model family is a group of models that share certain characteristics.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
@@ -26,15 +33,12 @@ pub struct ModelFamily {
// `summary` is optional).
pub supports_reasoning_summaries: bool,
// The reasoning effort to use for this model family when none is explicitly chosen.
pub default_reasoning_effort: Option<ReasoningEffort>,
// Defines whether this model family needs special handling of the reasoning summary.
pub reasoning_summary_format: ReasoningSummaryFormat,
// This should be set to true when the model expects a tool named
// "local_shell" to be provided. Its contract must be understood natively by
// the model such that its description can be omitted.
// See https://platform.openai.com/docs/guides/tools-local-shell
pub uses_local_shell_tool: bool,
/// Whether this model supports parallel tool calls when using the
/// Responses API.
pub supports_parallel_tool_calls: bool,
@@ -57,6 +61,14 @@ pub struct ModelFamily {
/// If the model family supports setting the verbosity level when using Responses API.
pub support_verbosity: bool,
// The default verbosity level for this model family when using Responses API.
pub default_verbosity: Option<Verbosity>,
/// Preferred shell tool type for this model family when features do not override it.
pub shell_type: ConfigShellToolType,
pub truncation_policy: TruncationPolicy,
}
macro_rules! model_family {
@@ -64,20 +76,25 @@ macro_rules! model_family {
$slug:expr, $family:expr $(, $key:ident : $value:expr )* $(,)?
) => {{
// defaults
#[allow(unused_mut)]
let mut mf = ModelFamily {
slug: $slug.to_string(),
family: $family.to_string(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
supports_parallel_tool_calls: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
experimental_supported_tools: Vec::new(),
effective_context_window_percent: 95,
support_verbosity: false,
shell_type: ConfigShellToolType::Default,
default_verbosity: None,
default_reasoning_effort: None,
truncation_policy: TruncationPolicy::Bytes(10_000),
};
// apply overrides
$(
mf.$key = $value;
@@ -105,8 +122,8 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, "codex-mini-latest",
supports_reasoning_summaries: true,
uses_local_shell_tool: true,
needs_special_apply_patch_instructions: true,
shell_type: ConfigShellToolType::Local,
)
} else if slug.starts_with("gpt-4.1") {
model_family!(
@@ -119,7 +136,7 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(slug, "gpt-4o", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("gpt-3.5") {
model_family!(slug, "gpt-3.5", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("test-gpt-5-codex") {
} else if slug.starts_with("test-gpt-5") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
@@ -133,6 +150,7 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
],
supports_parallel_tool_calls: true,
support_verbosity: true,
truncation_policy: TruncationPolicy::Tokens(10_000),
)
// Internal models.
@@ -148,19 +166,39 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
"list_dir".to_string(),
"read_file".to_string(),
],
shell_type: if cfg!(windows) { ConfigShellToolType::ShellCommand } else { ConfigShellToolType::Default },
supports_parallel_tool_calls: true,
support_verbosity: true,
truncation_policy: TruncationPolicy::Tokens(10_000),
)
// Production models.
} else if slug.starts_with("gpt-5-codex") || slug.starts_with("codex-") {
} else if slug.starts_with("gpt-5-codex")
|| slug.starts_with("gpt-5.1-codex")
|| slug.starts_with("codex-")
{
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: GPT_5_CODEX_INSTRUCTIONS.to_string(),
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
shell_type: if cfg!(windows) { ConfigShellToolType::ShellCommand } else { ConfigShellToolType::Default },
supports_parallel_tool_calls: true,
support_verbosity: false,
truncation_policy: TruncationPolicy::Tokens(10_000),
)
} else if slug.starts_with("gpt-5.1") {
model_family!(
slug, "gpt-5.1",
supports_reasoning_summaries: true,
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
support_verbosity: true,
default_verbosity: Some(Verbosity::Low),
base_instructions: GPT_5_1_INSTRUCTIONS.to_string(),
default_reasoning_effort: Some(ReasoningEffort::Medium),
truncation_policy: TruncationPolicy::Bytes(10_000),
supports_parallel_tool_calls: true,
)
} else if slug.starts_with("gpt-5") {
model_family!(
@@ -168,6 +206,7 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
support_verbosity: true,
truncation_policy: TruncationPolicy::Bytes(10_000),
)
} else {
None
@@ -181,12 +220,15 @@ pub fn derive_default_model_family(model: &str) -> ModelFamily {
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
supports_parallel_tool_calls: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
experimental_supported_tools: Vec::new(),
effective_context_window_percent: 95,
support_verbosity: false,
shell_type: ConfigShellToolType::Default,
default_verbosity: None,
default_reasoning_effort: None,
truncation_policy: TruncationPolicy::Bytes(10_000),
}
}
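The overrides are easiest to read back through the public constructors. A short sketch, assuming the slugs matched above (`gpt-5.1-codex` takes the codex branch; unknown slugs fall back to the defaults):

    let family = find_family_for_model("gpt-5.1-codex").expect("known model family");
    assert!(family.supports_parallel_tool_calls);
    assert_eq!(family.truncation_policy, TruncationPolicy::Tokens(10_000));

    // Unknown slugs get `derive_default_model_family`: conservative defaults,
    // byte-based truncation, no parallel tool calls.
    let fallback = derive_default_model_family("some-unknown-model");
    assert!(!fallback.supports_parallel_tool_calls);
    assert_eq!(fallback.truncation_policy, TruncationPolicy::Bytes(10_000));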


@@ -8,6 +8,7 @@
use crate::CodexAuth;
use crate::default_client::CodexHttpClient;
use crate::default_client::CodexRequestBuilder;
use crate::error::CodexErr;
use codex_app_server_protocol::AuthMode;
use serde::Deserialize;
use serde::Serialize;
@@ -109,21 +110,7 @@ impl ModelProviderInfo {
client: &'a CodexHttpClient,
auth: &Option<CodexAuth>,
) -> crate::error::Result<CodexRequestBuilder> {
let effective_auth = if let Some(secret_key) = &self.experimental_bearer_token {
Some(CodexAuth::from_api_key(secret_key))
} else {
match self.api_key() {
Ok(Some(key)) => Some(CodexAuth::from_api_key(&key)),
Ok(None) => auth.clone(),
Err(err) => {
if auth.is_some() {
auth.clone()
} else {
return Err(err);
}
}
}
};
let effective_auth = self.effective_auth(auth)?;
let url = self.get_full_url(&effective_auth);
@@ -136,6 +123,51 @@ impl ModelProviderInfo {
Ok(self.apply_http_headers(builder))
}
pub async fn create_compact_request_builder<'a>(
&'a self,
client: &'a CodexHttpClient,
auth: &Option<CodexAuth>,
) -> crate::error::Result<CodexRequestBuilder> {
if self.wire_api != WireApi::Responses {
return Err(CodexErr::UnsupportedOperation(
"Compaction endpoint requires Responses API providers".to_string(),
));
}
let effective_auth = self.effective_auth(auth)?;
let url = self.get_compact_url(&effective_auth).ok_or_else(|| {
CodexErr::UnsupportedOperation(
"Compaction endpoint requires Responses API providers".to_string(),
)
})?;
let mut builder = client.post(url);
if let Some(auth) = effective_auth.as_ref() {
builder = builder.bearer_auth(auth.get_token().await?);
}
Ok(self.apply_http_headers(builder))
}
fn effective_auth(&self, auth: &Option<CodexAuth>) -> crate::error::Result<Option<CodexAuth>> {
if let Some(secret_key) = &self.experimental_bearer_token {
return Ok(Some(CodexAuth::from_api_key(secret_key)));
}
match self.api_key() {
Ok(Some(key)) => Ok(Some(CodexAuth::from_api_key(&key))),
Ok(None) => Ok(auth.clone()),
Err(err) => {
if auth.is_some() {
Ok(auth.clone())
} else {
Err(err)
}
}
}
}
fn get_query_string(&self) -> String {
self.query_params
.as_ref()
@@ -173,6 +205,18 @@ impl ModelProviderInfo {
}
}
pub(crate) fn get_compact_url(&self, auth: &Option<CodexAuth>) -> Option<String> {
if self.wire_api != WireApi::Responses {
return None;
}
let full = self.get_full_url(auth);
if let Some((path, query)) = full.split_once('?') {
Some(format!("{path}/compact?{query}"))
} else {
Some(format!("{full}/compact"))
}
}
pub(crate) fn is_azure_responses_endpoint(&self) -> bool {
if self.wire_api != WireApi::Responses {
return false;
@@ -258,9 +302,11 @@ impl ModelProviderInfo {
}
}
const DEFAULT_OLLAMA_PORT: u32 = 11434;
pub const DEFAULT_LMSTUDIO_PORT: u16 = 1234;
pub const DEFAULT_OLLAMA_PORT: u16 = 11434;
pub const BUILT_IN_OSS_MODEL_PROVIDER_ID: &str = "oss";
pub const LMSTUDIO_OSS_PROVIDER_ID: &str = "lmstudio";
pub const OLLAMA_OSS_PROVIDER_ID: &str = "ollama";
/// Built-in default provider list.
pub fn built_in_model_providers() -> HashMap<String, ModelProviderInfo> {
@@ -311,14 +357,21 @@ pub fn built_in_model_providers() -> HashMap<String, ModelProviderInfo> {
requires_openai_auth: true,
},
),
(BUILT_IN_OSS_MODEL_PROVIDER_ID, create_oss_provider()),
(
OLLAMA_OSS_PROVIDER_ID,
create_oss_provider(DEFAULT_OLLAMA_PORT, WireApi::Chat),
),
(
LMSTUDIO_OSS_PROVIDER_ID,
create_oss_provider(DEFAULT_LMSTUDIO_PORT, WireApi::Responses),
),
]
.into_iter()
.map(|(k, v)| (k.to_string(), v))
.collect()
}
pub fn create_oss_provider() -> ModelProviderInfo {
pub fn create_oss_provider(default_provider_port: u16, wire_api: WireApi) -> ModelProviderInfo {
// These CODEX_OSS_ environment variables are experimental: we may
// switch to reading values from config.toml instead.
let codex_oss_base_url = match std::env::var("CODEX_OSS_BASE_URL")
@@ -331,22 +384,21 @@ pub fn create_oss_provider() -> ModelProviderInfo {
port = std::env::var("CODEX_OSS_PORT")
.ok()
.filter(|v| !v.trim().is_empty())
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(DEFAULT_OLLAMA_PORT)
.and_then(|v| v.parse::<u16>().ok())
.unwrap_or(default_provider_port)
),
};
create_oss_provider_with_base_url(&codex_oss_base_url)
create_oss_provider_with_base_url(&codex_oss_base_url, wire_api)
}
pub fn create_oss_provider_with_base_url(base_url: &str) -> ModelProviderInfo {
pub fn create_oss_provider_with_base_url(base_url: &str, wire_api: WireApi) -> ModelProviderInfo {
ModelProviderInfo {
name: "gpt-oss".into(),
base_url: Some(base_url.into()),
env_key: None,
env_key_instructions: None,
experimental_bearer_token: None,
wire_api: WireApi::Chat,
wire_api,
query_params: None,
http_headers: None,
env_http_headers: None,
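`get_compact_url` splices the `/compact` path segment in before any query string rather than appending blindly. The string manipulation in isolation (the base URL here is hypothetical):

    let full = "https://example.com/v1/responses?api-version=preview";
    let compact = match full.split_once('?') {
        Some((path, query)) => format!("{path}/compact?{query}"),
        None => format!("{full}/compact"),
    };
    assert_eq!(
        compact,
        "https://example.com/v1/responses/compact?api-version=preview"
    );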


@@ -70,7 +70,7 @@ pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
// https://platform.openai.com/docs/models/gpt-3.5-turbo
"gpt-3.5-turbo" => Some(ModelInfo::new(16_385, 4_096)),
_ if slug.starts_with("gpt-5-codex") => {
_ if slug.starts_with("gpt-5-codex") || slug.starts_with("gpt-5.1-codex") => {
Some(ModelInfo::new(CONTEXT_WINDOW_272K, MAX_OUTPUT_TOKENS_128K))
}


@@ -1,15 +1,22 @@
use crate::bash::extract_bash_command;
use crate::bash::try_parse_shell;
use crate::bash::try_parse_word_only_commands_sequence;
use crate::powershell::extract_powershell_command;
use codex_protocol::parse_command::ParsedCommand;
use shlex::split as shlex_split;
use shlex::try_join as shlex_try_join;
use std::path::PathBuf;
fn shlex_join(tokens: &[String]) -> String {
pub fn shlex_join(tokens: &[String]) -> String {
shlex_try_join(tokens.iter().map(String::as_str))
.unwrap_or_else(|_| "<command included NUL byte>".to_string())
}
/// Extracts the shell and script from a command, regardless of platform
pub fn extract_shell_command(command: &[String]) -> Option<(&str, &str)> {
extract_bash_command(command).or_else(|| extract_powershell_command(command))
}
/// DO NOT REVIEW THIS CODE BY HAND
/// This parsing code is quite complex and not easy to hand-modify.
/// The easiest way to iterate is to add unit tests and have Codex fix the implementation.
@@ -853,6 +860,65 @@ mod tests {
}],
);
}
#[test]
fn bin_bash_lc_sed() {
assert_parsed(
&shlex_split_safe("/bin/bash -lc 'sed -n '1,10p' Cargo.toml'"),
vec![ParsedCommand::Read {
cmd: "sed -n '1,10p' Cargo.toml".to_string(),
name: "Cargo.toml".to_string(),
path: PathBuf::from("Cargo.toml"),
}],
);
}
#[test]
fn bin_zsh_lc_sed() {
assert_parsed(
&shlex_split_safe("/bin/zsh -lc 'sed -n '1,10p' Cargo.toml'"),
vec![ParsedCommand::Read {
cmd: "sed -n '1,10p' Cargo.toml".to_string(),
name: "Cargo.toml".to_string(),
path: PathBuf::from("Cargo.toml"),
}],
);
}
#[test]
fn powershell_command_is_stripped() {
assert_parsed(
&vec_str(&["powershell", "-Command", "Get-ChildItem"]),
vec![ParsedCommand::Unknown {
cmd: "Get-ChildItem".to_string(),
}],
);
}
#[test]
fn pwsh_with_noprofile_and_c_alias_is_stripped() {
assert_parsed(
&vec_str(&["pwsh", "-NoProfile", "-c", "Write-Host hi"]),
vec![ParsedCommand::Unknown {
cmd: "Write-Host hi".to_string(),
}],
);
}
#[test]
fn powershell_with_path_is_stripped() {
let command = if cfg!(windows) {
"C:\\windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
} else {
"/usr/local/bin/powershell.exe"
};
assert_parsed(
&vec_str(&[command, "-NoProfile", "-c", "Write-Host hi"]),
vec![ParsedCommand::Unknown {
cmd: "Write-Host hi".to_string(),
}],
);
}
}
pub fn parse_command_impl(command: &[String]) -> Vec<ParsedCommand> {
@@ -860,6 +926,12 @@ pub fn parse_command_impl(command: &[String]) -> Vec<ParsedCommand> {
return commands;
}
if let Some((_, script)) = extract_powershell_command(command) {
return vec![ParsedCommand::Unknown {
cmd: script.to_string(),
}];
}
let normalized = normalize_tokens(command);
let parts = if contains_connectors(&normalized) {
@@ -1166,18 +1238,14 @@ fn parse_find_query_and_path(tail: &[String]) -> (Option<String>, Option<String>
}
fn parse_shell_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
let [shell, flag, script] = original else {
return None;
};
if flag != "-lc" || !(shell == "bash" || shell == "zsh") {
return None;
}
// Only handle bash/zsh here; PowerShell is stripped separately without bash parsing.
let (_, script) = extract_bash_command(original)?;
if let Some(tree) = try_parse_shell(script)
&& let Some(all_commands) = try_parse_word_only_commands_sequence(&tree, script)
&& !all_commands.is_empty()
{
let script_tokens = shlex_split(script)
.unwrap_or_else(|| vec![shell.clone(), flag.clone(), script.clone()]);
let script_tokens = shlex_split(script).unwrap_or_else(|| vec![script.to_string()]);
// Strip small formatting helpers (e.g., head/tail/awk/wc/etc) so we
// bias toward the primary command when pipelines are present.
// First, drop obvious small formatting helpers (e.g., wc/awk/etc).
@@ -1186,7 +1254,7 @@ fn parse_shell_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
let filtered_commands = drop_small_formatting_commands(all_commands);
if filtered_commands.is_empty() {
return Some(vec![ParsedCommand::Unknown {
cmd: script.clone(),
cmd: script.to_string(),
}]);
}
// Build parsed commands, tracking `cd` segments to compute effective file paths.
@@ -1250,7 +1318,7 @@ fn parse_shell_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
});
if has_pipe && has_sed_n {
ParsedCommand::Read {
cmd: script.clone(),
cmd: script.to_string(),
name,
path,
}
@@ -1295,7 +1363,7 @@ fn parse_shell_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
return Some(commands);
}
Some(vec![ParsedCommand::Unknown {
cmd: script.clone(),
cmd: script.to_string(),
}])
}
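With this change both shell front ends funnel through `extract_shell_command`: bash/zsh `-lc` scripts go to the word-only parser, while PowerShell scripts are stripped without bash parsing. A sketch of the dispatch, assuming `extract_bash_command` matches the `bash -lc <script>` shape used throughout this file:

    let bash = vec!["bash".to_string(), "-lc".to_string(), "ls -la".to_string()];
    let pwsh = vec![
        "pwsh".to_string(),
        "-NoProfile".to_string(),
        "-c".to_string(),
        "Get-ChildItem".to_string(),
    ];
    assert_eq!(extract_shell_command(&bash), Some(("bash", "ls -la")));
    assert_eq!(extract_shell_command(&pwsh), Some(("pwsh", "Get-ChildItem")));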


@@ -0,0 +1,93 @@
use std::path::PathBuf;
use crate::shell::ShellType;
use crate::shell::detect_shell_type;
const POWERSHELL_FLAGS: &[&str] = &["-nologo", "-noprofile", "-command", "-c"];
/// Extract the PowerShell script body from an invocation such as:
///
/// - ["pwsh", "-NoProfile", "-Command", "Get-ChildItem -Recurse | Select-String foo"]
/// - ["powershell.exe", "-Command", "Write-Host hi"]
/// - ["powershell", "-NoLogo", "-NoProfile", "-Command", "...script..."]
///
/// Returns (`shell`, `script`) when the first arg is a PowerShell executable and a
/// `-Command` (or `-c`) flag is present followed by a script string.
pub fn extract_powershell_command(command: &[String]) -> Option<(&str, &str)> {
if command.len() < 3 {
return None;
}
let shell = &command[0];
if detect_shell_type(&PathBuf::from(shell)) != Some(ShellType::PowerShell) {
return None;
}
// Find the first occurrence of -Command (accept common short alias -c as well)
let mut i = 1usize;
while i + 1 < command.len() {
let flag = &command[i];
// Reject unknown flags
if !POWERSHELL_FLAGS.contains(&flag.to_ascii_lowercase().as_str()) {
return None;
}
if flag.eq_ignore_ascii_case("-Command") || flag.eq_ignore_ascii_case("-c") {
let script = &command[i + 1];
return Some((shell, script.as_str()));
}
i += 1;
}
None
}
#[cfg(test)]
mod tests {
use super::extract_powershell_command;
#[test]
fn extracts_basic_powershell_command() {
let cmd = vec![
"powershell".to_string(),
"-Command".to_string(),
"Write-Host hi".to_string(),
];
let (_shell, script) = extract_powershell_command(&cmd).expect("extract");
assert_eq!(script, "Write-Host hi");
}
#[test]
fn extracts_lowercase_flags() {
let cmd = vec![
"powershell".to_string(),
"-nologo".to_string(),
"-command".to_string(),
"Write-Host hi".to_string(),
];
let (_shell, script) = extract_powershell_command(&cmd).expect("extract");
assert_eq!(script, "Write-Host hi");
}
#[test]
fn extracts_full_path_powershell_command() {
let command = if cfg!(windows) {
"C:\\windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe".to_string()
} else {
"/usr/local/bin/powershell.exe".to_string()
};
let cmd = vec![command, "-Command".to_string(), "Write-Host hi".to_string()];
let (_shell, script) = extract_powershell_command(&cmd).expect("extract");
assert_eq!(script, "Write-Host hi");
}
#[test]
fn extracts_with_noprofile_and_alias() {
let cmd = vec![
"pwsh".to_string(),
"-NoProfile".to_string(),
"-c".to_string(),
"Get-ChildItem | Select-String foo".to_string(),
];
let (_shell, script) = extract_powershell_command(&cmd).expect("extract");
assert_eq!(script, "Get-ChildItem | Select-String foo");
}
}
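The flag scan is deliberately conservative: any flag outside POWERSHELL_FLAGS aborts extraction, so invocations such as `pwsh -File script.ps1` are left for ordinary command handling. A negative-case sketch complementing the tests above:

    let cmd = vec![
        "pwsh".to_string(),
        "-File".to_string(),
        "script.ps1".to_string(),
    ];
    // "-file" is not in POWERSHELL_FLAGS, so no script is extracted.
    assert!(extract_powershell_command(&cmd).is_none());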


@@ -13,92 +13,58 @@ pub(crate) async fn process_items(
sess: &Session,
turn_context: &TurnContext,
) -> (Vec<ResponseInputItem>, Vec<ResponseItem>) {
let mut items_to_record_in_conversation_history = Vec::<ResponseItem>::new();
let mut outputs_to_record = Vec::<ResponseItem>::new();
let mut new_inputs_to_record = Vec::<ResponseItem>::new();
let mut responses = Vec::<ResponseInputItem>::new();
for processed_response_item in processed_items {
let crate::codex::ProcessedResponseItem { item, response } = processed_response_item;
match (&item, &response) {
(ResponseItem::Message { role, .. }, None) if role == "assistant" => {
// If the model returned a message, we need to record it.
items_to_record_in_conversation_history.push(item);
}
(
ResponseItem::LocalShellCall { .. },
Some(ResponseInputItem::FunctionCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(ResponseItem::FunctionCallOutput {
if let Some(response) = &response {
responses.push(response.clone());
}
match response {
Some(ResponseInputItem::FunctionCallOutput { call_id, output }) => {
new_inputs_to_record.push(ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: output.clone(),
});
}
(
ResponseItem::FunctionCall { .. },
Some(ResponseInputItem::FunctionCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(ResponseItem::FunctionCallOutput {
Some(ResponseInputItem::CustomToolCallOutput { call_id, output }) => {
new_inputs_to_record.push(ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: output.clone(),
});
}
(
ResponseItem::CustomToolCall { .. },
Some(ResponseInputItem::CustomToolCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: output.clone(),
});
}
(
ResponseItem::FunctionCall { .. },
Some(ResponseInputItem::McpToolCallOutput { call_id, result }),
) => {
items_to_record_in_conversation_history.push(item);
Some(ResponseInputItem::McpToolCallOutput { call_id, result }) => {
let output = match result {
Ok(call_tool_result) => FunctionCallOutputPayload::from(call_tool_result),
Ok(call_tool_result) => FunctionCallOutputPayload::from(&call_tool_result),
Err(err) => FunctionCallOutputPayload {
content: err.clone(),
success: Some(false),
..Default::default()
},
};
items_to_record_in_conversation_history.push(ResponseItem::FunctionCallOutput {
new_inputs_to_record.push(ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output,
});
}
(
ResponseItem::Reasoning {
id,
summary,
content,
encrypted_content,
},
None,
) => {
items_to_record_in_conversation_history.push(ResponseItem::Reasoning {
id: id.clone(),
summary: summary.clone(),
content: content.clone(),
encrypted_content: encrypted_content.clone(),
});
}
None => {}
_ => {
warn!("Unexpected response item: {item:?} with response: {response:?}");
}
};
if let Some(response) = response {
responses.push(response);
}
outputs_to_record.push(item);
}
let all_items_to_record = [outputs_to_record, new_inputs_to_record].concat();
// Only attempt to take the lock if there is something to record.
if !items_to_record_in_conversation_history.is_empty() {
sess.record_conversation_items(turn_context, &items_to_record_in_conversation_history)
if !all_items_to_record.is_empty() {
sess.record_conversation_items(turn_context, &all_items_to_record)
.await;
}
(responses, items_to_record_in_conversation_history)
(responses, all_items_to_record)
}


@@ -27,7 +27,8 @@ pub(crate) fn should_persist_response_item(item: &ResponseItem) -> bool {
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::GhostSnapshot { .. } => true,
| ResponseItem::GhostSnapshot { .. }
| ResponseItem::CompactionSummary { .. } => true,
ResponseItem::Other => false,
}
}
@@ -72,6 +73,8 @@ pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::GetHistoryEntryResponse(_)
| EventMsg::UndoStarted(_)
| EventMsg::McpListToolsResponse(_)
| EventMsg::McpStartupUpdate(_)
| EventMsg::McpStartupComplete(_)
| EventMsg::ListCustomPromptsResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::ShutdownComplete


@@ -814,6 +814,7 @@ async fn test_tail_skips_trailing_non_responses() -> Result<()> {
timestamp: format!("{ts}-compacted"),
item: RolloutItem::Compacted(CompactedItem {
message: "compacted".into(),
replacement_history: None,
}),
};
writeln!(file, "{}", serde_json::to_string(&compacted_line)?)?;

Some files were not shown because too many files have changed in this diff.