Compare commits

...

9 Commits

Author SHA1 Message Date
jif-oai
8558e8aa51 codex debug 3 (guardian approved) (#17119)
Removes lines 15-21 from core/templates/agents/orchestrator.md.
2026-04-08 14:10:22 +01:00
jif-oai
22c1fc0131 codex debug 1 (guardian approved) (#17117)
Removes lines 1-7 from core/templates/agents/orchestrator.md.
2026-04-08 14:10:15 +01:00
jif-oai
2bbab7d8f9 feat: single app-server bootstrap in TUI (#16582)
Before this, the TUI was starting two app-servers: one to check the login
status and one to actually start the session.

This PR makes the TUI start only one app-server and defers the login check
to an async task, outside of the frame rendering path.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-04-08 13:49:06 +01:00
Vivian Fang
d47b755aa2 Render namespace description for tools (#16879) 2026-04-08 02:39:40 -07:00
Vivian Fang
9091999c83 Render function attribute descriptions (#16880) 2026-04-08 02:10:45 -07:00
Vivian Fang
ea516f9a40 Support anyOf and enum in JsonSchema (#16875)
This brings us into better alignment with the JSON schema subset that is
supported in
<https://developers.openai.com/api/docs/guides/structured-outputs#supported-schemas>,
and also allows us to render richer function signatures in code mode
(e.g., anyOf{null, OtherObjectType})
2026-04-08 01:07:55 -07:00
Eric Traut
abc678f9e8 Remove obsolete codex-cli README (#17096)
Problem: codex-cli/README.md is obsolete and confusing to keep around.

Solution: Delete codex-cli/README.md so the stale README is no longer
present in the repository.
2026-04-08 00:18:23 -07:00
Eric Traut
79768dd61c Remove expired April 2nd tooltip copy (#16698)
Addresses #16677

Problem: Paid-plan startup tooltips still advertised 2x rate limits
until April 2nd after that promo had expired.

Solution: Remove the stale expiry copy and use evergreen Codex App /
Codex startup tips instead.
2026-04-07 22:20:04 -07:00
viyatb-oai
3c1adbabcd fix: refresh network proxy settings when sandbox mode changes (#17040)
## Summary

Fix network proxy sessions so changing sandbox mode recomputes the
effective managed network policy and applies it to the already-running
per-session proxy.

## Root Cause

`danger_full_access_denylist_only` injects `"*"` only while building the
proxy spec for Full Access. Sessions built that spec once at startup, so
a later permission switch to Full Access left the live proxy in its
original restricted policy. Switching back needed the same recompute
path to remove the synthetic wildcard again.

## What Changed

- Preserve the original managed network proxy config/requirements so the
effective spec can be recomputed for a new sandbox policy.
- Refresh the current session proxy when sandbox settings change, then
reapply exec-policy network overlays.
- Add an in-place proxy state update path while rejecting
listener/port/SOCKS changes that cannot be hot-reloaded.
- Keep runtime proxy settings cheap to snapshot and update.
- Add regression coverage for workspace-write -> Full Access ->
workspace-write.
2026-04-08 03:07:55 +00:00
68 changed files with 2733 additions and 2638 deletions

View File

@@ -63,10 +63,5 @@ jobs:
- name: Check root README ToC
run: python3 scripts/readme_toc.py README.md
- name: Ensure codex-cli/README.md contains only ASCII and certain Unicode code points
run: ./scripts/asciicheck.py codex-cli/README.md
- name: Check codex-cli/README ToC
run: python3 scripts/readme_toc.py codex-cli/README.md
- name: Prettier (run `pnpm run format:fix` to fix)
run: pnpm run format

View File

@@ -1,736 +0,0 @@
<h1 align="center">OpenAI Codex CLI</h1>
<p align="center">Lightweight coding agent that runs in your terminal</p>
<p align="center"><code>npm i -g @openai/codex</code></p>
> [!IMPORTANT]
> This is the documentation for the _legacy_ TypeScript implementation of the Codex CLI. It has been superseded by the _Rust_ implementation. See the [README in the root of the Codex repository](https://github.com/openai/codex/blob/main/README.md) for details.
![Codex demo GIF using: codex "explain this codebase to me"](../.github/demo.gif)
---
<details>
<summary><strong>Table of contents</strong></summary>
<!-- Begin ToC -->
- [Experimental technology disclaimer](#experimental-technology-disclaimer)
- [Quickstart](#quickstart)
- [Why Codex?](#why-codex)
- [Security model & permissions](#security-model--permissions)
- [Platform sandboxing details](#platform-sandboxing-details)
- [System requirements](#system-requirements)
- [CLI reference](#cli-reference)
- [Memory & project docs](#memory--project-docs)
- [Non-interactive / CI mode](#non-interactive--ci-mode)
- [Tracing / verbose logging](#tracing--verbose-logging)
- [Recipes](#recipes)
- [Installation](#installation)
- [Configuration guide](#configuration-guide)
- [Basic configuration parameters](#basic-configuration-parameters)
- [Custom AI provider configuration](#custom-ai-provider-configuration)
- [History configuration](#history-configuration)
- [Configuration examples](#configuration-examples)
- [Full configuration example](#full-configuration-example)
- [Custom instructions](#custom-instructions)
- [Environment variables setup](#environment-variables-setup)
- [FAQ](#faq)
- [Zero data retention (ZDR) usage](#zero-data-retention-zdr-usage)
- [Codex open source fund](#codex-open-source-fund)
- [Contributing](#contributing)
- [Development workflow](#development-workflow)
- [Git hooks with Husky](#git-hooks-with-husky)
- [Debugging](#debugging)
- [Writing high-impact code changes](#writing-high-impact-code-changes)
- [Opening a pull request](#opening-a-pull-request)
- [Review process](#review-process)
- [Community values](#community-values)
- [Getting help](#getting-help)
- [Contributor license agreement (CLA)](#contributor-license-agreement-cla)
- [Quick fixes](#quick-fixes)
- [Releasing `codex`](#releasing-codex)
- [Alternative build options](#alternative-build-options)
- [Nix flake development](#nix-flake-development)
- [Security & responsible AI](#security--responsible-ai)
- [License](#license)
<!-- End ToC -->
</details>
---
## Experimental technology disclaimer
Codex CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:
- Bug reports
- Feature requests
- Pull requests
- Good vibes
Help us improve by filing issues or submitting PRs (see the section below for how to contribute)!
## Quickstart
Install globally:
```shell
npm install -g @openai/codex
```
Next, set your OpenAI API key as an environment variable:
```shell
export OPENAI_API_KEY="your-api-key-here"
```
> **Note:** This command sets the key only for your current terminal session. You can add the `export` line to your shell's configuration file (e.g., `~/.zshrc`), but we recommend setting it for the session.
>
> **Tip:** You can also place your API key into a `.env` file at the root of your project:
>
> ```env
> OPENAI_API_KEY=your-api-key-here
> ```
>
> The CLI will automatically load variables from `.env` (via `dotenv/config`).
<details>
<summary><strong>Use <code>--provider</code> to use other models</strong></summary>
> Codex also allows you to use other providers that support the OpenAI Chat Completions API. You can set the provider in the config file or use the `--provider` flag. The possible options for `--provider` are:
>
> - openai (default)
> - openrouter
> - azure
> - gemini
> - ollama
> - mistral
> - deepseek
> - xai
> - groq
> - arceeai
> - any other provider that is compatible with the OpenAI API
>
> If you use a provider other than OpenAI, you will need to set the API key for the provider in the config file or in the environment variable as:
>
> ```shell
> export <provider>_API_KEY="your-api-key-here"
> ```
>
> If you use a provider not listed above, you must also set the base URL for the provider:
>
> ```shell
> export <provider>_BASE_URL="https://your-provider-api-base-url"
> ```
</details>
<br />
Run interactively:
```shell
codex
```
Or, run with a prompt as input (and optionally in `Full Auto` mode):
```shell
codex "explain this codebase to me"
```
```shell
codex --approval-mode full-auto "create the fanciest todo-list app"
```
That's it - Codex will scaffold a file, run it inside a sandbox, install any
missing dependencies, and show you the live result. Approve the changes and
they'll be committed to your working directory.
---
## Why Codex?
Codex CLI is built for developers who already **live in the terminal** and want
ChatGPT-level reasoning **plus** the power to actually run code, manipulate
files, and iterate - all under version control. In short, it's _chat-driven
development_ that understands and executes your repo.
- **Zero setup** - bring your OpenAI API key and it just works!
- **Full auto-approval, while safe + secure** by running network-disabled and directory-sandboxed
- **Multimodal** - pass in screenshots or diagrams to implement features ✨
And it's **fully open-source** so you can see and contribute to how it develops!
---
## Security model & permissions
Codex lets you decide _how much autonomy_ the agent receives, and its auto-approval policy, via the
`--approval-mode` flag (or the interactive onboarding prompt):
| Mode | What the agent may do without asking | Still requires approval |
| ------------------------- | --------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| **Suggest** <br>(default) | <li>Read any file in the repo | <li>**All** file writes/patches<li> **Any** arbitrary shell commands (aside from reading files) |
| **Auto Edit** | <li>Read **and** apply-patch writes to files | <li>**All** shell commands |
| **Full Auto** | <li>Read/write files <li> Execute shell commands (network disabled, writes limited to your workdir) | - |
In **Full Auto** every command is run **network-disabled** and confined to the
current working directory (plus temporary files) for defense-in-depth. Codex
will also show a warning/confirmation if you start in **auto-edit** or
**full-auto** while the directory is _not_ tracked by Git, so you always have a
safety net.
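The mode-by-mode permissions in the table above can be summarized in a small sketch. The types and function below are illustrative assumptions, not the CLI's actual implementation:

```typescript
// Hypothetical sketch of the approval-mode policy table above; these names
// are illustrative and not part of the actual CLI codebase.
type ApprovalMode = "suggest" | "auto-edit" | "full-auto";
type Action = "read" | "write" | "exec";

function autoApproved(mode: ApprovalMode, action: Action): boolean {
  switch (mode) {
    case "suggest":
      return action === "read"; // everything else requires approval
    case "auto-edit":
      return action !== "exec"; // shell commands still require approval
    case "full-auto":
      return true; // sandboxed: network disabled, writes limited to workdir
  }
}
```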
Coming soon: you'll be able to whitelist specific commands to auto-execute with
the network enabled, once we're confident in additional safeguards.
### Platform sandboxing details
The hardening mechanism Codex uses depends on your OS:
- **macOS 12+** - commands are wrapped with **Apple Seatbelt** (`sandbox-exec`).
- Everything is placed in a read-only jail except for a small set of
writable roots (`$PWD`, `$TMPDIR`, `~/.codex`, etc.).
- Outbound network is _fully blocked_ by default - even if a child process
tries to `curl` somewhere it will fail.
- **Linux** - there is no sandboxing by default.
We recommend using Docker for sandboxing, where Codex launches itself inside a **minimal
container image** and mounts your repo _read/write_ at the same path. A
custom `iptables`/`ipset` firewall script denies all egress except the
OpenAI API. This gives you deterministic, reproducible runs without needing
root on the host. You can use the [`run_in_container.sh`](../codex-cli/scripts/run_in_container.sh) script to set up the sandbox.
---
## System requirements
| Requirement | Details |
| --------------------------- | --------------------------------------------------------------- |
| Operating systems | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 **via WSL2** |
| Node.js | **16 or newer** (Node 20 LTS recommended) |
| Git (optional, recommended) | 2.23+ for built-in PR helpers |
| RAM                         | 4 GB minimum (8 GB recommended)                                 |
> Never run `sudo npm install -g`; fix npm permissions instead.
---
## CLI reference
| Command | Purpose | Example |
| ------------------------------------ | ----------------------------------- | ------------------------------------ |
| `codex` | Interactive REPL | `codex` |
| `codex "..."` | Initial prompt for interactive REPL | `codex "fix lint errors"` |
| `codex -q "..."` | Non-interactive "quiet mode" | `codex -q --json "explain utils.ts"` |
| `codex completion <bash\|zsh\|fish>` | Print shell completion script | `codex completion bash` |
Key flags: `--model/-m`, `--approval-mode/-a`, `--quiet/-q`, and `--notify`.
---
## Memory & project docs
You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places, and merges them top-down:
1. `~/.codex/AGENTS.md` - personal global guidance
2. `AGENTS.md` at repo root - shared project notes
3. `AGENTS.md` in the current working directory - sub-folder/feature specifics
Disable loading of these files with `--no-project-doc` or the environment variable `CODEX_DISABLE_PROJECT_DOC=1`.
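The top-down merge described above can be sketched as follows. The function name and the blank-line separator are illustrative assumptions, not the CLI's actual code:

```typescript
// Hypothetical sketch: merge AGENTS.md contents top-down, as described above.
// Function name and separator are illustrative assumptions.
function mergeProjectDocs(
  globalDoc: string | null, // ~/.codex/AGENTS.md
  repoDoc: string | null,   // AGENTS.md at repo root
  cwdDoc: string | null,    // AGENTS.md in the current working directory
): string {
  return [globalDoc, repoDoc, cwdDoc]
    .filter((doc): doc is string => !!doc && doc.trim().length > 0)
    .join("\n\n");
}
```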
---
## Non-interactive / CI mode
Run Codex head-less in pipelines. Example GitHub Action step:
```yaml
- name: Update changelog via Codex
run: |
npm install -g @openai/codex
export OPENAI_API_KEY="${{ secrets.OPENAI_KEY }}"
codex -a auto-edit --quiet "update CHANGELOG for next release"
```
Set `CODEX_QUIET_MODE=1` to silence interactive UI noise.
## Tracing / verbose logging
Setting the environment variable `DEBUG=true` prints full API request and response details:
```shell
DEBUG=true codex
```
---
## Recipes
Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the [prompting guide](https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md) for more tips and usage patterns.
| ✨ | What you type | What happens |
| --- | ------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1 | `codex "Refactor the Dashboard component to React Hooks"` | Codex rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `codex "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `codex "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `codex "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `codex "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |
---
## Installation
<details open>
<summary><strong>From npm (Recommended)</strong></summary>
```bash
npm install -g @openai/codex
# or
yarn global add @openai/codex
# or
bun install -g @openai/codex
# or
pnpm add -g @openai/codex
```
</details>
<details>
<summary><strong>Build from source</strong></summary>
```bash
# Clone the repository and navigate to the CLI package
git clone https://github.com/openai/codex.git
cd codex/codex-cli
# Enable corepack
corepack enable
# Install dependencies and build
pnpm install
pnpm build
# Linux-only: download prebuilt sandboxing binaries (requires gh and zstd).
./scripts/install_native_deps.sh
# Get the usage and the options
node ./dist/cli.js --help
# Run the locally-built CLI directly
node ./dist/cli.js
# Or link the command globally for convenience
pnpm link
```
</details>
---
## Configuration guide
Codex configuration files can be placed in the `~/.codex/` directory, supporting both YAML and JSON formats.
### Basic configuration parameters
| Parameter | Type | Default | Description | Available Options |
| ------------------- | ------- | ---------- | -------------------------------- | ---------------------------------------------------------------------------------------------- |
| `model` | string | `o4-mini` | AI model to use | Any model name supporting OpenAI API |
| `approvalMode` | string | `suggest` | AI assistant's permission mode | `suggest` (suggestions only)<br>`auto-edit` (automatic edits)<br>`full-auto` (fully automatic) |
| `fullAutoErrorMode` | string | `ask-user` | Error handling in full-auto mode | `ask-user` (prompt for user input)<br>`ignore-and-continue` (ignore and proceed) |
| `notify` | boolean | `true` | Enable desktop notifications | `true`/`false` |
### Custom AI provider configuration
In the `providers` object, you can configure multiple AI service providers. Each provider requires the following parameters:
| Parameter | Type | Description | Example |
| --------- | ------ | --------------------------------------- | ----------------------------- |
| `name` | string | Display name of the provider | `"OpenAI"` |
| `baseURL` | string | API service URL | `"https://api.openai.com/v1"` |
| `envKey` | string | Environment variable name (for API key) | `"OPENAI_API_KEY"` |
### History configuration
In the `history` object, you can configure conversation history settings:
| Parameter | Type | Description | Example Value |
| ------------------- | ------- | ------------------------------------------------------ | ------------- |
| `maxSize` | number | Maximum number of history entries to save | `1000` |
| `saveHistory` | boolean | Whether to save history | `true` |
| `sensitivePatterns` | array | Patterns of sensitive information to filter in history | `[]` |
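For example, a `~/.codex/config.json` fragment that caps history size and filters likely secrets. The patterns shown are illustrative assumptions — the README does not specify the pattern syntax:

```json
{
  "history": {
    "maxSize": 500,
    "saveHistory": true,
    "sensitivePatterns": ["OPENAI_API_KEY", "AKIA[0-9A-Z]{16}"]
  }
}
```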
### Configuration examples
1. YAML format (save as `~/.codex/config.yaml`):
```yaml
model: o4-mini
approvalMode: suggest
fullAutoErrorMode: ask-user
notify: true
```
2. JSON format (save as `~/.codex/config.json`):
```json
{
"model": "o4-mini",
"approvalMode": "suggest",
"fullAutoErrorMode": "ask-user",
"notify": true
}
```
### Full configuration example
Below is a comprehensive example of `config.json` with multiple custom providers:
```json
{
"model": "o4-mini",
"provider": "openai",
"providers": {
"openai": {
"name": "OpenAI",
"baseURL": "https://api.openai.com/v1",
"envKey": "OPENAI_API_KEY"
},
"azure": {
"name": "AzureOpenAI",
"baseURL": "https://YOUR_PROJECT_NAME.openai.azure.com/openai",
"envKey": "AZURE_OPENAI_API_KEY"
},
"openrouter": {
"name": "OpenRouter",
"baseURL": "https://openrouter.ai/api/v1",
"envKey": "OPENROUTER_API_KEY"
},
"gemini": {
"name": "Gemini",
"baseURL": "https://generativelanguage.googleapis.com/v1beta/openai",
"envKey": "GEMINI_API_KEY"
},
"ollama": {
"name": "Ollama",
"baseURL": "http://localhost:11434/v1",
"envKey": "OLLAMA_API_KEY"
},
"mistral": {
"name": "Mistral",
"baseURL": "https://api.mistral.ai/v1",
"envKey": "MISTRAL_API_KEY"
},
"deepseek": {
"name": "DeepSeek",
"baseURL": "https://api.deepseek.com",
"envKey": "DEEPSEEK_API_KEY"
},
"xai": {
"name": "xAI",
"baseURL": "https://api.x.ai/v1",
"envKey": "XAI_API_KEY"
},
"groq": {
"name": "Groq",
"baseURL": "https://api.groq.com/openai/v1",
"envKey": "GROQ_API_KEY"
},
"arceeai": {
"name": "ArceeAI",
"baseURL": "https://conductor.arcee.ai/v1",
"envKey": "ARCEEAI_API_KEY"
}
},
"history": {
"maxSize": 1000,
"saveHistory": true,
"sensitivePatterns": []
}
}
```
### Custom instructions
You can create a `~/.codex/AGENTS.md` file to define custom guidance for the agent:
```markdown
- Always respond with emojis
- Only use git commands when explicitly requested
```
### Environment variables setup
For each AI provider, you need to set the corresponding API key in your environment variables. For example:
```bash
# OpenAI
export OPENAI_API_KEY="your-api-key-here"
# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-azure-api-key-here"
export AZURE_OPENAI_API_VERSION="2025-04-01-preview" # optional
# OpenRouter
export OPENROUTER_API_KEY="your-openrouter-key-here"
# Similarly for other providers
```
---
## FAQ
<details>
<summary>OpenAI released a model called Codex in 2021 - is this related?</summary>
In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.
</details>
<details>
<summary>Which models are supported?</summary>
Any model available with [Responses API](https://platform.openai.com/docs/api-reference/responses). The default is `o4-mini`, but pass `--model gpt-4.1` or set `model: gpt-4.1` in your config file to override.
</details>
<details>
<summary>Why does <code>o3</code> or <code>o4-mini</code> not work for me?</summary>
It's possible that your [API account needs to be verified](https://help.openai.com/en/articles/10910291-api-organization-verification) in order to start streaming responses and seeing chain of thought summaries from the API. If you're still running into issues, please let us know!
</details>
<details>
<summary>How do I stop Codex from editing my files?</summary>
Codex runs model-generated commands in a sandbox. If a proposed command or file change doesn't look right, you can simply type **n** to deny the command or give the model feedback.
</details>
<details>
<summary>Does it work on Windows?</summary>
Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - Codex is regularly tested on macOS and Linux with Node 20+, and also supports Node 16.
</details>
---
## Zero data retention (ZDR) usage
Codex CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled. If your OpenAI organization has Zero Data Retention enabled and you still encounter errors such as:
```
OpenAI rejected the request. Error details: Status: 400, Code: unsupported_parameter, Type: invalid_request_error, Message: 400 Previous response cannot be used for this organization due to Zero Data Retention.
```
You may need to upgrade to a more recent version with `npm i -g @openai/codex@latest`.
---
## Codex open source fund
We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.
- Grants are awarded up to **$25,000** API credits.
- Applications are reviewed **on a rolling basis**.
**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**
---
## Contributing
This project is under active development and the codebase is likely to change significantly. We'll update this message once things stabilize!
More broadly we welcome contributions - whether you are opening your very first pull request or you're a seasoned maintainer. At the same time we care about reliability and long-term maintainability, so the bar for merging code is intentionally **high**. The guidelines below spell out what "high-quality" means in practice and should make the whole process transparent and friendly.
### Development workflow
- Create a _topic branch_ from `main` - e.g. `feat/interactive-prompt`.
- Keep your changes focused. Multiple unrelated fixes should be opened as separate PRs.
- Use `pnpm test:watch` during development for super-fast feedback.
- We use **Vitest** for unit tests, **ESLint** + **Prettier** for style, and **TypeScript** for type-checking.
- Before pushing, run the full test/type/lint suite:

```bash
pnpm test && pnpm run lint && pnpm run typecheck
```

- If you have **not** yet signed the Contributor License Agreement (CLA), add a PR comment containing the exact text

```text
I have read the CLA Document and I hereby sign the CLA
```

The CLA-Assistant bot will turn the PR status green once all authors have signed.

Other useful commands during development:

```bash
# Watch mode (tests rerun on change)
pnpm test:watch
# Type-check without emitting files
pnpm typecheck
# Automatically fix lint + prettier issues
pnpm lint:fix
pnpm format:fix
```

### Git hooks with Husky
This project uses [Husky](https://typicode.github.io/husky/) to enforce code quality checks:
- **Pre-commit hook**: Automatically runs lint-staged to format and lint files before committing
- **Pre-push hook**: Runs tests and type checking before pushing to the remote
These hooks help maintain code quality and prevent pushing code with failing tests. For more details, see [HUSKY.md](./HUSKY.md).
### Debugging
To debug the CLI with a visual debugger, do the following in the `codex-cli` folder:
- Run `pnpm run build` to build the CLI, which will generate `cli.js.map` alongside `cli.js` in the `dist` folder.
- Run the CLI with `node --inspect-brk ./dist/cli.js`. The program then waits until a debugger is attached before proceeding. Options:
  - In VS Code, choose **Debug: Attach to Node Process** from the command palette and select the option in the dropdown with debug port `9229` (likely the first option)
  - Go to <chrome://inspect> in Chrome, find **localhost:9229**, and click **inspect**
### Writing high-impact code changes
1. **Start with an issue.** Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
2. **Add or update tests.** Every new feature or bug-fix should come with test coverage that fails before your change and passes afterwards. 100% coverage is not required, but aim for meaningful assertions.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`codex --help`), or relevant example projects.
4. **Keep commits atomic.** Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.
### Opening a pull request
- Fill in the PR template (or include similar information) - **What? Why? How?**
- Run **all** checks locally (`pnpm test && pnpm run lint && pnpm run typecheck`). CI failures that could have been caught locally slow down the process.
- Make sure your branch is up-to-date with `main` and that you have resolved merge conflicts.
- Mark the PR as **Ready for review** only when you believe it is in a merge-able state.
### Review process
1. One maintainer will be assigned as a primary reviewer.
2. We may ask for changes - please do not take this personally. We value the work, we just also value consistency and long-term maintainability.
3. When there is consensus that the PR meets the bar, a maintainer will squash-and-merge.
### Community values
- **Be kind and inclusive.** Treat others with respect; we follow the [Contributor Covenant](https://www.contributor-covenant.org/).
- **Assume good intent.** Written communication is hard - err on the side of generosity.
- **Teach & learn.** If you spot something confusing, open an issue or PR with improvements.
### Getting help
If you run into problems setting up the project, would like feedback on an idea, or just want to say _hi_ - please open a Discussion or jump into the relevant issue. We are happy to help.
Together we can make Codex CLI an incredible tool. **Happy hacking!** :rocket:
### Contributor license agreement (CLA)
All contributors **must** accept the CLA. The process is lightweight:
1. Open your pull request.
2. Paste the following comment (or reply `recheck` if you've signed before):
```text
I have read the CLA Document and I hereby sign the CLA
```
3. The CLA-Assistant bot records your signature in the repo and marks the status check as passed.
No special Git commands, email attachments, or commit footers required.
#### Quick fixes
| Scenario | Command |
| ----------------- | ------------------------------------------------ |
| Amend last commit | `git commit --amend -s --no-edit && git push -f` |
The **DCO check** blocks merges until every commit in the PR carries the footer (with squash this is just the one).
### Releasing `codex`
To publish a new version of the CLI you first need to stage the npm package. A
helper script in `codex-cli/scripts/` does all the heavy lifting. Inside the
`codex-cli` folder run:
```bash
# Classic, JS implementation that includes small, native binaries for Linux sandboxing.
pnpm stage-release
# Optionally specify the temp directory to reuse between runs.
RELEASE_DIR=$(mktemp -d)
pnpm stage-release --tmp "$RELEASE_DIR"
# "Fat" package that additionally bundles the native Rust CLI binaries for
# Linux. End-users can then opt-in at runtime by setting CODEX_RUST=1.
pnpm stage-release --native
```
Go to the folder where the release is staged and verify that it works as intended. If so, run the following from the temp folder:
```bash
cd "$RELEASE_DIR"
npm publish
```
### Alternative build options
#### Nix flake development
Prerequisite: Nix >= 2.4 with flakes enabled (`experimental-features = nix-command flakes` in `~/.config/nix/nix.conf`).
Enter a Nix development shell:
```bash
# Use either one of the commands according to which implementation you want to work with
nix develop .#codex-cli # For entering codex-cli specific shell
nix develop .#codex-rs # For entering codex-rs specific shell
```
This shell includes Node.js, installs dependencies, builds the CLI, and provides a `codex` command alias.
Build and run the CLI directly:
```bash
# Use either one of the commands according to which implementation you want to work with
nix build .#codex-cli # For building codex-cli
nix build .#codex-rs # For building codex-rs
./result/bin/codex --help
```
Run the CLI via the flake app:
```bash
# Use either one of the commands according to which implementation you want to work with
nix run .#codex-cli # For running codex-cli
nix run .#codex-rs # For running codex-rs
```
**Use direnv with flakes**

If you have direnv installed, you can use the following `.envrc` to automatically enter the Nix shell when you `cd` into the project directory:
```bash
cd codex-cli
echo "use flake ../flake.nix#codex-cli" >> .envrc && direnv allow
cd codex-rs
echo "use flake ../flake.nix#codex-rs" >> .envrc && direnv allow
```
---
## Security & responsible AI
Have you discovered a vulnerability or have concerns about model output? Please e-mail **security@openai.com** and we will respond promptly.
---
## License
This repository is licensed under the [Apache-2.0 License](LICENSE).

View File

@@ -9138,6 +9138,24 @@ mod tests {
validate_dynamic_tools(&tools).expect("valid schema");
}
#[test]
fn validate_dynamic_tools_accepts_nullable_field_schema() {
let tools = vec![ApiDynamicToolSpec {
name: "my_tool".to_string(),
description: "test".to_string(),
input_schema: json!({
"type": "object",
"properties": {
"query": {"type": ["string", "null"]}
},
"required": ["query"],
"additionalProperties": false
}),
defer_loading: false,
}];
validate_dynamic_tools(&tools).expect("valid schema");
}
#[test]
fn config_load_error_marks_cloud_requirements_failures_for_relogin() {
let err = std::io::Error::other(CloudRequirementsLoadError::new(

View File

@@ -1,6 +1,7 @@
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value as JsonValue;
use std::collections::BTreeMap;
use crate::PUBLIC_TOOL_NAME;
@@ -57,6 +58,12 @@ pub struct ToolDefinition {
pub output_schema: Option<JsonValue>,
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct ToolNamespaceDescription {
pub name: String,
pub description: String,
}
#[derive(Debug, Default, Deserialize, PartialEq, Eq)]
#[serde(deny_unknown_fields)]
struct CodeModeExecPragma {
@@ -163,6 +170,7 @@ pub fn is_code_mode_nested_tool(tool_name: &str) -> bool {
pub fn build_exec_tool_description(
enabled_tools: &[(String, String)],
namespace_descriptions: &BTreeMap<String, ToolNamespaceDescription>,
code_mode_only: bool,
) -> String {
if !code_mode_only {
@@ -175,17 +183,38 @@ pub fn build_exec_tool_description(
];
if !enabled_tools.is_empty() {
let nested_tool_reference = enabled_tools
.iter()
.map(|(name, nested_description)| {
let global_name = normalize_code_mode_identifier(name);
format!(
"### `{global_name}` (`{name}`)\n{}",
nested_description.trim()
)
})
.collect::<Vec<_>>()
.join("\n\n");
let mut current_namespace: Option<&str> = None;
let mut nested_tool_sections = Vec::with_capacity(enabled_tools.len());
for (name, nested_description) in enabled_tools {
let next_namespace = namespace_descriptions
.get(name)
.map(|namespace_description| namespace_description.name.as_str());
if next_namespace != current_namespace {
if let Some(namespace_description) = namespace_descriptions.get(name) {
let namespace_description_text = namespace_description.description.trim();
if !namespace_description_text.is_empty() {
nested_tool_sections.push(format!(
"## {}\n{namespace_description_text}",
namespace_description.name
));
}
}
current_namespace = next_namespace;
}
let global_name = normalize_code_mode_identifier(name);
let nested_description = nested_description.trim();
if nested_description.is_empty() {
nested_tool_sections.push(format!("### `{global_name}` (`{name}`)"));
} else {
nested_tool_sections.push(format!(
"### `{global_name}` (`{name}`)\n{nested_description}"
));
}
}
let nested_tool_reference = nested_tool_sections.join("\n\n");
sections.push(nested_tool_reference);
}
@@ -408,6 +437,45 @@ fn render_json_schema_array(map: &serde_json::Map<String, JsonValue>) -> String
"unknown[]".to_string()
}
fn append_additional_properties_line(
lines: &mut Vec<String>,
map: &serde_json::Map<String, JsonValue>,
properties: &serde_json::Map<String, JsonValue>,
line_prefix: &str,
) {
if let Some(additional_properties) = map.get("additionalProperties") {
let property_type = match additional_properties {
JsonValue::Bool(true) => Some("unknown".to_string()),
JsonValue::Bool(false) => None,
value => Some(render_json_schema_to_typescript_inner(value)),
};
if let Some(property_type) = property_type {
lines.push(format!("{line_prefix}[key: string]: {property_type};"));
}
} else if properties.is_empty() {
lines.push(format!("{line_prefix}[key: string]: unknown;"));
}
}
fn has_property_description(value: &JsonValue) -> bool {
value
.get("description")
.and_then(JsonValue::as_str)
.is_some_and(|description| !description.is_empty())
}
fn render_json_schema_object_property(name: &str, value: &JsonValue, required: &[&str]) -> String {
let optional = if required.iter().any(|required_name| required_name == &name) {
""
} else {
"?"
};
let property_name = render_json_schema_property_name(name);
let property_type = render_json_schema_to_typescript_inner(value);
format!("{property_name}{optional}: {property_type};")
}
fn render_json_schema_object(map: &serde_json::Map<String, JsonValue>) -> String {
let required = map
.get("required")
@@ -427,33 +495,39 @@ fn render_json_schema_object(map: &serde_json::Map<String, JsonValue>) -> String
let mut sorted_properties = properties.iter().collect::<Vec<_>>();
sorted_properties.sort_unstable_by(|(name_a, _), (name_b, _)| name_a.cmp(name_b));
if sorted_properties
.iter()
.any(|(_, value)| has_property_description(value))
{
let mut lines = vec!["{".to_string()];
for (name, value) in sorted_properties {
if let Some(description) = value.get("description").and_then(JsonValue::as_str) {
for description_line in description
.lines()
.map(str::trim)
.filter(|line| !line.is_empty())
{
lines.push(format!(" // {description_line}"));
}
}
lines.push(format!(
" {}",
render_json_schema_object_property(name, value, &required)
));
}
append_additional_properties_line(&mut lines, map, &properties, " ");
lines.push("}".to_string());
return lines.join("\n");
}
let mut lines = sorted_properties
.into_iter()
.map(|(name, value)| {
let optional = if required.iter().any(|required_name| required_name == name) {
""
} else {
"?"
};
let property_name = render_json_schema_property_name(name);
let property_type = render_json_schema_to_typescript_inner(value);
format!("{property_name}{optional}: {property_type};")
})
.map(|(name, value)| render_json_schema_object_property(name, value, &required))
.collect::<Vec<_>>();
if let Some(additional_properties) = map.get("additionalProperties") {
let property_type = match additional_properties {
JsonValue::Bool(true) => Some("unknown".to_string()),
JsonValue::Bool(false) => None,
value => Some(render_json_schema_to_typescript_inner(value)),
};
if let Some(property_type) = property_type {
lines.push(format!("[key: string]: {property_type};"));
}
} else if properties.is_empty() {
lines.push("[key: string]: unknown;".to_string());
}
append_additional_properties_line(&mut lines, map, &properties, "");
if lines.is_empty() {
return "{}".to_string();
@@ -479,12 +553,14 @@ mod tests {
use super::CodeModeToolKind;
use super::ParsedExecSource;
use super::ToolDefinition;
use super::ToolNamespaceDescription;
use super::augment_tool_definition;
use super::build_exec_tool_description;
use super::normalize_code_mode_identifier;
use super::parse_exec_source;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::collections::BTreeMap;
#[test]
fn parse_exec_source_without_pragma() {
@@ -550,10 +626,58 @@ mod tests {
);
}
#[test]
fn augment_tool_definition_includes_property_descriptions_as_comments() {
let definition = ToolDefinition {
name: "weather_tool".to_string(),
description: "Weather tool".to_string(),
kind: CodeModeToolKind::Function,
input_schema: Some(json!({
"type": "object",
"properties": {
"weather": {
"type": "array",
"description": "look up weather for a given list of locations",
"items": {
"type": "object",
"properties": {
"location": { "type": "string" }
},
"required": ["location"]
}
}
},
"required": ["weather"]
})),
output_schema: Some(json!({
"type": "object",
"properties": {
"forecast": {
"type": "string",
"description": "human readable weather forecast"
}
},
"required": ["forecast"]
})),
};
let description = augment_tool_definition(definition).description;
assert!(description.contains(
r#"weather_tool(args: {
// look up weather for a given list of locations
weather: Array<{ location: string; }>;
}): Promise<{
// human readable weather forecast
forecast: string;
}>;"#
));
}
#[test]
fn code_mode_only_description_includes_nested_tools() {
let description = build_exec_tool_description(
&[("foo".to_string(), "bar".to_string())],
&BTreeMap::new(),
/*code_mode_only*/ true,
);
assert!(description.contains("### `foo` (`foo`)"));
@@ -561,8 +685,67 @@ mod tests {
#[test]
fn exec_description_mentions_timeout_helpers() {
let description = build_exec_tool_description(&[], /*code_mode_only*/ false);
let description =
build_exec_tool_description(&[], &BTreeMap::new(), /*code_mode_only*/ false);
assert!(description.contains("`setTimeout(callback: () => void, delayMs?: number)`"));
assert!(description.contains("`clearTimeout(timeoutId?: number)`"));
}
#[test]
fn code_mode_only_description_groups_namespace_instructions_once() {
let namespace_descriptions = BTreeMap::from([
(
"mcp__sample__alpha".to_string(),
ToolNamespaceDescription {
name: "mcp__sample".to_string(),
description: "Shared namespace guidance.".to_string(),
},
),
(
"mcp__sample__beta".to_string(),
ToolNamespaceDescription {
name: "mcp__sample".to_string(),
description: "Shared namespace guidance.".to_string(),
},
),
]);
let description = build_exec_tool_description(
&[
("mcp__sample__alpha".to_string(), "First tool".to_string()),
("mcp__sample__beta".to_string(), "Second tool".to_string()),
],
&namespace_descriptions,
/*code_mode_only*/ true,
);
assert_eq!(description.matches("## mcp__sample").count(), 1);
assert!(description.contains(
r#"## mcp__sample
Shared namespace guidance.
### `mcp__sample__alpha` (`mcp__sample__alpha`)
First tool
### `mcp__sample__beta` (`mcp__sample__beta`)
Second tool"#
));
}
#[test]
fn code_mode_only_description_omits_empty_namespace_sections() {
let namespace_descriptions = BTreeMap::from([(
"mcp__sample__alpha".to_string(),
ToolNamespaceDescription {
name: "mcp__sample".to_string(),
description: String::new(),
},
)]);
let description = build_exec_tool_description(
&[("mcp__sample__alpha".to_string(), "First tool".to_string())],
&namespace_descriptions,
/*code_mode_only*/ true,
);
assert!(!description.contains("## mcp__sample"));
assert!(description.contains("### `mcp__sample__alpha` (`mcp__sample__alpha`)"));
}
}
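The namespace-grouping loop added to `build_exec_tool_description` above can be sketched standalone: walk the tools in order, emit each namespace header at most once when the namespace changes, then append the per-tool sections beneath it. This is a simplified sketch, not the real implementation — `ToolNamespaceDescription` is reduced to a plain name/description pair and the `normalize_code_mode_identifier` step is omitted.

```rust
use std::collections::BTreeMap;

// Simplified stand-in for ToolNamespaceDescription.
struct NamespaceDescription {
    name: String,
    description: String,
}

// Emit each namespace header at most once, then the tool sections under it.
fn group_tool_sections(
    enabled_tools: &[(String, String)],
    namespaces: &BTreeMap<String, NamespaceDescription>,
) -> String {
    let mut current: Option<&str> = None;
    let mut sections = Vec::new();
    for (name, description) in enabled_tools {
        let next = namespaces.get(name).map(|n| n.name.as_str());
        if next != current {
            if let Some(ns) = namespaces.get(name) {
                let text = ns.description.trim();
                if !text.is_empty() {
                    sections.push(format!("## {}\n{text}", ns.name));
                }
            }
            current = next;
        }
        sections.push(format!("### `{name}`\n{}", description.trim()));
    }
    sections.join("\n\n")
}

fn main() {
    let namespaces = BTreeMap::from([
        (
            "mcp__sample__alpha".to_string(),
            NamespaceDescription {
                name: "mcp__sample".to_string(),
                description: "Shared guidance.".to_string(),
            },
        ),
        (
            "mcp__sample__beta".to_string(),
            NamespaceDescription {
                name: "mcp__sample".to_string(),
                description: "Shared guidance.".to_string(),
            },
        ),
    ]);
    let tools = vec![
        ("mcp__sample__alpha".to_string(), "First tool".to_string()),
        ("mcp__sample__beta".to_string(), "Second tool".to_string()),
    ];
    let rendered = group_tool_sections(&tools, &namespaces);
    // Two tools share the namespace, so the header appears only once.
    assert_eq!(rendered.matches("## mcp__sample").count(), 1);
    println!("{rendered}");
}
```

Tracking only the previous namespace (rather than a seen-set) is what makes the "grouped once" behavior depend on tool ordering, which the `BTreeMap` inputs in the tests above guarantee.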


@@ -6,6 +6,7 @@ mod service;
pub use description::CODE_MODE_PRAGMA_PREFIX;
pub use description::CodeModeToolKind;
pub use description::ToolDefinition;
pub use description::ToolNamespaceDescription;
pub use description::append_code_mode_sample;
pub use description::augment_tool_definition;
pub use description::build_exec_tool_description;

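The property renderer factored out above (`render_json_schema_object_property`) boils down to one rule: properties listed in `required` render as `name: type;`, everything else gets a `?`. A minimal sketch under that assumption, taking the property type as an already-rendered string (the real code derives it from the JSON schema value):

```rust
// Required properties render as `name: type;`, optional ones as `name?: type;`.
fn render_property(name: &str, rendered_type: &str, required: &[&str]) -> String {
    let optional = if required.contains(&name) { "" } else { "?" };
    format!("{name}{optional}: {rendered_type};")
}

fn main() {
    assert_eq!(
        render_property("location", "string", &["location"]),
        "location: string;"
    );
    assert_eq!(
        render_property("unit", "string", &["location"]),
        "unit?: string;"
    );
}
```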

@@ -186,6 +186,8 @@ pub struct ToolInfo {
pub server_name: String,
pub tool_name: String,
pub tool_namespace: String,
#[serde(default)]
pub server_instructions: Option<String>,
pub tool: Tool,
pub connector_id: Option<String>,
pub connector_name: Option<String>,
@@ -356,6 +358,7 @@ struct ManagedClient {
tools: Vec<ToolInfo>,
tool_filter: ToolFilter,
tool_timeout: Option<Duration>,
server_instructions: Option<String>,
server_supports_sandbox_state_capability: bool,
codex_apps_tools_cache_context: Option<CodexAppsToolsCacheContext>,
}
@@ -842,6 +845,7 @@ impl McpConnectionManager {
CODEX_APPS_MCP_SERVER_NAME,
&managed_client.client,
managed_client.tool_timeout,
managed_client.server_instructions.as_deref(),
)
.await
.with_context(|| {
@@ -1374,9 +1378,14 @@ async fn start_server_task(
let list_start = Instant::now();
let fetch_start = Instant::now();
let tools = list_tools_for_client_uncached(&server_name, &client, startup_timeout)
.await
.map_err(StartupOutcomeError::from)?;
let tools = list_tools_for_client_uncached(
&server_name,
&client,
startup_timeout,
initialize_result.instructions.as_deref(),
)
.await
.map_err(StartupOutcomeError::from)?;
emit_duration(
MCP_TOOLS_FETCH_UNCACHED_DURATION_METRIC,
fetch_start.elapsed(),
@@ -1407,6 +1416,7 @@ async fn start_server_task(
tools,
tool_timeout: Some(tool_timeout),
tool_filter,
server_instructions: initialize_result.instructions,
server_supports_sandbox_state_capability,
codex_apps_tools_cache_context,
};
@@ -1587,6 +1597,7 @@ async fn list_tools_for_client_uncached(
server_name: &str,
client: &Arc<RmcpClient>,
timeout: Option<Duration>,
server_instructions: Option<&str>,
) -> Result<Vec<ToolInfo>> {
let resp = client
.list_tools_with_connector_ids(/*params*/ None, timeout)
@@ -1617,6 +1628,7 @@ async fn list_tools_for_client_uncached(
server_name: server_name.to_owned(),
tool_name,
tool_namespace,
server_instructions: server_instructions.map(str::to_string),
tool: tool_def,
connector_id: tool.connector_id,
connector_name,


@@ -15,6 +15,7 @@ fn create_test_tool(server_name: &str, tool_name: &str) -> ToolInfo {
} else {
server_name.to_string()
},
server_instructions: None,
tool: Tool {
name: tool_name.to_string().into(),
title: None,


@@ -1,9 +1,9 @@
use codex_execpolicy::Decision;
use codex_execpolicy::PatternToken;
use codex_execpolicy::Policy;
use codex_execpolicy::PrefixPattern;
use codex_execpolicy::PrefixRule;
use codex_execpolicy::RuleRef;
use codex_execpolicy::rule::PatternToken;
use codex_execpolicy::rule::PrefixPattern;
use codex_execpolicy::rule::PrefixRule;
use multimap::MultiMap;
use serde::Deserialize;
use std::sync::Arc;


@@ -815,6 +815,9 @@ pub(crate) struct Session {
agent_status: watch::Sender<AgentStatus>,
out_of_band_elicitation_paused: watch::Sender<bool>,
state: Mutex<SessionState>,
/// Serializes rebuild/apply cycles for the running proxy; each cycle
/// rebuilds from the current SessionState while holding this lock.
managed_network_proxy_refresh_lock: Mutex<()>,
/// The set of enabled features should be invariant for the lifetime of the
/// session.
features: ManagedFeatures,
@@ -1327,6 +1330,48 @@ impl Session {
Ok((network_proxy, session_network_proxy))
}
async fn refresh_managed_network_proxy_for_current_sandbox_policy(&self) {
let Some(started_proxy) = self.services.network_proxy.as_ref() else {
return;
};
let _refresh_guard = self.managed_network_proxy_refresh_lock.lock().await;
let session_configuration = {
let state = self.state.lock().await;
state.session_configuration.clone()
};
let Some(spec) = session_configuration
.original_config_do_not_use
.permissions
.network
.as_ref()
else {
return;
};
let spec = match spec
.recompute_for_sandbox_policy(session_configuration.sandbox_policy.get())
{
Ok(spec) => spec,
Err(err) => {
warn!("failed to rebuild managed network proxy policy for sandbox change: {err}");
return;
}
};
let current_exec_policy = self.services.exec_policy.current();
let spec = match spec.with_exec_policy_network_rules(current_exec_policy.as_ref()) {
Ok(spec) => spec,
Err(err) => {
warn!(
"failed to apply execpolicy network rules while refreshing managed network proxy: {err}"
);
spec
}
};
if let Err(err) = spec.apply_to_started_proxy(started_proxy).await {
warn!("failed to refresh managed network proxy for sandbox change: {err}");
}
}
/// Don't expand the number of mutated arguments on config. We are in the process of getting rid of it.
pub(crate) fn build_per_turn_config(session_configuration: &SessionConfiguration) -> Config {
// todo(aibrahim): store this state somewhere else so we don't need to mut config
@@ -1981,6 +2026,7 @@ impl Session {
agent_status,
out_of_band_elicitation_paused,
state: Mutex::new(state),
managed_network_proxy_refresh_lock: Mutex::new(()),
features: config.features.clone(),
pending_mcp_server_refresh_config: Mutex::new(None),
conversation: Arc::new(RealtimeConversationManager::new()),
@@ -2397,6 +2443,8 @@ impl Session {
match state.session_configuration.apply(&updates) {
Ok(updated) => {
let previous_cwd = state.session_configuration.cwd.clone();
let sandbox_policy_changed =
state.session_configuration.sandbox_policy != updated.sandbox_policy;
let next_cwd = updated.cwd.clone();
let codex_home = updated.codex_home.clone();
let session_source = updated.session_source.clone();
@@ -2409,6 +2457,10 @@ impl Session {
&codex_home,
&session_source,
);
if sandbox_policy_changed {
self.refresh_managed_network_proxy_for_current_sandbox_policy()
.await;
}
Ok(())
}
@@ -2495,6 +2547,8 @@ impl Session {
.set_approval_policy(&session_configuration.approval_policy);
if sandbox_policy_changed {
self.refresh_managed_network_proxy_for_current_sandbox_policy()
.await;
let sandbox_state = SandboxState {
sandbox_policy: per_turn_config.permissions.sandbox_policy.get().clone(),
codex_linux_sandbox_exe: per_turn_config.codex_linux_sandbox_exe.clone(),
@@ -2930,6 +2984,7 @@ impl Session {
amendment: &NetworkPolicyAmendment,
network_approval_context: &NetworkApprovalContext,
) -> anyhow::Result<()> {
let _refresh_guard = self.managed_network_proxy_refresh_lock.lock().await;
let host =
Self::validated_network_policy_amendment_host(amendment, network_approval_context)?;
let codex_home = self
@@ -6833,16 +6888,18 @@ pub(crate) async fn built_tools(
} else {
app_tools
};
let mcp_tool_router_inputs =
has_mcp_servers.then(|| crate::tools::router::map_mcp_tool_infos(&mcp_tools));
Ok(Arc::new(ToolRouter::from_config(
&turn_context.tools_config,
ToolRouterParams {
mcp_tools: has_mcp_servers.then(|| {
mcp_tools
.into_iter()
.map(|(name, tool)| (name, tool.tool))
.collect()
}),
mcp_tools: mcp_tool_router_inputs
.as_ref()
.map(|inputs| inputs.mcp_tools.clone()),
tool_namespaces: mcp_tool_router_inputs
.as_ref()
.map(|inputs| inputs.tool_namespaces.clone()),
app_tools,
discoverable_tools,
dynamic_tools: turn_context.dynamic_tools.as_slice(),

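The `managed_network_proxy_refresh_lock` added above is a `Mutex<()>` whose guard is held across an entire read-rebuild-apply cycle, so two concurrent refreshes cannot interleave their steps. The session code does this with tokio's async mutex; the same pattern can be sketched with the standard library (`Session`, `refresh`, and the generation counter below are illustrative stand-ins, not the real types):

```rust
use std::sync::Mutex;

// Toy model of the refresh lock: a Mutex<()> serializes whole cycles,
// so "read state" and "apply config" from different callers never interleave.
struct Session {
    refresh_lock: Mutex<()>,
    applied_generations: Mutex<Vec<u64>>,
}

impl Session {
    fn refresh(&self, generation: u64) {
        // Hold the guard across the entire read-rebuild-apply cycle.
        let _guard = self.refresh_lock.lock().unwrap();
        let rebuilt = generation; // stand-in for rebuilding the proxy spec
        self.applied_generations.lock().unwrap().push(rebuilt);
    }
}

fn main() {
    let session = std::sync::Arc::new(Session {
        refresh_lock: Mutex::new(()),
        applied_generations: Mutex::new(Vec::new()),
    });
    let handles: Vec<_> = (0..4_u64)
        .map(|generation| {
            let session = std::sync::Arc::clone(&session);
            std::thread::spawn(move || session.refresh(generation))
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // Every cycle completed exactly once, in some serialized order.
    assert_eq!(session.applied_generations.lock().unwrap().len(), 4);
}
```

Locking a unit value instead of the session state itself keeps the critical section explicit: the state mutex can still be taken briefly inside the cycle without holding it across the `await` points of the apply step.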

@@ -121,7 +121,7 @@ mod guardian_tests;
struct InstructionsTestCase {
slug: &'static str,
expects_apply_patch_instructions: bool,
expects_apply_patch_description: bool,
}
fn user_message(text: &str) -> ResponseItem {
@@ -305,6 +305,7 @@ fn test_tool_runtime(session: Arc<Session>, turn_context: Arc<TurnContext>) -> T
&turn_context.tools_config,
crate::tools::router::ToolRouterParams {
mcp_tools: None,
tool_namespaces: None,
app_tools: None,
discoverable_tools: None,
dynamic_tools: turn_context.dynamic_tools.as_slice(),
@@ -413,6 +414,7 @@ fn make_mcp_tool(
server_name: server_name.to_string(),
tool_name: tool_name.to_string(),
tool_namespace,
server_instructions: None,
tool: Tool {
name: tool_name.to_string().into(),
title: None,
@@ -544,6 +546,139 @@ async fn start_managed_network_proxy_ignores_invalid_execpolicy_network_rules()
Ok(())
}
#[tokio::test]
async fn managed_network_proxy_refreshes_when_sandbox_policy_changes() -> anyhow::Result<()> {
let spec = crate::config::NetworkProxySpec::from_config_and_constraints(
NetworkProxyConfig::default(),
Some(NetworkConstraints {
domains: Some(NetworkDomainPermissionsToml {
entries: std::collections::BTreeMap::from([(
"blocked.example.com".to_string(),
NetworkDomainPermissionToml::Deny,
)]),
}),
danger_full_access_denylist_only: Some(true),
allow_local_binding: Some(false),
..Default::default()
}),
&SandboxPolicy::new_workspace_write_policy(),
)?;
let exec_policy = Policy::empty();
let (started_proxy, _) = Session::start_managed_network_proxy(
&spec,
&exec_policy,
&SandboxPolicy::new_workspace_write_policy(),
/*network_policy_decider*/ None,
/*blocked_request_observer*/ None,
/*managed_network_requirements_enabled*/ false,
crate::config::NetworkProxyAuditMetadata::default(),
)
.await?;
assert!(!started_proxy.proxy().allow_local_binding());
let current_cfg = started_proxy.proxy().current_cfg().await?;
assert_eq!(current_cfg.network.allowed_domains(), None);
assert_eq!(
current_cfg.network.denied_domains(),
Some(vec!["blocked.example.com".to_string()])
);
let spec = spec.recompute_for_sandbox_policy(&SandboxPolicy::DangerFullAccess)?;
spec.apply_to_started_proxy(&started_proxy).await?;
assert!(started_proxy.proxy().allow_local_binding());
let current_cfg = started_proxy.proxy().current_cfg().await?;
assert_eq!(
current_cfg.network.allowed_domains(),
Some(vec!["*".to_string()])
);
assert_eq!(
current_cfg.network.denied_domains(),
Some(vec!["blocked.example.com".to_string()])
);
let spec = spec.recompute_for_sandbox_policy(&SandboxPolicy::new_workspace_write_policy())?;
spec.apply_to_started_proxy(&started_proxy).await?;
assert!(!started_proxy.proxy().allow_local_binding());
let current_cfg = started_proxy.proxy().current_cfg().await?;
assert_eq!(current_cfg.network.allowed_domains(), None);
assert_eq!(
current_cfg.network.denied_domains(),
Some(vec!["blocked.example.com".to_string()])
);
Ok(())
}
#[tokio::test]
async fn managed_network_proxy_decider_survives_full_access_start() -> anyhow::Result<()> {
let spec = crate::config::NetworkProxySpec::from_config_and_constraints(
NetworkProxyConfig::default(),
Some(NetworkConstraints {
enabled: Some(true),
danger_full_access_denylist_only: Some(true),
..Default::default()
}),
&SandboxPolicy::DangerFullAccess,
)?;
let exec_policy = Policy::empty();
let decider_calls = Arc::new(std::sync::atomic::AtomicUsize::new(0));
let network_policy_decider: Arc<dyn codex_network_proxy::NetworkPolicyDecider> = Arc::new({
let decider_calls = Arc::clone(&decider_calls);
move |_request| {
decider_calls.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
async { codex_network_proxy::NetworkDecision::ask("not_allowed") }
}
});
let (started_proxy, _) = Session::start_managed_network_proxy(
&spec,
&exec_policy,
&SandboxPolicy::DangerFullAccess,
Some(network_policy_decider),
/*blocked_request_observer*/ None,
/*managed_network_requirements_enabled*/ true,
crate::config::NetworkProxyAuditMetadata::default(),
)
.await?;
let spec = spec.recompute_for_sandbox_policy(&SandboxPolicy::new_workspace_write_policy())?;
spec.apply_to_started_proxy(&started_proxy).await?;
let current_cfg = started_proxy.proxy().current_cfg().await?;
assert_eq!(current_cfg.network.allowed_domains(), None);
use tokio::io::AsyncReadExt as _;
use tokio::io::AsyncWriteExt as _;
let mut stream = tokio::net::TcpStream::connect(started_proxy.proxy().http_addr()).await?;
stream
.write_all(
b"GET http://example.com/ HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n",
)
.await?;
let mut buffer = [0_u8; 4096];
let bytes_read = tokio::time::timeout(StdDuration::from_secs(2), stream.read(&mut buffer))
.await
.expect("timed out waiting for proxy response")?;
let response = String::from_utf8_lossy(&buffer[..bytes_read]);
assert!(
response.starts_with("HTTP/1.1 403 Forbidden"),
"unexpected proxy response: {response}"
);
assert!(
response.contains("x-proxy-error: blocked-by-allowlist"),
"unexpected proxy response: {response}"
);
assert_eq!(
decider_calls.load(std::sync::atomic::Ordering::SeqCst),
1,
"unexpected proxy response: {response}"
);
Ok(())
}
#[tokio::test]
async fn get_base_instructions_no_user_content() {
let prompt_with_apply_patch_instructions =
@@ -562,19 +697,19 @@ async fn get_base_instructions_no_user_content() {
let test_cases = vec![
InstructionsTestCase {
slug: "gpt-5",
expects_apply_patch_instructions: false,
expects_apply_patch_description: false,
},
InstructionsTestCase {
slug: "gpt-5.1",
expects_apply_patch_instructions: false,
expects_apply_patch_description: false,
},
InstructionsTestCase {
slug: "gpt-5.1-codex",
expects_apply_patch_instructions: false,
expects_apply_patch_description: false,
},
InstructionsTestCase {
slug: "gpt-5.1-codex-max",
expects_apply_patch_instructions: false,
expects_apply_patch_description: false,
},
];
@@ -583,7 +718,7 @@ async fn get_base_instructions_no_user_content() {
for test_case in test_cases {
let model_info = model_info_for_slug(test_case.slug, &config);
if test_case.expects_apply_patch_instructions {
if test_case.expects_apply_patch_description {
assert_eq!(
model_info.base_instructions.as_str(),
prompt_with_apply_patch_instructions
@@ -2811,6 +2946,7 @@ pub(crate) async fn make_session_and_context() -> (Session, TurnContext) {
agent_status: agent_status_tx,
out_of_band_elicitation_paused: watch::channel(false).0,
state: Mutex::new(state),
managed_network_proxy_refresh_lock: Mutex::new(()),
features: config.features.clone(),
pending_mcp_server_refresh_config: Mutex::new(None),
conversation: Arc::new(RealtimeConversationManager::new()),
@@ -3652,6 +3788,7 @@ pub(crate) async fn make_session_and_context_with_dynamic_tools_and_rx(
agent_status: agent_status_tx,
out_of_band_elicitation_paused: watch::channel(false).0,
state: Mutex::new(state),
managed_network_proxy_refresh_lock: Mutex::new(()),
features: config.features.clone(),
pending_mcp_server_refresh_config: Mutex::new(None),
conversation: Arc::new(RealtimeConversationManager::new()),
@@ -5157,15 +5294,12 @@ async fn fatal_tool_error_stops_turn_and_reports_error() {
.await
};
let app_tools = Some(tools.clone());
let mcp_tool_router_inputs = crate::tools::router::map_mcp_tool_infos(&tools);
let router = ToolRouter::from_config(
&turn_context.tools_config,
crate::tools::router::ToolRouterParams {
mcp_tools: Some(
tools
.into_iter()
.map(|(name, tool)| (name, tool.tool))
.collect(),
),
mcp_tools: Some(mcp_tool_router_inputs.mcp_tools),
tool_namespaces: Some(mcp_tool_router_inputs.tool_namespaces),
app_tools,
discoverable_tools: None,
dynamic_tools: turn_context.dynamic_tools.as_slice(),


@@ -24,6 +24,8 @@ const GLOBAL_ALLOWLIST_PATTERN: &str = "*";
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct NetworkProxySpec {
base_config: NetworkProxyConfig,
requirements: Option<NetworkConstraints>,
config: NetworkProxyConfig,
constraints: NetworkProxyConstraints,
hard_deny_allowlist_misses: bool,
@@ -91,13 +93,14 @@ impl NetworkProxySpec {
requirements: Option<NetworkConstraints>,
sandbox_policy: &SandboxPolicy,
) -> std::io::Result<Self> {
let base_config = config.clone();
let hard_deny_allowlist_misses = requirements
.as_ref()
.is_some_and(Self::managed_allowed_domains_only);
let (config, constraints) = if let Some(requirements) = requirements {
let (config, constraints) = if let Some(requirements) = requirements.as_ref() {
Self::apply_requirements(
config,
&requirements,
requirements,
sandbox_policy,
hard_deny_allowlist_misses,
)
@@ -111,6 +114,8 @@ impl NetworkProxySpec {
)
})?;
Ok(Self {
base_config,
requirements,
config,
constraints,
hard_deny_allowlist_misses,
@@ -127,21 +132,16 @@ impl NetworkProxySpec {
) -> std::io::Result<StartedNetworkProxy> {
let state = self.build_state_with_audit_metadata(audit_metadata)?;
let mut builder = NetworkProxy::builder().state(Arc::new(state));
if enable_network_approval_flow
&& !self.hard_deny_allowlist_misses
&& matches!(
if enable_network_approval_flow && !self.hard_deny_allowlist_misses {
if let Some(policy_decider) = policy_decider {
builder = builder.policy_decider_arc(policy_decider);
} else if matches!(
sandbox_policy,
SandboxPolicy::ReadOnly { .. } | SandboxPolicy::WorkspaceWrite { .. }
)
{
builder = match policy_decider {
Some(policy_decider) => builder.policy_decider_arc(policy_decider),
None => builder.policy_decider(|_request| async {
// In restricted sandbox modes, allowlist misses should ask for
// explicit network approval instead of hard-denying.
NetworkDecision::ask("not_allowed")
}),
};
) {
builder = builder
.policy_decider(|_request| async { NetworkDecision::ask("not_allowed") });
}
}
if let Some(blocked_request_observer) = blocked_request_observer {
builder = builder.blocked_request_observer_arc(blocked_request_observer);
@@ -156,6 +156,17 @@ impl NetworkProxySpec {
Ok(StartedNetworkProxy::new(proxy, handle))
}
pub(crate) fn recompute_for_sandbox_policy(
&self,
sandbox_policy: &SandboxPolicy,
) -> std::io::Result<Self> {
Self::from_config_and_constraints(
self.base_config.clone(),
self.requirements.clone(),
sandbox_policy,
)
}
pub(crate) fn with_exec_policy_network_rules(
&self,
exec_policy: &Policy,
@@ -171,14 +182,25 @@ impl NetworkProxySpec {
Ok(spec)
}
pub(crate) async fn apply_to_started_proxy(
&self,
started_proxy: &StartedNetworkProxy,
) -> std::io::Result<()> {
let state = self.build_config_state_for_spec()?;
started_proxy
.proxy()
.replace_config_state(state)
.await
.map_err(|err| {
std::io::Error::other(format!("failed to update network proxy state: {err}"))
})
}
fn build_state_with_audit_metadata(
&self,
audit_metadata: NetworkProxyAuditMetadata,
) -> std::io::Result<NetworkProxyState> {
let state =
build_config_state(self.config.clone(), self.constraints.clone()).map_err(|err| {
std::io::Error::other(format!("failed to build network proxy state: {err}"))
})?;
let state = self.build_config_state_for_spec()?;
let reloader = Arc::new(StaticNetworkProxyReloader::new(state.clone()));
Ok(NetworkProxyState::with_reloader_and_audit_metadata(
state,
@@ -187,6 +209,12 @@ impl NetworkProxySpec {
))
}
fn build_config_state_for_spec(&self) -> std::io::Result<ConfigState> {
build_config_state(self.config.clone(), self.constraints.clone()).map_err(|err| {
std::io::Error::other(format!("failed to build network proxy state: {err}"))
})
}
fn apply_requirements(
mut config: NetworkProxyConfig,
requirements: &NetworkConstraints,

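The key design move in `recompute_for_sandbox_policy` above is that `NetworkProxySpec` now retains its original inputs (`base_config`, `requirements`), so a sandbox-policy change rebuilds the derived config from scratch instead of patching it. A toy sketch of that shape — the field names and the local-binding rule here are illustrative, not the crate's actual semantics:

```rust
// The spec keeps its construction inputs so derived state can be rebuilt
// for a new sandbox policy rather than mutated in place.
#[derive(Clone, Copy, PartialEq, Debug)]
enum SandboxPolicy {
    WorkspaceWrite,
    DangerFullAccess,
}

#[derive(Clone, Debug, PartialEq)]
struct Spec {
    base_allow_local_binding: bool, // immutable input captured at construction
    allow_local_binding: bool,      // derived; depends on the sandbox policy
}

impl Spec {
    fn from_config(base_allow_local_binding: bool, policy: SandboxPolicy) -> Self {
        Spec {
            base_allow_local_binding,
            allow_local_binding: match policy {
                SandboxPolicy::DangerFullAccess => true,
                SandboxPolicy::WorkspaceWrite => base_allow_local_binding,
            },
        }
    }

    // Rebuild from the retained inputs rather than patching derived state.
    fn recompute_for_sandbox_policy(&self, policy: SandboxPolicy) -> Self {
        Spec::from_config(self.base_allow_local_binding, policy)
    }
}

fn main() {
    let spec = Spec::from_config(false, SandboxPolicy::WorkspaceWrite);
    assert!(!spec.allow_local_binding);
    let full = spec.recompute_for_sandbox_policy(SandboxPolicy::DangerFullAccess);
    assert!(full.allow_local_binding);
    // Recomputing back restores the original derived state: the round trip loses nothing.
    let back = full.recompute_for_sandbox_policy(SandboxPolicy::WorkspaceWrite);
    assert_eq!(back, spec);
}
```

Because recomputation is a pure function of the retained inputs, the workspace-write → full-access → workspace-write round trip in the tests above is guaranteed to restore the original allow/deny lists.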

@@ -21,6 +21,8 @@ fn domain_permissions(
#[test]
fn build_state_with_audit_metadata_threads_metadata_to_state() {
let spec = NetworkProxySpec {
base_config: NetworkProxyConfig::default(),
requirements: None,
config: NetworkProxyConfig::default(),
constraints: NetworkProxyConstraints::default(),
hard_deny_allowlist_misses: false,
@@ -322,6 +324,55 @@ fn danger_full_access_denylist_only_does_not_change_workspace_write_behavior() {
assert_eq!(spec.constraints.denylist_expansion_enabled, Some(true));
}
#[test]
fn recompute_for_sandbox_policy_rebuilds_denylist_only_full_access_policy() {
let requirements = NetworkConstraints {
domains: Some(domain_permissions([(
"blocked.example.com",
NetworkDomainPermissionToml::Deny,
)])),
danger_full_access_denylist_only: Some(true),
..Default::default()
};
let spec = NetworkProxySpec::from_config_and_constraints(
NetworkProxyConfig::default(),
Some(requirements),
&SandboxPolicy::new_workspace_write_policy(),
)
.expect("workspace-write policy should load");
assert_eq!(spec.config.network.allowed_domains(), None);
assert_eq!(
spec.config.network.denied_domains(),
Some(vec!["blocked.example.com".to_string()])
);
let spec = spec
.recompute_for_sandbox_policy(&SandboxPolicy::DangerFullAccess)
.expect("full-access policy should load");
assert_eq!(
spec.config.network.allowed_domains(),
Some(vec!["*".to_string()])
);
assert_eq!(
spec.config.network.denied_domains(),
Some(vec!["blocked.example.com".to_string()])
);
assert!(spec.config.network.allow_local_binding);
let spec = spec
.recompute_for_sandbox_policy(&SandboxPolicy::new_workspace_write_policy())
.expect("workspace-write policy should reload");
assert_eq!(spec.config.network.allowed_domains(), None);
assert_eq!(
spec.config.network.denied_domains(),
Some(vec!["blocked.example.com".to_string()])
);
assert!(!spec.config.network.allow_local_binding);
}
#[test]
fn managed_allowed_domains_only_disables_default_mode_allowlist_expansion() {
let mut config = NetworkProxyConfig::default();


@@ -112,6 +112,7 @@ fn codex_app_tool(
server_name: CODEX_APPS_MCP_SERVER_NAME.to_string(),
tool_name: tool_name.to_string(),
tool_namespace,
server_instructions: None,
tool: test_tool_definition(tool_name),
connector_id: Some(connector_id.to_string()),
connector_name: connector_name.map(ToOwned::to_owned),
@@ -190,6 +191,7 @@ fn accessible_connectors_from_mcp_tools_carries_plugin_display_names() {
server_name: "sample".to_string(),
tool_name: "echo".to_string(),
tool_namespace: "sample".to_string(),
server_instructions: None,
tool: test_tool_definition("echo"),
connector_id: None,
connector_name: None,
@@ -314,6 +316,7 @@ fn accessible_connectors_from_mcp_tools_preserves_description() {
server_name: CODEX_APPS_MCP_SERVER_NAME.to_string(),
tool_name: "calendar_create_event".to_string(),
tool_namespace: "mcp__codex_apps__calendar".to_string(),
server_instructions: None,
tool: Tool {
name: "calendar_create_event".to_string().into(),
title: None,


@@ -257,15 +257,14 @@ async fn build_nested_router(exec: &ExecContext) -> ToolRouter {
.read()
.await
.list_all_tools()
.await
.into_iter()
.map(|(name, tool_info)| (name, tool_info.tool))
.collect();
.await;
let mcp_tool_router_inputs = crate::tools::router::map_mcp_tool_infos(&mcp_tools);
ToolRouter::from_config(
&nested_tools_config,
ToolRouterParams {
mcp_tools: Some(mcp_tools),
mcp_tools: Some(mcp_tool_router_inputs.mcp_tools),
tool_namespaces: Some(mcp_tool_router_inputs.tool_namespaces),
app_tools: None,
discoverable_tools: None,
dynamic_tools: exec.turn.dynamic_tools.as_slice(),


@@ -147,11 +147,11 @@ fn tool_search_payloads_roundtrip_as_tool_search_outputs() {
description: String::new(),
strict: false,
defer_loading: Some(true),
parameters: codex_tools::JsonSchema::Object {
properties: Default::default(),
required: None,
additional_properties: None,
},
parameters: codex_tools::JsonSchema::object(
/*properties*/ Default::default(),
/*required*/ None,
/*additional_properties*/ None,
),
output_schema: None,
},
)],


@@ -1561,16 +1561,13 @@ impl JsReplManager {
.await
.list_all_tools()
.await;
let mcp_tool_router_inputs = crate::tools::router::map_mcp_tool_infos(&mcp_tools);
let router = ToolRouter::from_config(
&exec.turn.tools_config,
crate::tools::router::ToolRouterParams {
mcp_tools: Some(
mcp_tools
.into_iter()
.map(|(name, tool)| (name, tool.tool))
.collect(),
),
mcp_tools: Some(mcp_tool_router_inputs.mcp_tools),
tool_namespaces: Some(mcp_tool_router_inputs.tool_namespaces),
app_tools: None,
discoverable_tools: None,
dynamic_tools: exec.turn.dynamic_tools.as_slice(),


@@ -19,6 +19,7 @@ use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::WarningEvent;
use indexmap::IndexMap;
use std::collections::HashMap;
@@ -118,6 +119,13 @@ fn allows_network_approval_flow(policy: AskForApproval) -> bool {
!matches!(policy, AskForApproval::Never)
}
fn sandbox_policy_allows_network_approval_flow(policy: &SandboxPolicy) -> bool {
matches!(
policy,
SandboxPolicy::ReadOnly { .. } | SandboxPolicy::WorkspaceWrite { .. }
)
}
impl PendingApprovalDecision {
fn to_network_decision(self) -> NetworkDecision {
match self {
@@ -334,6 +342,16 @@ impl NetworkApprovalService {
.await;
return NetworkDecision::deny(REASON_NOT_ALLOWED);
};
if !sandbox_policy_allows_network_approval_flow(turn_context.sandbox_policy.get()) {
pending.set_decision(PendingApprovalDecision::Deny).await;
let mut pending_approvals = self.pending_host_approvals.lock().await;
pending_approvals.remove(&key);
self.record_outcome_for_single_active_call(NetworkApprovalOutcome::DeniedByPolicy(
policy_denial_message,
))
.await;
return NetworkDecision::deny(REASON_NOT_ALLOWED);
}
if !allows_network_approval_flow(turn_context.approval_policy.value()) {
pending.set_decision(PendingApprovalDecision::Deny).await;
let mut pending_approvals = self.pending_host_approvals.lock().await;

View File
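The sandbox gate in the hunk above only consults the policy variant; a minimal self-contained sketch, using fieldless stand-ins for the real `codex_protocol::protocol::SandboxPolicy` variants (which carry fields, hence the `{ .. }` patterns in the diff):

```rust
// Stand-in for codex_protocol::protocol::SandboxPolicy; the real enum's
// variants carry configuration fields.
#[derive(Debug)]
pub enum SandboxPolicy {
    ReadOnly,
    WorkspaceWrite,
    DangerFullAccess,
}

/// Network approval prompts only make sense when the sandbox actually
/// restricts network access; full access never routes through the proxy,
/// so the flow is disabled there.
pub fn sandbox_policy_allows_network_approval_flow(policy: &SandboxPolicy) -> bool {
    matches!(
        policy,
        SandboxPolicy::ReadOnly | SandboxPolicy::WorkspaceWrite
    )
}

fn main() {
    assert!(sandbox_policy_allows_network_approval_flow(&SandboxPolicy::ReadOnly));
    assert!(sandbox_policy_allows_network_approval_flow(&SandboxPolicy::WorkspaceWrite));
    assert!(!sandbox_policy_allows_network_approval_flow(&SandboxPolicy::DangerFullAccess));
    println!("ok");
}
```

Note the gate sits alongside the existing `allows_network_approval_flow(AskForApproval)` check: both must pass before an approval prompt is raised.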

@@ -1,6 +1,7 @@
use super::*;
use codex_network_proxy::BlockedRequestArgs;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use pretty_assertions::assert_eq;
#[tokio::test]
@@ -179,6 +180,19 @@ fn only_never_policy_disables_network_approval_flow() {
assert!(allows_network_approval_flow(AskForApproval::UnlessTrusted));
}
#[test]
fn network_approval_flow_is_limited_to_restricted_sandbox_modes() {
assert!(sandbox_policy_allows_network_approval_flow(
&SandboxPolicy::new_read_only_policy()
));
assert!(sandbox_policy_allows_network_approval_flow(
&SandboxPolicy::new_workspace_write_policy()
));
assert!(!sandbox_policy_allows_network_approval_flow(
&SandboxPolicy::DangerFullAccess
));
}
fn denied_blocked_request(host: &str) -> BlockedRequest {
BlockedRequest::new(BlockedRequestArgs {
host: host.to_string(),

View File

@@ -16,6 +16,7 @@ use codex_protocol::models::SearchToolCallParams;
use codex_protocol::models::ShellToolCallParams;
use codex_tools::ConfiguredToolSpec;
use codex_tools::DiscoverableTool;
use codex_tools::ToolNamespace;
use codex_tools::ToolSpec;
use codex_tools::ToolsConfig;
use rmcp::model::Tool;
@@ -41,15 +42,43 @@ pub struct ToolRouter {
pub(crate) struct ToolRouterParams<'a> {
pub(crate) mcp_tools: Option<HashMap<String, Tool>>,
pub(crate) tool_namespaces: Option<HashMap<String, ToolNamespace>>,
pub(crate) app_tools: Option<HashMap<String, ToolInfo>>,
pub(crate) discoverable_tools: Option<Vec<DiscoverableTool>>,
pub(crate) dynamic_tools: &'a [DynamicToolSpec],
}
pub(crate) struct McpToolRouterInputs {
pub(crate) mcp_tools: HashMap<String, Tool>,
pub(crate) tool_namespaces: HashMap<String, ToolNamespace>,
}
pub(crate) fn map_mcp_tool_infos(mcp_tools: &HashMap<String, ToolInfo>) -> McpToolRouterInputs {
McpToolRouterInputs {
mcp_tools: mcp_tools
.iter()
.map(|(name, tool)| (name.clone(), tool.tool.clone()))
.collect(),
tool_namespaces: mcp_tools
.iter()
.map(|(name, tool)| {
(
name.clone(),
ToolNamespace {
name: tool.tool_namespace.clone(),
description: tool.server_instructions.clone(),
},
)
})
.collect(),
}
}
impl ToolRouter {
pub fn from_config(config: &ToolsConfig, params: ToolRouterParams<'_>) -> Self {
let ToolRouterParams {
mcp_tools,
tool_namespaces,
app_tools,
discoverable_tools,
dynamic_tools,
@@ -58,6 +87,7 @@ impl ToolRouter {
config,
mcp_tools,
app_tools,
tool_namespaces,
discoverable_tools,
dynamic_tools,
);

View File
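The `map_mcp_tool_infos` helper above fans one `ToolInfo` map out into the two maps the router consumes. A trimmed sketch with local stand-ins (the real `ToolInfo` holds an `rmcp::model::Tool` plus server metadata; a `String` stands in for the tool here):

```rust
use std::collections::HashMap;

// Stand-ins for the codex_mcp / codex_tools types.
#[derive(Clone)]
pub struct ToolInfo {
    pub tool: String, // stands in for rmcp::model::Tool
    pub tool_namespace: String,
    pub server_instructions: Option<String>,
}

pub struct ToolNamespace {
    pub name: String,
    pub description: Option<String>,
}

pub struct McpToolRouterInputs {
    pub mcp_tools: HashMap<String, String>,
    pub tool_namespaces: HashMap<String, ToolNamespace>,
}

/// Split one ToolInfo map into the raw tools and the per-tool namespace
/// metadata (namespace name + server instructions as the description).
pub fn map_mcp_tool_infos(mcp_tools: &HashMap<String, ToolInfo>) -> McpToolRouterInputs {
    McpToolRouterInputs {
        mcp_tools: mcp_tools
            .iter()
            .map(|(name, info)| (name.clone(), info.tool.clone()))
            .collect(),
        tool_namespaces: mcp_tools
            .iter()
            .map(|(name, info)| {
                (
                    name.clone(),
                    ToolNamespace {
                        name: info.tool_namespace.clone(),
                        description: info.server_instructions.clone(),
                    },
                )
            })
            .collect(),
    }
}

fn main() {
    let mut tools = HashMap::new();
    tools.insert(
        "calendar".to_string(),
        ToolInfo {
            tool: "calendar_create_event".to_string(),
            tool_namespace: "mcp__codex_apps__calendar".to_string(),
            server_instructions: Some("Calendar tools".to_string()),
        },
    );
    let inputs = map_mcp_tool_infos(&tools);
    assert_eq!(inputs.mcp_tools["calendar"], "calendar_create_event");
    assert_eq!(inputs.tool_namespaces["calendar"].name, "mcp__codex_apps__calendar");
    println!("ok");
}
```

Doing this mapping once at the call site is what lets `ToolRouterParams` take plain `mcp_tools` and `tool_namespaces` maps instead of the richer `ToolInfo` type.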

@@ -35,6 +35,7 @@ async fn js_repl_tools_only_blocks_direct_tool_calls() -> anyhow::Result<()> {
.map(|(name, tool)| (name, tool.tool))
.collect(),
),
tool_namespaces: None,
app_tools,
discoverable_tools: None,
dynamic_tools: turn.dynamic_tools.as_slice(),
@@ -93,6 +94,7 @@ async fn js_repl_tools_only_allows_js_repl_source_calls() -> anyhow::Result<()>
.map(|(name, tool)| (name, tool.tool))
.collect(),
),
tool_namespaces: None,
app_tools,
discoverable_tools: None,
dynamic_tools: turn.dynamic_tools.as_slice(),

View File

@@ -10,6 +10,7 @@ use codex_mcp::ToolInfo;
use codex_protocol::dynamic_tools::DynamicToolSpec;
use codex_tools::DiscoverableTool;
use codex_tools::ToolHandlerKind;
use codex_tools::ToolNamespace;
use codex_tools::ToolRegistryPlanAppTool;
use codex_tools::ToolRegistryPlanParams;
use codex_tools::ToolUserShellType;
@@ -33,6 +34,7 @@ pub(crate) fn build_specs_with_discoverable_tools(
config: &ToolsConfig,
mcp_tools: Option<HashMap<String, rmcp::model::Tool>>,
app_tools: Option<HashMap<String, ToolInfo>>,
tool_namespaces: Option<HashMap<String, ToolNamespace>>,
discoverable_tools: Option<Vec<DiscoverableTool>>,
dynamic_tools: &[DynamicToolSpec],
) -> ToolRegistryBuilder {
@@ -86,6 +88,7 @@ pub(crate) fn build_specs_with_discoverable_tools(
config,
ToolRegistryPlanParams {
mcp_tools: mcp_tools.as_ref(),
tool_namespaces: tool_namespaces.as_ref(),
app_tools: app_tool_sources.as_deref(),
discoverable_tools: discoverable_tools.as_deref(),
dynamic_tools,

View File

@@ -181,6 +181,7 @@ fn build_specs(
config,
mcp_tools,
app_tools,
/*tool_namespaces*/ None,
/*discoverable_tools*/ None,
dynamic_tools,
)
@@ -261,6 +262,7 @@ fn assert_model_tools(
&tools_config,
ToolRouterParams {
mcp_tools: None,
tool_namespaces: None,
app_tools: None,
discoverable_tools: None,
dynamic_tools: &[],
@@ -628,6 +630,7 @@ fn tool_suggest_requires_apps_and_plugins_features() {
&tools_config,
/*mcp_tools*/ None,
/*app_tools*/ None,
/*tool_namespaces*/ None,
discoverable_tools.clone(),
&[],
)
@@ -701,6 +704,7 @@ fn search_tool_description_falls_back_to_connector_name_without_description() {
server_name: CODEX_APPS_MCP_SERVER_NAME.to_string(),
tool_name: "_create_event".to_string(),
tool_namespace: "mcp__codex_apps__calendar".to_string(),
server_instructions: None,
tool: mcp_tool(
"calendar_create_event",
"Create calendar event",
@@ -751,6 +755,7 @@ fn search_tool_registers_namespaced_app_tool_aliases() {
server_name: CODEX_APPS_MCP_SERVER_NAME.to_string(),
tool_name: "_create_event".to_string(),
tool_namespace: "mcp__codex_apps__calendar".to_string(),
server_instructions: None,
tool: mcp_tool(
"calendar-create-event",
"Create calendar event",
@@ -768,6 +773,7 @@ fn search_tool_registers_namespaced_app_tool_aliases() {
server_name: CODEX_APPS_MCP_SERVER_NAME.to_string(),
tool_name: "_list_events".to_string(),
tool_namespace: "mcp__codex_apps__calendar".to_string(),
server_instructions: None,
tool: mcp_tool(
"calendar-list-events",
"List calendar events",
@@ -832,16 +838,15 @@ fn test_mcp_tool_property_missing_type_defaults_to_string() {
tool.spec,
ToolSpec::Function(ResponsesApiTool {
name: "dash/search".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(
/*properties*/
BTreeMap::from([(
"query".to_string(),
JsonSchema::String {
description: Some("search query".to_string())
}
JsonSchema::string(Some("search query".to_string())),
)]),
required: None,
additional_properties: None,
},
/*required*/ None,
/*additional_properties*/ None
),
description: "Search docs".to_string(),
strict: false,
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({}))),
@@ -851,7 +856,7 @@ fn test_mcp_tool_property_missing_type_defaults_to_string() {
}
#[test]
fn test_mcp_tool_integer_normalized_to_number() {
fn test_mcp_tool_preserves_integer_schema() {
let config = test_config();
let model_info = construct_model_info_offline("gpt-5-codex", &config);
let mut features = Features::with_defaults();
@@ -890,14 +895,15 @@ fn test_mcp_tool_integer_normalized_to_number() {
tool.spec,
ToolSpec::Function(ResponsesApiTool {
name: "dash/paginate".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(
/*properties*/
BTreeMap::from([(
"page".to_string(),
JsonSchema::Number { description: None }
JsonSchema::integer(/*description*/ None),
)]),
required: None,
additional_properties: None,
},
/*required*/ None,
/*additional_properties*/ None
),
description: "Pagination".to_string(),
strict: false,
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({}))),
@@ -947,17 +953,18 @@ fn test_mcp_tool_array_without_items_gets_default_string_items() {
tool.spec,
ToolSpec::Function(ResponsesApiTool {
name: "dash/tags".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(
/*properties*/
BTreeMap::from([(
"tags".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: None
}
JsonSchema::array(
JsonSchema::string(/*description*/ None),
/*description*/ None,
),
)]),
required: None,
additional_properties: None,
},
/*required*/ None,
/*additional_properties*/ None
),
description: "Tags".to_string(),
strict: false,
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({}))),
@@ -1008,14 +1015,21 @@ fn test_mcp_tool_anyof_defaults_to_string() {
tool.spec,
ToolSpec::Function(ResponsesApiTool {
name: "dash/value".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(
/*properties*/
BTreeMap::from([(
"value".to_string(),
JsonSchema::String { description: None }
JsonSchema::any_of(
vec![
JsonSchema::string(/*description*/ None),
JsonSchema::number(/*description*/ None),
],
/*description*/ None,
),
)]),
required: None,
additional_properties: None,
},
/*required*/ None,
/*additional_properties*/ None
),
description: "AnyOf Value".to_string(),
strict: false,
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({}))),
@@ -1082,50 +1096,51 @@ fn test_get_openai_tools_mcp_tools_with_additional_properties_schema() {
tool.spec,
ToolSpec::Function(ResponsesApiTool {
name: "test_server/do_something_cool".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(
/*properties*/
BTreeMap::from([
(
"string_argument".to_string(),
JsonSchema::String { description: None }
JsonSchema::string(/*description*/ None),
),
(
"number_argument".to_string(),
JsonSchema::Number { description: None }
JsonSchema::number(/*description*/ None),
),
(
"object_argument".to_string(),
JsonSchema::Object {
properties: BTreeMap::from([
JsonSchema::object(
BTreeMap::from([
(
"string_property".to_string(),
JsonSchema::String { description: None }
JsonSchema::string(/*description*/ None),
),
(
"number_property".to_string(),
JsonSchema::Number { description: None }
JsonSchema::number(/*description*/ None),
),
]),
required: Some(vec![
Some(vec![
"string_property".to_string(),
"number_property".to_string(),
]),
additional_properties: Some(
JsonSchema::Object {
properties: BTreeMap::from([(
Some(
JsonSchema::object(
BTreeMap::from([(
"addtl_prop".to_string(),
JsonSchema::String { description: None }
),]),
required: Some(vec!["addtl_prop".to_string(),]),
additional_properties: Some(false.into()),
}
.into()
JsonSchema::string(/*description*/ None),
)]),
Some(vec!["addtl_prop".to_string()]),
Some(false.into()),
)
.into(),
),
},
),
),
]),
required: None,
additional_properties: None,
},
/*required*/ None,
/*additional_properties*/ None
),
description: "Do something cool".to_string(),
strict: false,
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({}))),

View File
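The tests above migrate from `JsonSchema::Object { .. }` struct literals to `JsonSchema::object(...)` constructor calls. A minimal sketch of that constructor style, assuming the signatures the diff uses (the real `codex_tools::JsonSchema` has more variants, plus `any_of`, `integer`, and a richer `additional_properties` payload):

```rust
use std::collections::BTreeMap;

// Trimmed stand-in for codex_tools::JsonSchema.
#[derive(Debug, PartialEq)]
pub enum JsonSchema {
    String { description: Option<String> },
    Object {
        properties: BTreeMap<String, JsonSchema>,
        required: Option<Vec<String>>,
        additional_properties: Option<bool>,
    },
}

impl JsonSchema {
    pub fn string(description: Option<String>) -> Self {
        JsonSchema::String { description }
    }

    pub fn object(
        properties: BTreeMap<String, JsonSchema>,
        required: Option<Vec<String>>,
        additional_properties: Option<bool>,
    ) -> Self {
        JsonSchema::Object { properties, required, additional_properties }
    }
}

fn main() {
    // The constructor form builds the same value as the struct literal
    // it replaces, just with positional args annotated by comments.
    let via_ctor = JsonSchema::object(
        BTreeMap::from([(
            "query".to_string(),
            JsonSchema::string(Some("search query".to_string())),
        )]),
        /*required*/ None,
        /*additional_properties*/ None,
    );
    let via_literal = JsonSchema::Object {
        properties: BTreeMap::from([(
            "query".to_string(),
            JsonSchema::String { description: Some("search query".to_string()) },
        )]),
        required: None,
        additional_properties: None,
    };
    assert_eq!(via_ctor, via_literal);
    println!("ok");
}
```

Constructors keep call sites compact and give one place to evolve the variant's shape, at the cost of positional `Option` arguments, which is why the diff annotates each with a `/*name*/` comment.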

@@ -1,10 +1,3 @@
You are Codex, a coding agent based on GPT-5. You and the user share the same workspace and collaborate to achieve the user's goals.
# Personality
You are a collaborative, highly capable pair-programmer AI. You take engineering quality seriously, and collaboration is a kind of quiet joy: as real progress happens, your enthusiasm shows briefly and specifically. Your default personality and tone is concise, direct, and friendly. You communicate efficiently, always keeping the user clearly informed about ongoing actions without unnecessary detail. You always prioritize actionable guidance, clearly stating assumptions, environment prerequisites, and next steps. Unless explicitly asked, you avoid excessively verbose explanations about your work.
## Tone and style
- Anything you say outside of tool use is shown to the user. Do not narrate abstractly; explain what you are doing and why, using plain language.
- Output will be rendered in a command line interface or minimal UI so keep responses tight, scannable, and low-noise. Generally avoid the use of emojis. You may format with GitHub-flavored Markdown.
- Never use nested bullets. Keep lists flat (single level). If you need hierarchy, split into separate lists or sections, or, if a line ends with a colon, place what you would otherwise render as a nested bullet immediately after it. For numbered lists, only use the `1. 2. 3.` style markers (with a period), never `1)`.
- When writing a final assistant response, state the solution first before explaining your answer. The complexity of the answer should match the task. If the task is simple, your answer should be short. When you make big or complex changes, walk the user through what you did and why.
@@ -12,13 +5,6 @@ You are a collaborative, highly capable pair-programmer AI. You take engineering
- Code samples or multi-line snippets should be wrapped in fenced code blocks. Include an info string as often as possible.
- Never output the content of large files, just provide references. Use inline code to make file paths clickable; each reference should have a standalone path, even if it's the same file. Paths may be absolute, workspace-relative, a//b/ diff-prefixed, or bare filename/suffix; locations may be :line[:column] or #Lline[Ccolumn] (1-based; column defaults to 1). Do not use file://, vscode://, or https://, and do not provide line ranges. Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\repo\project\main.rs:12:5
- The user does not see command execution outputs. When asked to show the output of a command (e.g. `git show`), relay the important details in your answer or summarize the key lines so the user understands the result.
- Never tell the user to "save/copy this file"; the user is on the same machine and has access to the same files as you do.
- If you weren't able to do something, for example run tests, tell the user.
- If there are natural next steps the user may want to take, suggest them at the end of your response. Do not make suggestions if there are no natural next steps.
## Responsiveness
### Collaboration posture:
- If the user makes a simple request (such as asking for the time) which you can fulfill by running a terminal command (such as `date`), you should do so.
- Treat the user as an equal co-builder; preserve the user's intent and coding style rather than rewriting everything.
- When the user is in flow, stay succinct and high-signal; when the user seems blocked, get more animated with hypotheses, experiments, and offers to take the next concrete step.

View File

@@ -2242,7 +2242,7 @@ text(JSON.stringify(tool));
parsed,
serde_json::json!({
"name": "view_image",
"description": "View a local image from the filesystem (only use if given a full filepath by the user, and the image isn't already attached to the thread context within <image ...> tags).\n\nexec tool declaration:\n```ts\ndeclare const tools: { view_image(args: { path: string; }): Promise<{ detail: string | null; image_url: string; }>; };\n```",
"description": "View a local image from the filesystem (only use if given a full filepath by the user, and the image isn't already attached to the thread context within <image ...> tags).\n\nexec tool declaration:\n```ts\ndeclare const tools: { view_image(args: {\n // Local filesystem path to an image file\n path: string;\n}): Promise<{\n // Image detail hint returned by view_image. Returns `original` when original resolution is preserved, otherwise `null`.\n detail: string | null;\n // Data URL for the loaded image.\n image_url: string;\n}>; };\n```",
})
);

View File
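The snapshot change above embeds each argument's and return field's description as a `//` comment inside the generated TypeScript declaration. A sketch of that rendering step; the `Field`/`render_args` names here are illustrative, not the real renderer's API:

```rust
// Hypothetical field descriptor for a generated TS declaration.
struct Field {
    name: &'static str,
    ty: &'static str,
    description: Option<&'static str>,
}

/// Render an args object literal, emitting each field's description as a
/// `//` comment on the line above the field.
fn render_args(fields: &[Field]) -> String {
    let mut out = String::from("{\n");
    for f in fields {
        if let Some(desc) = f.description {
            out.push_str(&format!("  // {desc}\n"));
        }
        out.push_str(&format!("  {}: {};\n", f.name, f.ty));
    }
    out.push('}');
    out
}

fn main() {
    let rendered = render_args(&[Field {
        name: "path",
        ty: "string",
        description: Some("Local filesystem path to an image file"),
    }]);
    assert_eq!(
        rendered,
        "{\n  // Local filesystem path to an image file\n  path: string;\n}"
    );
    println!("{rendered}");
}
```

The payoff is visible in the snapshot: `view_image(args: { path: string; })` becomes a multi-line declaration where each field carries its own doc comment.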

@@ -5,7 +5,7 @@ pub(crate) mod execpolicycheck;
mod executable_name;
pub(crate) mod parser;
pub(crate) mod policy;
pub(crate) mod rule;
pub mod rule;
pub use amend::AmendError;
pub use amend::blocking_append_allow_prefix_rule;

View File

@@ -2,6 +2,7 @@ use crate::config;
use crate::http_proxy;
use crate::network_policy::NetworkPolicyDecider;
use crate::runtime::BlockedRequestObserver;
use crate::runtime::ConfigState;
use crate::runtime::unix_socket_permissions_supported;
use crate::socks5;
use crate::state::NetworkProxyState;
@@ -13,6 +14,7 @@ use std::net::SocketAddr;
use std::net::TcpListener as StdTcpListener;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::RwLock;
use tokio::task::JoinHandle;
use tracing::warn;
@@ -219,11 +221,9 @@ impl NetworkProxyBuilder {
http_addr,
socks_addr,
socks_enabled: current_cfg.network.enable_socks5,
allow_local_binding: current_cfg.network.allow_local_binding,
allow_unix_sockets: current_cfg.network.allow_unix_sockets(),
dangerously_allow_all_unix_sockets: current_cfg
.network
.dangerously_allow_all_unix_sockets,
runtime_settings: Arc::new(RwLock::new(NetworkProxyRuntimeSettings::from_config(
&current_cfg,
))),
reserved_listeners,
policy_decider: self.policy_decider,
})
@@ -294,15 +294,30 @@ fn reserve_loopback_ephemeral_listener() -> Result<StdTcpListener> {
.context("bind loopback ephemeral port")
}
#[derive(Debug, Clone, PartialEq, Eq)]
struct NetworkProxyRuntimeSettings {
allow_local_binding: bool,
allow_unix_sockets: Arc<[String]>,
dangerously_allow_all_unix_sockets: bool,
}
impl NetworkProxyRuntimeSettings {
fn from_config(config: &config::NetworkProxyConfig) -> Self {
Self {
allow_local_binding: config.network.allow_local_binding,
allow_unix_sockets: config.network.allow_unix_sockets().into(),
dangerously_allow_all_unix_sockets: config.network.dangerously_allow_all_unix_sockets,
}
}
}
#[derive(Clone)]
pub struct NetworkProxy {
state: Arc<NetworkProxyState>,
http_addr: SocketAddr,
socks_addr: SocketAddr,
socks_enabled: bool,
allow_local_binding: bool,
allow_unix_sockets: Vec<String>,
dangerously_allow_all_unix_sockets: bool,
runtime_settings: Arc<RwLock<NetworkProxyRuntimeSettings>>,
reserved_listeners: Option<Arc<ReservedListeners>>,
policy_decider: Option<Arc<dyn NetworkPolicyDecider>>,
}
@@ -322,7 +337,7 @@ impl PartialEq for NetworkProxy {
fn eq(&self, other: &Self) -> bool {
self.http_addr == other.http_addr
&& self.socks_addr == other.socks_addr
&& self.allow_local_binding == other.allow_local_binding
&& self.runtime_settings() == other.runtime_settings()
}
}
@@ -488,18 +503,19 @@ impl NetworkProxy {
}
pub fn allow_local_binding(&self) -> bool {
self.allow_local_binding
self.runtime_settings().allow_local_binding
}
pub fn allow_unix_sockets(&self) -> &[String] {
&self.allow_unix_sockets
pub fn allow_unix_sockets(&self) -> Arc<[String]> {
self.runtime_settings().allow_unix_sockets
}
pub fn dangerously_allow_all_unix_sockets(&self) -> bool {
self.dangerously_allow_all_unix_sockets
self.runtime_settings().dangerously_allow_all_unix_sockets
}
pub fn apply_to_env(&self, env: &mut HashMap<String, String>) {
let allow_local_binding = self.allow_local_binding();
// Enforce proxying for child processes. We intentionally override existing values so
// command-level environment cannot bypass the managed proxy endpoint.
apply_proxy_env_overrides(
@@ -507,10 +523,50 @@ impl NetworkProxy {
self.http_addr,
self.socks_addr,
self.socks_enabled,
self.allow_local_binding,
allow_local_binding,
);
}
pub async fn replace_config_state(&self, new_state: ConfigState) -> Result<()> {
let current_cfg = self.state.current_cfg().await?;
anyhow::ensure!(
new_state.config.network.enabled == current_cfg.network.enabled,
"cannot update network.enabled on a running proxy"
);
anyhow::ensure!(
new_state.config.network.proxy_url == current_cfg.network.proxy_url,
"cannot update network.proxy_url on a running proxy"
);
anyhow::ensure!(
new_state.config.network.socks_url == current_cfg.network.socks_url,
"cannot update network.socks_url on a running proxy"
);
anyhow::ensure!(
new_state.config.network.enable_socks5 == current_cfg.network.enable_socks5,
"cannot update network.enable_socks5 on a running proxy"
);
anyhow::ensure!(
new_state.config.network.enable_socks5_udp == current_cfg.network.enable_socks5_udp,
"cannot update network.enable_socks5_udp on a running proxy"
);
let settings = NetworkProxyRuntimeSettings::from_config(&new_state.config);
self.state.replace_config_state(new_state).await?;
let mut guard = self
.runtime_settings
.write()
.unwrap_or_else(std::sync::PoisonError::into_inner);
*guard = settings;
Ok(())
}
fn runtime_settings(&self) -> NetworkProxyRuntimeSettings {
self.runtime_settings
.read()
.unwrap_or_else(std::sync::PoisonError::into_inner)
.clone()
}
pub async fn run(&self) -> Result<NetworkProxyHandle> {
let current_cfg = self.state.current_cfg().await?;
if !current_cfg.network.enabled {

View File
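The refactor above moves the mutable proxy settings behind `Arc<RwLock<...>>` so clones of the proxy observe live updates. A trimmed sketch of that pattern, keeping the diff's poison-recovery style (the real settings struct also tracks unix-socket allowances):

```rust
use std::sync::{Arc, RwLock};

#[derive(Debug, Clone, PartialEq, Eq)]
struct RuntimeSettings {
    allow_local_binding: bool,
}

#[derive(Clone)]
struct Proxy {
    // Shared by all clones: updating through one handle is visible to all.
    runtime_settings: Arc<RwLock<RuntimeSettings>>,
}

impl Proxy {
    fn runtime_settings(&self) -> RuntimeSettings {
        // Recover from a poisoned lock rather than propagating the panic;
        // a settings read must never take the proxy down.
        self.runtime_settings
            .read()
            .unwrap_or_else(std::sync::PoisonError::into_inner)
            .clone()
    }

    fn replace(&self, settings: RuntimeSettings) {
        let mut guard = self
            .runtime_settings
            .write()
            .unwrap_or_else(std::sync::PoisonError::into_inner);
        *guard = settings;
    }
}

fn main() {
    let proxy = Proxy {
        runtime_settings: Arc::new(RwLock::new(RuntimeSettings {
            allow_local_binding: false,
        })),
    };
    let clone = proxy.clone(); // shares the same settings cell
    proxy.replace(RuntimeSettings { allow_local_binding: true });
    assert!(clone.runtime_settings().allow_local_binding);
    println!("ok");
}
```

Returning a cloned snapshot (rather than a guard) is what forces the accessor signature change in the diff, e.g. `allow_unix_sockets()` now returns an owned `Arc<[String]>` instead of `&[String]`.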

@@ -335,6 +335,17 @@ impl NetworkProxyState {
}
}
pub async fn replace_config_state(&self, mut new_state: ConfigState) -> Result<()> {
self.reload_if_needed().await?;
let mut guard = self.state.write().await;
log_policy_changes(&guard.config, &new_state.config);
new_state.blocked = guard.blocked.clone();
new_state.blocked_total = guard.blocked_total;
*guard = new_state;
info!("updated network proxy config state");
Ok(())
}
pub async fn host_blocked(&self, host: &str, port: u16) -> Result<HostBlockDecision> {
self.reload_if_needed().await?;
let host = match Host::parse(host) {

View File
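`replace_config_state` above swaps in the new config but carries the observed `blocked` data over from the state it replaces. A sketch of that carry-over, with a trimmed stand-in for `ConfigState`:

```rust
// Trimmed stand-in for the runtime ConfigState.
#[derive(Clone)]
struct ConfigState {
    proxy_url: String,
    blocked: Vec<String>,
    blocked_total: u64,
}

/// Replace the config while preserving runtime counters: only the
/// policy/config portion changes; observed block history survives.
fn replace_config_state(guard: &mut ConfigState, mut new_state: ConfigState) {
    new_state.blocked = guard.blocked.clone();
    new_state.blocked_total = guard.blocked_total;
    *guard = new_state;
}

fn main() {
    let mut state = ConfigState {
        proxy_url: "http://old".to_string(),
        blocked: vec!["example.com".to_string()],
        blocked_total: 3,
    };
    let incoming = ConfigState {
        proxy_url: "http://new".to_string(),
        blocked: Vec::new(),
        blocked_total: 0,
    };
    replace_config_state(&mut state, incoming);
    assert_eq!(state.proxy_url, "http://new");
    assert_eq!(state.blocked_total, 3); // counters survive the swap
    println!("ok");
}
```

Without this, every settings refresh triggered by a sandbox-mode change would silently reset the blocked-request history and counters.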

@@ -7,64 +7,48 @@ pub fn create_spawn_agents_on_csv_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"csv_path".to_string(),
JsonSchema::String {
description: Some("Path to the CSV file containing input rows.".to_string()),
},
JsonSchema::string(Some("Path to the CSV file containing input rows.".to_string())),
),
(
"instruction".to_string(),
JsonSchema::String {
description: Some(
"Instruction template to apply to each CSV row. Use {column_name} placeholders to inject values from the row."
.to_string(),
),
},
JsonSchema::string(Some(
"Instruction template to apply to each CSV row. Use {column_name} placeholders to inject values from the row."
.to_string(),
)),
),
(
"id_column".to_string(),
JsonSchema::String {
description: Some("Optional column name to use as stable item id.".to_string()),
},
JsonSchema::string(Some(
"Optional column name to use as stable item id.".to_string(),
)),
),
(
"output_csv_path".to_string(),
JsonSchema::String {
description: Some("Optional output CSV path for exported results.".to_string()),
},
JsonSchema::string(Some("Optional output CSV path for exported results.".to_string())),
),
(
"max_concurrency".to_string(),
JsonSchema::Number {
description: Some(
"Maximum concurrent workers for this job. Defaults to 16 and is capped by config."
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum concurrent workers for this job. Defaults to 16 and is capped by config."
.to_string(),
)),
),
(
"max_workers".to_string(),
JsonSchema::Number {
description: Some(
"Alias for max_concurrency. Set to 1 to run sequentially.".to_string(),
),
},
JsonSchema::number(Some(
"Alias for max_concurrency. Set to 1 to run sequentially.".to_string(),
)),
),
(
"max_runtime_seconds".to_string(),
JsonSchema::Number {
description: Some(
"Maximum runtime per worker before it is failed. Defaults to 1800 seconds."
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum runtime per worker before it is failed. Defaults to 1800 seconds."
.to_string(),
)),
),
(
"output_schema".to_string(),
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
JsonSchema::object(BTreeMap::new(), /*required*/ None, /*additional_properties*/ None),
),
]);
@@ -74,11 +58,7 @@ pub fn create_spawn_agents_on_csv_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["csv_path".to_string(), "instruction".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(
properties,
Some(vec!["csv_path".to_string(), "instruction".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}
@@ -87,32 +67,22 @@ pub fn create_report_agent_job_result_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"job_id".to_string(),
JsonSchema::String {
description: Some("Identifier of the job.".to_string()),
},
JsonSchema::string(Some("Identifier of the job.".to_string())),
),
(
"item_id".to_string(),
JsonSchema::String {
description: Some("Identifier of the job item.".to_string()),
},
JsonSchema::string(Some("Identifier of the job item.".to_string())),
),
(
"result".to_string(),
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
JsonSchema::object(BTreeMap::new(), /*required*/ None, /*additional_properties*/ None),
),
(
"stop".to_string(),
JsonSchema::Boolean {
description: Some(
"Optional. When true, cancels the remaining job items after this result is recorded."
.to_string(),
),
},
JsonSchema::boolean(Some(
"Optional. When true, cancels the remaining job items after this result is recorded."
.to_string(),
)),
),
]);
@@ -123,15 +93,11 @@ pub fn create_report_agent_job_result_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec![
parameters: JsonSchema::object(properties, Some(vec![
"job_id".to_string(),
"item_id".to_string(),
"result".to_string(),
]),
additional_properties: Some(false.into()),
},
]), Some(false.into())),
output_schema: None,
})
}

View File

@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -12,73 +13,61 @@ fn spawn_agents_on_csv_tool_requires_csv_and_instruction() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"csv_path".to_string(),
JsonSchema::String {
description: Some("Path to the CSV file containing input rows.".to_string()),
},
JsonSchema::string(Some(
"Path to the CSV file containing input rows.".to_string(),
)),
),
(
"instruction".to_string(),
JsonSchema::String {
description: Some(
"Instruction template to apply to each CSV row. Use {column_name} placeholders to inject values from the row."
.to_string(),
),
},
JsonSchema::string(Some(
"Instruction template to apply to each CSV row. Use {column_name} placeholders to inject values from the row."
.to_string(),
)),
),
(
"id_column".to_string(),
JsonSchema::String {
description: Some("Optional column name to use as stable item id.".to_string()),
},
JsonSchema::string(Some(
"Optional column name to use as stable item id.".to_string(),
)),
),
(
"output_csv_path".to_string(),
JsonSchema::String {
description: Some("Optional output CSV path for exported results.".to_string()),
},
JsonSchema::string(Some(
"Optional output CSV path for exported results.".to_string(),
)),
),
(
"max_concurrency".to_string(),
JsonSchema::Number {
description: Some(
"Maximum concurrent workers for this job. Defaults to 16 and is capped by config."
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum concurrent workers for this job. Defaults to 16 and is capped by config."
.to_string(),
)),
),
(
"max_workers".to_string(),
JsonSchema::Number {
description: Some(
"Alias for max_concurrency. Set to 1 to run sequentially.".to_string(),
),
},
JsonSchema::number(Some(
"Alias for max_concurrency. Set to 1 to run sequentially.".to_string(),
)),
),
(
"max_runtime_seconds".to_string(),
JsonSchema::Number {
description: Some(
"Maximum runtime per worker before it is failed. Defaults to 1800 seconds."
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum runtime per worker before it is failed. Defaults to 1800 seconds."
.to_string(),
)),
),
(
"output_schema".to_string(),
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None,
),
),
]),
required: Some(vec!["csv_path".to_string(), "instruction".to_string()]),
additional_properties: Some(false.into()),
},
]), Some(vec!["csv_path".to_string(), "instruction".to_string()]), Some(false.into())),
output_schema: None,
})
);
@@ -95,45 +84,35 @@ fn report_agent_job_result_tool_requires_result_payload() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"job_id".to_string(),
JsonSchema::String {
description: Some("Identifier of the job.".to_string()),
},
JsonSchema::string(Some("Identifier of the job.".to_string())),
),
(
"item_id".to_string(),
JsonSchema::String {
description: Some("Identifier of the job item.".to_string()),
},
JsonSchema::string(Some("Identifier of the job item.".to_string())),
),
(
"result".to_string(),
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None,
),
),
(
"stop".to_string(),
JsonSchema::Boolean {
description: Some(
"Optional. When true, cancels the remaining job items after this result is recorded."
.to_string(),
),
},
JsonSchema::boolean(Some(
"Optional. When true, cancels the remaining job items after this result is recorded."
.to_string(),
)),
),
]),
required: Some(vec![
]), Some(vec![
"job_id".to_string(),
"item_id".to_string(),
"result".to_string(),
]),
additional_properties: Some(false.into()),
},
]), Some(false.into())),
output_schema: None,
})
);

View File

@@ -38,11 +38,7 @@ pub fn create_spawn_agent_tool_v1(options: SpawnAgentToolOptions<'_>) -> ToolSpe
),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: None,
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, /*required*/ None, Some(false.into())),
output_schema: Some(spawn_agent_output_schema_v1()),
})
}
@@ -61,12 +57,10 @@ pub fn create_spawn_agent_tool_v2(options: SpawnAgentToolOptions<'_>) -> ToolSpe
}
properties.insert(
"task_name".to_string(),
JsonSchema::String {
description: Some(
"Task name for the new agent. Use lowercase letters, digits, and underscores."
.to_string(),
),
},
JsonSchema::string(Some(
"Task name for the new agent. Use lowercase letters, digits, and underscores."
.to_string(),
)),
);
ToolSpec::Function(ResponsesApiTool {
@@ -77,11 +71,11 @@ pub fn create_spawn_agent_tool_v2(options: SpawnAgentToolOptions<'_>) -> ToolSpe
),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["task_name".to_string(), "message".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["task_name".to_string(), "message".to_string()]),
Some(false.into()),
),
output_schema: Some(spawn_agent_output_schema_v2(
options.hide_agent_type_model_reasoning,
)),
@@ -92,28 +86,22 @@ pub fn create_send_input_tool_v1() -> ToolSpec {
let properties = BTreeMap::from([
(
"target".to_string(),
JsonSchema::String {
description: Some("Agent id to message (from spawn_agent).".to_string()),
},
JsonSchema::string(Some("Agent id to message (from spawn_agent).".to_string())),
),
(
"message".to_string(),
JsonSchema::String {
description: Some(
"Legacy plain-text message to send to the agent. Use either message or items."
.to_string(),
),
},
JsonSchema::string(Some(
"Legacy plain-text message to send to the agent. Use either message or items."
.to_string(),
)),
),
("items".to_string(), create_collab_input_items_schema()),
(
"interrupt".to_string(),
JsonSchema::Boolean {
description: Some(
"When true, stop the agent's current task and handle this immediately. When false (default), queue this message."
.to_string(),
),
},
JsonSchema::boolean(Some(
"When true, stop the agent's current task and handle this immediately. When false (default), queue this message."
.to_string(),
)),
),
]);
@@ -123,11 +111,7 @@ pub fn create_send_input_tool_v1() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["target".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(
properties,
Some(vec!["target".to_string()]),
Some(false.into()),
),
output_schema: Some(send_input_output_schema()),
})
}
@@ -136,17 +120,15 @@ pub fn create_send_message_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"target".to_string(),
JsonSchema::String {
description: Some(
"Agent id or canonical task name to message (from spawn_agent).".to_string(),
),
},
JsonSchema::string(Some(
"Agent id or canonical task name to message (from spawn_agent).".to_string(),
)),
),
(
"message".to_string(),
JsonSchema::String {
description: Some("Message text to queue on the target agent.".to_string()),
},
JsonSchema::string(Some(
"Message text to queue on the target agent.".to_string(),
)),
),
]);
@@ -156,11 +138,7 @@ pub fn create_send_message_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["target".to_string(), "message".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, Some(vec!["target".to_string(), "message".to_string()]), Some(false.into())),
output_schema: None,
})
}
@@ -169,26 +147,22 @@ pub fn create_followup_task_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"target".to_string(),
JsonSchema::String {
description: Some(
"Agent id or canonical task name to message (from spawn_agent).".to_string(),
),
},
JsonSchema::string(Some(
"Agent id or canonical task name to message (from spawn_agent).".to_string(),
)),
),
(
"message".to_string(),
JsonSchema::String {
description: Some("Message text to send to the target agent.".to_string()),
},
JsonSchema::string(Some(
"Message text to send to the target agent.".to_string(),
)),
),
(
"interrupt".to_string(),
JsonSchema::Boolean {
description: Some(
"When true, stop the agent's current task and handle this immediately. When false (default), queue this message."
.to_string(),
),
},
JsonSchema::boolean(Some(
"When true, stop the agent's current task and handle this immediately. When false (default), queue this message."
.to_string(),
)),
),
]);
@@ -198,11 +172,7 @@ pub fn create_followup_task_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["target".to_string(), "message".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, Some(vec!["target".to_string(), "message".to_string()]), Some(false.into())),
output_schema: None,
})
}
@@ -210,9 +180,7 @@ pub fn create_followup_task_tool() -> ToolSpec {
pub fn create_resume_agent_tool() -> ToolSpec {
let properties = BTreeMap::from([(
"id".to_string(),
JsonSchema::String {
description: Some("Agent id to resume.".to_string()),
},
JsonSchema::string(Some("Agent id to resume.".to_string())),
)]);
ToolSpec::Function(ResponsesApiTool {
@@ -222,11 +190,7 @@ pub fn create_resume_agent_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["id".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, Some(vec!["id".to_string()]), Some(false.into())),
output_schema: Some(resume_agent_output_schema()),
})
}
@@ -258,12 +222,10 @@ pub fn create_wait_agent_tool_v2(options: WaitAgentTimeoutOptions) -> ToolSpec {
pub fn create_list_agents_tool() -> ToolSpec {
let properties = BTreeMap::from([(
"path_prefix".to_string(),
JsonSchema::String {
description: Some(
"Optional task-path prefix. Accepts the same relative or absolute task-path syntax as other MultiAgentV2 agent targets."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional task-path prefix. Accepts the same relative or absolute task-path syntax as other MultiAgentV2 agent targets."
.to_string(),
)),
)]);
ToolSpec::Function(ResponsesApiTool {
@@ -273,11 +235,7 @@ pub fn create_list_agents_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: None,
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, /*required*/ None, Some(false.into())),
output_schema: Some(list_agents_output_schema()),
})
}
@@ -285,9 +243,7 @@ pub fn create_list_agents_tool() -> ToolSpec {
pub fn create_close_agent_tool_v1() -> ToolSpec {
let properties = BTreeMap::from([(
"target".to_string(),
JsonSchema::String {
description: Some("Agent id to close (from spawn_agent).".to_string()),
},
JsonSchema::string(Some("Agent id to close (from spawn_agent).".to_string())),
)]);
ToolSpec::Function(ResponsesApiTool {
@@ -295,11 +251,7 @@ pub fn create_close_agent_tool_v1() -> ToolSpec {
description: "Close an agent and any open descendants when they are no longer needed, and return the target agent's previous status before shutdown was requested. Don't keep agents open for too long if they are not needed anymore.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["target".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, Some(vec!["target".to_string()]), Some(false.into())),
output_schema: Some(close_agent_output_schema()),
})
}
@@ -307,11 +259,9 @@ pub fn create_close_agent_tool_v1() -> ToolSpec {
pub fn create_close_agent_tool_v2() -> ToolSpec {
let properties = BTreeMap::from([(
"target".to_string(),
JsonSchema::String {
description: Some(
"Agent id or canonical task name to close (from spawn_agent).".to_string(),
),
},
JsonSchema::string(Some(
"Agent id or canonical task name to close (from spawn_agent).".to_string(),
)),
)]);
ToolSpec::Function(ResponsesApiTool {
@@ -319,11 +269,7 @@ pub fn create_close_agent_tool_v2() -> ToolSpec {
description: "Close an agent and any open descendants when they are no longer needed, and return the target agent's previous status before shutdown was requested. Don't keep agents open for too long if they are not needed anymore.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["target".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, Some(vec!["target".to_string()]), Some(false.into())),
output_schema: Some(close_agent_output_schema()),
})
}
@@ -522,98 +468,71 @@ fn create_collab_input_items_schema() -> JsonSchema {
let properties = BTreeMap::from([
(
"type".to_string(),
JsonSchema::String {
description: Some(
"Input item type: text, image, local_image, skill, or mention.".to_string(),
),
},
JsonSchema::string(Some(
"Input item type: text, image, local_image, skill, or mention.".to_string(),
)),
),
(
"text".to_string(),
JsonSchema::String {
description: Some("Text content when type is text.".to_string()),
},
JsonSchema::string(Some("Text content when type is text.".to_string())),
),
(
"image_url".to_string(),
JsonSchema::String {
description: Some("Image URL when type is image.".to_string()),
},
JsonSchema::string(Some("Image URL when type is image.".to_string())),
),
(
"path".to_string(),
JsonSchema::String {
description: Some(
"Path when type is local_image/skill, or structured mention target such as app://<connector-id> or plugin://<plugin-name>@<marketplace-name> when type is mention."
.to_string(),
),
},
JsonSchema::string(Some(
"Path when type is local_image/skill, or structured mention target such as app://<connector-id> or plugin://<plugin-name>@<marketplace-name> when type is mention."
.to_string(),
)),
),
(
"name".to_string(),
JsonSchema::String {
description: Some("Display name when type is skill or mention.".to_string()),
},
JsonSchema::string(Some("Display name when type is skill or mention.".to_string())),
),
]);
JsonSchema::Array {
items: Box::new(JsonSchema::Object {
properties,
required: None,
additional_properties: Some(false.into()),
}),
description: Some(
"Structured input items. Use this to pass explicit mentions (for example app:// connector paths)."
.to_string(),
),
}
JsonSchema::array(JsonSchema::object(properties, /*required*/ None, Some(false.into())), Some(
"Structured input items. Use this to pass explicit mentions (for example app:// connector paths)."
.to_string(),
))
}
fn spawn_agent_common_properties_v1(agent_type_description: &str) -> BTreeMap<String, JsonSchema> {
BTreeMap::from([
(
"message".to_string(),
JsonSchema::String {
description: Some(
"Initial plain-text task for the new agent. Use either message or items."
.to_string(),
),
},
JsonSchema::string(Some(
"Initial plain-text task for the new agent. Use either message or items."
.to_string(),
)),
),
("items".to_string(), create_collab_input_items_schema()),
(
"agent_type".to_string(),
JsonSchema::String {
description: Some(agent_type_description.to_string()),
},
JsonSchema::string(Some(agent_type_description.to_string())),
),
(
"fork_context".to_string(),
JsonSchema::Boolean {
description: Some(
"When true, fork the current thread history into the new agent before sending the initial prompt. This must be used when you want the new agent to have exactly the same context as you."
.to_string(),
),
},
JsonSchema::boolean(Some(
"When true, fork the current thread history into the new agent before sending the initial prompt. This must be used when you want the new agent to have exactly the same context as you."
.to_string(),
)),
),
(
"model".to_string(),
JsonSchema::String {
description: Some(
"Optional model override for the new agent. Replaces the inherited model."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional model override for the new agent. Replaces the inherited model."
.to_string(),
)),
),
(
"reasoning_effort".to_string(),
JsonSchema::String {
description: Some(
"Optional reasoning effort override for the new agent. Replaces the inherited reasoning effort."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional reasoning effort override for the new agent. Replaces the inherited reasoning effort."
.to_string(),
)),
),
])
}
@@ -622,42 +541,32 @@ fn spawn_agent_common_properties_v2(agent_type_description: &str) -> BTreeMap<St
BTreeMap::from([
(
"message".to_string(),
JsonSchema::String {
description: Some("Initial plain-text task for the new agent.".to_string()),
},
JsonSchema::string(Some("Initial plain-text task for the new agent.".to_string())),
),
(
"agent_type".to_string(),
JsonSchema::String {
description: Some(agent_type_description.to_string()),
},
JsonSchema::string(Some(agent_type_description.to_string())),
),
(
"fork_turns".to_string(),
JsonSchema::String {
description: Some(
"Optional MultiAgentV2 fork mode. Use `none`, `all`, or a positive integer string such as `3` to fork only the most recent turns."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional MultiAgentV2 fork mode. Use `none`, `all`, or a positive integer string such as `3` to fork only the most recent turns."
.to_string(),
)),
),
(
"model".to_string(),
JsonSchema::String {
description: Some(
"Optional model override for the new agent. Replaces the inherited model."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional model override for the new agent. Replaces the inherited model."
.to_string(),
)),
),
(
"reasoning_effort".to_string(),
JsonSchema::String {
description: Some(
"Optional reasoning effort override for the new agent. Replaces the inherited reasoning effort."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional reasoning effort override for the new agent. Replaces the inherited reasoning effort."
.to_string(),
)),
),
])
}
@@ -750,48 +659,40 @@ fn wait_agent_tool_parameters_v1(options: WaitAgentTimeoutOptions) -> JsonSchema
let properties = BTreeMap::from([
(
"targets".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some(
"Agent ids to wait on. Pass multiple ids to wait for whichever finishes first."
.to_string(),
),
},
JsonSchema::array(
JsonSchema::string(/*description*/ None),
Some(
"Agent ids to wait on. Pass multiple ids to wait for whichever finishes first."
.to_string(),
),
),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some(format!(
"Optional timeout in milliseconds. Defaults to {}, min {}, max {}. Prefer longer waits (minutes) to avoid busy polling.",
options.default_timeout_ms, options.min_timeout_ms, options.max_timeout_ms,
)),
},
JsonSchema::number(Some(format!(
"Optional timeout in milliseconds. Defaults to {}, min {}, max {}. Prefer longer waits (minutes) to avoid busy polling.",
options.default_timeout_ms, options.min_timeout_ms, options.max_timeout_ms,
))),
),
]);
JsonSchema::Object {
properties,
required: Some(vec!["targets".to_string()]),
additional_properties: Some(false.into()),
}
JsonSchema::object(
properties,
Some(vec!["targets".to_string()]),
Some(false.into()),
)
}
fn wait_agent_tool_parameters_v2(options: WaitAgentTimeoutOptions) -> JsonSchema {
let properties = BTreeMap::from([(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some(format!(
"Optional timeout in milliseconds. Defaults to {}, min {}, max {}. Prefer longer waits (minutes) to avoid busy polling.",
options.default_timeout_ms, options.min_timeout_ms, options.max_timeout_ms,
)),
},
JsonSchema::number(Some(format!(
"Optional timeout in milliseconds. Defaults to {}, min {}, max {}. Prefer longer waits (minutes) to avoid busy polling.",
options.default_timeout_ms, options.min_timeout_ms, options.max_timeout_ms,
))),
)]);
JsonSchema::Object {
properties,
required: None,
additional_properties: Some(false.into()),
}
JsonSchema::object(properties, /*required*/ None, Some(false.into()))
}
#[cfg(test)]

View File

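The call-site changes in the file above all follow one pattern: multi-line enum-variant struct literals (`JsonSchema::String { description: ... }`, `JsonSchema::Object { properties, required, additional_properties }`) collapse into one-line constructor calls. A simplified, self-contained sketch of that pattern — the struct here is a hypothetical pared-down stand-in for the real `JsonSchema` type, not its actual definition:

```rust
use std::collections::BTreeMap;

// Pared-down stand-in: constructor helpers replace the old
// enum-variant struct literals, so call sites shrink to one line.
#[derive(Debug, Default, Clone, PartialEq)]
struct JsonSchema {
    schema_type: Option<&'static str>,
    description: Option<String>,
    properties: Option<BTreeMap<String, JsonSchema>>,
    required: Option<Vec<String>>,
}

impl JsonSchema {
    fn string(description: Option<String>) -> Self {
        // `..Default::default()` fills every field this schema kind doesn't use.
        Self { schema_type: Some("string"), description, ..Default::default() }
    }

    fn object(properties: BTreeMap<String, JsonSchema>, required: Option<Vec<String>>) -> Self {
        Self { schema_type: Some("object"), properties: Some(properties), required, ..Default::default() }
    }
}

fn main() {
    // One-line call site, equivalent to the old multi-line variant literal.
    let params = JsonSchema::object(
        BTreeMap::from([(
            "target".to_string(),
            JsonSchema::string(Some("Agent id to message.".to_string())),
        )]),
        Some(vec!["target".to_string()]),
    );
    assert_eq!(params.schema_type, Some("object"));
    assert!(params.properties.unwrap().contains_key("target"));
    println!("ok");
}
```

The helpers also centralize the `type` field, so a future schema keyword (e.g. `enum`) only touches the constructors, not every call site.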
@@ -1,4 +1,6 @@
use super::*;
use crate::JsonSchemaPrimitiveType;
use crate::JsonSchemaType;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::openai_models::ReasoningEffortPreset;
@@ -47,14 +49,14 @@ fn spawn_agent_tool_v2_requires_task_name_and_lists_visible_models() {
else {
panic!("spawn_agent should be a function tool");
};
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("spawn_agent should use object params");
};
assert_eq!(
parameters.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object))
);
let properties = parameters
.properties
.as_ref()
.expect("spawn_agent should use object params");
assert!(description.contains("visible display (`visible-model`)"));
assert!(!description.contains("hidden display (`hidden-model`)"));
assert!(properties.contains_key("task_name"));
@@ -64,13 +66,11 @@ fn spawn_agent_tool_v2_requires_task_name_and_lists_visible_models() {
assert!(!properties.contains_key("fork_context"));
assert_eq!(
properties.get("agent_type"),
Some(&JsonSchema::String {
description: Some("role help".to_string()),
})
Some(&JsonSchema::string(Some("role help".to_string())))
);
assert_eq!(
required,
Some(vec!["task_name".to_string(), "message".to_string()])
parameters.required.as_ref(),
Some(&vec!["task_name".to_string(), "message".to_string()])
);
assert_eq!(
output_schema.expect("spawn_agent output schema")["required"],
@@ -89,9 +89,14 @@ fn spawn_agent_tool_v1_keeps_legacy_fork_context_field() {
let ToolSpec::Function(ResponsesApiTool { parameters, .. }) = tool else {
panic!("spawn_agent should be a function tool");
};
let JsonSchema::Object { properties, .. } = parameters else {
panic!("spawn_agent should use object params");
};
assert_eq!(
parameters.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object))
);
let properties = parameters
.properties
.as_ref()
.expect("spawn_agent should use object params");
assert!(properties.contains_key("fork_context"));
assert!(!properties.contains_key("fork_turns"));
@@ -107,21 +112,21 @@ fn send_message_tool_requires_message_and_has_no_output_schema() {
else {
panic!("send_message should be a function tool");
};
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("send_message should use object params");
};
assert_eq!(
parameters.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object))
);
let properties = parameters
.properties
.as_ref()
.expect("send_message should use object params");
assert!(properties.contains_key("target"));
assert!(properties.contains_key("message"));
assert!(!properties.contains_key("interrupt"));
assert!(!properties.contains_key("items"));
assert_eq!(
required,
Some(vec!["target".to_string(), "message".to_string()])
parameters.required.as_ref(),
Some(&vec!["target".to_string(), "message".to_string()])
);
assert_eq!(output_schema, None);
}
@@ -136,21 +141,21 @@ fn followup_task_tool_requires_message_and_has_no_output_schema() {
else {
panic!("followup_task should be a function tool");
};
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("followup_task should use object params");
};
assert_eq!(
parameters.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object))
);
let properties = parameters
.properties
.as_ref()
.expect("followup_task should use object params");
assert!(properties.contains_key("target"));
assert!(properties.contains_key("message"));
assert!(properties.contains_key("interrupt"));
assert!(!properties.contains_key("items"));
assert_eq!(
required,
Some(vec!["target".to_string(), "message".to_string()])
parameters.required.as_ref(),
Some(&vec!["target".to_string(), "message".to_string()])
);
assert_eq!(output_schema, None);
}
@@ -169,17 +174,17 @@ fn wait_agent_tool_v2_uses_timeout_only_summary_output() {
else {
panic!("wait_agent should be a function tool");
};
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("wait_agent should use object params");
};
assert_eq!(
parameters.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object))
);
let properties = parameters
.properties
.as_ref()
.expect("wait_agent should use object params");
assert!(!properties.contains_key("targets"));
assert!(properties.contains_key("timeout_ms"));
assert_eq!(required, None);
assert_eq!(parameters.required.as_ref(), None);
assert_eq!(
output_schema.expect("wait output schema")["properties"]["message"]["description"],
json!("Brief wait summary without the agent's final content.")
@@ -196,9 +201,14 @@ fn list_agents_tool_includes_path_prefix_and_agent_fields() {
else {
panic!("list_agents should be a function tool");
};
let JsonSchema::Object { properties, .. } = parameters else {
panic!("list_agents should use object params");
};
assert_eq!(
parameters.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object))
);
let properties = parameters
.properties
.as_ref()
.expect("list_agents should use object params");
assert!(properties.contains_key("path_prefix"));
assert_eq!(
output_schema.expect("list_agents output schema")["properties"]["agents"]["items"]["required"],

View File

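On the test side, the migration in the file above swaps `let ... else` destructuring of the old enum for plain field access on the new struct. A minimal sketch with a hypothetical stand-in type (not the real one) showing why the `panic!`-bearing `else` arms disappear:

```rust
// Stand-in for the new struct-shaped `JsonSchema`: every facet is a field,
// so tests read it directly instead of pattern-matching a variant.
#[derive(Debug, Default, PartialEq)]
struct JsonSchema {
    schema_type: Option<&'static str>,
    required: Option<Vec<String>>,
}

fn main() {
    let parameters = JsonSchema {
        schema_type: Some("object"),
        required: Some(vec!["target".to_string()]),
    };
    // Old style needed `let JsonSchema::Object { required, .. } = parameters
    // else { panic!("should use object params") };`. New style is field access:
    assert_eq!(parameters.schema_type, Some("object"));
    assert_eq!(
        parameters.required.as_ref(),
        Some(&vec!["target".to_string()])
    );
    println!("ok");
}
```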
@@ -102,9 +102,9 @@ pub fn create_apply_patch_freeform_tool() -> ToolSpec {
pub fn create_apply_patch_json_tool() -> ToolSpec {
let properties = BTreeMap::from([(
"input".to_string(),
JsonSchema::String {
description: Some("The entire contents of the apply_patch command".to_string()),
},
JsonSchema::string(Some(
"The entire contents of the apply_patch command".to_string(),
)),
)]);
ToolSpec::Function(ResponsesApiTool {
@@ -112,11 +112,11 @@ pub fn create_apply_patch_json_tool() -> ToolSpec {
description: APPLY_PATCH_JSON_TOOL_DESCRIPTION.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["input".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(
properties,
Some(vec!["input".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}

View File

@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -29,18 +30,16 @@ fn create_apply_patch_json_tool_matches_expected_spec() {
description: APPLY_PATCH_JSON_TOOL_DESCRIPTION.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
"input".to_string(),
JsonSchema::String {
description: Some(
"The entire contents of the apply_patch command".to_string(),
),
},
)]),
required: Some(vec!["input".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(
BTreeMap::from([(
"input".to_string(),
JsonSchema::string(Some(
"The entire contents of the apply_patch command".to_string(),
),),
)]),
Some(vec!["input".to_string()]),
Some(false.into())
),
output_schema: None,
})
);

View File

@@ -53,32 +53,26 @@ pub fn create_wait_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"cell_id".to_string(),
JsonSchema::String {
description: Some("Identifier of the running exec cell.".to_string()),
},
JsonSchema::string(Some("Identifier of the running exec cell.".to_string())),
),
(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
"How long to wait (in milliseconds) for more output before yielding again."
.to_string(),
),
},
JsonSchema::number(Some(
"How long to wait (in milliseconds) for more output before yielding again."
.to_string(),
)),
),
(
"max_tokens".to_string(),
JsonSchema::Number {
description: Some(
"Maximum number of output tokens to return for this wait call.".to_string(),
),
},
JsonSchema::number(Some(
"Maximum number of output tokens to return for this wait call.".to_string(),
)),
),
(
"terminate".to_string(),
JsonSchema::Boolean {
description: Some("Whether to terminate the running exec cell.".to_string()),
},
JsonSchema::boolean(Some(
"Whether to terminate the running exec cell.".to_string(),
)),
),
]);
@@ -90,11 +84,11 @@ pub fn create_wait_tool() -> ToolSpec {
codex_code_mode::build_wait_tool_description().trim()
),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["cell_id".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(
properties,
Some(vec!["cell_id".to_string()]),
Some(false.into()),
),
output_schema: None,
defer_loading: None,
})
@@ -102,6 +96,7 @@ pub fn create_wait_tool() -> ToolSpec {
pub fn create_code_mode_tool(
enabled_tools: &[(String, String)],
namespace_descriptions: &BTreeMap<String, codex_code_mode::ToolNamespaceDescription>,
code_mode_only_enabled: bool,
) -> ToolSpec {
const CODE_MODE_FREEFORM_GRAMMAR: &str = r#"
@@ -118,6 +113,7 @@ SOURCE: /[\s\S]+/
name: codex_code_mode::PUBLIC_TOOL_NAME.to_string(),
description: codex_code_mode::build_exec_tool_description(
enabled_tools,
namespace_descriptions,
code_mode_only_enabled,
),
format: FreeformToolFormat {

View File

@@ -20,14 +20,14 @@ fn augment_tool_spec_for_code_mode_augments_function_tools() {
description: "Look up an order".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
"order_id".to_string(),
JsonSchema::String { description: None },
)]),
required: Some(vec!["order_id".to_string()]),
additional_properties: Some(AdditionalProperties::Boolean(false)),
},
parameters: JsonSchema::object(
BTreeMap::from([(
"order_id".to_string(),
JsonSchema::string(/*description*/ None),
)]),
Some(vec!["order_id".to_string()]),
Some(AdditionalProperties::Boolean(false))
),
output_schema: Some(json!({
"type": "object",
"properties": {
@@ -38,17 +38,23 @@ fn augment_tool_spec_for_code_mode_augments_function_tools() {
})),
ToolSpec::Function(ResponsesApiTool {
name: "lookup_order".to_string(),
description: "Look up an order\n\nexec tool declaration:\n```ts\ndeclare const tools: { lookup_order(args: { order_id: string; }): Promise<{ ok: boolean; }>; };\n```".to_string(),
description: r#"Look up an order
exec tool declaration:
```ts
declare const tools: { lookup_order(args: { order_id: string; }): Promise<{ ok: boolean; }>; };
```"#
.to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
"order_id".to_string(),
JsonSchema::String { description: None },
)]),
required: Some(vec!["order_id".to_string()]),
additional_properties: Some(AdditionalProperties::Boolean(false)),
},
parameters: JsonSchema::object(
BTreeMap::from([(
"order_id".to_string(),
JsonSchema::string(/*description*/ None),
)]),
Some(vec!["order_id".to_string()]),
Some(AdditionalProperties::Boolean(false))
),
output_schema: Some(json!({
"type": "object",
"properties": {
@@ -100,7 +106,13 @@ fn tool_spec_to_code_mode_tool_definition_returns_augmented_nested_tools() {
tool_spec_to_code_mode_tool_definition(&spec),
Some(codex_code_mode::ToolDefinition {
name: "apply_patch".to_string(),
description: "Apply a patch\n\nexec tool declaration:\n```ts\ndeclare const tools: { apply_patch(input: string): Promise<unknown>; };\n```".to_string(),
description: r#"Apply a patch
exec tool declaration:
```ts
declare const tools: { apply_patch(input: string): Promise<unknown>; };
```"#
.to_string(),
kind: codex_code_mode::CodeModeToolKind::Freeform,
input_schema: None,
output_schema: None,
@@ -114,11 +126,11 @@ fn tool_spec_to_code_mode_tool_definition_skips_unsupported_variants() {
tool_spec_to_code_mode_tool_definition(&ToolSpec::ToolSearch {
execution: "sync".to_string(),
description: "Search".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
}),
None
);
@@ -137,44 +149,32 @@ fn create_wait_tool_matches_expected_spec() {
),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
(
"cell_id".to_string(),
JsonSchema::String {
description: Some("Identifier of the running exec cell.".to_string()),
},
),
(
"max_tokens".to_string(),
JsonSchema::Number {
description: Some(
"Maximum number of output tokens to return for this wait call."
.to_string(),
),
},
),
(
"terminate".to_string(),
JsonSchema::Boolean {
description: Some(
"Whether to terminate the running exec cell.".to_string(),
),
},
),
(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
"How long to wait (in milliseconds) for more output before yielding again."
.to_string(),
),
},
),
]),
required: Some(vec!["cell_id".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(BTreeMap::from([
(
"cell_id".to_string(),
JsonSchema::string(Some("Identifier of the running exec cell.".to_string()),),
),
(
"max_tokens".to_string(),
JsonSchema::number(Some(
"Maximum number of output tokens to return for this wait call."
.to_string(),
),),
),
(
"terminate".to_string(),
JsonSchema::boolean(Some(
"Whether to terminate the running exec cell.".to_string(),
),),
),
(
"yield_time_ms".to_string(),
JsonSchema::number(Some(
"How long to wait (in milliseconds) for more output before yielding again."
.to_string(),
),),
),
]), Some(vec!["cell_id".to_string()]), Some(false.into())),
output_schema: None,
})
);
@@ -185,11 +185,16 @@ fn create_code_mode_tool_matches_expected_spec() {
let enabled_tools = vec![("update_plan".to_string(), "Update the plan".to_string())];
assert_eq!(
create_code_mode_tool(&enabled_tools, /*code_mode_only_enabled*/ true),
create_code_mode_tool(
&enabled_tools,
&BTreeMap::new(),
/*code_mode_only_enabled*/ true,
),
ToolSpec::Freeform(FreeformTool {
name: codex_code_mode::PUBLIC_TOOL_NAME.to_string(),
description: codex_code_mode::build_exec_tool_description(
&enabled_tools,
&BTreeMap::new(),
/*code_mode_only*/ true
),
format: FreeformToolFormat {

View File

@@ -25,16 +25,14 @@ fn parse_dynamic_tool_sanitizes_input_schema() {
ToolDefinition {
name: "lookup_ticket".to_string(),
description: "Fetch a ticket".to_string(),
input_schema: JsonSchema::Object {
properties: BTreeMap::from([(
"id".to_string(),
JsonSchema::String {
description: Some("Ticket identifier".to_string()),
},
)]),
required: None,
additional_properties: None,
},
input_schema: JsonSchema::object(
BTreeMap::from([(
"id".to_string(),
JsonSchema::string(Some("Ticket identifier".to_string()),),
)]),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
defer_loading: false,
}
@@ -58,11 +56,11 @@ fn parse_dynamic_tool_preserves_defer_loading() {
ToolDefinition {
name: "lookup_ticket".to_string(),
description: "Fetch a ticket".to_string(),
input_schema: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
input_schema: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
defer_loading: true,
}

View File

@@ -45,11 +45,7 @@ pub fn create_js_repl_reset_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(BTreeMap::new(), /*required*/ None, Some(false.into())),
output_schema: None,
})
}

View File

@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use crate::ToolSpec;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -29,11 +30,11 @@ fn js_repl_reset_tool_matches_expected_spec() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
Some(false.into())
),
output_schema: None,
})
);

View File

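The protocol file that follows replaces the tagged enum with a single struct whose `schema_type` is an untagged `JsonSchemaType` union, because JSON Schema's `type` keyword accepts either one name (`"string"`) or a list (`["string","null"]`). A minimal std-only sketch of that union — the hand-rolled `to_json` below is only an illustration standing in for serde's `#[serde(untagged)]` behavior, not real code from the diff:

```rust
// Models JSON Schema's `type`: a single name or a union of names.
#[derive(Debug, Clone, PartialEq)]
enum JsonSchemaType {
    Single(&'static str),
    Multiple(Vec<&'static str>),
}

impl JsonSchemaType {
    // Illustrative rendering: Single -> bare string, Multiple -> array,
    // mirroring how an untagged serde enum would serialize the two cases.
    fn to_json(&self) -> String {
        match self {
            Self::Single(t) => format!("\"{t}\""),
            Self::Multiple(ts) => {
                let inner: Vec<String> = ts.iter().map(|t| format!("\"{t}\"")).collect();
                format!("[{}]", inner.join(","))
            }
        }
    }
}

fn main() {
    assert_eq!(JsonSchemaType::Single("string").to_json(), "\"string\"");
    // A nullable union, e.g. what `anyOf {null, T}` can collapse to:
    assert_eq!(
        JsonSchemaType::Multiple(vec!["string", "null"]).to_json(),
        "[\"string\",\"null\"]"
    );
    println!("ok");
}
```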
@@ -4,40 +4,125 @@ use serde_json::Value as JsonValue;
use serde_json::json;
use std::collections::BTreeMap;
/// Generic JSON-Schema subset needed for our tool definitions.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "lowercase")]
pub enum JsonSchema {
Boolean {
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
},
String {
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
},
/// MCP schema allows "number" | "integer" for Number.
#[serde(alias = "integer")]
Number {
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
},
Array {
items: Box<JsonSchema>,
/// Primitive JSON Schema type names we support in tool definitions.
///
/// This mirrors the OpenAI Structured Outputs subset for JSON Schema `type`:
/// string, number, boolean, integer, object, array, and null.
/// Keywords such as `enum`, `const`, and `anyOf` are modeled separately.
/// See <https://developers.openai.com/api/docs/guides/structured-outputs#supported-schemas>.
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum JsonSchemaPrimitiveType {
String,
Number,
Boolean,
Integer,
Object,
Array,
Null,
}
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
},
Object {
/// JSON Schema `type` supports either a single type name or a union of names.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(untagged)]
pub enum JsonSchemaType {
Single(JsonSchemaPrimitiveType),
Multiple(Vec<JsonSchemaPrimitiveType>),
}
/// Generic JSON-Schema subset needed for our tool definitions.
#[derive(Debug, Clone, Default, Serialize, Deserialize, PartialEq)]
pub struct JsonSchema {
#[serde(rename = "type", skip_serializing_if = "Option::is_none")]
pub schema_type: Option<JsonSchemaType>,
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
#[serde(rename = "enum", skip_serializing_if = "Option::is_none")]
pub enum_values: Option<Vec<JsonValue>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub items: Option<Box<JsonSchema>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub properties: Option<BTreeMap<String, JsonSchema>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub required: Option<Vec<String>>,
#[serde(
rename = "additionalProperties",
skip_serializing_if = "Option::is_none"
)]
pub additional_properties: Option<AdditionalProperties>,
#[serde(rename = "anyOf", skip_serializing_if = "Option::is_none")]
pub any_of: Option<Vec<JsonSchema>>,
}
impl JsonSchema {
/// Construct a scalar/object/array schema with a single JSON Schema type.
fn typed(schema_type: JsonSchemaPrimitiveType, description: Option<String>) -> Self {
Self {
schema_type: Some(JsonSchemaType::Single(schema_type)),
description,
..Default::default()
}
}
pub fn any_of(variants: Vec<JsonSchema>, description: Option<String>) -> Self {
Self {
description,
any_of: Some(variants),
..Default::default()
}
}
pub fn boolean(description: Option<String>) -> Self {
Self::typed(JsonSchemaPrimitiveType::Boolean, description)
}
pub fn string(description: Option<String>) -> Self {
Self::typed(JsonSchemaPrimitiveType::String, description)
}
pub fn number(description: Option<String>) -> Self {
Self::typed(JsonSchemaPrimitiveType::Number, description)
}
pub fn integer(description: Option<String>) -> Self {
Self::typed(JsonSchemaPrimitiveType::Integer, description)
}
pub fn null(description: Option<String>) -> Self {
Self::typed(JsonSchemaPrimitiveType::Null, description)
}
pub fn string_enum(values: Vec<JsonValue>, description: Option<String>) -> Self {
Self {
schema_type: Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::String)),
description,
enum_values: Some(values),
..Default::default()
}
}
pub fn array(items: JsonSchema, description: Option<String>) -> Self {
Self {
schema_type: Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Array)),
description,
items: Some(Box::new(items)),
..Default::default()
}
}
pub fn object(
properties: BTreeMap<String, JsonSchema>,
required: Option<Vec<String>>,
additional_properties: Option<AdditionalProperties>,
) -> Self {
Self {
schema_type: Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object)),
properties: Some(properties),
required,
additional_properties,
..Default::default()
}
}
}
/// Whether additional properties are allowed, and if so, any required schema.
@@ -64,16 +149,23 @@ impl From<JsonSchema> for AdditionalProperties {
pub fn parse_tool_input_schema(input_schema: &JsonValue) -> Result<JsonSchema, serde_json::Error> {
let mut input_schema = input_schema.clone();
sanitize_json_schema(&mut input_schema);
serde_json::from_value::<JsonSchema>(input_schema)
let schema: JsonSchema = serde_json::from_value(input_schema)?;
if matches!(
schema.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Null))
) {
return Err(singleton_null_schema_error());
}
Ok(schema)
}
/// Sanitize a JSON Schema (as serde_json::Value) so it can fit our limited
/// JsonSchema enum. This function:
/// - Ensures every schema object has a "type". If missing, infers it from
/// common keywords (properties => object, items => array, enum/const/format => string)
/// and otherwise defaults to "string".
/// - Fills required child fields (e.g. array items, object properties) with
/// permissive defaults when absent.
/// schema representation. This function:
/// - Ensures every typed schema object has a `"type"` when required.
/// - Preserves explicit `anyOf`.
/// - Collapses `const` into single-value `enum`.
/// - Fills required child fields for object/array schema types, including
/// nullable unions, with permissive defaults when absent.
fn sanitize_json_schema(value: &mut JsonValue) {
match value {
JsonValue::Bool(_) => {
@@ -96,81 +188,153 @@ fn sanitize_json_schema(value: &mut JsonValue) {
if let Some(items) = map.get_mut("items") {
sanitize_json_schema(items);
}
for combiner in ["oneOf", "anyOf", "allOf", "prefixItems"] {
if let Some(value) = map.get_mut(combiner) {
sanitize_json_schema(value);
}
}
let mut schema_type = map
.get("type")
.and_then(|value| value.as_str())
.map(str::to_string);
if schema_type.is_none()
&& let Some(JsonValue::Array(types)) = map.get("type")
if let Some(additional_properties) = map.get_mut("additionalProperties")
&& !matches!(additional_properties, JsonValue::Bool(_))
{
for candidate in types {
if let Some(candidate_type) = candidate.as_str()
&& matches!(
candidate_type,
"object" | "array" | "string" | "number" | "integer" | "boolean"
)
{
schema_type = Some(candidate_type.to_string());
break;
}
}
sanitize_json_schema(additional_properties);
}
if let Some(value) = map.get_mut("prefixItems") {
sanitize_json_schema(value);
}
if let Some(value) = map.get_mut("anyOf") {
sanitize_json_schema(value);
}
if schema_type.is_none() {
if let Some(const_value) = map.remove("const") {
map.insert("enum".to_string(), JsonValue::Array(vec![const_value]));
}
let mut schema_types = normalized_schema_types(map);
if schema_types.is_empty() && map.contains_key("anyOf") {
return;
}
if schema_types.is_empty() {
if map.contains_key("properties")
|| map.contains_key("required")
|| map.contains_key("additionalProperties")
{
schema_type = Some("object".to_string());
schema_types.push(JsonSchemaPrimitiveType::Object);
} else if map.contains_key("items") || map.contains_key("prefixItems") {
schema_type = Some("array".to_string());
} else if map.contains_key("enum")
|| map.contains_key("const")
|| map.contains_key("format")
{
schema_type = Some("string".to_string());
schema_types.push(JsonSchemaPrimitiveType::Array);
} else if map.contains_key("enum") || map.contains_key("format") {
schema_types.push(JsonSchemaPrimitiveType::String);
} else if map.contains_key("minimum")
|| map.contains_key("maximum")
|| map.contains_key("exclusiveMinimum")
|| map.contains_key("exclusiveMaximum")
|| map.contains_key("multipleOf")
{
schema_type = Some("number".to_string());
schema_types.push(JsonSchemaPrimitiveType::Number);
} else {
schema_types.push(JsonSchemaPrimitiveType::String);
}
}
let schema_type = schema_type.unwrap_or_else(|| "string".to_string());
map.insert("type".to_string(), JsonValue::String(schema_type.clone()));
if schema_type == "object" {
if !map.contains_key("properties") {
map.insert(
"properties".to_string(),
JsonValue::Object(serde_json::Map::new()),
);
}
if let Some(additional_properties) = map.get_mut("additionalProperties")
&& !matches!(additional_properties, JsonValue::Bool(_))
{
sanitize_json_schema(additional_properties);
}
}
if schema_type == "array" && !map.contains_key("items") {
map.insert("items".to_string(), json!({ "type": "string" }));
}
write_schema_types(map, &schema_types);
ensure_default_children_for_schema_types(map, &schema_types);
}
_ => {}
}
}
fn ensure_default_children_for_schema_types(
map: &mut serde_json::Map<String, JsonValue>,
schema_types: &[JsonSchemaPrimitiveType],
) {
if schema_types.contains(&JsonSchemaPrimitiveType::Object) && !map.contains_key("properties") {
map.insert(
"properties".to_string(),
JsonValue::Object(serde_json::Map::new()),
);
}
if schema_types.contains(&JsonSchemaPrimitiveType::Array) && !map.contains_key("items") {
map.insert("items".to_string(), json!({ "type": "string" }));
}
}
fn normalized_schema_types(
map: &serde_json::Map<String, JsonValue>,
) -> Vec<JsonSchemaPrimitiveType> {
let Some(schema_type) = map.get("type") else {
return Vec::new();
};
match schema_type {
JsonValue::String(schema_type) => schema_type_from_str(schema_type).into_iter().collect(),
JsonValue::Array(schema_types) => schema_types
.iter()
.filter_map(JsonValue::as_str)
.filter_map(schema_type_from_str)
.collect(),
_ => Vec::new(),
}
}
fn write_schema_types(
map: &mut serde_json::Map<String, JsonValue>,
schema_types: &[JsonSchemaPrimitiveType],
) {
match schema_types {
[] => {
map.remove("type");
}
[schema_type] => {
map.insert(
"type".to_string(),
JsonValue::String(schema_type_name(*schema_type).to_string()),
);
}
_ => {
map.insert(
"type".to_string(),
JsonValue::Array(
schema_types
.iter()
.map(|schema_type| {
JsonValue::String(schema_type_name(*schema_type).to_string())
})
.collect(),
),
);
}
}
}
fn schema_type_from_str(schema_type: &str) -> Option<JsonSchemaPrimitiveType> {
match schema_type {
"string" => Some(JsonSchemaPrimitiveType::String),
"number" => Some(JsonSchemaPrimitiveType::Number),
"boolean" => Some(JsonSchemaPrimitiveType::Boolean),
"integer" => Some(JsonSchemaPrimitiveType::Integer),
"object" => Some(JsonSchemaPrimitiveType::Object),
"array" => Some(JsonSchemaPrimitiveType::Array),
"null" => Some(JsonSchemaPrimitiveType::Null),
_ => None,
}
}
fn schema_type_name(schema_type: JsonSchemaPrimitiveType) -> &'static str {
match schema_type {
JsonSchemaPrimitiveType::String => "string",
JsonSchemaPrimitiveType::Number => "number",
JsonSchemaPrimitiveType::Boolean => "boolean",
JsonSchemaPrimitiveType::Integer => "integer",
JsonSchemaPrimitiveType::Object => "object",
JsonSchemaPrimitiveType::Array => "array",
JsonSchemaPrimitiveType::Null => "null",
}
}
fn singleton_null_schema_error() -> serde_json::Error {
serde_json::Error::io(std::io::Error::new(
std::io::ErrorKind::InvalidInput,
"tool input schema must not be a singleton null type",
))
}
#[cfg(test)]
#[path = "json_schema_tests.rs"]
mod tests;

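The new `schema_type_from_str` / `schema_type_name` helpers above are intended to round-trip every primitive the struct supports, while legacy pseudo-types like `"enum"` and `"const"` fall through to `None` and are handled separately by the sanitizer. A standalone sketch (a local re-implementation using only the standard library, not the crate's actual items) makes that property checkable:

```rust
// Local stand-ins for JsonSchemaPrimitiveType and the two name-mapping
// helpers from the diff above; names here are illustrative only.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Primitive {
    String,
    Number,
    Boolean,
    Integer,
    Object,
    Array,
    Null,
}

fn primitive_from_str(s: &str) -> Option<Primitive> {
    match s {
        "string" => Some(Primitive::String),
        "number" => Some(Primitive::Number),
        "boolean" => Some(Primitive::Boolean),
        "integer" => Some(Primitive::Integer),
        "object" => Some(Primitive::Object),
        "array" => Some(Primitive::Array),
        "null" => Some(Primitive::Null),
        _ => None,
    }
}

fn primitive_name(p: Primitive) -> &'static str {
    match p {
        Primitive::String => "string",
        Primitive::Number => "number",
        Primitive::Boolean => "boolean",
        Primitive::Integer => "integer",
        Primitive::Object => "object",
        Primitive::Array => "array",
        Primitive::Null => "null",
    }
}

fn main() {
    // Every supported type name survives a from_str -> name round trip.
    for name in ["string", "number", "boolean", "integer", "object", "array", "null"] {
        let p = primitive_from_str(name).expect("known type name");
        assert_eq!(primitive_name(p), name);
    }
    // Legacy pseudo-types are not primitives; the sanitizer instead
    // rewrites `const` into a single-value `enum`.
    assert_eq!(primitive_from_str("enum"), None);
    assert_eq!(primitive_from_str("const"), None);
    println!("round-trip ok");
}
```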

@@ -1,5 +1,7 @@
use super::AdditionalProperties;
use super::JsonSchema;
use super::JsonSchemaPrimitiveType;
use super::JsonSchemaType;
use super::parse_tool_input_schema;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -18,7 +20,7 @@ fn parse_tool_input_schema_coerces_boolean_schemas() {
// semantics directly.
let schema = parse_tool_input_schema(&serde_json::json!(true)).expect("parse schema");
assert_eq!(schema, JsonSchema::String { description: None });
assert_eq!(schema, JsonSchema::string(/*description*/ None));
}
#[test]
@@ -42,21 +44,19 @@ fn parse_tool_input_schema_infers_object_shape_and_defaults_properties() {
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([(
JsonSchema::object(
BTreeMap::from([(
"query".to_string(),
JsonSchema::String {
description: Some("search query".to_string()),
},
JsonSchema::string(Some("search query".to_string())),
)]),
required: None,
additional_properties: None,
}
/*required*/ None,
/*additional_properties*/ None
)
);
}
#[test]
fn parse_tool_input_schema_normalizes_integer_and_missing_array_items() {
fn parse_tool_input_schema_preserves_integer_and_defaults_array_items() {
// Example schema shape:
// {
// "type": "object",
@@ -67,8 +67,7 @@ fn parse_tool_input_schema_normalizes_integer_and_missing_array_items() {
// }
//
// Expected normalization behavior:
// - `"integer"` is accepted by the baseline model through the legacy
// number/integer alias.
// - `"integer"` is preserved distinctly from `"number"`.
// - Arrays missing `items` receive a permissive string `items` schema.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
@@ -81,20 +80,23 @@ fn parse_tool_input_schema_normalizes_integer_and_missing_array_items() {
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([
("page".to_string(), JsonSchema::Number { description: None }),
JsonSchema::object(
BTreeMap::from([
(
"page".to_string(),
JsonSchema::integer(/*description*/ None),
),
(
"tags".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: None,
},
JsonSchema::array(
JsonSchema::string(/*description*/ None),
/*description*/ None,
)
),
]),
required: None,
additional_properties: None,
}
/*required*/ None,
/*additional_properties*/ None
)
);
}
@@ -118,9 +120,7 @@ fn parse_tool_input_schema_sanitizes_additional_properties_schema() {
//
// Expected normalization behavior:
// - `additionalProperties` schema objects are recursively sanitized.
// - The nested schema is normalized into the baseline object form.
// - In the baseline model, the nested `anyOf` degrades to a plain string
// field because richer combiners are not preserved.
// - The nested schema is normalized into the current object/anyOf form.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"additionalProperties": {
@@ -134,20 +134,24 @@ fn parse_tool_input_schema_sanitizes_additional_properties_schema() {
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(AdditionalProperties::Schema(Box::new(
JsonSchema::Object {
properties: BTreeMap::from([(
"value".to_string(),
JsonSchema::String { description: None },
)]),
required: Some(vec!["value".to_string()]),
additional_properties: None,
},
))),
}
JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
Some(AdditionalProperties::Schema(Box::new(JsonSchema::object(
BTreeMap::from([(
"value".to_string(),
JsonSchema::any_of(
vec![
JsonSchema::string(/*description*/ None),
JsonSchema::number(/*description*/ None),
],
/*description*/ None,
),
)]),
Some(vec!["value".to_string()]),
/*additional_properties*/ None,
))))
)
);
}
@@ -168,11 +172,7 @@ fn parse_tool_input_schema_infers_object_shape_from_boolean_additional_propertie
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(false.into()),
}
JsonSchema::object(BTreeMap::new(), /*required*/ None, Some(false.into()))
);
}
@@ -191,7 +191,7 @@ fn parse_tool_input_schema_infers_number_from_numeric_keywords() {
}))
.expect("parse schema");
assert_eq!(schema, JsonSchema::Number { description: None });
assert_eq!(schema, JsonSchema::number(/*description*/ None));
}
#[test]
@@ -209,7 +209,7 @@ fn parse_tool_input_schema_infers_number_from_multiple_of() {
}))
.expect("parse schema");
assert_eq!(schema, JsonSchema::Number { description: None });
assert_eq!(schema, JsonSchema::number(/*description*/ None));
}
#[test]
@@ -220,7 +220,8 @@ fn parse_tool_input_schema_infers_string_from_enum_const_and_format_keywords() {
// { "format": "date-time" }
//
// Expected normalization behavior:
// - Each of these keywords implies a string schema when `type` is omitted.
// - `enum` and `const` normalize into explicit string-enum schemas.
// - `format` still falls back to a plain string schema.
let enum_schema = parse_tool_input_schema(&serde_json::json!({
"enum": ["fast", "safe"]
}))
@@ -234,9 +235,18 @@ fn parse_tool_input_schema_infers_string_from_enum_const_and_format_keywords() {
}))
.expect("parse format schema");
assert_eq!(enum_schema, JsonSchema::String { description: None });
assert_eq!(const_schema, JsonSchema::String { description: None });
assert_eq!(format_schema, JsonSchema::String { description: None });
assert_eq!(
enum_schema,
JsonSchema::string_enum(
vec![serde_json::json!("fast"), serde_json::json!("safe")],
/*description*/ None,
)
);
assert_eq!(
const_schema,
JsonSchema::string_enum(vec![serde_json::json!("file")], /*description*/ None)
);
assert_eq!(format_schema, JsonSchema::string(/*description*/ None));
}
#[test]
@@ -245,11 +255,11 @@ fn parse_tool_input_schema_defaults_empty_schema_to_string() {
// {}
//
// Expected normalization behavior:
// - With no structural hints at all, the baseline normalizer falls back to
// a permissive string schema.
// - With no structural hints at all, the normalizer falls back to a
// permissive string schema.
let schema = parse_tool_input_schema(&serde_json::json!({})).expect("parse schema");
assert_eq!(schema, JsonSchema::String { description: None });
assert_eq!(schema, JsonSchema::string(/*description*/ None));
}
#[test]
@@ -263,8 +273,8 @@ fn parse_tool_input_schema_infers_array_from_prefix_items() {
//
// Expected normalization behavior:
// - `prefixItems` implies an array schema when `type` is omitted.
// - The baseline model still stores the normalized result as a regular
// array schema with string items.
// - The normalized result is stored as a regular array schema with string
// items.
let schema = parse_tool_input_schema(&serde_json::json!({
"prefixItems": [
{"type": "string"}
@@ -274,10 +284,10 @@ fn parse_tool_input_schema_infers_array_from_prefix_items() {
assert_eq!(
schema,
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: None,
}
JsonSchema::array(
JsonSchema::string(/*description*/ None),
/*description*/ None,
)
);
}
@@ -309,18 +319,14 @@ fn parse_tool_input_schema_preserves_boolean_additional_properties_on_inferred_o
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([(
JsonSchema::object(
BTreeMap::from([(
"metadata".to_string(),
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(AdditionalProperties::Boolean(true)),
},
JsonSchema::object(BTreeMap::new(), /*required*/ None, Some(true.into())),
)]),
required: None,
additional_properties: None,
}
/*required*/ None,
/*additional_properties*/ None
)
);
}
@@ -347,22 +353,202 @@ fn parse_tool_input_schema_infers_object_shape_from_schema_additional_properties
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(AdditionalProperties::Schema(Box::new(
JsonSchema::String { description: None },
))),
JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
Some(JsonSchema::string(/*description*/ None).into())
)
);
}
#[test]
fn parse_tool_input_schema_rewrites_const_to_single_value_enum() {
// Example schema shape:
// {
// "const": "tagged"
// }
//
// Expected normalization behavior:
// - `const` is rewritten through the sanitizer's `map.remove("const")`
// path into an equivalent single-value string enum schema.
let schema = parse_tool_input_schema(&serde_json::json!({
"const": "tagged"
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::string_enum(vec![serde_json::json!("tagged")], /*description*/ None)
);
}
#[test]
fn parse_tool_input_schema_rejects_singleton_null_type() {
let err = parse_tool_input_schema(&serde_json::json!({
"type": "null"
}))
.expect_err("singleton null should be rejected");
assert!(
err.to_string()
.contains("tool input schema must not be a singleton null type"),
"unexpected error: {err}"
);
}
#[test]
fn parse_tool_input_schema_fills_default_properties_for_nullable_object_union() {
// Example schema shape:
// {
// "type": ["object", "null"]
// }
//
// Expected normalization behavior:
// - The full union is preserved.
// - Object members of the union still receive default `properties`.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": ["object", "null"]
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema {
schema_type: Some(JsonSchemaType::Multiple(vec![
JsonSchemaPrimitiveType::Object,
JsonSchemaPrimitiveType::Null,
])),
properties: Some(BTreeMap::new()),
..Default::default()
}
);
}
#[test]
fn parse_tool_input_schema_fills_default_items_for_nullable_array_union() {
// Example schema shape:
// {
// "type": ["array", "null"]
// }
//
// Expected normalization behavior:
// - The full union is preserved.
// - Array members of the union still receive default `items`.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": ["array", "null"]
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema {
schema_type: Some(JsonSchemaType::Multiple(vec![
JsonSchemaPrimitiveType::Array,
JsonSchemaPrimitiveType::Null,
])),
items: Some(Box::new(JsonSchema::string(/*description*/ None))),
..Default::default()
}
);
}
// Schemas that should be preserved for Responses API compatibility rather than
// being rewritten into a different shape. These currently fail on the baseline
// normalizer and are the intended signal for the new JsonSchema work.
// being rewritten into a different shape.
#[test]
fn parse_tool_input_schema_preserves_nested_nullable_any_of_shape() {
// Example schema shape:
// {
// "type": "object",
// "properties": {
// "open": {
// "anyOf": [
// {
// "type": "array",
// "items": {
// "type": "object",
// "properties": {
// "ref_id": { "type": "string" },
// "lineno": { "anyOf": [{ "type": "integer" }, { "type": "null" }] }
// },
// "required": ["ref_id"],
// "additionalProperties": false
// }
// },
// { "type": "null" }
// ]
// }
// }
// }
//
// Expected normalization behavior:
// - Nested nullable `anyOf` shapes are preserved all the way down.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"properties": {
"open": {
"anyOf": [
{
"type": "array",
"items": {
"type": "object",
"properties": {
"ref_id": {"type": "string"},
"lineno": {"anyOf": [{"type": "integer"}, {"type": "null"}]}
},
"required": ["ref_id"],
"additionalProperties": false
}
},
{"type": "null"}
]
}
}
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::object(
BTreeMap::from([(
"open".to_string(),
JsonSchema::any_of(
vec![
JsonSchema::array(
JsonSchema::object(
BTreeMap::from([
(
"lineno".to_string(),
JsonSchema::any_of(
vec![
JsonSchema::integer(/*description*/ None),
JsonSchema::null(/*description*/ None),
],
/*description*/ None,
),
),
(
"ref_id".to_string(),
JsonSchema::string(/*description*/ None),
),
]),
Some(vec!["ref_id".to_string()]),
Some(false.into()),
),
/*description*/ None,
),
JsonSchema::null(/*description*/ None),
],
/*description*/ None,
),
),]),
/*required*/ None,
/*additional_properties*/ None
)
);
}
#[test]
#[ignore = "Expected to pass after the new JsonSchema preserves nullable type unions"]
fn parse_tool_input_schema_preserves_nested_nullable_type_union() {
// Example schema shape:
// {
@@ -395,23 +581,25 @@ fn parse_tool_input_schema_preserves_nested_nullable_type_union() {
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([(
JsonSchema::object(
BTreeMap::from([(
"nickname".to_string(),
serde_json::from_value(serde_json::json!({
"type": ["string", "null"],
"description": "Optional nickname"
}))
.expect("nested nullable schema"),
JsonSchema {
schema_type: Some(JsonSchemaType::Multiple(vec![
JsonSchemaPrimitiveType::String,
JsonSchemaPrimitiveType::Null,
])),
description: Some("Optional nickname".to_string()),
..Default::default()
},
)]),
required: Some(vec!["nickname".to_string()]),
additional_properties: Some(false.into()),
}
Some(vec!["nickname".to_string()]),
Some(false.into()),
)
);
}
#[test]
#[ignore = "Expected to pass after the new JsonSchema preserves nested anyOf schemas"]
fn parse_tool_input_schema_preserves_nested_any_of_property() {
// Example schema shape:
// {
@@ -444,19 +632,155 @@ fn parse_tool_input_schema_preserves_nested_any_of_property() {
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([(
JsonSchema::object(
BTreeMap::from([(
"query".to_string(),
serde_json::from_value(serde_json::json!({
"anyOf": [
{ "type": "string" },
{ "type": "number" }
]
}))
.expect("nested anyOf schema"),
JsonSchema::any_of(
vec![
JsonSchema::string(/*description*/ None),
JsonSchema::number(/*description*/ None),
],
/*description*/ None,
),
)]),
required: None,
additional_properties: None,
/*required*/ None,
/*additional_properties*/ None
)
);
}
#[test]
fn parse_tool_input_schema_preserves_type_unions_without_rewriting_to_any_of() {
// Example schema shape:
// {
// "type": ["string", "null"],
// "description": "optional string"
// }
//
// Expected normalization behavior:
// - Explicit type unions are preserved as unions rather than rewritten to
// `anyOf`.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": ["string", "null"],
"description": "optional string"
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema {
schema_type: Some(JsonSchemaType::Multiple(vec![
JsonSchemaPrimitiveType::String,
JsonSchemaPrimitiveType::Null,
])),
description: Some("optional string".to_string()),
..Default::default()
}
);
}
#[test]
fn parse_tool_input_schema_preserves_explicit_enum_type_union() {
// Example schema shape:
// {
// "type": ["string", "null"],
// "enum": ["short", "medium", "long"],
// "description": "optional response length"
// }
//
// Expected normalization behavior:
// - The explicit string/null union is preserved alongside the enum values.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": ["string", "null"],
"enum": ["short", "medium", "long"],
"description": "optional response length"
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema {
schema_type: Some(JsonSchemaType::Multiple(vec![
JsonSchemaPrimitiveType::String,
JsonSchemaPrimitiveType::Null,
])),
description: Some("optional response length".to_string()),
enum_values: Some(vec![
serde_json::json!("short"),
serde_json::json!("medium"),
serde_json::json!("long"),
]),
..Default::default()
}
);
}
#[test]
fn parse_tool_input_schema_preserves_string_enum_constraints() {
// Example schema shape:
// {
// "type": "object",
// "properties": {
// "response_length": { "type": "enum", "enum": ["short", "medium", "long"] },
// "kind": { "type": "const", "const": "tagged" },
// "scope": { "type": "enum", "enum": ["one", "two"] }
// }
// }
//
// Expected normalization behavior:
// - Legacy `type: "enum"` and `type: "const"` inputs are normalized into
// the current string-enum representation.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"properties": {
"response_length": {
"type": "enum",
"enum": ["short", "medium", "long"]
},
"kind": {
"type": "const",
"const": "tagged"
},
"scope": {
"type": "enum",
"enum": ["one", "two"]
}
}
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::object(
BTreeMap::from([
(
"kind".to_string(),
JsonSchema::string_enum(
vec![serde_json::json!("tagged")],
/*description*/ None,
),
),
(
"response_length".to_string(),
JsonSchema::string_enum(
vec![
serde_json::json!("short"),
serde_json::json!("medium"),
serde_json::json!("long"),
],
/*description*/ None,
),
),
(
"scope".to_string(),
JsonSchema::string_enum(
vec![serde_json::json!("one"), serde_json::json!("two")],
/*description*/ None,
),
),
]),
/*required*/ None,
/*additional_properties*/ None
)
);
}
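The union-preservation behavior these tests pin down hinges on the sanitizer's three-way `type` write-back: no recognized primitives leaves `type` absent, one primitive writes a single string, and several write an array rather than degrading to `anyOf`. A minimal string-based sketch of that branch (assumed behavior, not the crate's serde_json version):

```rust
// Sketch of write_schema_types' branching on the number of normalized
// primitives, operating on plain strings instead of serde_json values.
fn type_field(types: &[&str]) -> Option<String> {
    match types {
        // No recognizable primitives: leave `type` absent (e.g. bare anyOf).
        [] => None,
        // Exactly one primitive: a single JSON string.
        [only] => Some(format!("\"{only}\"")),
        // A union: a JSON array of strings, preserved as a type union
        // rather than rewritten to anyOf.
        many => {
            let joined = many
                .iter()
                .map(|t| format!("\"{t}\""))
                .collect::<Vec<_>>()
                .join(",");
            Some(format!("[{joined}]"))
        }
    }
}

fn main() {
    assert_eq!(type_field(&[]), None);
    assert_eq!(type_field(&["string"]), Some("\"string\"".to_string()));
    assert_eq!(
        type_field(&["object", "null"]),
        Some("[\"object\",\"null\"]".to_string())
    );
    println!("ok");
}
```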


@@ -55,6 +55,8 @@ pub use js_repl_tool::create_js_repl_reset_tool;
pub use js_repl_tool::create_js_repl_tool;
pub use json_schema::AdditionalProperties;
pub use json_schema::JsonSchema;
pub use json_schema::JsonSchemaPrimitiveType;
pub use json_schema::JsonSchemaType;
pub use json_schema::parse_tool_input_schema;
pub use local_tool::CommandToolOptions;
pub use local_tool::ShellToolOptions;
@@ -112,6 +114,7 @@ pub use tool_discovery::filter_tool_suggest_discoverable_tools_for_client;
pub use tool_registry_plan::build_tool_registry_plan;
pub use tool_registry_plan_types::ToolHandlerKind;
pub use tool_registry_plan_types::ToolHandlerSpec;
pub use tool_registry_plan_types::ToolNamespace;
pub use tool_registry_plan_types::ToolRegistryPlan;
pub use tool_registry_plan_types::ToolRegistryPlanAppTool;
pub use tool_registry_plan_types::ToolRegistryPlanParams;


@@ -20,62 +20,47 @@ pub fn create_exec_command_tool(options: CommandToolOptions) -> ToolSpec {
let mut properties = BTreeMap::from([
(
"cmd".to_string(),
JsonSchema::String {
description: Some("Shell command to execute.".to_string()),
},
JsonSchema::string(Some("Shell command to execute.".to_string())),
),
(
"workdir".to_string(),
JsonSchema::String {
description: Some(
"Optional working directory to run the command in; defaults to the turn cwd."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional working directory to run the command in; defaults to the turn cwd."
.to_string(),
)),
),
(
"shell".to_string(),
JsonSchema::String {
description: Some(
"Shell binary to launch. Defaults to the user's default shell.".to_string(),
),
},
JsonSchema::string(Some(
"Shell binary to launch. Defaults to the user's default shell.".to_string(),
)),
),
(
"tty".to_string(),
JsonSchema::Boolean {
description: Some(
"Whether to allocate a TTY for the command. Defaults to false (plain pipes); set to true to open a PTY and access TTY process."
.to_string(),
),
},
JsonSchema::boolean(Some(
"Whether to allocate a TTY for the command. Defaults to false (plain pipes); set to true to open a PTY and access TTY process."
.to_string(),
)),
),
(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
"How long to wait (in milliseconds) for output before yielding.".to_string(),
),
},
JsonSchema::number(Some(
"How long to wait (in milliseconds) for output before yielding.".to_string(),
)),
),
(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some(
"Maximum number of tokens to return. Excess output will be truncated."
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum number of tokens to return. Excess output will be truncated.".to_string(),
)),
),
]);
if options.allow_login_shell {
properties.insert(
"login".to_string(),
JsonSchema::Boolean {
description: Some(
"Whether to run the shell with -l/-i semantics. Defaults to true.".to_string(),
),
},
JsonSchema::boolean(Some(
"Whether to run the shell with -l/-i semantics. Defaults to true.".to_string(),
)),
);
}
properties.extend(create_approval_parameters(
@@ -95,11 +80,11 @@ pub fn create_exec_command_tool(options: CommandToolOptions) -> ToolSpec {
},
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["cmd".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["cmd".to_string()]),
Some(false.into()),
),
output_schema: Some(unified_exec_output_schema()),
})
}
@@ -108,32 +93,27 @@ pub fn create_write_stdin_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"session_id".to_string(),
JsonSchema::Number {
description: Some("Identifier of the running unified exec session.".to_string()),
},
JsonSchema::number(Some(
"Identifier of the running unified exec session.".to_string(),
)),
),
(
"chars".to_string(),
JsonSchema::String {
description: Some("Bytes to write to stdin (may be empty to poll).".to_string()),
},
JsonSchema::string(Some(
"Bytes to write to stdin (may be empty to poll).".to_string(),
)),
),
(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
"How long to wait (in milliseconds) for output before yielding.".to_string(),
),
},
JsonSchema::number(Some(
"How long to wait (in milliseconds) for output before yielding.".to_string(),
)),
),
(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some(
"Maximum number of tokens to return. Excess output will be truncated."
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum number of tokens to return. Excess output will be truncated.".to_string(),
)),
),
]);
@@ -144,11 +124,11 @@ pub fn create_write_stdin_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["session_id".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["session_id".to_string()]),
Some(false.into()),
),
output_schema: Some(unified_exec_output_schema()),
})
}
@@ -157,22 +137,22 @@ pub fn create_shell_tool(options: ShellToolOptions) -> ToolSpec {
let mut properties = BTreeMap::from([
(
"command".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some("The command to execute".to_string()),
},
JsonSchema::array(
JsonSchema::string(/*description*/ None),
Some("The command to execute".to_string()),
),
),
(
"workdir".to_string(),
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
},
JsonSchema::string(Some(
"The working directory to execute the command in".to_string(),
)),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
JsonSchema::number(Some(
"The timeout for the command in milliseconds".to_string(),
)),
),
]);
properties.extend(create_approval_parameters(
@@ -207,11 +187,11 @@ Examples of valid command strings:
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["command".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["command".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}
@@ -220,34 +200,30 @@ pub fn create_shell_command_tool(options: CommandToolOptions) -> ToolSpec {
let mut properties = BTreeMap::from([
(
"command".to_string(),
JsonSchema::String {
description: Some(
"The shell script to execute in the user's default shell".to_string(),
),
},
JsonSchema::string(Some(
"The shell script to execute in the user's default shell".to_string(),
)),
),
(
"workdir".to_string(),
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
},
JsonSchema::string(Some(
"The working directory to execute the command in".to_string(),
)),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
JsonSchema::number(Some(
"The timeout for the command in milliseconds".to_string(),
)),
),
]);
if options.allow_login_shell {
properties.insert(
"login".to_string(),
JsonSchema::Boolean {
description: Some(
"Whether to run the shell with login shell semantics. Defaults to true."
.to_string(),
),
},
JsonSchema::boolean(Some(
"Whether to run the shell with login shell semantics. Defaults to true."
.to_string(),
)),
);
}
properties.extend(create_approval_parameters(
@@ -281,11 +257,11 @@ Examples of valid command strings:
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["command".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["command".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}
@@ -294,12 +270,9 @@ pub fn create_request_permissions_tool(description: String) -> ToolSpec {
let properties = BTreeMap::from([
(
"reason".to_string(),
JsonSchema::String {
description: Some(
"Optional short explanation for why additional permissions are needed."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional short explanation for why additional permissions are needed.".to_string(),
)),
),
("permissions".to_string(), permission_profile_schema()),
]);
@@ -309,11 +282,11 @@ pub fn create_request_permissions_tool(description: String) -> ToolSpec {
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["permissions".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["permissions".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}
@@ -363,40 +336,33 @@ fn create_approval_parameters(
let mut properties = BTreeMap::from([
(
"sandbox_permissions".to_string(),
JsonSchema::String {
description: Some(
if exec_permission_approvals_enabled {
"Sandbox permissions for the command. Use \"with_additional_permissions\" to request additional sandboxed filesystem or network permissions (preferred), or \"require_escalated\" to request running without sandbox restrictions; defaults to \"use_default\"."
} else {
"Sandbox permissions for the command. Set to \"require_escalated\" to request running without sandbox restrictions; defaults to \"use_default\"."
}
.to_string(),
),
},
JsonSchema::string(Some(
if exec_permission_approvals_enabled {
"Sandbox permissions for the command. Use \"with_additional_permissions\" to request additional sandboxed filesystem or network permissions (preferred), or \"require_escalated\" to request running without sandbox restrictions; defaults to \"use_default\"."
} else {
"Sandbox permissions for the command. Set to \"require_escalated\" to request running without sandbox restrictions; defaults to \"use_default\"."
}
.to_string(),
)),
),
(
"justification".to_string(),
JsonSchema::String {
description: Some(
r#"Only set if sandbox_permissions is \"require_escalated\".
JsonSchema::string(Some(
r#"Only set if sandbox_permissions is \"require_escalated\".
Request approval from the user to run this command outside the sandbox.
Phrased as a simple question that summarizes the purpose of the
command as it relates to the task at hand - e.g. 'Do you want to
fetch and pull the latest version of this git branch?'"#
.to_string(),
),
},
)),
),
(
"prefix_rule".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some(
JsonSchema::array(JsonSchema::string(/*description*/ None), Some(
r#"Only specify when sandbox_permissions is `require_escalated`.
Suggest a prefix command pattern that will allow you to fulfill similar requests from the user in the future.
Should be a short but reasonable prefix, e.g. [\"git\", \"pull\"] or [\"uv\", \"run\"] or [\"pytest\"]."#.to_string(),
),
},
)),
),
]);
@@ -411,50 +377,48 @@ fn create_approval_parameters(
}
fn permission_profile_schema() -> JsonSchema {
JsonSchema::Object {
properties: BTreeMap::from([
JsonSchema::object(
BTreeMap::from([
("network".to_string(), network_permissions_schema()),
("file_system".to_string(), file_system_permissions_schema()),
]),
required: None,
additional_properties: Some(false.into()),
}
/*required*/ None,
Some(false.into()),
)
}
fn network_permissions_schema() -> JsonSchema {
JsonSchema::Object {
properties: BTreeMap::from([(
JsonSchema::object(
BTreeMap::from([(
"enabled".to_string(),
JsonSchema::Boolean {
description: Some("Set to true to request network access.".to_string()),
},
JsonSchema::boolean(Some("Set to true to request network access.".to_string())),
)]),
required: None,
additional_properties: Some(false.into()),
}
/*required*/ None,
Some(false.into()),
)
}
fn file_system_permissions_schema() -> JsonSchema {
JsonSchema::Object {
properties: BTreeMap::from([
JsonSchema::object(
BTreeMap::from([
(
"read".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some("Absolute paths to grant read access to.".to_string()),
},
JsonSchema::array(
JsonSchema::string(/*description*/ None),
Some("Absolute paths to grant read access to.".to_string()),
),
),
(
"write".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some("Absolute paths to grant write access to.".to_string()),
},
JsonSchema::array(
JsonSchema::string(/*description*/ None),
Some("Absolute paths to grant write access to.".to_string()),
),
),
]),
required: None,
additional_properties: Some(false.into()),
}
/*required*/ None,
Some(false.into()),
)
}
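The hunks above all apply one mechanical refactor: `JsonSchema` struct literals become constructor-helper calls such as `JsonSchema::object(properties, required, additional_properties)`. A minimal standalone sketch of that pattern follows; the enum shape and helper signatures here are inferred from the diff, not the actual crate, and `additional_properties` is simplified to a plain `bool` (the real code calls `false.into()`, implying a richer type).

```rust
use std::collections::BTreeMap;

// Hypothetical mirror of the crate's JsonSchema enum (simplified).
#[derive(Debug, Clone, PartialEq)]
pub enum JsonSchema {
    String { description: Option<String> },
    Number { description: Option<String> },
    Boolean { description: Option<String> },
    Array { items: Box<JsonSchema>, description: Option<String> },
    Object {
        properties: BTreeMap<String, JsonSchema>,
        required: Option<Vec<String>>,
        additional_properties: Option<bool>,
    },
}

impl JsonSchema {
    // Constructor helpers so call sites no longer spell out every field.
    pub fn string(description: Option<String>) -> Self {
        JsonSchema::String { description }
    }
    pub fn number(description: Option<String>) -> Self {
        JsonSchema::Number { description }
    }
    pub fn boolean(description: Option<String>) -> Self {
        JsonSchema::Boolean { description }
    }
    pub fn array(items: JsonSchema, description: Option<String>) -> Self {
        JsonSchema::Array { items: Box::new(items), description }
    }
    pub fn object(
        properties: BTreeMap<String, JsonSchema>,
        required: Option<Vec<String>>,
        additional_properties: Option<bool>,
    ) -> Self {
        JsonSchema::Object { properties, required, additional_properties }
    }
}

fn main() {
    // The helper call and the struct literal build the same value,
    // which is why the refactor is behavior-preserving.
    let via_helper = JsonSchema::object(
        BTreeMap::from([(
            "command".to_string(),
            JsonSchema::string(Some("The command to execute".to_string())),
        )]),
        Some(vec!["command".to_string()]),
        Some(false),
    );
    let via_literal = JsonSchema::Object {
        properties: BTreeMap::from([(
            "command".to_string(),
            JsonSchema::String {
                description: Some("The command to execute".to_string()),
            },
        )]),
        required: Some(vec!["command".to_string()]),
        additional_properties: Some(false),
    };
    assert_eq!(via_helper, via_literal);
}
```

The payoff is visible throughout the diff: every call site shrinks from a five-or-six-line struct literal to a positional call, at the cost of losing field names at the call site (hence the `/*required*/ None`-style comments the new code adds).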
fn windows_destructive_filesystem_guidance() -> &'static str {

@@ -35,56 +35,42 @@ Examples of valid command strings:
let properties = BTreeMap::from([
(
"command".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some("The command to execute".to_string()),
},
JsonSchema::array(JsonSchema::string(/*description*/ None), Some("The command to execute".to_string())),
),
(
"workdir".to_string(),
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
},
JsonSchema::string(Some("The working directory to execute the command in".to_string())),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
JsonSchema::number(Some("The timeout for the command in milliseconds".to_string())),
),
(
"sandbox_permissions".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Sandbox permissions for the command. Set to \"require_escalated\" to request running without sandbox restrictions; defaults to \"use_default\"."
.to_string(),
),
},
)),
),
(
"justification".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
r#"Only set if sandbox_permissions is \"require_escalated\".
Request approval from the user to run this command outside the sandbox.
Phrased as a simple question that summarizes the purpose of the
command as it relates to the task at hand - e.g. 'Do you want to
fetch and pull the latest version of this git branch?'"#
.to_string(),
),
},
)),
),
(
"prefix_rule".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some(
JsonSchema::array(JsonSchema::string(/*description*/ None), Some(
r#"Only specify when sandbox_permissions is `require_escalated`.
Suggest a prefix command pattern that will allow you to fulfill similar requests from the user in the future.
Should be a short but reasonable prefix, e.g. [\"git\", \"pull\"] or [\"uv\", \"run\"] or [\"pytest\"]."#
.to_string(),
),
},
)),
),
]);
@@ -95,11 +81,11 @@ Examples of valid command strings:
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["command".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["command".to_string()]),
Some(false.into())
),
output_schema: None,
})
);
@@ -125,60 +111,46 @@ fn exec_command_tool_matches_expected_spec() {
let mut properties = BTreeMap::from([
(
"cmd".to_string(),
JsonSchema::String {
description: Some("Shell command to execute.".to_string()),
},
JsonSchema::string(Some("Shell command to execute.".to_string())),
),
(
"workdir".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Optional working directory to run the command in; defaults to the turn cwd."
.to_string(),
),
},
)),
),
(
"shell".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Shell binary to launch. Defaults to the user's default shell.".to_string(),
),
},
)),
),
(
"tty".to_string(),
JsonSchema::Boolean {
description: Some(
JsonSchema::boolean(Some(
"Whether to allocate a TTY for the command. Defaults to false (plain pipes); set to true to open a PTY and access TTY process."
.to_string(),
),
},
)),
),
(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
JsonSchema::number(Some(
"How long to wait (in milliseconds) for output before yielding.".to_string(),
),
},
)),
),
(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some(
JsonSchema::number(Some(
"Maximum number of tokens to return. Excess output will be truncated."
.to_string(),
),
},
)),
),
(
"login".to_string(),
JsonSchema::Boolean {
description: Some(
JsonSchema::boolean(Some(
"Whether to run the shell with -l/-i semantics. Defaults to true.".to_string(),
),
},
)),
),
]);
properties.extend(create_approval_parameters(
@@ -192,11 +164,11 @@ fn exec_command_tool_matches_expected_spec() {
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["cmd".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["cmd".to_string()]),
Some(false.into())
),
output_schema: Some(unified_exec_output_schema()),
})
);
@@ -209,32 +181,27 @@ fn write_stdin_tool_matches_expected_spec() {
let properties = BTreeMap::from([
(
"session_id".to_string(),
JsonSchema::Number {
description: Some("Identifier of the running unified exec session.".to_string()),
},
JsonSchema::number(Some(
"Identifier of the running unified exec session.".to_string(),
)),
),
(
"chars".to_string(),
JsonSchema::String {
description: Some("Bytes to write to stdin (may be empty to poll).".to_string()),
},
JsonSchema::string(Some(
"Bytes to write to stdin (may be empty to poll).".to_string(),
)),
),
(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
"How long to wait (in milliseconds) for output before yielding.".to_string(),
),
},
JsonSchema::number(Some(
"How long to wait (in milliseconds) for output before yielding.".to_string(),
)),
),
(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some(
"Maximum number of tokens to return. Excess output will be truncated."
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum number of tokens to return. Excess output will be truncated.".to_string(),
)),
),
]);
@@ -247,11 +214,11 @@ fn write_stdin_tool_matches_expected_spec() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["session_id".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["session_id".to_string()]),
Some(false.into())
),
output_schema: Some(unified_exec_output_schema()),
})
);
@@ -266,22 +233,22 @@ fn shell_tool_with_request_permission_includes_additional_permissions() {
let mut properties = BTreeMap::from([
(
"command".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some("The command to execute".to_string()),
},
JsonSchema::array(
JsonSchema::string(/*description*/ None),
Some("The command to execute".to_string()),
),
),
(
"workdir".to_string(),
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
},
JsonSchema::string(Some(
"The working directory to execute the command in".to_string(),
)),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
JsonSchema::number(Some(
"The timeout for the command in milliseconds".to_string(),
)),
),
]);
properties.extend(create_approval_parameters(
@@ -318,11 +285,11 @@ Examples of valid command strings:
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["command".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["command".to_string()]),
Some(false.into())
),
output_schema: None,
})
);
@@ -336,12 +303,9 @@ fn request_permissions_tool_includes_full_permission_schema() {
let properties = BTreeMap::from([
(
"reason".to_string(),
JsonSchema::String {
description: Some(
"Optional short explanation for why additional permissions are needed."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional short explanation for why additional permissions are needed.".to_string(),
)),
),
("permissions".to_string(), permission_profile_schema()),
]);
@@ -353,11 +317,11 @@ fn request_permissions_tool_includes_full_permission_schema() {
description: "Request extra permissions for this turn.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["permissions".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["permissions".to_string()]),
Some(false.into())
),
output_schema: None,
})
);
@@ -392,32 +356,28 @@ Examples of valid command strings:
let mut properties = BTreeMap::from([
(
"command".to_string(),
JsonSchema::String {
description: Some(
"The shell script to execute in the user's default shell".to_string(),
),
},
JsonSchema::string(Some(
"The shell script to execute in the user's default shell".to_string(),
)),
),
(
"workdir".to_string(),
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
},
JsonSchema::string(Some(
"The working directory to execute the command in".to_string(),
)),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
JsonSchema::number(Some(
"The timeout for the command in milliseconds".to_string(),
)),
),
(
"login".to_string(),
JsonSchema::Boolean {
description: Some(
"Whether to run the shell with login shell semantics. Defaults to true."
.to_string(),
),
},
JsonSchema::boolean(Some(
"Whether to run the shell with login shell semantics. Defaults to true."
.to_string(),
)),
),
]);
properties.extend(create_approval_parameters(
@@ -431,11 +391,11 @@ Examples of valid command strings:
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["command".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["command".to_string()]),
Some(false.into())
),
output_schema: None,
})
);

@@ -7,21 +7,17 @@ pub fn create_list_mcp_resources_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"server".to_string(),
JsonSchema::String {
description: Some(
"Optional MCP server name. When omitted, lists resources from every configured server."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional MCP server name. When omitted, lists resources from every configured server."
.to_string(),
)),
),
(
"cursor".to_string(),
JsonSchema::String {
description: Some(
"Opaque cursor returned by a previous list_mcp_resources call for the same server."
.to_string(),
),
},
JsonSchema::string(Some(
"Opaque cursor returned by a previous list_mcp_resources call for the same server."
.to_string(),
)),
),
]);
@@ -30,11 +26,7 @@ pub fn create_list_mcp_resources_tool() -> ToolSpec {
description: "Lists resources provided by MCP servers. Resources allow servers to share data that provides context to language models, such as files, database schemas, or application-specific information. Prefer resources over web search when possible.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: None,
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, /*required*/ None, Some(false.into())),
output_schema: None,
})
}
@@ -43,21 +35,17 @@ pub fn create_list_mcp_resource_templates_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"server".to_string(),
JsonSchema::String {
description: Some(
"Optional MCP server name. When omitted, lists resource templates from all configured servers."
.to_string(),
),
},
JsonSchema::string(Some(
"Optional MCP server name. When omitted, lists resource templates from all configured servers."
.to_string(),
)),
),
(
"cursor".to_string(),
JsonSchema::String {
description: Some(
"Opaque cursor returned by a previous list_mcp_resource_templates call for the same server."
.to_string(),
),
},
JsonSchema::string(Some(
"Opaque cursor returned by a previous list_mcp_resource_templates call for the same server."
.to_string(),
)),
),
]);
@@ -66,11 +54,7 @@ pub fn create_list_mcp_resource_templates_tool() -> ToolSpec {
description: "Lists resource templates provided by MCP servers. Parameterized resource templates allow servers to share data that takes parameters and provides context to language models, such as files, database schemas, or application-specific information. Prefer resource templates over web search when possible.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: None,
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, /*required*/ None, Some(false.into())),
output_schema: None,
})
}
@@ -79,21 +63,17 @@ pub fn create_read_mcp_resource_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"server".to_string(),
JsonSchema::String {
description: Some(
"MCP server name exactly as configured. Must match the 'server' field returned by list_mcp_resources."
.to_string(),
),
},
JsonSchema::string(Some(
"MCP server name exactly as configured. Must match the 'server' field returned by list_mcp_resources."
.to_string(),
)),
),
(
"uri".to_string(),
JsonSchema::String {
description: Some(
"Resource URI to read. Must be one of the URIs returned by list_mcp_resources."
.to_string(),
),
},
JsonSchema::string(Some(
"Resource URI to read. Must be one of the URIs returned by list_mcp_resources."
.to_string(),
)),
),
]);
@@ -104,11 +84,11 @@ pub fn create_read_mcp_resource_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["server".to_string(), "uri".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["server".to_string(), "uri".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}

@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -11,30 +12,22 @@ fn list_mcp_resources_tool_matches_expected_spec() {
description: "Lists resources provided by MCP servers. Resources allow servers to share data that provides context to language models, such as files, database schemas, or application-specific information. Prefer resources over web search when possible.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"server".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Optional MCP server name. When omitted, lists resources from every configured server."
.to_string(),
),
},
)),
),
(
"cursor".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Opaque cursor returned by a previous list_mcp_resources call for the same server."
.to_string(),
),
},
)),
),
]),
required: None,
additional_properties: Some(false.into()),
},
]), /*required*/ None, Some(false.into())),
output_schema: None,
})
);
@@ -49,30 +42,22 @@ fn list_mcp_resource_templates_tool_matches_expected_spec() {
description: "Lists resource templates provided by MCP servers. Parameterized resource templates allow servers to share data that takes parameters and provides context to language models, such as files, database schemas, or application-specific information. Prefer resource templates over web search when possible.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"server".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Optional MCP server name. When omitted, lists resource templates from all configured servers."
.to_string(),
),
},
)),
),
(
"cursor".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Opaque cursor returned by a previous list_mcp_resource_templates call for the same server."
.to_string(),
),
},
)),
),
]),
required: None,
additional_properties: Some(false.into()),
},
]), /*required*/ None, Some(false.into())),
output_schema: None,
})
);
@@ -89,30 +74,22 @@ fn read_mcp_resource_tool_matches_expected_spec() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"server".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"MCP server name exactly as configured. Must match the 'server' field returned by list_mcp_resources."
.to_string(),
),
},
)),
),
(
"uri".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Resource URI to read. Must be one of the URIs returned by list_mcp_resources."
.to_string(),
),
},
)),
),
]),
required: Some(vec!["server".to_string(), "uri".to_string()]),
additional_properties: Some(false.into()),
},
]), Some(vec!["server".to_string(), "uri".to_string()]), Some(false.into())),
output_schema: None,
})
);

@@ -34,11 +34,11 @@ fn parse_mcp_tool_inserts_empty_properties() {
ToolDefinition {
name: "no_props".to_string(),
description: "No properties".to_string(),
input_schema: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
input_schema: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({}))),
defer_loading: false,
}
@@ -72,11 +72,11 @@ fn parse_mcp_tool_preserves_top_level_output_schema() {
ToolDefinition {
name: "with_output".to_string(),
description: "Has output schema".to_string(),
input_schema: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
input_schema: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({
"properties": {
"result": {
@@ -112,11 +112,11 @@ fn parse_mcp_tool_preserves_output_schema_without_inferred_type() {
ToolDefinition {
name: "with_enum_output".to_string(),
description: "Has enum output schema".to_string(),
input_schema: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
input_schema: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({
"enum": ["ok", "error"]
}))),

@@ -5,30 +5,28 @@ use std::collections::BTreeMap;
pub fn create_update_plan_tool() -> ToolSpec {
let plan_item_properties = BTreeMap::from([
("step".to_string(), JsonSchema::String { description: None }),
("step".to_string(), JsonSchema::string(/*description*/ None)),
(
"status".to_string(),
JsonSchema::String {
description: Some("One of: pending, in_progress, completed".to_string()),
},
JsonSchema::string(Some("One of: pending, in_progress, completed".to_string())),
),
]);
let properties = BTreeMap::from([
(
"explanation".to_string(),
JsonSchema::String { description: None },
JsonSchema::string(/*description*/ None),
),
(
"plan".to_string(),
JsonSchema::Array {
description: Some("The list of steps".to_string()),
items: Box::new(JsonSchema::Object {
properties: plan_item_properties,
required: Some(vec!["step".to_string(), "status".to_string()]),
additional_properties: Some(false.into()),
}),
},
JsonSchema::array(
JsonSchema::object(
plan_item_properties,
Some(vec!["step".to_string(), "status".to_string()]),
Some(false.into()),
),
Some("The list of steps".to_string()),
),
),
]);
@@ -41,11 +39,11 @@ At most one step can be in_progress at a time.
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["plan".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["plan".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}

@@ -12,71 +12,60 @@ pub fn create_request_user_input_tool(description: String) -> ToolSpec {
let option_props = BTreeMap::from([
(
"label".to_string(),
JsonSchema::String {
description: Some("User-facing label (1-5 words).".to_string()),
},
JsonSchema::string(Some("User-facing label (1-5 words).".to_string())),
),
(
"description".to_string(),
JsonSchema::String {
description: Some(
"One short sentence explaining impact/tradeoff if selected.".to_string(),
),
},
JsonSchema::string(Some(
"One short sentence explaining impact/tradeoff if selected.".to_string(),
)),
),
]);
let options_schema = JsonSchema::Array {
description: Some(
let options_schema = JsonSchema::array(JsonSchema::object(
option_props,
Some(vec!["label".to_string(), "description".to_string()]),
Some(false.into()),
), Some(
"Provide 2-3 mutually exclusive choices. Put the recommended option first and suffix its label with \"(Recommended)\". Do not include an \"Other\" option in this list; the client will add a free-form \"Other\" option automatically."
.to_string(),
),
items: Box::new(JsonSchema::Object {
properties: option_props,
required: Some(vec!["label".to_string(), "description".to_string()]),
additional_properties: Some(false.into()),
}),
};
));
let question_props = BTreeMap::from([
(
"id".to_string(),
JsonSchema::String {
description: Some(
"Stable identifier for mapping answers (snake_case).".to_string(),
),
},
JsonSchema::string(Some(
"Stable identifier for mapping answers (snake_case).".to_string(),
)),
),
(
"header".to_string(),
JsonSchema::String {
description: Some(
"Short header label shown in the UI (12 or fewer chars).".to_string(),
),
},
JsonSchema::string(Some(
"Short header label shown in the UI (12 or fewer chars).".to_string(),
)),
),
(
"question".to_string(),
JsonSchema::String {
description: Some("Single-sentence prompt shown to the user.".to_string()),
},
JsonSchema::string(Some(
"Single-sentence prompt shown to the user.".to_string(),
)),
),
("options".to_string(), options_schema),
]);
let questions_schema = JsonSchema::Array {
description: Some("Questions to show the user. Prefer 1 and do not exceed 3".to_string()),
items: Box::new(JsonSchema::Object {
properties: question_props,
required: Some(vec![
let questions_schema = JsonSchema::array(
JsonSchema::object(
question_props,
Some(vec![
"id".to_string(),
"header".to_string(),
"question".to_string(),
"options".to_string(),
]),
additional_properties: Some(false.into()),
}),
};
Some(false.into()),
),
Some("Questions to show the user. Prefer 1 and do not exceed 3".to_string()),
);
let properties = BTreeMap::from([("questions".to_string(), questions_schema)]);
@@ -85,11 +74,11 @@ pub fn create_request_user_input_tool(description: String) -> ToolSpec {
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["questions".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["questions".to_string()]),
Some(false.into()),
),
output_schema: None,
})
}

@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use codex_protocol::config_types::ModeKind;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -12,91 +13,77 @@ fn request_user_input_tool_includes_questions_schema() {
description: "Ask the user to choose.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(BTreeMap::from([(
"questions".to_string(),
JsonSchema::Array {
description: Some(
"Questions to show the user. Prefer 1 and do not exceed 3".to_string(),
),
items: Box::new(JsonSchema::Object {
properties: BTreeMap::from([
JsonSchema::array(
JsonSchema::object(
BTreeMap::from([
(
"header".to_string(),
JsonSchema::String {
description: Some(
"Short header label shown in the UI (12 or fewer chars)."
.to_string(),
),
},
JsonSchema::string(Some(
"Short header label shown in the UI (12 or fewer chars)."
.to_string(),
)),
),
(
"id".to_string(),
JsonSchema::String {
description: Some(
"Stable identifier for mapping answers (snake_case)."
.to_string(),
),
},
JsonSchema::string(Some(
"Stable identifier for mapping answers (snake_case)."
.to_string(),
)),
),
(
"options".to_string(),
JsonSchema::Array {
description: Some(
"Provide 2-3 mutually exclusive choices. Put the recommended option first and suffix its label with \"(Recommended)\". Do not include an \"Other\" option in this list; the client will add a free-form \"Other\" option automatically."
.to_string(),
),
items: Box::new(JsonSchema::Object {
properties: BTreeMap::from([
JsonSchema::array(
JsonSchema::object(
BTreeMap::from([
(
"description".to_string(),
JsonSchema::String {
description: Some(
"One short sentence explaining impact/tradeoff if selected."
.to_string(),
),
},
JsonSchema::string(Some(
"One short sentence explaining impact/tradeoff if selected."
.to_string(),
)),
),
(
"label".to_string(),
JsonSchema::String {
description: Some(
"User-facing label (1-5 words)."
.to_string(),
),
},
JsonSchema::string(Some(
"User-facing label (1-5 words)."
.to_string(),
)),
),
]),
required: Some(vec![
Some(vec![
"label".to_string(),
"description".to_string(),
]),
additional_properties: Some(false.into()),
}),
},
Some(false.into()),
),
Some(
"Provide 2-3 mutually exclusive choices. Put the recommended option first and suffix its label with \"(Recommended)\". Do not include an \"Other\" option in this list; the client will add a free-form \"Other\" option automatically."
.to_string(),
),
),
),
(
"question".to_string(),
JsonSchema::String {
description: Some(
"Single-sentence prompt shown to the user.".to_string(),
),
},
JsonSchema::string(Some(
"Single-sentence prompt shown to the user.".to_string(),
)),
),
]),
required: Some(vec![
Some(vec![
"id".to_string(),
"header".to_string(),
"question".to_string(),
"options".to_string(),
]),
additional_properties: Some(false.into()),
}),
},
)]),
required: Some(vec!["questions".to_string()]),
additional_properties: Some(false.into()),
},
Some(false.into()),
),
Some(
"Questions to show the user. Prefer 1 and do not exceed 3".to_string(),
),
),
)]), Some(vec!["questions".to_string()]), Some(false.into())),
output_schema: None,
})
);

@@ -38,6 +38,7 @@ pub struct ResponsesApiTool {
#[derive(Debug, Clone, Serialize, PartialEq)]
#[serde(tag = "type")]
#[allow(clippy::large_enum_variant)]
pub enum ToolSearchOutputTool {
#[allow(dead_code)]
#[serde(rename = "function")]

@@ -18,14 +18,14 @@ fn tool_definition_to_responses_api_tool_omits_false_defer_loading() {
tool_definition_to_responses_api_tool(ToolDefinition {
name: "lookup_order".to_string(),
description: "Look up an order".to_string(),
input_schema: JsonSchema::Object {
properties: BTreeMap::from([(
input_schema: JsonSchema::object(
BTreeMap::from([(
"order_id".to_string(),
JsonSchema::String { description: None },
JsonSchema::string(/*description*/ None),
)]),
required: Some(vec!["order_id".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["order_id".to_string()]),
Some(false.into())
),
output_schema: Some(json!({"type": "object"})),
defer_loading: false,
}),
@@ -34,14 +34,14 @@ fn tool_definition_to_responses_api_tool_omits_false_defer_loading() {
description: "Look up an order".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(
BTreeMap::from([(
"order_id".to_string(),
JsonSchema::String { description: None },
JsonSchema::string(/*description*/ None),
)]),
required: Some(vec!["order_id".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["order_id".to_string()]),
Some(false.into())
),
output_schema: Some(json!({"type": "object"})),
}
);
@@ -70,14 +70,14 @@ fn dynamic_tool_to_responses_api_tool_preserves_defer_loading() {
description: "Look up an order".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(
BTreeMap::from([(
"order_id".to_string(),
JsonSchema::String { description: None },
JsonSchema::string(/*description*/ None),
)]),
required: Some(vec!["order_id".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["order_id".to_string()]),
Some(false.into())
),
output_schema: None,
}
);
@@ -115,14 +115,10 @@ fn mcp_tool_to_deferred_responses_api_tool_sets_defer_loading() {
description: "Look up an order".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(BTreeMap::from([(
"order_id".to_string(),
JsonSchema::String { description: None },
)]),
required: Some(vec!["order_id".to_string()]),
additional_properties: Some(false.into()),
},
JsonSchema::string(/*description*/ None),
)]), Some(vec!["order_id".to_string()]), Some(false.into())),
output_schema: None,
}
);
@@ -138,11 +134,11 @@ fn tool_search_output_namespace_serializes_with_deferred_child_tools() {
description: "Create a calendar event.".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: Default::default(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
Default::default(),
/*required*/ None,
/*additional_properties*/ None,
),
output_schema: None,
})],
});
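The hunks above replace `JsonSchema::Object { … }` / `JsonSchema::String { … }` struct-literal construction with constructor helpers (`JsonSchema::object`, `JsonSchema::string`, `JsonSchema::number`). A minimal sketch of what such helpers could look like, assuming a single struct-shaped schema; the field set here is simplified from the diff (the real type also carries `any_of`, `items`, enum values, and an `AdditionalProperties` type, hence the `false.into()` calls):

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the crate's JsonSchema; field names are
// assumptions based on the diff, and additional_properties is reduced
// from AdditionalProperties to a plain bool for illustration.
#[derive(Debug, Clone, Default, PartialEq)]
pub struct JsonSchema {
    pub schema_type: Option<&'static str>,
    pub description: Option<String>,
    pub properties: Option<BTreeMap<String, JsonSchema>>,
    pub required: Option<Vec<String>>,
    pub additional_properties: Option<bool>,
}

impl JsonSchema {
    pub fn string(description: Option<String>) -> Self {
        JsonSchema { schema_type: Some("string"), description, ..Default::default() }
    }

    pub fn number(description: Option<String>) -> Self {
        JsonSchema { schema_type: Some("number"), description, ..Default::default() }
    }

    pub fn object(
        properties: BTreeMap<String, JsonSchema>,
        required: Option<Vec<String>>,
        additional_properties: Option<bool>,
    ) -> Self {
        JsonSchema {
            schema_type: Some("object"),
            properties: Some(properties),
            required,
            additional_properties,
            ..Default::default()
        }
    }
}

fn main() {
    // Mirrors the call shape used throughout the updated tests.
    let schema = JsonSchema::object(
        BTreeMap::from([("order_id".to_string(), JsonSchema::string(None))]),
        Some(vec!["order_id".to_string()]),
        Some(false),
    );
    assert_eq!(schema.schema_type, Some("object"));
    assert!(schema.properties.unwrap().contains_key("order_id"));
}
```

The helpers keep call sites positional, which is why the diff annotates bare `None` arguments with `/*required*/` and `/*additional_properties*/` comments.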

View File

@@ -7,11 +7,11 @@ fn tool_definition() -> ToolDefinition {
ToolDefinition {
name: "lookup_order".to_string(),
description: "Look up an order".to_string(),
input_schema: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
input_schema: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None,
),
output_schema: Some(serde_json::json!({
"type": "object",
})),

View File

@@ -147,17 +147,13 @@ pub fn create_tool_search_tool(app_tools: &[ToolSearchAppInfo], default_limit: u
let properties = BTreeMap::from([
(
"query".to_string(),
JsonSchema::String {
description: Some("Search query for apps tools.".to_string()),
},
JsonSchema::string(Some("Search query for apps tools.".to_string())),
),
(
"limit".to_string(),
JsonSchema::Number {
description: Some(format!(
"Maximum number of tools to return (defaults to {default_limit})."
)),
},
JsonSchema::number(Some(format!(
"Maximum number of tools to return (defaults to {default_limit})."
))),
),
]);
@@ -193,11 +189,11 @@ pub fn create_tool_search_tool(app_tools: &[ToolSearchAppInfo], default_limit: u
ToolSpec::ToolSearch {
execution: "client".to_string(),
description,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec!["query".to_string()]),
additional_properties: Some(false.into()),
},
Some(vec!["query".to_string()]),
Some(false.into()),
),
}
}
@@ -279,37 +275,29 @@ pub fn create_tool_suggest_tool(discoverable_tools: &[ToolSuggestEntry]) -> Tool
let properties = BTreeMap::from([
(
"tool_type".to_string(),
JsonSchema::String {
description: Some(
"Type of discoverable tool to suggest. Use \"connector\" or \"plugin\"."
.to_string(),
),
},
JsonSchema::string(Some(
"Type of discoverable tool to suggest. Use \"connector\" or \"plugin\"."
.to_string(),
)),
),
(
"action_type".to_string(),
JsonSchema::String {
description: Some(
"Suggested action for the tool. Use \"install\" or \"enable\".".to_string(),
),
},
JsonSchema::string(Some(
"Suggested action for the tool. Use \"install\" or \"enable\".".to_string(),
)),
),
(
"tool_id".to_string(),
JsonSchema::String {
description: Some(format!(
"Connector or plugin id to suggest. Must be one of: {discoverable_tool_ids}."
)),
},
JsonSchema::string(Some(format!(
"Connector or plugin id to suggest. Must be one of: {discoverable_tool_ids}."
))),
),
(
"suggest_reason".to_string(),
JsonSchema::String {
description: Some(
"Concise one-line user-facing reason why this tool can help with the current request."
.to_string(),
),
},
JsonSchema::string(Some(
"Concise one-line user-facing reason why this tool can help with the current request."
.to_string(),
)),
),
]);
@@ -323,16 +311,16 @@ pub fn create_tool_suggest_tool(discoverable_tools: &[ToolSuggestEntry]) -> Tool
description,
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
parameters: JsonSchema::object(
properties,
required: Some(vec![
Some(vec![
"tool_type".to_string(),
"action_type".to_string(),
"tool_id".to_string(),
"suggest_reason".to_string(),
]),
additional_properties: Some(false.into()),
},
Some(false.into()),
),
output_schema: None,
})
}

View File

@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use codex_app_server_protocol::AppInfo;
use pretty_assertions::assert_eq;
use rmcp::model::JsonObject;
@@ -50,27 +51,19 @@ fn create_tool_search_tool_deduplicates_and_renders_enabled_apps() {
ToolSpec::ToolSearch {
execution: "client".to_string(),
description: "# Apps (Connectors) tool discovery\n\nSearches over apps/connectors tool metadata with BM25 and exposes matching tools for the next model call.\n\nYou have access to all the tools of the following apps/connectors:\n- Google Drive: Use Google Drive as the single entrypoint for Drive, Docs, Sheets, and Slides work.\n- Slack\nSome of the tools may not have been provided to you upfront, and you should use this tool (`tool_search`) to search for the required tools and load them for the apps mentioned above. For the apps mentioned above, always use `tool_search` instead of `list_mcp_resources` or `list_mcp_resource_templates` for tool discovery.".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"limit".to_string(),
JsonSchema::Number {
description: Some(
JsonSchema::number(Some(
"Maximum number of tools to return (defaults to 8)."
.to_string(),
),
},
),),
),
(
"query".to_string(),
JsonSchema::String {
description: Some("Search query for apps tools.".to_string()),
},
JsonSchema::string(Some("Search query for apps tools.".to_string()),),
),
]),
required: Some(vec!["query".to_string()]),
additional_properties: Some(false.into()),
},
]), Some(vec!["query".to_string()]), Some(false.into())),
}
);
}
@@ -103,53 +96,41 @@ fn create_tool_suggest_tool_uses_plugin_summary_fallback() {
description: "# Tool suggestion discovery\n\nSuggests a missing connector in an installed plugin, or in narrower cases a not installed but discoverable plugin, when the user clearly wants a capability that is not currently available in the active `tools` list.\n\nUse this ONLY when:\n- You've already tried to find a matching available tool for the user's request but couldn't find a good match. This includes `tool_search` (if available) and other means.\n- For connectors/apps that are not installed but needed for an installed plugin, suggest to install them if the task requirements match precisely.\n- For plugins that are not installed but discoverable, only suggest discoverable and installable plugins when the user's intent very explicitly and unambiguously matches that plugin itself. Do not suggest a plugin just because one of its connectors or capabilities seems relevant.\n\nTool suggestions should only use the discoverable tools listed here. DO NOT explore or recommend tools that are not on this list.\n\nDiscoverable tools:\n- GitHub (id: `github`, type: plugin, action: install): skills; MCP servers: github-mcp; app connectors: github-app\n- Slack (id: `slack@openai-curated`, type: connector, action: install): No description provided.\n\nWorkflow:\n\n1. Ensure all possible means have been exhausted to find an existing available tool but none of them matches the request intent.\n2. Match the user's request against the discoverable tools list above. Apply the stricter explicit-and-unambiguous rule for *discoverable tools* like plugin install suggestions; *missing tools* like connector install suggestions continue to use the normal clear-fit standard.\n3. If one tool clearly fits, call `tool_suggest` with:\n - `tool_type`: `connector` or `plugin`\n - `action_type`: `install` or `enable`\n - `tool_id`: exact id from the discoverable tools list above\n - `suggest_reason`: concise one-line user-facing reason this tool can help with the current request\n4. After the suggestion flow completes:\n - if the user finished the install or enable flow, continue by searching again or using the newly available tool\n - if the user did not finish, continue without that tool, and don't suggest that tool again unless the user explicitly asks for it.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"action_type".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Suggested action for the tool. Use \"install\" or \"enable\"."
.to_string(),
),
},
),),
),
(
"suggest_reason".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Concise one-line user-facing reason why this tool can help with the current request."
.to_string(),
),
},
),),
),
(
"tool_id".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Connector or plugin id to suggest. Must be one of: slack@openai-curated, github."
.to_string(),
),
},
),),
),
(
"tool_type".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Type of discoverable tool to suggest. Use \"connector\" or \"plugin\"."
.to_string(),
),
},
),),
),
]),
required: Some(vec![
]), Some(vec![
"tool_type".to_string(),
"action_type".to_string(),
"tool_id".to_string(),
"suggest_reason".to_string(),
]),
additional_properties: Some(false.into()),
},
]), Some(false.into())),
output_schema: None,
})
);
@@ -198,11 +179,11 @@ fn collect_tool_search_output_tools_groups_results_by_namespace() {
description: "Create a calendar event.".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: Default::default(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
Default::default(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
}),
ResponsesApiNamespaceTool::Function(ResponsesApiTool {
@@ -210,11 +191,11 @@ fn collect_tool_search_output_tools_groups_results_by_namespace() {
description: "List calendar events.".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: Default::default(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
Default::default(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
}),
],
@@ -227,11 +208,11 @@ fn collect_tool_search_output_tools_groups_results_by_namespace() {
description: "Read an email.".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: Default::default(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
Default::default(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
})],
}),
@@ -262,11 +243,11 @@ fn collect_tool_search_output_tools_falls_back_to_connector_name_description() {
description: "Read multiple emails.".to_string(),
strict: false,
defer_loading: Some(true),
parameters: JsonSchema::Object {
properties: Default::default(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
Default::default(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
})],
})],

View File

@@ -61,6 +61,7 @@ use crate::tool_registry_plan_types::agent_type_description;
use codex_protocol::openai_models::ApplyPatchToolType;
use codex_protocol::openai_models::ConfigShellToolType;
use rmcp::model::Tool as McpTool;
use std::collections::BTreeMap;
pub fn build_tool_registry_plan(
config: &ToolsConfig,
@@ -70,6 +71,20 @@ pub fn build_tool_registry_plan(
let exec_permission_approvals_enabled = config.exec_permission_approvals_enabled;
if config.code_mode_enabled {
let namespace_descriptions = params
.tool_namespaces
.into_iter()
.flatten()
.map(|(name, detail)| {
(
name.clone(),
codex_code_mode::ToolNamespaceDescription {
name: detail.name.clone(),
description: detail.description.clone().unwrap_or_default(),
},
)
})
.collect::<BTreeMap<_, _>>();
let nested_config = config.for_code_mode_nested_tools();
let nested_plan = build_tool_registry_plan(
&nested_config,
@@ -78,7 +93,7 @@ pub fn build_tool_registry_plan(
..params
},
);
let enabled_tools = collect_code_mode_tool_definitions(
let mut enabled_tools = collect_code_mode_tool_definitions(
nested_plan
.specs
.iter()
@@ -87,8 +102,15 @@ pub fn build_tool_registry_plan(
.into_iter()
.map(|tool| (tool.name, tool.description))
.collect::<Vec<_>>();
enabled_tools.sort_by(|(left_name, _), (right_name, _)| {
compare_code_mode_tool_names(left_name, right_name, &namespace_descriptions)
});
plan.push_spec(
create_code_mode_tool(&enabled_tools, config.code_mode_only_enabled),
create_code_mode_tool(
&enabled_tools,
&namespace_descriptions,
config.code_mode_only_enabled,
),
/*supports_parallel_tool_calls*/ false,
config.code_mode_enabled,
);
@@ -494,6 +516,41 @@ pub fn build_tool_registry_plan(
plan
}
fn compare_code_mode_tool_names(
left_name: &str,
right_name: &str,
namespace_descriptions: &BTreeMap<String, codex_code_mode::ToolNamespaceDescription>,
) -> std::cmp::Ordering {
let left_namespace = code_mode_namespace_name(left_name, namespace_descriptions);
let right_namespace = code_mode_namespace_name(right_name, namespace_descriptions);
left_namespace
.cmp(&right_namespace)
.then_with(|| {
code_mode_function_name(left_name, left_namespace)
.cmp(code_mode_function_name(right_name, right_namespace))
})
.then_with(|| left_name.cmp(right_name))
}
fn code_mode_namespace_name<'a>(
name: &str,
namespace_descriptions: &'a BTreeMap<String, codex_code_mode::ToolNamespaceDescription>,
) -> Option<&'a str> {
namespace_descriptions
.get(name)
.map(|namespace_description| namespace_description.name.as_str())
}
fn code_mode_function_name<'a>(name: &'a str, namespace: Option<&str>) -> &'a str {
namespace
.and_then(|namespace| {
name.strip_prefix(namespace)
.and_then(|suffix| suffix.strip_prefix("__"))
})
.unwrap_or(name)
}
#[cfg(test)]
#[path = "tool_registry_plan_tests.rs"]
mod tests;
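The new `compare_code_mode_tool_names` helper above sorts code-mode tools by rendered namespace name, then by the function name with the `namespace__` prefix stripped, then by the raw name as a tiebreaker. A self-contained sketch of that ordering, with the namespace map simplified to tool-name → display-name strings (an assumption; the real map holds `ToolNamespaceDescription` values):

```rust
use std::cmp::Ordering;
use std::collections::BTreeMap;

// Display name of the namespace a tool belongs to, if any.
fn namespace_name<'a>(name: &str, namespaces: &'a BTreeMap<String, String>) -> Option<&'a str> {
    namespaces.get(name).map(String::as_str)
}

// Strip the "namespace__" prefix to compare bare function names.
fn function_name<'a>(name: &'a str, namespace: Option<&str>) -> &'a str {
    namespace
        .and_then(|ns| name.strip_prefix(ns).and_then(|rest| rest.strip_prefix("__")))
        .unwrap_or(name)
}

fn compare(left: &str, right: &str, namespaces: &BTreeMap<String, String>) -> Ordering {
    let left_ns = namespace_name(left, namespaces);
    let right_ns = namespace_name(right, namespaces);
    left_ns
        .cmp(&right_ns)
        .then_with(|| function_name(left, left_ns).cmp(function_name(right, right_ns)))
        .then_with(|| left.cmp(right))
}

fn main() {
    let namespaces = BTreeMap::from([
        ("gmail__read".to_string(), "gmail".to_string()),
        ("calendar__create".to_string(), "calendar".to_string()),
    ]);
    let mut tools = vec!["gmail__read", "calendar__create", "shell"];
    tools.sort_by(|l, r| compare(l, r, &namespaces));
    // Tools without a namespace (None) sort before namespaced ones.
    assert_eq!(tools, vec!["shell", "calendar__create", "gmail__read"]);
}
```

Because `Option::cmp` orders `None` before `Some`, built-in tools without a namespace entry land ahead of namespaced app tools, matching the grouping the sort is after.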

View File

@@ -5,10 +5,13 @@ use crate::DiscoverablePluginInfo;
use crate::DiscoverableTool;
use crate::FreeformTool;
use crate::JsonSchema;
use crate::JsonSchemaPrimitiveType;
use crate::JsonSchemaType;
use crate::ResponsesApiTool;
use crate::ResponsesApiWebSearchFilters;
use crate::ResponsesApiWebSearchUserLocation;
use crate::ToolHandlerSpec;
use crate::ToolNamespace;
use crate::ToolRegistryPlanAppTool;
use crate::ToolsConfigParams;
use crate::WaitAgentTimeoutOptions;
@@ -172,9 +175,7 @@ fn test_build_specs_collab_tools_enabled() {
let ToolSpec::Function(ResponsesApiTool { parameters, .. }) = &spawn_agent.spec else {
panic!("spawn_agent should be a function tool");
};
let JsonSchema::Object { properties, .. } = parameters else {
panic!("spawn_agent should use object params");
};
let (properties, _) = expect_object_schema(parameters);
assert!(properties.contains_key("fork_context"));
assert!(!properties.contains_key("fork_turns"));
}
@@ -223,21 +224,14 @@ fn test_build_specs_multi_agent_v2_uses_task_names_and_hides_resume() {
else {
panic!("spawn_agent should be a function tool");
};
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("spawn_agent should use object params");
};
let (properties, required) = expect_object_schema(parameters);
assert!(properties.contains_key("task_name"));
assert!(properties.contains_key("message"));
assert!(properties.contains_key("fork_turns"));
assert!(!properties.contains_key("items"));
assert!(!properties.contains_key("fork_context"));
assert_eq!(
required.as_ref(),
required,
Some(&vec!["task_name".to_string(), "message".to_string()])
);
let output_schema = output_schema
@@ -255,20 +249,13 @@ fn test_build_specs_multi_agent_v2_uses_task_names_and_hides_resume() {
panic!("send_message should be a function tool");
};
assert_eq!(output_schema, &None);
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("send_message should use object params");
};
let (properties, required) = expect_object_schema(parameters);
assert!(properties.contains_key("target"));
assert!(!properties.contains_key("interrupt"));
assert!(properties.contains_key("message"));
assert!(!properties.contains_key("items"));
assert_eq!(
required.as_ref(),
required,
Some(&vec!["target".to_string(), "message".to_string()])
);
@@ -282,19 +269,12 @@ fn test_build_specs_multi_agent_v2_uses_task_names_and_hides_resume() {
panic!("followup_task should be a function tool");
};
assert_eq!(output_schema, &None);
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("followup_task should use object params");
};
let (properties, required) = expect_object_schema(parameters);
assert!(properties.contains_key("target"));
assert!(properties.contains_key("message"));
assert!(!properties.contains_key("items"));
assert_eq!(
required.as_ref(),
required,
Some(&vec!["target".to_string(), "message".to_string()])
);
@@ -307,17 +287,10 @@ fn test_build_specs_multi_agent_v2_uses_task_names_and_hides_resume() {
else {
panic!("wait_agent should be a function tool");
};
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("wait_agent should use object params");
};
let (properties, required) = expect_object_schema(parameters);
assert!(!properties.contains_key("targets"));
assert!(properties.contains_key("timeout_ms"));
assert_eq!(required, &None);
assert_eq!(required, None);
let output_schema = output_schema
.as_ref()
.expect("wait_agent should define output schema");
@@ -335,16 +308,9 @@ fn test_build_specs_multi_agent_v2_uses_task_names_and_hides_resume() {
else {
panic!("list_agents should be a function tool");
};
let JsonSchema::Object {
properties,
required,
..
} = parameters
else {
panic!("list_agents should use object params");
};
let (properties, required) = expect_object_schema(parameters);
assert!(properties.contains_key("path_prefix"));
assert_eq!(required.as_ref(), None);
assert_eq!(required, None);
let output_schema = output_schema
.as_ref()
.expect("list_agents should define output schema");
@@ -416,9 +382,7 @@ fn view_image_tool_omits_detail_without_original_detail_feature() {
let ToolSpec::Function(ResponsesApiTool { parameters, .. }) = &view_image.spec else {
panic!("view_image should be a function tool");
};
let JsonSchema::Object { properties, .. } = parameters else {
panic!("view_image should use an object schema");
};
let (properties, _) = expect_object_schema(parameters);
assert!(!properties.contains_key("detail"));
}
@@ -448,16 +412,13 @@ fn view_image_tool_includes_detail_with_original_detail_feature() {
let ToolSpec::Function(ResponsesApiTool { parameters, .. }) = &view_image.spec else {
panic!("view_image should be a function tool");
};
let JsonSchema::Object { properties, .. } = parameters else {
panic!("view_image should use an object schema");
};
let (properties, _) = expect_object_schema(parameters);
assert!(properties.contains_key("detail"));
let Some(JsonSchema::String {
description: Some(description),
}) = properties.get("detail")
else {
panic!("view_image detail should include a description");
};
let description = expect_string_description(
properties
.get("detail")
.expect("view_image detail should include a description"),
);
assert!(description.contains("only supported value is `original`"));
assert!(description.contains("omit this field for default resized behavior"));
}
@@ -1118,40 +1079,40 @@ fn test_build_specs_mcp_tools_converted() {
&tool.spec,
&ToolSpec::Function(ResponsesApiTool {
name: "test_server/do_something_cool".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(
BTreeMap::from([
(
"string_argument".to_string(),
JsonSchema::String { description: None }
JsonSchema::string(/*description*/ None),
),
(
"number_argument".to_string(),
JsonSchema::Number { description: None }
JsonSchema::number(/*description*/ None),
),
(
"object_argument".to_string(),
JsonSchema::Object {
properties: BTreeMap::from([
JsonSchema::object(
BTreeMap::from([
(
"string_property".to_string(),
JsonSchema::String { description: None }
JsonSchema::string(/*description*/ None),
),
(
"number_property".to_string(),
JsonSchema::Number { description: None }
JsonSchema::number(/*description*/ None),
),
]),
required: Some(vec![
Some(vec![
"string_property".to_string(),
"number_property".to_string(),
]),
additional_properties: Some(false.into()),
},
Some(false.into()),
),
),
]),
required: None,
additional_properties: None,
},
/*required*/ None,
/*additional_properties*/ None
),
description: "Do something cool".to_string(),
strict: false,
output_schema: Some(mcp_call_tool_result_output_schema(serde_json::json!({}))),
@@ -1372,6 +1333,7 @@ fn tool_suggest_is_not_registered_without_feature_flag() {
&tools_config,
/*mcp_tools*/ None,
/*app_tools*/ None,
/*tool_namespaces*/ None,
Some(vec![discoverable_connector(
"connector_2128aebfecb84f64a069897515042a44",
"Google Calendar",
@@ -1411,6 +1373,7 @@ fn tool_suggest_can_be_registered_without_search_tool() {
&tools_config,
/*mcp_tools*/ None,
/*app_tools*/ None,
/*tool_namespaces*/ None,
Some(vec![discoverable_connector(
"connector_2128aebfecb84f64a069897515042a44",
"Google Calendar",
@@ -1478,6 +1441,7 @@ fn tool_suggest_description_lists_discoverable_tools() {
&tools_config,
/*mcp_tools*/ None,
/*app_tools*/ None,
/*tool_namespaces*/ None,
Some(discoverable_tools),
&[],
);
@@ -1524,11 +1488,9 @@ fn tool_suggest_description_lists_discoverable_tools() {
assert!(description.contains("DO NOT explore or recommend tools that are not on this list."));
assert!(!description.contains("{{discoverable_tools}}"));
assert!(!description.contains("tool_search fails to find a good match"));
let JsonSchema::Object { required, .. } = parameters else {
panic!("expected object parameters");
};
let (_, required) = expect_object_schema(parameters);
assert_eq!(
required.as_ref(),
required,
Some(&vec![
"tool_type".to_string(),
"action_type".to_string(),
@@ -1543,6 +1505,7 @@ fn code_mode_augments_mcp_tool_descriptions_with_namespaced_sample() {
let model_info = model_info();
let mut features = Features::with_defaults();
features.enable(Feature::CodeMode);
features.enable(Feature::CodeModeOnly);
features.enable(Feature::UnifiedExec);
let available_models = Vec::new();
let tools_config = ToolsConfig::new(&ToolsConfigParams {
@@ -1584,10 +1547,100 @@ fn code_mode_augments_mcp_tool_descriptions_with_namespaced_sample() {
assert_eq!(
description,
"Echo text\n\nexec tool declaration:\n```ts\ndeclare const tools: { mcp__sample__echo(args: { message: string; }): Promise<{ _meta?: unknown; content: Array<unknown>; isError?: boolean; structuredContent?: unknown; }>; };\n```"
r#"Echo text

exec tool declaration:
```ts
declare const tools: { mcp__sample__echo(args: { message: string; }): Promise<{ _meta?: unknown; content: Array<unknown>; isError?: boolean; structuredContent?: unknown; }>; };
```"#
);
}
#[test]
fn code_mode_preserves_nullable_and_literal_mcp_input_shapes() {
let model_info = model_info();
let mut features = Features::with_defaults();
features.enable(Feature::CodeMode);
features.enable(Feature::UnifiedExec);
let available_models = Vec::new();
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
available_models: &available_models,
features: &features,
web_search_mode: Some(WebSearchMode::Cached),
session_source: SessionSource::Cli,
sandbox_policy: &SandboxPolicy::DangerFullAccess,
windows_sandbox_level: WindowsSandboxLevel::Disabled,
});
let (tools, _) = build_specs(
&tools_config,
Some(HashMap::from([(
"mcp__sample__fn".to_string(),
mcp_tool(
"fn",
"Sample fn",
serde_json::json!({
"type": "object",
"properties": {
"open": {
"anyOf": [
{
"type": "array",
"items": {
"type": "object",
"properties": {
"ref_id": {"type": "string"},
"lineno": {"anyOf": [{"type": "integer"}, {"type": "null"}]}
},
"required": ["ref_id"],
"additionalProperties": false
}
},
{"type": "null"}
]
},
"tagged_list": {
"anyOf": [
{
"type": "array",
"items": {
"type": "object",
"properties": {
"kind": {"type": "const", "const": "tagged"},
"variant": {"type": "enum", "enum": ["alpha", "beta"]},
"scope": {"type": "enum", "enum": ["one", "two"]}
},
"required": ["kind", "variant", "scope"]
}
},
{"type": "null"}
]
},
"response_length": {"type": "enum", "enum": ["short", "medium", "long"]}
},
"additionalProperties": false
}),
),
)])),
/*app_tools*/ None,
&[],
);
let ToolSpec::Function(ResponsesApiTool { description, .. }) =
&find_tool(&tools, "mcp__sample__fn").spec
else {
panic!("expected function tool");
};
assert!(description.contains(
r#"exec tool declaration:
```ts
declare const tools: { mcp__sample__fn(args: { open?: Array<{ lineno?: number | null; ref_id: string; }> | null; response_length?: "short" | "medium" | "long"; tagged_list?: Array<{ kind: "tagged"; scope: "one" | "two"; variant: "alpha" | "beta"; }> | null; }): Promise<{ _meta?: unknown; content: Array<unknown>; isError?: boolean; structuredContent?: unknown; }>; };
```"#
));
}
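The expected declaration above renders `enum` schemas as quoted TypeScript string unions (`"short" | "medium" | "long"`) and `anyOf` variants containing null as `… | null`. A hypothetical sketch of those two rendering rules; the function names are illustrative only, the real renderer lives in the codex-code-mode crate:

```rust
// Render an enum schema's values as a quoted TS string-literal union.
fn render_enum(values: &[&str]) -> String {
    values
        .iter()
        .map(|value| format!("\"{value}\""))
        .collect::<Vec<_>>()
        .join(" | ")
}

// Render an anyOf of [inner, null] as a nullable TS union.
fn render_nullable(inner: &str) -> String {
    format!("{inner} | null")
}

fn main() {
    assert_eq!(
        render_enum(&["short", "medium", "long"]),
        "\"short\" | \"medium\" | \"long\""
    );
    assert_eq!(
        render_nullable("Array<{ ref_id: string; }>"),
        "Array<{ ref_id: string; }> | null"
    );
}
```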
#[test]
fn code_mode_augments_builtin_tool_descriptions_with_typed_sample() {
let model_info = model_info();
@@ -1619,7 +1672,7 @@ fn code_mode_augments_builtin_tool_descriptions_with_typed_sample() {
assert_eq!(
description,
"View a local image from the filesystem (only use if given a full filepath by the user, and the image isn't already attached to the thread context within <image ...> tags).\n\nexec tool declaration:\n```ts\ndeclare const tools: { view_image(args: { path: string; }): Promise<{ detail: string | null; image_url: string; }>; };\n```"
"View a local image from the filesystem (only use if given a full filepath by the user, and the image isn't already attached to the thread context within <image ...> tags).\n\nexec tool declaration:\n```ts\ndeclare const tools: { view_image(args: {\n // Local filesystem path to an image file\n path: string;\n}): Promise<{\n // Image detail hint returned by view_image. Returns `original` when original resolution is preserved, otherwise `null`.\n detail: string | null;\n // Data URL for the loaded image.\n image_url: string;\n}>; };\n```"
);
}
@@ -1746,6 +1799,7 @@ fn build_specs<'a>(
config,
mcp_tools,
app_tools,
/*tool_namespaces*/ None,
/*discoverable_tools*/ None,
dynamic_tools,
)
@@ -1755,6 +1809,25 @@ fn build_specs_with_discoverable_tools<'a>(
config: &ToolsConfig,
mcp_tools: Option<HashMap<String, rmcp::model::Tool>>,
app_tools: Option<Vec<ToolRegistryPlanAppTool<'a>>>,
tool_namespaces: Option<HashMap<String, ToolNamespace>>,
discoverable_tools: Option<Vec<DiscoverableTool>>,
dynamic_tools: &[DynamicToolSpec],
) -> (Vec<ConfiguredToolSpec>, Vec<ToolHandlerSpec>) {
build_specs_with_optional_tool_namespaces(
config,
mcp_tools,
tool_namespaces,
app_tools,
discoverable_tools,
dynamic_tools,
)
}
fn build_specs_with_optional_tool_namespaces<'a>(
config: &ToolsConfig,
mcp_tools: Option<HashMap<String, rmcp::model::Tool>>,
tool_namespaces: Option<HashMap<String, ToolNamespace>>,
app_tools: Option<Vec<ToolRegistryPlanAppTool<'a>>>,
discoverable_tools: Option<Vec<DiscoverableTool>>,
dynamic_tools: &[DynamicToolSpec],
) -> (Vec<ConfiguredToolSpec>, Vec<ToolHandlerSpec>) {
@@ -1762,6 +1835,7 @@ fn build_specs_with_discoverable_tools<'a>(
config,
ToolRegistryPlanParams {
mcp_tools: mcp_tools.as_ref(),
tool_namespaces: tool_namespaces.as_ref(),
app_tools: app_tools.as_deref(),
discoverable_tools: discoverable_tools.as_deref(),
dynamic_tools,
@@ -1884,30 +1958,46 @@ fn find_tool<'a>(tools: &'a [ConfiguredToolSpec], expected_name: &str) -> &'a Co
.unwrap_or_else(|| panic!("expected tool {expected_name}"))
}
fn expect_object_schema(
schema: &JsonSchema,
) -> (&BTreeMap<String, JsonSchema>, Option<&Vec<String>>) {
assert_eq!(
schema.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::Object))
);
let properties = schema
.properties
.as_ref()
.expect("expected object properties");
(properties, schema.required.as_ref())
}
fn expect_string_description(schema: &JsonSchema) -> &str {
assert_eq!(
schema.schema_type,
Some(JsonSchemaType::Single(JsonSchemaPrimitiveType::String))
);
schema.description.as_deref().expect("expected description")
}
fn strip_descriptions_schema(schema: &mut JsonSchema) {
match schema {
JsonSchema::Boolean { description }
| JsonSchema::String { description }
| JsonSchema::Number { description } => {
*description = None;
}
JsonSchema::Array { items, description } => {
strip_descriptions_schema(items);
*description = None;
}
JsonSchema::Object {
properties,
required: _,
additional_properties,
} => {
for value in properties.values_mut() {
strip_descriptions_schema(value);
}
if let Some(AdditionalProperties::Schema(schema)) = additional_properties {
strip_descriptions_schema(schema);
}
if let Some(variants) = &mut schema.any_of {
for variant in variants {
strip_descriptions_schema(variant);
}
}
if let Some(items) = &mut schema.items {
strip_descriptions_schema(items);
}
if let Some(properties) = &mut schema.properties {
for value in properties.values_mut() {
strip_descriptions_schema(value);
}
}
if let Some(AdditionalProperties::Schema(schema)) = &mut schema.additional_properties {
strip_descriptions_schema(schema);
}
schema.description = None;
}
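With `JsonSchema` now a single struct rather than an enum, the description-stripping helper above collapses from one match arm per variant into a uniform recursive walk over every optional child slot. A self-contained sketch of that shape, with the field set simplified from the diff (the real type also tracks `schema_type` and an `AdditionalProperties` slot):

```rust
use std::collections::BTreeMap;

// Simplified struct-shaped schema; an assumption based on the diff.
#[derive(Debug, Default, PartialEq)]
struct Schema {
    description: Option<String>,
    properties: Option<BTreeMap<String, Schema>>,
    items: Option<Box<Schema>>,
    any_of: Option<Vec<Schema>>,
}

// One recursive walk covers every child slot, instead of one match arm
// per enum variant as before the refactor.
fn strip_descriptions(schema: &mut Schema) {
    if let Some(variants) = &mut schema.any_of {
        for variant in variants {
            strip_descriptions(variant);
        }
    }
    if let Some(items) = &mut schema.items {
        strip_descriptions(items);
    }
    if let Some(properties) = &mut schema.properties {
        for value in properties.values_mut() {
            strip_descriptions(value);
        }
    }
    schema.description = None;
}

fn main() {
    let mut schema = Schema {
        description: Some("outer".to_string()),
        properties: Some(BTreeMap::from([(
            "inner".to_string(),
            Schema { description: Some("inner".to_string()), ..Default::default() },
        )])),
        ..Default::default()
    };
    strip_descriptions(&mut schema);
    assert_eq!(schema.description, None);
    assert_eq!(schema.properties.unwrap()["inner"].description, None);
}
```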
fn strip_descriptions_tool(spec: &mut ToolSpec) {

View File

@@ -58,6 +58,7 @@ pub struct ToolRegistryPlan {
#[derive(Debug, Clone, Copy)]
pub struct ToolRegistryPlanParams<'a> {
pub mcp_tools: Option<&'a HashMap<String, McpTool>>,
pub tool_namespaces: Option<&'a HashMap<String, ToolNamespace>>,
pub app_tools: Option<&'a [ToolRegistryPlanAppTool<'a>]>,
pub discoverable_tools: Option<&'a [DiscoverableTool]>,
pub dynamic_tools: &'a [DynamicToolSpec],
@@ -66,6 +67,12 @@ pub struct ToolRegistryPlanParams<'a> {
pub codex_apps_mcp_server_name: &'a str,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ToolNamespace {
pub name: String,
pub description: Option<String>,
}
#[derive(Debug, Clone, Copy)]
pub struct ToolRegistryPlanAppTool<'a> {
pub tool_name: &'a str,

View File

@@ -24,11 +24,11 @@ fn tool_spec_name_covers_all_variants() {
description: "Look up an order".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
})
.name(),
@@ -38,11 +38,11 @@ fn tool_spec_name_covers_all_variants() {
ToolSpec::ToolSearch {
execution: "sync".to_string(),
description: "Search for tools".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
}
.name(),
"tool_search"
@@ -90,11 +90,11 @@ fn configured_tool_spec_name_delegates_to_tool_spec() {
description: "Look up an order".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
BTreeMap::new(),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
}),
/*supports_parallel_tool_calls*/ true,
@@ -140,14 +140,11 @@ fn create_tools_json_for_responses_api_includes_top_level_name() {
description: "A demo tool".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
"foo".to_string(),
JsonSchema::String { description: None },
)]),
required: None,
additional_properties: None,
},
parameters: JsonSchema::object(
BTreeMap::from([("foo".to_string(), JsonSchema::string(/*description*/ None),)]),
/*required*/ None,
/*additional_properties*/ None
),
output_schema: None,
})])
.expect("serialize tools"),
@@ -210,16 +207,14 @@ fn tool_search_tool_spec_serializes_expected_wire_shape() {
serde_json::to_value(ToolSpec::ToolSearch {
execution: "sync".to_string(),
description: "Search app tools".to_string(),
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(
BTreeMap::from([(
"query".to_string(),
JsonSchema::String {
description: Some("Tool search query".to_string()),
},
JsonSchema::string(Some("Tool search query".to_string()),),
)]),
required: Some(vec!["query".to_string()]),
additional_properties: Some(AdditionalProperties::Boolean(false)),
},
Some(vec!["query".to_string()]),
Some(AdditionalProperties::Boolean(false))
),
})
.expect("serialize tool_search"),
json!({

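The test hunks above migrate struct-literal `JsonSchema::Object { .. }` construction to `JsonSchema::object(..)` (and likewise `string`/`number`). The helper definitions are not part of these hunks; this is a minimal sketch of what they likely look like, with the enum shape inferred from the call sites:

```rust
use std::collections::BTreeMap;

// Hypothetical sketch of the constructor helpers this diff migrates to;
// the real definitions live outside these hunks, so the exact enum and
// helper signatures are assumptions inferred from the call sites.
#[derive(Debug, Clone, PartialEq)]
pub enum AdditionalProperties {
    Boolean(bool),
}

impl From<bool> for AdditionalProperties {
    fn from(b: bool) -> Self {
        AdditionalProperties::Boolean(b)
    }
}

#[derive(Debug, Clone, PartialEq)]
pub enum JsonSchema {
    String { description: Option<String> },
    Number { description: Option<String> },
    Object {
        properties: BTreeMap<String, JsonSchema>,
        required: Option<Vec<String>>,
        additional_properties: Option<AdditionalProperties>,
    },
}

impl JsonSchema {
    pub fn string(description: Option<String>) -> Self {
        JsonSchema::String { description }
    }
    pub fn number(description: Option<String>) -> Self {
        JsonSchema::Number { description }
    }
    pub fn object(
        properties: BTreeMap<String, JsonSchema>,
        required: Option<Vec<String>>,
        additional_properties: Option<AdditionalProperties>,
    ) -> Self {
        JsonSchema::Object { properties, required, additional_properties }
    }
}

fn main() {
    // The helper call and the struct literal produce the same value,
    // which is why the test migration in this diff is behavior-preserving.
    let via_helper = JsonSchema::object(BTreeMap::new(), None, Some(false.into()));
    let via_literal = JsonSchema::Object {
        properties: BTreeMap::new(),
        required: None,
        additional_properties: Some(AdditionalProperties::Boolean(false)),
    };
    assert_eq!(via_helper, via_literal);
}
```

The helpers also give `From<bool>` conversions like `Some(false.into())` a single obvious target type, which keeps the call sites compact.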

@@ -7,31 +7,23 @@ pub fn create_list_dir_tool() -> ToolSpec {
let properties = BTreeMap::from([
(
"dir_path".to_string(),
JsonSchema::String {
description: Some("Absolute path to the directory to list.".to_string()),
},
JsonSchema::string(Some("Absolute path to the directory to list.".to_string())),
),
(
"offset".to_string(),
JsonSchema::Number {
description: Some(
"The entry number to start listing from. Must be 1 or greater.".to_string(),
),
},
JsonSchema::number(Some(
"The entry number to start listing from. Must be 1 or greater.".to_string(),
)),
),
(
"limit".to_string(),
JsonSchema::Number {
description: Some("The maximum number of entries to return.".to_string()),
},
JsonSchema::number(Some("The maximum number of entries to return.".to_string())),
),
(
"depth".to_string(),
JsonSchema::Number {
description: Some(
"The maximum directory depth to traverse. Must be 1 or greater.".to_string(),
),
},
JsonSchema::number(Some(
"The maximum directory depth to traverse. Must be 1 or greater.".to_string(),
)),
),
]);
@@ -42,11 +34,7 @@ pub fn create_list_dir_tool() -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["dir_path".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, Some(vec!["dir_path".to_string()]), Some(false.into())),
output_schema: None,
})
}
@@ -55,54 +43,44 @@ pub fn create_test_sync_tool() -> ToolSpec {
let barrier_properties = BTreeMap::from([
(
"id".to_string(),
JsonSchema::String {
description: Some(
"Identifier shared by concurrent calls that should rendezvous".to_string(),
),
},
JsonSchema::string(Some(
"Identifier shared by concurrent calls that should rendezvous".to_string(),
)),
),
(
"participants".to_string(),
JsonSchema::Number {
description: Some(
"Number of tool calls that must arrive before the barrier opens".to_string(),
),
},
JsonSchema::number(Some(
"Number of tool calls that must arrive before the barrier opens".to_string(),
)),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some(
"Maximum time in milliseconds to wait at the barrier".to_string(),
),
},
JsonSchema::number(Some(
"Maximum time in milliseconds to wait at the barrier".to_string(),
)),
),
]);
let properties = BTreeMap::from([
(
"sleep_before_ms".to_string(),
JsonSchema::Number {
description: Some(
"Optional delay in milliseconds before any other action".to_string(),
),
},
JsonSchema::number(Some(
"Optional delay in milliseconds before any other action".to_string(),
)),
),
(
"sleep_after_ms".to_string(),
JsonSchema::Number {
description: Some(
"Optional delay in milliseconds after completing the barrier".to_string(),
),
},
JsonSchema::number(Some(
"Optional delay in milliseconds after completing the barrier".to_string(),
)),
),
(
"barrier".to_string(),
JsonSchema::Object {
properties: barrier_properties,
required: Some(vec!["id".to_string(), "participants".to_string()]),
additional_properties: Some(false.into()),
},
JsonSchema::object(
barrier_properties,
Some(vec!["id".to_string(), "participants".to_string()]),
Some(false.into()),
),
),
]);
@@ -111,11 +89,7 @@ pub fn create_test_sync_tool() -> ToolSpec {
description: "Internal synchronization helper used by Codex integration tests.".to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: None,
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, /*required*/ None, Some(false.into())),
output_schema: None,
})
}


@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -13,46 +14,34 @@ fn list_dir_tool_matches_expected_spec() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"depth".to_string(),
JsonSchema::Number {
description: Some(
"The maximum directory depth to traverse. Must be 1 or greater."
.to_string(),
),
},
JsonSchema::number(Some(
"The maximum directory depth to traverse. Must be 1 or greater."
.to_string(),
)),
),
(
"dir_path".to_string(),
JsonSchema::String {
description: Some(
"Absolute path to the directory to list.".to_string(),
),
},
JsonSchema::string(Some(
"Absolute path to the directory to list.".to_string(),
)),
),
(
"limit".to_string(),
JsonSchema::Number {
description: Some(
"The maximum number of entries to return.".to_string(),
),
},
JsonSchema::number(Some(
"The maximum number of entries to return.".to_string(),
)),
),
(
"offset".to_string(),
JsonSchema::Number {
description: Some(
"The entry number to start listing from. Must be 1 or greater."
.to_string(),
),
},
JsonSchema::number(Some(
"The entry number to start listing from. Must be 1 or greater."
.to_string(),
)),
),
]),
required: Some(vec!["dir_path".to_string()]),
additional_properties: Some(false.into()),
},
]), Some(vec!["dir_path".to_string()]), Some(false.into())),
output_schema: None,
})
);
@@ -68,69 +57,51 @@ fn test_sync_tool_matches_expected_spec() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"barrier".to_string(),
JsonSchema::Object {
properties: BTreeMap::from([
JsonSchema::object(
BTreeMap::from([
(
"id".to_string(),
JsonSchema::String {
description: Some(
"Identifier shared by concurrent calls that should rendezvous"
.to_string(),
),
},
JsonSchema::string(Some(
"Identifier shared by concurrent calls that should rendezvous"
.to_string(),
)),
),
(
"participants".to_string(),
JsonSchema::Number {
description: Some(
"Number of tool calls that must arrive before the barrier opens"
.to_string(),
),
},
JsonSchema::number(Some(
"Number of tool calls that must arrive before the barrier opens"
.to_string(),
)),
),
(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some(
"Maximum time in milliseconds to wait at the barrier"
.to_string(),
),
},
JsonSchema::number(Some(
"Maximum time in milliseconds to wait at the barrier"
.to_string(),
)),
),
]),
required: Some(vec![
"id".to_string(),
"participants".to_string(),
]),
additional_properties: Some(false.into()),
},
Some(vec!["id".to_string(), "participants".to_string()]),
Some(false.into()),
),
),
(
"sleep_after_ms".to_string(),
JsonSchema::Number {
description: Some(
"Optional delay in milliseconds after completing the barrier"
.to_string(),
),
},
JsonSchema::number(Some(
"Optional delay in milliseconds after completing the barrier"
.to_string(),
)),
),
(
"sleep_before_ms".to_string(),
JsonSchema::Number {
description: Some(
"Optional delay in milliseconds before any other action"
.to_string(),
),
},
JsonSchema::number(Some(
"Optional delay in milliseconds before any other action".to_string(),
)),
),
]),
required: None,
additional_properties: Some(false.into()),
},
]), /*required*/ None, Some(false.into())),
output_schema: None,
})
);


@@ -14,18 +14,14 @@ pub struct ViewImageToolOptions {
pub fn create_view_image_tool(options: ViewImageToolOptions) -> ToolSpec {
let mut properties = BTreeMap::from([(
"path".to_string(),
JsonSchema::String {
description: Some("Local filesystem path to an image file".to_string()),
},
JsonSchema::string(Some("Local filesystem path to an image file".to_string())),
)]);
if options.can_request_original_image_detail {
properties.insert(
"detail".to_string(),
JsonSchema::String {
description: Some(
"Optional detail override. The only supported value is `original`; omit this field for default resized behavior. Use `original` to preserve the file's original resolution instead of resizing to fit. This is important when high-fidelity image perception or precise localization is needed, especially for CUA agents.".to_string(),
),
},
JsonSchema::string(Some(
"Optional detail override. The only supported value is `original`; omit this field for default resized behavior. Use `original` to preserve the file's original resolution instead of resizing to fit. This is important when high-fidelity image perception or precise localization is needed, especially for CUA agents.".to_string(),
)),
);
}
@@ -35,11 +31,7 @@ pub fn create_view_image_tool(options: ViewImageToolOptions) -> ToolSpec {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["path".to_string()]),
additional_properties: Some(false.into()),
},
parameters: JsonSchema::object(properties, Some(vec!["path".to_string()]), Some(false.into())),
output_schema: Some(view_image_output_schema()),
})
}


@@ -1,4 +1,5 @@
use super::*;
use crate::JsonSchema;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
@@ -14,16 +15,10 @@ fn view_image_tool_omits_detail_without_original_detail_feature() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([(
parameters: JsonSchema::object(BTreeMap::from([(
"path".to_string(),
JsonSchema::String {
description: Some("Local filesystem path to an image file".to_string()),
},
)]),
required: Some(vec!["path".to_string()]),
additional_properties: Some(false.into()),
},
JsonSchema::string(Some("Local filesystem path to an image file".to_string()),),
)]), Some(vec!["path".to_string()]), Some(false.into())),
output_schema: Some(view_image_output_schema()),
})
);
@@ -41,26 +36,18 @@ fn view_image_tool_includes_detail_with_original_detail_feature() {
.to_string(),
strict: false,
defer_loading: None,
parameters: JsonSchema::Object {
properties: BTreeMap::from([
parameters: JsonSchema::object(BTreeMap::from([
(
"detail".to_string(),
JsonSchema::String {
description: Some(
JsonSchema::string(Some(
"Optional detail override. The only supported value is `original`; omit this field for default resized behavior. Use `original` to preserve the file's original resolution instead of resizing to fit. This is important when high-fidelity image perception or precise localization is needed, especially for CUA agents.".to_string(),
),
},
),),
),
(
"path".to_string(),
JsonSchema::String {
description: Some("Local filesystem path to an image file".to_string()),
},
JsonSchema::string(Some("Local filesystem path to an image file".to_string()),),
),
]),
required: Some(vec!["path".to_string()]),
additional_properties: Some(false.into()),
},
]), Some(vec!["path".to_string()]), Some(false.into())),
output_schema: Some(view_image_output_schema()),
})
);


@@ -455,7 +455,10 @@ impl ChatComposer {
realtime_conversation_enabled: false,
audio_device_selection_enabled: false,
windows_degraded_sandbox_active: false,
is_zellij: codex_terminal_detection::terminal_info().is_zellij(),
is_zellij: matches!(
codex_terminal_detection::terminal_info().multiplexer,
Some(codex_terminal_detection::Multiplexer::Zellij {})
),
status_line_value: None,
status_line_enabled: false,
active_agent_label: None,

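The `ChatComposer` hunk above replaces a boolean `is_zellij()` helper with an explicit `matches!` on a `Multiplexer` enum. A self-contained sketch of that pattern, where the `TerminalInfo` shape and any variants beyond `Zellij {}` are assumptions for illustration:

```rust
// Sketch of the terminal-detection change in this diff: callers match on
// an explicit Multiplexer enum rather than calling a bool helper. The
// Tmux variant and TerminalInfo fields are hypothetical stand-ins.
#[derive(Debug, Clone, PartialEq)]
pub enum Multiplexer {
    Zellij {},
    Tmux {},
}

pub struct TerminalInfo {
    pub multiplexer: Option<Multiplexer>,
}

fn is_zellij(info: &TerminalInfo) -> bool {
    matches!(info.multiplexer, Some(Multiplexer::Zellij {}))
}

fn main() {
    assert!(is_zellij(&TerminalInfo { multiplexer: Some(Multiplexer::Zellij {}) }));
    assert!(!is_zellij(&TerminalInfo { multiplexer: Some(Multiplexer::Tmux {}) }));
    assert!(!is_zellij(&TerminalInfo { multiplexer: None }));
}
```

Matching on the enum at each call site keeps the multiplexer identity available to callers that care about more than Zellij, instead of collapsing it to a boolean in the detection crate.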

@@ -14,6 +14,7 @@ use codex_app_server_client::InProcessAppServerClient;
use codex_app_server_client::InProcessClientStartArgs;
use codex_app_server_client::RemoteAppServerClient;
use codex_app_server_client::RemoteAppServerConnectArgs;
use codex_app_server_protocol::Account as AppServerAccount;
use codex_app_server_protocol::AuthMode as AppServerAuthMode;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::Thread as AppServerThread;
@@ -1016,32 +1017,33 @@ async fn run_ratatui_app(
// Initialize high-fidelity session event logging if enabled.
session_log::maybe_init(&initial_config);
let mut app_server = Some(
match start_app_server(
&app_server_target,
arg0_paths.clone(),
initial_config.clone(),
cli_kv_overrides.clone(),
loader_overrides.clone(),
cloud_requirements.clone(),
feedback.clone(),
)
.await
{
Ok(app_server) => AppServerSession::new(app_server)
.with_remote_cwd_override(remote_cwd_override.clone()),
Err(err) => {
terminal_restore_guard.restore_silently();
session_log::log_session_end();
return Err(err);
}
},
);
let should_show_trust_screen_flag = !remote_mode && should_show_trust_screen(&initial_config);
let mut trust_decision_was_made = false;
let needs_onboarding_app_server =
should_show_trust_screen_flag || initial_config.model_provider.requires_openai_auth;
let mut onboarding_app_server = if needs_onboarding_app_server {
Some(
AppServerSession::new(
start_app_server(
&app_server_target,
arg0_paths.clone(),
initial_config.clone(),
cli_kv_overrides.clone(),
loader_overrides.clone(),
cloud_requirements.clone(),
feedback.clone(),
)
.await?,
)
.with_remote_cwd_override(remote_cwd_override.clone()),
)
} else {
None
};
let login_status = if initial_config.model_provider.requires_openai_auth {
let Some(app_server) = onboarding_app_server.as_mut() else {
unreachable!("onboarding app server should exist when auth is required");
let Some(app_server) = app_server.as_mut() else {
unreachable!("app server should exist when auth is required");
};
get_login_status(app_server, &initial_config).await?
} else {
@@ -1057,13 +1059,13 @@ async fn run_ratatui_app(
show_login_screen,
show_trust_screen: should_show_trust_screen_flag,
login_status,
app_server_request_handle: onboarding_app_server
app_server_request_handle: app_server
.as_ref()
.map(AppServerSession::request_handle),
config: initial_config.clone(),
},
if show_login_screen {
onboarding_app_server.take()
app_server.as_mut()
} else {
None
},
@@ -1071,6 +1073,7 @@ async fn run_ratatui_app(
)
.await?;
if onboarding_result.should_exit {
shutdown_app_server_if_present(app_server.take()).await;
terminal_restore_guard.restore_silently();
session_log::log_session_end();
let _ = tui.terminal.clear();
@@ -1110,10 +1113,8 @@ async fn run_ratatui_app(
initial_config
}
} else {
shutdown_app_server_if_present(onboarding_app_server.take()).await;
initial_config
};
shutdown_app_server_if_present(onboarding_app_server.take()).await;
let mut missing_session_exit = |id_str: &str, action: &str| {
error!("Error finding conversation path: {id_str}");
@@ -1131,42 +1132,16 @@ async fn run_ratatui_app(
})
};
let needs_app_server_session_lookup = cli.resume_last
|| cli.fork_last
|| cli.resume_session_id.is_some()
|| cli.fork_session_id.is_some()
|| cli.resume_picker
|| cli.fork_picker;
let mut session_lookup_app_server = if needs_app_server_session_lookup {
Some(
AppServerSession::new(
start_app_server(
&app_server_target,
arg0_paths.clone(),
config.clone(),
cli_kv_overrides.clone(),
loader_overrides.clone(),
cloud_requirements.clone(),
feedback.clone(),
)
.await?,
)
.with_remote_cwd_override(remote_cwd_override.clone()),
)
} else {
None
};
let use_fork = cli.fork_picker || cli.fork_last || cli.fork_session_id.is_some();
let session_selection = if use_fork {
if let Some(id_str) = cli.fork_session_id.as_deref() {
let Some(app_server) = session_lookup_app_server.as_mut() else {
unreachable!("session lookup app server should be initialized for --fork <id>");
let Some(startup_app_server) = app_server.as_mut() else {
unreachable!("app server should be initialized for --fork <id>");
};
match lookup_session_target_with_app_server(app_server, id_str).await? {
match lookup_session_target_with_app_server(startup_app_server, id_str).await? {
Some(target_session) => resume_picker::SessionSelection::Fork(target_session),
None => {
shutdown_app_server_if_present(session_lookup_app_server.take()).await;
shutdown_app_server_if_present(app_server.take()).await;
return missing_session_exit(id_str, "fork");
}
}
@@ -1181,8 +1156,8 @@ async fn run_ratatui_app(
} else {
None
};
let Some(app_server) = session_lookup_app_server.as_mut() else {
unreachable!("session lookup app server should be initialized for --fork --last");
let Some(app_server) = app_server.as_mut() else {
unreachable!("app server should be initialized for --fork --last");
};
match lookup_latest_session_target_with_app_server(
app_server, &config, filter_cwd, /*include_non_interactive*/ false,
@@ -1193,8 +1168,8 @@ async fn run_ratatui_app(
None => resume_picker::SessionSelection::StartFresh,
}
} else if cli.fork_picker {
let Some(app_server) = session_lookup_app_server.take() else {
unreachable!("session lookup app server should be initialized for --fork picker");
let Some(app_server) = app_server.take() else {
unreachable!("app server should be initialized for --fork picker");
};
match resume_picker::run_fork_picker_with_app_server(
&mut tui,
@@ -1221,13 +1196,13 @@ async fn run_ratatui_app(
resume_picker::SessionSelection::StartFresh
}
} else if let Some(id_str) = cli.resume_session_id.as_deref() {
let Some(app_server) = session_lookup_app_server.as_mut() else {
unreachable!("session lookup app server should be initialized for --resume <id>");
let Some(startup_app_server) = app_server.as_mut() else {
unreachable!("app server should be initialized for --resume <id>");
};
match lookup_session_target_with_app_server(app_server, id_str).await? {
match lookup_session_target_with_app_server(startup_app_server, id_str).await? {
Some(target_session) => resume_picker::SessionSelection::Resume(target_session),
None => {
shutdown_app_server_if_present(session_lookup_app_server.take()).await;
shutdown_app_server_if_present(app_server.take()).await;
return missing_session_exit(id_str, "resume");
}
}
@@ -1238,8 +1213,8 @@ async fn run_ratatui_app(
&config,
cli.resume_show_all,
);
let Some(app_server) = session_lookup_app_server.as_mut() else {
unreachable!("session lookup app server should be initialized for --resume --last");
let Some(app_server) = app_server.as_mut() else {
unreachable!("app server should be initialized for --resume --last");
};
match lookup_latest_session_target_with_app_server(
app_server,
@@ -1253,8 +1228,8 @@ async fn run_ratatui_app(
None => resume_picker::SessionSelection::StartFresh,
}
} else if cli.resume_picker {
let Some(app_server) = session_lookup_app_server.take() else {
unreachable!("session lookup app server should be initialized for --resume picker");
let Some(app_server) = app_server.take() else {
unreachable!("app server should be initialized for --resume picker");
};
match resume_picker::run_resume_picker_with_app_server(
&mut tui,
@@ -1281,7 +1256,6 @@ async fn run_ratatui_app(
} else {
resume_picker::SessionSelection::StartFresh
};
shutdown_app_server_if_present(session_lookup_app_server.take()).await;
let current_cwd = config.cwd.clone();
let allow_prompt = !remote_mode && cli.cwd.is_none();
@@ -1367,28 +1341,32 @@ async fn run_ratatui_app(
let use_alt_screen = determine_alt_screen_mode(no_alt_screen, config.tui_alternate_screen);
tui.set_alt_screen_enabled(use_alt_screen);
let app_server = match start_app_server(
&app_server_target,
arg0_paths,
config.clone(),
cli_kv_overrides.clone(),
loader_overrides,
cloud_requirements.clone(),
feedback.clone(),
)
.await
{
Ok(app_server) => app_server,
Err(err) => {
terminal_restore_guard.restore_silently();
session_log::log_session_end();
return Err(err);
}
let app_server = match app_server {
Some(app_server) => app_server,
None => match start_app_server(
&app_server_target,
arg0_paths,
config.clone(),
cli_kv_overrides.clone(),
loader_overrides,
cloud_requirements.clone(),
feedback.clone(),
)
.await
{
Ok(app_server) => AppServerSession::new(app_server)
.with_remote_cwd_override(remote_cwd_override.clone()),
Err(err) => {
terminal_restore_guard.restore_silently();
session_log::log_session_end();
return Err(err);
}
},
};
let app_result = App::run(
&mut tui,
AppServerSession::new(app_server).with_remote_cwd_override(remote_cwd_override),
app_server,
config,
cli_kv_overrides.clone(),
overrides.clone(),
@@ -1603,7 +1581,10 @@ fn determine_alt_screen_mode(no_alt_screen: bool, tui_alternate_screen: AltScree
AltScreenMode::Never => false,
AltScreenMode::Auto => {
let terminal_info = terminal_info();
!terminal_info.is_zellij()
!matches!(
terminal_info.multiplexer,
Some(codex_terminal_detection::Multiplexer::Zellij {})
)
}
}
}
@@ -1628,12 +1609,8 @@ async fn get_login_status(
let account = app_server.read_account().await?;
Ok(match account.account {
Some(codex_app_server_protocol::Account::ApiKey {}) => {
LoginStatus::AuthMode(AppServerAuthMode::ApiKey)
}
Some(codex_app_server_protocol::Account::Chatgpt { .. }) => {
LoginStatus::AuthMode(AppServerAuthMode::Chatgpt)
}
Some(AppServerAccount::ApiKey {}) => LoginStatus::AuthMode(AppServerAuthMode::ApiKey),
Some(AppServerAccount::Chatgpt { .. }) => LoginStatus::AuthMode(AppServerAuthMode::Chatgpt),
None => LoginStatus::NotAuthenticated,
})
}

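The `run_ratatui_app` changes above collapse the separate onboarding and session-lookup app-servers into one `Option<AppServerSession>` that is started up front, borrowed where needed, and only replaced by a fresh start if it was consumed (e.g. handed to a picker via `take()`). A minimal sketch of that reuse-or-start pattern, with stand-in types:

```rust
// Minimal sketch of the single-bootstrap pattern in this diff: start one
// server up front, reuse it through onboarding and session lookup, and
// fall back to a fresh start only if it was taken. Types are stand-ins,
// not the real AppServerSession API.
struct AppServer {
    id: u32,
}

fn start_app_server(next_id: &mut u32) -> AppServer {
    *next_id += 1;
    AppServer { id: *next_id }
}

fn main() {
    let mut next_id = 0;
    let mut app_server = Some(start_app_server(&mut next_id));

    // Onboarding and session lookup borrow the same instance via as_mut()...
    let borrowed = app_server.as_mut().expect("server exists");
    let first_id = borrowed.id;

    // ...and the main app run reuses it, starting a new one only if the
    // Option was emptied along the way (e.g. by a resume/fork picker).
    let final_server = match app_server.take() {
        Some(server) => server,
        None => start_app_server(&mut next_id),
    };
    assert_eq!(final_server.id, first_id);
    assert_eq!(next_id, 1); // only one server was ever started
}
```

This keeps the expensive bootstrap off the paths that previously started a second server just to check login status or look up a session.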

@@ -447,7 +447,7 @@ impl WidgetRef for Step {
pub(crate) async fn run_onboarding_app(
args: OnboardingScreenArgs,
mut app_server: Option<AppServerSession>,
mut app_server: Option<&mut AppServerSession>,
tui: &mut Tui,
) -> Result<OnboardingResult> {
use tokio_stream::StreamExt;
@@ -533,9 +533,6 @@ pub(crate) async fn run_onboarding_app(
}
}
}
if let Some(app_server) = app_server {
app_server.shutdown().await.ok();
}
Ok(OnboardingResult {
directory_trust_decision: onboarding_screen.directory_trust_decision(),
should_exit: onboarding_screen.should_exit(),

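The onboarding hunk above changes `run_onboarding_app` to accept `Option<&mut AppServerSession>` instead of an owned `Option<AppServerSession>`, and drops the in-function shutdown. A sketch of why that matters, using stand-in types (the real session API is not shown in this hunk):

```rust
// Sketch of the ownership change in run_onboarding_app: the function now
// borrows the session rather than consuming it, so shutdown responsibility
// moves to the caller, which can keep using the same server afterwards.
// AppServerSession here is a hypothetical stand-in.
struct AppServerSession {
    alive: bool,
}

impl AppServerSession {
    fn shutdown(&mut self) {
        self.alive = false;
    }
}

fn run_onboarding(_app_server: Option<&mut AppServerSession>) {
    // Previously the owned session was shut down at the end of this
    // function; with a borrow, the session outlives onboarding.
}

fn main() {
    let mut session = AppServerSession { alive: true };
    run_onboarding(Some(&mut session));
    // The caller still owns the session and can reuse it for the main app.
    assert!(session.alive);
    session.shutdown();
    assert!(!session.alive);
}
```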

@@ -9,9 +9,7 @@ const ANNOUNCEMENT_TIP_URL: &str =
const IS_MACOS: bool = cfg!(target_os = "macos");
const IS_WINDOWS: bool = cfg!(target_os = "windows");
const PAID_TOOLTIP: &str = "*New* Try the **Codex App** with 2x rate limits until *April 2nd*. Run 'codex app' or visit https://chatgpt.com/codex?app-landing-page=true";
const PAID_TOOLTIP_WINDOWS: &str = "*New* Try the **Codex App**, now available on **Windows**, with 2x rate limits until *April 2nd*. Run 'codex app' or visit https://chatgpt.com/codex?app-landing-page=true";
const PAID_TOOLTIP_NON_MAC: &str = "*New* 2x rate limits until *April 2nd*.";
const APP_TOOLTIP: &str = "Try the **Codex App**. Run 'codex app' or visit https://chatgpt.com/codex?app-landing-page=true";
const FAST_TOOLTIP: &str = "*New* Use **/fast** to enable our fastest inference at 2X plan usage.";
const OTHER_TOOLTIP: &str = "*New* Build faster with the **Codex App**. Run 'codex app' or visit https://chatgpt.com/codex?app-landing-page=true";
const OTHER_TOOLTIP_NON_MAC: &str = "*New* Build faster with Codex.";
@@ -67,7 +65,9 @@ pub(crate) fn get_tooltip(plan: Option<PlanType>, fast_mode_enabled: bool) -> Op
) || plan_type.is_team_like()
|| plan_type.is_business_like() =>
{
return Some(pick_paid_tooltip(&mut rng, fast_mode_enabled).to_string());
if let Some(tooltip) = pick_paid_tooltip(&mut rng, fast_mode_enabled) {
return Some(tooltip.to_string());
}
}
Some(PlanType::Go) | Some(PlanType::Free) => {
return Some(FREE_GO_TOOLTIP.to_string());
@@ -86,13 +86,11 @@ pub(crate) fn get_tooltip(plan: Option<PlanType>, fast_mode_enabled: bool) -> Op
pick_tooltip(&mut rng).map(str::to_string)
}
fn paid_app_tooltip() -> &'static str {
if IS_MACOS {
PAID_TOOLTIP
} else if IS_WINDOWS {
PAID_TOOLTIP_WINDOWS
fn paid_app_tooltip() -> Option<&'static str> {
if IS_MACOS || IS_WINDOWS {
Some(APP_TOOLTIP)
} else {
PAID_TOOLTIP_NON_MAC
None
}
}
@@ -100,11 +98,14 @@ fn paid_app_tooltip() -> &'static str {
/// generic random tip pool. Keep this business logic explicit: we currently split
/// that slot between the app promo and Fast mode, but suppress the Fast promo once
/// the user already has Fast mode enabled.
fn pick_paid_tooltip<R: Rng + ?Sized>(rng: &mut R, fast_mode_enabled: bool) -> &'static str {
fn pick_paid_tooltip<R: Rng + ?Sized>(
rng: &mut R,
fast_mode_enabled: bool,
) -> Option<&'static str> {
if fast_mode_enabled || rng.random_bool(0.5) {
paid_app_tooltip()
} else {
FAST_TOOLTIP
Some(FAST_TOOLTIP)
}
}
@@ -296,7 +297,7 @@ mod tests {
));
}
let expected = std::collections::BTreeSet::from([paid_app_tooltip(), FAST_TOOLTIP]);
let expected = std::collections::BTreeSet::from([paid_app_tooltip(), Some(FAST_TOOLTIP)]);
assert_eq!(seen, expected);
}
@@ -310,7 +311,7 @@ mod tests {
let expected = std::collections::BTreeSet::from([paid_app_tooltip()]);
assert_eq!(seen, expected);
assert!(!seen.contains(&FAST_TOOLTIP));
assert!(!seen.contains(&Some(FAST_TOOLTIP)));
}
#[test]

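The tooltip hunk above makes `paid_app_tooltip` (and thus `pick_paid_tooltip`) `Option`-valued: with the April 2nd promo copy removed, non-macOS/Windows platforms have no app tooltip, and callers fall through to the generic tip pool instead. A self-contained sketch of that control flow, with abbreviated constants and the RNG replaced by an explicit flag:

```rust
// Sketch of the tooltip selection after this diff. Constants are
// abbreviated, and the 50/50 rng.random_bool(0.5) roll is modeled as an
// explicit coin_flip_app parameter for determinism.
const APP_TOOLTIP: &str = "Try the Codex App.";
const FAST_TOOLTIP: &str = "Use /fast for our fastest inference.";

fn paid_app_tooltip(is_macos: bool, is_windows: bool) -> Option<&'static str> {
    // The app promo only applies where the Codex App exists.
    if is_macos || is_windows {
        Some(APP_TOOLTIP)
    } else {
        None
    }
}

fn pick_paid_tooltip(
    coin_flip_app: bool,
    fast_mode_enabled: bool,
    is_macos: bool,
    is_windows: bool,
) -> Option<&'static str> {
    // Suppress the Fast promo once the user already has Fast mode enabled.
    if fast_mode_enabled || coin_flip_app {
        paid_app_tooltip(is_macos, is_windows)
    } else {
        Some(FAST_TOOLTIP)
    }
}

fn main() {
    // On Linux with Fast mode enabled, no paid tooltip is shown at all,
    // so the caller falls through to the generic tip pool.
    assert_eq!(pick_paid_tooltip(true, true, false, false), None);
    // On macOS the app promo can still appear.
    assert_eq!(pick_paid_tooltip(true, false, true, false), Some(APP_TOOLTIP));
    // The Fast tip remains available when Fast mode is off.
    assert_eq!(pick_paid_tooltip(false, false, false, false), Some(FAST_TOOLTIP));
}
```

Returning `None` rather than a stale platform-specific string is what lets the `get_tooltip` match arm in the diff fall through cleanly instead of advertising an expired promotion.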

@@ -270,7 +270,10 @@ impl Tui {
// Cache this to avoid contention with the event reader.
supports_color::on_cached(supports_color::Stream::Stdout);
let _ = crate::terminal_palette::default_colors();
let is_zellij = codex_terminal_detection::terminal_info().is_zellij();
let is_zellij = matches!(
codex_terminal_detection::terminal_info().multiplexer,
Some(codex_terminal_detection::Multiplexer::Zellij {})
);
Self {
frame_requester,


@@ -20,7 +20,7 @@ pub const MAX_WIDTH: u32 = 2048;
/// Maximum height used when resizing images before uploading.
pub const MAX_HEIGHT: u32 = 768;
pub(crate) mod error;
pub mod error;
pub use crate::error::ImageProcessingError;