Compare commits

...

36 Commits

Author SHA1 Message Date
Michael Bolin
9db53b33aa fix: support arm64 build for Linux (#1225)
Users were running into issues with glibc mismatches on arm64 linux. In
the past, we did not provide a musl build for arm64 Linux because we had
trouble getting the openssl dependency to build correctly. Though today
I just tried the same trick in `Cargo.toml` that we were doing for
`x86_64-unknown-linux-musl` (using `openssl-sys` with `features =
["vendored"]`), so I'm not sure what problem we had in the past the
builds "just worked" today!

One tweak that did have to be made: the integration tests for
Seccomp/Landlock empirically require longer timeouts on arm64 Linux, or
at least on the `ubuntu-24.04-arm` GitHub Runner. As such, we change the
timeouts for arm64 in `codex-rs/linux-sandbox/tests/landlock.rs`.

While solving this problem, I decided I needed a turnkey solution
for testing the Linux build(s) from my Mac laptop, so this PR introduces
`.devcontainer/Dockerfile` and `.devcontainer/devcontainer.json` to
facilitate this. Detailed instructions are in `.devcontainer/README.md`.

We will update `dotslash-config.json` and other release-related scripts
in a follow-up PR.
2025-06-05 20:29:46 -07:00
Michael Bolin
515b6331bd feat: add support for login with ChatGPT (#1212)
This does not implement the full Login with ChatGPT experience, but it
should unblock people.

**What works**

* The `codex` multitool now has a `login` subcommand, so you can run
`codex login`, which should write `CODEX_HOME/auth.json` if you complete
the flow successfully. The TUI will now read the `OPENAI_API_KEY` from
`auth.json`.
* The TUI should refresh the token if it has expired and the necessary
information is in `auth.json`.
* There is a `LoginScreen` in the TUI that tells you to run `codex
login` if both (1) your model provider expects to use `OPENAI_API_KEY`
as its env var, and (2) `OPENAI_API_KEY` is not set.

**What does not work**

* The `LoginScreen` does not support the login flow from within the TUI.
Instead, it tells you to quit, run `codex login`, and then run `codex`
again.
* `codex exec` does not read from `auth.json` yet, nor does it direct the
user to go through the login flow if `OPENAI_API_KEY` is not found.
* The `maybeRedeemCredits()` function from `get-api-key.tsx` has not
been ported from TypeScript to `login_with_chatgpt.py` yet:


a67a67f325/codex-cli/src/utils/get-api-key.tsx (L84-L89)

**Implementation**

Currently, the OAuth flow requires running a local webserver on
`127.0.0.1:1455`. It seemed wasteful to incur the additional binary cost
of a webserver dependency in the Rust CLI just to support login, so
instead we implement this logic in Python, as Python has a `http.server`
module as part of its standard library. Specifically, we bundle the
contents of a single Python file as a string in the Rust CLI and then
use it to spawn a subprocess as `python3 -c
{{SOURCE_FOR_PYTHON_SERVER}}`.
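For reference, a minimal sketch of that embedding (assumed for
illustration, not the exact code in `codex-rs/login/src/lib.rs`):

```rust
use std::path::Path;
use std::process::{Child, Command};

// Compile the Python source into the binary as a string and hand it to
// `python3 -c`. The constant and function names here are illustrative.
const SOURCE_FOR_PYTHON_SERVER: &str = include_str!("login_with_chatgpt.py");

fn spawn_login_flow(codex_home: &Path) -> std::io::Result<Child> {
    Command::new("python3")
        .arg("-c")
        .arg(SOURCE_FOR_PYTHON_SERVER)
        .env("CODEX_HOME", codex_home)
        .spawn()
}
```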

As such, the most significant files in this PR are:

```
codex-rs/login/src/login_with_chatgpt.py
codex-rs/login/src/lib.rs
```

Now that the CLI may load `OPENAI_API_KEY` from the environment _or_
`CODEX_HOME/auth.json`, we need a new abstraction for reading/writing
this variable, so we introduce:

```
codex-rs/core/src/openai_api_key.rs
```

Note that `std::env::set_var()` is [rightfully] `unsafe` in Rust 2024,
so we store `OPENAI_API_KEY` in a `LazyLock<RwLock<Option<String>>>`,
which lets it be read and updated in a thread-safe manner.
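A minimal sketch of that pattern (the function names are illustrative,
not necessarily those in `openai_api_key.rs`):

```rust
use std::sync::{LazyLock, RwLock};

// Seed the lock from the environment once, then route all reads/writes
// through the lock instead of calling the unsafe `std::env::set_var()`.
static OPENAI_API_KEY: LazyLock<RwLock<Option<String>>> =
    LazyLock::new(|| RwLock::new(std::env::var("OPENAI_API_KEY").ok()));

pub fn get_openai_api_key() -> Option<String> {
    OPENAI_API_KEY.read().unwrap().clone()
}

pub fn set_openai_api_key(value: String) {
    *OPENAI_API_KEY.write().unwrap() = Some(value);
}
```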

Ultimately, it should be possible to go through the entire login flow
from the TUI. This PR introduces a placeholder `LoginScreen` UI for that
right now, though the new `codex login` subcommand introduced in this PR
should be a viable workaround until the UI is ready.

**Testing**

Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:

```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```

For reference:

* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
2025-06-04 08:44:17 -07:00
Reilly Wood
a67a67f325 codex-rs: make tool calls prettier (#1211)
This PR overhauls how active tool calls and completed tool calls are
displayed:

1. More use of colour to indicate success/failure and distinguish
between components like tool name+arguments
2. Previously, the entire `CallToolResult` was serialized to JSON and
pretty-printed. Now, we extract each individual `CallToolResultContent`
and print those
   1. The previous solution was wasting space by unnecessarily showing
details of the `CallToolResult` struct to users, without formatting the
actual tool call results nicely
   2. We're now able to show users more information from tool results in
less space, with nicer formatting when tools return JSON results

### Before:

<img width="1251" alt="Screenshot 2025-06-03 at 11 24 26"
src="https://github.com/user-attachments/assets/5a58f222-219c-4c53-ace7-d887194e30cf"
/>

### After:

<img width="1265" alt="image"
src="https://github.com/user-attachments/assets/99fe54d0-9ebe-406a-855b-7aa529b91274"
/>

## Future Work

1. Integrate image tool result handling better. We should be able to
display images even if they're not the first `CallToolResultContent`
2. Users should have some way to view the full version of truncated tool
results
3. It would be nice to add some left padding for tool results, to make
it clearer that they are results. This is doable, just a little fiddly
due to the way `first_visible_line` scrolling works
4. There's almost certainly a better way to format JSON than "all on 1
line with spaces to make Ratatui wrapping work". But I think that works
OK for now.
2025-06-03 14:29:26 -07:00
Michael Bolin
c6fcec55fe fix: always send full instructions when using the Responses API (#1207)
This fixes a longstanding error in the Rust CLI where `codex.rs`
contained an errant `is_first_turn` check that would exclude the user
instructions for subsequent "turns" of a conversation when using the
Responses API (i.e., when `previous_response_id` existed).

While here, renames `Prompt.instructions` to `Prompt.user_instructions`
since we now have quite a few levels of instructions floating around.
Also removed an unnecessary use of `clone()` in
`Prompt.get_full_instructions()`.
2025-06-03 09:40:19 -07:00
Michael Bolin
6fcc528a43 fix: provide tolerance for apply_patch tool (#993)
As explained in detail in the doc comment for `ParseMode::Lenient`, we
have observed that GPT-4.1 does not always generate a valid invocation
of `apply_patch`. Fortunately, the error is predictable, so we introduce
some new logic to the `codex-apply-patch` crate to recover from this
error.

Because we would like to avoid this becoming a de facto standard (it
would be incompatible if `apply_patch` were provided as an actual
executable, unless we also introduced the lenient behavior in the
executable), we require passing `ParseMode::Lenient` to
`parse_patch_text()` to make it clear that the caller is opting into
supporting this special case.
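A sketch of what the opt-in looks like at a call site; the exact
signature of `parse_patch_text()` is assumed here rather than quoted
from the crate:

```rust
// Hypothetical call site; the real signature in `codex-apply-patch` may
// differ. The point is that lenient parsing is an explicit opt-in.
use codex_apply_patch::{parse_patch_text, ParseMode};

fn parse_model_patch(patch_text: &str) -> Result<(), Box<dyn std::error::Error>> {
    // Strict parsing remains the default posture; `ParseMode::Lenient`
    // recovers from the predictable malformed invocations GPT-4.1 emits.
    let patch = parse_patch_text(patch_text, ParseMode::Lenient)?;
    let _ = patch; // ... apply the parsed patch ...
    Ok(())
}
```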

Note the analogous change to the TypeScript CLI was
https://github.com/openai/codex/pull/930. In addition to changing the
accepted input to `apply_patch`, it also introduced additional
instructions for the model, which we include in this PR.

Note that `apply-patch` does not depend on either `regex` or
`regex-lite`, so some of the checks are slightly more verbose to avoid
introducing this dependency.

That said, this PR does not leverage the existing
`extract_heredoc_body_from_apply_patch_command()`, which depends on
`tree-sitter` and `tree-sitter-bash`:


5a5aa89914/codex-rs/apply-patch/src/lib.rs (L191-L246)

though perhaps it should.
2025-06-03 09:06:38 -07:00
Michael Bolin
5a5aa89914 chore: replace regex with regex-lite, where appropriate (#1200)
As explained on https://crates.io/crates/regex-lite, `regex-lite` is a
lighter alternative to `regex` and seems to be sufficient for our
purposes.
2025-06-02 17:11:45 -07:00
Michael Bolin
0f3cc8f842 feat: make reasoning effort/summaries configurable (#1199)
Prior to this PR, we always set `reasoning` when making a request
using the Responses API:


d7245cbbc9/codex-rs/core/src/client.rs (L108-L111)

Though if you tried to use the Rust CLI with `--model gpt-4.1`, this
would fail with:

```shell
"Unsupported parameter: 'reasoning.effort' is not supported with this model."
```

We take a cue from the TypeScript CLI, which does a check on the model
name:


d7245cbbc9/codex-cli/src/utils/agent/agent-loop.ts (L786-L789)

This PR does a similar check, though also adds support for the following
config options:

```
model_reasoning_effort = "low" | "medium" | "high" | "none"
model_reasoning_summary = "auto" | "concise" | "detailed" | "none"
```

This way, if you have a model whose name happens to start with `"o"` (or
`"codex"`?), you can set these to `"none"` to explicitly disable
reasoning, if necessary. (That said, it seems unlikely anyone would use
the Responses API with non-OpenAI models, but we provide an escape
hatch, anyway.)
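A sketch of the resulting gate (the helper name is illustrative):

```rust
// Only attach `reasoning` to a Responses API request when the model is
// expected to support it and the user has not opted out via config.
fn reasoning_enabled(model: &str, model_reasoning_effort: &str) -> bool {
    if model_reasoning_effort == "none" {
        return false; // explicit escape hatch
    }
    model.starts_with('o') || model.starts_with("codex")
}
```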

This PR also updates both the TUI and `codex exec` to show `reasoning
effort` and `reasoning summaries` in the header.
2025-06-02 16:01:34 -07:00
Michael Bolin
d7245cbbc9 fix: chat completions API now also passes tools along (#1167)
Prior to this PR, there were two big misses in `chat_completions.rs`:

1. The loop in `stream_chat_completions()` was only including items of
type `ResponseItem::Message` when building up the `"messages"` JSON for
the `POST` request to the `chat/completions` endpoint. This fixes things
by ensuring other variants (`FunctionCall`, `LocalShellCall`, and
`FunctionCallOutput`) are included, as well.
2. In `process_chat_sse()`, we were not recording tool calls and were
only emitting items of type
`ResponseEvent::OutputItemDone(ResponseItem::Message)` to the stream.
Now we introduce `FunctionCallState`, which is used to accumulate the
`delta`s of type `tool_calls` so we can ultimately emit a
`ResponseItem::FunctionCall` when appropriate (sketched below).
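A sketch of the accumulation idea (field and method names are
illustrative, not the exact ones in `process_chat_sse()`):

```rust
// Each `tool_calls` delta carries fragments of a function call that are
// stitched together until the call is complete.
#[derive(Default)]
struct FunctionCallState {
    name: Option<String>,
    call_id: Option<String>,
    arguments: String,
}

impl FunctionCallState {
    fn apply_delta(&mut self, name: Option<&str>, call_id: Option<&str>, args_fragment: &str) {
        if let Some(name) = name {
            self.name.get_or_insert_with(|| name.to_string());
        }
        if let Some(call_id) = call_id {
            self.call_id.get_or_insert_with(|| call_id.to_string());
        }
        // The arguments JSON arrives in pieces across SSE events.
        self.arguments.push_str(args_fragment);
    }
}
```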

While function calling now appears to work for chat completions in my
local testing, I believe that there are still edge cases that are not
covered and that this codepath would benefit from a battery of
integration tests. (As part of that further cleanup, we should also work
to support streaming responses in the UI.)

The other important part of this PR is some cleanup in
`core/src/codex.rs`. In particular, it was hard to reason about how
`run_task()` was building up the list of messages to include in a
request across the various cases:

- Responses API
- Chat Completions API
- Responses API used in concert with ZDR

I like to think things are a bit cleaner now (see the sketch after this
list), where:

- `zdr_transcript` (if present) contains all messages in the history of
the conversation, which includes function call outputs that have not
been sent back to the model yet
- `pending_input` includes any messages the user has submitted while the
turn is in flight that need to be injected as part of the next `POST` to
the model
- `input_for_next_turn` includes the tool call outputs that have not
been sent back to the model yet
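Illustratively, the three buckets look something like this (the grouping
and stand-in type are mine; in the actual code these are separate values
inside `run_task()`):

```rust
// Stand-in for the crate's real response item type, just to keep this
// sketch self-contained.
struct ResponseItem;

struct TurnState {
    /// Full conversation history (present only under ZDR), including
    /// function call outputs that have not been sent to the model yet.
    zdr_transcript: Option<Vec<ResponseItem>>,
    /// Messages the user submitted while the turn was in flight, to be
    /// injected into the next `POST` to the model.
    pending_input: Vec<ResponseItem>,
    /// Tool call outputs that have not been sent back to the model yet.
    input_for_next_turn: Vec<ResponseItem>,
}
```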
2025-06-02 13:47:51 -07:00
Michael Bolin
e40f86b446 chore: logging cleanup (#1196)
Update what we log to make `RUST_LOG=debug` a bit easier to work with.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1196).
* #1167
* __->__ #1196
2025-06-02 13:31:33 -07:00
Michael Bolin
7896b1089d chore: update the WORKFLOW_URL in install_native_deps.sh to the latest release (#1190) 2025-05-31 10:30:50 -07:00
Michael Bolin
1410ae95ca fix: set --config hide_agent_reasoning=true in the GitHub Action (#1185)
Whoops, I had this flipped in https://github.com/openai/codex/pull/1183.
2025-05-30 23:57:05 -07:00
Michael Bolin
fccf5f3221 fix: disable agent reasoning output by default in the GitHub Action (#1183) 2025-05-30 23:49:48 -07:00
Michael Bolin
1159eaf04f feat: show the version when starting Codex (#1182)
The TypeScript version of the CLI shows the version when it starts up,
which is helpful when users share screenshots (and nice to know, as a
user).
2025-05-30 23:24:36 -07:00
Michael Bolin
e81327e5f4 feat: add hide_agent_reasoning config option (#1181)
This PR introduces a `hide_agent_reasoning` config option (that defaults
to `false`) that users can enable to make the output less verbose by
suppressing reasoning output.

To test, verified that this includes agent reasoning in the output:

```
echo hello | just exec
```

whereas this does not:

```
echo hello | just exec --config hide_agent_reasoning=true
```
2025-05-30 23:14:56 -07:00
Michael Bolin
4f3d294762 feat: dim the timestamp in the exec output (#1180)
This required changing `ts_println!()` to take `$self:ident`, which is a
bit more verbose, but the usability improvement seems worth it.

Also eliminated an unnecessary `.to_string()` while here.
2025-05-30 16:27:37 -07:00
Michael Bolin
cf1d070538 feat: grab-bag of improvements to exec output (#1179)
Fixes:

* Instantiate `EventProcessor` earlier in `lib.rs` so
`print_config_summary()` can be an instance method of it and leverage
its various `Style` fields to ensure it honors `with_ansi` properly.
* After printing the config summary, print out the user's prompt with the
heading `User instructions:`. As noted in the comment, now that we can
read the instructions via stdin as of #1178, it is helpful to the user
to ensure they know what instructions were given to Codex.
* Use same colors/bold/italic settings for headers as the TUI, making
the output a bit easier to read.
2025-05-30 16:22:10 -07:00
Michael Bolin
ae743d56b0 feat: for codex exec, if PROMPT is not specified, read from stdin if not a TTY (#1178)
This attempts to make `codex exec` more flexible in how the prompt can
be passed:

* as before, it can be passed as a single string argument
* if `-` is passed as the value, the prompt is read from stdin
* if no argument is passed _and stdin is a TTY_, prints a warning to
stderr that no prompt was specified and exits non-zero.
* if no argument is passed _and stdin is NOT a TTY_, prints `Reading
prompt from stdin...` to stderr to let the user know that Codex will
wait until it reads EOF from stdin before proceeding. (You can repro this
case with `yes | just exec`: stdin is not a TTY there, but it also never
reaches EOF.) A sketch of this dispatch logic follows.
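A minimal sketch of that dispatch using `std::io::IsTerminal` (an
illustration of the behavior above, not the actual `codex exec` code):

```rust
use std::io::{self, IsTerminal, Read};

fn resolve_prompt(arg: Option<String>) -> io::Result<String> {
    match arg.as_deref() {
        Some("-") => read_stdin(),              // explicit stdin
        Some(prompt) => Ok(prompt.to_string()), // plain argument
        None if io::stdin().is_terminal() => {
            eprintln!("No prompt specified.");
            std::process::exit(1);
        }
        None => {
            eprintln!("Reading prompt from stdin...");
            read_stdin()
        }
    }
}

fn read_stdin() -> io::Result<String> {
    let mut buf = String::new();
    io::stdin().read_to_string(&mut buf)?;
    Ok(buf)
}
```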
2025-05-30 14:41:55 -07:00
Michael Bolin
1bf82056b3 fix: introduce create_tools_json() and share it with chat_completions.rs (#1177)
The main motivator behind this PR is that `stream_chat_completions()`
was not adding the `"tools"` entry to the payload posted to the
`/chat/completions` endpoint. This (1) refactors the existing logic to
build up the `"tools"` JSON from `client.rs` into `openai_tools.rs`, and
(2) updates the use of responses API (`client.rs`) and chat completions
API (`chat_completions.rs`) to both use it.

Note this PR alone is not sufficient to get tool calling from chat
completions working: that is done in
https://github.com/openai/codex/pull/1167.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1177).
* #1167
* __->__ #1177
2025-05-30 14:07:03 -07:00
Michael Bolin
e207f20f64 fix: add extra debugging to GitHub Action (#1173)
https://github.com/openai/codex/actions/runs/15352839832/job/43205041563
appeared to fail around `postComment()`, but I don't see the output from
`fail()` in the logs. Adding a bit more info.
2025-05-30 11:16:30 -07:00
Michael Bolin
0f40ef5a10 fix: missed a step in #1171 for codex.yml (#1172)
Missed in my copy/paste.
2025-05-30 11:04:41 -07:00
Michael Bolin
8676185389 fix: update outdated repo setup in codex.yml (#1171)
We should do some work to share the setup logic across `codex.yml`,
`ci.yml`, and `rust-ci.yml`.
2025-05-30 10:58:57 -07:00
Michael Bolin
baa92f37e0 feat: initial import of experimental GitHub Action (#1170)
This is a first cut at a GitHub Action that lets you define prompt
templates in `.md` files under `.github/codex/labels` that will run
Codex with the associated prompt when the label is added to a GitHub
pull request.

For example, this PR includes these files:

```
.github/codex/labels/codex-attempt.md
.github/codex/labels/codex-code-review.md
.github/codex/labels/codex-investigate-issue.md
```

And the new `.github/workflows/codex.yml` workflow declares the
following triggers:

```yaml
on:
  issues:
    types: [opened, labeled]
  pull_request:
    branches: [main]
    types: [labeled]
```

as well as the following expression to gate the action:

```
jobs:
  codex:
    if: |
      (github.event_name == 'issues' && (
        (github.event.action == 'labeled' && (github.event.label.name == 'codex-attempt' || github.event.label.name == 'codex-investigate-issue'))
      )) ||
      (github.event_name == 'pull_request' && github.event.action == 'labeled' && github.event.label.name == 'codex-code-review')
```

Note the "actor" who added the label must have write access to the repo
for the action to take effect.

After adding a label, the action will "ack" the request by replacing the
original label (e.g., `codex-review`) with an `-in-progress` suffix
(e.g., `codex-review-in-progress`). When it is finished, it will swap
the `-in-progress` label with a `-completed` one (e.g.,
`codex-review-completed`).

Users of the action are responsible for providing an `OPENAI_API_KEY`
and making it available as a secret to the action.
2025-05-30 10:55:28 -07:00
Michael Bolin
a0239c3cd6 fix: enable set positional-arguments in justfile (#1169)
The way these definitions worked before, they did not handle quoted args
with spaces properly.

For example, if you had `/tmp/test-just/printlen.py` as:

```python
#!/usr/bin/env python3

import sys

print(len(sys.argv))
```

and your `justfile` was:

```
printlen *args:
    /tmp/test-just/printlen.py {{args}}
```

Then:

```shell
$ just printlen foo bar
3
$ just printlen 'foo bar'
3
```

which is not what we want: `'foo bar'` should be treated as one
argument.

The fix is to use
[positional-arguments](515e806b51/README.md (L1131)):

```
set positional-arguments

printlen *args:
    /tmp/test-just/printlen.py "$@"
```
2025-05-30 09:11:53 -07:00
Michael Bolin
bdfa95ed31 docs: split the config-related portion of codex-rs/README.md into its own config.md file (#1165)
Also updated the overview on `codex-rs/README.md` while here.
2025-05-29 16:59:35 -07:00
Fouad Matin
828e2062c2 fix(codex-rs): use codex-mini-latest as default (#1164) 2025-05-29 16:55:19 -07:00
Michael Bolin
92957c47fb fix: update justfile to facilitate running CLIs from source and formatting source code (#1163) 2025-05-29 15:35:14 -07:00
Michael Bolin
8c1902b562 chore: update GitHub workflow for native artifacts for npm release (#1162)
Among other things, this picks up this UI treatment fix:

https://github.com/openai/codex/pull/1161
2025-05-29 15:34:06 -07:00
Michael Bolin
a32d305ae6 fix: update UI treatment of slash command menu to match that of the TS CLI (#1161)
Uses the same colors as in the TypeScript CLI:


![image](https://github.com/user-attachments/assets/919cd472-ffb4-4654-a46a-d84f0cd9c097)

Now it is also readable on a light theme, e.g., in Ghostty:


![image](https://github.com/user-attachments/assets/468c37b0-ea63-4455-9b48-73dc2c95f0f6)
2025-05-29 14:57:55 -07:00
Michael Bolin
a768a6a41d fix: introduce ResponseInputItem::McpToolCallOutput variant (#1151)
The output of an MCP server tool call can be one of several types, but
to date, we treated all outputs as text by showing the serialized JSON
as the "tool output" in Codex:


25a9949c49/codex-rs/mcp-types/src/lib.rs (L96-L101)

This PR adds support for the `ImageContent` variant so we can now
display an image output from an MCP tool call.

In making this change, we introduce a new
`ResponseInputItem::McpToolCallOutput` variant so that we can work with
the `mcp_types::CallToolResult` directly when the function call is made
to an MCP server.

Though arguably the more significant change is the introduction of
`HistoryCell::CompletedMcpToolCallWithImageOutput`, which is a cell that
uses `ratatui_image` to render an image into the terminal. To support
this, we introduce `ImageRenderCache`, cache a
`ratatui_image::picker::Picker`, and add `ensure_image_cache()` to cache the
appropriate scaled image data and dimensions based on the current
terminal size.

To test, I created a minimal `package.json`:

```json
{
  "name": "kitty-mcp",
  "version": "1.0.0",
  "type": "module",
  "description": "MCP that returns image of kitty",
  "main": "index.js",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.12.0"
  }
}
```

with the following `index.js` to define the MCP server:

```js
#!/usr/bin/env node

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { join } from "node:path";

const IMAGE_URI = "image://Ada.png";

const server = new McpServer({
  name: "Demo",
  version: "1.0.0",
});

server.tool(
  "get-cat-image",
  "If you need a cat image, this tool will provide one.",
  async () => ({
    content: [
      { type: "image", data: await getAdaPngBase64(), mimeType: "image/png" },
    ],
  })
);

server.resource("Ada the Cat", IMAGE_URI, async (uri) => {
  const base64Image = await getAdaPngBase64();
  return {
    contents: [
      {
        uri: uri.href,
        mimeType: "image/png",
        blob: base64Image,
      },
    ],
  };
});

async function getAdaPngBase64() {
  const __dirname = new URL(".", import.meta.url).pathname;
  // From 9705ce2c59/assets/Ada.png
  const filePath = join(__dirname, "Ada.png");
  const imageData = await readFile(filePath);
  const base64Image = imageData.toString("base64");
  return base64Image;
}

const transport = new StdioServerTransport();
await server.connect(transport);
```

With the local changes from this PR, I added the following to my
`config.toml`:

```toml
[mcp_servers.kitty]
command = "node"
args = ["/Users/mbolin/code/kitty-mcp/index.js"]
```

Running the TUI from source:

```
cargo run --bin codex -- --model o3 'I need a picture of a cat'
```

I get:

<img width="732" alt="image"
src="https://github.com/user-attachments/assets/bf80b721-9ca0-4d81-aec7-77d6899e2869"
/>

Now, that said, I have only tested in iTerm and there is definitely some
funny business with getting an accurate character-to-pixel ratio
(sometimes the `CompletedMcpToolCallWithImageOutput` thinks it needs 10
rows to render instead of 4), so there is still work to be done here.
2025-05-28 19:03:17 -07:00
Michael Bolin
25a9949c49 fix: ensure inputSchema for MCP tool always has "properties" field when talking to OpenAI (#1150)
As noted in the comment introduced in this PR, this is analogous to the
issue reported in
https://github.com/openai/openai-agents-python/issues/449. This seems to
work now.
2025-05-28 17:17:21 -07:00
Michael Bolin
392fdd7db6 fix: honor RUST_LOG in mcp-client CLI and default to DEBUG (#1149)
We had `debug!()` logging statements already, but they weren't being
printed because `tracing_subscriber` was not set up.
2025-05-28 17:10:06 -07:00
Michael Bolin
ae1a83f095 feat: introduce CellWidget trait (#1148)
The motivation behind this PR is to make it so a `HistoryCell` is more
like a `WidgetRef` that knows how to render itself into a `Rect` so that
it can be backed by something other than a `Vec<Line>`. Because a
`HistoryCell` is intended to appear in a scrollable list, we want to
ensure the stack of cells can be scrolled one `Line` at a time even if
the `HistoryCell` is not backed by a `Vec<Line>` itself.

To this end, we introduce the `CellWidget` trait whose key method is:

```
fn render_window(&self, first_visible_line: usize, area: Rect, buf: &mut Buffer);
```

The `first_visible_line` param is what differs from
`WidgetRef::render_ref()`, as a `CellWidget` needs to know the offset
into its "full view" at which it should start rendering.

The bookkeeping in `ConversationHistoryWidget` has been updated
accordingly to ensure each `CellWidget` in the history is rendered
appropriately.
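For illustration, a minimal implementation of the trait for a cell that
*is* backed by a `Vec<Line>` (the actual cells may differ):

```rust
use ratatui::{buffer::Buffer, layout::Rect, text::Line};

trait CellWidget {
    fn render_window(&self, first_visible_line: usize, area: Rect, buf: &mut Buffer);
}

struct TextCell {
    lines: Vec<Line<'static>>,
}

impl CellWidget for TextCell {
    fn render_window(&self, first_visible_line: usize, area: Rect, buf: &mut Buffer) {
        // Render at most `area.height` lines, starting at the scroll offset.
        for (row, line) in self
            .lines
            .iter()
            .skip(first_visible_line)
            .take(area.height as usize)
            .enumerate()
        {
            buf.set_line(area.x, area.y + row as u16, line, area.width);
        }
    }
}
```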
2025-05-28 14:03:19 -07:00
Michael Bolin
d60f350cf8 feat: add support for -c/--config to override individual config items (#1137)
This PR introduces support for `-c`/`--config` so users can override
individual config values on the command line using `--config
name=value`. Example:

```
codex --config model=o4-mini
```

Making it possible to set arbitrary config values on the command line
results in a more flexible configuration scheme and makes it easier to
provide single-line examples that can be copy-pasted from documentation.

Effectively, it means there are four levels of configuration for some
values:

- Default value (e.g., `model` currently defaults to `o4-mini`)
- Value in `config.toml` (e.g., user could override the default to be
`model = "o3"` in their `config.toml`)
- Specifying `-c` or `--config` to override `model` (e.g., user can
include `-c model=o3` in their list of args to Codex)
- If available, a config-specific flag can be used, which takes
precedence over `-c` (e.g., user can specify `--model o3` in their list
of args to Codex)

Now that it is possible to specify anything that could be configured in
`config.toml` on the command line using `-c`, we do not need to have a
custom flag for every possible config option (which can clutter the
output of `--help`). To that end, as part of this PR, we drop support
for the `--disable-response-storage` flag, as users can now specify `-c
disable_response_storage=true` to get the equivalent functionality.

Under the hood, this works by loading the `config.toml` into a
`toml::Value`. Then for each `key=value`, we create a small synthetic
TOML file with `value` so that we can run the TOML parser to get the
equivalent `toml::Value`. We then parse `key` to determine the point in
the original `toml::Value` to do the insert/replace. Once all of the
overrides from `-c` args have been applied, the `toml::Value` is
deserialized into a `ConfigToml` and then the `ConfigOverrides` are
applied, as before.
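A sketch of that mechanism (the helper name and the bare-string fallback
are assumptions, not the actual code):

```rust
use toml::Value;

fn apply_override(root: &mut Value, key: &str, raw_value: &str) {
    // Parse the value via a tiny synthetic TOML document so that booleans,
    // numbers, and quoted strings all get their natural TOML types.
    let parsed = match format!("x = {raw_value}").parse::<Value>() {
        Ok(doc) => doc["x"].clone(),
        // Bare strings like `model=o3` are not valid TOML values, so fall
        // back to treating the raw text as a string.
        Err(_) => Value::String(raw_value.to_string()),
    };

    // Walk the dotted segments of `key`, creating intermediate tables as
    // needed, then insert or replace the final entry.
    let segments: Vec<&str> = key.split('.').collect();
    let mut node = root;
    for segment in &segments[..segments.len() - 1] {
        node = node
            .as_table_mut()
            .expect("intermediate config node should be a table")
            .entry(segment.to_string())
            .or_insert(Value::Table(Default::default()));
    }
    node.as_table_mut()
        .expect("config root should be a table")
        .insert(segments.last().unwrap().to_string(), parsed);
}
```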
2025-05-27 23:11:44 -07:00
Michael Bolin
eba0e32909 fix: update install_native_deps.sh to pick up the latest release (#1136) 2025-05-27 10:06:41 -07:00
Michael Bolin
29d154cb13 fix: use o4-mini as the default model (#1135)
Rollback of https://github.com/openai/codex/pull/972.
2025-05-27 09:12:55 -07:00
Michael Bolin
6b5b184f21 fix: TUI was not honoring --skip-git-repo-check correctly (#1105)
I discovered that if I ran `codex <PROMPT>` in a cwd that was not a Git
repo, Codex did not automatically run `<PROMPT>` after I accepted the
Git warning. It appears that we were not managing the `AppState`
transition correctly, so this fixes the bug and ensures the Codex
session does not start until the user accepts the Git warning.

In particular, we now create the `ChatWidget` lazily and store it in the
`AppState::Chat` variant.
2025-05-24 08:33:49 -07:00
102 changed files with 6672 additions and 1120 deletions

29
.devcontainer/Dockerfile Normal file

@@ -0,0 +1,29 @@
FROM ubuntu:22.04
ARG DEBIAN_FRONTEND=noninteractive
# enable 'universe' because musl-tools & clang live there
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    software-properties-common && \
    add-apt-repository --yes universe
# now install build deps
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    build-essential curl git ca-certificates \
    pkg-config clang musl-tools libssl-dev && \
    rm -rf /var/lib/apt/lists/*
# non-root dev user
ARG USER=dev
ARG UID=1000
RUN useradd -m -u $UID $USER
USER $USER
# install Rust + musl target as dev user
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal && \
    ~/.cargo/bin/rustup target add aarch64-unknown-linux-musl
ENV PATH="/home/${USER}/.cargo/bin:${PATH}"
WORKDIR /workspace

30
.devcontainer/README.md Normal file

@@ -0,0 +1,30 @@
# Containerized Development
We provide the following options to facilitate Codex development in a container. This is particularly useful for verifying the Linux build when working on a macOS host.
## Docker
To build the Docker image locally for x64 and then run it with the repo mounted under `/workspace`:
```shell
CODEX_DOCKER_IMAGE_NAME=codex-linux-dev
docker build --platform=linux/amd64 -t "$CODEX_DOCKER_IMAGE_NAME" ./.devcontainer
docker run --platform=linux/amd64 --rm -it -e CARGO_TARGET_DIR=/workspace/codex-rs/target-amd64 -v "$PWD":/workspace -w /workspace/codex-rs "$CODEX_DOCKER_IMAGE_NAME"
```
Note that `/workspace/target` will contain the binaries built for your host platform, so we include `-e CARGO_TARGET_DIR=/workspace/codex-rs/target-amd64` in the `docker run` command so that the binaries built inside your container are written to a separate directory.
For arm64, specify `--platform=linux/arm64` instead for both `docker build` and `docker run`.
Currently, the `Dockerfile` works for both x64 and arm64 Linux, though you need to run `rustup target add x86_64-unknown-linux-musl` yourself to install the musl toolchain for x64.
## VS Code
VS Code recognizes the `devcontainer.json` file and gives you the option to develop Codex in a container. Currently, `devcontainer.json` builds and runs the `arm64` flavor of the container.
From the integrated terminal in VS Code, you can build either flavor of the `arm64` build (GNU or musl):
```shell
cargo build --target aarch64-unknown-linux-musl
cargo build --target aarch64-unknown-linux-gnu
```

29
.devcontainer/devcontainer.json Normal file

@@ -0,0 +1,29 @@
{
  "name": "Codex",
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    "platform": "linux/arm64"
  },
  /* Force VS Code to run the container as arm64 in
     case your host is x86 (or vice-versa). */
  "runArgs": ["--platform=linux/arm64"],
  "containerEnv": {
    "RUST_BACKTRACE": "1",
    "CARGO_TARGET_DIR": "${containerWorkspaceFolder}/codex-rs/target-arm64"
  },
  "remoteUser": "dev",
  "customizations": {
    "vscode": {
      "settings": {
        "terminal.integrated.defaultProfile.linux": "bash"
      },
      "extensions": ["rust-lang.rust-analyzer"]
    }
  }
}

1
.github/actions/codex/.gitignore vendored Normal file

@@ -0,0 +1 @@
/node_modules/


@@ -0,0 +1,8 @@
printWidth = 80
quoteProps = "consistent"
semi = true
tabWidth = 2
trailingComma = "all"
# Preserve existing behavior for markdown/text wrapping.
proseWrap = "preserve"

140
.github/actions/codex/README.md vendored Normal file

@@ -0,0 +1,140 @@
# openai/codex-action
`openai/codex-action` is a GitHub Action that facilitates the use of [Codex](https://github.com/openai/codex) on GitHub issues and pull requests. Using the action, you can associate **labels** with prompts so that Codex runs with the appropriate prompt for the given context. Codex will respond by posting comments or creating PRs, whichever you specify!
Here is a sample workflow that uses `openai/codex-action`:
```yaml
name: Codex

on:
  issues:
    types: [opened, labeled]
  pull_request:
    branches: [main]
    types: [labeled]

jobs:
  codex:
    if: ... # optional, but can be effective in conserving CI resources
    runs-on: ubuntu-latest
    # TODO(mbolin): Need to verify if/when `write` is necessary.
    permissions:
      contents: write
      issues: write
      pull-requests: write
    steps:
      # By default, Codex runs network disabled using --full-auto, so perform
      # any setup that requires network (such as installing dependencies)
      # before openai/codex-action.
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Run Codex
        uses: openai/codex-action@latest
        with:
          openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
```
See sample usage in [`codex.yml`](../../workflows/codex.yml).
## Triggering the Action
Using the sample workflow above, we have:
```yaml
on:
  issues:
    types: [opened, labeled]
  pull_request:
    branches: [main]
    types: [labeled]
```
which means our workflow will be triggered when any of the following events occur:
- a label is added to an issue
- a label is added to a pull request against the `main` branch
### Label-Based Triggers
To define a GitHub label that should trigger Codex, create a file named `.github/codex/labels/LABEL-NAME.md` in your repository where `LABEL-NAME` is the name of the label. The content of the file is the prompt template to use when the label is added (see more on [Prompt Template Variables](#prompt-template-variables) below).
For example, if the file `.github/codex/labels/codex-review.md` exists, then:
- Adding the `codex-review` label will trigger the workflow containing the `openai/codex-action` GitHub Action.
- When `openai/codex-action` starts, it will replace the `codex-review` label with `codex-review-in-progress`.
- When `openai/codex-action` is finished, it will replace the `codex-review-in-progress` label with `codex-review-completed`.
If Codex sees that either `codex-review-in-progress` or `codex-review-completed` is already present, it will not perform the action.
As determined by the [default config](./src/default-label-config.ts), Codex will act on the following labels by default:
- Adding the `codex-review` label to a pull request will have Codex review the PR and add it to the PR as a comment.
- Adding the `codex-triage` label to an issue will have Codex investigate the issue and report its findings as a comment.
- Adding the `codex-issue-fix` label to an issue will have Codex attempt to fix the issue and create a PR with the fix, if any.
## Action Inputs
The `openai/codex-action` GitHub Action takes the following inputs:
### `openai_api_key` (required)
Set your `OPENAI_API_KEY` as a [repository secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). See **Secrets and variables** then **Actions** in the settings for your GitHub repo.
Note that the secret name does not have to be `OPENAI_API_KEY`. For example, you might want to name it `CODEX_OPENAI_API_KEY` and then configure it on `openai/codex-action` as follows:
```yaml
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
```
### `github_token` (required)
This is required so that Codex can post a comment or create a PR. Set this value on the action as follows:
```yaml
github_token: ${{ secrets.GITHUB_TOKEN }}
```
### `codex_args`
A whitespace-delimited list of arguments to pass to Codex. Defaults to `--full-auto`, but if you want to override the default model to use `o3`:
```yaml
codex_args: "--full-auto --model o3"
```
For more complex configurations, use the `codex_home` input.
### `codex_home`
If set, the value to use for the `$CODEX_HOME` environment variable when running Codex. As explained [in the docs](https://github.com/openai/codex/tree/main/codex-rs#readme), this folder can contain the `config.toml` to configure Codex, custom instructions, and log files.
This should be a relative path within your repo.
## Prompt Template Variables
As shown above, `"prompt"` and `"promptPath"` are used to define prompt templates that will be populated and passed to Codex in response to certain events. All template variables are of the form `{CODEX_ACTION_...}` and the supported values are defined below.
### `CODEX_ACTION_ISSUE_TITLE`
If the action was triggered on a GitHub issue, this is the issue title.
Specifically it is read as the `.issue.title` from the `$GITHUB_EVENT_PATH`.
### `CODEX_ACTION_ISSUE_BODY`
If the action was triggered on a GitHub issue, this is the issue body.
Specifically it is read as the `.issue.body` from the `$GITHUB_EVENT_PATH`.
### `CODEX_ACTION_GITHUB_EVENT_PATH`
The value of the `$GITHUB_EVENT_PATH` environment variable, which is the path to the file that contains the JSON payload for the event that triggered the workflow. Codex can use `jq` to read only the fields of interest from this file.
### `CODEX_ACTION_PR_DIFF`
If the action was triggered on a pull request, this is the diff between the base and head commits of the PR. It is the output from `git diff`.
Note that the content of the diff could be quite large, so it is generally safer to point Codex at `CODEX_ACTION_GITHUB_EVENT_PATH` and let it decide how it wants to explore the change.

124
.github/actions/codex/action.yml vendored Normal file

@@ -0,0 +1,124 @@
name: "Codex [reusable action]"
description: "A reusable action that runs a Codex model."
inputs:
openai_api_key:
description: "The value to use as the OPENAI_API_KEY environment variable when running Codex."
required: true
trigger_phrase:
description: "Text to trigger Codex from a PR/issue body or comment."
required: false
default: ""
github_token:
description: "Token so Codex can comment on the PR or issue."
required: true
codex_args:
description: "A whitespace-delimited list of arguments to pass to Codex. Due to limitations in YAML, arguments with spaces are not supported. For more complex configurations, use the `codex_home` input."
required: false
default: "--config hide_agent_reasoning=true --full-auto"
codex_home:
description: "Value to use as the CODEX_HOME environment variable when running Codex."
required: false
codex_release_tag:
description: "The release tag of the Codex model to run."
required: false
default: "codex-rs-ca8e97fcbcb991e542b8689f2d4eab9d30c399d6-1-rust-v0.0.2505302325"
runs:
using: "composite"
steps:
# Do this in Bash so we do not even bother to install Bun if the sender does
# not have write access to the repo.
- name: Verify user has write access to the repo.
env:
GH_TOKEN: ${{ github.token }}
shell: bash
run: |
set -euo pipefail
PERMISSION=$(gh api \
"/repos/${GITHUB_REPOSITORY}/collaborators/${{ github.event.sender.login }}/permission" \
| jq -r '.permission')
if [[ "$PERMISSION" != "admin" && "$PERMISSION" != "write" ]]; then
exit 1
fi
- name: Download Codex
env:
GH_TOKEN: ${{ github.token }}
shell: bash
run: |
set -euo pipefail
# Determine OS/arch and corresponding Codex artifact name.
uname_s=$(uname -s)
uname_m=$(uname -m)
case "$uname_s" in
Linux*) os="linux" ;;
Darwin*) os="apple-darwin" ;;
*) echo "Unsupported operating system: $uname_s"; exit 1 ;;
esac
case "$uname_m" in
x86_64*) arch="x86_64" ;;
arm64*|aarch64*) arch="aarch64" ;;
*) echo "Unsupported architecture: $uname_m"; exit 1 ;;
esac
# linux builds differentiate between musl and gnu.
if [[ "$os" == "linux" ]]; then
if [[ "$arch" == "x86_64" ]]; then
triple="${arch}-unknown-linux-musl"
else
# Only other supported linux build is aarch64 gnu.
triple="${arch}-unknown-linux-gnu"
fi
else
# macOS
triple="${arch}-apple-darwin"
fi
# Note that if we start baking version numbers into the artifact name,
# we will need to update this action.yml file to match.
artifact="codex-exec-${triple}.tar.gz"
gh release download ${{ inputs.codex_release_tag }} --repo openai/codex \
--pattern "$artifact" --output - \
| tar xzO > /usr/local/bin/codex-exec
chmod +x /usr/local/bin/codex-exec
# Display Codex version to confirm binary integrity; ensure we point it
# at the checked-out repository via --cd so that any subsequent commands
# use the correct working directory.
codex-exec --cd "$GITHUB_WORKSPACE" --version
- name: Install Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: 1.2.11
- name: Install dependencies
shell: bash
run: |
cd ${{ github.action_path }}
bun install --production
- name: Run Codex
shell: bash
run: bun run ${{ github.action_path }}/src/main.ts
# Process args plus environment variables often have a max of 128 KiB,
# so we should fit within that limit?
env:
INPUT_CODEX_ARGS: ${{ inputs.codex_args || '' }}
INPUT_CODEX_HOME: ${{ inputs.codex_home || ''}}
INPUT_TRIGGER_PHRASE: ${{ inputs.trigger_phrase || '' }}
OPENAI_API_KEY: ${{ inputs.openai_api_key }}
GITHUB_TOKEN: ${{ inputs.github_token }}
GITHUB_EVENT_ACTION: ${{ github.event.action || '' }}
GITHUB_EVENT_LABEL_NAME: ${{ github.event.label.name || '' }}
GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number || '' }}
GITHUB_EVENT_ISSUE_BODY: ${{ github.event.issue.body || '' }}
GITHUB_EVENT_REVIEW_BODY: ${{ github.event.review.body || '' }}
GITHUB_EVENT_COMMENT_BODY: ${{ github.event.comment.body || '' }}

85
.github/actions/codex/bun.lock vendored Normal file

@@ -0,0 +1,85 @@
{
"lockfileVersion": 1,
"workspaces": {
"": {
"name": "codex-action",
"dependencies": {
"@actions/core": "^1.11.1",
"@actions/github": "^6.0.1",
},
"devDependencies": {
"@types/bun": "^1.2.11",
"@types/node": "^22.15.21",
"prettier": "^3.5.3",
"typescript": "^5.8.3",
},
},
},
"packages": {
"@actions/core": ["@actions/core@1.11.1", "", { "dependencies": { "@actions/exec": "^1.1.1", "@actions/http-client": "^2.0.1" } }, "sha512-hXJCSrkwfA46Vd9Z3q4cpEpHB1rL5NG04+/rbqW9d3+CSvtB1tYe8UTpAlixa1vj0m/ULglfEK2UKxMGxCxv5A=="],
"@actions/exec": ["@actions/exec@1.1.1", "", { "dependencies": { "@actions/io": "^1.0.1" } }, "sha512-+sCcHHbVdk93a0XT19ECtO/gIXoxvdsgQLzb2fE2/5sIZmWQuluYyjPQtrtTHdU1YzTZ7bAPN4sITq2xi1679w=="],
"@actions/github": ["@actions/github@6.0.1", "", { "dependencies": { "@actions/http-client": "^2.2.0", "@octokit/core": "^5.0.1", "@octokit/plugin-paginate-rest": "^9.2.2", "@octokit/plugin-rest-endpoint-methods": "^10.4.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "undici": "^5.28.5" } }, "sha512-xbZVcaqD4XnQAe35qSQqskb3SqIAfRyLBrHMd/8TuL7hJSz2QtbDwnNM8zWx4zO5l2fnGtseNE3MbEvD7BxVMw=="],
"@actions/http-client": ["@actions/http-client@2.2.3", "", { "dependencies": { "tunnel": "^0.0.6", "undici": "^5.25.4" } }, "sha512-mx8hyJi/hjFvbPokCg4uRd4ZX78t+YyRPtnKWwIl+RzNaVuFpQHfmlGVfsKEJN8LwTCvL+DfVgAM04XaHkm6bA=="],
"@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
"@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],
"@octokit/auth-token": ["@octokit/auth-token@4.0.0", "", {}, "sha512-tY/msAuJo6ARbK6SPIxZrPBms3xPbfwBrulZe0Wtr/DIY9lje2HeV1uoebShn6mx7SjCHif6EjMvoREj+gZ+SA=="],
"@octokit/core": ["@octokit/core@5.2.1", "", { "dependencies": { "@octokit/auth-token": "^4.0.0", "@octokit/graphql": "^7.1.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.0.0", "before-after-hook": "^2.2.0", "universal-user-agent": "^6.0.0" } }, "sha512-dKYCMuPO1bmrpuogcjQ8z7ICCH3FP6WmxpwC03yjzGfZhj9fTJg6+bS1+UAplekbN2C+M61UNllGOOoAfGCrdQ=="],
"@octokit/endpoint": ["@octokit/endpoint@9.0.6", "", { "dependencies": { "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-H1fNTMA57HbkFESSt3Y9+FBICv+0jFceJFPWDePYlR/iMGrwM5ph+Dd4XRQs+8X+PUFURLQgX9ChPfhJ/1uNQw=="],
"@octokit/graphql": ["@octokit/graphql@7.1.1", "", { "dependencies": { "@octokit/request": "^8.4.1", "@octokit/types": "^13.0.0", "universal-user-agent": "^6.0.0" } }, "sha512-3mkDltSfcDUoa176nlGoA32RGjeWjl3K7F/BwHwRMJUW/IteSa4bnSV8p2ThNkcIcZU2umkZWxwETSSCJf2Q7g=="],
"@octokit/openapi-types": ["@octokit/openapi-types@24.2.0", "", {}, "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg=="],
"@octokit/plugin-paginate-rest": ["@octokit/plugin-paginate-rest@9.2.2", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-u3KYkGF7GcZnSD/3UP0S7K5XUFT2FkOQdcfXZGZQPGv3lm4F2Xbf71lvjldr8c1H3nNbF+33cLEkWYbokGWqiQ=="],
"@octokit/plugin-rest-endpoint-methods": ["@octokit/plugin-rest-endpoint-methods@10.4.1", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-xV1b+ceKV9KytQe3zCVqjg+8GTGfDYwaT1ATU5isiUyVtlVAO3HNdzpS4sr4GBx4hxQ46s7ITtZrAsxG22+rVg=="],
"@octokit/request": ["@octokit/request@8.4.1", "", { "dependencies": { "@octokit/endpoint": "^9.0.6", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-qnB2+SY3hkCmBxZsR/MPCybNmbJe4KAlfWErXq+rBKkQJlbjdJeS85VI9r8UqeLYLvnAenU8Q1okM/0MBsAGXw=="],
"@octokit/request-error": ["@octokit/request-error@5.1.1", "", { "dependencies": { "@octokit/types": "^13.1.0", "deprecation": "^2.0.0", "once": "^1.4.0" } }, "sha512-v9iyEQJH6ZntoENr9/yXxjuezh4My67CBSu9r6Ve/05Iu5gNgnisNWOsoJHTP6k0Rr0+HQIpnH+kyammu90q/g=="],
"@octokit/types": ["@octokit/types@13.10.0", "", { "dependencies": { "@octokit/openapi-types": "^24.2.0" } }, "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA=="],
"@types/bun": ["@types/bun@1.2.13", "", { "dependencies": { "bun-types": "1.2.13" } }, "sha512-u6vXep/i9VBxoJl3GjZsl/BFIsvML8DfVDO0RYLEwtSZSp981kEO1V5NwRcO1CPJ7AmvpbnDCiMKo3JvbDEjAg=="],
"@types/node": ["@types/node@22.15.21", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-EV/37Td6c+MgKAbkcLG6vqZ2zEYHD7bvSrzqqs2RIhbA6w3x+Dqz8MZM3sP6kGTeLrdoOgKZe+Xja7tUB2DNkQ=="],
"before-after-hook": ["before-after-hook@2.2.3", "", {}, "sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ=="],
"bun-types": ["bun-types@1.2.13", "", { "dependencies": { "@types/node": "*" } }, "sha512-rRjA1T6n7wto4gxhAO/ErZEtOXyEZEmnIHQfl0Dt1QQSB4QV0iP6BZ9/YB5fZaHFQ2dwHFrmPaRQ9GGMX01k9Q=="],
"deprecation": ["deprecation@2.3.1", "", {}, "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"prettier": ["prettier@3.5.3", "", { "bin": { "prettier": "bin/prettier.cjs" } }, "sha512-QQtaxnoDJeAkDvDKWCLiwIXkTgRhwYDEQCghU9Z6q03iyek/rxRh/2lC3HB7P8sWT2xC/y5JDctPLBIGzHKbhw=="],
"tunnel": ["tunnel@0.0.6", "", {}, "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg=="],
"typescript": ["typescript@5.8.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="],
"undici": ["undici@5.29.0", "", { "dependencies": { "@fastify/busboy": "^2.0.0" } }, "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg=="],
"undici-types": ["undici-types@6.21.0", "", {}, "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ=="],
"universal-user-agent": ["universal-user-agent@6.0.1", "", {}, "sha512-yCzhz6FN2wU1NiiQRogkTQszlQSlpWaw8SvVegAc+bDxbzHgh1vX8uIe8OYyMH6DwH+sdTJsgMl36+mSMdRJIQ=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"@octokit/plugin-paginate-rest/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],
"@octokit/plugin-rest-endpoint-methods/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],
"@octokit/plugin-paginate-rest/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],
"@octokit/plugin-rest-endpoint-methods/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],
}
}

21
.github/actions/codex/package.json vendored Normal file

@@ -0,0 +1,21 @@
{
  "name": "codex-action",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "format": "prettier --check src",
    "format:fix": "prettier --write src",
    "test": "bun test",
    "typecheck": "tsc"
  },
  "dependencies": {
    "@actions/core": "^1.11.1",
    "@actions/github": "^6.0.1"
  },
  "devDependencies": {
    "@types/bun": "^1.2.11",
    "@types/node": "^22.15.21",
    "prettier": "^3.5.3",
    "typescript": "^5.8.3"
  }
}

85
.github/actions/codex/src/add-reaction.ts vendored Normal file

@@ -0,0 +1,85 @@
import * as github from "@actions/github";

import type { EnvContext } from "./env-context";

/**
 * Add an "eyes" reaction to the entity (issue, issue comment, or pull request
 * review comment) that triggered the current Codex invocation.
 *
 * The purpose is to provide immediate feedback to the user similar to the
 * *-in-progress label flow indicating that the bot has acknowledged the
 * request and is working on it.
 *
 * We attempt to add the reaction best suited for the current GitHub event:
 *
 *   • issues → POST /repos/{owner}/{repo}/issues/{issue_number}/reactions
 *   • issue_comment → POST /repos/{owner}/{repo}/issues/comments/{comment_id}/reactions
 *   • pull_request_review_comment → POST /repos/{owner}/{repo}/pulls/comments/{comment_id}/reactions
 *
 * If the specific target is unavailable (e.g. unexpected payload shape) we
 * silently skip instead of failing the whole action because the reaction is
 * merely cosmetic.
 */
export async function addEyesReaction(ctx: EnvContext): Promise<void> {
  const octokit = ctx.getOctokit();
  const { owner, repo } = github.context.repo;
  const eventName = github.context.eventName;

  try {
    switch (eventName) {
      case "issue_comment": {
        const commentId = (github.context.payload as any)?.comment?.id;
        if (commentId) {
          await octokit.rest.reactions.createForIssueComment({
            owner,
            repo,
            comment_id: commentId,
            content: "eyes",
          });
          return;
        }
        break;
      }
      case "pull_request_review_comment": {
        const commentId = (github.context.payload as any)?.comment?.id;
        if (commentId) {
          await octokit.rest.reactions.createForPullRequestReviewComment({
            owner,
            repo,
            comment_id: commentId,
            content: "eyes",
          });
          return;
        }
        break;
      }
      case "issues": {
        const issueNumber = github.context.issue.number;
        if (issueNumber) {
          await octokit.rest.reactions.createForIssue({
            owner,
            repo,
            issue_number: issueNumber,
            content: "eyes",
          });
          return;
        }
        break;
      }
      default: {
        // Fallback: try to react to the issue/PR if we have a number.
        const issueNumber = github.context.issue.number;
        if (issueNumber) {
          await octokit.rest.reactions.createForIssue({
            owner,
            repo,
            issue_number: issueNumber,
            content: "eyes",
          });
        }
      }
    }
  } catch (error) {
    // Do not fail the action if reaction creation fails; log and continue.
    console.warn(`Failed to add \"eyes\" reaction: ${error}`);
  }
}

53
.github/actions/codex/src/comment.ts vendored Normal file

@@ -0,0 +1,53 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";

/**
 * Handle `issue_comment` and `pull_request_review_comment` events once we know
 * the action is supported.
 */
export async function onComment(ctx: EnvContext): Promise<void> {
  const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
  if (!triggerPhrase) {
    console.warn("Empty trigger phrase: skipping.");
    return;
  }

  // Attempt to get the body of the comment from the environment. Depending on
  // the event type either `GITHUB_EVENT_COMMENT_BODY` (issue & PR comments) or
  // `GITHUB_EVENT_REVIEW_BODY` (PR reviews) is set.
  const commentBody =
    ctx.tryGetNonEmpty("GITHUB_EVENT_COMMENT_BODY") ??
    ctx.tryGetNonEmpty("GITHUB_EVENT_REVIEW_BODY") ??
    ctx.tryGetNonEmpty("GITHUB_EVENT_ISSUE_BODY");
  if (!commentBody) {
    console.warn("Comment body not found in environment: skipping.");
    return;
  }

  // Check if the trigger phrase is present.
  if (!commentBody.includes(triggerPhrase)) {
    console.log(
      `Trigger phrase '${triggerPhrase}' not found: nothing to do for this comment.`,
    );
    return;
  }

  // Derive the prompt by removing the trigger phrase. Remove only the first
  // occurrence to keep any additional occurrences that might be meaningful.
  const prompt = commentBody.replace(triggerPhrase, "").trim();
  if (prompt.length === 0) {
    console.warn("Prompt is empty after removing trigger phrase: skipping");
    return;
  }

  // Provide immediate feedback that we are working on the request.
  await addEyesReaction(ctx);

  // Run Codex and post the response as a new comment.
  const lastMessage = await runCodex(prompt, ctx);
  await postComment(lastMessage, ctx);
}

11
.github/actions/codex/src/config.ts vendored Normal file

@@ -0,0 +1,11 @@
import { readdirSync, statSync } from "fs";
import * as path from "path";

export interface Config {
  labels: Record<string, LabelConfig>;
}

export interface LabelConfig {
  /** Returns the prompt template. */
  getPromptTemplate(): string;
}

44
.github/actions/codex/src/default-label-config.ts vendored Normal file

@@ -0,0 +1,44 @@
import type { Config } from "./config";

export function getDefaultConfig(): Config {
  return {
    labels: {
      "codex-investigate-issue": {
        getPromptTemplate: () =>
          `
Troubleshoot whether the reported issue is valid.
Provide a concise and respectful comment summarizing the findings.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}
`.trim(),
      },
      "codex-code-review": {
        getPromptTemplate: () =>
          `
Review this PR and respond with a very concise final message, formatted in Markdown.
There should be a summary of the changes (1-2 sentences) and a few bullet points if necessary.
Then provide the **review** (1-2 sentences plus bullet points, friendly tone).
{CODEX_ACTION_GITHUB_EVENT_PATH} contains the JSON that triggered this GitHub workflow. It contains the \`base\` and \`head\` refs that define this PR. Both refs are available locally.
`.trim(),
      },
      "codex-attempt-fix": {
        getPromptTemplate: () =>
          `
Attempt to solve the reported issue.
If a code change is required, create a new branch, commit the fix, and open a pull-request that resolves the problem.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}
`.trim(),
      },
    },
  };
}

116
.github/actions/codex/src/env-context.ts vendored Normal file

@@ -0,0 +1,116 @@
/*
* Centralised access to environment variables used by the Codex GitHub
* Action.
*
* To enable proper unit-testing we avoid reading from `process.env` at module
* initialisation time. Instead a `EnvContext` object is created (usually from
* the real `process.env`) and passed around explicitly or where that is not
* yet practical imported as the shared `defaultContext` singleton. Tests can
* create their own context backed by a stubbed map of variables without having
* to mutate global state.
*/
import { fail } from "./fail";
import * as github from "@actions/github";
export interface EnvContext {
/**
* Return the value for a given environment variable or terminate the action
* via `fail` if it is missing / empty.
*/
get(name: string): string;
/**
* Attempt to read an environment variable. Returns the value when present;
* otherwise returns undefined (does not call `fail`).
*/
tryGet(name: string): string | undefined;
/**
* Attempt to read an environment variable. Returns non-empty string value or
* null if unset or empty string.
*/
tryGetNonEmpty(name: string): string | null;
/**
* Return a memoised Octokit instance authenticated via the token resolved
* from the provided argument (when defined) or the environment variables
* `GITHUB_TOKEN`/`GH_TOKEN`.
*
* Subsequent calls return the same cached instance to avoid spawning
* multiple REST clients within a single action run.
*/
getOctokit(token?: string): ReturnType<typeof github.getOctokit>;
}
/** Internal helper *not* exported. */
function _getRequiredEnv(
name: string,
env: Record<string, string | undefined>,
): string | undefined {
const value = env[name];
// Avoid leaking secrets into logs while still logging non-secret variables.
if (name.endsWith("KEY") || name.endsWith("TOKEN")) {
if (value) {
console.log(`value for ${name} was found`);
}
} else {
console.log(`${name}=${value}`);
}
return value;
}
/** Create a context backed by the supplied environment map (defaults to `process.env`). */
export function createEnvContext(
env: Record<string, string | undefined> = process.env,
): EnvContext {
// Lazily instantiated Octokit client shared across this context.
let cachedOctokit: ReturnType<typeof github.getOctokit> | null = null;
return {
get(name: string): string {
const value = _getRequiredEnv(name, env);
if (value == null) {
fail(`Missing required environment variable: ${name}`);
}
return value;
},
tryGet(name: string): string | undefined {
return _getRequiredEnv(name, env);
},
tryGetNonEmpty(name: string): string | null {
const value = _getRequiredEnv(name, env);
return value == null || value === "" ? null : value;
},
getOctokit(token?: string) {
if (cachedOctokit) {
return cachedOctokit;
}
// Determine the token to authenticate with.
const githubToken = token ?? env["GITHUB_TOKEN"] ?? env["GH_TOKEN"];
if (!githubToken) {
fail(
"Unable to locate a GitHub token. `github_token` should have been set on the action.",
);
}
cachedOctokit = github.getOctokit(githubToken!);
return cachedOctokit;
},
};
}
/**
* Shared context built from the actual `process.env`. Production code that is
* not yet refactored to receive a context explicitly may import and use this
* singleton. Tests should avoid the singleton and instead pass their own
* context to the functions they exercise.
*/
export const defaultContext: EnvContext = createEnvContext();
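The stub-friendly design makes unit tests straightforward. A minimal sketch, assuming a Bun/Node test file (the variable values here are hypothetical):

```typescript
import { createEnvContext } from "./env-context";

// Back the context with a plain object instead of the real process.env.
const ctx = createEnvContext({
  GITHUB_WORKSPACE: "/tmp/workspace",
  OPENAI_API_KEY: "",
});

console.assert(ctx.tryGet("GITHUB_WORKSPACE") === "/tmp/workspace");
// tryGet() returns the empty string as-is...
console.assert(ctx.tryGet("OPENAI_API_KEY") === "");
// ...while tryGetNonEmpty() normalises unset and empty values to null.
console.assert(ctx.tryGetNonEmpty("OPENAI_API_KEY") === null);
console.assert(ctx.tryGetNonEmpty("UNSET_VAR") === null);
```

Note that `get()` is deliberately avoided in this sketch: on a missing variable it calls `fail()`, which terminates the process.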

4
.github/actions/codex/src/fail.ts vendored Normal file

@@ -0,0 +1,4 @@
export function fail(message: string): never {
console.error(message);
process.exit(1);
}

149
.github/actions/codex/src/git-helpers.ts vendored Normal file

@@ -0,0 +1,149 @@
import { spawnSync } from "child_process";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";
function runGit(args: string[], silent = true): string {
console.info(`Running git ${args.join(" ")}`);
const res = spawnSync("git", args, {
encoding: "utf8",
stdio: silent ? ["ignore", "pipe", "pipe"] : "inherit",
});
if (res.error) {
throw res.error;
}
if (res.status !== 0) {
// Non-zero exit: surface stderr in the thrown error so callers can inspect it.
throw new Error(
`git ${args.join(" ")} failed with code ${res.status}: ${res.stderr}`,
);
}
return res.stdout.trim();
}
function stageAllChanges() {
runGit(["add", "-A"]);
}
function hasStagedChanges(): boolean {
const res = spawnSync("git", ["diff", "--cached", "--quiet", "--exit-code"]);
return res.status !== 0;
}
function ensureOnBranch(
issueNumber: number,
protectedBranches: string[],
suggestedSlug?: string,
): string {
let branch = "";
try {
branch = runGit(["symbolic-ref", "--short", "-q", "HEAD"]);
} catch {
branch = "";
}
// If detached HEAD or on a protected branch, create a new branch.
if (!branch || protectedBranches.includes(branch)) {
if (suggestedSlug) {
const safeSlug = suggestedSlug
.toLowerCase()
.replace(/[^\w\s-]/g, "")
.trim()
.replace(/\s+/g, "-");
branch = `codex-fix-${issueNumber}-${safeSlug}`;
} else {
branch = `codex-fix-${issueNumber}-${Date.now()}`;
}
runGit(["switch", "-c", branch]);
}
return branch;
}
function commitIfNeeded(issueNumber: number) {
if (hasStagedChanges()) {
runGit([
"commit",
"-m",
`fix: automated fix for #${issueNumber} via Codex`,
]);
}
}
function pushBranch(branch: string, githubToken: string, ctx: EnvContext) {
const repoSlug = ctx.get("GITHUB_REPOSITORY"); // owner/repo
const remoteUrl = `https://x-access-token:${githubToken}@github.com/${repoSlug}.git`;
runGit(["push", "--force-with-lease", "-u", remoteUrl, `HEAD:${branch}`]);
}
/**
* If this returns a string, it is the URL of the created PR.
*/
export async function maybePublishPRForIssue(
issueNumber: number,
lastMessage: string,
ctx: EnvContext,
): Promise<string | undefined> {
// Only proceed if GITHUB_TOKEN available.
const githubToken =
ctx.tryGetNonEmpty("GITHUB_TOKEN") ?? ctx.tryGetNonEmpty("GH_TOKEN");
if (!githubToken) {
console.warn("No GitHub token - skipping PR creation.");
return undefined;
}
// Print `git status` for debugging.
runGit(["status"]);
// Stage any remaining changes so they can be committed and pushed.
stageAllChanges();
const octokit = ctx.getOctokit(githubToken);
const { owner, repo } = github.context.repo;
// Determine default branch to treat as protected.
let defaultBranch = "main";
try {
const repoInfo = await octokit.rest.repos.get({ owner, repo });
defaultBranch = repoInfo.data.default_branch ?? "main";
} catch (e) {
console.warn(`Failed to get default branch, assuming 'main': ${e}`);
}
const sanitizedMessage = lastMessage.replace(/\u2022/g, "-");
const [summaryLine] = sanitizedMessage.split(/\r?\n/);
const branch = ensureOnBranch(issueNumber, [defaultBranch, "master"], summaryLine);
commitIfNeeded(issueNumber);
pushBranch(branch, githubToken, ctx);
// Try to find existing PR for this branch
const headParam = `${owner}:${branch}`;
const existing = await octokit.rest.pulls.list({
owner,
repo,
head: headParam,
state: "open",
});
if (existing.data.length > 0) {
return existing.data[0].html_url;
}
// Determine base branch (default to main)
let baseBranch = "main";
try {
const repoInfo = await octokit.rest.repos.get({ owner, repo });
baseBranch = repoInfo.data.default_branch ?? "main";
} catch (e) {
console.warn(`Failed to get default branch, assuming 'main': ${e}`);
}
const pr = await octokit.rest.pulls.create({
owner,
repo,
title: summaryLine,
head: branch,
base: baseBranch,
body: sanitizedMessage,
});
return pr.data.html_url;
}

16
.github/actions/codex/src/git-user.ts vendored Normal file

@@ -0,0 +1,16 @@
export function setGitHubActionsUser(): void {
const commands = [
["git", "config", "--global", "user.name", "github-actions[bot]"],
[
"git",
"config",
"--global",
"user.email",
"41898282+github-actions[bot]@users.noreply.github.com",
],
];
for (const command of commands) {
Bun.spawnSync(command);
}
}

11
.github/actions/codex/src/github-workspace.ts vendored Normal file

@@ -0,0 +1,11 @@
import * as pathMod from "path";
import { EnvContext } from "./env-context";
export function resolveWorkspacePath(path: string, ctx: EnvContext): string {
if (pathMod.isAbsolute(path)) {
return path;
} else {
const workspace = ctx.get("GITHUB_WORKSPACE");
return pathMod.join(workspace, path);
}
}

56
.github/actions/codex/src/load-config.ts vendored Normal file

@@ -0,0 +1,56 @@
import type { Config, LabelConfig } from "./config";
import { getDefaultConfig } from "./default-label-config";
import { readFileSync, readdirSync, statSync } from "fs";
import * as path from "path";
/**
* Build an in-memory configuration object by scanning the repository for
* Markdown templates located in `.github/codex/labels`.
*
* Each `*.md` file in that directory represents a label that can trigger the
* Codex GitHub Action. The filename **without** the extension is interpreted
* as the label name, e.g. `codex-review.md` ➜ `codex-review`.
*
* For every such label we derive the corresponding `doneLabel` by appending
* the suffix `-completed`.
*/
export function loadConfig(workspace: string): Config {
const labelsDir = path.join(workspace, ".github", "codex", "labels");
let entries: string[];
try {
entries = readdirSync(labelsDir);
} catch {
// If the directory is missing, return the default configuration.
return getDefaultConfig();
}
const labels: Record<string, LabelConfig> = {};
for (const entry of entries) {
if (!entry.endsWith(".md")) {
continue;
}
const fullPath = path.join(labelsDir, entry);
if (!statSync(fullPath).isFile()) {
continue;
}
const labelName = entry.slice(0, -3); // trim ".md"
labels[labelName] = new FileLabelConfig(fullPath);
}
return { labels };
}
class FileLabelConfig implements LabelConfig {
constructor(private readonly promptPath: string) {}
getPromptTemplate(): string {
return readFileSync(this.promptPath, "utf8");
}
}
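For example, a checkout whose `.github/codex/labels` directory contains `codex-review.md` and `codex-attempt.md` (a hypothetical layout) yields a config shaped like this sketch:

```typescript
// Equivalent to what loadConfig() returns for that layout; each
// FileLabelConfig defers reading the Markdown until the label fires.
const config = {
  labels: {
    "codex-review": { getPromptTemplate: () => "<contents of codex-review.md>" },
    "codex-attempt": { getPromptTemplate: () => "<contents of codex-attempt.md>" },
  },
};
```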

80
.github/actions/codex/src/main.ts vendored Executable file

@@ -0,0 +1,80 @@
#!/usr/bin/env bun
import type { Config } from "./config";
import { defaultContext, EnvContext } from "./env-context";
import { loadConfig } from "./load-config";
import { setGitHubActionsUser } from "./git-user";
import { onLabeled } from "./process-label";
import { ensureBaseAndHeadCommitsForPRAreAvailable } from "./prompt-template";
import { performAdditionalValidation } from "./verify-inputs";
import { onComment } from "./comment";
import { onReview } from "./review";
async function main(): Promise<void> {
const ctx: EnvContext = defaultContext;
// Build the configuration dynamically by scanning `.github/codex/labels`.
const GITHUB_WORKSPACE = ctx.get("GITHUB_WORKSPACE");
const config: Config = loadConfig(GITHUB_WORKSPACE);
// Optionally perform additional validation of prompt template files.
performAdditionalValidation(config, GITHUB_WORKSPACE);
const GITHUB_EVENT_NAME = ctx.get("GITHUB_EVENT_NAME");
const GITHUB_EVENT_ACTION = ctx.get("GITHUB_EVENT_ACTION");
// Set user.name and user.email to a bot before Codex runs, just in case it
// creates a commit.
setGitHubActionsUser();
switch (GITHUB_EVENT_NAME) {
case "issues": {
if (GITHUB_EVENT_ACTION === "labeled") {
await onLabeled(config, ctx);
return;
} else if (GITHUB_EVENT_ACTION === "opened") {
await onComment(ctx);
return;
}
break;
}
case "issue_comment": {
if (GITHUB_EVENT_ACTION === "created") {
await onComment(ctx);
return;
}
break;
}
case "pull_request": {
if (GITHUB_EVENT_ACTION === "labeled") {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
await onLabeled(config, ctx);
return;
}
break;
}
case "pull_request_review": {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (GITHUB_EVENT_ACTION === "submitted") {
await onReview(ctx);
return;
}
break;
}
case "pull_request_review_comment": {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (GITHUB_EVENT_ACTION === "created") {
await onComment(ctx);
return;
}
break;
}
}
console.warn(
`Unsupported action '${GITHUB_EVENT_ACTION}' for event '${GITHUB_EVENT_NAME}'.`,
);
}
main();

62
.github/actions/codex/src/post-comment.ts vendored Normal file

@@ -0,0 +1,62 @@
import { fail } from "./fail";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";
/**
* Post a comment to the issue / pull request currently in scope.
*
* Provide the environment context so that token lookup (inside getOctokit) does
* not rely on global state.
*/
export async function postComment(
commentBody: string,
ctx: EnvContext,
): Promise<void> {
// Append a footer with a link back to the workflow run, if available.
const footer = buildWorkflowRunFooter(ctx);
const bodyWithFooter = footer ? `${commentBody}${footer}` : commentBody;
const octokit = ctx.getOctokit();
console.info("Got Octokit instance for posting comment");
const { owner, repo } = github.context.repo;
const issueNumber = github.context.issue.number;
if (!issueNumber) {
console.warn(
"No issue or pull_request number found in GitHub context; skipping comment creation.",
);
return;
}
try {
console.info("Calling octokit.rest.issues.createComment()");
await octokit.rest.issues.createComment({
owner,
repo,
issue_number: issueNumber,
body: bodyWithFooter,
});
} catch (error) {
fail(`Failed to create comment via GitHub API: ${error}`);
}
}
/**
* Helper to build a Markdown fragment linking back to the workflow run that
* generated the current comment. Returns `undefined` if required environment
 * variables are missing, e.g. when running outside of GitHub Actions, so we
* can gracefully skip the footer in those cases.
*/
function buildWorkflowRunFooter(ctx: EnvContext): string | undefined {
const serverUrl =
ctx.tryGetNonEmpty("GITHUB_SERVER_URL") ?? "https://github.com";
const repository = ctx.tryGetNonEmpty("GITHUB_REPOSITORY");
const runId = ctx.tryGetNonEmpty("GITHUB_RUN_ID");
if (!repository || !runId) {
return undefined;
}
const url = `${serverUrl}/${repository}/actions/runs/${runId}`;
return `\n\n---\n*[_View workflow run_](${url})*`;
}

195
.github/actions/codex/src/process-label.ts vendored Normal file

@@ -0,0 +1,195 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { renderPromptTemplate } from "./prompt-template";
import { postComment } from "./post-comment";
import { runCodex } from "./run-codex";
import * as github from "@actions/github";
import { Config, LabelConfig } from "./config";
import { maybePublishPRForIssue } from "./git-helpers";
export async function onLabeled(
config: Config,
ctx: EnvContext,
): Promise<void> {
const GITHUB_EVENT_LABEL_NAME = ctx.get("GITHUB_EVENT_LABEL_NAME");
const labelConfig = config.labels[GITHUB_EVENT_LABEL_NAME] as
| LabelConfig
| undefined;
if (!labelConfig) {
fail(
`Label \`${GITHUB_EVENT_LABEL_NAME}\` not found in config: ${JSON.stringify(config)}`,
);
}
await processLabelConfig(ctx, GITHUB_EVENT_LABEL_NAME, labelConfig);
}
/**
* Wrapper that handles `-in-progress` and `-completed` semantics around the core lint/fix/review
* processing. It will:
*
* - Skip execution if the `-in-progress` or `-completed` label is already present.
* - Mark the PR/issue as `-in-progress`.
* - After successful execution, mark the PR/issue as `-completed`.
*/
async function processLabelConfig(
ctx: EnvContext,
label: string,
labelConfig: LabelConfig,
): Promise<void> {
const octokit = ctx.getOctokit();
const { owner, repo, issueNumber, labelNames } =
await getCurrentLabels(octokit);
const inProgressLabel = `${label}-in-progress`;
const completedLabel = `${label}-completed`;
for (const markerLabel of [inProgressLabel, completedLabel]) {
if (labelNames.includes(markerLabel)) {
console.log(
`Label '${markerLabel}' already present on issue/PR #${issueNumber}. Skipping Codex action.`,
);
// Clean up: remove the triggering label to avoid confusion and re-runs.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
remove: markerLabel,
});
return;
}
}
// Mark the PR/issue as in progress.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
add: inProgressLabel,
remove: label,
});
// Run the core Codex processing.
await processLabel(ctx, label, labelConfig);
// Mark the PR/issue as completed.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
add: completedLabel,
remove: inProgressLabel,
});
}
async function processLabel(
ctx: EnvContext,
label: string,
labelConfig: LabelConfig,
): Promise<void> {
const template = labelConfig.getPromptTemplate();
const populatedTemplate = await renderPromptTemplate(template, ctx);
// Always run Codex and post the resulting message as a comment.
let commentBody = await runCodex(populatedTemplate, ctx);
// Current heuristic: only try to create a PR if "attempt" or "fix" is in the
// label name. (Yes, we plan to evolve this.)
if (label.indexOf("fix") !== -1 || label.indexOf("attempt") !== -1) {
console.info(`label ${label} indicates we should attempt to create a PR`);
const prUrl = await maybeFixIssue(ctx, commentBody);
if (prUrl) {
commentBody += `\n\n---\nOpened pull request: ${prUrl}`;
}
} else {
console.info(
`label ${label} does not indicate we should attempt to create a PR`,
);
}
await postComment(commentBody, ctx);
}
async function maybeFixIssue(
ctx: EnvContext,
lastMessage: string,
): Promise<string | undefined> {
// Attempt to create a PR out of any changes Codex produced.
const issueNumber = github.context.issue.number!; // exists for issues triggering this path
try {
return await maybePublishPRForIssue(issueNumber, lastMessage, ctx);
} catch (e) {
console.warn(`Failed to publish PR: ${e}`);
}
}
async function getCurrentLabels(
octokit: ReturnType<typeof github.getOctokit>,
): Promise<{
owner: string;
repo: string;
issueNumber: number;
labelNames: Array<string>;
}> {
const { owner, repo } = github.context.repo;
const issueNumber = github.context.issue.number;
if (!issueNumber) {
fail("No issue or pull_request number found in GitHub context.");
}
const { data: issueData } = await octokit.rest.issues.get({
owner,
repo,
issue_number: issueNumber,
});
const labelNames =
issueData.labels?.map((label: any) =>
typeof label === "string" ? label : label.name,
) ?? [];
return { owner, repo, issueNumber, labelNames };
}
async function addAndRemoveLabels(
octokit: ReturnType<typeof github.getOctokit>,
opts: {
owner: string;
repo: string;
issueNumber: number;
add?: string;
remove?: string;
},
): Promise<void> {
const { owner, repo, issueNumber, add, remove } = opts;
if (add) {
try {
await octokit.rest.issues.addLabels({
owner,
repo,
issue_number: issueNumber,
labels: [add],
});
} catch (error) {
console.warn(`Failed to add label '${add}': ${error}`);
}
}
if (remove) {
try {
await octokit.rest.issues.removeLabel({
owner,
repo,
issue_number: issueNumber,
name: remove,
});
} catch (error) {
console.warn(`Failed to remove label '${remove}': ${error}`);
}
}
}
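The marker labels are derived mechanically from the triggering label, so for example (hypothetical trigger):

```typescript
const label = "codex-review";
const inProgressLabel = `${label}-in-progress`; // "codex-review-in-progress"
const completedLabel = `${label}-completed`;    // "codex-review-completed"
```

If either marker is already present when the event fires, the run is skipped rather than started a second time.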

284
.github/actions/codex/src/prompt-template.ts vendored Normal file

@@ -0,0 +1,284 @@
/*
* Utilities to render Codex prompt templates.
*
* A template is a Markdown (or plain-text) file that may contain one or more
* placeholders of the form `{CODEX_ACTION_<NAME>}`. At runtime these
* placeholders are substituted with dynamically generated content. Each
* placeholder is resolved **exactly once** even if it appears multiple times
* in the same template.
*/
import { readFile } from "fs/promises";
import { EnvContext } from "./env-context";
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
/**
* Lazily caches parsed `$GITHUB_EVENT_PATH` contents keyed by the file path so
* we only hit the filesystem once per unique event payload.
*/
const githubEventDataCache: Map<string, Promise<any>> = new Map();
function getGitHubEventData(ctx: EnvContext): Promise<any> {
const eventPath = ctx.get("GITHUB_EVENT_PATH");
let cached = githubEventDataCache.get(eventPath);
if (!cached) {
cached = readFile(eventPath, "utf8").then((raw) => JSON.parse(raw));
githubEventDataCache.set(eventPath, cached);
}
return cached;
}
async function runCommand(args: Array<string>): Promise<string> {
const result = Bun.spawnSync(args, {
stdout: "pipe",
stderr: "pipe",
});
if (result.success) {
return result.stdout.toString();
}
console.error(`Error running ${JSON.stringify(args)}: ${result.stderr}`);
return "";
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
// Regex that captures the variable name without the surrounding { } braces.
const VAR_REGEX = /\{(CODEX_ACTION_[A-Z0-9_]+)\}/g;
// Cache individual placeholder values so each one is resolved at most once per
// process even if many templates reference it.
const placeholderCache: Map<string, Promise<string>> = new Map();
/**
* Parse a template string, resolve all placeholders and return the rendered
* result.
*/
export async function renderPromptTemplate(
template: string,
ctx: EnvContext,
): Promise<string> {
// ---------------------------------------------------------------------
// 1) Gather all *unique* placeholders present in the template.
// ---------------------------------------------------------------------
const variables = new Set<string>();
for (const match of template.matchAll(VAR_REGEX)) {
variables.add(match[1]);
}
// ---------------------------------------------------------------------
// 2) Kick off (or reuse) async resolution for each variable.
// ---------------------------------------------------------------------
for (const variable of variables) {
if (!placeholderCache.has(variable)) {
placeholderCache.set(variable, resolveVariable(variable, ctx));
}
}
// ---------------------------------------------------------------------
// 3) Await completion so we can perform a simple synchronous replace below.
// ---------------------------------------------------------------------
const resolvedEntries: [string, string][] = [];
for (const [key, promise] of placeholderCache.entries()) {
resolvedEntries.push([key, await promise]);
}
const resolvedMap = new Map<string, string>(resolvedEntries);
// ---------------------------------------------------------------------
// 4) Replace each occurrence. We use replace with a callback to ensure
// correct substitution even if variable names overlap (they shouldn't,
// but better safe than sorry).
// ---------------------------------------------------------------------
return template.replace(VAR_REGEX, (_, varName: string) => {
return resolvedMap.get(varName) ?? "";
});
}
export async function ensureBaseAndHeadCommitsForPRAreAvailable(
ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
const prShas = await getPrShas(ctx);
if (prShas == null) {
console.warn("Unable to resolve PR branches");
return null;
}
const event = await getGitHubEventData(ctx);
const pr = event.pull_request;
if (!pr) {
console.warn("event.pull_request is not defined - unexpected");
return null;
}
const workspace = ctx.get("GITHUB_WORKSPACE");
// Refs (branch names)
const baseRef: string | undefined = pr.base?.ref;
const headRef: string | undefined = pr.head?.ref;
// Clone URLs
const baseRemoteUrl: string | undefined = pr.base?.repo?.clone_url;
const headRemoteUrl: string | undefined = pr.head?.repo?.clone_url;
if (!baseRef || !headRef || !baseRemoteUrl || !headRemoteUrl) {
console.warn(
"Missing PR ref or remote URL information - cannot fetch commits",
);
return null;
}
// Ensure we have the base branch.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"origin",
baseRef,
]);
// Ensure we have the head branch.
if (headRemoteUrl === baseRemoteUrl) {
// Same repository: the commit is available from `origin`.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"origin",
headRef,
]);
} else {
// Fork: make sure a `pr` remote exists that points at the fork. Attempting
// to add a remote that already exists causes git to error, so we swallow
// any non-zero exit codes from that specific command.
await runCommand([
"git",
"-C",
workspace,
"remote",
"add",
"pr",
headRemoteUrl,
]);
// Whether adding succeeded or the remote already existed, attempt to fetch
// the head ref from the `pr` remote.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"pr",
headRef,
]);
}
return prShas;
}
// ---------------------------------------------------------------------------
// Internal helpers still exported for use by other modules.
// ---------------------------------------------------------------------------
export async function resolvePrDiff(ctx: EnvContext): Promise<string> {
const prShas = await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (prShas == null) {
console.warn("Unable to resolve PR branches");
return "";
}
const workspace = ctx.get("GITHUB_WORKSPACE");
const { baseSha, headSha } = prShas;
return runCommand([
"git",
"-C",
workspace,
"diff",
"--color=never",
`${baseSha}..${headSha}`,
]);
}
// ---------------------------------------------------------------------------
// Placeholder resolution
// ---------------------------------------------------------------------------
async function resolveVariable(name: string, ctx: EnvContext): Promise<string> {
switch (name) {
case "CODEX_ACTION_ISSUE_TITLE": {
const event = await getGitHubEventData(ctx);
const issue = event.issue ?? event.pull_request;
return issue?.title ?? "";
}
case "CODEX_ACTION_ISSUE_BODY": {
const event = await getGitHubEventData(ctx);
const issue = event.issue ?? event.pull_request;
return issue?.body ?? "";
}
case "CODEX_ACTION_GITHUB_EVENT_PATH": {
return ctx.get("GITHUB_EVENT_PATH");
}
case "CODEX_ACTION_BASE_REF": {
const event = await getGitHubEventData(ctx);
return event?.pull_request?.base?.ref ?? "";
}
case "CODEX_ACTION_HEAD_REF": {
const event = await getGitHubEventData(ctx);
return event?.pull_request?.head?.ref ?? "";
}
case "CODEX_ACTION_PR_DIFF": {
return resolvePrDiff(ctx);
}
// -------------------------------------------------------------------
// Add new template variables here.
// -------------------------------------------------------------------
default: {
// Unknown variable: leave it blank to avoid leaking placeholders to the
// final prompt. The alternative would be to `fail()` here, but silently
// ignoring unknown placeholders is more forgiving and better matches the
// behaviour of typical template engines.
console.warn(`Unknown template variable: ${name}`);
return "";
}
}
}
async function getPrShas(
ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
const event = await getGitHubEventData(ctx);
const pr = event.pull_request;
if (!pr) {
console.warn("event.pull_request is not defined");
return null;
}
// Prefer explicit SHAs if available to avoid relying on local branch names.
const baseSha: string | undefined = pr.base?.sha;
const headSha: string | undefined = pr.head?.sha;
if (!baseSha || !headSha) {
console.warn("one of base or head is not defined on event.pull_request");
return null;
}
return { baseSha, headSha };
}
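A rendering sketch (the template text is hypothetical; both placeholders resolve from the cached event payload as described above):

```typescript
import { renderPromptTemplate } from "./prompt-template";
import { defaultContext } from "./env-context";

const template = [
  "Troubleshoot whether the reported issue is valid.",
  "### {CODEX_ACTION_ISSUE_TITLE}",
  "{CODEX_ACTION_ISSUE_BODY}",
].join("\n");

// Each unique placeholder is resolved at most once per process and the
// results are substituted synchronously at the end.
const prompt = await renderPromptTemplate(template, defaultContext);
```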

42
.github/actions/codex/src/review.ts vendored Normal file

@@ -0,0 +1,42 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";
/**
* Handle `pull_request_review` events. We treat the review body the same way
* as a normal comment.
*/
export async function onReview(ctx: EnvContext): Promise<void> {
const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
if (!triggerPhrase) {
console.warn("Empty trigger phrase: skipping.");
return;
}
const reviewBody = ctx.tryGet("GITHUB_EVENT_REVIEW_BODY");
if (!reviewBody) {
console.warn("Review body not found in environment: skipping.");
return;
}
if (!reviewBody.includes(triggerPhrase)) {
console.log(
`Trigger phrase '${triggerPhrase}' not found: nothing to do for this review.`,
);
return;
}
const prompt = reviewBody.replace(triggerPhrase, "").trim();
if (prompt.length === 0) {
console.warn("Prompt is empty after removing trigger phrase: skipping.");
return;
}
await addEyesReaction(ctx);
const lastMessage = await runCodex(prompt, ctx);
await postComment(lastMessage, ctx);
}
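For instance, with a trigger phrase of `@codex` (a hypothetical value for the action's trigger-phrase input), the extraction reduces to:

```typescript
const triggerPhrase = "@codex";
const reviewBody = "@codex please double-check the error handling";
// String.replace with a string argument replaces only the first occurrence.
const prompt = reviewBody.replace(triggerPhrase, "").trim();
// prompt === "please double-check the error handling"
```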

56
.github/actions/codex/src/run-codex.ts vendored Normal file

@@ -0,0 +1,56 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { tmpdir } from "os";
import { join } from "node:path";
import { readFile, mkdtemp } from "fs/promises";
import { resolveWorkspacePath } from "./github-workspace";
/**
* Runs the Codex CLI with the provided prompt and returns the output written
* to the "last message" file.
*/
export async function runCodex(
prompt: string,
ctx: EnvContext,
): Promise<string> {
const OPENAI_API_KEY = ctx.get("OPENAI_API_KEY");
const tempDirPath = await mkdtemp(join(tmpdir(), "codex-"));
const lastMessageOutput = join(tempDirPath, "codex-prompt.md");
const args = ["/usr/local/bin/codex-exec"];
const inputCodexArgs = ctx.tryGet("INPUT_CODEX_ARGS")?.trim();
if (inputCodexArgs) {
args.push(...inputCodexArgs.split(/\s+/));
}
args.push("--output-last-message", lastMessageOutput, prompt);
const env: Record<string, string> = { ...process.env, OPENAI_API_KEY };
const INPUT_CODEX_HOME = ctx.tryGet("INPUT_CODEX_HOME");
if (INPUT_CODEX_HOME) {
env.CODEX_HOME = resolveWorkspacePath(INPUT_CODEX_HOME, ctx);
}
console.log(`Running Codex: ${JSON.stringify(args)}`);
const result = Bun.spawnSync(args, {
stdout: "inherit",
stderr: "inherit",
env,
});
if (!result.success) {
fail(`Codex failed: see above for details.`);
}
// Read the output generated by Codex.
let lastMessage: string;
try {
lastMessage = await readFile(lastMessageOutput, "utf8");
} catch (err) {
fail(`Failed to read Codex output at '${lastMessageOutput}': ${err}`);
}
return lastMessage;
}
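Note that `INPUT_CODEX_ARGS` is split on whitespace rather than shell-parsed, so quoted arguments containing spaces will not survive. A sketch with a hypothetical input:

```typescript
const inputCodexArgs = "--model o3";
const args = ["/usr/local/bin/codex-exec", ...inputCodexArgs.split(/\s+/)];
// ["/usr/local/bin/codex-exec", "--model", "o3"]
```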

33
.github/actions/codex/src/verify-inputs.ts vendored Normal file

@@ -0,0 +1,33 @@
// Validate the inputs passed to the composite action.
// The script currently ensures that the provided configuration file exists and
// matches the expected schema.
import type { Config } from "./config";
import { existsSync } from "fs";
import * as path from "path";
import { fail } from "./fail";
export function performAdditionalValidation(config: Config, workspace: string) {
// Additional validation: ensure referenced prompt files exist and are Markdown.
for (const [label, details] of Object.entries(config.labels)) {
// Determine which prompt key is present (the schema guarantees exactly one).
const promptPathStr =
(details as any).prompt ?? (details as any).promptPath;
if (promptPathStr) {
const promptPath = path.isAbsolute(promptPathStr)
? promptPathStr
: path.join(workspace, promptPathStr);
if (!existsSync(promptPath)) {
fail(`Prompt file for label '${label}' not found: ${promptPath}`);
}
if (!promptPath.endsWith(".md")) {
fail(
`Prompt file for label '${label}' must be a .md file (got ${promptPathStr}).`,
);
}
}
}
}

15
.github/actions/codex/tsconfig.json vendored Normal file

@@ -0,0 +1,15 @@
{
"compilerOptions": {
"lib": ["ESNext"],
"target": "ESNext",
"module": "ESNext",
"moduleDetection": "force",
"moduleResolution": "bundler",
"noEmit": true,
"strict": true,
"skipLibCheck": true
},
"include": ["src"]
}

3
.github/codex/home/config.toml vendored Normal file

@@ -0,0 +1,3 @@
model = "o3"
# Consider setting [mcp_servers] here!

9
.github/codex/labels/codex-attempt.md vendored Normal file

@@ -0,0 +1,9 @@
Attempt to solve the reported issue.
If a code change is required, create a new branch, commit the fix, and open a pull request that resolves the problem.
Here is the original GitHub issue that triggered this run:
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}

7
.github/codex/labels/codex-review.md vendored Normal file

@@ -0,0 +1,7 @@
Review this PR and respond with a very concise final message, formatted in Markdown.
There should be a summary of the changes (1-2 sentences) and a few bullet points if necessary.
Then provide the **review** (1-2 sentences plus bullet points, friendly tone).
{CODEX_ACTION_GITHUB_EVENT_PATH} contains the JSON that triggered this GitHub workflow. It contains the `base` and `head` refs that define this PR. Both refs are available locally.

7
.github/codex/labels/codex-triage.md vendored Normal file

@@ -0,0 +1,7 @@
Troubleshoot whether the reported issue is valid.
Provide a concise and respectful comment summarizing the findings.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}

95
.github/workflows/codex.yml vendored Normal file

@@ -0,0 +1,95 @@
name: Codex
on:
issues:
types: [opened, labeled]
pull_request:
branches: [main]
types: [labeled]
jobs:
codex:
# This `if` check provides complex filtering logic to avoid running Codex
# on every PR. Admittedly, one thing this does not verify is whether the
# sender has write access to the repo: that must be done as part of a
# runtime step.
#
# Note the label values should match the ones in the .github/codex/labels
# folder.
if: |
(github.event_name == 'issues' && (
(github.event.action == 'labeled' && (github.event.label.name == 'codex-attempt' || github.event.label.name == 'codex-triage'))
)) ||
(github.event_name == 'pull_request' && github.event.action == 'labeled' && github.event.label.name == 'codex-review')
runs-on: ubuntu-latest
permissions:
contents: write # can push or create branches
issues: write # for comments + labels on issues/PRs
pull-requests: write # for PR comments/labels
steps:
# TODO: Consider adding an optional mode (--dry-run?) to actions/codex
# that verifies whether Codex should actually be run for this event.
# (For example, it may be rejected because the sender does not have
# write access to the repo.) The benefit would be two-fold:
# 1. As the first step of this job, it gives us a chance to add a reaction
# or comment to the PR/issue ASAP to "ack" the request.
# 2. It saves resources by skipping the clone and setup steps below if
# Codex is not going to run.
- name: Checkout repository
uses: actions/checkout@v4
# We install the dependencies like we would for an ordinary CI job,
# particularly because Codex will not have network access to install
# these dependencies.
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 22
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10.8.1
run_install: false
- name: Get pnpm store directory
id: pnpm-cache
shell: bash
run: |
echo "store_path=$(pnpm store path --silent)" >> $GITHUB_OUTPUT
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ steps.pnpm-cache.outputs.store_path }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install dependencies
run: pnpm install
- uses: dtolnay/rust-toolchain@1.87
with:
targets: x86_64-unknown-linux-gnu
components: clippy
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/codex-rs/target/
key: cargo-ubuntu-24.04-x86_64-unknown-linux-gnu-${{ hashFiles('**/Cargo.lock') }}
# Note it is possible that the `verify` step internal to Run Codex will
# fail, in which case the work to setup the repo was worthless :(
- name: Run Codex
uses: ./.github/actions/codex
with:
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
codex_home: ./.github/codex/home


@@ -55,6 +55,10 @@ jobs:
target: x86_64-unknown-linux-musl
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
- runner: windows-latest
target: x86_64-pc-windows-msvc
@@ -75,7 +79,7 @@ jobs:
${{ github.workspace }}/codex-rs/target/
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
run: |
sudo apt install -y musl-tools pkg-config


@@ -69,6 +69,8 @@ jobs:
target: x86_64-unknown-linux-musl
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
@@ -88,7 +90,7 @@ jobs:
${{ github.workspace }}/codex-rs/target/
key: cargo-release-${{ matrix.runner }}-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
run: |
sudo apt install -y musl-tools pkg-config


@@ -65,7 +65,7 @@ mkdir -p "$BIN_DIR"
# Until we start publishing stable GitHub releases, we have to grab the binaries
# from the GitHub Action that created them. Update the URL below to point to the
# appropriate workflow run:
WORKFLOW_URL="https://github.com/openai/codex/actions/runs/15192425904"
WORKFLOW_URL="https://github.com/openai/codex/actions/runs/15361005231"
WORKFLOW_ID="${WORKFLOW_URL##*/}"
ARTIFACTS_DIR="$(mktemp -d)"


@@ -382,6 +382,8 @@ async function handleCallback(
const exchanged = (await exchangeRes.json()) as {
access_token: string;
// NOTE(mbolin): I did not see the "key" property set in practice. Note
// this property is not read by the code.
key: string;
};

6
codex-rs/.gitignore vendored

@@ -1 +1,7 @@
/target/
# Recommended value of CARGO_TARGET_DIR when using Docker as explained in .devcontainer/README.md.
/target-amd64/
# Value of CARGO_TARGET_DIR when using .devcontainer/devcontainer.json.
/target-arm64/

685
codex-rs/Cargo.lock generated

File diff suppressed because it is too large

codex-rs/Cargo.toml

@@ -9,6 +9,7 @@ members = [
"exec",
"execpolicy",
"linux-sandbox",
"login",
"mcp-client",
"mcp-server",
"mcp-types",

codex-rs/README.md

@@ -1,16 +1,31 @@
# codex-rs
# Codex CLI (Rust Implementation)
April 24, 2025
We provide Codex CLI as a standalone, native executable to ensure a zero-dependency install.
Today, Codex CLI is written in TypeScript and requires Node.js 22+ to run it. For a number of users, this runtime requirement inhibits adoption: they would be better served by a standalone executable. As maintainers, we want Codex to run efficiently in a wide range of environments with minimal overhead. We also want to take advantage of operating system-specific APIs to provide better sandboxing, where possible.
## Installing Codex
To that end, we are moving forward with a Rust implementation of Codex CLI contained in this folder, which has the following benefits:
Today, the easiest way to install Codex is via `npm`, though we plan to publish Codex to other package managers soon.
- The CLI compiles to small, standalone, platform-specific binaries.
- Can make direct, native calls to [seccomp](https://man7.org/linux/man-pages/man2/seccomp.2.html) and [landlock](https://man7.org/linux/man-pages/man7/landlock.7.html) in order to support sandboxing on Linux.
- No runtime garbage collection, resulting in lower memory consumption and better, more predictable performance.
```shell
npm i -g @openai/codex@native
codex
```
Currently, the Rust implementation is materially behind the TypeScript implementation in functionality, so continue to use the TypeScript implementation for the time being. We will publish native executables via GitHub Releases as soon as we feel the Rust version is usable.
You can also download a platform-specific release directly from our [GitHub Releases](https://github.com/openai/codex/releases).
## Config
Codex supports a rich set of configuration options. See [`config.md`](./config.md) for details.
## Model Context Protocol Support
Codex CLI functions as an MCP client that can connect to MCP servers on startup. See the [`mcp_servers`](./config.md#mcp_servers) section in the configuration documentation for details.
It is still experimental, but you can also launch Codex as an MCP _server_ by running `codex mcp`. Using the [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) is an easy way to try it out:
```shell
npx @modelcontextprotocol/inspector codex mcp
```
## Code Organization
@@ -20,373 +35,3 @@ This folder is the root of a Cargo workspace. It contains quite a bit of experim
- [`exec/`](./exec) "headless" CLI for use in automation.
- [`tui/`](./tui) CLI that launches a fullscreen TUI built with [Ratatui](https://ratatui.rs/).
- [`cli/`](./cli) CLI multitool that provides the aforementioned CLIs via subcommands.
## Config
The CLI can be configured via a file named `config.toml`. By default, configuration is read from `~/.codex/config.toml`, though the `CODEX_HOME` environment variable can be used to specify a directory other than `~/.codex`.
The `config.toml` file supports the following options:
### model
The model that Codex should use.
```toml
model = "o3" # overrides the default of "codex-mini-latest"
```
### model_provider
Codex comes bundled with a number of "model providers" predefined. This config value is a string that indicates which provider to use. You can also define your own providers via `model_providers`.
For example, if you are running ollama with Mistral locally, then you would need to add the following to your config:
```toml
model = "mistral"
model_provider = "ollama"
```
because the following definition for `ollama` is included in Codex:
```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
```
This option defaults to `"openai"` and the corresponding provider is defined as follows:
```toml
[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
```
### model_providers
This option lets you override and amend the default set of model providers bundled with Codex. This value is a map where the key is the value to use with `model_provider` to select the corresponding provider.
For example, if you wanted to add a provider that uses the OpenAI 4o model via the chat completions API, then you could add the following configuration:
```toml
# Recall that in TOML, root keys must be listed before tables.
model = "gpt-4o"
model_provider = "openai-chat-completions"
[model_providers.openai-chat-completions]
# Name of the provider that will be displayed in the Codex UI.
name = "OpenAI using Chat Completions"
# The path `/chat/completions` will be amended to this URL to make the POST
# request for the chat completions.
base_url = "https://api.openai.com/v1"
# If `env_key` is set, identifies an environment variable that must be set when
# using Codex with this provider. The value of the environment variable must be
# non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
env_key = "OPENAI_API_KEY"
# valid values for wire_api are "chat" and "responses".
wire_api = "chat"
```
### approval_policy
Determines when the user should be prompted to approve whether Codex can execute a command:
```toml
# This is analogous to --suggest in the TypeScript Codex CLI
approval_policy = "unless-allow-listed"
```
```toml
# If the command fails when run in the sandbox, Codex asks for permission to
# retry the command outside the sandbox.
approval_policy = "on-failure"
```
```toml
# User is never prompted: if the command fails, Codex will automatically try
# something out. Note the `exec` subcommand always uses this mode.
approval_policy = "never"
```
### profiles
A _profile_ is a collection of configuration values that can be set together. Multiple profiles can be defined in `config.toml` and you can specify the one you
want to use at runtime via the `--profile` flag.
Here is an example of a `config.toml` that defines multiple profiles:
```toml
model = "o3"
approval_policy = "unless-allow-listed"
sandbox_permissions = ["disk-full-read-access"]
disable_response_storage = false
# Setting `profile` is equivalent to specifying `--profile o3` on the command
# line, though the `--profile` flag can still be used to override this value.
profile = "o3"
[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
[profiles.o3]
model = "o3"
model_provider = "openai"
approval_policy = "never"
[profiles.gpt3]
model = "gpt-3.5-turbo"
model_provider = "openai-chat-completions"
[profiles.zdr]
model = "o3"
model_provider = "openai"
approval_policy = "on-failure"
disable_response_storage = true
```
Users can specify config values at multiple levels. Order of precedence is as follows:
1. custom command-line argument, e.g., `--model o3`
2. as part of a profile, where the `--profile` is specified via a CLI (or in the config file itself)
3. as an entry in `config.toml`, e.g., `model = "o3"`
4. the default value that comes with Codex CLI (i.e., Codex CLI defaults to `o4-mini`)
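For example, given the profiles above, running `codex --profile zdr --model o3-mini` (a hypothetical invocation) would use `o3-mini`: the command-line argument (level 1) beats `model = "o3"` from `profiles.zdr` (level 2), which in turn would beat the root-level `model = "o3"` entry (level 3).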
### sandbox_permissions
List of permissions to grant to the sandbox that Codex uses to execute untrusted commands:
```toml
# This is comparable to --full-auto in the TypeScript Codex CLI, though
# specifying `disk-write-platform-global-temp-folder` adds /tmp as a writable
# folder in addition to $TMPDIR.
sandbox_permissions = [
"disk-full-read-access",
"disk-write-platform-user-temp-folder",
"disk-write-platform-global-temp-folder",
"disk-write-cwd",
]
```
To add additional writable folders, use `disk-write-folder`, which takes a parameter (this can be specified multiple times):
```toml
sandbox_permissions = [
# ...
"disk-write-folder=/Users/mbolin/.pyenv/shims",
]
```
### mcp_servers
Defines the list of MCP servers that Codex can consult for tool use. Currently, only servers that are launched by executing a program and that communicate over stdio are supported. For servers that use the SSE transport, consider an adapter like [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy).
**Note:** Codex may cache the list of tools and resources from an MCP server so that Codex can include this information in context at startup without spawning all the servers. This is designed to save resources by loading MCP servers lazily.
This config option is comparable to how Claude and Cursor define `mcpServers` in their respective JSON config files, though because Codex uses TOML for its config language, the format is slightly different. For example, the following config in JSON:
```json
{
"mcpServers": {
"server-name": {
"command": "npx",
"args": ["-y", "mcp-server"],
"env": {
"API_KEY": "value"
}
}
}
}
```
Should be represented as follows in `~/.codex/config.toml`:
```toml
# IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
[mcp_servers.server-name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```
### disable_response_storage
Currently, customers whose accounts are set to use Zero Data Retention (ZDR) must set `disable_response_storage` to `true` so that Codex uses an alternative to the Responses API that works with ZDR:
```toml
disable_response_storage = true
```
### shell_environment_policy
Codex spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it passes **only a minimal core subset** of your environment to those subprocesses to avoid leaking credentials. You can tune this behavior via the **`shell_environment_policy`** block in
`config.toml`:
```toml
[shell_environment_policy]
# inherit can be "core" (default), "all", or "none"
inherit = "core"
# set to true to *skip* the filter for `"*KEY*"` and `"*TOKEN*"`
ignore_default_excludes = false
# exclude patterns (case-insensitive globs)
exclude = ["AWS_*", "AZURE_*"]
# force-set / override values
set = { CI = "1" }
# if provided, *only* vars matching these patterns are kept
include_only = ["PATH", "HOME"]
```
| Field | Type | Default | Description |
| ------------------------- | -------------------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `inherit` | string | `core` | Starting template for the environment:<br>`core` (`HOME`, `PATH`, `USER`, …), `all` (clone full parent env), or `none` (start empty). |
| `ignore_default_excludes` | boolean | `false` | When `false`, Codex removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run. |
| `exclude` | array&lt;string&gt; | `[]` | Case-insensitive glob patterns to drop after the default filter.<br>Examples: `"AWS_*"`, `"AZURE_*"`. |
| `set` | table&lt;string,string&gt; | `{}` | Explicit key/value overrides or additions always win over inherited values. |
| `include_only` | array&lt;string&gt; | `[]` | If non-empty, a whitelist of patterns; only variables that match _one_ pattern survive the final step. (Generally used with `inherit = "all"`.) |
The patterns are **glob style**, not full regular expressions: `*` matches any
number of characters, `?` matches exactly one, and character classes like
`[A-Z]`/`[^0-9]` are supported. Matching is always **case-insensitive**. This
syntax is documented in code as `EnvironmentVariablePattern` (see
`core/src/config_types.rs`).
If you just need a clean slate with a few custom entries you can write:
```toml
[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }
```
Currently, `CODEX_SANDBOX_NETWORK_DISABLED=1` is also added to the environment, assuming network is disabled. This is not configurable.
### notify
Specify a program that will be executed to get notified about events generated by Codex. Note that the program will receive the notification argument as a string of JSON, e.g.:
```json
{
"type": "agent-turn-complete",
"turn-id": "12345",
"input-messages": ["Rename `foo` to `bar` and update the callsites."],
"last-assistant-message": "Rename complete and verified `cargo build` succeeds."
}
```
The `"type"` property will always be set. Currently, `"agent-turn-complete"` is the only notification type that is supported.
As an example, here is a Python script that parses the JSON and decides whether to show a desktop push notification using [terminal-notifier](https://github.com/julienXX/terminal-notifier) on macOS:
```python
#!/usr/bin/env python3
import json
import subprocess
import sys
def main() -> int:
if len(sys.argv) != 2:
print("Usage: notify.py <NOTIFICATION_JSON>")
return 1
try:
notification = json.loads(sys.argv[1])
except json.JSONDecodeError:
return 1
match notification_type := notification.get("type"):
case "agent-turn-complete":
assistant_message = notification.get("last-assistant-message")
if assistant_message:
title = f"Codex: {assistant_message}"
else:
title = "Codex: Turn Complete!"
input_messages = notification.get("input-messages", [])
message = " ".join(input_messages)
title += message
case _:
print(f"not sending a push notification for: {notification_type}")
return 0
subprocess.check_output(
[
"terminal-notifier",
"-title",
title,
"-message",
message,
"-group",
"codex",
"-ignoreDnD",
"-activate",
"com.googlecode.iterm2",
]
)
return 0
if __name__ == "__main__":
sys.exit(main())
```
To have Codex use this script for notifications, you would configure it via `notify` in `~/.codex/config.toml` using the appropriate path to `notify.py` on your computer:
```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
```
### history
By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.
To disable this behavior, configure `[history]` as follows:
```toml
[history]
persistence = "none" # "save-all" is the default value
```
### file_opener
Identifies the editor/URI scheme to use for hyperlinking citations in model output. If set, citations to files in the model output will be hyperlinked using the specified URI scheme so they can be ctrl/cmd-clicked from the terminal to open them.
For example, if the model output includes a reference such as `【F:/home/user/project/main.py†L42-L50】`, then this would be rewritten to link to the URI `vscode://file/home/user/project/main.py:42`.
Note this is **not** a general editor setting (like `$EDITOR`), as it only accepts a fixed set of values:
- `"vscode"` (default)
- `"vscode-insiders"`
- `"windsurf"`
- `"cursor"`
- `"none"` to explicitly disable this feature
Currently, `"vscode"` is the default, though Codex does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.
### project_doc_max_bytes
Maximum number of bytes to read from an `AGENTS.md` file to include in the instructions sent with the first turn of a session. Defaults to 32 KiB.
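For example, to halve the default (a hypothetical value):
```toml
project_doc_max_bytes = 16384  # 16 KiB
```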
### tui
Options that are specific to the TUI.
```toml
[tui]
# This will make it so that Codex does not try to process mouse events, which
# means your Terminal's native drag-to-text to text selection and copy/paste
# should work. The tradeoff is that Codex will not receive any mouse events, so
# it will not be possible to use the mouse to scroll conversation history.
#
# Note that most terminals support holding down a modifier key when using the
# mouse to support text selection. For example, even if Codex mouse capture is
# enabled (i.e., this is set to `false`), you can still hold down alt while
# dragging the mouse to select text.
disable_mouse_capture = true # defaults to `false`
```

codex-rs/apply-patch/Cargo.toml

@@ -12,7 +12,6 @@ workspace = true
[dependencies]
anyhow = "1"
regex = "1.11.1"
serde_json = "1.0.110"
similar = "2.7.0"
thiserror = "2.0.12"

40
codex-rs/apply-patch/apply_patch_tool_instructions.md

@@ -0,0 +1,40 @@
To edit files, ALWAYS use the `shell` tool with `apply_patch` CLI. `apply_patch` effectively allows you to execute a diff/patch against a file, but the format of the diff specification is unique to this task, so pay careful attention to these instructions. To use the `apply_patch` CLI, you should call the shell tool with the following structure:
```bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n[YOUR_PATCH]\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
```
Where [YOUR_PATCH] is the actual content of your patch, specified in the following V4A diff format.
*** [ACTION] File: [path/to/file] -> ACTION can be one of Add, Update, or Delete.
For each snippet of code that needs to be changed, repeat the following:
[context_before] -> See below for further instructions on context.
- [old_code] -> Precede the old code with a minus sign.
+ [new_code] -> Precede the new, replacement code with a plus sign.
[context_after] -> See below for further instructions on context.
For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
- If 3 lines of context is insufficient to uniquely identify the snippet of code within the file, use the @@ operator to indicate the class or function to which the snippet belongs. For instance, we might have:
@@ class BaseClass
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
- If a code block is repeated so many times in a class or function that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
Note, then, that we do not use line numbers in this diff format, as the context is enough to uniquely identify code. An example of a message that you might pass as "input" to this function, in order to apply a patch, is shown below.
```bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n*** Update File: pygorithm/searching/binary_search.py\\n@@ class BaseClass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n@@ class Subclass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
```
File references can only be relative, NEVER ABSOLUTE. After the apply_patch command is run, it will always say "Done!", regardless of whether the patch was successfully applied or not. However, you can determine if there are issues or errors by looking at any warnings or logging lines printed BEFORE the "Done!" is output.

codex-rs/apply-patch/src/lib.rs

@@ -19,6 +19,9 @@ use tree_sitter::LanguageError;
use tree_sitter::Parser;
use tree_sitter_bash::LANGUAGE as BASH;
/// Detailed instructions for gpt-4.1 on how to use the `apply_patch` tool.
pub const APPLY_PATCH_TOOL_INSTRUCTIONS: &str = include_str!("../apply_patch_tool_instructions.md");
#[derive(Debug, Error, PartialEq)]
pub enum ApplyPatchError {
#[error(transparent)]

codex-rs/apply-patch/src/parser.rs

@@ -37,7 +37,15 @@ const EOF_MARKER: &str = "*** End of File";
const CHANGE_CONTEXT_MARKER: &str = "@@ ";
const EMPTY_CHANGE_CONTEXT_MARKER: &str = "@@";
#[derive(Debug, PartialEq, Error)]
/// Currently, the only OpenAI model that knowingly requires lenient parsing is
/// gpt-4.1. While we could try to require everyone to pass in a strictness
/// param when invoking apply_patch, it is a pain to thread it through all of
/// the call sites, so we resign ourselves to allowing lenient parsing for all
/// models. See [`ParseMode::Lenient`] for details on the exceptions we make for
/// gpt-4.1.
const PARSE_IN_STRICT_MODE: bool = false;
#[derive(Debug, PartialEq, Error, Clone)]
pub enum ParseError {
#[error("invalid patch: {0}")]
InvalidPatchError(String),
@@ -46,7 +54,7 @@ pub enum ParseError {
}
use ParseError::*;
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Clone)]
#[allow(clippy::enum_variant_names)]
pub enum Hunk {
AddFile {
@@ -78,7 +86,7 @@ impl Hunk {
use Hunk::*;
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Clone)]
pub struct UpdateFileChunk {
/// A single line of context used to narrow down the position of the chunk
/// (this is usually a class, method, or function definition.)
@@ -95,19 +103,68 @@ pub struct UpdateFileChunk {
}
pub fn parse_patch(patch: &str) -> Result<Vec<Hunk>, ParseError> {
let mode = if PARSE_IN_STRICT_MODE {
ParseMode::Strict
} else {
ParseMode::Lenient
};
parse_patch_text(patch, mode)
}
enum ParseMode {
/// Parse the patch text argument as is.
Strict,
/// GPT-4.1 is known to formulate the `command` array for the `local_shell`
/// tool call for `apply_patch` call using something like the following:
///
/// ```json
/// [
/// "apply_patch",
/// "<<'EOF'\n*** Begin Patch\n*** Update File: README.md\n@@...\n*** End Patch\nEOF\n",
/// ]
/// ```
///
/// This is a problem because `local_shell` is a bit of a misnomer: the
/// `command` is not invoked by passing the arguments to a shell like Bash,
/// but is invoked using something akin to `execvpe(3)`.
///
/// This is significant in this case because where a shell would interpret
/// `<<'EOF'...` as a heredoc and pass the contents via stdin (which is
/// fine, as `apply_patch` is specified to read from stdin if no argument is
/// passed), `execvpe(3)` interprets the heredoc as a literal string. To get
/// the `local_shell` tool to run a command the way shell would, the
/// `command` array must be something like:
///
/// ```json
/// [
/// "bash",
/// "-lc",
/// "apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: README.md\n@@...\n*** End Patch\nEOF\n",
/// ]
/// ```
///
/// In lenient mode, we check if the argument to `apply_patch` starts with
/// `<<'EOF'` and ends with `EOF\n`. If so, we strip off these markers,
/// trim() the result, and treat what is left as the patch text.
Lenient,
}
fn parse_patch_text(patch: &str, mode: ParseMode) -> Result<Vec<Hunk>, ParseError> {
let lines: Vec<&str> = patch.trim().lines().collect();
if lines.is_empty() || lines[0] != BEGIN_PATCH_MARKER {
return Err(InvalidPatchError(String::from(
"The first line of the patch must be '*** Begin Patch'",
)));
}
let last_line_index = lines.len() - 1;
if lines[last_line_index] != END_PATCH_MARKER {
return Err(InvalidPatchError(String::from(
"The last line of the patch must be '*** End Patch'",
)));
}
let lines: &[&str] = match check_patch_boundaries_strict(&lines) {
Ok(()) => &lines,
Err(e) => match mode {
ParseMode::Strict => {
return Err(e);
}
ParseMode::Lenient => check_patch_boundaries_lenient(&lines, e)?,
},
};
let mut hunks: Vec<Hunk> = Vec::new();
// The above checks ensure that lines.len() >= 2.
let last_line_index = lines.len().saturating_sub(1);
let mut remaining_lines = &lines[1..last_line_index];
let mut line_number = 2;
while !remaining_lines.is_empty() {
@@ -119,6 +176,64 @@ pub fn parse_patch(patch: &str) -> Result<Vec<Hunk>, ParseError> {
Ok(hunks)
}
/// Checks the start and end lines of the patch text for `apply_patch`,
/// returning an error if they do not match the expected markers.
fn check_patch_boundaries_strict(lines: &[&str]) -> Result<(), ParseError> {
let (first_line, last_line) = match lines {
[] => (None, None),
[first] => (Some(first), Some(first)),
[first, .., last] => (Some(first), Some(last)),
};
check_start_and_end_lines_strict(first_line, last_line)
}
/// If we are in lenient mode, we check if the first line is `<<EOF` (possibly
/// quoted) and the last line ends with `EOF`. There must be at least
/// 4 lines total because the heredoc markers take up 2 lines and the patch text
/// must have at least 2 lines.
///
/// If successful, returns the lines of the patch text that contain the patch
/// contents, excluding the heredoc markers.
fn check_patch_boundaries_lenient<'a>(
original_lines: &'a [&'a str],
original_parse_error: ParseError,
) -> Result<&'a [&'a str], ParseError> {
match original_lines {
[first, .., last] => {
if (first == &"<<EOF" || first == &"<<'EOF'" || first == &"<<\"EOF\"")
&& last.ends_with("EOF")
&& original_lines.len() >= 4
{
let inner_lines = &original_lines[1..original_lines.len() - 1];
match check_patch_boundaries_strict(inner_lines) {
Ok(()) => Ok(inner_lines),
Err(e) => Err(e),
}
} else {
Err(original_parse_error)
}
}
_ => Err(original_parse_error),
}
}
fn check_start_and_end_lines_strict(
first_line: Option<&&str>,
last_line: Option<&&str>,
) -> Result<(), ParseError> {
match (first_line, last_line) {
(Some(&first), Some(&last)) if first == BEGIN_PATCH_MARKER && last == END_PATCH_MARKER => {
Ok(())
}
(Some(&first), _) if first != BEGIN_PATCH_MARKER => Err(InvalidPatchError(String::from(
"The first line of the patch must be '*** Begin Patch'",
))),
_ => Err(InvalidPatchError(String::from(
"The last line of the patch must be '*** End Patch'",
))),
}
}
/// Attempts to parse a single hunk from the start of lines.
/// Returns the parsed hunk and the number of lines parsed (or a ParseError).
fn parse_one_hunk(lines: &[&str], line_number: usize) -> Result<(Hunk, usize), ParseError> {
@@ -312,22 +427,23 @@ fn parse_update_file_chunk(
#[test]
fn test_parse_patch() {
assert_eq!(
parse_patch("bad"),
parse_patch_text("bad", ParseMode::Strict),
Err(InvalidPatchError(
"The first line of the patch must be '*** Begin Patch'".to_string()
))
);
assert_eq!(
parse_patch("*** Begin Patch\nbad"),
parse_patch_text("*** Begin Patch\nbad", ParseMode::Strict),
Err(InvalidPatchError(
"The last line of the patch must be '*** End Patch'".to_string()
))
);
assert_eq!(
parse_patch(
parse_patch_text(
"*** Begin Patch\n\
*** Update File: test.py\n\
*** End Patch"
*** End Patch",
ParseMode::Strict
),
Err(InvalidHunkError {
message: "Update file hunk for path 'test.py' is empty".to_string(),
@@ -335,14 +451,15 @@ fn test_parse_patch() {
})
);
assert_eq!(
parse_patch(
parse_patch_text(
"*** Begin Patch\n\
*** End Patch"
*** End Patch",
ParseMode::Strict
),
Ok(Vec::new())
);
assert_eq!(
parse_patch(
parse_patch_text(
"*** Begin Patch\n\
*** Add File: path/add.py\n\
+abc\n\
@@ -353,7 +470,8 @@ fn test_parse_patch() {
@@ def f():\n\
- pass\n\
+ return 123\n\
*** End Patch"
*** End Patch",
ParseMode::Strict
),
Ok(vec![
AddFile {
@@ -377,14 +495,15 @@ fn test_parse_patch() {
);
// Update hunk followed by another hunk (Add File).
assert_eq!(
parse_patch(
parse_patch_text(
"*** Begin Patch\n\
*** Update File: file.py\n\
@@\n\
+line\n\
*** Add File: other.py\n\
+content\n\
*** End Patch"
*** End Patch",
ParseMode::Strict
),
Ok(vec![
UpdateFile {
@@ -407,12 +526,13 @@ fn test_parse_patch() {
// Update hunk without an explicit @@ header for the first chunk should parse.
// Use a raw string to preserve the leading space diff marker on the context line.
assert_eq!(
parse_patch(
parse_patch_text(
r#"*** Begin Patch
*** Update File: file2.py
import foo
+bar
*** End Patch"#,
ParseMode::Strict
),
Ok(vec![UpdateFile {
path: PathBuf::from("file2.py"),
@@ -427,6 +547,80 @@ fn test_parse_patch() {
);
}
#[test]
fn test_parse_patch_lenient() {
let patch_text = r#"*** Begin Patch
*** Update File: file2.py
import foo
+bar
*** End Patch"#;
let expected_patch = vec![UpdateFile {
path: PathBuf::from("file2.py"),
move_path: None,
chunks: vec![UpdateFileChunk {
change_context: None,
old_lines: vec!["import foo".to_string()],
new_lines: vec!["import foo".to_string(), "bar".to_string()],
is_end_of_file: false,
}],
}];
let expected_error =
InvalidPatchError("The first line of the patch must be '*** Begin Patch'".to_string());
let patch_text_in_heredoc = format!("<<EOF\n{patch_text}\nEOF\n");
assert_eq!(
parse_patch_text(&patch_text_in_heredoc, ParseMode::Strict),
Err(expected_error.clone())
);
assert_eq!(
parse_patch_text(&patch_text_in_heredoc, ParseMode::Lenient),
Ok(expected_patch.clone())
);
let patch_text_in_single_quoted_heredoc = format!("<<'EOF'\n{patch_text}\nEOF\n");
assert_eq!(
parse_patch_text(&patch_text_in_single_quoted_heredoc, ParseMode::Strict),
Err(expected_error.clone())
);
assert_eq!(
parse_patch_text(&patch_text_in_single_quoted_heredoc, ParseMode::Lenient),
Ok(expected_patch.clone())
);
let patch_text_in_double_quoted_heredoc = format!("<<\"EOF\"\n{patch_text}\nEOF\n");
assert_eq!(
parse_patch_text(&patch_text_in_double_quoted_heredoc, ParseMode::Strict),
Err(expected_error.clone())
);
assert_eq!(
parse_patch_text(&patch_text_in_double_quoted_heredoc, ParseMode::Lenient),
Ok(expected_patch.clone())
);
let patch_text_in_mismatched_quotes_heredoc = format!("<<\"EOF'\n{patch_text}\nEOF\n");
assert_eq!(
parse_patch_text(&patch_text_in_mismatched_quotes_heredoc, ParseMode::Strict),
Err(expected_error.clone())
);
assert_eq!(
parse_patch_text(&patch_text_in_mismatched_quotes_heredoc, ParseMode::Lenient),
Err(expected_error.clone())
);
let patch_text_with_missing_closing_heredoc =
"<<EOF\n*** Begin Patch\n*** Update File: file2.py\nEOF\n".to_string();
assert_eq!(
parse_patch_text(&patch_text_with_missing_closing_heredoc, ParseMode::Strict),
Err(expected_error.clone())
);
assert_eq!(
parse_patch_text(&patch_text_with_missing_closing_heredoc, ParseMode::Lenient),
Err(InvalidPatchError(
"The last line of the patch must be '*** End Patch'".to_string()
))
);
}
#[test]
fn test_parse_one_hunk() {
assert_eq!(

View File

@@ -20,6 +20,7 @@ clap = { version = "4", features = ["derive"] }
codex-core = { path = "../core" }
codex-common = { path = "../common", features = ["cli"] }
codex-exec = { path = "../exec" }
codex-login = { path = "../login" }
codex-linux-sandbox = { path = "../linux-sandbox" }
codex-mcp-server = { path = "../mcp-server" }
codex-tui = { path = "../tui" }

View File

@@ -1,5 +1,6 @@
use std::path::PathBuf;
use codex_common::CliConfigOverrides;
use codex_common::SandboxPermissionOption;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
@@ -20,12 +21,14 @@ pub async fn run_command_under_seatbelt(
let SeatbeltCommand {
full_auto,
sandbox,
config_overrides,
command,
} = command;
run_command_under_sandbox(
full_auto,
sandbox,
command,
config_overrides,
codex_linux_sandbox_exe,
SandboxType::Seatbelt,
)
@@ -39,12 +42,14 @@ pub async fn run_command_under_landlock(
let LandlockCommand {
full_auto,
sandbox,
config_overrides,
command,
} = command;
run_command_under_sandbox(
full_auto,
sandbox,
command,
config_overrides,
codex_linux_sandbox_exe,
SandboxType::Landlock,
)
@@ -60,16 +65,22 @@ async fn run_command_under_sandbox(
full_auto: bool,
sandbox: SandboxPermissionOption,
command: Vec<String>,
config_overrides: CliConfigOverrides,
codex_linux_sandbox_exe: Option<PathBuf>,
sandbox_type: SandboxType,
) -> anyhow::Result<()> {
let sandbox_policy = create_sandbox_policy(full_auto, sandbox);
let cwd = std::env::current_dir()?;
let config = Config::load_with_overrides(ConfigOverrides {
sandbox_policy: Some(sandbox_policy),
codex_linux_sandbox_exe,
..Default::default()
})?;
let config = Config::load_with_cli_overrides(
config_overrides
.parse_overrides()
.map_err(anyhow::Error::msg)?,
ConfigOverrides {
sandbox_policy: Some(sandbox_policy),
codex_linux_sandbox_exe,
..Default::default()
},
)?;
let stdio_policy = StdioPolicy::Inherit;
let env = create_env(&config.shell_environment_policy);

View File

@@ -1,8 +1,10 @@
pub mod debug_sandbox;
mod exit_status;
pub mod login;
pub mod proto;
use clap::Parser;
use codex_common::CliConfigOverrides;
use codex_common::SandboxPermissionOption;
#[derive(Debug, Parser)]
@@ -14,6 +16,9 @@ pub struct SeatbeltCommand {
#[clap(flatten)]
pub sandbox: SandboxPermissionOption,
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
/// Full command args to run under seatbelt.
#[arg(trailing_var_arg = true)]
pub command: Vec<String>,
@@ -28,6 +33,9 @@ pub struct LandlockCommand {
#[clap(flatten)]
pub sandbox: SandboxPermissionOption,
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
/// Full command args to run under landlock.
#[arg(trailing_var_arg = true)]
pub command: Vec<String>,

codex-rs/cli/src/login.rs Normal file
View File

@@ -0,0 +1,35 @@
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_login::login_with_chatgpt;
pub async fn run_login_with_chatgpt(cli_config_overrides: CliConfigOverrides) -> ! {
let cli_overrides = match cli_config_overrides.parse_overrides() {
Ok(v) => v,
Err(e) => {
eprintln!("Error parsing -c overrides: {e}");
std::process::exit(1);
}
};
let config_overrides = ConfigOverrides::default();
let config = match Config::load_with_cli_overrides(cli_overrides, config_overrides) {
Ok(config) => config,
Err(e) => {
eprintln!("Error loading configuration: {e}");
std::process::exit(1);
}
};
let capture_output = false;
match login_with_chatgpt(&config.codex_home, capture_output).await {
Ok(_) => {
eprintln!("Successfully logged in");
std::process::exit(0);
}
Err(e) => {
eprintln!("Error logging in: {e}");
std::process::exit(1);
}
}
}

View File

@@ -1,7 +1,9 @@
use clap::Parser;
use codex_cli::LandlockCommand;
use codex_cli::SeatbeltCommand;
use codex_cli::login::run_login_with_chatgpt;
use codex_cli::proto;
use codex_common::CliConfigOverrides;
use codex_exec::Cli as ExecCli;
use codex_tui::Cli as TuiCli;
use std::path::PathBuf;
@@ -19,6 +21,9 @@ use crate::proto::ProtoCli;
subcommand_negates_reqs = true
)]
struct MultitoolCli {
#[clap(flatten)]
pub config_overrides: CliConfigOverrides,
#[clap(flatten)]
interactive: TuiCli,
@@ -32,6 +37,9 @@ enum Subcommand {
#[clap(visible_alias = "e")]
Exec(ExecCli),
/// Login with ChatGPT.
Login(LoginCommand),
/// Experimental: run Codex as an MCP server.
Mcp,
@@ -59,7 +67,10 @@ enum DebugCommand {
}
#[derive(Debug, Parser)]
struct ReplProto {}
struct LoginCommand {
#[clap(skip)]
config_overrides: CliConfigOverrides,
}
fn main() -> anyhow::Result<()> {
codex_linux_sandbox::run_with_sandbox(|codex_linux_sandbox_exe| async move {
@@ -73,28 +84,38 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
match cli.subcommand {
None => {
codex_tui::run_main(cli.interactive, codex_linux_sandbox_exe)?;
let mut tui_cli = cli.interactive;
prepend_config_flags(&mut tui_cli.config_overrides, cli.config_overrides);
codex_tui::run_main(tui_cli, codex_linux_sandbox_exe)?;
}
Some(Subcommand::Exec(exec_cli)) => {
Some(Subcommand::Exec(mut exec_cli)) => {
prepend_config_flags(&mut exec_cli.config_overrides, cli.config_overrides);
codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Mcp) => {
codex_mcp_server::run_main(codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Proto(proto_cli)) => {
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(&mut login_cli.config_overrides, cli.config_overrides);
run_login_with_chatgpt(login_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(&mut proto_cli.config_overrides, cli.config_overrides);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Debug(debug_args)) => match debug_args.cmd {
DebugCommand::Seatbelt(seatbelt_command) => {
DebugCommand::Seatbelt(mut seatbelt_cli) => {
prepend_config_flags(&mut seatbelt_cli.config_overrides, cli.config_overrides);
codex_cli::debug_sandbox::run_command_under_seatbelt(
seatbelt_command,
seatbelt_cli,
codex_linux_sandbox_exe,
)
.await?;
}
DebugCommand::Landlock(landlock_command) => {
DebugCommand::Landlock(mut landlock_cli) => {
prepend_config_flags(&mut landlock_cli.config_overrides, cli.config_overrides);
codex_cli::debug_sandbox::run_command_under_landlock(
landlock_command,
landlock_cli,
codex_linux_sandbox_exe,
)
.await?;
@@ -104,3 +125,14 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
Ok(())
}
/// Prepend root-level overrides so they have lower precedence than
/// CLI-specific ones specified after the subcommand (if any).
fn prepend_config_flags(
subcommand_config_overrides: &mut CliConfigOverrides,
cli_config_overrides: CliConfigOverrides,
) {
subcommand_config_overrides
.raw_overrides
.splice(0..0, cli_config_overrides.raw_overrides);
}
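// A hypothetical illustration (the values here are examples only): root-level
// `-c` flags are spliced in *before* the subcommand-level ones, so when the
// overrides are applied in order, the subcommand-level value wins.
//
// ```rust
// let mut subcommand = CliConfigOverrides {
//     raw_overrides: vec!["model=o3".to_string()],
// };
// let root = CliConfigOverrides {
//     raw_overrides: vec!["model=gpt-4.1".to_string()],
// };
// prepend_config_flags(&mut subcommand, root);
// assert_eq!(
//     subcommand.raw_overrides,
//     vec!["model=gpt-4.1".to_string(), "model=o3".to_string()]
// );
// ```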

View File

@@ -2,6 +2,7 @@ use std::io::IsTerminal;
use std::sync::Arc;
use clap::Parser;
use codex_common::CliConfigOverrides;
use codex_core::Codex;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
@@ -13,9 +14,12 @@ use tracing::error;
use tracing::info;
#[derive(Debug, Parser)]
pub struct ProtoCli {}
pub struct ProtoCli {
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
}
pub async fn run_main(_opts: ProtoCli) -> anyhow::Result<()> {
pub async fn run_main(opts: ProtoCli) -> anyhow::Result<()> {
if std::io::stdin().is_terminal() {
anyhow::bail!("Protocol mode expects stdin to be a pipe, not a terminal");
}
@@ -24,7 +28,12 @@ pub async fn run_main(_opts: ProtoCli) -> anyhow::Result<()> {
.with_writer(std::io::stderr)
.init();
let config = Config::load_with_overrides(ConfigOverrides::default())?;
let ProtoCli { config_overrides } = opts;
let overrides_vec = config_overrides
.parse_overrides()
.map_err(anyhow::Error::msg)?;
let config = Config::load_with_cli_overrides(overrides_vec, ConfigOverrides::default())?;
let ctrl_c = notify_on_sigint();
let (codex, _init_id) = Codex::spawn(config, ctrl_c.clone()).await?;
let codex = Arc::new(codex);

View File

@@ -9,8 +9,10 @@ workspace = true
[dependencies]
clap = { version = "4", features = ["derive", "wrap_help"], optional = true }
codex-core = { path = "../core" }
toml = { version = "0.8", optional = true }
serde = { version = "1", optional = true }
[features]
# Separate feature so that `clap` is not a mandatory dependency.
cli = ["clap"]
cli = ["clap", "toml", "serde"]
elapsed = []

View File

@@ -0,0 +1,170 @@
//! Support for `-c key=value` overrides shared across Codex CLI tools.
//!
//! This module provides a [`CliConfigOverrides`] struct that can be embedded
//! into a `clap`-derived CLI struct using `#[clap(flatten)]`. Each occurrence
//! of `-c key=value` (or `--config key=value`) will be collected as a raw
//! string. Helper methods are provided to convert the raw strings into
//! key/value pairs as well as to apply them onto a mutable
/// `toml::Value` representing the configuration tree.
use clap::ArgAction;
use clap::Parser;
use serde::de::Error as SerdeError;
use toml::Value;
/// CLI option that captures arbitrary configuration overrides specified as
/// `-c key=value`. It intentionally keeps both halves **unparsed** so that the
/// calling code can decide how to interpret the right-hand side.
#[derive(Parser, Debug, Default, Clone)]
pub struct CliConfigOverrides {
/// Override a configuration value that would otherwise be loaded from
/// `~/.codex/config.toml`. Use a dotted path (`foo.bar.baz`) to override
/// nested values. The `value` portion is parsed as TOML. If it fails to
/// parse as TOML, the raw string is used as a literal.
///
/// Examples:
/// - `-c model="o3"`
/// - `-c 'sandbox_permissions=["disk-full-read-access"]'`
/// - `-c shell_environment_policy.inherit=all`
#[arg(
short = 'c',
long = "config",
value_name = "key=value",
action = ArgAction::Append,
global = true,
)]
pub raw_overrides: Vec<String>,
}
impl CliConfigOverrides {
/// Parse the raw strings captured from the CLI into a list of `(path,
/// value)` tuples where `value` is a `toml::Value`.
pub fn parse_overrides(&self) -> Result<Vec<(String, Value)>, String> {
self.raw_overrides
.iter()
.map(|s| {
// Only split on the *first* '=' so values are free to contain
// the character.
let mut parts = s.splitn(2, '=');
let key = match parts.next() {
Some(k) => k.trim(),
None => return Err("Override missing key".to_string()),
};
let value_str = parts
.next()
.ok_or_else(|| format!("Invalid override (missing '='): {s}"))?
.trim();
if key.is_empty() {
return Err(format!("Empty key in override: {s}"));
}
// Attempt to parse as TOML. If that fails, treat it as a raw
// string. This allows convenient usage such as
// `-c model=o3` without the quotes.
let value: Value = match parse_toml_value(value_str) {
Ok(v) => v,
Err(_) => Value::String(value_str.to_string()),
};
Ok((key.to_string(), value))
})
.collect()
}
/// Apply all parsed overrides onto `target`. Intermediate objects will be
/// created as necessary. Values located at the destination path will be
/// replaced.
pub fn apply_on_value(&self, target: &mut Value) -> Result<(), String> {
let overrides = self.parse_overrides()?;
for (path, value) in overrides {
apply_single_override(target, &path, value);
}
Ok(())
}
}
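// A rough usage sketch (values are illustrative): a bare word that fails TOML
// parsing falls back to a string, valid TOML values keep their type, and
// `apply_on_value` creates intermediate tables along a dotted path.
//
// ```rust
// let overrides = CliConfigOverrides {
//     raw_overrides: vec![
//         "model=o3".to_string(),                   // bare word -> Value::String
//         "sandbox.permissions=[1, 2]".to_string(), // valid TOML -> Value::Array
//     ],
// };
// let parsed = overrides.parse_overrides().expect("overrides should parse");
// assert_eq!(parsed[0].1, Value::String("o3".to_string()));
// assert!(parsed[1].1.is_array());
//
// let mut root = Value::Table(toml::value::Table::new());
// overrides.apply_on_value(&mut root).expect("apply");
// assert!(root.get("sandbox").and_then(|s| s.get("permissions")).is_some());
// ```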
/// Apply a single override onto `root`, creating intermediate objects as
/// necessary.
fn apply_single_override(root: &mut Value, path: &str, value: Value) {
use toml::value::Table;
let parts: Vec<&str> = path.split('.').collect();
let mut current = root;
for (i, part) in parts.iter().enumerate() {
let is_last = i == parts.len() - 1;
if is_last {
match current {
Value::Table(tbl) => {
tbl.insert((*part).to_string(), value);
}
_ => {
let mut tbl = Table::new();
tbl.insert((*part).to_string(), value);
*current = Value::Table(tbl);
}
}
return;
}
// Traverse or create intermediate table.
match current {
Value::Table(tbl) => {
current = tbl
.entry((*part).to_string())
.or_insert_with(|| Value::Table(Table::new()));
}
_ => {
*current = Value::Table(Table::new());
if let Value::Table(tbl) = current {
current = tbl
.entry((*part).to_string())
.or_insert_with(|| Value::Table(Table::new()));
}
}
}
}
}
fn parse_toml_value(raw: &str) -> Result<Value, toml::de::Error> {
let wrapped = format!("_x_ = {raw}");
let table: toml::Table = toml::from_str(&wrapped)?;
table
.get("_x_")
.cloned()
.ok_or_else(|| SerdeError::custom("missing sentinel key"))
}
#[cfg(all(test, feature = "cli"))]
#[allow(clippy::expect_used, clippy::unwrap_used)]
mod tests {
use super::*;
#[test]
fn parses_basic_scalar() {
let v = parse_toml_value("42").expect("parse");
assert_eq!(v.as_integer(), Some(42));
}
#[test]
fn fails_on_unquoted_string() {
assert!(parse_toml_value("hello").is_err());
}
#[test]
fn parses_array() {
let v = parse_toml_value("[1, 2, 3]").expect("parse");
let arr = v.as_array().expect("array");
assert_eq!(arr.len(), 3);
}
#[test]
fn parses_inline_table() {
let v = parse_toml_value("{a = 1, b = 2}").expect("parse");
let tbl = v.as_table().expect("table");
assert_eq!(tbl.get("a").unwrap().as_integer(), Some(1));
assert_eq!(tbl.get("b").unwrap().as_integer(), Some(2));
}
}

View File

@@ -8,3 +8,9 @@ pub mod elapsed;
pub use approval_mode_cli_arg::ApprovalModeCliArg;
#[cfg(feature = "cli")]
pub use approval_mode_cli_arg::SandboxPermissionOption;
#[cfg(any(feature = "cli", test))]
mod config_override;
#[cfg(feature = "cli")]
pub use config_override::CliConfigOverrides;

codex-rs/config.md Normal file
View File

@@ -0,0 +1,415 @@
# Config
Codex supports several mechanisms for setting config values:
- Config-specific command-line flags, such as `--model o3` (highest precedence).
- A generic `-c`/`--config` flag that takes a `key=value` pair, such as `--config model="o3"`.
- The key can contain dots to set a value deeper than the root, e.g. `--config model_providers.openai.wire_api="chat"`.
- Values can contain objects, such as `--config shell_environment_policy.include_only=["PATH", "HOME", "USER"]`.
- For consistency with `config.toml`, values are in TOML format rather than JSON format, so use `{a = 1, b = 2}` rather than `{"a": 1, "b": 2}`.
- If `value` cannot be parsed as a valid TOML value, it is treated as a string value. This means that both `-c model="o3"` and `-c model=o3` are equivalent.
- The `$CODEX_HOME/config.toml` configuration file, where the `CODEX_HOME` environment variable defaults to `~/.codex`. (Note `CODEX_HOME` is also where logs and other Codex-related information are stored.)
Both the `--config` flag and the `config.toml` file support the following options:
## model
The model that Codex should use.
```toml
model = "o3" # overrides the default of "codex-mini-latest"
```
## model_provider
Codex comes bundled with a number of "model providers" predefined. This config value is a string that indicates which provider to use. You can also define your own providers via `model_providers`.
For example, if you are running ollama with Mistral locally, then you would need to add the following to your config:
```toml
model = "mistral"
model_provider = "ollama"
```
because the following definition for `ollama` is included in Codex:
```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
```
This option defaults to `"openai"` and the corresponding provider is defined as follows:
```toml
[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
```
## model_providers
This option lets you override and amend the default set of model providers bundled with Codex. This value is a map where the key is the value to use with `model_provider` to select the corresponding provider.
For example, if you wanted to add a provider that uses the OpenAI 4o model via the chat completions API, then you could add the following configuration:
```toml
# Recall that in TOML, root keys must be listed before tables.
model = "gpt-4o"
model_provider = "openai-chat-completions"
[model_providers.openai-chat-completions]
# Name of the provider that will be displayed in the Codex UI.
name = "OpenAI using Chat Completions"
# The path `/chat/completions` will be appended to this URL to make the POST
# request for the chat completions.
base_url = "https://api.openai.com/v1"
# If `env_key` is set, identifies an environment variable that must be set when
# using Codex with this provider. The value of the environment variable must be
# non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
env_key = "OPENAI_API_KEY"
# valid values for wire_api are "chat" and "responses".
wire_api = "chat"
```
## approval_policy
Determines when the user should be prompted to approve whether Codex can execute a command:
```toml
# This is analogous to --suggest in the TypeScript Codex CLI
approval_policy = "unless-allow-listed"
```
```toml
# If the command fails when run in the sandbox, Codex asks for permission to
# retry the command outside the sandbox.
approval_policy = "on-failure"
```
```toml
# User is never prompted: if the command fails, Codex will automatically try
# something else. Note the `exec` subcommand always uses this mode.
approval_policy = "never"
```
## profiles
A _profile_ is a collection of configuration values that can be set together. Multiple profiles can be defined in `config.toml` and you can specify the one you
want to use at runtime via the `--profile` flag.
Here is an example of a `config.toml` that defines multiple profiles:
```toml
model = "o3"
approval_policy = "unless-allow-listed"
sandbox_permissions = ["disk-full-read-access"]
disable_response_storage = false
# Setting `profile` is equivalent to specifying `--profile o3` on the command
# line, though the `--profile` flag can still be used to override this value.
profile = "o3"
[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
[profiles.o3]
model = "o3"
model_provider = "openai"
approval_policy = "never"
[profiles.gpt3]
model = "gpt-3.5-turbo"
model_provider = "openai-chat-completions"
[profiles.zdr]
model = "o3"
model_provider = "openai"
approval_policy = "on-failure"
disable_response_storage = true
```
Users can specify config values at multiple levels. Order of precedence is as follows:
1. custom command-line argument, e.g., `--model o3`
2. as part of a profile, where the profile is specified via the `--profile` CLI flag (or in the config file itself)
3. as an entry in `config.toml`, e.g., `model = "o3"`
4. the default value that comes with Codex CLI (i.e., Codex CLI defaults to `codex-mini-latest`)
## model_reasoning_effort
If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"codex"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning), this can be set to:
- `"low"`
- `"medium"` (default)
- `"high"`
To disable reasoning, set `model_reasoning_effort` to `"none"` in your config:
```toml
model_reasoning_effort = "none" # disable reasoning
```
## model_reasoning_summary
If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"codex"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries), this can be set to:
- `"auto"` (default)
- `"concise"`
- `"detailed"`
To disable reasoning summaries, set `model_reasoning_summary` to `"none"` in your config:
```toml
model_reasoning_summary = "none" # disable reasoning summaries
```
## sandbox_permissions
List of permissions to grant to the sandbox that Codex uses to execute untrusted commands:
```toml
# This is comparable to --full-auto in the TypeScript Codex CLI, though
# specifying `disk-write-platform-global-temp-folder` adds /tmp as a writable
# folder in addition to $TMPDIR.
sandbox_permissions = [
"disk-full-read-access",
"disk-write-platform-user-temp-folder",
"disk-write-platform-global-temp-folder",
"disk-write-cwd",
]
```
To add additional writable folders, use `disk-write-folder`, which takes a parameter (this can be specified multiple times):
```toml
sandbox_permissions = [
# ...
"disk-write-folder=/Users/mbolin/.pyenv/shims",
]
```
## mcp_servers
Defines the list of MCP servers that Codex can consult for tool use. Currently, only servers that are launched by executing a program that communicates over stdio are supported. For servers that use the SSE transport, consider an adapter like [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy).
**Note:** Codex may cache the list of tools and resources from an MCP server so that Codex can include this information in context at startup without spawning all the servers. This is designed to save resources by loading MCP servers lazily.
This config option is comparable to how Claude and Cursor define `mcpServers` in their respective JSON config files, though because Codex uses TOML for its config language, the format is slightly different. For example, the following config in JSON:
```json
{
"mcpServers": {
"server-name": {
"command": "npx",
"args": ["-y", "mcp-server"],
"env": {
"API_KEY": "value"
}
}
}
}
```
Should be represented as follows in `~/.codex/config.toml`:
```toml
# IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
[mcp_servers.server-name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```
## disable_response_storage
Currently, customers whose accounts are set to use Zero Data Retention (ZDR) must set `disable_response_storage` to `true` so that Codex uses an alternative to the Responses API that works with ZDR:
```toml
disable_response_storage = true
```
## shell_environment_policy
Codex spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it passes **only a minimal core subset** of your environment to those subprocesses to avoid leaking credentials. You can tune this behavior via the **`shell_environment_policy`** block in
`config.toml`:
```toml
[shell_environment_policy]
# inherit can be "core" (default), "all", or "none"
inherit = "core"
# set to true to *skip* the filter for `"*KEY*"` and `"*TOKEN*"`
ignore_default_excludes = false
# exclude patterns (case-insensitive globs)
exclude = ["AWS_*", "AZURE_*"]
# force-set / override values
set = { CI = "1" }
# if provided, *only* vars matching these patterns are kept
include_only = ["PATH", "HOME"]
```
| Field | Type | Default | Description |
| ------------------------- | -------------------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `inherit` | string | `core` | Starting template for the environment:<br>`core` (`HOME`, `PATH`, `USER`, …), `all` (clone full parent env), or `none` (start empty). |
| `ignore_default_excludes` | boolean | `false` | When `false`, Codex removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run. |
| `exclude` | array&lt;string&gt; | `[]` | Case-insensitive glob patterns to drop after the default filter.<br>Examples: `"AWS_*"`, `"AZURE_*"`. |
| `set` | table&lt;string,string&gt; | `{}` | Explicit key/value overrides or additions always win over inherited values. |
| `include_only` | array&lt;string&gt; | `[]` | If non-empty, a whitelist of patterns; only variables that match _one_ pattern survive the final step. (Generally used with `inherit = "all"`.) |
The patterns are **glob style**, not full regular expressions: `*` matches any
number of characters, `?` matches exactly one, and character classes like
`[A-Z]`/`[^0-9]` are supported. Matching is always **case-insensitive**. This
syntax is documented in code as `EnvironmentVariablePattern` (see
`core/src/config_types.rs`).
If you just need a clean slate with a few custom entries you can write:
```toml
[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }
```
Currently, `CODEX_SANDBOX_NETWORK_DISABLED=1` is also added to the environment, assuming network is disabled. This is not configurable.
## notify
Specify a program that will be executed to get notified about events generated by Codex. Note that the program will receive the notification argument as a string of JSON, e.g.:
```json
{
"type": "agent-turn-complete",
"turn-id": "12345",
"input-messages": ["Rename `foo` to `bar` and update the callsites."],
"last-assistant-message": "Rename complete and verified `cargo build` succeeds."
}
```
The `"type"` property will always be set. Currently, `"agent-turn-complete"` is the only notification type that is supported.
As an example, here is a Python script that parses the JSON and decides whether to show a desktop push notification using [terminal-notifier](https://github.com/julienXX/terminal-notifier) on macOS:
```python
#!/usr/bin/env python3
import json
import subprocess
import sys
def main() -> int:
if len(sys.argv) != 2:
print("Usage: notify.py <NOTIFICATION_JSON>")
return 1
try:
notification = json.loads(sys.argv[1])
except json.JSONDecodeError:
return 1
match notification_type := notification.get("type"):
case "agent-turn-complete":
assistant_message = notification.get("last-assistant-message")
if assistant_message:
title = f"Codex: {assistant_message}"
else:
title = "Codex: Turn Complete!"
input_messages = notification.get("input-messages", [])
message = " ".join(input_messages)
title += message
case _:
print(f"not sending a push notification for: {notification_type}")
return 0
subprocess.check_output(
[
"terminal-notifier",
"-title",
title,
"-message",
message,
"-group",
"codex",
"-ignoreDnD",
"-activate",
"com.googlecode.iterm2",
]
)
return 0
if __name__ == "__main__":
sys.exit(main())
```
To have Codex use this script for notifications, you would configure it via `notify` in `~/.codex/config.toml` using the appropriate path to `notify.py` on your computer:
```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
```
## history
By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `0o600`, so it should only be readable and writable by the owner.
To disable this behavior, configure `[history]` as follows:
```toml
[history]
persistence = "none" # "save-all" is the default value
```
## file_opener
Identifies the editor/URI scheme to use for hyperlinking citations in model output. If set, citations to files in the model output will be hyperlinked using the specified URI scheme so they can be ctrl/cmd-clicked from the terminal to open them.
For example, if the model output includes a reference such as `【F:/home/user/project/main.py†L42-L50】`, then this would be rewritten to link to the URI `vscode://file/home/user/project/main.py:42`.
Note this is **not** a general editor setting (like `$EDITOR`), as it only accepts a fixed set of values:
- `"vscode"` (default)
- `"vscode-insiders"`
- `"windsurf"`
- `"cursor"`
- `"none"` to explicitly disable this feature
Currently, `"vscode"` is the default, though Codex does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.
## hide_agent_reasoning
Codex intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.
Setting `hide_agent_reasoning` to `true` suppresses these events in **both** the TUI as well as the headless `exec` sub-command:
```toml
hide_agent_reasoning = true # defaults to false
```
## project_doc_max_bytes
Maximum number of bytes to read from an `AGENTS.md` file to include in the instructions sent with the first turn of a session. Defaults to 32 KiB.
## tui
Options that are specific to the TUI.
```toml
[tui]
# This will make it so that Codex does not try to process mouse events, which
# means your terminal's native drag-to-select text selection and copy/paste
# should work. The tradeoff is that Codex will not receive any mouse events, so
# it will not be possible to use the mouse to scroll conversation history.
#
# Note that most terminals let you hold down a modifier key while using the
# mouse to select text. For example, even if Codex mouse capture is
# enabled (i.e., this is set to `false`), you can still hold down alt while
# dragging the mouse to select text.
disable_mouse_capture = true # defaults to `false`
```

View File

@@ -16,6 +16,7 @@ async-channel = "2.3.1"
base64 = "0.21"
bytes = "1.10.1"
codex-apply-patch = { path = "../apply-patch" }
codex-login = { path = "../login" }
codex-mcp-client = { path = "../mcp-client" }
dirs = "6"
env-flags = "0.1.1"
@@ -31,6 +32,8 @@ rand = "0.9"
reqwest = { version = "0.12", features = ["json", "stream"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
strum = "0.27.1"
strum_macros = "0.27.1"
thiserror = "2.0.12"
time = { version = "0.3", features = ["formatting", "local-offset", "macros"] }
tokio = { version = "1", features = [
@@ -56,6 +59,10 @@ seccompiler = "0.5.0"
[target.x86_64-unknown-linux-musl.dependencies]
openssl-sys = { version = "*", features = ["vendored"] }
# Build OpenSSL from source for musl builds.
[target.aarch64-unknown-linux-musl.dependencies]
openssl-sys = { version = "*", features = ["vendored"] }
[dev-dependencies]
assert_cmd = "2"
maplit = "1.0.2"

View File

@@ -25,10 +25,10 @@ use crate::flags::OPENAI_REQUEST_MAX_RETRIES;
use crate::flags::OPENAI_STREAM_IDLE_TIMEOUT_MS;
use crate::models::ContentItem;
use crate::models::ResponseItem;
use crate::openai_tools::create_tools_json_for_chat_completions_api;
use crate::util::backoff;
/// Implementation for the classic Chat Completions API. This is intentionally
/// minimal: we only stream back plain assistant text.
/// Implementation for the classic Chat Completions API.
pub(crate) async fn stream_chat_completions(
prompt: &Prompt,
model: &str,
@@ -38,35 +38,89 @@ pub(crate) async fn stream_chat_completions(
// Build messages array
let mut messages = Vec::<serde_json::Value>::new();
let full_instructions = prompt.get_full_instructions();
let full_instructions = prompt.get_full_instructions(model);
messages.push(json!({"role": "system", "content": full_instructions}));
for item in &prompt.input {
if let ResponseItem::Message { role, content } = item {
let mut text = String::new();
for c in content {
match c {
ContentItem::InputText { text: t } | ContentItem::OutputText { text: t } => {
text.push_str(t);
match item {
ResponseItem::Message { role, content } => {
let mut text = String::new();
for c in content {
match c {
ContentItem::InputText { text: t }
| ContentItem::OutputText { text: t } => {
text.push_str(t);
}
_ => {}
}
_ => {}
}
messages.push(json!({"role": role, "content": text}));
}
ResponseItem::FunctionCall {
name,
arguments,
call_id,
} => {
messages.push(json!({
"role": "assistant",
"content": null,
"tool_calls": [{
"id": call_id,
"type": "function",
"function": {
"name": name,
"arguments": arguments,
}
}]
}));
}
ResponseItem::LocalShellCall {
id,
call_id: _,
status,
action,
} => {
// Confirm with API team.
messages.push(json!({
"role": "assistant",
"content": null,
"tool_calls": [{
"id": id.clone().unwrap_or_else(|| "".to_string()),
"type": "local_shell_call",
"status": status,
"action": action,
}]
}));
}
ResponseItem::FunctionCallOutput { call_id, output } => {
messages.push(json!({
"role": "tool",
"tool_call_id": call_id,
"content": output.content,
}));
}
ResponseItem::Reasoning { .. } | ResponseItem::Other => {
// Omit these items from the conversation history.
continue;
}
messages.push(json!({"role": role, "content": text}));
}
}
let tools_json = create_tools_json_for_chat_completions_api(prompt, model)?;
let payload = json!({
"model": model,
"messages": messages,
"stream": true
"stream": true,
"tools": tools_json,
});
let base_url = provider.base_url.trim_end_matches('/');
let url = format!("{}/chat/completions", base_url);
debug!(url, "POST (chat)");
trace!("request payload: {}", payload);
debug!(
"POST to {url}: {}",
serde_json::to_string_pretty(&payload).unwrap_or_default()
);
let api_key = provider.api_key()?;
let mut attempt = 0;
@@ -134,6 +188,21 @@ where
let idle_timeout = *OPENAI_STREAM_IDLE_TIMEOUT_MS;
// State to accumulate a function call across streaming chunks.
// OpenAI may split the `arguments` string over multiple `delta` events
// until the chunk whose `finish_reason` is `tool_calls` is emitted. We
// keep collecting the pieces here and forward a single
// `ResponseItem::FunctionCall` once the call is complete.
#[derive(Default)]
struct FunctionCallState {
name: Option<String>,
arguments: String,
call_id: Option<String>,
active: bool,
}
let mut fn_call_state = FunctionCallState::default();
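// As an illustration (the fragments below are made up, not real API output),
// arguments split across two deltas accumulate like this before the final
// `ResponseItem::FunctionCall` is emitted:
//
// ```rust
// let mut state = FunctionCallState::default();
// state.arguments.push_str("{\"command\":");
// state.arguments.push_str("[\"ls\"]}");
// assert_eq!(state.arguments, "{\"command\":[\"ls\"]}");
// ```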
loop {
let sse = match timeout(idle_timeout, stream.next()).await {
Ok(Some(Ok(ev))) => ev,
@@ -173,23 +242,89 @@ where
Ok(v) => v,
Err(_) => continue,
};
trace!("chat_completions received SSE chunk: {chunk:?}");
let content_opt = chunk
.get("choices")
.and_then(|c| c.get(0))
.and_then(|c| c.get("delta"))
.and_then(|d| d.get("content"))
.and_then(|c| c.as_str());
let choice_opt = chunk.get("choices").and_then(|c| c.get(0));
if let Some(content) = content_opt {
let item = ResponseItem::Message {
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: content.to_string(),
}],
};
if let Some(choice) = choice_opt {
// Handle assistant content tokens.
if let Some(content) = choice
.get("delta")
.and_then(|d| d.get("content"))
.and_then(|c| c.as_str())
{
let item = ResponseItem::Message {
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: content.to_string(),
}],
};
let _ = tx_event.send(Ok(ResponseEvent::OutputItemDone(item))).await;
let _ = tx_event.send(Ok(ResponseEvent::OutputItemDone(item))).await;
}
// Handle streaming function / tool calls.
if let Some(tool_calls) = choice
.get("delta")
.and_then(|d| d.get("tool_calls"))
.and_then(|tc| tc.as_array())
{
if let Some(tool_call) = tool_calls.first() {
// Mark that we have an active function call in progress.
fn_call_state.active = true;
// Extract call_id if present.
if let Some(id) = tool_call.get("id").and_then(|v| v.as_str()) {
fn_call_state.call_id.get_or_insert_with(|| id.to_string());
}
// Extract function details if present.
if let Some(function) = tool_call.get("function") {
if let Some(name) = function.get("name").and_then(|n| n.as_str()) {
fn_call_state.name.get_or_insert_with(|| name.to_string());
}
if let Some(args_fragment) =
function.get("arguments").and_then(|a| a.as_str())
{
fn_call_state.arguments.push_str(args_fragment);
}
}
}
}
// Emit end-of-turn when finish_reason signals completion.
if let Some(finish_reason) = choice.get("finish_reason").and_then(|v| v.as_str()) {
match finish_reason {
"tool_calls" if fn_call_state.active => {
// Build the FunctionCall response item.
let item = ResponseItem::FunctionCall {
name: fn_call_state.name.clone().unwrap_or_else(|| "".to_string()),
arguments: fn_call_state.arguments.clone(),
call_id: fn_call_state.call_id.clone().unwrap_or_else(String::new),
};
// Emit it downstream.
let _ = tx_event.send(Ok(ResponseEvent::OutputItemDone(item))).await;
}
"stop" => {
// Regular turn without tool-call.
}
_ => {}
}
// Emit Completed regardless of reason so the agent can advance.
let _ = tx_event
.send(Ok(ResponseEvent::Completed {
response_id: String::new(),
}))
.await;
// Prepare for potential next turn (should not happen in same stream).
// fn_call_state = FunctionCallState::default();
return; // End processing for this SSE stream.
}
}
}
}
@@ -236,9 +371,14 @@ where
Poll::Ready(None) => return Poll::Ready(None),
Poll::Ready(Some(Err(e))) => return Poll::Ready(Some(Err(e))),
Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(item)))) => {
// Accumulate *assistant* text but do not emit yet.
if let crate::models::ResponseItem::Message { role, content } = &item {
if role == "assistant" {
// If this is an incremental assistant message chunk, accumulate but
// do NOT emit yet. Forward any other item (e.g. FunctionCall) right
// away so downstream consumers see it.
let is_assistant_delta = matches!(&item, crate::models::ResponseItem::Message { role, .. } if role == "assistant");
if is_assistant_delta {
if let crate::models::ResponseItem::Message { content, .. } = &item {
if let Some(text) = content.iter().find_map(|c| match c {
crate::models::ContentItem::OutputText { text } => Some(text),
_ => None,
@@ -246,10 +386,13 @@ where
this.cumulative.push_str(text);
}
}
// Swallow partial assistant chunk; keep polling.
continue;
}
// Swallow partial event; keep polling.
continue;
// Not an assistant message; forward it immediately.
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(item))));
}
Poll::Ready(Some(Ok(ResponseEvent::Completed { response_id }))) => {
if !this.cumulative.is_empty() {

View File

@@ -1,7 +1,5 @@
use std::collections::BTreeMap;
use std::io::BufRead;
use std::path::Path;
use std::sync::LazyLock;
use std::time::Duration;
use bytes::Bytes;
@@ -11,7 +9,6 @@ use reqwest::StatusCode;
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value;
use serde_json::json;
use tokio::sync::mpsc;
use tokio::time::timeout;
use tokio_util::io::ReaderStream;
@@ -21,12 +18,13 @@ use tracing::warn;
use crate::chat_completions::AggregateStreamExt;
use crate::chat_completions::stream_chat_completions;
use crate::client_common::Payload;
use crate::client_common::Prompt;
use crate::client_common::Reasoning;
use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::client_common::Summary;
use crate::client_common::ResponsesApiRequest;
use crate::client_common::create_reasoning_param_for_request;
use crate::config_types::ReasoningEffort as ReasoningEffortConfig;
use crate::config_types::ReasoningSummary as ReasoningSummaryConfig;
use crate::error::CodexErr;
use crate::error::EnvVarError;
use crate::error::Result;
@@ -36,84 +34,31 @@ use crate::flags::OPENAI_STREAM_IDLE_TIMEOUT_MS;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::WireApi;
use crate::models::ResponseItem;
use crate::openai_tools::create_tools_json_for_responses_api;
use crate::util::backoff;
/// When serialized as JSON, this produces a valid "Tool" in the OpenAI
/// Responses API.
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "type")]
enum OpenAiTool {
#[serde(rename = "function")]
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
}
#[derive(Debug, Clone, Serialize)]
struct ResponsesApiTool {
name: &'static str,
description: &'static str,
strict: bool,
parameters: JsonSchema,
}
/// Generic JSONSchema subset needed for our tool definitions
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "type", rename_all = "lowercase")]
enum JsonSchema {
String,
Number,
Array {
items: Box<JsonSchema>,
},
Object {
properties: BTreeMap<String, JsonSchema>,
required: &'static [&'static str],
#[serde(rename = "additionalProperties")]
additional_properties: bool,
},
}
/// Tool usage specification
static DEFAULT_TOOLS: LazyLock<Vec<OpenAiTool>> = LazyLock::new(|| {
let mut properties = BTreeMap::new();
properties.insert(
"command".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String),
},
);
properties.insert("workdir".to_string(), JsonSchema::String);
properties.insert("timeout".to_string(), JsonSchema::Number);
vec![OpenAiTool::Function(ResponsesApiTool {
name: "shell",
description: "Runs a shell command, and returns its output.",
strict: false,
parameters: JsonSchema::Object {
properties,
required: &["command"],
additional_properties: false,
},
})]
});
static DEFAULT_CODEX_MODEL_TOOLS: LazyLock<Vec<OpenAiTool>> =
LazyLock::new(|| vec![OpenAiTool::LocalShell {}]);
#[derive(Clone)]
pub struct ModelClient {
model: String,
client: reqwest::Client,
provider: ModelProviderInfo,
effort: ReasoningEffortConfig,
summary: ReasoningSummaryConfig,
}
impl ModelClient {
pub fn new(model: impl ToString, provider: ModelProviderInfo) -> Self {
pub fn new(
model: impl ToString,
provider: ModelProviderInfo,
effort: ReasoningEffortConfig,
summary: ReasoningSummaryConfig,
) -> Self {
Self {
model: model.to_string(),
client: reqwest::Client::new(),
provider,
effort,
summary,
}
}
@@ -161,38 +106,17 @@ impl ModelClient {
return stream_from_fixture(path).await;
}
// Assemble tool list: built-in tools + any extra tools from the prompt.
let default_tools = if self.model.starts_with("codex") {
&DEFAULT_CODEX_MODEL_TOOLS
} else {
&DEFAULT_TOOLS
};
let mut tools_json = Vec::with_capacity(default_tools.len() + prompt.extra_tools.len());
for t in default_tools.iter() {
tools_json.push(serde_json::to_value(t)?);
}
tools_json.extend(
prompt
.extra_tools
.clone()
.into_iter()
.map(|(name, tool)| mcp_tool_to_openai_tool(name, tool)),
);
debug!("tools_json: {}", serde_json::to_string_pretty(&tools_json)?);
let full_instructions = prompt.get_full_instructions();
let payload = Payload {
let full_instructions = prompt.get_full_instructions(&self.model);
let tools_json = create_tools_json_for_responses_api(prompt, &self.model)?;
let reasoning = create_reasoning_param_for_request(&self.model, self.effort, self.summary);
let payload = ResponsesApiRequest {
model: &self.model,
instructions: &full_instructions,
input: &prompt.input,
tools: &tools_json,
tool_choice: "auto",
parallel_tool_calls: false,
reasoning: Some(Reasoning {
effort: "high",
summary: Some(Summary::Auto),
}),
reasoning,
previous_response_id: prompt.prev_id.clone(),
store: prompt.store,
stream: true,
@@ -201,8 +125,7 @@ impl ModelClient {
let base_url = self.provider.base_url.clone();
let base_url = base_url.trim_end_matches('/');
let url = format!("{}/responses", base_url);
debug!(url, "POST");
trace!("request payload: {}", serde_json::to_string(&payload)?);
trace!("POST to {url}: {}", serde_json::to_string(&payload)?);
let mut attempt = 0;
loop {
@@ -276,20 +199,6 @@ impl ModelClient {
}
}
fn mcp_tool_to_openai_tool(
fully_qualified_name: String,
tool: mcp_types::Tool,
) -> serde_json::Value {
// TODO(mbolin): Change the contract of this function to return
// ResponsesApiTool.
json!({
"name": fully_qualified_name,
"description": tool.description,
"parameters": tool.input_schema,
"type": "function",
})
}
#[derive(Debug, Deserialize, Serialize)]
struct SseEvent {
#[serde(rename = "type")]
@@ -401,6 +310,19 @@ where
};
};
}
"response.content_part.done"
| "response.created"
| "response.function_call_arguments.delta"
| "response.in_progress"
| "response.output_item.added"
| "response.output_text.delta"
| "response.output_text.done"
| "response.reasoning_summary_part.added"
| "response.reasoning_summary_text.delta"
| "response.reasoning_summary_text.done" => {
// Currently, we ignore these events, but we handle them
// separately to skip the logging message in the `other` case.
}
other => debug!(other, "sse event"),
}
}

View File

@@ -1,5 +1,8 @@
use crate::config_types::ReasoningEffort as ReasoningEffortConfig;
use crate::config_types::ReasoningSummary as ReasoningSummaryConfig;
use crate::error::Result;
use crate::models::ResponseItem;
use codex_apply_patch::APPLY_PATCH_TOOL_INSTRUCTIONS;
use futures::Stream;
use serde::Serialize;
use std::borrow::Cow;
@@ -22,7 +25,7 @@ pub struct Prompt {
pub prev_id: Option<String>,
/// Optional instructions from the user to amend to the built-in agent
/// instructions.
pub instructions: Option<String>,
pub user_instructions: Option<String>,
/// Whether to store response on server side (disable_response_storage = !store).
pub store: bool,
@@ -33,14 +36,15 @@ pub struct Prompt {
}
impl Prompt {
pub(crate) fn get_full_instructions(&self) -> Cow<str> {
match &self.instructions {
Some(instructions) => {
let instructions = format!("{BASE_INSTRUCTIONS}\n{instructions}");
Cow::Owned(instructions)
}
None => Cow::Borrowed(BASE_INSTRUCTIONS),
pub(crate) fn get_full_instructions(&self, model: &str) -> Cow<str> {
let mut sections: Vec<&str> = vec![BASE_INSTRUCTIONS];
if let Some(ref user) = self.user_instructions {
sections.push(user);
}
if model.starts_with("gpt-4.1") {
sections.push(APPLY_PATCH_TOOL_INSTRUCTIONS);
}
Cow::Owned(sections.join("\n"))
}
}
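// Sketch of the rule above (this assumes `Prompt` implements `Default`): only
// gpt-4.1 models get the apply_patch tool instructions appended to the base
// instructions.
//
// ```rust
// let prompt = Prompt::default();
// assert!(
//     prompt
//         .get_full_instructions("gpt-4.1")
//         .contains(APPLY_PATCH_TOOL_INSTRUCTIONS)
// );
// assert!(
//     !prompt
//         .get_full_instructions("o3")
//         .contains(APPLY_PATCH_TOOL_INSTRUCTIONS)
// );
// ```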
@@ -52,25 +56,59 @@ pub enum ResponseEvent {
#[derive(Debug, Serialize)]
pub(crate) struct Reasoning {
pub(crate) effort: &'static str,
pub(crate) effort: OpenAiReasoningEffort,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) summary: Option<Summary>,
pub(crate) summary: Option<OpenAiReasoningSummary>,
}
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning
#[derive(Debug, Serialize, Default, Clone, Copy)]
#[serde(rename_all = "lowercase")]
pub(crate) enum OpenAiReasoningEffort {
Low,
#[default]
Medium,
High,
}
impl From<ReasoningEffortConfig> for Option<OpenAiReasoningEffort> {
fn from(effort: ReasoningEffortConfig) -> Self {
match effort {
ReasoningEffortConfig::Low => Some(OpenAiReasoningEffort::Low),
ReasoningEffortConfig::Medium => Some(OpenAiReasoningEffort::Medium),
ReasoningEffortConfig::High => Some(OpenAiReasoningEffort::High),
ReasoningEffortConfig::None => None,
}
}
}
/// A summary of the reasoning performed by the model. This can be useful for
/// debugging and understanding the model's reasoning process.
#[derive(Debug, Serialize)]
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries
#[derive(Debug, Serialize, Default, Clone, Copy)]
#[serde(rename_all = "lowercase")]
pub(crate) enum Summary {
pub(crate) enum OpenAiReasoningSummary {
#[default]
Auto,
#[allow(dead_code)] // Will go away once this is configurable.
Concise,
#[allow(dead_code)] // Will go away once this is configurable.
Detailed,
}
impl From<ReasoningSummaryConfig> for Option<OpenAiReasoningSummary> {
fn from(summary: ReasoningSummaryConfig) -> Self {
match summary {
ReasoningSummaryConfig::Auto => Some(OpenAiReasoningSummary::Auto),
ReasoningSummaryConfig::Concise => Some(OpenAiReasoningSummary::Concise),
ReasoningSummaryConfig::Detailed => Some(OpenAiReasoningSummary::Detailed),
ReasoningSummaryConfig::None => None,
}
}
}
/// Request object that is serialized as JSON and POST'ed when using the
/// Responses API.
#[derive(Debug, Serialize)]
pub(crate) struct Payload<'a> {
pub(crate) struct ResponsesApiRequest<'a> {
pub(crate) model: &'a str,
pub(crate) instructions: &'a str,
// TODO(mbolin): ResponseItem::Other should not be serialized. Currently,
@@ -88,6 +126,40 @@ pub(crate) struct Payload<'a> {
pub(crate) stream: bool,
}
pub(crate) fn create_reasoning_param_for_request(
model: &str,
effort: ReasoningEffortConfig,
summary: ReasoningSummaryConfig,
) -> Option<Reasoning> {
let effort: Option<OpenAiReasoningEffort> = effort.into();
let effort = effort?;
if model_supports_reasoning_summaries(model) {
Some(Reasoning {
effort,
summary: summary.into(),
})
} else {
None
}
}
pub fn model_supports_reasoning_summaries(model: &str) -> bool {
// Currently, we hardcode this rule to decide whether to enable reasoning.
// We expect reasoning to apply only to OpenAI models, but we do not want
// users to have to mess with their config to disable reasoning for models
// that do not support it, such as `gpt-4.1`.
//
// Though if a user is using Codex with non-OpenAI models that, say, happen
// to start with "o", then they can set `model_reasoning_effort = "none"` in
// config.toml to disable reasoning.
//
// Ultimately, this should also be configurable in config.toml, but we
// need to have defaults that "just work." Perhaps we could have a
// "reasoning models pattern" as part of ModelProviderInfo?
model.starts_with("o") || model.starts_with("codex")
}
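// A quick sketch of the prefix rule:
//
// ```rust
// assert!(model_supports_reasoning_summaries("o3"));
// assert!(model_supports_reasoning_summaries("o4-mini"));
// assert!(model_supports_reasoning_summaries("codex-mini-latest"));
// assert!(!model_supports_reasoning_summaries("gpt-4.1"));
// ```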
pub(crate) struct ResponseStream {
pub(crate) rx_event: mpsc::Receiver<Result<ResponseEvent>>,
}

View File

@@ -20,6 +20,7 @@ use codex_apply_patch::MaybeApplyPatchVerified;
use codex_apply_patch::maybe_parse_apply_patch_verified;
use codex_apply_patch::print_summary;
use futures::prelude::*;
use mcp_types::CallToolResult;
use serde::Serialize;
use serde_json;
use tokio::sync::Notify;
@@ -58,7 +59,7 @@ use crate::models::ReasoningItemReasoningSummary;
use crate::models::ResponseInputItem;
use crate::models::ResponseItem;
use crate::models::ShellToolCallParams;
use crate::project_doc::create_full_instructions;
use crate::project_doc::get_user_instructions;
use crate::protocol::AgentMessageEvent;
use crate::protocol::AgentReasoningEvent;
use crate::protocol::ApplyPatchApprovalRequestEvent;
@@ -103,10 +104,12 @@ impl Codex {
let (tx_sub, rx_sub) = async_channel::bounded(64);
let (tx_event, rx_event) = async_channel::bounded(64);
let instructions = create_full_instructions(&config).await;
let instructions = get_user_instructions(&config).await;
let configure_session = Op::ConfigureSession {
provider: config.model_provider.clone(),
model: config.model.clone(),
model_reasoning_effort: config.model_reasoning_effort,
model_reasoning_summary: config.model_reasoning_summary,
instructions,
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
@@ -295,6 +298,17 @@ impl Session {
state.approved_commands.insert(cmd);
}
/// Records items to both the rollout and the chat completions/ZDR
/// transcript, if enabled.
async fn record_conversation_items(&self, items: &[ResponseItem]) {
debug!("Recording items for conversation: {items:?}");
self.record_rollout_items(items).await;
if let Some(transcript) = self.state.lock().unwrap().zdr_transcript.as_mut() {
transcript.record_items(items);
}
}
/// Append the given items to the session's rollout transcript (if enabled)
/// and persist them to disk.
async fn record_rollout_items(&self, items: &[ResponseItem]) {
@@ -388,7 +402,7 @@ impl Session {
tool: &str,
arguments: Option<serde_json::Value>,
timeout: Option<Duration>,
) -> anyhow::Result<mcp_types::CallToolResult> {
) -> anyhow::Result<CallToolResult> {
self.mcp_connection_manager
.call_tool(server, tool, arguments, timeout)
.await
@@ -542,6 +556,8 @@ async fn submission_loop(
Op::ConfigureSession {
provider,
model,
model_reasoning_effort,
model_reasoning_summary,
instructions,
approval_policy,
sandbox_policy,
@@ -563,7 +579,12 @@ async fn submission_loop(
return;
}
let client = ModelClient::new(model.clone(), provider.clone());
let client = ModelClient::new(
model.clone(),
provider.clone(),
model_reasoning_effort,
model_reasoning_summary,
);
// abort any current running session and clone its state
let retain_zdr_transcript =
@@ -760,6 +781,19 @@ async fn submission_loop(
debug!("Agent loop exited");
}
/// Takes a user message as input and runs a loop where, at each turn, the model
/// replies with either:
///
/// - requested function calls
/// - an assistant message
///
/// While it is possible for the model to return multiple of these items in a
/// single turn, in practice, we generally see one item per turn:
///
/// - If the model requests a function call, we execute it and send the output
/// back to the model in the next turn.
/// - If the model sends only an assistant message, we record it in the
/// conversation history and consider the task complete.
async fn run_task(sess: Arc<Session>, sub_id: String, input: Vec<InputItem>) {
if input.is_empty() {
return;
@@ -772,10 +806,14 @@ async fn run_task(sess: Arc<Session>, sub_id: String, input: Vec<InputItem>) {
return;
}
let mut pending_response_input: Vec<ResponseInputItem> = vec![ResponseInputItem::from(input)];
let initial_input_for_turn = ResponseInputItem::from(input);
sess.record_conversation_items(&[initial_input_for_turn.clone().into()])
.await;
let mut input_for_next_turn: Vec<ResponseInputItem> = vec![initial_input_for_turn];
let last_agent_message: Option<String>;
loop {
let mut net_new_turn_input = pending_response_input
let mut net_new_turn_input = input_for_next_turn
.drain(..)
.map(ResponseItem::from)
.collect::<Vec<_>>();
@@ -783,11 +821,12 @@ async fn run_task(sess: Arc<Session>, sub_id: String, input: Vec<InputItem>) {
// Note that pending_input would be something like a message the user
// submitted through the UI while the model was running. Though the UI
// may support this, the model might not.
let pending_input = sess.get_pending_input().into_iter().map(ResponseItem::from);
net_new_turn_input.extend(pending_input);
// Persist only the net-new items of this turn to the rollout.
sess.record_rollout_items(&net_new_turn_input).await;
let pending_input = sess
.get_pending_input()
.into_iter()
.map(ResponseItem::from)
.collect::<Vec<ResponseItem>>();
sess.record_conversation_items(&pending_input).await;
// Construct the input that we will send to the model. When using the
// Chat completions API (or ZDR clients), the model needs the full
@@ -796,20 +835,24 @@ async fn run_task(sess: Arc<Session>, sub_id: String, input: Vec<InputItem>) {
// represents an append-only log without duplicates.
let turn_input: Vec<ResponseItem> =
if let Some(transcript) = sess.state.lock().unwrap().zdr_transcript.as_mut() {
// If we are using Chat/ZDR, we need to send the transcript with every turn.
// 1. Build up the conversation history for the next turn.
let full_transcript = [transcript.contents(), net_new_turn_input.clone()].concat();
// 2. Update the in-memory transcript so that future turns
// include these items as part of the history.
transcript.record_items(&net_new_turn_input);
// Note that `transcript.record_items()` does some filtering
// such that `full_transcript` may include items that were
// excluded from `transcript`.
full_transcript
// If we are using Chat/ZDR, we need to send the transcript with
// every turn. By induction, `transcript` already contains:
// - The `input` that kicked off this task.
// - Each `ResponseItem` that was recorded in the previous turn.
// - Each response to a `ResponseItem` (in practice, the only
// response type we seem to have is `FunctionCallOutput`).
//
// The only thing the `transcript` does not contain is the
// `pending_input` that was injected while the model was
// running. We need to add that to the conversation history
// so that the model can see it in the next turn.
[transcript.contents(), pending_input].concat()
} else {
// In practice, net_new_turn_input should contain only:
// - User messages
// - Outputs for function calls requested by the model
net_new_turn_input.extend(pending_input);
// In the Responses API path, we can just send the new items and
// record the same.
net_new_turn_input
@@ -830,29 +873,86 @@ async fn run_task(sess: Arc<Session>, sub_id: String, input: Vec<InputItem>) {
.collect();
match run_turn(&sess, sub_id.clone(), turn_input).await {
Ok(turn_output) => {
let (items, responses): (Vec<_>, Vec<_>) = turn_output
.into_iter()
.map(|p| (p.item, p.response))
.unzip();
let responses = responses
.into_iter()
.flatten()
.collect::<Vec<ResponseInputItem>>();
let mut items_to_record_in_conversation_history = Vec::<ResponseItem>::new();
let mut responses = Vec::<ResponseInputItem>::new();
for processed_response_item in turn_output {
let ProcessedResponseItem { item, response } = processed_response_item;
match (&item, &response) {
(ResponseItem::Message { role, .. }, None) if role == "assistant" => {
// If the model returned a message, we need to record it.
items_to_record_in_conversation_history.push(item);
}
(
ResponseItem::LocalShellCall { .. },
Some(ResponseInputItem::FunctionCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: output.clone(),
},
);
}
(
ResponseItem::FunctionCall { .. },
Some(ResponseInputItem::FunctionCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: output.clone(),
},
);
}
(
ResponseItem::FunctionCall { .. },
Some(ResponseInputItem::McpToolCallOutput { call_id, result }),
) => {
items_to_record_in_conversation_history.push(item);
let (content, success): (String, Option<bool>) = match result {
Ok(CallToolResult { content, is_error }) => {
match serde_json::to_string(content) {
Ok(content) => (content, *is_error),
Err(e) => {
warn!("Failed to serialize MCP tool call output: {e}");
(e.to_string(), Some(true))
}
}
}
Err(e) => (e.clone(), Some(true)),
};
items_to_record_in_conversation_history.push(
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload { content, success },
},
);
}
(ResponseItem::Reasoning { .. }, None) => {
// Omit from conversation history.
}
_ => {
warn!("Unexpected response item: {item:?} with response: {response:?}");
}
};
if let Some(response) = response {
responses.push(response);
}
}
// Only attempt to take the lock if there is something to record.
if !items.is_empty() {
// First persist model-generated output to the rollout file; this only borrows.
sess.record_rollout_items(&items).await;
// For ZDR we also need to keep a transcript clone.
if let Some(transcript) = sess.state.lock().unwrap().zdr_transcript.as_mut() {
transcript.record_items(&items);
}
if !items_to_record_in_conversation_history.is_empty() {
sess.record_conversation_items(&items_to_record_in_conversation_history)
.await;
}
if responses.is_empty() {
debug!("Turn completed");
last_agent_message = get_last_assistant_message_from_turn(&items);
last_agent_message = get_last_assistant_message_from_turn(
&items_to_record_in_conversation_history,
);
sess.maybe_notify(UserNotification::AgentTurnComplete {
turn_id: sub_id.clone(),
input_messages: turn_input_messages,
@@ -861,7 +961,7 @@ async fn run_task(sess: Arc<Session>, sub_id: String, input: Vec<InputItem>) {
break;
}
pending_response_input = responses;
input_for_next_turn = responses;
}
Err(e) => {
info!("Turn error: {e:#}");
@@ -890,9 +990,8 @@ async fn run_turn(
input: Vec<ResponseItem>,
) -> CodexResult<Vec<ProcessedResponseItem>> {
// Decide whether to use server-side storage (previous_response_id) or disable it
let (prev_id, store, is_first_turn) = {
let (prev_id, store) = {
let state = sess.state.lock().unwrap();
let is_first_turn = state.previous_response_id.is_none();
let store = state.zdr_transcript.is_none();
let prev_id = if store {
state.previous_response_id.clone()
@@ -901,20 +1000,14 @@ async fn run_turn(
// back, but trying to use it results in a 400.
None
};
(prev_id, store, is_first_turn)
};
let instructions = if is_first_turn {
sess.instructions.clone()
} else {
None
(prev_id, store)
};
let extra_tools = sess.mcp_connection_manager.list_all_tools();
let prompt = Prompt {
input,
prev_id,
instructions,
user_instructions: sess.instructions.clone(),
store,
extra_tools,
};

View File

@@ -1,6 +1,8 @@
use crate::config_profile::ConfigProfile;
use crate::config_types::History;
use crate::config_types::McpServerConfig;
use crate::config_types::ReasoningEffort;
use crate::config_types::ReasoningSummary;
use crate::config_types::ShellEnvironmentPolicy;
use crate::config_types::ShellEnvironmentPolicyToml;
use crate::config_types::Tui;
@@ -16,6 +18,7 @@ use serde::Deserialize;
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use toml::Value as TomlValue;
/// Maximum number of bytes of the documentation that will be embedded. Larger
/// files are *silently truncated* to this size so we do not take up too much of
@@ -41,6 +44,11 @@ pub struct Config {
pub shell_environment_policy: ShellEnvironmentPolicy,
/// When `true`, `AgentReasoning` events emitted by the backend will be
/// suppressed from the frontend output. This can reduce visual noise when
/// users are only interested in the final agent responses.
pub hide_agent_reasoning: bool,
/// Disable server-side response storage (sends the full conversation
/// context with every request). Currently necessary for OpenAI customers
/// who have opted into Zero Data Retention (ZDR).
@@ -106,6 +114,116 @@ pub struct Config {
///
/// When this program is invoked, arg0 will be set to `codex-linux-sandbox`.
pub codex_linux_sandbox_exe: Option<PathBuf>,
/// If not "none", the value to use for `reasoning.effort` when making a
/// request using the Responses API.
pub model_reasoning_effort: ReasoningEffort,
/// If not "none", the value to use for `reasoning.summary` when making a
/// request using the Responses API.
pub model_reasoning_summary: ReasoningSummary,
}
impl Config {
/// Load configuration with *generic* CLI overrides (`-c key=value`) applied
/// **in between** the values parsed from `config.toml` and the
/// strongly-typed overrides specified via [`ConfigOverrides`].
///
/// The precedence order is therefore: `config.toml` < `-c` overrides <
/// `ConfigOverrides`.
pub fn load_with_cli_overrides(
cli_overrides: Vec<(String, TomlValue)>,
overrides: ConfigOverrides,
) -> std::io::Result<Self> {
// Resolve the directory that stores Codex state (e.g. ~/.codex or the
// value of $CODEX_HOME) so we can embed it into the resulting
// `Config` instance.
let codex_home = find_codex_home()?;
// Step 1: parse `config.toml` into a generic JSON value.
let mut root_value = load_config_as_toml(&codex_home)?;
// Step 2: apply the `-c` overrides.
for (path, value) in cli_overrides.into_iter() {
apply_toml_override(&mut root_value, &path, value);
}
// Step 3: deserialize into `ConfigToml` so that Serde can enforce the
// correct types.
let cfg: ConfigToml = root_value.try_into().map_err(|e| {
tracing::error!("Failed to deserialize overridden config: {e}");
std::io::Error::new(std::io::ErrorKind::InvalidData, e)
})?;
// Step 4: merge with the strongly-typed overrides.
Self::load_from_base_config_with_overrides(cfg, overrides, codex_home)
}
}
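A usage sketch for the precedence described above (illustrative only: the override key is an example, and `ConfigOverrides` is assumed to expose a `model` field and implement `Default`):

```
use toml::Value as TomlValue;

fn load_example() -> std::io::Result<Config> {
    // Equivalent to passing `-c model_reasoning_effort="high"` on the CLI.
    let cli_overrides = vec![(
        "model_reasoning_effort".to_string(),
        TomlValue::String("high".to_string()),
    )];
    // Strongly-typed overrides win over both config.toml and `-c` values.
    let overrides = ConfigOverrides {
        model: Some("o3".to_string()),
        ..Default::default()
    };
    Config::load_with_cli_overrides(cli_overrides, overrides)
}
```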
/// Read `CODEX_HOME/config.toml` and return it as a generic TOML value. Returns
/// an empty TOML table when the file does not exist.
fn load_config_as_toml(codex_home: &Path) -> std::io::Result<TomlValue> {
let config_path = codex_home.join("config.toml");
match std::fs::read_to_string(&config_path) {
Ok(contents) => match toml::from_str::<TomlValue>(&contents) {
Ok(val) => Ok(val),
Err(e) => {
tracing::error!("Failed to parse config.toml: {e}");
Err(std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}
},
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
tracing::info!("config.toml not found, using defaults");
Ok(TomlValue::Table(Default::default()))
}
Err(e) => {
tracing::error!("Failed to read config.toml: {e}");
Err(e)
}
}
}
/// Apply a single dotted-path override onto a TOML value.
fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
use toml::value::Table;
let segments: Vec<&str> = path.split('.').collect();
let mut current = root;
for (idx, segment) in segments.iter().enumerate() {
let is_last = idx == segments.len() - 1;
if is_last {
match current {
TomlValue::Table(table) => {
table.insert(segment.to_string(), value);
}
_ => {
let mut table = Table::new();
table.insert(segment.to_string(), value);
*current = TomlValue::Table(table);
}
}
return;
}
// Traverse or create intermediate object.
match current {
TomlValue::Table(table) => {
current = table
.entry(segment.to_string())
.or_insert_with(|| TomlValue::Table(Table::new()));
}
_ => {
*current = TomlValue::Table(Table::new());
if let TomlValue::Table(tbl) = current {
current = tbl
.entry(segment.to_string())
.or_insert_with(|| TomlValue::Table(Table::new()));
}
}
}
}
}
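A unit-test sketch of the dotted-path semantics (it would have to live in the same module, since `apply_toml_override` is private; the `mcp_servers.docs.command` key is illustrative):

```
#[test]
fn apply_toml_override_creates_nested_tables() {
    let mut root = TomlValue::Table(Default::default());
    apply_toml_override(
        &mut root,
        "mcp_servers.docs.command",
        TomlValue::String("mcp-docs".to_string()),
    );
    // Intermediate tables are created on demand.
    let command = root
        .get("mcp_servers")
        .and_then(|v| v.get("docs"))
        .and_then(|v| v.get("command"));
    assert_eq!(command, Some(&TomlValue::String("mcp-docs".to_string())));
}
```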
/// Base config deserialized from ~/.codex/config.toml.
@@ -169,29 +287,13 @@ pub struct ConfigToml {
/// Collection of settings that are specific to the TUI.
pub tui: Option<Tui>,
}
impl ConfigToml {
/// Attempt to parse the file at `~/.codex/config.toml`. If it does not
/// exist, return a default config. Though if it exists and cannot be
/// parsed, report that to the user and force them to fix it.
fn load_from_toml(codex_home: &Path) -> std::io::Result<Self> {
let config_toml_path = codex_home.join("config.toml");
match std::fs::read_to_string(&config_toml_path) {
Ok(contents) => toml::from_str::<Self>(&contents).map_err(|e| {
tracing::error!("Failed to parse config.toml: {e}");
std::io::Error::new(std::io::ErrorKind::InvalidData, e)
}),
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
tracing::info!("config.toml not found, using defaults");
Ok(Self::default())
}
Err(e) => {
tracing::error!("Failed to read config.toml: {e}");
Err(e)
}
}
}
/// When set to `true`, `AgentReasoning` events will be hidden from the
/// UI/output. Defaults to `false`.
pub hide_agent_reasoning: Option<bool>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
}
fn deserialize_sandbox_permissions<'de, D>(
@@ -227,28 +329,12 @@ pub struct ConfigOverrides {
pub cwd: Option<PathBuf>,
pub approval_policy: Option<AskForApproval>,
pub sandbox_policy: Option<SandboxPolicy>,
pub disable_response_storage: Option<bool>,
pub model_provider: Option<String>,
pub config_profile: Option<String>,
pub codex_linux_sandbox_exe: Option<PathBuf>,
}
impl Config {
/// Load configuration, optionally applying overrides (CLI flags). Merges
/// ~/.codex/config.toml, ~/.codex/instructions.md, embedded defaults, and
/// any values provided in `overrides` (highest precedence).
pub fn load_with_overrides(overrides: ConfigOverrides) -> std::io::Result<Self> {
// Resolve the directory that stores Codex state (e.g. ~/.codex or the
// value of $CODEX_HOME) so we can embed it into the resulting
// `Config` instance.
let codex_home = find_codex_home()?;
let cfg: ConfigToml = ConfigToml::load_from_toml(&codex_home)?;
tracing::warn!("Config parsed from config.toml: {cfg:?}");
Self::load_from_base_config_with_overrides(cfg, overrides, codex_home)
}
/// Meant to be used exclusively for tests: `load_with_cli_overrides()` should
/// be used in all other cases.
pub fn load_from_base_config_with_overrides(
@@ -264,7 +350,6 @@ impl Config {
cwd,
approval_policy,
sandbox_policy,
disable_response_storage,
model_provider,
config_profile: config_profile_key,
codex_linux_sandbox_exe,
@@ -356,8 +441,8 @@ impl Config {
.unwrap_or_else(AskForApproval::default),
sandbox_policy,
shell_environment_policy,
disable_response_storage: disable_response_storage
.or(config_profile.disable_response_storage)
disable_response_storage: config_profile
.disable_response_storage
.or(cfg.disable_response_storage)
.unwrap_or(false),
notify: cfg.notify,
@@ -370,6 +455,10 @@ impl Config {
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
tui: cfg.tui.unwrap_or_default(),
codex_linux_sandbox_exe,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
model_reasoning_effort: cfg.model_reasoning_effort.unwrap_or_default(),
model_reasoning_summary: cfg.model_reasoning_summary.unwrap_or_default(),
};
Ok(config)
}
@@ -711,6 +800,9 @@ disable_response_storage = true
file_opener: UriBasedFileOpener::VsCode,
tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_summary: ReasoningSummary::default(),
},
o3_profile_config
);
@@ -750,6 +842,9 @@ disable_response_storage = true
file_opener: UriBasedFileOpener::VsCode,
tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_summary: ReasoningSummary::default(),
};
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
@@ -804,6 +899,9 @@ disable_response_storage = true
file_opener: UriBasedFileOpener::VsCode,
tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_summary: ReasoningSummary::default(),
};
assert_eq!(expected_zdr_profile_config, zdr_profile_config);

View File

@@ -4,9 +4,11 @@
// definitions that do not contain business logic.
use std::collections::HashMap;
use strum_macros::Display;
use wildmatch::WildMatchPattern;
use serde::Deserialize;
use serde::Serialize;
#[derive(Deserialize, Debug, Clone, PartialEq)]
pub struct McpServerConfig {
@@ -89,7 +91,7 @@ pub struct Tui {
}
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
#[serde(rename_all = "kebab-case")]
pub enum ShellEnvironmentPolicyInherit {
/// "Core" environment variables for the platform. On UNIX, this would
/// include HOME, LOGNAME, PATH, SHELL, and USER, among others.
@@ -175,3 +177,31 @@ impl From<ShellEnvironmentPolicyToml> for ShellEnvironmentPolicy {
}
}
}
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum ReasoningEffort {
Low,
#[default]
Medium,
High,
/// Option to disable reasoning.
None,
}
/// A summary of the reasoning performed by the model. This can be useful for
/// debugging and understanding the model's reasoning process.
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum ReasoningSummary {
#[default]
Auto,
Concise,
Detailed,
/// Option to disable reasoning summaries.
None,
}
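Because both enums derive serde and strum with lowercase renames, the wire values and `Display` output line up. A round-trip sketch:

```
#[test]
fn reasoning_enums_use_lowercase() {
    let effort: ReasoningEffort = serde_json::from_str("\"high\"").unwrap();
    assert_eq!(effort, ReasoningEffort::High);
    assert_eq!(effort.to_string(), "high");

    let summary: ReasoningSummary = serde_json::from_str("\"none\"").unwrap();
    assert_eq!(summary, ReasoningSummary::None);

    // Defaults: medium effort, auto summaries.
    assert_eq!(ReasoningEffort::default(), ReasoningEffort::Medium);
    assert_eq!(ReasoningSummary::default(), ReasoningSummary::Auto);
}
```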

View File

@@ -27,9 +27,13 @@ mod model_provider_info;
pub use model_provider_info::ModelProviderInfo;
pub use model_provider_info::WireApi;
mod models;
pub mod openai_api_key;
mod openai_tools;
mod project_doc;
pub mod protocol;
mod rollout;
mod safety;
mod user_notification;
pub mod util;
pub use client_common::model_supports_reasoning_summaries;

View File

@@ -50,51 +50,18 @@ pub(crate) async fn handle_mcp_tool_call(
notify_mcp_tool_call_event(sess, sub_id, tool_call_begin_event).await;
// Perform the tool call.
let (tool_call_end_event, tool_call_err) = match sess
let result = sess
.call_tool(&server, &tool_name, arguments_value, timeout)
.await
{
Ok(result) => (
EventMsg::McpToolCallEnd(McpToolCallEndEvent {
call_id,
success: !result.is_error.unwrap_or(false),
result: Some(result),
}),
None,
),
Err(e) => (
EventMsg::McpToolCallEnd(McpToolCallEndEvent {
call_id,
success: false,
result: None,
}),
Some(e),
),
};
.map_err(|e| format!("tool call error: {e}"));
let tool_call_end_event = EventMsg::McpToolCallEnd(McpToolCallEndEvent {
call_id: call_id.clone(),
result: result.clone(),
});
notify_mcp_tool_call_event(sess, sub_id, tool_call_end_event.clone()).await;
let EventMsg::McpToolCallEnd(McpToolCallEndEvent {
call_id,
success,
result,
}) = tool_call_end_event
else {
unimplemented!("unexpected event type");
};
ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: result.map_or_else(
|| format!("err: {tool_call_err:?}"),
|result| {
serde_json::to_string(&result)
.unwrap_or_else(|e| format!("JSON serialization error: {e}"))
},
),
success: Some(success),
},
}
ResponseInputItem::McpToolCallOutput { call_id, result }
}
async fn notify_mcp_tool_call_event(sess: &Session, sub_id: &str, event: EventMsg) {

View File

@@ -11,6 +11,7 @@ use std::collections::HashMap;
use std::env::VarError;
use crate::error::EnvVarError;
use crate::openai_api_key::get_openai_api_key;
/// Wire protocol that the provider speaks. Most third-party services only
/// implement the classic OpenAI Chat Completions JSON schema, whereas OpenAI
@@ -52,20 +53,27 @@ impl ModelProviderInfo {
/// cannot be found, returns an error.
pub fn api_key(&self) -> crate::error::Result<Option<String>> {
match &self.env_key {
Some(env_key) => std::env::var(env_key)
.and_then(|v| {
if v.trim().is_empty() {
Err(VarError::NotPresent)
} else {
Ok(Some(v))
}
})
.map_err(|_| {
crate::error::CodexErr::EnvVar(EnvVarError {
var: env_key.clone(),
instructions: self.env_key_instructions.clone(),
Some(env_key) => {
let env_value = if env_key == crate::openai_api_key::OPENAI_API_KEY_ENV_VAR {
get_openai_api_key().map_or_else(|| Err(VarError::NotPresent), Ok)
} else {
std::env::var(env_key)
};
env_value
.and_then(|v| {
if v.trim().is_empty() {
Err(VarError::NotPresent)
} else {
Ok(Some(v))
}
})
}),
.map_err(|_| {
crate::error::CodexErr::EnvVar(EnvVarError {
var: env_key.clone(),
instructions: self.env_key_instructions.clone(),
})
})
}
None => Ok(None),
}
}

View File

@@ -1,6 +1,7 @@
use std::collections::HashMap;
use base64::Engine;
use mcp_types::CallToolResult;
use serde::Deserialize;
use serde::Serialize;
use serde::ser::Serializer;
@@ -18,6 +19,10 @@ pub enum ResponseInputItem {
call_id: String,
output: FunctionCallOutputPayload,
},
McpToolCallOutput {
call_id: String,
result: Result<CallToolResult, String>,
},
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -77,6 +82,19 @@ impl From<ResponseInputItem> for ResponseItem {
ResponseInputItem::FunctionCallOutput { call_id, output } => {
Self::FunctionCallOutput { call_id, output }
}
ResponseInputItem::McpToolCallOutput { call_id, result } => Self::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
success: Some(result.is_ok()),
content: result.map_or_else(
|tool_call_err| format!("err: {tool_call_err:?}"),
|result| {
serde_json::to_string(&result)
.unwrap_or_else(|e| format!("JSON serialization error: {e}"))
},
),
},
},
}
}
}

View File

@@ -0,0 +1,24 @@
use std::env;
use std::sync::LazyLock;
use std::sync::RwLock;
pub const OPENAI_API_KEY_ENV_VAR: &str = "OPENAI_API_KEY";
static OPENAI_API_KEY: LazyLock<RwLock<Option<String>>> = LazyLock::new(|| {
let val = env::var(OPENAI_API_KEY_ENV_VAR)
.ok()
.and_then(|s| if s.is_empty() { None } else { Some(s) });
RwLock::new(val)
});
pub fn get_openai_api_key() -> Option<String> {
#![allow(clippy::unwrap_used)]
OPENAI_API_KEY.read().unwrap().clone()
}
pub fn set_openai_api_key(value: String) {
#![allow(clippy::unwrap_used)]
if !value.is_empty() {
*OPENAI_API_KEY.write().unwrap() = Some(value);
}
}
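A usage sketch: the login flow can publish a freshly minted key, and any thread can read it later, without touching the process environment (which would require the `unsafe` `std::env::set_var` in Rust 2024):

```
fn after_login(key_from_auth_json: String) {
    set_openai_api_key(key_from_auth_json);
    if let Some(key) = get_openai_api_key() {
        // e.g. attach `key` as a bearer token on the next request
        let _ = key;
    }
}
```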

View File

@@ -0,0 +1,157 @@
use serde::Serialize;
use serde_json::json;
use std::collections::BTreeMap;
use std::sync::LazyLock;
use crate::client_common::Prompt;
#[derive(Debug, Clone, Serialize)]
pub(crate) struct ResponsesApiTool {
name: &'static str,
description: &'static str,
strict: bool,
parameters: JsonSchema,
}
/// When serialized as JSON, this produces a valid "Tool" in the OpenAI
/// Responses API.
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "type")]
pub(crate) enum OpenAiTool {
#[serde(rename = "function")]
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
}
/// Generic JSONSchema subset needed for our tool definitions
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "type", rename_all = "lowercase")]
pub(crate) enum JsonSchema {
String,
Number,
Array {
items: Box<JsonSchema>,
},
Object {
properties: BTreeMap<String, JsonSchema>,
required: &'static [&'static str],
#[serde(rename = "additionalProperties")]
additional_properties: bool,
},
}
/// Tool usage specification
static DEFAULT_TOOLS: LazyLock<Vec<OpenAiTool>> = LazyLock::new(|| {
let mut properties = BTreeMap::new();
properties.insert(
"command".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String),
},
);
properties.insert("workdir".to_string(), JsonSchema::String);
properties.insert("timeout".to_string(), JsonSchema::Number);
vec![OpenAiTool::Function(ResponsesApiTool {
name: "shell",
description: "Runs a shell command, and returns its output.",
strict: false,
parameters: JsonSchema::Object {
properties,
required: &["command"],
additional_properties: false,
},
})]
});
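Given the serde attributes above (`tag = "type"` on both enums, lowercase schema types), the built-in shell tool should serialize roughly as sketched below; the assertions assume crate-internal access, e.g. a unit test:

```
// Expected shape (abridged):
// {
//   "type": "function",
//   "name": "shell",
//   "strict": false,
//   "parameters": {
//     "type": "object",
//     "properties": {
//       "command": { "type": "array", "items": { "type": "string" } },
//       "timeout": { "type": "number" },
//       "workdir": { "type": "string" }
//     },
//     "required": ["command"],
//     "additionalProperties": false
//   }
// }
#[test]
fn shell_tool_serializes_as_function_tool() {
    let tool = serde_json::to_value(&DEFAULT_TOOLS[0]).unwrap();
    assert_eq!(tool["type"], "function");
    assert_eq!(tool["name"], "shell");
    assert_eq!(tool["parameters"]["required"][0], "command");
}
```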
static DEFAULT_CODEX_MODEL_TOOLS: LazyLock<Vec<OpenAiTool>> =
LazyLock::new(|| vec![OpenAiTool::LocalShell {}]);
/// Returns JSON values that are compatible with Function Calling in the
/// Responses API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=responses
pub(crate) fn create_tools_json_for_responses_api(
prompt: &Prompt,
model: &str,
) -> crate::error::Result<Vec<serde_json::Value>> {
// Assemble tool list: built-in tools + any extra tools from the prompt.
let default_tools = if model.starts_with("codex") {
&DEFAULT_CODEX_MODEL_TOOLS
} else {
&DEFAULT_TOOLS
};
let mut tools_json = Vec::with_capacity(default_tools.len() + prompt.extra_tools.len());
for t in default_tools.iter() {
tools_json.push(serde_json::to_value(t)?);
}
tools_json.extend(
prompt
.extra_tools
.clone()
.into_iter()
.map(|(name, tool)| mcp_tool_to_openai_tool(name, tool)),
);
Ok(tools_json)
}
/// Returns JSON values that are compatible with Function Calling in the
/// Chat Completions API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=chat
pub(crate) fn create_tools_json_for_chat_completions_api(
prompt: &Prompt,
model: &str,
) -> crate::error::Result<Vec<serde_json::Value>> {
// We start with the JSON for the Responses API and then rewrite it to match
// the chat completions tool call format.
let responses_api_tools_json = create_tools_json_for_responses_api(prompt, model)?;
let tools_json = responses_api_tools_json
.into_iter()
.filter_map(|mut tool| {
if tool.get("type") != Some(&serde_json::Value::String("function".to_string())) {
return None;
}
if let Some(map) = tool.as_object_mut() {
// Remove "type" field as it is not needed in chat completions.
map.remove("type");
Some(json!({
"type": "function",
"function": map,
}))
} else {
None
}
})
.collect::<Vec<serde_json::Value>>();
Ok(tools_json)
}
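The rewrite is purely structural: the Responses API uses a flat tool object, while Chat Completions nests everything under a `"function"` key. A standalone sketch using only `serde_json` (the helper name is hypothetical):

```
use serde_json::json;

fn responses_tool_to_chat_tool(mut tool: serde_json::Value) -> Option<serde_json::Value> {
    if tool.get("type") != Some(&json!("function")) {
        return None; // e.g. "local_shell" has no Chat Completions equivalent
    }
    let map = tool.as_object_mut()?;
    map.remove("type");
    Some(json!({ "type": "function", "function": map }))
}

#[test]
fn rewrites_flat_function_tool_into_nested_shape() {
    let flat = json!({ "type": "function", "name": "shell", "parameters": {} });
    let nested = responses_tool_to_chat_tool(flat).unwrap();
    assert_eq!(nested["function"]["name"], "shell");
    assert!(nested["function"].get("type").is_none());
}
```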
fn mcp_tool_to_openai_tool(
fully_qualified_name: String,
tool: mcp_types::Tool,
) -> serde_json::Value {
let mcp_types::Tool {
description,
mut input_schema,
..
} = tool;
// OpenAI models mandate the "properties" field in the schema. The Agents
// SDK fixed this by inserting an empty object for "properties" if it is not
// already present (https://github.com/openai/openai-agents-python/issues/449),
// so here we do the same.
if input_schema.properties.is_none() {
input_schema.properties = Some(serde_json::Value::Object(serde_json::Map::new()));
}
// TODO(mbolin): Change the contract of this function to return
// ResponsesApiTool.
json!({
"name": fully_qualified_name,
"description": description,
"parameters": input_schema,
"type": "function",
})
}

View File

@@ -25,7 +25,7 @@ const PROJECT_DOC_SEPARATOR: &str = "\n\n--- project-doc ---\n\n";
/// Combines `Config::instructions` and `AGENTS.md` (if present) into a single
/// string of instructions.
pub(crate) async fn create_full_instructions(config: &Config) -> Option<String> {
pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
match find_project_doc(config).await {
Ok(Some(project_doc)) => match &config.instructions {
Some(original_instructions) => Some(format!(
@@ -168,7 +168,7 @@ mod tests {
async fn no_doc_file_returns_none() {
let tmp = tempfile::tempdir().expect("tempdir");
let res = create_full_instructions(&make_config(&tmp, 4096, None)).await;
let res = get_user_instructions(&make_config(&tmp, 4096, None)).await;
assert!(
res.is_none(),
"Expected None when AGENTS.md is absent and no system instructions provided"
@@ -182,7 +182,7 @@ mod tests {
let tmp = tempfile::tempdir().expect("tempdir");
fs::write(tmp.path().join("AGENTS.md"), "hello world").unwrap();
let res = create_full_instructions(&make_config(&tmp, 4096, None))
let res = get_user_instructions(&make_config(&tmp, 4096, None))
.await
.expect("doc expected");
@@ -201,7 +201,7 @@ mod tests {
let huge = "A".repeat(LIMIT * 2); // 2 KiB
fs::write(tmp.path().join("AGENTS.md"), &huge).unwrap();
let res = create_full_instructions(&make_config(&tmp, LIMIT, None))
let res = get_user_instructions(&make_config(&tmp, LIMIT, None))
.await
.expect("doc expected");
@@ -233,7 +233,7 @@ mod tests {
let mut cfg = make_config(&repo, 4096, None);
cfg.cwd = nested;
let res = create_full_instructions(&cfg).await.expect("doc expected");
let res = get_user_instructions(&cfg).await.expect("doc expected");
assert_eq!(res, "root level doc");
}
@@ -243,7 +243,7 @@ mod tests {
let tmp = tempfile::tempdir().expect("tempdir");
fs::write(tmp.path().join("AGENTS.md"), "something").unwrap();
let res = create_full_instructions(&make_config(&tmp, 0, None)).await;
let res = get_user_instructions(&make_config(&tmp, 0, None)).await;
assert!(
res.is_none(),
"With limit 0 the function should return None"
@@ -259,7 +259,7 @@ mod tests {
const INSTRUCTIONS: &str = "base instructions";
let res = create_full_instructions(&make_config(&tmp, 4096, Some(INSTRUCTIONS)))
let res = get_user_instructions(&make_config(&tmp, 4096, Some(INSTRUCTIONS)))
.await
.expect("should produce a combined instruction string");
@@ -276,7 +276,7 @@ mod tests {
const INSTRUCTIONS: &str = "some instructions";
let res = create_full_instructions(&make_config(&tmp, 4096, Some(INSTRUCTIONS))).await;
let res = get_user_instructions(&make_config(&tmp, 4096, Some(INSTRUCTIONS))).await;
assert_eq!(res, Some(INSTRUCTIONS.to_string()));
}

View File

@@ -12,6 +12,8 @@ use serde::Deserialize;
use serde::Serialize;
use uuid::Uuid;
use crate::config_types::ReasoningEffort as ReasoningEffortConfig;
use crate::config_types::ReasoningSummary as ReasoningSummaryConfig;
use crate::message_history::HistoryEntry;
use crate::model_provider_info::ModelProviderInfo;
@@ -37,6 +39,10 @@ pub enum Op {
/// If not specified, server will use its default model.
model: String,
model_reasoning_effort: ReasoningEffortConfig,
model_reasoning_summary: ReasoningSummaryConfig,
/// Model instructions
instructions: Option<String>,
/// When to escalate for approval for execution
@@ -396,10 +402,17 @@ pub struct McpToolCallBeginEvent {
pub struct McpToolCallEndEvent {
/// Identifier for the corresponding McpToolCallBegin that finished.
pub call_id: String,
/// Whether the tool call was successful. If `false`, `result` might not be present.
pub success: bool,
/// Result of the tool call. Note this could be an error.
pub result: Option<CallToolResult>,
pub result: Result<CallToolResult, String>,
}
impl McpToolCallEndEvent {
pub fn is_success(&self) -> bool {
match &self.result {
Ok(result) => !result.is_error.unwrap_or(false),
Err(_) => false,
}
}
}
#[derive(Debug, Clone, Deserialize, Serialize)]
@@ -554,7 +567,7 @@ mod tests {
id: "1234".to_string(),
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id,
model: "o4-mini".to_string(),
model: "codex-mini-latest".to_string(),
history_log_id: 0,
history_entry_count: 0,
}),
@@ -562,7 +575,7 @@ mod tests {
let serialized = serde_json::to_string(&event).unwrap();
assert_eq!(
serialized,
r#"{"id":"1234","msg":{"type":"session_configured","session_id":"67e55044-10b1-426f-9247-bb680e5fe0c8","model":"o4-mini","history_log_id":0,"history_entry_count":0}}"#
r#"{"id":"1234","msg":{"type":"session_configured","session_id":"67e55044-10b1-426f-9247-bb680e5fe0c8","model":"codex-mini-latest","history_log_id":0,"history_entry_count":0}}"#
);
}
}

View File

@@ -1,5 +1,6 @@
use clap::Parser;
use clap::ValueEnum;
use codex_common::CliConfigOverrides;
use codex_common::SandboxPermissionOption;
use std::path::PathBuf;
@@ -33,9 +34,8 @@ pub struct Cli {
#[arg(long = "skip-git-repo-check", default_value_t = false)]
pub skip_git_repo_check: bool,
/// Disable server-side response storage (sends the full conversation context with every request)
#[arg(long = "disable-response-storage", default_value_t = false)]
pub disable_response_storage: bool,
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
/// Specifies color settings for use in the output.
#[arg(long = "color", value_enum, default_value_t = Color::Auto)]
@@ -45,8 +45,10 @@ pub struct Cli {
#[arg(long = "output-last-message")]
pub last_message_file: Option<PathBuf>,
/// Initial instructions for the agent.
pub prompt: String,
/// Initial instructions for the agent. If not provided as an argument (or
/// if `-` is used), instructions are read from stdin.
#[arg(value_name = "PROMPT")]
pub prompt: Option<String>,
}
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, ValueEnum)]

View File

@@ -1,6 +1,7 @@
use chrono::Utc;
use codex_common::elapsed::format_elapsed;
use codex_core::WireApi;
use codex_core::config::Config;
use codex_core::model_supports_reasoning_summaries;
use codex_core::protocol::AgentMessageEvent;
use codex_core::protocol::BackgroundEventEvent;
use codex_core::protocol::ErrorEvent;
@@ -37,15 +38,20 @@ pub(crate) struct EventProcessor {
// using .style() with one of these fields. If you need a new style, add a
// new field here.
bold: Style,
italic: Style,
dimmed: Style,
magenta: Style,
red: Style,
green: Style,
cyan: Style,
/// Whether to include `AgentReasoning` events in the output.
show_agent_reasoning: bool,
}
impl EventProcessor {
pub(crate) fn create_with_ansi(with_ansi: bool) -> Self {
pub(crate) fn create_with_ansi(with_ansi: bool, show_agent_reasoning: bool) -> Self {
let call_id_to_command = HashMap::new();
let call_id_to_patch = HashMap::new();
let call_id_to_tool_call = HashMap::new();
@@ -55,22 +61,28 @@ impl EventProcessor {
call_id_to_command,
call_id_to_patch,
bold: Style::new().bold(),
italic: Style::new().italic(),
dimmed: Style::new().dimmed(),
magenta: Style::new().magenta(),
red: Style::new().red(),
green: Style::new().green(),
cyan: Style::new().cyan(),
call_id_to_tool_call,
show_agent_reasoning,
}
} else {
Self {
call_id_to_command,
call_id_to_patch,
bold: Style::new(),
italic: Style::new(),
dimmed: Style::new(),
magenta: Style::new(),
red: Style::new(),
green: Style::new(),
cyan: Style::new(),
call_id_to_tool_call,
show_agent_reasoning,
}
}
}
@@ -94,59 +106,85 @@ struct PatchApplyBegin {
auto_approved: bool,
}
// Timestamped println helper. The timestamp is styled with self.dimmed.
#[macro_export]
macro_rules! ts_println {
($($arg:tt)*) => {{
let now = Utc::now();
let formatted = now.format("%Y-%m-%dT%H:%M:%S").to_string();
print!("[{}] ", formatted);
($self:ident, $($arg:tt)*) => {{
let now = chrono::Utc::now();
let formatted = now.format("[%Y-%m-%dT%H:%M:%S]");
print!("{} ", formatted.style($self.dimmed));
println!($($arg)*);
}};
}
/// Print a concise summary of the effective configuration that will be used
/// for the session. This mirrors the information shown in the TUI welcome
/// screen.
pub(crate) fn print_config_summary(config: &Config, with_ansi: bool) {
let bold = if with_ansi {
Style::new().bold()
} else {
Style::new()
};
impl EventProcessor {
/// Print a concise summary of the effective configuration that will be used
/// for the session. This mirrors the information shown in the TUI welcome
/// screen.
pub(crate) fn print_config_summary(&mut self, config: &Config, prompt: &str) {
const VERSION: &str = env!("CARGO_PKG_VERSION");
ts_println!(
self,
"OpenAI Codex v{} (research preview)\n--------",
VERSION
);
ts_println!("OpenAI Codex (research preview)\n--------");
let mut entries = vec![
("workdir", config.cwd.display().to_string()),
("model", config.model.clone()),
("provider", config.model_provider_id.clone()),
("approval", format!("{:?}", config.approval_policy)),
("sandbox", format!("{:?}", config.sandbox_policy)),
];
if config.model_provider.wire_api == WireApi::Responses
&& model_supports_reasoning_summaries(&config.model)
{
entries.push((
"reasoning effort",
config.model_reasoning_effort.to_string(),
));
entries.push((
"reasoning summaries",
config.model_reasoning_summary.to_string(),
));
}
let entries = vec![
("workdir", config.cwd.display().to_string()),
("model", config.model.clone()),
("provider", config.model_provider_id.clone()),
("approval", format!("{:?}", config.approval_policy)),
("sandbox", format!("{:?}", config.sandbox_policy)),
];
for (key, value) in entries {
println!("{} {}", format!("{key}:").style(self.bold), value);
}
for (key, value) in entries {
println!("{} {}", format!("{key}: ").style(bold), value);
println!("--------");
// Echo the prompt that will be sent to the agent so it is visible in the
// transcript/logs before any events come in. Note the prompt may have been
// read from stdin, so it may not be visible in the terminal otherwise.
ts_println!(
self,
"{}\n{}",
"User instructions:".style(self.bold).style(self.cyan),
prompt
);
}
println!("--------\n");
}
impl EventProcessor {
pub(crate) fn process_event(&mut self, event: Event) {
let Event { id: _, msg } = event;
match msg {
EventMsg::Error(ErrorEvent { message }) => {
let prefix = "ERROR:".style(self.red);
ts_println!("{prefix} {message}");
ts_println!(self, "{prefix} {message}");
}
EventMsg::BackgroundEvent(BackgroundEventEvent { message }) => {
ts_println!("{}", message.style(self.dimmed));
ts_println!(self, "{}", message.style(self.dimmed));
}
EventMsg::TaskStarted | EventMsg::TaskComplete(_) => {
// Ignore.
}
EventMsg::AgentMessage(AgentMessageEvent { message }) => {
let prefix = "Agent message:".style(self.bold);
ts_println!("{prefix} {message}");
ts_println!(
self,
"{}\n{message}",
"codex".style(self.bold).style(self.magenta)
);
}
EventMsg::ExecCommandBegin(ExecCommandBeginEvent {
call_id,
@@ -161,6 +199,7 @@ impl EventProcessor {
},
);
ts_println!(
self,
"{} {} in {}",
"exec".style(self.magenta),
escape_command(&command).style(self.bold),
@@ -196,11 +235,11 @@ impl EventProcessor {
match exit_code {
0 => {
let title = format!("{call} succeeded{duration}:");
ts_println!("{}", title.style(self.green));
ts_println!(self, "{}", title.style(self.green));
}
_ => {
let title = format!("{call} exited {exit_code}{duration}:");
ts_println!("{}", title.style(self.red));
ts_println!(self, "{}", title.style(self.red));
}
}
println!("{}", truncated_output.style(self.dimmed));
@@ -237,16 +276,15 @@ impl EventProcessor {
);
ts_println!(
self,
"{} {}",
"tool".style(self.magenta),
invocation.style(self.bold),
);
}
EventMsg::McpToolCallEnd(McpToolCallEndEvent {
call_id,
success,
result,
}) => {
EventMsg::McpToolCallEnd(tool_call_end_event) => {
let is_success = tool_call_end_event.is_success();
let McpToolCallEndEvent { call_id, result } = tool_call_end_event;
// Retrieve start time and invocation for duration calculation and labeling.
let info = self.call_id_to_tool_call.remove(&call_id);
@@ -261,13 +299,13 @@ impl EventProcessor {
(String::new(), format!("tool('{call_id}')"))
};
let status_str = if success { "success" } else { "failed" };
let title_style = if success { self.green } else { self.red };
let status_str = if is_success { "success" } else { "failed" };
let title_style = if is_success { self.green } else { self.red };
let title = format!("{invocation} {status_str}{duration}:");
ts_println!("{}", title.style(title_style));
ts_println!(self, "{}", title.style(title_style));
if let Some(res) = result {
if let Ok(res) = result {
let val: serde_json::Value = res.into();
let pretty =
serde_json::to_string_pretty(&val).unwrap_or_else(|_| val.to_string());
@@ -293,6 +331,7 @@ impl EventProcessor {
);
ts_println!(
self,
"{} auto_approved={}:",
"apply_patch".style(self.magenta),
auto_approved,
@@ -384,7 +423,7 @@ impl EventProcessor {
};
let title = format!("{label} exited {exit_code}{duration}:");
ts_println!("{}", title.style(title_style));
ts_println!(self, "{}", title.style(title_style));
for line in output.lines() {
println!("{}", line.style(self.dimmed));
}
@@ -396,7 +435,14 @@ impl EventProcessor {
// Should we exit?
}
EventMsg::AgentReasoning(agent_reasoning_event) => {
println!("thinking: {}", agent_reasoning_event.text);
if self.show_agent_reasoning {
ts_println!(
self,
"{}\n{}",
"thinking".style(self.italic).style(self.magenta),
agent_reasoning_event.text
);
}
}
EventMsg::SessionConfigured(session_configured_event) => {
let SessionConfiguredEvent {
@@ -407,12 +453,13 @@ impl EventProcessor {
} = session_configured_event;
ts_println!(
self,
"{} {}",
"codex session".style(self.magenta).style(self.bold),
session_id.to_string().style(self.dimmed)
);
ts_println!("model: {}", model);
ts_println!(self, "model: {}", model);
println!();
}
EventMsg::GetHistoryEntryResponse(_) => {

View File

@@ -2,6 +2,7 @@ mod cli;
mod event_processor;
use std::io::IsTerminal;
use std::io::Read;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
@@ -19,7 +20,6 @@ use codex_core::protocol::SandboxPolicy;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::util::is_inside_git_repo;
use event_processor::EventProcessor;
use event_processor::print_config_summary;
use tracing::debug;
use tracing::error;
use tracing::info;
@@ -34,12 +34,47 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
sandbox,
cwd,
skip_git_repo_check,
disable_response_storage,
color,
last_message_file,
prompt,
config_overrides,
} = cli;
// Determine the prompt based on CLI arg and/or stdin.
let prompt = match prompt {
Some(p) if p != "-" => p,
// Either `-` was passed or no positional arg.
maybe_dash => {
// When no arg (None) **and** stdin is a TTY, bail out early unless the
// user explicitly forced reading via `-`.
let force_stdin = matches!(maybe_dash.as_deref(), Some("-"));
if std::io::stdin().is_terminal() && !force_stdin {
eprintln!(
"No prompt provided. Either specify one as an argument or pipe the prompt into stdin."
);
std::process::exit(1);
}
// Ensure the user knows we are waiting on stdin, as they may
// have gotten into this state by mistake. If so, and they are not
// writing to stdin, Codex will hang indefinitely, so this should
// help them debug in that case.
if !force_stdin {
eprintln!("Reading prompt from stdin...");
}
let mut buffer = String::new();
if let Err(e) = std::io::stdin().read_to_string(&mut buffer) {
eprintln!("Failed to read prompt from stdin: {e}");
std::process::exit(1);
} else if buffer.trim().is_empty() {
eprintln!("No prompt provided via stdin.");
std::process::exit(1);
}
buffer
}
};
let (stdout_with_ansi, stderr_with_ansi) = match color {
cli::Color::Always => (true, true),
cli::Color::Never => (false, false),
@@ -63,18 +98,25 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
// the user for approval.
approval_policy: Some(AskForApproval::Never),
sandbox_policy,
disable_response_storage: if disable_response_storage {
Some(true)
} else {
None
},
cwd: cwd.map(|p| p.canonicalize().unwrap_or(p)),
model_provider: None,
codex_linux_sandbox_exe,
};
let config = Config::load_with_overrides(overrides)?;
// Print the effective configuration so users can see what Codex is using.
print_config_summary(&config, stdout_with_ansi);
// Parse `-c` overrides.
let cli_kv_overrides = match config_overrides.parse_overrides() {
Ok(v) => v,
Err(e) => {
eprintln!("Error parsing -c overrides: {e}");
std::process::exit(1);
}
};
let config = Config::load_with_cli_overrides(cli_kv_overrides, overrides)?;
let mut event_processor =
EventProcessor::create_with_ansi(stdout_with_ansi, !config.hide_agent_reasoning);
// Print the effective configuration and prompt so users can see what Codex
// is using.
event_processor.print_config_summary(&config, &prompt);
if !skip_git_repo_check && !is_inside_git_repo(&config) {
eprintln!("Not inside a Git repo and --skip-git-repo-check was not specified.");
@@ -164,7 +206,6 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
info!("Sent prompt with event ID: {initial_prompt_task_id}");
// Run the loop until the task is complete.
let mut event_processor = EventProcessor::create_with_ansi(stdout_with_ansi);
while let Some(event) = rx.recv().await {
let (is_last_event, last_assistant_message) = match &event.msg {
EventMsg::TaskComplete(TaskCompleteEvent { last_agent_message }) => {

View File

@@ -10,13 +10,30 @@
//! This allows us to ship a completely separate set of functionality as part
//! of the `codex-exec` binary.
use clap::Parser;
use codex_common::CliConfigOverrides;
use codex_exec::Cli;
use codex_exec::run_main;
#[derive(Parser, Debug)]
struct TopCli {
#[clap(flatten)]
config_overrides: CliConfigOverrides,
#[clap(flatten)]
inner: Cli,
}
fn main() -> anyhow::Result<()> {
codex_linux_sandbox::run_with_sandbox(|codex_linux_sandbox_exe| async move {
let cli = Cli::parse();
run_main(cli, codex_linux_sandbox_exe).await?;
let top_cli = TopCli::parse();
// Merge root-level overrides into inner CLI struct so downstream logic remains unchanged.
let mut inner = top_cli.inner;
inner
.config_overrides
.raw_overrides
.splice(0..0, top_cli.config_overrides.raw_overrides);
run_main(inner, codex_linux_sandbox_exe).await?;
Ok(())
})
}
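`Vec::splice(0..0, ...)` prepends, so the root-level `-c` overrides end up first in the list. Since overrides are applied in order, a subcommand-level `-c` for the same key wins. A minimal sketch of that ordering:

```
#[test]
fn root_level_overrides_are_prepended() {
    let mut raw = vec!["model=o3".to_string()];          // subcommand-level
    raw.splice(0..0, vec!["model=o4-mini".to_string()]); // root-level
    // Applied in order, so the subcommand value is applied last and wins.
    assert_eq!(raw, vec!["model=o4-mini".to_string(), "model=o3".to_string()]);
}
```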

View File

@@ -24,7 +24,7 @@ env_logger = "0.11.5"
log = "0.4"
multimap = "0.10.0"
path-absolutize = "3.1.1"
regex = "1.11.1"
regex-lite = "0.1"
serde = { version = "1.0.194", features = ["derive"] }
serde_json = "1.0.110"
serde_with = { version = "3", features = ["macros"] }

View File

@@ -1,6 +1,6 @@
use multimap::MultiMap;
use regex::Error as RegexError;
use regex::Regex;
use regex_lite::Error as RegexError;
use regex_lite::Regex;
use crate::ExecCall;
use crate::Forbidden;
@@ -29,7 +29,7 @@ impl Policy {
} else {
let escaped_substrings = forbidden_substrings
.iter()
.map(|s| regex::escape(s))
.map(|s| regex_lite::escape(s))
.collect::<Vec<_>>()
.join("|");
Some(Regex::new(&format!("({escaped_substrings})"))?)

View File

@@ -7,7 +7,7 @@ use crate::arg_matcher::ArgMatcher;
use crate::opt::OptMeta;
use log::info;
use multimap::MultiMap;
use regex::Regex;
use regex_lite::Regex;
use starlark::any::ProvidesStaticType;
use starlark::environment::GlobalsBuilder;
use starlark::environment::LibraryExtension;
@@ -73,7 +73,7 @@ impl PolicyParser {
#[derive(Debug)]
pub struct ForbiddenProgramRegex {
pub regex: regex::Regex,
pub regex: regex_lite::Regex,
pub reason: String,
}
@@ -93,7 +93,7 @@ impl PolicyBuilder {
}
}
fn build(self) -> Result<Policy, regex::Error> {
fn build(self) -> Result<Policy, regex_lite::Error> {
let programs = self.programs.into_inner();
let forbidden_program_regexes = self.forbidden_program_regexes.into_inner();
let forbidden_substrings = self.forbidden_substrings.into_inner();
@@ -207,7 +207,7 @@ fn policy_builtins(builder: &mut GlobalsBuilder) {
.unwrap()
.downcast_ref::<PolicyBuilder>()
.unwrap();
let compiled_regex = regex::Regex::new(&regex)?;
let compiled_regex = regex_lite::Regex::new(&regex)?;
policy_builder.add_forbidden_program_regex(compiled_regex, reason);
Ok(NoneType)
}

View File

@@ -1,15 +1,21 @@
set positional-arguments
# Display help
help:
just -l
# Install the `codex-tui` binary
install:
cargo install --path tui
# `codex`
codex *args:
cargo run --bin codex -- "$@"
# Run the TUI app
# `codex exec`
exec *args:
cargo run --bin codex -- exec "$@"
# `codex tui`
tui *args:
cargo run --bin codex -- tui {{args}}
cargo run --bin codex -- tui "$@"
# Run the Proto app
proto *args:
cargo run --bin codex -- proto {{args}}
# format code
fmt:
cargo fmt -- --config imports_granularity=Item

View File

@@ -15,6 +15,23 @@ use std::sync::Arc;
use tempfile::NamedTempFile;
use tokio::sync::Notify;
// At least on GitHub CI, the arm64 tests appear to need longer timeouts.
#[cfg(not(target_arch = "aarch64"))]
const SHORT_TIMEOUT_MS: u64 = 200;
#[cfg(target_arch = "aarch64")]
const SHORT_TIMEOUT_MS: u64 = 5_000;
#[cfg(not(target_arch = "aarch64"))]
const LONG_TIMEOUT_MS: u64 = 1_000;
#[cfg(target_arch = "aarch64")]
const LONG_TIMEOUT_MS: u64 = 5_000;
#[cfg(not(target_arch = "aarch64"))]
const NETWORK_TIMEOUT_MS: u64 = 2_000;
#[cfg(target_arch = "aarch64")]
const NETWORK_TIMEOUT_MS: u64 = 10_000;
fn create_env_from_core_vars() -> HashMap<String, String> {
let policy = ShellEnvironmentPolicy::default();
create_env(&policy)
@@ -52,7 +69,7 @@ async fn run_cmd(cmd: &[&str], writable_roots: &[PathBuf], timeout_ms: u64) {
#[tokio::test]
async fn test_root_read() {
run_cmd(&["ls", "-l", "/bin"], &[], 200).await;
run_cmd(&["ls", "-l", "/bin"], &[], SHORT_TIMEOUT_MS).await;
}
#[tokio::test]
@@ -63,7 +80,7 @@ async fn test_root_write() {
run_cmd(
&["bash", "-lc", &format!("echo blah > {}", tmpfile_path)],
&[],
200,
SHORT_TIMEOUT_MS,
)
.await;
}
@@ -75,7 +92,7 @@ async fn test_dev_null_write() {
&[],
// We have seen timeouts when running this test in CI on GitHub,
// so we are using a generous timeout until we can diagnose further.
1_000,
LONG_TIMEOUT_MS,
)
.await;
}
@@ -93,7 +110,7 @@ async fn test_writable_root() {
&[tmpdir.path().to_path_buf()],
// We have seen timeouts when running this test in CI on GitHub,
// so we are using a generous timeout until we can diagnose further.
1_000,
LONG_TIMEOUT_MS,
)
.await;
}
@@ -115,7 +132,7 @@ async fn assert_network_blocked(cmd: &[&str]) {
cwd,
// Give the tool a generous timeout so even slow DNS timeouts
// do not stall the suite.
timeout_ms: Some(2_000),
timeout_ms: Some(NETWORK_TIMEOUT_MS),
env: create_env_from_core_vars(),
};

codex-rs/login/Cargo.toml Normal file
View File

@@ -0,0 +1,20 @@
[package]
name = "codex-login"
version = { workspace = true }
edition = "2024"
[lints]
workspace = true
[dependencies]
chrono = { version = "0.4", features = ["serde"] }
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = [
"io-std",
"macros",
"process",
"rt-multi-thread",
"signal",
] }

codex-rs/login/src/lib.rs Normal file
View File

@@ -0,0 +1,168 @@
use chrono::DateTime;
use chrono::Utc;
use serde::Deserialize;
use serde::Serialize;
use std::fs::OpenOptions;
use std::io::Read;
use std::io::Write;
#[cfg(unix)]
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;
use std::process::Stdio;
use tokio::process::Command;
const SOURCE_FOR_PYTHON_SERVER: &str = include_str!("./login_with_chatgpt.py");
const CLIENT_ID: &str = "app_EMoamEEZ73f0CkXaXp7hrann";
/// Run `python3 -c {{SOURCE_FOR_PYTHON_SERVER}}` with the CODEX_HOME
/// environment variable set to the provided `codex_home` path. If the
/// subprocess exits 0, read the OPENAI_API_KEY property out of
/// CODEX_HOME/auth.json and return Ok(OPENAI_API_KEY). Otherwise, return Err
/// with any information from the subprocess.
///
/// If `capture_output` is true, the subprocess's output will be captured and
/// recorded in memory. Otherwise, the subprocess's output will be sent to the
/// current process's stdout/stderr.
pub async fn login_with_chatgpt(
codex_home: &Path,
capture_output: bool,
) -> std::io::Result<String> {
let child = Command::new("python3")
.arg("-c")
.arg(SOURCE_FOR_PYTHON_SERVER)
.env("CODEX_HOME", codex_home)
.stdin(Stdio::null())
.stdout(if capture_output {
Stdio::piped()
} else {
Stdio::inherit()
})
.stderr(if capture_output {
Stdio::piped()
} else {
Stdio::inherit()
})
.spawn()?;
let output = child.wait_with_output().await?;
if output.status.success() {
try_read_openai_api_key(codex_home).await
} else {
let stderr = String::from_utf8_lossy(&output.stderr);
Err(std::io::Error::other(format!(
"login_with_chatgpt subprocess failed: {stderr}"
)))
}
}
/// Attempt to read the `OPENAI_API_KEY` from the `auth.json` file in the given
/// `CODEX_HOME` directory, refreshing it if necessary.
pub async fn try_read_openai_api_key(codex_home: &Path) -> std::io::Result<String> {
let auth_path = codex_home.join("auth.json");
let mut file = std::fs::File::open(&auth_path)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let auth_dot_json: AuthDotJson = serde_json::from_str(&contents)?;
if is_expired(&auth_dot_json) {
let refresh_response = try_refresh_token(&auth_dot_json).await?;
let mut auth_dot_json = auth_dot_json;
auth_dot_json.tokens.id_token = refresh_response.id_token;
if let Some(refresh_token) = refresh_response.refresh_token {
auth_dot_json.tokens.refresh_token = refresh_token;
}
auth_dot_json.last_refresh = Utc::now();
let mut options = OpenOptions::new();
options.write(true).create(true);
#[cfg(unix)]
{
options.mode(0o600);
}
let json_data = serde_json::to_string(&auth_dot_json)?;
{
let mut file = options.open(&auth_path)?;
file.write_all(json_data.as_bytes())?;
file.flush()?;
}
Ok(auth_dot_json.openai_api_key)
} else {
Ok(auth_dot_json.openai_api_key)
}
}
fn is_expired(auth_dot_json: &AuthDotJson) -> bool {
let last_refresh = auth_dot_json.last_refresh;
last_refresh < Utc::now() - chrono::Duration::days(28)
}
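The freshness window is a simple 28-day cutoff on `last_refresh`. A boundary sketch of the comparison (standalone, without constructing the private struct):

```
use chrono::{Duration, Utc};

#[test]
fn twenty_eight_day_cutoff() {
    let cutoff = Utc::now() - Duration::days(28);
    assert!(Utc::now() - Duration::days(1) >= cutoff); // still fresh
    assert!(Utc::now() - Duration::days(29) < cutoff); // triggers refresh
}
```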
async fn try_refresh_token(auth_dot_json: &AuthDotJson) -> std::io::Result<RefreshResponse> {
let refresh_request = RefreshRequest {
client_id: CLIENT_ID,
grant_type: "refresh_token",
refresh_token: auth_dot_json.tokens.refresh_token.clone(),
scope: "openid profile email",
};
let client = reqwest::Client::new();
let response = client
.post("https://auth.openai.com/oauth/token")
.header("Content-Type", "application/json")
.json(&refresh_request)
.send()
.await
.map_err(std::io::Error::other)?;
if response.status().is_success() {
let refresh_response = response
.json::<RefreshResponse>()
.await
.map_err(std::io::Error::other)?;
Ok(refresh_response)
} else {
Err(std::io::Error::other(format!(
"Failed to refresh token: {}",
response.status()
)))
}
}
#[derive(Serialize)]
struct RefreshRequest {
client_id: &'static str,
grant_type: &'static str,
refresh_token: String,
scope: &'static str,
}
#[derive(Deserialize)]
struct RefreshResponse {
id_token: String,
refresh_token: Option<String>,
}
/// Expected structure for $CODEX_HOME/auth.json.
#[derive(Deserialize, Serialize)]
struct AuthDotJson {
#[serde(rename = "OPENAI_API_KEY")]
openai_api_key: String,
tokens: TokenData,
last_refresh: DateTime<Utc>,
}
#[derive(Deserialize, Serialize)]
struct TokenData {
/// This is a JWT.
id_token: String,
/// This is a JWT.
#[allow(dead_code)]
access_token: String,
refresh_token: String,
}
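From the serde attributes above, `CODEX_HOME/auth.json` should look roughly like the sketch below (placeholder values only; `last_refresh` is a `DateTime<Utc>`, which chrono serializes as RFC 3339):

```
fn example_auth_json() -> serde_json::Value {
    serde_json::json!({
        "OPENAI_API_KEY": "sk-...",
        "tokens": {
            "id_token": "<jwt>",
            "access_token": "<jwt>",
            "refresh_token": "<opaque token>"
        },
        "last_refresh": "2025-06-05T20:29:46Z"
    })
}
```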

View File

@@ -0,0 +1,624 @@
"""Script that spawns a local webserver for retrieving an OpenAI API key.
- Listens on 127.0.0.1:1455
- Opens http://localhost:1455/auth/callback in the browser
- If the user successfully navigates the auth flow,
$CODEX_HOME/auth.json will be written with the API key.
- User will be redirected to http://localhost:1455/success upon success.
The script should exit with a non-zero code if the user fails to navigate the
auth flow.
"""
from __future__ import annotations
import argparse
import base64
import datetime
import errno
import hashlib
import http.server
import json
import os
import secrets
import sys
import threading
import urllib.parse
import urllib.request
import webbrowser
from dataclasses import dataclass
# Required port for OAuth client.
REQUIRED_PORT = 1455
URL_BASE = f"http://localhost:{REQUIRED_PORT}"
DEFAULT_ISSUER = "https://auth.openai.com"
DEFAULT_CLIENT_ID = "app_EMoamEEZ73f0CkXaXp7hrann"
EXIT_CODE_WHEN_ADDRESS_ALREADY_IN_USE = 13
@dataclass
class TokenData:
id_token: str
access_token: str
refresh_token: str
@dataclass
class AuthBundle:
"""Aggregates authentication data produced after successful OAuth flow."""
api_key: str
token_data: TokenData
last_refresh: str
def main() -> None:
parser = argparse.ArgumentParser(description="Retrieve API key via local HTTP flow")
parser.add_argument(
"--no-browser",
action="store_true",
help="Do not automatically open the browser",
)
parser.add_argument("--verbose", action="store_true", help="Enable request logging")
args = parser.parse_args()
codex_home = os.environ.get("CODEX_HOME")
if not codex_home:
eprint("ERROR: CODEX_HOME environment variable is not set")
sys.exit(1)
# Spawn server.
try:
httpd = _ApiKeyHTTPServer(
("127.0.0.1", REQUIRED_PORT),
_ApiKeyHTTPHandler,
codex_home=codex_home,
verbose=args.verbose,
)
except OSError as e:
eprint(f"ERROR: {e}")
if e.errno == errno.EADDRINUSE:
# Caller might want to handle this case specially.
sys.exit(EXIT_CODE_WHEN_ADDRESS_ALREADY_IN_USE)
else:
sys.exit(1)
auth_url = httpd.auth_url()
with httpd:
eprint(f"Starting local login server on {URL_BASE}")
if not args.no_browser:
try:
webbrowser.open(auth_url, new=1, autoraise=True)
except Exception as e:
eprint(f"Failed to open browser: {e}")
eprint(
f"If your browser did not open, navigate to this URL to authenticate:\n\n{auth_url}"
)
# Run the server in the main thread until `shutdown()` is called by the
# request handler.
try:
httpd.serve_forever()
except KeyboardInterrupt:
eprint("\nKeyboard interrupt received, exiting.")
# Server has been shut down by the request handler. Exit with the code
# it set (0 on success, non-zero on failure).
sys.exit(httpd.exit_code)
class _ApiKeyHTTPHandler(http.server.BaseHTTPRequestHandler):
"""A minimal request handler that captures an *api key* from query/post."""
# We store the result in the server instance itself.
server: "_ApiKeyHTTPServer" # type: ignore[override] - helpful annotation
def do_GET(self) -> None: # noqa: N802 required by BaseHTTPRequestHandler
path = urllib.parse.urlparse(self.path).path
if path == "/success":
# Serve confirmation page then gracefully shut down the server so
# the main thread can exit with the previously captured exit code.
self._send_html(LOGIN_SUCCESS_HTML)
# Ensure the data is flushed to the client before we stop.
try:
self.wfile.flush()
except Exception as e:
eprint(f"Failed to flush response: {e}")
self.request_shutdown()
elif path == "/auth/callback":
query = urllib.parse.urlparse(self.path).query
params = urllib.parse.parse_qs(query)
# Validate state -------------------------------------------------
if params.get("state", [None])[0] != self.server.state:
self.send_error(400, "State parameter mismatch")
return
# Standard OAuth flow -----------------------------------------
code = params.get("code", [None])[0]
if not code:
self.send_error(400, "Missing authorization code")
return
try:
auth_bundle, success_url = self._exchange_code_for_api_key(code)
except Exception as exc: # noqa: BLE001 propagate to client
self.send_error(500, f"Token exchange failed: {exc}")
return
# Persist API key along with additional token metadata.
if _write_auth_file(
auth=auth_bundle,
codex_home=self.server.codex_home,
):
self.server.exit_code = 0
self._send_redirect(success_url)
else:
self.send_error(500, "Unable to persist auth file")
else:
self.send_error(404, "Endpoint not supported")
def do_POST(self) -> None: # noqa: N802 required by BaseHTTPRequestHandler
self.send_error(404, "Endpoint not supported")
def send_error(self, code, message=None, explain=None) -> None:
"""Send an error response and stop the server.
We avoid calling `sys.exit()` directly from the request-handling thread
so that the response has a chance to be written to the socket. Instead
we shut the server down; the main thread will then exit with the
appropriate status code.
"""
super().send_error(code, message, explain)
try:
self.wfile.flush()
except Exception as e:
eprint(f"Failed to flush response: {e}")
self.request_shutdown()
def _send_redirect(self, url: str) -> None:
self.send_response(302)
self.send_header("Location", url)
self.end_headers()
def _send_html(self, body: str) -> None:
encoded = body.encode()
self.send_response(200)
self.send_header("Content-Type", "text/html; charset=utf-8")
self.send_header("Content-Length", str(len(encoded)))
self.end_headers()
self.wfile.write(encoded)
# Silence logging for cleanliness unless --verbose flag is used.
def log_message(self, fmt: str, *args): # type: ignore[override]
if getattr(self.server, "verbose", False): # type: ignore[attr-defined]
super().log_message(fmt, *args)
def _exchange_code_for_api_key(self, code: str) -> tuple[AuthBundle, str]:
"""Perform token + token-exchange to obtain an OpenAI API key.
Returns (AuthBundle, success_url).
"""
token_endpoint = f"{self.server.issuer}/oauth/token"
# 1. Authorization-code -> (id_token, access_token, refresh_token)
data = urllib.parse.urlencode(
{
"grant_type": "authorization_code",
"code": code,
"redirect_uri": self.server.redirect_uri,
"client_id": self.server.client_id,
"code_verifier": self.server.pkce.code_verifier,
}
).encode()
token_data: TokenData
with urllib.request.urlopen(
urllib.request.Request(
token_endpoint,
data=data,
method="POST",
headers={"Content-Type": "application/x-www-form-urlencoded"},
)
) as resp:
payload = json.loads(resp.read().decode())
token_data = TokenData(
id_token=payload["id_token"],
access_token=payload["access_token"],
refresh_token=payload["refresh_token"],
)
id_token_parts = token_data.id_token.split(".")
if len(id_token_parts) != 3:
raise ValueError("Invalid ID token")
access_token_parts = token_data.access_token.split(".")
if len(access_token_parts) != 3:
raise ValueError("Invalid access token")
id_token_claims = json.loads(
base64.urlsafe_b64decode(id_token_parts[1] + "==").decode("utf-8")
)
access_token_claims = json.loads(
base64.urlsafe_b64decode(access_token_parts[1] + "==").decode("utf-8")
)
token_claims = id_token_claims.get("https://api.openai.com/auth", {})
access_claims = access_token_claims.get("https://api.openai.com/auth", {})
org_id = token_claims.get("organization_id")
if not org_id:
raise ValueError("Missing organization in id_token claims")
project_id = token_claims.get("project_id")
if not project_id:
raise ValueError("Missing project in id_token claims")
random_id = secrets.token_hex(6)
# 2. Token exchange to obtain API key
today = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d")
exchange_data = urllib.parse.urlencode(
{
"grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
"client_id": self.server.client_id,
"requested_token": "openai-api-key",
"subject_token": token_data.id_token,
"subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
"name": f"Codex CLI [auto-generated] ({today}) [{random_id}]",
}
).encode()
exchanged_access_token: str
with urllib.request.urlopen(
urllib.request.Request(
token_endpoint,
data=exchange_data,
method="POST",
headers={"Content-Type": "application/x-www-form-urlencoded"},
)
) as resp:
exchange_payload = json.loads(resp.read().decode())
exchanged_access_token = exchange_payload["access_token"]
# Determine whether the organization still requires additional
# setup (e.g., adding a payment method) based on the ID-token
# claim provided by the auth service.
completed_onboarding = token_claims.get("completed_platform_onboarding") is True
chatgpt_plan_type = access_claims.get("chatgpt_plan_type")
is_org_owner = token_claims.get("is_org_owner") is True
needs_setup = not completed_onboarding and is_org_owner
# Build the success URL on the same host/port as the callback and
# include the required query parameters for the front-end page.
success_url_query = {
"id_token": token_data.id_token,
"needs_setup": "true" if needs_setup else "false",
"org_id": org_id,
"project_id": project_id,
"plan_type": chatgpt_plan_type,
"platform_url": (
"https://platform.openai.com"
if self.server.issuer == "https://auth.openai.com"
else "https://platform.api.openai.org"
),
}
success_url = f"{URL_BASE}/success?{urllib.parse.urlencode(success_url_query)}"
# TODO(mbolin): Port maybeRedeemCredits() to Python and call it here.
# Persist refresh_token/id_token for future use (redeem credits etc.)
last_refresh_str = (
datetime.datetime.now(datetime.timezone.utc)
.isoformat()
.replace("+00:00", "Z")
)
auth_bundle = AuthBundle(
api_key=exchanged_access_token,
token_data=token_data,
last_refresh=last_refresh_str,
)
return (auth_bundle, success_url)
def request_shutdown(self) -> None:
# shutdown() must be invoked from another thread to avoid
# deadlocking the serve_forever() loop, which is running in this
# same thread. A short-lived helper thread does the trick.
threading.Thread(target=self.server.shutdown, daemon=True).start()
def _write_auth_file(*, auth: AuthBundle, codex_home: str) -> bool:
"""Persist *api_key* to $CODEX_HOME/auth.json.
Returns True on success, False otherwise. Any error is printed to
*stderr* so that the Rust layer can surface the problem.
"""
if not os.path.isdir(codex_home):
try:
os.makedirs(codex_home, exist_ok=True)
except Exception as exc: # pragma: no cover unlikely
eprint(f"ERROR: unable to create CODEX_HOME directory: {exc}")
return False
auth_path = os.path.join(codex_home, "auth.json")
auth_json_contents = {
"OPENAI_API_KEY": auth.api_key,
"tokens": {
"id_token": auth.token_data.id_token,
"access_token": auth.token_data.access_token,
"refresh_token": auth.token_data.refresh_token,
},
"last_refresh": auth.last_refresh,
}
try:
with open(auth_path, "w", encoding="utf-8") as fp:
if hasattr(os, "fchmod"): # POSIX-safe
os.fchmod(fp.fileno(), 0o600)
json.dump(auth_json_contents, fp, indent=2)
except Exception as exc: # pragma: no cover permissions/filesystem
eprint(f"ERROR: unable to write auth file: {exc}")
return False
return True
@dataclass
class PkceCodes:
code_verifier: str
code_challenge: str
class _ApiKeyHTTPServer(http.server.HTTPServer):
"""HTTPServer with shutdown helper & self-contained OAuth configuration."""
def __init__(
self,
server_address: tuple[str, int],
request_handler_class: type[http.server.BaseHTTPRequestHandler],
*,
codex_home: str,
verbose: bool = False,
) -> None:
super().__init__(server_address, request_handler_class, bind_and_activate=True)
self.exit_code = 1
self.codex_home = codex_home
self.verbose: bool = verbose
self.issuer: str = DEFAULT_ISSUER
self.client_id: str = DEFAULT_CLIENT_ID
port = server_address[1]
self.redirect_uri: str = f"http://localhost:{port}/auth/callback"
self.pkce: PkceCodes = _generate_pkce()
self.state: str = secrets.token_hex(32)
def auth_url(self) -> str:
"""Return fully-formed OpenID authorization URL."""
params = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"scope": "openid profile email offline_access",
"code_challenge": self.pkce.code_challenge,
"code_challenge_method": "S256",
"id_token_add_organizations": "true",
"state": self.state,
}
return f"{self.issuer}/oauth/authorize?" + urllib.parse.urlencode(params)
def _generate_pkce() -> PkceCodes:
"""Generate PKCE *code_verifier* and *code_challenge* (S256)."""
code_verifier = secrets.token_hex(64)
digest = hashlib.sha256(code_verifier.encode()).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
return PkceCodes(code_verifier, code_challenge)
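# Illustrative helper, not wired into the login flow: verifies the S256
# relationship between the verifier and challenge produced by
# _generate_pkce() above. Handy as a sanity check during local development.
def _pkce_self_check() -> bool:
    pkce = _generate_pkce()
    digest = hashlib.sha256(pkce.code_verifier.encode()).digest()
    expected = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return pkce.code_challenge == expected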
def eprint(*args, **kwargs) -> None:
print(*args, file=sys.stderr, **kwargs)
LOGIN_SUCCESS_HTML = """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Sign into Codex CLI</title>
<link rel="icon" href='data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" width="32" height="32" fill="none" viewBox="0 0 32 32"%3E%3Cpath stroke="%23000" stroke-linecap="round" stroke-width="2.484" d="M22.356 19.797H17.17M9.662 12.29l1.979 3.576a.511.511 0 0 1-.005.504l-1.974 3.409M30.758 16c0 8.15-6.607 14.758-14.758 14.758-8.15 0-14.758-6.607-14.758-14.758C1.242 7.85 7.85 1.242 16 1.242c8.15 0 14.758 6.608 14.758 14.758Z"/%3E%3C/svg%3E' type="image/svg+xml">
<style>
.container {
margin: auto;
height: 100%;
display: flex;
align-items: center;
justify-content: center;
position: relative;
background: white;
font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
}
.inner-container {
width: 400px;
flex-direction: column;
justify-content: flex-start;
align-items: center;
gap: 20px;
display: inline-flex;
}
.content {
align-self: stretch;
flex-direction: column;
justify-content: flex-start;
align-items: center;
gap: 20px;
display: flex;
}
.svg-wrapper {
position: relative;
}
.title {
text-align: center;
color: var(--text-primary, #0D0D0D);
font-size: 28px;
font-weight: 400;
line-height: 36.40px;
word-wrap: break-word;
}
.setup-box {
width: 600px;
padding: 16px 20px;
background: var(--bg-primary, white);
box-shadow: 0px 4px 16px rgba(0, 0, 0, 0.05);
border-radius: 16px;
outline: 1px var(--border-default, rgba(13, 13, 13, 0.10)) solid;
outline-offset: -1px;
justify-content: flex-start;
align-items: center;
gap: 16px;
display: inline-flex;
}
.setup-content {
flex: 1 1 0;
justify-content: flex-start;
align-items: center;
gap: 24px;
display: flex;
}
.setup-text {
flex: 1 1 0;
flex-direction: column;
justify-content: flex-start;
align-items: flex-start;
gap: 4px;
display: inline-flex;
}
.setup-title {
align-self: stretch;
color: var(--text-primary, #0D0D0D);
font-size: 14px;
font-weight: 510;
line-height: 20px;
word-wrap: break-word;
}
.setup-description {
align-self: stretch;
color: var(--text-secondary, #5D5D5D);
font-size: 14px;
font-weight: 400;
line-height: 20px;
word-wrap: break-word;
}
.redirect-box {
justify-content: flex-start;
align-items: center;
gap: 8px;
display: flex;
}
.close-button,
.redirect-button {
height: 28px;
padding: 8px 16px;
background: var(--interactive-bg-primary-default, #0D0D0D);
border-radius: 999px;
justify-content: center;
align-items: center;
gap: 4px;
display: flex;
}
.close-button,
.redirect-text {
color: var(--interactive-label-primary-default, white);
font-size: 14px;
font-weight: 510;
line-height: 20px;
word-wrap: break-word;
text-decoration: none;
}
</style>
</head>
<body>
<div class="container">
<div class="inner-container">
<div class="content">
<div data-svg-wrapper class="svg-wrapper">
<svg width="56" height="56" viewBox="0 0 56 56" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4.6665 28.0003C4.6665 15.1137 15.1132 4.66699 27.9998 4.66699C40.8865 4.66699 51.3332 15.1137 51.3332 28.0003C51.3332 40.887 40.8865 51.3337 27.9998 51.3337C15.1132 51.3337 4.6665 40.887 4.6665 28.0003ZM37.5093 18.5088C36.4554 17.7672 34.9999 18.0203 34.2583 19.0742L24.8508 32.4427L20.9764 28.1808C20.1095 27.2272 18.6338 27.1569 17.6803 28.0238C16.7267 28.8906 16.6565 30.3664 17.5233 31.3199L23.3566 37.7366C23.833 38.2606 24.5216 38.5399 25.2284 38.4958C25.9353 38.4517 26.5838 38.089 26.9914 37.5098L38.0747 21.7598C38.8163 20.7059 38.5632 19.2504 37.5093 18.5088Z" fill="var(--green-400, #04B84C)"/>
</svg>
</div>
<div class="title">Signed in to Codex CLI</div>
</div>
<div class="close-box" style="display: none;">
<div class="setup-description">You may now close this page</div>
</div>
<div class="setup-box" style="display: none;">
<div class="setup-content">
<div class="setup-text">
<div class="setup-title">Finish setting up your API organization</div>
<div class="setup-description">Add a payment method to use your organization.</div>
</div>
<div class="redirect-box">
<div data-hasendicon="false" data-hasstarticon="false" data-ishovered="false" data-isinactive="false" data-ispressed="false" data-size="large" data-type="primary" class="redirect-button">
<div class="redirect-text">Redirecting in 3s...</div>
</div>
</div>
</div>
</div>
</div>
</div>
<script>
(function () {
const params = new URLSearchParams(window.location.search);
const needsSetup = params.get('needs_setup') === 'true';
const platformUrl = params.get('platform_url') || 'https://platform.openai.com';
const orgId = params.get('org_id');
const projectId = params.get('project_id');
const planType = params.get('plan_type');
const idToken = params.get('id_token');
// Show different message and optional redirect when setup is required
if (needsSetup) {
const setupBox = document.querySelector('.setup-box');
setupBox.style.display = 'flex';
const redirectUrlObj = new URL('/org-setup', platformUrl);
redirectUrlObj.searchParams.set('p', planType);
redirectUrlObj.searchParams.set('t', idToken);
redirectUrlObj.searchParams.set('with_org', orgId);
redirectUrlObj.searchParams.set('project_id', projectId);
const redirectUrl = redirectUrlObj.toString();
const message = document.querySelector('.redirect-text');
let countdown = 3;
function tick() {
message.textContent =
'Redirecting in ' + countdown + 's…';
if (countdown === 0) {
window.location.replace(redirectUrl);
} else {
countdown -= 1;
setTimeout(tick, 1000);
}
}
tick();
} else {
const closeBox = document.querySelector('.close-box');
closeBox.style.display = 'flex';
}
})();
</script>
</body>
</html>"""
# Unconditionally call `main()` instead of gating it behind
# `if __name__ == "__main__"` because this script is either:
#
# - invoked as a string passed to `python3 -c`
# - run via `python3 login_with_chatgpt.py` for testing as part of local
# development
main()
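# Example invocation for local development; the CODEX_HOME path below is
# illustrative:
#
#     CODEX_HOME="$HOME/.codex" python3 login_with_chatgpt.py --no-browser --verbose
#
# Exit status follows the logic above: 0 once /success has been served,
# EXIT_CODE_WHEN_ADDRESS_ALREADY_IN_USE (13) if port 1455 is already taken,
# and 1 otherwise (including when CODEX_HOME is not set).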

View File

@@ -20,9 +20,22 @@ use mcp_types::Implementation;
use mcp_types::InitializeRequestParams;
use mcp_types::ListToolsRequestParams;
use mcp_types::MCP_SCHEMA_VERSION;
use tracing_subscriber::EnvFilter;
#[tokio::main]
async fn main() -> Result<()> {
let default_level = "debug";
let _ = tracing_subscriber::fmt()
// Fallback to the `default_level` log filter if the environment
// variable is not set _or_ contains an invalid value
.with_env_filter(
EnvFilter::try_from_default_env()
.or_else(|_| EnvFilter::try_new(default_level))
.unwrap_or_else(|_| EnvFilter::new(default_level)),
)
.with_writer(std::io::stderr)
.try_init();
// Collect command-line arguments excluding the program name itself.
let mut args: Vec<String> = std::env::args().skip(1).collect();

View File

@@ -22,6 +22,7 @@ mcp-types = { path = "../mcp-types" }
schemars = "0.8.22"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
toml = "0.8"
tracing = { version = "0.1.41", features = ["log"] }
tracing-subscriber = { version = "0.3", features = ["fmt", "env-filter"] }
tokio = { version = "1", features = [

View File

@@ -1,15 +1,16 @@
//! Configuration object accepted by the `codex` MCP tool-call.
use std::path::PathBuf;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use mcp_types::Tool;
use mcp_types::ToolInputSchema;
use schemars::JsonSchema;
use schemars::r#gen::SchemaSettings;
use serde::Deserialize;
use std::collections::HashMap;
use std::path::PathBuf;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use crate::json_to_toml::json_to_toml;
/// Client-supplied configuration for a `codex` tool-call.
#[derive(Debug, Clone, Deserialize, JsonSchema)]
@@ -41,12 +42,10 @@ pub(crate) struct CodexToolCallParam {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub sandbox_permissions: Option<Vec<CodexToolCallSandboxPermission>>,
/// Disable server-side response storage.
/// Individual config settings that will override what is in
/// CODEX_HOME/config.toml.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub disable_response_storage: Option<bool>,
// Custom system instructions.
// #[serde(default, skip_serializing_if = "Option::is_none")]
// pub instructions: Option<String>,
pub config: Option<HashMap<String, serde_json::Value>>,
}
// Create custom enums for use with `CodexToolCallApprovalPolicy` where we
@@ -155,7 +154,7 @@ impl CodexToolCallParam {
cwd,
approval_policy,
sandbox_permissions,
disable_response_storage,
config: cli_overrides,
} = self;
let sandbox_policy = sandbox_permissions.map(|perms| {
SandboxPolicy::from(perms.into_iter().map(Into::into).collect::<Vec<_>>())
@@ -168,12 +167,17 @@ impl CodexToolCallParam {
cwd: cwd.map(PathBuf::from),
approval_policy: approval_policy.map(Into::into),
sandbox_policy,
disable_response_storage,
model_provider: None,
codex_linux_sandbox_exe,
};
let cfg = codex_core::config::Config::load_with_overrides(overrides)?;
let cli_overrides = cli_overrides
.unwrap_or_default()
.into_iter()
.map(|(k, v)| (k, json_to_toml(v)))
.collect();
let cfg = codex_core::config::Config::load_with_cli_overrides(cli_overrides, overrides)?;
Ok((prompt, cfg))
}
@@ -216,14 +220,15 @@ mod tests {
],
"type": "string"
},
"config": {
"description": "Individual config settings that will override what is in CODEX_HOME/config.toml.",
"additionalProperties": true,
"type": "object"
},
"cwd": {
"description": "Working directory for the session. If relative, it is resolved against the server process's current working directory.",
"type": "string"
},
"disable-response-storage": {
"description": "Disable server-side response storage.",
"type": "boolean"
},
"model": {
"description": "Optional override for the model name (e.g. \"o3\", \"o4-mini\")",
"type": "string"

View File

@@ -0,0 +1,84 @@
use serde_json::Value as JsonValue;
use toml::Value as TomlValue;
/// Convert a `serde_json::Value` into a semantically equivalent `toml::Value`.
pub(crate) fn json_to_toml(v: JsonValue) -> TomlValue {
match v {
JsonValue::Null => TomlValue::String(String::new()),
JsonValue::Bool(b) => TomlValue::Boolean(b),
JsonValue::Number(n) => {
if let Some(i) = n.as_i64() {
TomlValue::Integer(i)
} else if let Some(f) = n.as_f64() {
TomlValue::Float(f)
} else {
TomlValue::String(n.to_string())
}
}
JsonValue::String(s) => TomlValue::String(s),
JsonValue::Array(arr) => TomlValue::Array(arr.into_iter().map(json_to_toml).collect()),
JsonValue::Object(map) => {
let tbl = map
.into_iter()
.map(|(k, v)| (k, json_to_toml(v)))
.collect::<toml::value::Table>();
TomlValue::Table(tbl)
}
}
}
#[cfg(test)]
#[allow(clippy::unwrap_used)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use serde_json::json;
#[test]
fn json_number_to_toml() {
let json_value = json!(123);
assert_eq!(TomlValue::Integer(123), json_to_toml(json_value));
}
#[test]
fn json_array_to_toml() {
let json_value = json!([true, 1]);
assert_eq!(
TomlValue::Array(vec![TomlValue::Boolean(true), TomlValue::Integer(1)]),
json_to_toml(json_value)
);
}
#[test]
fn json_bool_to_toml() {
let json_value = json!(false);
assert_eq!(TomlValue::Boolean(false), json_to_toml(json_value));
}
#[test]
fn json_float_to_toml() {
let json_value = json!(1.25);
assert_eq!(TomlValue::Float(1.25), json_to_toml(json_value));
}
#[test]
fn json_null_to_toml() {
let json_value = serde_json::Value::Null;
assert_eq!(TomlValue::String(String::new()), json_to_toml(json_value));
}
#[test]
fn json_object_nested() {
let json_value = json!({ "outer": { "inner": 2 } });
let expected = {
let mut inner = toml::value::Table::new();
inner.insert("inner".into(), TomlValue::Integer(2));
let mut outer = toml::value::Table::new();
outer.insert("outer".into(), TomlValue::Table(inner));
TomlValue::Table(outer)
};
assert_eq!(json_to_toml(json_value), expected);
}
}

View File

@@ -16,6 +16,7 @@ use tracing::info;
mod codex_tool_config;
mod codex_tool_runner;
mod json_to_toml;
mod message_processor;
use crate::message_processor::MessageProcessor;

View File

@@ -1144,6 +1144,7 @@ pub enum ServerRequest {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize)]
#[serde(untagged)]
#[allow(clippy::large_enum_variant)]
pub enum ServerResult {
Result(Result),
InitializeResult(InitializeResult),

View File

@@ -16,13 +16,16 @@ workspace = true
[dependencies]
anyhow = "1"
base64 = "0.22.1"
clap = { version = "4", features = ["derive"] }
codex-ansi-escape = { path = "../ansi-escape" }
codex-core = { path = "../core" }
codex-common = { path = "../common", features = ["cli", "elapsed"] }
codex-linux-sandbox = { path = "../linux-sandbox" }
codex-login = { path = "../login" }
color-eyre = "0.6.3"
crossterm = { version = "0.28.1", features = ["bracketed-paste"] }
image = { version = "^0.25.6", default-features = false, features = ["jpeg"] }
lazy_static = "1"
mcp-types = { path = "../mcp-types" }
path-clean = "1.0.1"
@@ -30,8 +33,9 @@ ratatui = { version = "0.29.0", features = [
"unstable-widget-ref",
"unstable-rendered-line-info",
] }
regex = "1"
serde_json = "1"
ratatui-image = "8.0.0"
regex-lite = "0.1"
serde_json = { version = "1", features = ["preserve_order"] }
shlex = "1.3.0"
strum = "0.27.1"
strum_macros = "0.27.1"
@@ -48,6 +52,7 @@ tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
tui-input = "0.11.1"
tui-markdown = "0.3.3"
tui-textarea = "0.7.0"
unicode-segmentation = "1.12.0"
uuid = "1"
[dev-dependencies]

View File

@@ -3,6 +3,7 @@ use crate::app_event_sender::AppEventSender;
use crate::chatwidget::ChatWidget;
use crate::git_warning_screen::GitWarningOutcome;
use crate::git_warning_screen::GitWarningScreen;
use crate::login_screen::LoginScreen;
use crate::mouse_capture::MouseCapture;
use crate::scroll_event_helper::ScrollEventHelper;
use crate::slash_command::SlashCommand;
@@ -15,28 +16,49 @@ use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::MouseEvent;
use crossterm::event::MouseEventKind;
use std::path::PathBuf;
use std::sync::mpsc::Receiver;
use std::sync::mpsc::channel;
/// Toplevel application state which fullscreen view is currently active.
enum AppState {
/// Top-level application state: which full-screen view is currently active.
#[allow(clippy::large_enum_variant)]
enum AppState<'a> {
/// The main chat UI is visible.
Chat,
/// The startup warning that recommends running codex inside a Git repo.
Chat {
/// Boxed to avoid a large enum variant and reduce the overall size of
/// `AppState`.
widget: Box<ChatWidget<'a>>,
},
/// The login screen for the OpenAI provider.
Login { screen: LoginScreen },
/// The start-up warning that recommends running codex inside a Git repo.
GitWarning { screen: GitWarningScreen },
}
pub(crate) struct App<'a> {
app_event_tx: AppEventSender,
app_event_rx: Receiver<AppEvent>,
chat_widget: ChatWidget<'a>,
app_state: AppState,
app_state: AppState<'a>,
/// Stored parameters needed to instantiate the ChatWidget later, e.g.,
/// after dismissing the Git-repo warning.
chat_args: Option<ChatWidgetArgs>,
}
impl App<'_> {
/// Aggregate parameters needed to create a `ChatWidget`, as creation may be
/// deferred until after the Git warning screen is dismissed.
#[derive(Clone)]
struct ChatWidgetArgs {
config: Config,
initial_prompt: Option<String>,
initial_images: Vec<PathBuf>,
}
impl<'a> App<'a> {
pub(crate) fn new(
config: Config,
initial_prompt: Option<String>,
show_login_screen: bool,
show_git_warning: bool,
initial_images: Vec<std::path::PathBuf>,
) -> Self {
@@ -94,26 +116,44 @@ impl App<'_> {
});
}
let chat_widget = ChatWidget::new(
config,
app_event_tx.clone(),
initial_prompt.clone(),
initial_images,
);
let app_state = if show_git_warning {
AppState::GitWarning {
screen: GitWarningScreen::new(),
}
let (app_state, chat_args) = if show_login_screen {
(
AppState::Login {
screen: LoginScreen::new(app_event_tx.clone(), config.codex_home.clone()),
},
Some(ChatWidgetArgs {
config,
initial_prompt,
initial_images,
}),
)
} else if show_git_warning {
(
AppState::GitWarning {
screen: GitWarningScreen::new(),
},
Some(ChatWidgetArgs {
config,
initial_prompt,
initial_images,
}),
)
} else {
AppState::Chat
let chat_widget =
ChatWidget::new(config, app_event_tx.clone(), initial_prompt, initial_images);
(
AppState::Chat {
widget: Box::new(chat_widget),
},
None,
)
};
Self {
app_event_tx,
app_event_rx,
chat_widget,
app_state,
chat_args,
}
}
@@ -144,7 +184,15 @@ impl App<'_> {
modifiers: crossterm::event::KeyModifiers::CONTROL,
..
} => {
self.chat_widget.submit_op(Op::Interrupt);
// Forward interrupt to ChatWidget when active.
match &mut self.app_state {
AppState::Chat { widget } => {
widget.submit_op(Op::Interrupt);
}
AppState::Login { .. } | AppState::GitWarning { .. } => {
// No-op.
}
}
}
KeyEvent {
code: KeyCode::Char('d'),
@@ -167,20 +215,19 @@ impl App<'_> {
AppEvent::ExitRequest => {
break;
}
AppEvent::CodexOp(op) => {
if matches!(self.app_state, AppState::Chat) {
self.chat_widget.submit_op(op);
}
}
AppEvent::LatestLog(line) => {
if matches!(self.app_state, AppState::Chat) {
self.chat_widget.update_latest_log(line);
}
}
AppEvent::CodexOp(op) => match &mut self.app_state {
AppState::Chat { widget } => widget.submit_op(op),
AppState::Login { .. } | AppState::GitWarning { .. } => {}
},
AppEvent::LatestLog(line) => match &mut self.app_state {
AppState::Chat { widget } => widget.update_latest_log(line),
AppState::Login { .. } | AppState::GitWarning { .. } => {}
},
AppEvent::DispatchCommand(command) => match command {
SlashCommand::Clear => {
self.chat_widget.clear_conversation_history();
}
SlashCommand::Clear => match &mut self.app_state {
AppState::Chat { widget } => widget.clear_conversation_history(),
AppState::Login { .. } | AppState::GitWarning { .. } => {}
},
SlashCommand::ToggleMouseMode => {
if let Err(e) = mouse_capture.toggle() {
tracing::error!("Failed to toggle mouse mode: {e}");
@@ -199,8 +246,11 @@ impl App<'_> {
fn draw_next_frame(&mut self, terminal: &mut tui::Tui) -> Result<()> {
match &mut self.app_state {
AppState::Chat => {
terminal.draw(|frame| frame.render_widget_ref(&self.chat_widget, frame.area()))?;
AppState::Chat { widget } => {
terminal.draw(|frame| frame.render_widget_ref(&**widget, frame.area()))?;
}
AppState::Login { screen } => {
terminal.draw(|frame| frame.render_widget_ref(&*screen, frame.area()))?;
}
AppState::GitWarning { screen } => {
terminal.draw(|frame| frame.render_widget_ref(&*screen, frame.area()))?;
@@ -213,12 +263,25 @@ impl App<'_> {
/// with it.
fn dispatch_key_event(&mut self, key_event: KeyEvent) {
match &mut self.app_state {
AppState::Chat => {
self.chat_widget.handle_key_event(key_event);
AppState::Chat { widget } => {
widget.handle_key_event(key_event);
}
AppState::Login { screen } => screen.handle_key_event(key_event),
AppState::GitWarning { screen } => match screen.handle_key_event(key_event) {
GitWarningOutcome::Continue => {
self.app_state = AppState::Chat;
// User accepted switch to chat view.
let args = match self.chat_args.take() {
Some(args) => args,
None => panic!("ChatWidgetArgs already consumed"),
};
let widget = Box::new(ChatWidget::new(
args.config,
self.app_event_tx.clone(),
args.initial_prompt,
args.initial_images,
));
self.app_state = AppState::Chat { widget };
self.app_event_tx.send(AppEvent::Redraw);
}
GitWarningOutcome::Quit => {
@@ -232,14 +295,16 @@ impl App<'_> {
}
fn dispatch_scroll_event(&mut self, scroll_delta: i32) {
if matches!(self.app_state, AppState::Chat) {
self.chat_widget.handle_scroll_delta(scroll_delta);
match &mut self.app_state {
AppState::Chat { widget } => widget.handle_scroll_delta(scroll_delta),
AppState::Login { .. } | AppState::GitWarning { .. } => {}
}
}
fn dispatch_codex_event(&mut self, event: Event) {
if matches!(self.app_state, AppState::Chat) {
self.chat_widget.handle_codex_event(event);
match &mut self.app_state {
AppState::Chat { widget } => widget.handle_codex_event(event),
AppState::Login { .. } | AppState::GitWarning { .. } => {}
}
}
}

View File

@@ -4,6 +4,7 @@ use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::style::Color;
use ratatui::style::Style;
use ratatui::style::Stylize;
use ratatui::widgets::Block;
use ratatui::widgets::BorderType;
use ratatui::widgets::Borders;
@@ -147,8 +148,6 @@ impl CommandPopup {
impl WidgetRef for CommandPopup {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let style = Style::default().bg(Color::Blue).fg(Color::White);
let matches = self.filtered_commands();
let mut rows: Vec<Row> = Vec::new();
@@ -157,21 +156,25 @@ impl WidgetRef for CommandPopup {
if visible_matches.is_empty() {
rows.push(Row::new(vec![
Cell::from("").style(style),
Cell::from("No matching commands").style(style.add_modifier(Modifier::ITALIC)),
Cell::from(""),
Cell::from("No matching commands").add_modifier(Modifier::ITALIC),
]));
} else {
let default_style = Style::default();
let command_style = Style::default().fg(Color::LightBlue);
for (idx, cmd) in visible_matches.iter().enumerate() {
let highlight = Style::default().bg(Color::White).fg(Color::Blue);
let cmd_style = if Some(idx) == self.selected_idx {
highlight
let (cmd_style, desc_style) = if Some(idx) == self.selected_idx {
(
command_style.bg(Color::DarkGray),
default_style.bg(Color::DarkGray),
)
} else {
style
(command_style, default_style)
};
rows.push(Row::new(vec![
Cell::from(cmd.command().to_string()).style(cmd_style),
Cell::from(cmd.description().to_string()).style(style),
Cell::from(format!("/{}", cmd.command())).style(cmd_style),
Cell::from(cmd.description().to_string()).style(desc_style),
]));
}
}
@@ -182,13 +185,11 @@ impl WidgetRef for CommandPopup {
rows,
[Constraint::Length(FIRST_COLUMN_WIDTH), Constraint::Min(10)],
)
.style(style)
.column_spacing(1)
.column_spacing(0)
.block(
Block::default()
.borders(Borders::ALL)
.border_type(BorderType::Rounded)
.style(style),
.border_type(BorderType::Rounded),
);
table.render(area, buf);

View File

@@ -0,0 +1,20 @@
use ratatui::prelude::*;
/// Trait implemented by every type that can live inside the conversation
/// history list. It provides two primitives that the parent scroll-view
/// needs: how *tall* the widget is at a given width and how to render an
/// arbitrary contiguous *window* of that widget.
///
/// The `first_visible_line` argument to [`render_window`] allows partial
/// rendering when the top of the widget is scrolled off-screen. The caller
/// guarantees that `first_visible_line + area.height as usize` never exceeds
/// the total height previously returned by [`height`].
pub(crate) trait CellWidget {
/// Total height measured in wrapped terminal lines when drawn with the
/// given *content* width (no scrollbar column included).
fn height(&self, width: u16) -> usize;
/// Render a *window* that starts `first_visible_line` lines below the top
/// of the widget. The window's size is given by `area`.
fn render_window(&self, first_visible_line: usize, area: Rect, buf: &mut Buffer);
}
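// A minimal sketch of an implementor, not part of this change: it assumes a
// cell that owns pre-wrapped lines, so `height` is just the line count and
// `render_window` paints the requested slice of those lines.
#[cfg(test)]
struct FixedLines {
    lines: Vec<Line<'static>>,
}

#[cfg(test)]
impl CellWidget for FixedLines {
    fn height(&self, _width: u16) -> usize {
        // Lines are pre-wrapped, so the height is independent of width.
        self.lines.len()
    }

    fn render_window(&self, first_visible_line: usize, area: Rect, buf: &mut Buffer) {
        let visible = self
            .lines
            .iter()
            .skip(first_visible_line)
            .take(area.height as usize);
        for (row, line) in visible.enumerate() {
            // `Buffer::set_line` clips to the given width, keeping us in bounds.
            buf.set_line(area.x, area.y + row as u16, line, area.width);
        }
    }
}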

View File

@@ -239,9 +239,11 @@ impl ChatWidget<'_> {
self.request_redraw();
}
EventMsg::AgentReasoning(AgentReasoningEvent { text }) => {
self.conversation_history
.add_agent_reasoning(&self.config, text);
self.request_redraw();
if !self.config.hide_agent_reasoning {
self.conversation_history
.add_agent_reasoning(&self.config, text);
self.request_redraw();
}
}
EventMsg::TaskStarted => {
self.bottom_pane.set_task_running(true);
@@ -343,11 +345,9 @@ impl ChatWidget<'_> {
.add_active_mcp_tool_call(call_id, server, tool, arguments);
self.request_redraw();
}
EventMsg::McpToolCallEnd(McpToolCallEndEvent {
call_id,
success,
result,
}) => {
EventMsg::McpToolCallEnd(mcp_tool_call_end_event) => {
let success = mcp_tool_call_end_event.is_success();
let McpToolCallEndEvent { call_id, result } = mcp_tool_call_end_event;
self.conversation_history
.record_completed_mcp_tool_call(call_id, success, result);
self.request_redraw();

View File

@@ -1,6 +1,6 @@
#![allow(clippy::expect_used)]
use regex::Regex;
use regex_lite::Regex;
// This is defined in its own file so we can limit the scope of
// `allow(clippy::expect_used)` because we cannot scope it to the `lazy_static!`

View File

@@ -1,5 +1,6 @@
use clap::Parser;
use codex_common::ApprovalModeCliArg;
use codex_common::CliConfigOverrides;
use codex_common::SandboxPermissionOption;
use std::path::PathBuf;
@@ -40,7 +41,6 @@ pub struct Cli {
#[arg(long = "skip-git-repo-check", default_value_t = false)]
pub skip_git_repo_check: bool,
/// Disable serverside response storage (sends the full conversation context with every request)
#[arg(long = "disable-response-storage", default_value_t = false)]
pub disable_response_storage: bool,
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
}

View File

@@ -1,3 +1,4 @@
use crate::cell_widget::CellWidget;
use crate::history_cell::CommandOutput;
use crate::history_cell::HistoryCell;
use crate::history_cell::PatchEventType;
@@ -236,11 +237,7 @@ impl ConversationHistoryWidget {
fn add_to_history(&mut self, cell: HistoryCell) {
let width = self.cached_width.get();
let count = if width > 0 {
wrapped_line_count_for_cell(&cell, width)
} else {
0
};
let count = if width > 0 { cell.height(width) } else { 0 };
self.entries.push(Entry {
cell,
@@ -284,9 +281,7 @@ impl ConversationHistoryWidget {
// Update cached line count.
if width > 0 {
entry
.line_count
.set(wrapped_line_count_for_cell(cell, width));
entry.line_count.set(cell.height(width));
}
break;
}
@@ -298,20 +293,12 @@ impl ConversationHistoryWidget {
&mut self,
call_id: String,
success: bool,
result: Option<mcp_types::CallToolResult>,
result: Result<mcp_types::CallToolResult, String>,
) {
// Convert result into serde_json::Value early so we don't have to
// worry about lifetimes inside the match arm.
let result_val = result.map(|r| {
serde_json::to_value(r)
.unwrap_or_else(|_| serde_json::Value::String("<serialization error>".into()))
});
let width = self.cached_width.get();
for entry in self.entries.iter_mut() {
if let HistoryCell::ActiveMcpToolCall {
call_id: history_id,
fq_tool_name,
invocation,
start,
..
@@ -319,18 +306,16 @@ impl ConversationHistoryWidget {
{
if &call_id == history_id {
let completed = HistoryCell::new_completed_mcp_tool_call(
fq_tool_name.clone(),
width,
invocation.clone(),
*start,
success,
result_val,
result,
);
entry.cell = completed;
if width > 0 {
entry
.line_count
.set(wrapped_line_count_for_cell(&entry.cell, width));
entry.line_count.set(entry.cell.height(width));
}
break;
@@ -378,7 +363,7 @@ impl WidgetRef for ConversationHistoryWidget {
let mut num_lines: usize = 0;
for entry in &self.entries {
let count = wrapped_line_count_for_cell(&entry.cell, effective_width);
let count = entry.cell.height(effective_width);
num_lines += count;
entry.line_count.set(count);
}
@@ -397,79 +382,69 @@ impl WidgetRef for ConversationHistoryWidget {
self.scroll_position.min(max_scroll)
};
// ------------------------------------------------------------------
// Build a *window* into the history so we only clone the `Line`s that
// may actually be visible in this frame. We still hand the slice off
// to a `Paragraph` with an additional scroll offset to avoid slicing
// inside a wrapped line (we don't have per-subline granularity).
// ------------------------------------------------------------------
// Find the first entry that intersects the current scroll position.
let mut cumulative = 0usize;
let mut first_idx = 0usize;
for (idx, entry) in self.entries.iter().enumerate() {
let next = cumulative + entry.line_count.get();
if next > scroll_pos {
first_idx = idx;
break;
}
cumulative = next;
}
let offset_into_first = scroll_pos - cumulative;
// Collect enough raw lines from `first_idx` onward to cover the
// viewport. We may fetch *slightly* more than necessary (whole cells)
// but never the entire history.
let mut collected_wrapped = 0usize;
let mut visible_lines: Vec<Line<'static>> = Vec::new();
for entry in &self.entries[first_idx..] {
visible_lines.extend(entry.cell.lines().iter().cloned());
collected_wrapped += entry.line_count.get();
if collected_wrapped >= offset_into_first + viewport_height {
break;
}
}
// Build the Paragraph with wrapping enabled so long lines are not
// clipped. Apply vertical scroll so that `offset_into_first` wrapped
// lines are hidden at the top.
// ------------------------------------------------------------------
// Render order:
// 1. Clear the whole widget area so we do not leave behind any glyphs
// from the previous frame.
// 1. Clear full widget area (avoid artifacts from prior frame).
// 2. Draw the surrounding Block (border and title).
// 3. Draw the Paragraph inside the Block, **leaving the right-most
// column free** for the scrollbar.
// 4. Finally draw the scrollbar (if needed).
// 3. Render *each* visible HistoryCell into its own sub-Rect while
// respecting partial visibility at the top and bottom.
// 4. Draw the scrollbar track / thumb in the reserved column.
// ------------------------------------------------------------------
// Clear the widget area to avoid visual artifacts from previous frames.
// Clear entire widget area first.
Clear.render(area, buf);
// Draw the outer border and title first so the Paragraph does not
// overwrite it.
// Draw border + title.
block.render(area, buf);
// Area available for text after accounting for the scrollbar.
let text_area = Rect {
x: inner.x,
y: inner.y,
width: effective_width,
height: inner.height,
};
// ------------------------------------------------------------------
// Calculate which cells are visible for the current scroll position
// and paint them one by one.
// ------------------------------------------------------------------
let paragraph = Paragraph::new(visible_lines)
.wrap(wrap_cfg())
.scroll((offset_into_first as u16, 0));
let mut y_cursor = inner.y; // first line inside viewport
let mut remaining_height = inner.height as usize;
let mut lines_to_skip = scroll_pos; // number of wrapped lines to skip (above viewport)
paragraph.render(text_area, buf);
for entry in &self.entries {
let cell_height = entry.line_count.get();
// Always render a scrollbar *track* so that the reserved column is
// visually filled, even when the content fits within the viewport.
// We only draw the *thumb* when the content actually overflows.
// Completely above viewport? Skip whole cell.
if lines_to_skip >= cell_height {
lines_to_skip -= cell_height;
continue;
}
// Determine how much of this cell is visible.
let visible_height = (cell_height - lines_to_skip).min(remaining_height);
if visible_height == 0 {
break; // no space left
}
let cell_rect = Rect {
x: inner.x,
y: y_cursor,
width: effective_width,
height: visible_height as u16,
};
entry.cell.render_window(lines_to_skip, cell_rect, buf);
// Advance cursor inside viewport.
y_cursor += visible_height as u16;
remaining_height -= visible_height;
// After the first (possibly partially skipped) cell, we no longer
// need to skip lines at the top.
lines_to_skip = 0;
if remaining_height == 0 {
break; // viewport filled
}
}
// Always render a scrollbar *track* so the reserved column is filled.
let overflow = num_lines.saturating_sub(viewport_height);
let mut scroll_state = ScrollbarState::default()
@@ -521,15 +496,6 @@ impl WidgetRef for ConversationHistoryWidget {
/// Common [`Wrap`] configuration used for both measurement and rendering so
/// they stay in sync.
#[inline]
const fn wrap_cfg() -> ratatui::widgets::Wrap {
pub(crate) const fn wrap_cfg() -> ratatui::widgets::Wrap {
ratatui::widgets::Wrap { trim: false }
}
/// Returns the wrapped line count for `cell` at the given `width` using the
/// same wrapping rules that `ConversationHistoryWidget` uses during
/// rendering.
fn wrapped_line_count_for_cell(cell: &HistoryCell, width: u16) -> usize {
Paragraph::new(cell.lines().clone())
.wrap(wrap_cfg())
.line_count(width)
}

View File

@@ -1,21 +1,36 @@
use crate::cell_widget::CellWidget;
use crate::exec_command::escape_command;
use crate::markdown::append_markdown;
use crate::text_block::TextBlock;
use crate::text_formatting::format_and_truncate_tool_result;
use base64::Engine;
use codex_ansi_escape::ansi_escape_line;
use codex_common::elapsed::format_duration;
use codex_core::WireApi;
use codex_core::config::Config;
use codex_core::model_supports_reasoning_summaries;
use codex_core::protocol::FileChange;
use codex_core::protocol::SessionConfiguredEvent;
use image::DynamicImage;
use image::GenericImageView;
use image::ImageReader;
use lazy_static::lazy_static;
use mcp_types::EmbeddedResourceResource;
use ratatui::prelude::*;
use ratatui::style::Color;
use ratatui::style::Modifier;
use ratatui::style::Style;
use ratatui::text::Line as RtLine;
use ratatui::text::Span as RtSpan;
use ratatui_image::Image as TuiImage;
use ratatui_image::Resize as ImgResize;
use ratatui_image::picker::ProtocolType;
use std::collections::HashMap;
use std::io::Cursor;
use std::path::PathBuf;
use std::time::Duration;
use std::time::Instant;
use crate::exec_command::escape_command;
use crate::markdown::append_markdown;
use tracing::error;
pub(crate) struct CommandOutput {
pub(crate) exit_code: i32,
@@ -34,16 +49,16 @@ pub(crate) enum PatchEventType {
/// scrollable list.
pub(crate) enum HistoryCell {
/// Welcome message.
WelcomeMessage { lines: Vec<Line<'static>> },
WelcomeMessage { view: TextBlock },
/// Message from the user.
UserPrompt { lines: Vec<Line<'static>> },
UserPrompt { view: TextBlock },
/// Message from the agent.
AgentMessage { lines: Vec<Line<'static>> },
AgentMessage { view: TextBlock },
/// Reasoning event from the agent.
AgentReasoning { lines: Vec<Line<'static>> },
AgentReasoning { view: TextBlock },
/// An exec tool call that has not finished yet.
ActiveExecCommand {
@@ -51,45 +66,53 @@ pub(crate) enum HistoryCell {
/// The shell command, escaped and formatted.
command: String,
start: Instant,
lines: Vec<Line<'static>>,
view: TextBlock,
},
/// Completed exec tool call.
CompletedExecCommand { lines: Vec<Line<'static>> },
CompletedExecCommand { view: TextBlock },
/// An MCP tool call that has not finished yet.
ActiveMcpToolCall {
call_id: String,
/// `server.tool` fully-qualified name so we can show a concise label
fq_tool_name: String,
/// Formatted invocation that mirrors the `$ cmd ...` style of exec
/// commands. We keep this around so the completed state can reuse the
/// exact same text without re-formatting.
invocation: String,
/// Formatted line that shows the command name and arguments
invocation: Line<'static>,
start: Instant,
lines: Vec<Line<'static>>,
view: TextBlock,
},
/// Completed MCP tool call.
CompletedMcpToolCall { lines: Vec<Line<'static>> },
/// Completed MCP tool call where we show the result serialized as JSON.
CompletedMcpToolCall { view: TextBlock },
/// Background event
BackgroundEvent { lines: Vec<Line<'static>> },
/// Completed MCP tool call where the result is an image.
/// Admittedly, [mcp_types::CallToolResult] can have multiple content types,
/// which could be a mix of text and images, so we need to tighten this up.
// NOTE: For image output we keep the *original* image around and lazily
// compute a resized copy that fits the available cell width. Caching the
// resized version avoids doing the potentially expensive rescale twice
// because the scroll-view first calls `height()` for layouting and then
// `render_window()` for painting.
CompletedMcpToolCallWithImageOutput {
image: DynamicImage,
/// Cached data derived from the current terminal width. The cache is
/// invalidated whenever the width changes (e.g. when the user
/// resizes the window).
render_cache: std::cell::RefCell<Option<ImageRenderCache>>,
},
/// Background event.
BackgroundEvent { view: TextBlock },
/// Error event from the backend.
ErrorEvent { lines: Vec<Line<'static>> },
ErrorEvent { view: TextBlock },
/// Info describing the newly‑initialized session.
SessionInfo { lines: Vec<Line<'static>> },
/// Info describing the newly-initialized session.
SessionInfo { view: TextBlock },
/// A pending code patch that is awaiting user approval. Mirrors the
/// behaviour of `ActiveExecCommand` so the user sees *what* patch the
/// model wants to apply before being prompted to approve or deny it.
PendingPatch {
/// Identifier so that a future `PatchApplyEnd` can update the entry
/// with the final status (not yet implemented).
lines: Vec<Line<'static>>,
},
PendingPatch { view: TextBlock },
}
const TOOL_CALL_MAX_LINES: usize = 5;
@@ -107,10 +130,13 @@ impl HistoryCell {
history_entry_count: _,
} = event;
if is_first_event {
const VERSION: &str = env!("CARGO_PKG_VERSION");
let mut lines: Vec<Line<'static>> = vec![
Line::from(vec![
"OpenAI ".into(),
"Codex".bold(),
format!(" v{}", VERSION).into(),
" (research preview)".dim(),
]),
Line::from(""),
@@ -121,20 +147,36 @@ impl HistoryCell {
]),
];
let entries = vec![
let mut entries = vec![
("workdir", config.cwd.display().to_string()),
("model", config.model.clone()),
("provider", config.model_provider_id.clone()),
("approval", format!("{:?}", config.approval_policy)),
("sandbox", format!("{:?}", config.sandbox_policy)),
];
if config.model_provider.wire_api == WireApi::Responses
&& model_supports_reasoning_summaries(&config.model)
{
entries.push((
"reasoning effort",
config.model_reasoning_effort.to_string(),
));
entries.push((
"reasoning summaries",
config.model_reasoning_summary.to_string(),
));
}
for (key, value) in entries {
lines.push(Line::from(vec![format!("{key}: ").bold(), value.into()]));
}
lines.push(Line::from(""));
HistoryCell::WelcomeMessage { lines }
HistoryCell::WelcomeMessage {
view: TextBlock::new(lines),
}
} else if config.model == model {
HistoryCell::SessionInfo { lines: vec![] }
HistoryCell::SessionInfo {
view: TextBlock::new(Vec::new()),
}
} else {
let lines = vec![
Line::from("model changed:".magenta().bold()),
@@ -142,7 +184,9 @@ impl HistoryCell {
Line::from(format!("used: {}", model)),
Line::from(""),
];
HistoryCell::SessionInfo { lines }
HistoryCell::SessionInfo {
view: TextBlock::new(lines),
}
}
}
@@ -152,7 +196,9 @@ impl HistoryCell {
lines.extend(message.lines().map(|l| Line::from(l.to_string())));
lines.push(Line::from(""));
HistoryCell::UserPrompt { lines }
HistoryCell::UserPrompt {
view: TextBlock::new(lines),
}
}
pub(crate) fn new_agent_message(config: &Config, message: String) -> Self {
@@ -161,7 +207,9 @@ impl HistoryCell {
append_markdown(&message, &mut lines, config);
lines.push(Line::from(""));
HistoryCell::AgentMessage { lines }
HistoryCell::AgentMessage {
view: TextBlock::new(lines),
}
}
pub(crate) fn new_agent_reasoning(config: &Config, text: String) -> Self {
@@ -170,7 +218,9 @@ impl HistoryCell {
append_markdown(&text, &mut lines, config);
lines.push(Line::from(""));
HistoryCell::AgentReasoning { lines }
HistoryCell::AgentReasoning {
view: TextBlock::new(lines),
}
}
pub(crate) fn new_active_exec_command(call_id: String, command: Vec<String>) -> Self {
@@ -187,7 +237,7 @@ impl HistoryCell {
call_id,
command: command_escaped,
start,
lines,
view: TextBlock::new(lines),
}
}
@@ -226,7 +276,9 @@ impl HistoryCell {
}
lines.push(Line::from(""));
HistoryCell::CompletedExecCommand { lines }
HistoryCell::CompletedExecCommand {
view: TextBlock::new(lines),
}
}
pub(crate) fn new_active_mcp_tool_call(
@@ -235,8 +287,6 @@ impl HistoryCell {
tool: String,
arguments: Option<serde_json::Value>,
) -> Self {
let fq_tool_name = format!("{server}.{tool}");
// Format the arguments as compact JSON so they roughly fit on one
// line. If there are no arguments we keep it empty so the invocation
// mirrors a function-style call.
@@ -248,63 +298,155 @@ impl HistoryCell {
})
.unwrap_or_default();
let invocation = if args_str.is_empty() {
format!("{fq_tool_name}()")
} else {
format!("{fq_tool_name}({args_str})")
};
let invocation_spans = vec![
Span::styled(server, Style::default().fg(Color::Blue)),
Span::raw("."),
Span::styled(tool, Style::default().fg(Color::Blue)),
Span::raw("("),
Span::styled(args_str, Style::default().fg(Color::Gray)),
Span::raw(")"),
];
let invocation = Line::from(invocation_spans);
let start = Instant::now();
let title_line = Line::from(vec!["tool".magenta(), " running...".dim()]);
let lines: Vec<Line<'static>> = vec![
title_line,
Line::from(format!("$ {invocation}")),
Line::from(""),
];
let lines: Vec<Line<'static>> = vec![title_line, invocation.clone(), Line::from("")];
HistoryCell::ActiveMcpToolCall {
call_id,
fq_tool_name,
invocation,
start,
lines,
view: TextBlock::new(lines),
}
}
/// If the first content is an image, return a new cell with the image.
/// TODO(rgwood-dd): Handle images properly even if they're not the first result.
fn try_new_completed_mcp_tool_call_with_image_output(
result: &Result<mcp_types::CallToolResult, String>,
) -> Option<Self> {
match result {
Ok(mcp_types::CallToolResult { content, .. }) => {
if let Some(mcp_types::CallToolResultContent::ImageContent(image)) = content.first()
{
let raw_data =
match base64::engine::general_purpose::STANDARD.decode(&image.data) {
Ok(data) => data,
Err(e) => {
error!("Failed to decode image data: {e}");
return None;
}
};
let reader = match ImageReader::new(Cursor::new(raw_data)).with_guessed_format()
{
Ok(reader) => reader,
Err(e) => {
error!("Failed to guess image format: {e}");
return None;
}
};
let image = match reader.decode() {
Ok(image) => image,
Err(e) => {
error!("Image decoding failed: {e}");
return None;
}
};
Some(HistoryCell::CompletedMcpToolCallWithImageOutput {
image,
render_cache: std::cell::RefCell::new(None),
})
} else {
None
}
}
_ => None,
}
}
pub(crate) fn new_completed_mcp_tool_call(
fq_tool_name: String,
invocation: String,
num_cols: u16,
invocation: Line<'static>,
start: Instant,
success: bool,
result: Option<serde_json::Value>,
result: Result<mcp_types::CallToolResult, String>,
) -> Self {
if let Some(cell) = Self::try_new_completed_mcp_tool_call_with_image_output(&result) {
return cell;
}
let duration = format_duration(start.elapsed());
let status_str = if success { "success" } else { "failed" };
let title_line = Line::from(vec![
"tool".magenta(),
format!(" {fq_tool_name} ({status_str}, duration: {})", duration).dim(),
" ".into(),
if success {
status_str.green()
} else {
status_str.red()
},
format!(", duration: {duration}").gray(),
]);
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(title_line);
lines.push(Line::from(format!("$ {invocation}")));
lines.push(invocation);
if let Some(res_val) = result {
let json_pretty =
serde_json::to_string_pretty(&res_val).unwrap_or_else(|_| res_val.to_string());
let mut iter = json_pretty.lines();
for raw in iter.by_ref().take(TOOL_CALL_MAX_LINES) {
lines.push(Line::from(raw.to_string()).dim());
match result {
Ok(mcp_types::CallToolResult { content, .. }) => {
if !content.is_empty() {
lines.push(Line::from(""));
for tool_call_result in content {
let line_text = match tool_call_result {
mcp_types::CallToolResultContent::TextContent(text) => {
format_and_truncate_tool_result(
&text.text,
TOOL_CALL_MAX_LINES,
num_cols as usize,
)
}
mcp_types::CallToolResultContent::ImageContent(_) => {
// TODO show images even if they're not the first result, will require a refactor of `CompletedMcpToolCall`
"<image content>".to_string()
}
mcp_types::CallToolResultContent::AudioContent(_) => {
"<audio content>".to_string()
}
mcp_types::CallToolResultContent::EmbeddedResource(resource) => {
let uri = match resource.resource {
EmbeddedResourceResource::TextResourceContents(text) => {
text.uri
}
EmbeddedResourceResource::BlobResourceContents(blob) => {
blob.uri
}
};
format!("embedded resource: {uri}")
}
};
lines.push(Line::styled(line_text, Style::default().fg(Color::Gray)));
}
}
lines.push(Line::from(""));
}
let remaining = iter.count();
if remaining > 0 {
lines.push(Line::from(format!("... {} additional lines", remaining)).dim());
Err(e) => {
lines.push(Line::from(vec![
Span::styled(
"Error: ",
Style::default().fg(Color::Red).add_modifier(Modifier::BOLD),
),
Span::raw(e),
]));
}
};
HistoryCell::CompletedMcpToolCall {
view: TextBlock::new(lines),
}
lines.push(Line::from(""));
HistoryCell::CompletedMcpToolCall { lines }
}
pub(crate) fn new_background_event(message: String) -> Self {
@@ -312,7 +454,9 @@ impl HistoryCell {
lines.push(Line::from("event".dim()));
lines.extend(message.lines().map(|l| Line::from(l.to_string()).dim()));
lines.push(Line::from(""));
HistoryCell::BackgroundEvent { lines }
HistoryCell::BackgroundEvent {
view: TextBlock::new(lines),
}
}
pub(crate) fn new_error_event(message: String) -> Self {
@@ -320,7 +464,9 @@ impl HistoryCell {
vec!["ERROR: ".red().bold(), message.into()].into(),
"".into(),
];
HistoryCell::ErrorEvent { lines }
HistoryCell::ErrorEvent {
view: TextBlock::new(lines),
}
}
/// Create a new `PendingPatch` cell that lists the file-level summary of
@@ -339,7 +485,9 @@ impl HistoryCell {
auto_approved: false,
} => {
let lines = vec![Line::from("patch applied".magenta().bold())];
return Self::PendingPatch { lines };
return Self::PendingPatch {
view: TextBlock::new(lines),
};
}
};
@@ -380,23 +528,85 @@ impl HistoryCell {
lines.push(Line::from(""));
HistoryCell::PendingPatch { lines }
HistoryCell::PendingPatch {
view: TextBlock::new(lines),
}
}
}
// ---------------------------------------------------------------------------
// `CellWidget` implementation: most variants delegate to their internal
// `TextBlock`. Variants that need custom painting can add their own logic in
// the match arms.
// ---------------------------------------------------------------------------
impl CellWidget for HistoryCell {
fn height(&self, width: u16) -> usize {
match self {
HistoryCell::WelcomeMessage { view }
| HistoryCell::UserPrompt { view }
| HistoryCell::AgentMessage { view }
| HistoryCell::AgentReasoning { view }
| HistoryCell::BackgroundEvent { view }
| HistoryCell::ErrorEvent { view }
| HistoryCell::SessionInfo { view }
| HistoryCell::CompletedExecCommand { view }
| HistoryCell::CompletedMcpToolCall { view }
| HistoryCell::PendingPatch { view }
| HistoryCell::ActiveExecCommand { view, .. }
| HistoryCell::ActiveMcpToolCall { view, .. } => view.height(width),
HistoryCell::CompletedMcpToolCallWithImageOutput {
image,
render_cache,
} => ensure_image_cache(image, width, render_cache),
}
}
pub(crate) fn lines(&self) -> &Vec<Line<'static>> {
fn render_window(&self, first_visible_line: usize, area: Rect, buf: &mut Buffer) {
match self {
HistoryCell::WelcomeMessage { lines, .. }
| HistoryCell::UserPrompt { lines, .. }
| HistoryCell::AgentMessage { lines, .. }
| HistoryCell::AgentReasoning { lines, .. }
| HistoryCell::BackgroundEvent { lines, .. }
| HistoryCell::ErrorEvent { lines, .. }
| HistoryCell::SessionInfo { lines, .. }
| HistoryCell::ActiveExecCommand { lines, .. }
| HistoryCell::CompletedExecCommand { lines, .. }
| HistoryCell::ActiveMcpToolCall { lines, .. }
| HistoryCell::CompletedMcpToolCall { lines, .. }
| HistoryCell::PendingPatch { lines, .. } => lines,
HistoryCell::WelcomeMessage { view }
| HistoryCell::UserPrompt { view }
| HistoryCell::AgentMessage { view }
| HistoryCell::AgentReasoning { view }
| HistoryCell::BackgroundEvent { view }
| HistoryCell::ErrorEvent { view }
| HistoryCell::SessionInfo { view }
| HistoryCell::CompletedExecCommand { view }
| HistoryCell::CompletedMcpToolCall { view }
| HistoryCell::PendingPatch { view }
| HistoryCell::ActiveExecCommand { view, .. }
| HistoryCell::ActiveMcpToolCall { view, .. } => {
view.render_window(first_visible_line, area, buf)
}
HistoryCell::CompletedMcpToolCallWithImageOutput {
image,
render_cache,
} => {
// Ensure we have a cached, resized copy that matches the current width.
// `height()` should have prepared the cache, but if something invalidated it
// (e.g. the first `render_window()` call happens *before* `height()` after a
// resize) we rebuild it here.
let width_cells = area.width;
// Ensure the cache is up-to-date and extract the scaled image.
let _ = ensure_image_cache(image, width_cells, render_cache);
let Some(resized) = render_cache
.borrow()
.as_ref()
.map(|c| c.scaled_image.clone())
else {
return;
};
let picker = &*TERMINAL_PICKER;
if let Ok(protocol) = picker.new_protocol(resized, area, ImgResize::Fit(None)) {
let img_widget = TuiImage::new(&protocol);
img_widget.render(area, buf);
}
}
}
}
}
@@ -432,3 +642,120 @@ fn create_diff_summary(changes: HashMap<PathBuf, FileChange>) -> Vec<String> {
summaries
}
// -------------------------------------
// Helper types for image rendering
// -------------------------------------
/// Cached information for rendering an image inside a conversation cell.
///
/// The cache ties the resized image to a *specific* content width (in
/// terminal cells). Whenever the terminal is resized and the width changes
/// we need to re-compute the scaled variant so that it still fits the
/// available space. Keeping the resized copy around saves a costly rescale
/// between the back-to-back `height()` and `render_window()` calls that the
/// scroll-view performs while laying out the UI.
pub(crate) struct ImageRenderCache {
/// Width in *terminal cells* the cached image was generated for.
width_cells: u16,
/// Height in *terminal rows* that the conversation cell must occupy so
/// the whole image becomes visible.
height_rows: usize,
/// The resized image that fits the given width / height constraints.
scaled_image: DynamicImage,
}
lazy_static! {
static ref TERMINAL_PICKER: ratatui_image::picker::Picker = {
use ratatui_image::picker::Picker;
use ratatui_image::picker::cap_parser::QueryStdioOptions;
// Ask the terminal for capabilities and explicit font size. Request the
// Kitty *text-sizing protocol* as a fallback mechanism for terminals
// (like iTerm2) that do not reply to the standard CSI 16/18 queries.
match Picker::from_query_stdio_with_options(QueryStdioOptions {
text_sizing_protocol: true,
}) {
Ok(picker) => picker,
Err(err) => {
// Fall back to the conservative default that assumes ~8×16 px cells.
// Still better than panicking in a headless test run.
tracing::warn!("terminal capability query failed: {err:?}; using default font size");
Picker::from_fontsize((8, 16))
}
}
};
}
/// Resize `image` to fit into `width_cells` × 10 rows while keeping the original aspect
/// ratio. The function updates `render_cache` and returns the number of rows
/// (<= 10) the picture will occupy.
fn ensure_image_cache(
image: &DynamicImage,
width_cells: u16,
render_cache: &std::cell::RefCell<Option<ImageRenderCache>>,
) -> usize {
if let Some(cache) = render_cache.borrow().as_ref() {
if cache.width_cells == width_cells {
return cache.height_rows;
}
}
let picker = &*TERMINAL_PICKER;
let (char_w_px, char_h_px) = picker.font_size();
// Heuristic to compensate for Hi-DPI terminals (iTerm2 on Retina Mac) that
// report logical pixels (≈ 8×16) while the iTerm2 graphics protocol
// expects *device* pixels. Empirically the device-pixel-ratio is almost
// always 2 on macOS Retina panels.
let hidpi_scale = if picker.protocol_type() == ProtocolType::Iterm2 {
2.0f64
} else {
1.0
};
// The fallback Halfblocks protocol encodes two pixel rows per cell, so each
// terminal *row* represents only half the (possibly scaled) font height.
let effective_char_h_px: f64 = if picker.protocol_type() == ProtocolType::Halfblocks {
(char_h_px as f64) * hidpi_scale / 2.0
} else {
(char_h_px as f64) * hidpi_scale
};
let char_w_px_f64 = (char_w_px as f64) * hidpi_scale;
const MAX_ROWS: f64 = 10.0;
let max_height_px: f64 = effective_char_h_px * MAX_ROWS;
let (orig_w_px, orig_h_px) = {
let (w, h) = image.dimensions();
(w as f64, h as f64)
};
if orig_w_px == 0.0 || orig_h_px == 0.0 || width_cells == 0 {
*render_cache.borrow_mut() = None;
return 0;
}
let max_w_px = char_w_px_f64 * width_cells as f64;
let scale_w = max_w_px / orig_w_px;
let scale_h = max_height_px / orig_h_px;
let scale = scale_w.min(scale_h).min(1.0);
use image::imageops::FilterType;
let scaled_w_px = (orig_w_px * scale).round().max(1.0) as u32;
let scaled_h_px = (orig_h_px * scale).round().max(1.0) as u32;
let scaled_image = image.resize(scaled_w_px, scaled_h_px, FilterType::Lanczos3);
let height_rows = ((scaled_h_px as f64 / effective_char_h_px).ceil()) as usize;
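// Worked example (hypothetical numbers): an 800×600 px image, an 8×16 px
// font with no Hi-DPI or halfblocks adjustment, and width_cells = 80 give
// max_w_px = 640 and max_height_px = 160; scale = min(0.8, 0.267, 1.0)
// ≈ 0.267, so the image is resized to ~213×160 px and height_rows =
// ceil(160 / 16) = 10.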
let new_cache = ImageRenderCache {
width_cells,
height_rows,
scaled_image,
};
*render_cache.borrow_mut() = Some(new_cache);
height_rows
}
@@ -5,9 +5,13 @@
use app::App;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::openai_api_key::OPENAI_API_KEY_ENV_VAR;
use codex_core::openai_api_key::get_openai_api_key;
use codex_core::openai_api_key::set_openai_api_key;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_core::util::is_inside_git_repo;
use codex_login::try_read_openai_api_key;
use log_layer::TuiLogLayer;
use std::fs::OpenOptions;
use std::path::PathBuf;
@@ -19,6 +23,7 @@ mod app;
mod app_event;
mod app_event_sender;
mod bottom_pane;
mod cell_widget;
mod chatwidget;
mod citation_regex;
mod cli;
@@ -27,11 +32,14 @@ mod exec_command;
mod git_warning_screen;
mod history_cell;
mod log_layer;
mod login_screen;
mod markdown;
mod mouse_capture;
mod scroll_event_helper;
mod slash_command;
mod status_indicator_widget;
mod text_block;
mod text_formatting;
mod tui;
mod user_approval_widget;
@@ -54,18 +62,23 @@ pub fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std::io::
model: cli.model.clone(),
approval_policy,
sandbox_policy,
disable_response_storage: if cli.disable_response_storage {
Some(true)
} else {
None
},
cwd: cli.cwd.clone().map(|p| p.canonicalize().unwrap_or(p)),
model_provider: None,
config_profile: cli.config_profile.clone(),
codex_linux_sandbox_exe,
};
// Parse `-c` overrides from the CLI.
let cli_kv_overrides = match cli.config_overrides.parse_overrides() {
Ok(v) => v,
#[allow(clippy::print_stderr)]
Err(e) => {
eprintln!("Error parsing -c overrides: {e}");
std::process::exit(1);
}
};
#[allow(clippy::print_stderr)]
match Config::load_with_overrides(overrides) {
match Config::load_with_cli_overrides(cli_kv_overrides, overrides) {
Ok(config) => config,
Err(err) => {
eprintln!("Error loading configuration: {err}");
@@ -116,13 +129,15 @@ pub fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std::io::
.with(tui_layer)
.try_init();
let show_login_screen = should_show_login_screen(&config);
// Determine whether we need to display the "not a git repo" warning
// modal. The warning is shown when the current working directory is
// *not* inside a Git repository **and** the user did *not* opt out via
// the `skip_git_repo_check` CLI flag.
let show_git_warning = !cli.skip_git_repo_check && !is_inside_git_repo(&config);
try_run_ratatui_app(cli, config, show_git_warning, log_rx);
try_run_ratatui_app(cli, config, show_login_screen, show_git_warning, log_rx);
Ok(())
}
@@ -133,10 +148,11 @@ pub fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std::io::
fn try_run_ratatui_app(
cli: Cli,
config: Config,
show_login_screen: bool,
show_git_warning: bool,
log_rx: tokio::sync::mpsc::UnboundedReceiver<String>,
) {
if let Err(report) = run_ratatui_app(cli, config, show_git_warning, log_rx) {
if let Err(report) = run_ratatui_app(cli, config, show_login_screen, show_git_warning, log_rx) {
eprintln!("Error: {report:?}");
}
}
@@ -144,6 +160,7 @@ fn try_run_ratatui_app(
fn run_ratatui_app(
cli: Cli,
config: Config,
show_login_screen: bool,
show_git_warning: bool,
mut log_rx: tokio::sync::mpsc::UnboundedReceiver<String>,
) -> color_eyre::Result<()> {
@@ -159,7 +176,13 @@ fn run_ratatui_app(
terminal.clear()?;
let Cli { prompt, images, .. } = cli;
let mut app = App::new(config.clone(), prompt, show_git_warning, images);
let mut app = App::new(
config.clone(),
prompt,
show_login_screen,
show_git_warning,
images,
);
// Bridge log receiver into the AppEvent channel so latest log lines update the UI.
{
@@ -189,3 +212,38 @@ fn restore() {
);
}
}
#[allow(clippy::unwrap_used)]
fn should_show_login_screen(config: &Config) -> bool {
if is_in_need_of_openai_api_key(config) {
// Reading the OpenAI API key is an async operation because it may need
// to refresh the token. Block on it.
let codex_home = config.codex_home.clone();
let (tx, rx) = tokio::sync::oneshot::channel();
tokio::spawn(async move {
match try_read_openai_api_key(&codex_home).await {
Ok(openai_api_key) => {
set_openai_api_key(openai_api_key);
tx.send(false).unwrap();
}
Err(_) => {
tx.send(true).unwrap();
}
}
});
// TODO(mbolin): Impose some sort of timeout.
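// Note: `block_in_place` requires the multi-threaded Tokio runtime and
// panics on the current-thread flavor.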
tokio::task::block_in_place(|| rx.blocking_recv()).unwrap()
} else {
false
}
}
fn is_in_need_of_openai_api_key(config: &Config) -> bool {
let is_using_openai_key = config
.model_provider
.env_key
.as_ref()
.map(|s| s == OPENAI_API_KEY_ENV_VAR)
.unwrap_or(false);
is_using_openai_key && get_openai_api_key().is_none()
}
@@ -0,0 +1,46 @@
use std::path::PathBuf;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::widgets::Paragraph;
use ratatui::widgets::Widget as _;
use ratatui::widgets::WidgetRef;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
pub(crate) struct LoginScreen {
app_event_tx: AppEventSender,
/// Use this with login_with_chatgpt() in login/src/lib.rs and, if
/// successful, update the in-memory config via
/// codex_core::openai_api_key::set_openai_api_key().
#[allow(dead_code)]
codex_home: PathBuf,
}
impl LoginScreen {
pub(crate) fn new(app_event_tx: AppEventSender, codex_home: PathBuf) -> Self {
Self {
app_event_tx,
codex_home,
}
}
pub(crate) fn handle_key_event(&mut self, key_event: KeyEvent) {
if let KeyCode::Char('q') = key_event.code {
self.app_event_tx.send(AppEvent::ExitRequest);
}
}
}
impl WidgetRef for &LoginScreen {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let text = Paragraph::new(
"Login using `codex login` and then run this command again. 'q' to quit.",
);
text.render(area, buf);
}
}
@@ -1,11 +1,26 @@
use clap::Parser;
use codex_common::CliConfigOverrides;
use codex_tui::Cli;
use codex_tui::run_main;
#[derive(Parser, Debug)]
struct TopCli {
#[clap(flatten)]
config_overrides: CliConfigOverrides,
#[clap(flatten)]
inner: Cli,
}
fn main() -> anyhow::Result<()> {
codex_linux_sandbox::run_with_sandbox(|codex_linux_sandbox_exe| async move {
let cli = Cli::parse();
run_main(cli, codex_linux_sandbox_exe)?;
let top_cli = TopCli::parse();
let mut inner = top_cli.inner;
inner
.config_overrides
.raw_overrides
.splice(0..0, top_cli.config_overrides.raw_overrides);
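// `splice(0..0, ..)` inserts the top-level `-c` overrides at the front so
// they come before any overrides captured by the inner `Cli`.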
run_main(inner, codex_linux_sandbox_exe)?;
Ok(())
})
}
@@ -71,7 +71,7 @@ fn rewrite_file_citations<'a>(
None => return Cow::Borrowed(src),
};
CITATION_REGEX.replace_all(src, |caps: &regex::Captures<'_>| {
CITATION_REGEX.replace_all(src, |caps: &regex_lite::Captures<'_>| {
let file = &caps[1];
let start_line = &caps[2];

Some files were not shown because too many files have changed in this diff.