Compare commits


183 Commits

Author SHA1 Message Date
Michael Bolin
2422660594 feat: list-models subcommand for full CLI 2025-06-09 16:27:56 -04:00
Michael Bolin
b73426c1c4 docs: update codex-rs/README.md to list new features in the Rust CLI (#1267)
Let users know what the Rust CLI supports that the TypeScript CLI
doesn't!
2025-06-06 18:32:10 -07:00
Reilly Wood
345a38502d codex-rs: Rename /clear to /new, make it start an entirely new chat (#1264)
I noticed that `/clear` wasn't fully clearing chat history; it would
clear the chat history widgets _in the UI_, but the LLM still had access
to information from previous messages.

This PR renames `/clear` to `/new` for clarity, as per Michael's
suggestion, and resets `app_state` to a fresh `ChatWidget`.
2025-06-06 16:29:37 -07:00
Michael Bolin
029f39b9da feat: port maybeRedeemCredits() from get-api-key.tsx to login_with_chatgpt.py (#1221)
This builds on https://github.com/openai/codex/pull/1212 and ports the
`maybeRedeemCredits()` function from `get-api-key.tsx` to
`login_with_chatgpt.py`:


a80240cfdc/codex-cli/src/utils/get-api-key.tsx (L84-L89)
2025-06-05 23:34:10 -07:00
Michael Bolin
a80240cfdc chore: ensure next Node.js release includes musl binaries for arm64 Linux (#1232)
Target a workflow with more recent binary artifacts.
2025-06-05 23:14:10 -07:00
Michael Bolin
2d5246050a fix: use aarch64-unknown-linux-musl instead of aarch64-unknown-linux-gnu (#1228)
Now that we have published a GitHub Release that contains arm64 musl
artifacts for Linux, update the following scripts to take advantage of
them:

- `dotslash-config.json` now uses musl artifacts for the `linux-aarch64`
target
- `install_native_deps.sh` for the TypeScript CLI now includes
`codex-linux-sandbox-aarch64-unknown-linux-musl` instead of
`codex-linux-sandbox-aarch64-unknown-linux-gnu` for sandboxing
- `codex-cli/bin/codex.js` now checks for `aarch64-unknown-linux-musl`
artifacts instead of `aarch64-unknown-linux-gnu` ones
2025-06-05 22:45:45 -07:00
Michael Bolin
77b017f67d fix: truncate auth.json file before rewriting it (#1231)
This was missed in https://github.com/openai/codex/pull/1212. Caught by
@rizwankce in
https://github.com/openai/codex/discussions/1174#discussioncomment-13377475.
2025-06-05 22:11:02 -07:00
Michael Bolin
c02d25fbad fix: include codex-linux-sandbox-aarch64-unknown-linux-musl in the set of release artifacts (#1230)
This was missed in https://github.com/openai/codex/pull/1225. Once we
create a new GitHub Release with this change, we can use the URL from
the workflow that triggered the release in
https://github.com/openai/codex/pull/1228.
2025-06-05 22:03:07 -07:00
Michael Bolin
9db53b33aa fix: support arm64 build for Linux (#1225)
Users were running into issues with glibc mismatches on arm64 Linux. In
the past, we did not provide a musl build for arm64 Linux because we had
trouble getting the openssl dependency to build correctly. Though today
I just tried the same trick in `Cargo.toml` that we were doing for
`x86_64-unknown-linux-musl` (using `openssl-sys` with `features =
["vendored"]`), so I'm not sure what problem we had in the past; the
builds "just worked" today!
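
For reference, the trick looks roughly like this in `Cargo.toml` (a
sketch; the exact target tables are an assumption):

```toml
[target.x86_64-unknown-linux-musl.dependencies]
# Build OpenSSL from source instead of linking the system library.
openssl-sys = { version = "0.9", features = ["vendored"] }

[target.aarch64-unknown-linux-musl.dependencies]
openssl-sys = { version = "0.9", features = ["vendored"] }
```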

Though one tweak that did have to be made is that the integration tests
for Seccomp/Landlock empirically require longer timeouts on arm64 Linux,
or at least on the `ubuntu-24.04-arm` GitHub Runner. As such, we change
the timeouts for arm64 in `codex-rs/linux-sandbox/tests/landlock.rs`.

Though in solving this problem, I decided I needed a turnkey solution
for testing the Linux build(s) from my Mac laptop, so this PR introduces
`.devcontainer/Dockerfile` and `.devcontainer/devcontainer.json` to
facilitate this. Detailed instructions are in `.devcontainer/README.md`.

We will update `dotslash-config.json` and other release-related scripts
in a follow-up PR.
2025-06-05 20:29:46 -07:00
Michael Bolin
515b6331bd feat: add support for login with ChatGPT (#1212)
This does not implement the full Login with ChatGPT experience, but it
should unblock people.

**What works**

* The `codex` multitool now has a `login` subcommand, so you can run
`codex login`, which should write `CODEX_HOME/auth.json` if you complete
the flow successfully. The TUI will now read the `OPENAI_API_KEY` from
`auth.json`.
* The TUI should refresh the token if it has expired and the necessary
information is in `auth.json`.
* There is a `LoginScreen` in the TUI that tells you to run `codex
login` if both (1) your model provider expects to use `OPENAI_API_KEY`
as its env var, and (2) `OPENAI_API_KEY` is not set.

**What does not work**

* The `LoginScreen` does not support the login flow from within the TUI.
Instead, it tells you to quit, run `codex login`, and then run `codex`
again.
* `codex exec` does not read from `auth.json` yet, nor does it direct the
user to go through the login flow if `OPENAI_API_KEY` is not found.
* The `maybeRedeemCredits()` function from `get-api-key.tsx` has not
been ported from TypeScript to `login_with_chatgpt.py` yet:


a67a67f325/codex-cli/src/utils/get-api-key.tsx (L84-L89)

**Implementation**

Currently, the OAuth flow requires running a local webserver on
`127.0.0.1:1455`. It seemed wasteful to incur the additional binary cost
of a webserver dependency in the Rust CLI just to support login, so
instead we implement this logic in Python, as Python has a `http.server`
module as part of its standard library. Specifically, we bundle the
contents of a single Python file as a string in the Rust CLI and then
use it to spawn a subprocess as `python3 -c
{{SOURCE_FOR_PYTHON_SERVER}}`.
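
A minimal sketch of the approach (assumed shape; the real code lives in
`codex-rs/login/src/lib.rs`):

```rust
use std::process::{Child, Command};

// Bundle the Python source into the Rust binary at compile time.
const SOURCE_FOR_PYTHON_SERVER: &str = include_str!("login_with_chatgpt.py");

fn spawn_login_server(codex_home: &str) -> std::io::Result<Child> {
    // Equivalent to: python3 -c {{SOURCE_FOR_PYTHON_SERVER}}
    Command::new("python3")
        .arg("-c")
        .arg(SOURCE_FOR_PYTHON_SERVER)
        .env("CODEX_HOME", codex_home) // the script honors CODEX_HOME (see Testing)
        .spawn()
}
```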

As such, the most significant files in this PR are:

```
codex-rs/login/src/login_with_chatgpt.py
codex-rs/login/src/lib.rs
```

Now that the CLI may load `OPENAI_API_KEY` from the environment _or_
`CODEX_HOME/auth.json`, we need a new abstraction for reading/writing
this variable, so we introduce:

```
codex-rs/core/src/openai_api_key.rs
```

Note that `std::env::set_var()` is [rightfully] `unsafe` in Rust 2024,
so we use a `LazyLock<RwLock<Option<String>>>` to store `OPENAI_API_KEY`
so it is read in a thread-safe manner.
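
A minimal sketch of that abstraction (assumed shape; see
`codex-rs/core/src/openai_api_key.rs` for the real one):

```rust
use std::sync::{LazyLock, RwLock};

// Seeded from the environment once; updated when auth.json is (re)loaded.
static OPENAI_API_KEY: LazyLock<RwLock<Option<String>>> =
    LazyLock::new(|| RwLock::new(std::env::var("OPENAI_API_KEY").ok()));

pub fn get_openai_api_key() -> Option<String> {
    OPENAI_API_KEY.read().unwrap().clone()
}

pub fn set_openai_api_key(value: String) {
    *OPENAI_API_KEY.write().unwrap() = Some(value);
}
```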

Ultimately, it should be possible to go through the entire login flow
from the TUI. This PR introduces a placeholder `LoginScreen` UI for that
right now, though the new `codex login` subcommand introduced in this PR
should be a viable workaround until the UI is ready.

**Testing**

Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:

```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```

For reference:

* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
2025-06-04 08:44:17 -07:00
Reilly Wood
a67a67f325 codex-rs: make tool calls prettier (#1211)
This PR overhauls how active tool calls and completed tool calls are
displayed:

1. More use of colour to indicate success/failure and distinguish between components like tool name+arguments
2. Previously, the entire `CallToolResult` was serialized to JSON and pretty-printed. Now, we extract each individual `CallToolResultContent` and print those:
   1. The previous solution was wasting space by unnecessarily showing details of the `CallToolResult` struct to users, without formatting the actual tool call results nicely
   2. We're now able to show users more information from tool results in less space, with nicer formatting when tools return JSON results

### Before:

<img width="1251" alt="Screenshot 2025-06-03 at 11 24 26"
src="https://github.com/user-attachments/assets/5a58f222-219c-4c53-ace7-d887194e30cf"
/>

### After:

<img width="1265" alt="image"
src="https://github.com/user-attachments/assets/99fe54d0-9ebe-406a-855b-7aa529b91274"
/>

## Future Work

1. Integrate image tool result handling better. We should be able to
display images even if they're not the first `CallToolResultContent`
2. Users should have some way to view the full version of truncated tool
results
3. It would be nice to add some left padding for tool results, make it
more clear that they are results. This is doable, just a little fiddly
due to the way `first_visible_line` scrolling works
4. There's almost certainly a better way to format JSON than "all on 1
line with spaces to make Ratatui wrapping work". But I think that works
OK for now.
2025-06-03 14:29:26 -07:00
Michael Bolin
c6fcec55fe fix: always send full instructions when using the Responses API (#1207)
This fixes a longstanding error in the Rust CLI where `codex.rs`
contained an errant `is_first_turn` check that would exclude the user
instructions for subsequent "turns" of a conversation when using the
responses API (i.e., when `previous_response_id` existed).

While here, renames `Prompt.instructions` to `Prompt.user_instructions`
since we now have quite a few levels of instructions floating around.
Also removed an unnecessary use of `clone()` in
`Prompt.get_full_instructions()`.
2025-06-03 09:40:19 -07:00
Michael Bolin
6fcc528a43 fix: provide tolerance for apply_patch tool (#993)
As explained in detail in the doc comment for `ParseMode::Lenient`, we
have observed that GPT-4.1 does not always generate a valid invocation
of `apply_patch`. Fortunately, the error is predictable, so we introduce
some new logic to the `codex-apply-patch` crate to recover from this
error.

Because we would like to avoid this becoming a de facto standard (as it
would be incompatible if `apply_patch` were provided as an actual
executable, unless we also introduced the lenient behavior in the
executable, as well), we require passing `ParseMode::Lenient` to
`parse_patch_text()` to make it clear that the caller is opting into
supporting this special case.
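
A hypothetical sketch of the opt-in (the real types live in the
`codex-apply-patch` crate and differ in detail):

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum ParseMode {
    /// Accept only well-formed `apply_patch` invocations.
    Strict,
    /// Additionally recover from the predictable malformed shape GPT-4.1
    /// sometimes emits; callers must opt in explicitly.
    Lenient,
}

pub fn parse_patch_text(text: &str, mode: ParseMode) -> Result<Vec<String>, String> {
    let body = text.trim();
    let well_formed = body.starts_with("*** Begin Patch") && body.ends_with("*** End Patch");
    if !well_formed && mode == ParseMode::Strict {
        return Err("malformed apply_patch invocation".to_string());
    }
    // In Lenient mode the real crate repairs the known-bad shape before
    // parsing; here we simply accept the body for illustration.
    Ok(body.lines().map(str::to_string).collect())
}
```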

Note the analogous change to the TypeScript CLI was
https://github.com/openai/codex/pull/930. In addition to changing the
accepted input to `apply_patch`, it also introduced additional
instructions for the model, which we include in this PR.

Note that `apply-patch` does not depend on either `regex` or
`regex-lite`, so some of the checks are slightly more verbose to avoid
introducing this dependency.

That said, this PR does not leverage the existing
`extract_heredoc_body_from_apply_patch_command()`, which depends on
`tree-sitter` and `tree-sitter-bash`:


5a5aa89914/codex-rs/apply-patch/src/lib.rs (L191-L246)

though perhaps it should.
2025-06-03 09:06:38 -07:00
Michael Bolin
5a5aa89914 chore: replace regex with regex-lite, where appropriate (#1200)
As explained on https://crates.io/crates/regex-lite, `regex-lite` is a
lighter alternative to `regex` and seems to be sufficient for our
purposes.
2025-06-02 17:11:45 -07:00
Michael Bolin
0f3cc8f842 feat: make reasoning effort/summaries configurable (#1199)
Previous to this PR, we always set `reasoning` when making a request
using the Responses API:


d7245cbbc9/codex-rs/core/src/client.rs (L108-L111)

Though if you tried to use the Rust CLI with `--model gpt-4.1`, this
would fail with:

```shell
"Unsupported parameter: 'reasoning.effort' is not supported with this model."
```

We take a cue from the TypeScript CLI, which does a check on the model
name:


d7245cbbc9/codex-cli/src/utils/agent/agent-loop.ts (L786-L789)

This PR does a similar check, though also adds support for the following
config options:

```
model_reasoning_effort = "low" | "medium" | "high" | "none"
model_reasoning_summary = "auto" | "concise" | "detailed" | "none"
```

This way, if you have a model whose name happens to start with `"o"` (or
`"codex"`?), you can set these to `"none"` to explicitly disable
reasoning, if necessary. (That said, it seems unlikely anyone would use
the Responses API with non-OpenAI models, but we provide an escape
hatch, anyway.)
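
For example, a `config.toml` along these lines (exact placement of the
keys is assumed) disables both knobs for a non-reasoning model:

```toml
model = "gpt-4.1"
model_reasoning_effort = "none"
model_reasoning_summary = "none"
```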

This PR also updates both the TUI and `codex exec` to show `reasoning
effort` and `reasoning summaries` in the header.
2025-06-02 16:01:34 -07:00
Michael Bolin
d7245cbbc9 fix: chat completions API now also passes tools along (#1167)
Prior to this PR, there were two big misses in `chat_completions.rs`:

1. The loop in `stream_chat_completions()` was only including items of
type `ResponseItem::Message` when building up the `"messages"` JSON for
the `POST` request to the `chat/completions` endpoint. This fixes things
by ensuring other variants (`FunctionCall`, `LocalShellCall`, and
`FunctionCallOutput`) are included, as well.
2. In `process_chat_sse()`, we were not recording tool calls and were
only emitting items of type
`ResponseEvent::OutputItemDone(ResponseItem::Message)` to the stream.
Now we introduce `FunctionCallState`, which is used to accumulate the
`delta`s of type `tool_calls`, so we can ultimately emit a
`ResponseItem::FunctionCall` when appropriate (sketched below).
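
A hypothetical sketch of that accumulation (field names are stand-ins):

```rust
/// Accumulates streamed `tool_calls` deltas until the function call is
/// complete and can be emitted as a `ResponseItem::FunctionCall`.
#[derive(Default)]
struct FunctionCallState {
    name: Option<String>,
    call_id: Option<String>,
    arguments: String, // JSON arguments arrive as string fragments
}

impl FunctionCallState {
    fn apply_delta(&mut self, name: Option<&str>, call_id: Option<&str>, args: &str) {
        if let Some(name) = name {
            self.name.get_or_insert_with(|| name.to_string());
        }
        if let Some(call_id) = call_id {
            self.call_id.get_or_insert_with(|| call_id.to_string());
        }
        self.arguments.push_str(args);
    }
}
```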

While function calling now appears to work for chat completions with my
local testing, I believe that there are still edge cases that are not
covered and that this codepath would benefit from a battery of
integration tests. (As part of that further cleanup, we should also work
to support streaming responses in the UI.)

The other important part of this PR is some cleanup in
`core/src/codex.rs`. In particular, it was hard to reason about how
`run_task()` was building up the list of messages to include in a
request across the various cases:

- Responses API
- Chat Completions API
- Responses API used in concert with ZDR

I like to think things are a bit cleaner now where:

- `zdr_transcript` (if present) contains all messages in the history of
the conversation, which includes function call outputs that have not
been sent back to the model yet
- `pending_input` includes any messages the user has submitted while the
turn is in flight that need to be injected as part of the next `POST` to
the model
- `input_for_next_turn` includes the tool call outputs that have not
been sent back to the model yet
2025-06-02 13:47:51 -07:00
Michael Bolin
e40f86b446 chore: logging cleanup (#1196)
Update what we log to make `RUST_LOG=debug` a bit easier to work with.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1196).
* #1167
* __->__ #1196
2025-06-02 13:31:33 -07:00
Michael Bolin
7896b1089d chore: update the WORKFLOW_URL in install_native_deps.sh to the latest release (#1190) 2025-05-31 10:30:50 -07:00
Michael Bolin
1410ae95ca fix: set --config hide_agent_reasoning=true in the GitHub Action (#1185)
Whoops, I had this flipped in https://github.com/openai/codex/pull/1183.
2025-05-30 23:57:05 -07:00
Michael Bolin
fccf5f3221 fix: disable agent reasoning output by default in the GitHub Action (#1183) 2025-05-30 23:49:48 -07:00
Michael Bolin
1159eaf04f feat: show the version when starting Codex (#1182)
The TypeScript version of the CLI shows the version when it starts up,
which is helpful when users share screenshots (and nice to know, as a
user).
2025-05-30 23:24:36 -07:00
Michael Bolin
e81327e5f4 feat: add hide_agent_reasoning config option (#1181)
This PR introduces a `hide_agent_reasoning` config option (that defaults
to `false`) that users can enable to make the output less verbose by
suppressing reasoning output.

To test, verified that this includes agent reasoning in the output:

```
echo hello | just exec
```

whereas this does not:

```
echo hello | just exec --config hide_agent_reasoning=true
```
2025-05-30 23:14:56 -07:00
Michael Bolin
4f3d294762 feat: dim the timestamp in the exec output (#1180)
This required changing `ts_println!()` to take `$self:ident`, which is a
bit more verbose, but the usability improvement seems worth it.

Also eliminated an unnecessary `.to_string()` while here.
2025-05-30 16:27:37 -07:00
Michael Bolin
cf1d070538 feat: grab-bag of improvements to exec output (#1179)
Fixes:

* Instantiate `EventProcessor` earlier in `lib.rs` so
`print_config_summary()` can be an instance method of it and leverage
its various `Style` fields to ensure it honors `with_ansi` properly.
* After printing the config summary, print out the user's prompt with the
heading `User instructions:`. As noted in the comment, now that we can
read the instructions via stdin as of #1178, it is helpful to the user
to ensure they know what instructions were given to Codex.
* Use same colors/bold/italic settings for headers as the TUI, making
the output a bit easier to read.
2025-05-30 16:22:10 -07:00
Michael Bolin
ae743d56b0 feat: for codex exec, if PROMPT is not specified, read from stdin if not a TTY (#1178)
This attempts to make `codex exec` more flexible in how the prompt can
be passed (a sketch of the resulting rules follows the list):

* as before, it can be passed as a single string argument
* if `-` is passed as the value, the prompt is read from stdin
* if no argument is passed _and stdin is a tty_, prints a warning to
stderr that no prompt was specified and exits non-zero.
* if no argument is passed _and stdin is NOT a tty_, prints `Reading
prompt from stdin...` to stderr to let the user know that Codex will
wait until it reads EOF from stdin to proceed. (You can repro this case
by doing `yes | just exec` since stdin is not a TTY in that case but it
also never reaches EOF).
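
A minimal sketch of these rules (assumed shape, not the actual `exec` code):

```rust
use std::io::{self, IsTerminal, Read};

fn resolve_prompt(arg: Option<String>) -> io::Result<String> {
    match arg.as_deref() {
        Some("-") => read_stdin(), // explicit request to read stdin
        Some(prompt) => Ok(prompt.to_string()),
        None if io::stdin().is_terminal() => {
            eprintln!("No prompt specified.");
            std::process::exit(1);
        }
        None => {
            eprintln!("Reading prompt from stdin...");
            read_stdin()
        }
    }
}

fn read_stdin() -> io::Result<String> {
    let mut buf = String::new();
    io::stdin().read_to_string(&mut buf)?;
    Ok(buf)
}
```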
2025-05-30 14:41:55 -07:00
Michael Bolin
1bf82056b3 fix: introduce create_tools_json() and share it with chat_completions.rs (#1177)
The main motivator behind this PR is that `stream_chat_completions()`
was not adding the `"tools"` entry to the payload posted to the
`/chat/completions` endpoint. This (1) refactors the existing logic to
build up the `"tools"` JSON from `client.rs` into `openai_tools.rs`, and
(2) updates the use of responses API (`client.rs`) and chat completions
API (`chat_completions.rs`) to both use it.

Note this PR alone is not sufficient to get tool calling from chat
completions working: that is done in
https://github.com/openai/codex/pull/1167.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1177).
* #1167
* __->__ #1177
2025-05-30 14:07:03 -07:00
Michael Bolin
e207f20f64 fix: add extra debugging to GitHub Action (#1173)
https://github.com/openai/codex/actions/runs/15352839832/job/43205041563
appeared to fail around `postComment()`, but I don't see the output from
`fail()` in the logs. Adding a bit more info.
2025-05-30 11:16:30 -07:00
Michael Bolin
0f40ef5a10 fix: missed a step in #1171 for codex.yml (#1172)
Missed in my copy/paste.
2025-05-30 11:04:41 -07:00
Michael Bolin
8676185389 fix: update outdated repo setup in codex.yml (#1171)
We should do some work to share the setup logic across `codex.yml`,
`ci.yml`, and `rust-ci.yml`.
2025-05-30 10:58:57 -07:00
Michael Bolin
baa92f37e0 feat: initial import of experimental GitHub Action (#1170)
This is a first cut at a GitHub Action that lets you define prompt
templates in `.md` files under `.github/codex/labels` that will run
Codex with the associated prompt when the label is added to a GitHub
pull request.

For example, this PR includes these files:

```
.github/codex/labels/codex-attempt.md
.github/codex/labels/codex-code-review.md
.github/codex/labels/codex-investigate-issue.md
```

And the new `.github/workflows/codex.yml` workflow declares the
following triggers:

```yaml
on:
  issues:
    types: [opened, labeled]
  pull_request:
    branches: [main]
    types: [labeled]
```

as well as the following expression to gate the action:

```
jobs:
  codex:
    if: |
      (github.event_name == 'issues' && (
        (github.event.action == 'labeled' && (github.event.label.name == 'codex-attempt' || github.event.label.name == 'codex-investigate-issue'))
      )) ||
      (github.event_name == 'pull_request' && github.event.action == 'labeled' && github.event.label.name == 'codex-code-review')
```

Note the "actor" who added the label must have write access to the repo
for the action to take effect.

After adding a label, the action will "ack" the request by replacing the
original label (e.g., `codex-review`) with an `-in-progress` suffix
(e.g., `codex-review-in-progress`). When it is finished, it will swap
the `-in-progress` label with a `-completed` one (e.g.,
`codex-review-completed`).

Users of the action are responsible for providing an `OPENAI_API_KEY`
and making it available as a secret to the action.
2025-05-30 10:55:28 -07:00
Michael Bolin
a0239c3cd6 fix: enable set positional-arguments in justfile (#1169)
The way these definitions worked before, they did not handle quoted args
with spaces properly.

For example, if you had `/tmp/test-just/printlen.py` as:

```python
#!/usr/bin/env python3

import sys

print(len(sys.argv))
```

and your `justfile` was:

```
printlen *args:
    /tmp/test-just/printlen.py {{args}}
```

Then:

```shell
$ just printlen foo bar
3
$ just printlen 'foo bar'
3
```

which is not what we want: `'foo bar'` should be treated as one
argument.

The fix is to use
[positional-arguments](515e806b51/README.md (L1131)):

```
set positional-arguments

printlen *args:
    /tmp/test-just/printlen.py "$@"
```
2025-05-30 09:11:53 -07:00
Michael Bolin
bdfa95ed31 docs: split the config-related portion of codex-rs/README.md into its own config.md file (#1165)
Also updated the overview on `codex-rs/README.md` while here.
2025-05-29 16:59:35 -07:00
Fouad Matin
828e2062c2 fix(codex-rs): use codex-mini-latest as default (#1164) 2025-05-29 16:55:19 -07:00
Michael Bolin
92957c47fb fix: update justfile to facilitate running CLIs from source and formatting source code (#1163) 2025-05-29 15:35:14 -07:00
Michael Bolin
8c1902b562 chore: update GitHub workflow for native artifacts for npm release (#1162)
Among other things, this picks up this UI treatment fix:

https://github.com/openai/codex/pull/1161
2025-05-29 15:34:06 -07:00
Michael Bolin
a32d305ae6 fix: update UI treatment of slash command menu to match that of the TS CLI (#1161)
Uses the same colors as in the TypeScript CLI:


![image](https://github.com/user-attachments/assets/919cd472-ffb4-4654-a46a-d84f0cd9c097)

Now it is also readable on a light theme, e.g., in Ghostty:


![image](https://github.com/user-attachments/assets/468c37b0-ea63-4455-9b48-73dc2c95f0f6)
2025-05-29 14:57:55 -07:00
Michael Bolin
a768a6a41d fix: introduce ResponseInputItem::McpToolCallOutput variant (#1151)
The output of an MCP server tool call can be one of several types, but
to date, we treated all outputs as text by showing the serialized JSON
as the "tool output" in Codex:


25a9949c49/codex-rs/mcp-types/src/lib.rs (L96-L101)

This PR adds support for the `ImageContent` variant so we can now
display an image output from an MCP tool call.

In making this change, we introduce a new
`ResponseInputItem::McpToolCallOutput` variant so that we can work with
the `mcp_types::CallToolResult` directly when the function call is made
to an MCP server.

Though arguably the more significant change is the introduction of
`HistoryCell::CompletedMcpToolCallWithImageOutput`, which is a cell that
uses `ratatui_image` to render an image into the terminal. To support
this, we introduce `ImageRenderCache`, cache a
`ratatui_image::picker::Picker`, and add `ensure_image_cache()` to cache
the appropriate scaled image data and dimensions based on the current
terminal size.

To test, I created a minimal `package.json`:

```json
{
  "name": "kitty-mcp",
  "version": "1.0.0",
  "type": "module",
  "description": "MCP that returns image of kitty",
  "main": "index.js",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.12.0"
  }
}
```

with the following `index.js` to define the MCP server:

```js
#!/usr/bin/env node

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { join } from "node:path";

const IMAGE_URI = "image://Ada.png";

const server = new McpServer({
  name: "Demo",
  version: "1.0.0",
});

server.tool(
  "get-cat-image",
  "If you need a cat image, this tool will provide one.",
  async () => ({
    content: [
      { type: "image", data: await getAdaPngBase64(), mimeType: "image/png" },
    ],
  })
);

server.resource("Ada the Cat", IMAGE_URI, async (uri) => {
  const base64Image = await getAdaPngBase64();
  return {
    contents: [
      {
        uri: uri.href,
        mimeType: "image/png",
        blob: base64Image,
      },
    ],
  };
});

async function getAdaPngBase64() {
  const __dirname = new URL(".", import.meta.url).pathname;
  // From 9705ce2c59/assets/Ada.png
  const filePath = join(__dirname, "Ada.png");
  const imageData = await readFile(filePath);
  const base64Image = imageData.toString("base64");
  return base64Image;
}

const transport = new StdioServerTransport();
await server.connect(transport);
```

With the local changes from this PR, I added the following to my
`config.toml`:

```toml
[mcp_servers.kitty]
command = "node"
args = ["/Users/mbolin/code/kitty-mcp/index.js"]
```

Running the TUI from source:

```
cargo run --bin codex -- --model o3 'I need a picture of a cat'
```

I get:

<img width="732" alt="image"
src="https://github.com/user-attachments/assets/bf80b721-9ca0-4d81-aec7-77d6899e2869"
/>

Now, that said, I have only tested in iTerm and there is definitely some
funny business with getting an accurate character-to-pixel ratio
(sometimes the `CompletedMcpToolCallWithImageOutput` thinks it needs 10
rows to render instead of 4), so there is still work to be done here.
2025-05-28 19:03:17 -07:00
Michael Bolin
25a9949c49 fix: ensure inputSchema for MCP tool always has "properties" field when talking to OpenAI (#1150)
As noted in the comment introduced in this PR, this is analogous to the
issue reported in
https://github.com/openai/openai-agents-python/issues/449. This seems to
work now.
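
A hypothetical sketch of the normalization (assumes `serde_json`; the
real code keys off the MCP tool's `inputSchema`):

```rust
use serde_json::{json, Value};

/// Ensure a tool's JSON Schema object always carries a "properties" key
/// before it is sent to OpenAI.
fn ensure_properties(schema: &mut Value) {
    if let Value::Object(map) = schema {
        map.entry("properties").or_insert_with(|| json!({}));
    }
}
```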
2025-05-28 17:17:21 -07:00
Michael Bolin
392fdd7db6 fix: honor RUST_LOG in mcp-client CLI and default to DEBUG (#1149)
We had `debug!()` logging statements already, but they weren't being
printed because `tracing_subscriber` was not set up.
2025-05-28 17:10:06 -07:00
Michael Bolin
ae1a83f095 feat: introduce CellWidget trait (#1148)
The motivation behind this PR is to make it so a `HistoryCell` is more
like a `WidgetRef` that knows how to render itself into a `Rect` so that
it can be backed by something other than a `Vec<Line>`. Because a
`HistoryCell` is intended to appear in a scrollable list, we want to
ensure the stack of cells can be scrolled one `Line` at a time even if
the `HistoryCell` is not backed by a `Vec<Line>` itself.

To this end, we introduce the `CellWidget` trait whose key method is:

```
fn render_window(&self, first_visible_line: usize, area: Rect, buf: &mut Buffer);
```

The `first_visible_line` param is what differs from
`WidgetRef::render_ref()`, as a `CellWidget` needs to know the offset
into its "full view" at which it should start rendering.

The bookkeeping in `ConversationHistoryWidget` has been updated
accordingly to ensure each `CellWidget` in the history is rendered
appropriately.
2025-05-28 14:03:19 -07:00
Michael Bolin
d60f350cf8 feat: add support for -c/--config to override individual config items (#1137)
This PR introduces support for `-c`/`--config` so users can override
individual config values on the command line using `--config
name=value`. Example:

```
codex --config model=o4-mini
```

Making it possible to set arbitrary config values on the command line
results in a more flexible configuration scheme and makes it easier to
provide single-line examples that can be copy-pasted from documentation.

Effectively, it means there are four levels of configuration for some
values:

- Default value (e.g., `model` currently defaults to `o4-mini`)
- Value in `config.toml` (e.g., user could override the default to be
`model = "o3"` in their `config.toml`)
- Specifying `-c` or `--config` to override `model` (e.g., user can
include `-c model=o3` in their list of args to Codex)
- If available, a config-specific flag can be used, which takes
precedence over `-c` (e.g., user can specify `--model o3` in their list
of args to Codex)

Now that it is possible to specify anything that could be configured in
`config.toml` on the command line using `-c`, we do not need to have a
custom flag for every possible config option (which can clutter the
output of `--help`). To that end, as part of this PR, we drop support
for the `--disable-response-storage` flag, as users can now specify `-c
disable_response_storage=true` to get the equivalent functionality.

Under the hood, this works by loading the `config.toml` into a
`toml::Value`. Then for each `key=value`, we create a small synthetic
TOML file with `value` so that we can run the TOML parser to get the
equivalent `toml::Value`. We then parse `key` to determine the point in
the original `toml::Value` to do the insert/replace. Once all of the
overrides from `-c` args have been applied, the `toml::Value` is
deserialized into a `ConfigToml` and then the `ConfigOverrides` are
applied, as before.
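
A rough, simplified sketch of that merge (an assumed helper, not the
actual code):

```rust
use toml::Value;

fn apply_override(root: &mut Value, key: &str, raw: &str) -> Result<(), toml::de::Error> {
    // Parse the value by wrapping it in a tiny synthetic TOML document.
    let synthetic: Value = format!("x = {raw}").parse()?;
    let value = synthetic["x"].clone();

    // Walk dotted key segments, creating intermediate tables as needed.
    let mut node = root;
    let segments: Vec<&str> = key.split('.').collect();
    for (i, segment) in segments.iter().enumerate() {
        let table = node.as_table_mut().expect("expected a TOML table");
        if i == segments.len() - 1 {
            table.insert(segment.to_string(), value);
            return Ok(());
        }
        node = table
            .entry(segment.to_string())
            .or_insert(Value::Table(toml::map::Map::new()));
    }
    Ok(())
}
```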
2025-05-27 23:11:44 -07:00
Michael Bolin
eba0e32909 fix: update install_native_deps.sh to pick up the latest release (#1136) 2025-05-27 10:06:41 -07:00
Michael Bolin
29d154cb13 fix: use o4-mini as the default model (#1135)
Rollback of https://github.com/openai/codex/pull/972.
2025-05-27 09:12:55 -07:00
Michael Bolin
6b5b184f21 fix: TUI was not honoring --skip-git-repo-check correctly (#1105)
I discovered that if I ran `codex <PROMPT>` in a cwd that was not a Git
repo, Codex did not automatically run `<PROMPT>` after I accepted the
Git warning. It appears that we were not managing the `AppState`
transition correctly, so this fixes the bug and ensures the Codex
session does not start until the user accepts the Git warning.

In particular, we now create the `ChatWidget` lazily and store it in the
`AppState::Chat` variant.
2025-05-24 08:33:49 -07:00
Michael Bolin
4bf81373a7 fix: forgot to pass codex_linux_sandbox_exe through in cli/src/debug_sandbox.rs (#1095)
I accidentally missed this in https://github.com/openai/codex/pull/1086.
2025-05-23 11:53:13 -07:00
Michael Bolin
89ef4efdcf fix: overhaul how we spawn commands under seccomp/landlock on Linux (#1086)
Historically, we spawned the Seatbelt and Landlock sandboxes in
substantially different ways:

For **Seatbelt**, we would run `/usr/bin/sandbox-exec` with our policy
specified as an arg followed by the original command:


d1de7bb383/codex-rs/core/src/exec.rs (L147-L219)

For **Landlock/Seccomp**, we would do
`tokio::runtime::Builder::new_current_thread()`, _invoke
Landlock/Seccomp APIs to modify the permissions of that new thread_, and
then spawn the command:


d1de7bb383/codex-rs/core/src/exec_linux.rs (L28-L49)

While it is neat that Landlock/Seccomp supports applying a policy to
only one thread without having to apply it to the entire process, it
requires us to maintain two different codepaths and is a bit harder to
reason about. The tipping point was
https://github.com/openai/codex/pull/1061, in which we had to start
building up the `env` in an unexpected way for the existing
Landlock/Seccomp approach to continue to work.

This PR overhauls things so that we do similar things for Mac and Linux.
It turned out that we were already building our own "helper binary"
comparable to Mac's `sandbox-exec` as part of the `cli` crate:


d1de7bb383/codex-rs/cli/Cargo.toml (L10-L12)

We originally created this to build a small binary to include with the
Node.js version of the Codex CLI to provide support for Linux
sandboxing.

Though the sticky bit is that, at this point, we still want to deploy
the Rust version of Codex as a single, standalone binary rather than a
CLI and a supporting sandboxing binary. To satisfy this goal, we use
"the arg0 trick," in which we:

* use `std::env::current_exe()` to get the path to the CLI that is
currently running
* use the CLI as the `program` for the `Command`
* set `"codex-linux-sandbox"` as arg0 for the `Command`

A CLI that supports sandboxing should check arg0 at the start of the
program. If it is `"codex-linux-sandbox"`, it must invoke
`codex_linux_sandbox::run_main()`, which runs the CLI as if it were
`codex-linux-sandbox`. When acting as `codex-linux-sandbox`, we make the
appropriate Landlock/Seccomp API calls and then use `execvp(3)` to run
the original command, so we _replace_ the process rather than spawning a
subprocess. Incidentally, we do this before starting the Tokio runtime,
so the process should only have one thread when `execvp(3)` is called.
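
A condensed sketch of the trick (assumed shapes;
`codex_linux_sandbox::run_main()` is the entry point named above):

```rust
use std::os::unix::process::CommandExt;
use std::process::{Child, Command};

/// Spawn `command` under the sandbox by re-invoking the current CLI with
/// arg0 set to the sentinel value.
fn spawn_sandboxed(command: &[String]) -> std::io::Result<Child> {
    let exe = std::env::current_exe()?;
    Command::new(exe)
        .arg0("codex-linux-sandbox")
        .args(command)
        .spawn()
}

fn main() {
    // Check arg0 before starting the Tokio runtime, so the process still
    // has a single thread if we end up calling execvp(3).
    if std::env::args().next().as_deref() == Some("codex-linux-sandbox") {
        // Applies Landlock/Seccomp, then execvp(3)s the original command.
        codex_linux_sandbox::run_main();
    }
    // ...otherwise continue as the normal Codex CLI...
}
```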

Because the `core` crate that needs to spawn the Linux sandboxing is not
a CLI in its own right, this means that every CLI that includes `core`
and relies on this behavior has to (1) implement it and (2) provide the
path to the sandboxing executable. While the path is almost always
`std::env::current_exe()`, we needed to make this configurable for
integration tests, so `Config` now has a `codex_linux_sandbox_exe:
Option<PathBuf>` property to facilitate threading this through,
introduced in https://github.com/openai/codex/pull/1089.

This common pattern is now captured in
`codex_linux_sandbox::run_with_sandbox()` and all of the `main.rs`
functions that should use it have been updated as part of this PR.

The `codex-linux-sandbox` crate added to the Cargo workspace as part of
this PR now has the bulk of the Landlock/Seccomp logic, which makes
`core` a bit simpler. Indeed, `core/src/exec_linux.rs` and
`core/src/landlock.rs` were removed/ported as part of this PR. I also
moved the unit tests for this code into an integration test,
`linux-sandbox/tests/landlock.rs`, in which I use
`env!("CARGO_BIN_EXE_codex-linux-sandbox")` as the value for
`codex_linux_sandbox_exe` since `std::env::current_exe()` is not
appropriate in that case.
2025-05-23 11:37:07 -07:00
Michael Bolin
d1de7bb383 feat: add codex_linux_sandbox_exe: Option<PathBuf> field to Config (#1089)
https://github.com/openai/codex/pull/1086 is a work-in-progress to make
Linux sandboxing work more like Seatbelt where, for the command we want
to sandbox, we build up the command and then hand it, and some sandbox
configuration flags, to another command to set up the sandbox and then
run it.

In the case of Seatbelt, macOS provides this helper binary and provides
it at `/usr/bin/sandbox-exec`. For Linux, we have to build our own and
pass it through (which is what #1086 does), so this makes the new
`codex_linux_sandbox_exe` available on `Config` so that it will later be
available in `exec.rs` when we need it in #1086.
2025-05-22 21:52:28 -07:00
Michael Bolin
63deb7c369 fix: for the @native release of the Node module, use the Rust version by default (#1084)
Added logic so that when we run `./scripts/stage_release.sh --native`
(for the `@native` version of the Node module), we drop a `use-native`
file next to `codex.js`. If present, `codex.js` will now run the Rust
CLI.

Ran `./scripts/stage_release.sh --native` and verified that when the
running `codex.js` in the staged folder:

```
$ /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.efvEvBlSN6/bin/codex.js --version
codex-cli 0.0.2505220956
```

it ran the expected Rust version of the CLI, as desired.

While here, I also updated the Rust version to one that I cut today,
which includes the new shell environment policy config option:
https://github.com/openai/codex/pull/1061. Note this may "break" some
users if the processes spawned by Codex need extra environment
variables. (We are still working to determine what the right defaults
should be for this option.)
2025-05-22 13:42:55 -07:00
Michael Bolin
cb379d7797 feat: introduce support for shell_environment_policy in config.toml (#1061)
To date, when handling `shell` and `local_shell` tool calls, we were
spawning new processes using the environment inherited from the Codex
process itself. This means that the sensitive `OPENAI_API_KEY` that
Codex needs to talk to OpenAI models was made available to everything
run by `shell` and `local_shell`. While there are cases where that might
be useful, it does not seem like a good default.

This PR introduces a `shell_environment_policy` config option to
control the `env` used with these tool calls. It is inevitably a bit
complex so that it is possible to override individual components of the
policy without having to restate the entire thing.

Details are in the updated `README.md` in this PR, but here is the
relevant bit that explains the individual fields of
`shell_environment_policy`:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `inherit` | string | `core` | Starting template for the environment:<br>`core` (`HOME`, `PATH`, `USER`, …), `all` (clone full parent env), or `none` (start empty). |
| `ignore_default_excludes` | boolean | `false` | When `false`, Codex removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run. |
| `exclude` | array&lt;string&gt; | `[]` | Case-insensitive glob patterns to drop after the default filter.<br>Examples: `"AWS_*"`, `"AZURE_*"`. |
| `set` | table&lt;string,string&gt; | `{}` | Explicit key/value overrides or additions – always win over inherited values. |
| `include_only` | array&lt;string&gt; | `[]` | If non-empty, a whitelist of patterns; only variables that match _one_ pattern survive the final step. (Generally used with `inherit = "all"`.) |


In particular, note that the default is `inherit = "core"`, so:

* if you have extra env variables that you want to inherit from the
parent process, use `inherit = "all"` and then specify `include_only`
* if you have extra env variables where you want to hardcode the values,
the default `inherit = "core"` will work fine, but then you need to
specify `set` (see the example below)
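
For example, an assumed `config.toml` shape combining the fields above:

```toml
[shell_environment_policy]
inherit = "all"
include_only = ["HOME", "PATH", "USER"]

# Hypothetical hardcoded value, always added to the child env.
[shell_environment_policy.set]
MY_EXTRA_VAR = "some-value"
```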

This configuration is not battle-tested, so we will probably still have
to play with it a bit. `core/src/exec_env.rs` has the critical business
logic as well as unit tests.

Though if nothing else, previous to this change:

```
$ cargo run --bin codex -- debug seatbelt -- printenv OPENAI_API_KEY
# ...prints OPENAI_API_KEY...
```

But after this change it does not print anything (as desired).

One final thing to call out about this PR is that the
`configure_command!` macro we use in `core/src/exec.rs` has to do some
complex logic with respect to how it builds up the `env` for the process
being spawned under Landlock/seccomp. Specifically, doing
`cmd.env_clear()` followed by `cmd.envs(&$env_map)` (which is arguably
the most intuitive way to do it) caused the Landlock unit tests to fail
because the processes spawned by the unit tests started failing in
unexpected ways! If we forgo `env_clear()` in favor of updating env vars
one at a time, the tests still pass. The comment in the code talks about
this a bit, and while I would like to investigate this more, I need to
move on for the moment, but I do plan to come back to it to fully
understand what is going on. For example, this suggests that we might
not be able to spawn a C program that calls `env_clear()`, which would
be...weird. We may still have to fiddle with our Landlock config if that
is the case.
2025-05-22 09:51:19 -07:00
Michael Bolin
ef7208359f feat: show Config overview at start of exec (#1073)
Now the `exec` output starts with something like:

```
--------
workdir:  /Users/mbolin/code/codex/codex-rs
model:  o3
provider:  openai
approval:  Never
sandbox:  SandboxPolicy { permissions: [DiskFullReadAccess, DiskWritePlatformUserTempFolder, DiskWritePlatformGlobalTempFolder, DiskWriteCwd, DiskWriteFolder { folder: "/Users/mbolin/.pyenv/shims" }] }
--------
```

which makes it easier to reason about when looking at logs.
2025-05-21 22:53:02 -07:00
Michael Bolin
5746561428 chore: move types out of config.rs into config_types.rs (#1054)
`config.rs` is already quite long without these definitions. Since they
have no real dependencies of their own, let's move them to their own
file so `config.rs` can focus on the business logic of loading a config.
2025-05-20 11:55:25 -07:00
Michael Bolin
d766e845b3 feat: experimental --output-last-message flag to exec subcommand (#1037)
This introduces an experimental `--output-last-message` flag that can be
used to identify a file where the final message from the agent will be
written. Two use cases:

- Ultimately, we will likely add a `--quiet` option to `exec`, but even
if the user does not want any output written to the terminal, they
probably want to know what the agent did. Writing the output to a file
makes it possible to get that information in a clean way.
- Relatedly, when using `exec` in CI, it is easier to review the
transcript written "normally" (i.e., not as JSON or something with
extra escapes), but getting programmatic access to the last message is
likely helpful, so writing the last message to a file gets the best of
both worlds.

I am calling this "experimental" because it is possible that we are
overfitting and will want a more general solution to this problem that
would justify removing this flag.
2025-05-19 16:08:18 -07:00
Michael Bolin
a4bfdf6779 chore: produce .tar.gz versions of artifacts in addition to .zst (#1036)
For sparse containers/environments that do not have `zstd`, provide
`.tar.gz` as alternative archive format.
2025-05-19 15:17:45 -07:00
Fouad Matin
44022db8d0 bump(version): 0.1.2505172129 (#1008)
## `0.1.2505172129`

### 🪲 Bug Fixes

- Add node version check (#1007)
- Persist token after refresh (#1006)
2025-05-17 21:35:54 -07:00
Fouad Matin
a86270f581 fix: add node version check (#1007) 2025-05-17 21:27:41 -07:00
Fouad Matin
835eb77a7d fix: persist token after refresh (#1006)
After a token refresh/exchange, persist the new refresh and id token
2025-05-17 21:27:02 -07:00
Fouad Matin
dbc0ad348e bump(version): 0.1.2505171619 (#1001)
## `0.1.2505171619`

- `codex --login` + `codex --free` (#998)
2025-05-17 16:25:21 -07:00
Fouad Matin
9b4c2984d4 add: codex --login + codex --free (#998)
## Summary
- add `--login` and `--free` flags to cli help
- handle `--login` and `--free` logic in cli
- factor out redeem flow into `maybeRedeemCredits`
- call new helper from login callback
2025-05-17 16:13:12 -07:00
Michael Bolin
f3bde21759 chore: update install_native_deps.sh to use rust-v0.0.2505171051 (#995)
Use a more recent built of the Rust binaries to include with the Node
module.
2025-05-17 11:25:24 -07:00
Michael Bolin
1c6a3f1097 fix: artifacts from previous frames were bleeding through in TUI (#989)
Prior to this PR, I would frequently see glyphs from previous frames
"bleed" through like this:


![image](https://github.com/user-attachments/assets/8784b3d7-f691-4df6-8666-34e2f134ee85)

I think this was due to two issues (now addressed in this PR):

* We were not making use of `ratatui::widgets::Clear` to clear out the buffer before drawing into it.
* To calculate the `width` used with `wrapped_line_count_for_cell()`, we were not accounting for the scrollbar.
  * Now we calculate `effective_width` using `inner.width.saturating_sub(1)` where the `1` is for the scrollbar.
  * We compute `text_area` using `effective_width` and pass the `text_area` to `paragraph.render()`.
  * We eliminate the conditional `needs_scrollbar` check and always call `render(Scrollbar)`.

I suspect this bug was introduced in
https://github.com/openai/codex/pull/937, though I did not try to
verify: I'm just happy that it appears to be fixed!
2025-05-17 10:51:11 -07:00
Michael Bolin
f8b6b1db81 fix: ensure the first user message always displays after the session info (#988)
Previously, if the first user message was sent with the command
invocation, e.g.:

```
$ cargo run --bin codex 'hello'
```

Then the user message was added as the first entry in the history and
then `is_first_event` would be `false` here:


031df77dfb/codex-rs/tui/src/conversation_history_widget.rs (L178-L179)

which would prevent the "welcome" message with things like the model
version from displaying.

The fix in this PR is twofold:

* Reorganize the logic so the `ChatWidget` constructor stores
`initial_user_message` rather than sending it right away. Now inside
`handle_codex_event()`, it waits for the `SessionConfigured` event and
sends the `initial_user_message`, if it exists.
* In `conversation_history_widget.rs`, `add_session_info()` checks to
see whether a `WelcomeMessage` exists in the history when determining
the value of `has_welcome_message`. By construction, we expect that
`WelcomeMessage` is always the first message (in which case the existing
`let is_first_event = self.entries.is_empty();` logic would be sound),
but we decide to be extra defensive in case an `EventMsg::Error` is
processed before `EventMsg::SessionConfigured`.
2025-05-17 09:00:23 -07:00
Christoph K
031df77dfb Remove unnecessary console log from test (#970)
When running `npm test` on `codex-cli`, the test
`agent-cancel-prev-response.test.ts` logs a significant body of text to
console for no obvious reason.

This is not helpful, as it makes test logs messy and far longer.

This change deletes the `console.log(...)` that produces the behavior.
2025-05-16 19:48:11 -07:00
Michael Bolin
f9143d0361 fix: do not let Tab keypress flow through to composer when used to toggle focus (#977)
One line fix from Codex!
2025-05-16 19:27:49 -07:00
Fouad Matin
2880925a44 bump(version): 0.1.2505161800 (#978)
## `0.1.2505161800`

- Sign in with chatgpt credits (#974)
- Add support for OpenAI tool type, local_shell (#961)
2025-05-16 18:18:15 -07:00
Fouad Matin
3e19e8fd59 add: sign in with chatgpt credits (#974) 2025-05-16 17:55:08 -07:00
Trevor Creech
c7312c9d52 Fix CLA link in workflow (#964)
## Summary
- fix the CLA link posted by the bot
- docs suggest using an absolute URL:
https://github.com/marketplace/actions/cla-assistant-lite
2025-05-16 17:11:57 -07:00
Michael Bolin
1dc14cefa1 fix: make codex-mini-latest the default model in the Rust TUI (#972)
It's time to make `codex-mini-latest` the new default, as this should be
an "evergreen" model pointer.

* Equivalent change in TypeScript
https://github.com/openai/codex/pull/951
* See some notes about using `codex-mini-latest` with MCP in
https://github.com/openai/codex/pull/961
2025-05-16 17:08:18 -07:00
Michael Bolin
7ca84087e6 feat: make it possible to toggle mouse mode in the Rust TUI (#971)
I did a bit of research to understand why I could not use my mouse to
drag to select text to copy to the clipboard in iTerm.

Apparently https://github.com/openai/codex/pull/641, which enabled
mousewheel scrolling, broke this functionality. It seems that, unless we
put in a bit of effort, we can have drag-to-select or scrolling, but not
both. Though if you know the trick of holding down `Option` while
dragging with the mouse in iTerm, you can probably get by with this. (I
did not know about this option prior to researching this issue.)

Nevertheless, users may still prefer to disable mouse capture
altogether, so this PR introduces:

* the ability to set `tui.disable_mouse_capture = true` in `config.toml`
to disable mouse capture
* a new command, `/toggle-mouse-mode` to toggle mouse capture
2025-05-16 16:16:50 -07:00
Michael Bolin
67ac8ef605 fix: use text other than 'TODO' as test example (#969)
I casually `rg TODO` to look for TODOs, so the use of TODO in a sample
string in test output was throwing things off.
2025-05-16 14:51:03 -07:00
Michael Bolin
f48dd99f22 feat: add support for OpenAI tool type, local_shell (#961)
The new `codex-mini-latest` model expects a new tool with `{"type":
"local_shell"}`. Its contract is similar to the existing `function` tool
with `"name": "shell"`, so this takes the `local_shell` tool call into
`ExecParams` and sends it through the existing
`handle_container_exec_with_params()` code path.

This also adds the following logic when adding the default set of tools
to a request:

```rust
let default_tools = if self.model.starts_with("codex") {
    &DEFAULT_CODEX_MODEL_TOOLS
} else {
    &DEFAULT_TOOLS
};
```

That is, if the model name starts with `"codex"`, we add `{"type":
"local_shell"}` to the list of tools; otherwise, we add the
aforementioned `shell` tool.

To test this, I ran the TUI with `-m codex-mini-latest` and verified
that it used the `local_shell` tool. Though I also had some entries in
`[mcp_servers]` in my personal `config.toml`. The `codex-mini-latest`
model seemed eager to try the tools from the MCP servers first, so I
have personally commented them out for now, so keep an eye out if you're
testing `codex-mini-latest`!

Perhaps we should include more details with `{"type": "local_shell"}` or
update the following:


fd0b1b0208/codex-rs/core/prompt.md

For reference, the corresponding change in the TypeScript CLI is
https://github.com/openai/codex/pull/951.
2025-05-16 14:38:08 -07:00
Michael Bolin
dfd54e1433 chore: refactor handle_function_call() into smaller functions (#965)
Overall, `codex.rs` is still far too large, but at least there's less
indenting now that things have been moved into smaller functions.

This will also make it easier to introduce the `local_shell` tool in
https://github.com/openai/codex/pull/961.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/965).
* #961
* __->__ #965
2025-05-16 14:17:10 -07:00
Michael Bolin
9739820366 fix: remove file named ">" in the codex-cli folder (#968)
This file appears to have been accidentally added in
https://github.com/openai/codex/pull/912. Its presence makes the repo
impossible to clone on Windows.
2025-05-16 14:08:23 -07:00
Fouad Matin
fd0b1b0208 bump(version): 0.1.2505161243 (#967)
## `0.1.2505161243`

- Sign in with chatgpt (#963)
- Session history viewer (#912)
- Apply patch issue when using different cwd (#942)
- Diff command for filenames with special characters (#954)
2025-05-16 12:46:52 -07:00
Fouad Matin
c6e08ad8c1 add: sign in with chatgpt (#963)
Sign in with ChatGPT to get an API key (flow to grant API credits for Plus/Pro coming later today!)
2025-05-16 12:28:54 -07:00
Fouad Matin
cabf83f2ed add: session history viewer (#912)
- A new “/sessions” command is available for browsing previous sessions,
as shown in the updated slash command list

- The CLI now documents and parses a new “--history” flag to browse past
sessions from the command line

- A dedicated `SessionsOverlay` component loads session metadata and
allows toggling between viewing and resuming sessions

- When the sessions overlay is opened during a chat, selecting a session
can either show the saved rollout or resume it
2025-05-16 12:28:22 -07:00
Michael Bolin
1e39189393 feat: add support for file_opener option in Rust, similar to #911 (#957)
This ports the enhancement introduced in
https://github.com/openai/codex/pull/911 (and the fixes in
https://github.com/openai/codex/pull/919) for the TypeScript CLI to the
Rust one.
2025-05-16 11:33:08 -07:00
Michael Bolin
3d9f4fcd8a fix: introduce ExtractHeredocError that implements PartialEq (#958) 2025-05-16 09:42:27 -07:00
Sebastian Lund
84e01f4b62 fix: apply patch issue when using different cwd (#942)
If you run a codex session whose working directory differs from the one
where you launched the codex binary, it won't be able to apply patches
correctly, even if the sandbox policy allows it. This manifests in weird
behaviours, such as:

* Reading the same filename in the binary working directory, and
overwriting it in the session working directory. e.g. if you have a
`readme` in both folders it will overwrite the readme in the session
working directory with the readme in the binary working directory
*applied with the suggested patch*.
* The LLM ends up in weird loops trying to verify and debug why
`apply_patch` won't work, which can result in it applying patches by
manually writing Python or JavaScript if it figures out that either is
supported by the system.

I added a test-case to ensure that the patch contents are based on the
cwd.

## Issue: mixing relative & absolute paths in apply_patch

1. The apply_patch tool uses relative paths based on the session working
directory.
2. `unified_diff_from_chunks` eventually ends up [reading the source
file](https://github.com/reflectionai/codex/blob/main/codex-rs/apply-patch/src/lib.rs#L410)
to figure out what the diff is, by using the relative path.
3. The changes are targeted using an absolute path derived from the
current working directory.

The end result, when the session working directory differs from the
binary working directory: we get the diff for a file relative to the
binary working directory, and apply it to a file in the session working
directory.
2025-05-16 09:12:16 -07:00
hanson-openai
7edfbae062 fix: diff command for filenames with special characters (#954)
## Summary
- fix quoting issues in `/diff` to correctly handle files with special
characters
- add regression test for `getGitDiff` when filenames contain `$`
- relax timeout in raw-exec-process-group test

Fixes https://github.com/openai/codex/issues/943

## Testing
- `pnpm test`
2025-05-16 09:10:44 -07:00
Fouad Matin
316289d01d bump(version): 0.1.2505160811 codex-mini-latest (#953)
## `0.1.2505160811`

- `codex-mini-latest` (#951)
2025-05-16 08:18:20 -07:00
Michael Bolin
30cbfdfa87 chore: update exec crate to use std::time instead of chrono (#952)
When I originally wrote `elapsed.rs`, I realized we were using both
`std::time` and `chrono` with no real benefit to having both. We should
try to keep the `exec` subcommand trim (as it is also buildable as a
standalone executable), so this helps tighten things up.
2025-05-16 08:14:50 -07:00
Fouad Matin
070499f534 add: codex-mini-latest (#951)
💽

---------

Co-authored-by: Trevor Creech <tcreech@openai.com>
2025-05-16 08:04:00 -07:00
Michael Bolin
ce2ecbe72f feat: record messages from user in ~/.codex/history.jsonl (#939)
This is a large change to support a "history" feature like you would
expect in a shell like Bash.

History events are recorded in `$CODEX_HOME/history.jsonl`. Because it
is a JSONL file, it is straightforward to append new entries (as opposed
to the TypeScript CLI, which uses `$CODEX_HOME/history.json`; to remain
valid JSON, each new entry entails rewriting the entire file). Because
it is possible for there to be multiple instances of Codex CLI writing
to `history.jsonl` at once, we use advisory file locking when working
with `history.jsonl` in `codex-rs/core/src/message_history.rs`.
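
A minimal sketch of the append path (assumes the `fs2` crate for
advisory locking; the real code is in `message_history.rs`):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

use fs2::FileExt;

fn append_history_line(path: &Path, json_line: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .mode(0o600) // other users on the system cannot read history
        .open(path)?;
    file.lock_exclusive()?; // advisory lock across concurrent Codex CLIs
    writeln!(file, "{json_line}")?;
    file.unlock()
}
```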

Because we believe history is a sufficiently useful feature, we enable
it by default. Though to provide some safety, we set the file
permissions of `history.jsonl` to be `0o600` so that other users on the
system cannot read the user's history. We do not yet support a default
list of `SENSITIVE_PATTERNS` as the TypeScript CLI does:


3fdf9df133/codex-cli/src/utils/storage/command-history.ts (L10-L17)

We are going to take a more conservative approach to this list in the
Rust CLI. For example, while `/\b[A-Za-z0-9-_]{20,}\b/` might exclude
sensitive information like API tokens, it would also exclude valuable
information such as references to Git commits.

As noted in the updated documentation, users can opt-out of history by
adding the following to `config.toml`:

```toml
[history]
persistence = "none" 
```

Because `history.jsonl` could, in theory, be quite large, we take a[n
arguably overly pedantic] approach in reading history entries into
memory. Specifically, we start by telling the client the current number
of entries in the history file (`history_entry_count`) as well as the
inode (`history_log_id`) of `history.jsonl` (see the new fields on
`SessionConfiguredEvent`).

The client is responsible for keeping new entries in memory to create a
"local history," but if the user hits up enough times to go "past" the
end of local history, then the client should use the new
`GetHistoryEntryRequest` in the protocol to fetch older entries.
Specifically, it should pass the `history_log_id` it was given
originally and work backwards from `history_entry_count`. (It should
really fetch history in batches rather than one-at-a-time, but that is
something we can improve upon in subsequent PRs.)

The motivation behind this crazy scheme is that it is designed to defend
against:

* The `history.jsonl` being truncated during the session such that the
index into the history is no longer consistent with what had been read
up to that point. We do not yet have logic to enforce a `max_bytes` for
`history.jsonl`, but once we do, we will aspire to implement it in a way
that should result in a new inode for the file on most systems.
* New items from concurrent Codex CLI sessions appending to the history.
Because, in the absence of truncation, `history.jsonl` is an append-only
log, so long as the client reads backwards from `history_entry_count`,
it should always get a consistent view of history. (That said, it will
not be able to read _new_ commands from concurrent sessions, but perhaps
we will introduce a `/` command to reload latest history or something
down the road.)

Admittedly, my testing of this feature thus far has been fairly light. I
expect we will find bugs and introduce enhancements/fixes going forward.
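
For illustration, a minimal sketch of the locked-append step (the `fs2`
crate and the function shape are assumptions of this sketch; the real
logic lives in `codex-rs/core/src/message_history.rs` and may differ):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

use fs2::FileExt; // assumed dependency: advisory lock_exclusive()/unlock()

/// Append one JSON line to history.jsonl under an exclusive advisory lock
/// so that concurrent Codex CLI sessions never interleave partial entries.
fn append_history_entry(path: &Path, json_line: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    file.lock_exclusive()?; // blocks until any other writer releases its lock
    let result = writeln!(file, "{json_line}");
    file.unlock()?; // release even if the write failed
    result
}
```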
2025-05-15 16:26:23 -07:00
Michael Bolin
3fdf9df133 chore: introduce AppEventSender to help fix clippy warnings and update to Rust 1.87 (#948)
Moving to Rust 1.87 introduced a clippy warning that
`SendError<AppEvent>` was too large.

In practice, the only thing we ever did when we got this error was log
it (if the mpsc channel is closed, then the app is likely shutting down
or something, so there's not much to do...), so this finally motivated
me to introduce `AppEventSender`, which wraps
`std::sync::mpsc::Sender<AppEvent>` with a `send()` method that invokes
`send()` on the underlying `Sender` and logs an `Err` if it gets one.

This greatly simplifies the code, as many functions that previously
returned `Result<(), SendError<AppEvent>>` now return `()`, so we don't
have to propagate an `Err` all over the place that we don't really
handle, anyway.

This also makes it so we can upgrade to Rust 1.87 in CI.
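
A minimal sketch of that wrapper (the `AppEvent` stub and the `tracing`
call are illustrative):

```rust
use std::sync::mpsc::Sender;

// Stand-in for the real event enum in the tui crate.
pub enum AppEvent {
    Redraw,
}

#[derive(Clone)]
pub struct AppEventSender {
    tx: Sender<AppEvent>,
}

impl AppEventSender {
    pub fn new(tx: Sender<AppEvent>) -> Self {
        Self { tx }
    }

    /// Send the event, logging rather than propagating a failure: a closed
    /// channel almost always means the app is already shutting down.
    pub fn send(&self, event: AppEvent) {
        if let Err(e) = self.tx.send(event) {
            tracing::error!("failed to send AppEvent: {e}");
        }
    }
}
```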
2025-05-15 14:50:30 -07:00
Michael Bolin
ec5e82b77c chore: pin Rust version to 1.86 and use io::Error::other to prepare for 1.87 (#947)
Previously, our GitHub actions specified the Rust toolchain as
`dtolnay/rust-toolchain@stable`, which meant the version could change
out from under us. In this case, the move from 1.86 to 1.87 introduced
new clippy warnings, causing build failures.

Because it will take a little time to fix all the new clippy warnings,
this PR pins things to 1.86 for now to unbreak the build.

It also replaces `io::Error::new(io::ErrorKind::Other)` with
`io::Error::other()` in preparation for 1.87.
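
For reference, the mechanical change looks like this (both forms are in
std; `io::Error::other` has been stable since Rust 1.74):

```rust
use std::io;

fn before(msg: String) -> io::Error {
    io::Error::new(io::ErrorKind::Other, msg)
}

fn after(msg: String) -> io::Error {
    io::Error::other(msg) // same ErrorKind::Other, less boilerplate
}
```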
2025-05-15 14:07:16 -07:00
Michael Bolin
5fc9fc3e3e chore: expose codex_home via Config (#941) 2025-05-15 00:30:13 -07:00
Michael Bolin
0b9ef93da5 fix: properly wrap lines in the Rust TUI (#937)
As discussed on
699ec5a87f (commitcomment-156776835),
to properly support scrolling long content in Ratatui for a sequence of
cells, we need to:

* take the `Vec<Line>` for each cell
* using the wrapping logic we want to use at render time, compute the
_effective line count_ using `Paragraph::line_count()` (see
`wrapped_line_count_for_cell()` in this PR)
* sum up the effective line count to compute the height of the area
being scrolled
* given a `scroll_position: usize`, index into the list of "effective
lines" and accumulate the appropriate `Vec<Line>` for the cells that
should be displayed
* take that `Vec<Line>` to create a `Paragraph` and use the same
line-wrapping policy that was used in `wrapped_line_count_for_cell()`
* display the resulting `Paragraph` and use the accounting to display a
scrollbar with the appropriate thumb size and offset without having to
render the `Vec<Line>` for the full history

With this change, lines wrap as I expect and everything appears to
redraw correctly as I resize my terminal!
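
A sketch of the first two steps, assuming ratatui's
`Paragraph::line_count()` (which, at the time of writing, sits behind the
`unstable-rendered-line-info` feature):

```rust
use ratatui::text::Line;
use ratatui::widgets::{Paragraph, Wrap};

/// Effective (post-wrap) height of one cell's lines at the given width.
/// The same `Wrap` policy must be used here and at render time, or the
/// scroll math will drift.
fn wrapped_line_count_for_cell(lines: Vec<Line<'static>>, width: u16) -> usize {
    Paragraph::new(lines)
        .wrap(Wrap { trim: false })
        .line_count(width)
}
```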
2025-05-14 16:51:41 -07:00
Michael Bolin
34aa1991f1 chore: handle all cases for EventMsg (#936)
For now, this removes the `#[non_exhaustive]` directive on `EventMsg` so
that we are forced to handle all `EventMsg` by default. (We may revisit
this if/when we publish `core/` as a `lib` crate.) For now, it is
helpful to have this as a forcing function because we have effectively
two UIs (`tui` and `exec`) and usually when we add a new variant to
`EventMsg`, we want to be sure that we update both.
2025-05-14 13:36:43 -07:00
Michael Bolin
497c5396c0 feat: add mcp subcommand to CLI to run Codex as an MCP server (#934)
Previously, running Codex as an MCP server required a standalone binary
in our Cargo workspace, but this PR makes it available as a subcommand
(`mcp`) of the main CLI.

Ran this with:

```
RUST_LOG=debug npx @modelcontextprotocol/inspector cargo run --bin codex -- mcp
```

and verified it worked as expected in the inspector at
`http://127.0.0.1:6274/`.
2025-05-14 13:15:41 -07:00
Michael Bolin
a12e4b0b31 feat: add support for commands in the Rust TUI (#935)
Introduces support for slash commands like in the TypeScript CLI. We do
not support the full set of commands yet, but the core abstraction is
there now.

In particular, we have a `SlashCommand` enum and due to thoughtful use
of the [strum](https://crates.io/crates/strum) crate, it requires
minimal boilerplate to add a new command to the list.

The key new piece of UI is `CommandPopup`, though the keyboard events
are still handled by `ChatComposer`. The behavior is roughly as follows:

* if the first character in the composer is `/`, the command popup is
displayed (if you really want to send a message to Codex that starts
with a `/`, simply put a space before the `/`)
* while the popup is displayed, up/down can be used to change the
selection of the popup
* if there is a selection, hitting tab completes the command, but does
not send it
* if there is a selection, hitting enter sends the command
* if the prefix of the composer matches a command, the command will be
visible in the popup so the user can see the description (commands could
take arguments, so additional text may appear after the command name
itself)


https://github.com/user-attachments/assets/39c3e6ee-eeb7-4ef7-a911-466d8184975f

Incidentally, Codex wrote almost all the code for this PR!
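
A sketch of the shape this enables (variant names and descriptions here
are illustrative):

```rust
use strum_macros::{AsRefStr, EnumIter, EnumString};

/// Adding a new slash command is one new variant plus a description;
/// strum derives the parsing and iteration boilerplate.
#[derive(Clone, Copy, Debug, AsRefStr, EnumIter, EnumString)]
#[strum(serialize_all = "kebab-case")]
pub enum SlashCommand {
    New,
    Quit,
}

impl SlashCommand {
    pub fn description(self) -> &'static str {
        match self {
            SlashCommand::New => "start a new chat",
            SlashCommand::Quit => "exit Codex",
        }
    }
}
```

`EnumString` gives `"new".parse::<SlashCommand>()` for free, and
`EnumIter` (via `strum::IntoEnumIterator`) drives the popup's list.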
2025-05-14 12:55:49 -07:00
Michael Bolin
0402aef126 chore: move each view used in BottomPane into its own file (#928)
`BottomPane` was getting a bit unwieldy because it maintained a
`PaneState` enum with three variants and many of its methods had `match`
statements to handle each variant. To replace the enum, this PR:

* Introduces a `trait BottomPaneView` that has two implementations:
`StatusIndicatorView` and `ApprovalModalView`.
* Migrates `PaneState::TextInput` into its own struct, `ChatComposer`,
that does **not** implement `BottomPaneView`.
* Updates `BottomPane` so it has `composer: ChatComposer` and
`active_view: Option<Box<dyn BottomPaneView<'a> + 'a>>`. The idea is
that `active_view` takes priority and is displayed when it is `Some`;
otherwise, `ChatComposer` is displayed.
* While methods of `BottomPane` often have to check whether
`active_view` is present to decide which component to delegate to, the
code is more straightforward than before and introducing new
implementations of `BottomPaneView` should be less painful.

Because we want to retain the `TextArea` owned by `ChatComposer` even
when another view is displayed, to keep the ownership logic simple, it
seemed best to keep `ChatComposer` distinct from `BottomPaneView`.
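
A simplified sketch of that delegation (the method set and stubs are
illustrative; the real trait has more methods):

```rust
use crossterm::event::KeyEvent;

// Stand-in for the real composer; it owns the TextArea.
struct ChatComposer;

impl ChatComposer {
    fn handle_key_event(&mut self, _key: KeyEvent) {}
}

trait BottomPaneView<'a> {
    fn handle_key_event(&mut self, key: KeyEvent);
}

struct BottomPane<'a> {
    composer: ChatComposer,
    active_view: Option<Box<dyn BottomPaneView<'a> + 'a>>,
}

impl<'a> BottomPane<'a> {
    // The active view (status indicator or approval modal) takes priority;
    // otherwise events go to the composer.
    fn handle_key_event(&mut self, key: KeyEvent) {
        match self.active_view.as_mut() {
            Some(view) => view.handle_key_event(key),
            None => self.composer.handle_key_event(key),
        }
    }
}
```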
2025-05-14 10:13:29 -07:00
Michael Bolin
399e819c9b fix: increase timeout for test_dev_null_write (#933)
After updating this test in https://github.com/openai/codex/pull/923, I
have been getting some timeouts with this test in CI, so increasing the
timeout to match that of `test_writable_root`:


327cf41f0f/codex-rs/core/src/landlock.rs (L211-L213)
2025-05-14 10:06:14 -07:00
Yaroslav Halchenko
327cf41f0f Add codespell support (config, workflow to detect/not fix) and make it fix some typos (#903)
More about codespell: https://github.com/codespell-project/codespell .

I have personally introduced it to dozens if not hundreds of projects
already, and so far the feedback has been only positive.

The CI workflow has 'permissions' set only to 'read', so it should also
be safe.

Let me know if you would rather just take the typo fixes and drop the CI
workflow.

---------

Signed-off-by: Yaroslav O. Halchenko <debian@onerussian.com>
2025-05-14 09:39:49 -07:00
Fouad Matin
9e7cd2b25a bump(version): 0.1.2505140839 (#932)
## `0.1.2505140839`

### 🪲 Bug Fixes

- Gpt-4.1 apply_patch handling (#930)
- Add support for fileOpener in config.json (#911)
- Patch in #366 and #367 for marked-terminal (#916)
- Remember to set lastIndex = 0 on shared RegExp (#918)
- Always load version from package.json at runtime (#909)
- Tweak the label for citations for better rendering (#919)
- Tighten up some logic around session timestamps and ids (#922)
- Change EventMsg enum so every variant takes a single struct (#925)
- Reasoning default to medium, show workdir when supplied (#931)
- Test_dev_null_write() was not using echo as intended (#923)
2025-05-14 08:44:52 -07:00
Fouad Matin
73259351ff fix: reasoning default to medium, show workdir when supplied (#931) 2025-05-14 08:38:41 -07:00
Fouad Matin
77347d268d fix: gpt-4.1 apply_patch handling (#930) 2025-05-14 08:34:09 -07:00
Fouad Matin
678f0dbfec add: dynamic instructions (#927) 2025-05-14 01:27:46 -07:00
Michael Bolin
1bf00a3a95 feat: Ctrl+J for newline in Rust TUI, default to one line of height (#926)
While the `TextArea` used in the Rust TUI is "multiline," it is not like
an HTML `<textarea>` in that it does not wrap, so there was not much
benefit to setting `MIN_TEXTAREA_ROWS` to `3`; this PR changes it to
`1`. There are still three ways to "increase" the height via actual
linebreaks:

* paste in multiline content (this worked before this PR)
* pressing `Ctrl+J` will insert a newline
* if you have your terminal emulator set such that it is possible to
press something that `crossterm` interprets as "Enter plus some
modifier," then now that will also work

Now things look a bit more compact on startup:

<img width="745" alt="image"
src="https://github.com/user-attachments/assets/86e2857f-f31c-46f5-a80b-1ab2120b266e"
/>
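
The key-handling rule amounts to something like this (a sketch; the
crossterm types are real, the helper is illustrative):

```rust
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

/// True when the key press should insert a linebreak in the composer
/// rather than submit: Ctrl+J, or Enter with any modifier attached.
fn inserts_newline(key: KeyEvent) -> bool {
    match (key.code, key.modifiers) {
        (KeyCode::Char('j'), m) if m.contains(KeyModifiers::CONTROL) => true,
        (KeyCode::Enter, m) if !m.is_empty() => true,
        _ => false,
    }
}
```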
2025-05-13 21:42:14 -07:00
Michael Bolin
5bf9445351 fix: test_dev_null_write() was not using echo as intended (#923)
I believe this test meant to verify that echoing content to `/dev/null`
succeeds, but it was instead testing the equivalent of `echo
'blah > /dev/null'`.
2025-05-13 21:40:26 -07:00
Michael Bolin
a5f3a34827 fix: change EventMsg enum so every variant takes a single struct (#925)
https://github.com/openai/codex/pull/922 did this for the
`SessionConfigured` enum variant, and I think it is generally helpful to
be able to work with each enum variant's payload as its own type, so this
converts the remaining variants and updates all of the callsites.

Added a simple unit test to verify that the JSON-serialized version of
`Event` does not have any unexpected nesting.
2025-05-13 20:44:42 -07:00
Michael Bolin
e6c206d19d fix: tighten up some logic around session timestamps and ids (#922)
* update `SessionConfigured` event to include the UUID for the session
* show the UUID in the Rust TUI
* use local timestamps in log files instead of UTC
* include timestamps in log file names for easier discovery
2025-05-13 19:22:16 -07:00
Michael Bolin
3c03c25e56 feat: introduce --profile for Rust CLI (#921)
This introduces a much-needed "profile" concept where users can specify
a collection of options under one name and then pass that via
`--profile` to the CLI.

This PR introduces the `ConfigProfile` struct and makes it a field of
`ConfigToml`. It further updates
`Config::load_from_base_config_with_overrides()` to respect
`ConfigProfile`, overriding default values where appropriate. A detailed
unit test is added at the end of `config.rs` to verify this behavior.

Details on how to use this feature have also been added to
`codex-rs/README.md`.
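
For example, a profile can collect overrides under a name (the profile
name and keys below are illustrative; the README documents the
authoritative set):

```toml
# ~/.codex/config.toml
model = "o4-mini"              # default when no profile is given

[profiles.full-auto-o3]
model = "o3"
approval_policy = "never"
```

This profile would be selected with `codex --profile full-auto-o3`.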
2025-05-13 16:52:52 -07:00
Adeeb
ae809f3721 restructure flake for codex-rs (#888)
Since the repo currently has two different implementations of Codex, the
flake was updated to work with both the TypeScript and Rust
implementations.
2025-05-13 13:08:42 -07:00
Michael Bolin
a786c1d188 feat: auto-approve nl and support piping to sed (#920)
Auto-approved:

```
["nl", "-ba", "README.md"]
["sed", "-n", "1,200p", "filename.txt"]
["bash", "-lc", "sed -n '1,200p' filename.txt"]
["bash", "-lc", "nl -ba README.md | sed -n '1,200p'"]
```

Not auto approved:

```
["sed", "-n", "'1,200p'", "filename.txt"]
["sed", "-n", "1,200p", "file1.txt", "file2.txt"]
```
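
Conceptually, the auto-approval check is a pattern match over the parsed
argv; a simplified sketch for the `sed` case above (illustrative only;
the real logic also understands `bash -lc` pipelines):

```rust
/// Decide whether a plain `sed -n <range> <file>` invocation is safe to
/// auto-approve: exactly one file, and a bare `N,Mp` range (no quotes).
fn is_safe_sed(argv: &[&str]) -> bool {
    match argv {
        ["sed", "-n", range, _file] => {
            range.ends_with('p')
                && range[..range.len() - 1]
                    .chars()
                    .all(|c| c.is_ascii_digit() || c == ',')
        }
        _ => false,
    }
}
```

Note how `'1,200p'` (with literal quotes) fails the digit check, and a
two-file invocation fails the four-element pattern.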
2025-05-13 13:06:35 -07:00
Michael Bolin
0ac7e8d55b fix: tweak the label for citations for better rendering (#919)
Adds a space so that sequential citations have some more breathing room.

As I had to update the tests for this change, I also introduced a
`toDiffableString()` helper to make the test easier to update as we make
formatting changes to the output.
2025-05-13 12:46:21 -07:00
Michael Bolin
1ff3e14d5a fix: patch in #366 and #367 for marked-terminal (#916)
This PR uses [`pnpm
patch`](https://www.petermekhaeil.com/til/pnpm-patch/) to pull in the
following proposed fixes for `marked-terminal`:

* https://github.com/mikaelbr/marked-terminal/pull/366
* https://github.com/mikaelbr/marked-terminal/pull/367

This adds a substantial test to `codex-cli/tests/markdown.test.tsx` to
verify the new behavior.

Note that one of the tests shows two citations being split across a line
even though the rendered version would fit comfortably on one line.
Changing this likely requires a subtle fix to `marked-terminal` to
account for "rendered length" when determining line breaks.
2025-05-13 12:29:17 -07:00
Michael Bolin
dd354e2134 fix: remember to set lastIndex = 0 on shared RegExp (#918)
I had not observed an issue in the wild because of this yet, but it
feels like it was only a matter of time...
2025-05-13 12:01:06 -07:00
Michael Bolin
557f608f25 fix: add support for fileOpener in config.json (#911)
This PR introduces the following type:

```typescript
export type FileOpenerScheme = "vscode" | "cursor" | "windsurf";
```

and uses it as the new type for a `fileOpener` option in `config.json`.
If set, this will be used to linkify file annotations in the output
using the URI-based file opener supported in VS Code-based IDEs.


Updated `codex-cli/tests/markdown.test.tsx` to verify the new behavior.
Note it required mocking `supports-hyperlinks` and temporarily modifying
`chalk.level` to yield the desired output.
2025-05-13 09:45:46 -07:00
Michael Bolin
05bb5d7d46 fix: always load version from package.json at runtime (#909)
Note the high-level motivation behind this change is to avoid the need
to make temporary changes in the source tree in order to cut a release
build since that runs the risk of leaving things in an inconsistent
state in the event of a failure. The existing code:

```
import pkg from "../../package.json" assert { type: "json" };
```

did not work as intended because, as written, ESBuild would bake the
contents of the local `package.json` into the release build at build
time, whereas we want it to read the contents at runtime so we can use
the `package.json` in the tree to build the code and later inject a
modified version into the release package with a timestamped build
version.

Changes:

* move `CLI_VERSION` out of `src/utils/session.ts` and into
`src/version.ts` so `../package.json` is a correct relative path both
from `src/version.ts` in the source tree and also in the final
`dist/cli.js` build output
* change `assert` to `with` in `import pkg` as apparently `with` became
standard in Node 22
* mark `"../package.json"` as external in `build.mjs` so the version is
not baked into the `.js` at build time

After using `pnpm stage-release` to build a release version, if I use
Node 22.0 to run Codex, I see the following printed to stderr at
startup:

```
(node:71308) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
```

Note it is a warning and does not prevent Codex from running.

In Node 22.12, the warning goes away, but the warning still appears in
Node 22.11. For Node 22, 22.15.0 is the current LTS version, so LTS
users will not see this.

Also, something about moving the definition of `CLI_VERSION` caused a
problem with the mocks in `check-updates.test.ts`. I asked Codex to fix
it, and it came up with the change to the test configs. I don't know
enough about vitest to understand what it did, but the tests seem
healthy again, so I'm going with it.
2025-05-12 21:27:15 -07:00
Michael Bolin
61b881d4e5 fix: agent instructions were not being included when ~/.codex/instructions.md was empty (#908)
I had seen issues where `codex-rs` would not always write files without
me pressuring it to do so, and between that and the report of
https://github.com/openai/codex/issues/900, I decided to look into this
further. I found two serious issues with agent instructions:

(1) We were only sending agent instructions on the first turn, but
looking at the TypeScript code, we should be sending them on every turn.

(2) There was a serious issue where the agent instructions were
frequently lost:

* The TypeScript CLI appears to keep writing `~/.codex/instructions.md`:
55142e3e6c/codex-cli/src/utils/config.ts (L586)
* If `instructions.md` is present, the Rust CLI uses the contents of it
INSTEAD OF the default prompt, even if `instructions.md` is empty:
55142e3e6c/codex-rs/core/src/config.rs (L202-L203)

The combination of these two things means that I have been using
`codex-rs` without these key instructions:
https://github.com/openai/codex/blob/main/codex-rs/core/prompt.md

Looking at the TypeScript code, it appears we should be concatenating
these three items every time (if they exist):

* `prompt.md`
* `~/.codex/instructions.md`
* nearest `AGENTS.md`

This PR fixes things so that:

* `Config.instructions` is `None` if `instructions.md` is empty
* `Payload.instructions` is now `&'a str` instead of `Option<&'a
String>` because we should always have _something_ to send
* `Prompt` now has a `get_full_instructions()` helper that returns a
`Cow<str>` that will always include the agent instructions first.
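
A sketch of that helper (the constant and field names here are
illustrative):

```rust
use std::borrow::Cow;

// Stand-in for the real default instructions compiled into the binary.
const DEFAULT_AGENT_INSTRUCTIONS: &str = "...contents of prompt.md...";

struct Prompt {
    // `None` when ~/.codex/instructions.md is absent *or* empty.
    user_instructions: Option<String>,
}

impl Prompt {
    /// Always lead with the built-in agent instructions so they can no
    /// longer be silently dropped; borrow when there is nothing to append.
    fn get_full_instructions(&self) -> Cow<'_, str> {
        match &self.user_instructions {
            Some(extra) => Cow::Owned(format!("{DEFAULT_AGENT_INSTRUCTIONS}\n{extra}")),
            None => Cow::Borrowed(DEFAULT_AGENT_INSTRUCTIONS),
        }
    }
}
```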
2025-05-12 17:24:44 -07:00
Michael Bolin
55142e3e6c fix: use "thinking" instead of "codex reasoning" as the label for reasoning events in the TUI (#905) 2025-05-12 15:19:45 -07:00
Michael Bolin
115fb0b95d fix: navigate initialization phase before tools/list request in MCP client (#904)
Apparently the MCP server implemented in JavaScript did not require the
`initialize` handshake before responding to tool list/call, so I missed
this.
2025-05-12 15:15:26 -07:00
Avi Rosenberg
ab4cb94227 fix: Normalize paths in resolvePathAgainstWorkdir to prevent path traversal vulnerability (#895)
This PR fixes a potential path traversal vulnerability by ensuring all
paths are properly normalized in the `resolvePathAgainstWorkdir`
function.

## Changes
- Added path normalization for both absolute and relative paths
- Ensures normalized paths are used in all subsequent operations
- Prevents potential path traversal attacks through non-normalized paths

This minimal change addresses the security concern without adding
unnecessary complexity, while maintaining compatibility with existing
code.
2025-05-12 13:44:00 -07:00
Michael Bolin
73fe1381aa chore: introduce new --native flag to Node module release process (#844)
This PR introduces an optional build flag, `--native`, that will build a
version of the Codex npm module that:

- Includes both the Node.js and native Rust versions (for Mac and Linux)
- Will run the native version if `CODEX_RUST=1` is set
- Runs the TypeScript version otherwise

Note this PR also updates the workflow URL to
https://github.com/openai/codex/actions/runs/14872557396, as that is a
build from today that includes everything up through
https://github.com/openai/codex/pull/843.

Test Plan:

In `~/code/codex/codex-cli`, I ran:

```
pnpm stage-release --native
```

The end of the output was:

```
Staged version 0.1.2505121317 for release in /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN
Test Node:
    node /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN/bin/codex.js --help
Test Rust:
    CODEX_RUST=1 node /var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN/bin/codex.js --help
Next:  cd "/var/folders/wm/f209bc1n2bd_r0jncn9s6j_00000gp/T/tmp.xd2p5ETYGN" && npm publish --tag native
```

I verified that running each of these commands ran the expected version
of Codex.

While here, I also added `bin` to the `files` list in `package.json`,
which should have been done as part of
https://github.com/openai/codex/pull/757, as that added new entries to
`bin` that were matched by `.gitignore` but should have been included in
a release.
2025-05-12 13:38:10 -07:00
jcoens-openai
f3bd143867 Disallow expect via lints (#865)
Adds `expect()` as a denied lint. The same deal as with `unwrap()`
applies: we now need to put `#[expect(...)]` on the ones we legitimately
want. Took care to allow `expect()` in test contexts.

# Tests

```
cargo fmt
cargo clippy --all-features --all-targets --no-deps -- -D warnings
cargo test
```
2025-05-12 08:45:46 -07:00
Michael Bolin
a1f51bf91b fix: fix border style for BottomPane (#893)
This PR fixes things so that:

* when the `BottomPane` is in the `StatusIndicator` state, the border
should be dim
* when the `BottomPane` does not have input focus, the border should be
dim

To make it easier to enforce this invariant, this PR introduces
`BottomPane::set_state()` that will:

* update `self.state`
* call `update_border_for_input_focus()`
* request a repaint

This should make it easier to enforce other updates for state changes
going forward.
2025-05-10 23:34:13 -07:00
Michael Bolin
b4785b5f88 feat: include "reasoning" messages in Rust TUI (#892)
As shown in the screenshot, we now include reasoning messages from the
model in the TUI under the heading "codex reasoning":


![image](https://github.com/user-attachments/assets/d8eb3dc3-2f9f-4e95-847e-d24b421249a8)

To ensure these are visible by default when using `o4-mini`, this also
changes the default value for `summary` (formerly `generate_summary`,
which is deprecated in favor of `summary` according to the docs) from
unset to `"auto"`.
2025-05-10 21:43:27 -07:00
Michael Bolin
2b122da087 feat: add support for AGENTS.md in Rust CLI (#885)
The TypeScript CLI already has support for including the contents of
`AGENTS.md` in the instructions sent with the first turn of a
conversation. This PR brings this functionality to the Rust CLI.

To be considered, `AGENTS.md` must be in the `cwd` of the session, or in
one of the parent folders up to a Git/filesystem root (whichever is
encountered first).

By default, a maximum of 32 KiB of `AGENTS.md` will be included, though
this is configurable using the new-in-this-PR `project_doc_max_bytes`
option in `config.toml`.
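
The discovery walk amounts to something like this (a sketch; the function
name is illustrative, and capping the read at `project_doc_max_bytes` is
omitted):

```rust
use std::path::{Path, PathBuf};

/// Walk from `cwd` upward, returning the first AGENTS.md found, stopping
/// at a Git root (`.git` present) or the filesystem root.
fn find_project_doc(cwd: &Path) -> Option<PathBuf> {
    let mut dir = cwd;
    loop {
        let candidate = dir.join("AGENTS.md");
        if candidate.is_file() {
            return Some(candidate);
        }
        if dir.join(".git").exists() {
            return None; // reached the Git root without finding one
        }
        dir = dir.parent()?; // `None` once we pass the filesystem root
    }
}
```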
2025-05-10 17:52:59 -07:00
Corry Haines
b42ad670f1 fix: flex-mode via config/flag (#813)
* Add flexMode to stored config, and use it during config loading unless
the flag is explicitly passed.
* If the config asks for flexMode and the model doesn't support it,
silently disable flexMode.

Resolves #803
2025-05-10 16:18:20 -07:00
Pranav
646e7e9c11 feat: added arceeai as a provider (#818)
- Added ArceeAI as a provider  - https://conductor.arcee.ai/v1
- Compatible with ArceeAI SLMs (Virtuoso, Maestro)
- Works with ArceeAI's Conductor auto‑router models (auto, auto‑tool),
once #817 is merged
2025-05-10 16:16:28 -07:00
Pranav
19262f632f fix: guard against missing choices (#817)
- Fixes the guard by using optional chaining (`chunk.choices?.[0]`) to
check safely before accessing.
- Previously, accessing `chunk.choices[0]` without checking could throw
if `choices` was missing from the chunk.
2025-05-10 16:16:19 -07:00
Corry Haines
fcc76cf3e7 Add reasoning effort option to CLI help text (#815)
Reasoning effort was already available but not surfaced in the help
text, so it was non-discoverable.

Other issues discovered, but will fix in separate PR since they are
larger:
* #816 reasoningEffort isn't displayed in the terminal-header, making it
rather hard to see the state of configuration
* I don't think the config file setting works, as the CLI option always
"wins" and overwrites it
2025-05-10 15:58:59 -07:00
Fouad Matin
3104d81b7b fix: migrate to AGENTS.md (#764)
Migrate from `codex.md` to `AGENTS.md`
2025-05-10 15:57:49 -07:00
Tomas Cupr
e307d007aa fix: retry on OpenAI server_error even without status code (#814)
Fix: retry on server_error responses that lack an HTTP status code

### What happened

1. An OpenAI endpoint returned a **5xx** (transient server-side
failure).
2. The SDK surfaced it as an `APIError` with

   ```
   { "type": "server_error", "message": "...", "status": undefined }
   ```

   (The SDK does not always populate `status` for these cases.)
3. Our retry logic in `src/utils/agent/agent-loop.ts` determined

   ```
   isServerError = typeof status === "number" && status >= 500;
   ```

   Because `status` was *undefined*, the error was **not** recognised as
   retriable, the exception bubbled out, and the CLI crashed with a stack
   trace similar to:

   ```
   Error: An error occurred while processing the request.
       at .../cli.js:474:1514
   ```

### Root cause

The transient-error detector ignored the semantic flag
`type === "server_error"` that the SDK provides when the numeric status
is missing.

#### Fix (1 loc + comment)

Extend the check:

```typescript
const status = errCtx?.status ?? errCtx?.httpStatus ?? errCtx?.statusCode;

const isServerError =
  (typeof status === "number" && status >= 500) || // classic 5xx
  errCtx?.type === "server_error";                 // <-- NEW
```

Now the agent:

* Retries up to **5** times (existing logic) when the backend reports a
transient failure, even if `status` is absent.
* If all retries fail, surfaces the existing friendly system message
instead of an uncaught exception.

### Tests & validation

```
pnpm test        # all suites green (17 agent-level tests now include this path)
pnpm run lint    # 0 errors / warnings
pnpm run typecheck
```

A new unit-test file isn't required; the behaviour is already covered by
`tests/agent-server-retry.test.ts`, which stubs `type: "server_error"`
and now passes with the updated logic.

### Impact

* No API-surface changes.
* Prevents CLI crashes on intermittent OpenAI outages.
* Adds robust handling for other providers that may follow the same
error-shape.
2025-05-10 15:43:03 -07:00
Michael Bolin
fde48aaa0d feat: experimental env var: CODEX_SANDBOX_NETWORK_DISABLED (#879)
When using Codex to develop Codex itself, I noticed that sometimes it
would try to add `#[ignore]` to the following tests:

```
keeps_previous_response_id_between_tasks()
retries_on_early_close()
```

Both of these tests start a `MockServer` that launches an HTTP server on
an ephemeral port and requires network access to hit it, which the
Seatbelt policy associated with `--full-auto` correctly denies. If I
wasn't paying attention to the code that Codex was generating, one of
these `#[ignore]` annotations could have slipped into the codebase,
effectively disabling the test for everyone.

To that end, this PR enables an experimental environment variable named
`CODEX_SANDBOX_NETWORK_DISABLED` that is set to `1` if the
`SandboxPolicy` used to spawn the process does not have full network
access. I say it is "experimental" because I'm not convinced this API is
quite right, but we need to start somewhere. (It might be more
appropriate to have an env var like `CODEX_SANDBOX=full-auto`, but the
challenge is that our newer `SandboxPolicy` abstraction does not map to
a simple set of enums like in the TypeScript CLI.)

We leverage this new functionality by adding the following code to the
aforementioned tests as a way to "dynamically disable" them:

```rust
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
    println!(
        "Skipping test because it cannot execute when network is disabled in a Codex sandbox."
    );
    return;
}
```

We can use the `debug seatbelt --full-auto` command to verify that
`cargo test` fails when run under Seatbelt prior to this change:

```
$ cargo run --bin codex -- debug seatbelt --full-auto -- cargo test
---- keeps_previous_response_id_between_tasks stdout ----

thread 'keeps_previous_response_id_between_tasks' panicked at /Users/mbolin/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/wiremock-0.6.3/src/mock_server/builder.rs:107:46:
Failed to bind an OS port for a mock server.: Os { code: 1, kind: PermissionDenied, message: "Operation not permitted" }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    keeps_previous_response_id_between_tasks

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `-p codex-core --test previous_response_id`
```

After this change, though, the above command succeeds! This means that,
going forward, when Codex operates on Codex itself and runs `cargo
test`, only "real failures" should cause the command to fail.

As part of this change, I decided to tighten up the codepaths for
running `exec()` for shell tool calls. In particular, we do it in `core`
for the main Codex business logic itself, but we also expose this logic
via `debug` subcommands in the CLI in the `cli` crate. The logic for the
`debug` subcommands was not quite as faithful to the true business logic
as I would have liked, so I:

* refactored a bit of the Linux code, splitting `linux.rs` into
`linux_exec.rs` and `landlock.rs` in the `core` crate.
* gated less code behind `#[cfg(target_os = "linux")]`, because such
code does not get built by default when I develop on Mac, which means I
either have to build the code in Docker or wait for CI signal
* introduced `macro_rules! configure_command` in `exec.rs` so we can
have both sync and async versions of this code. The synchronous version
seems more appropriate for straight threads or potentially fork/exec.
2025-05-09 18:29:34 -07:00
Govind Kamtamneni
7795272282 Adds Azure OpenAI support (#769)
## Summary

This PR introduces support for Azure OpenAI as a provider within the
Codex CLI. Users can now configure the tool to leverage their Azure
OpenAI deployments by specifying `"azure"` as the provider in
`config.json` and setting the corresponding `AZURE_OPENAI_API_KEY` and
`AZURE_OPENAI_API_VERSION` environment variables. This functionality is
added alongside the existing provider options (OpenAI, OpenRouter,
etc.).

Related to #92

**Note:** This PR is currently in **Draft** status because tests on the
`main` branch are failing. It will be marked as ready for review once
the `main` branch is stable and tests are passing.

---

## What’s Changed

-   **Configuration (`config.ts`, `providers.ts`, `README.md`):**
- Added `"azure"` to the supported `providers` list in `providers.ts`,
specifying its name, default base URL structure, and environment
variable key (`AZURE_OPENAI_API_KEY`).
- Defined the `AZURE_OPENAI_API_VERSION` environment variable in
`config.ts` with a default value (`2025-03-01-preview`).
    -   Updated `README.md` to:
        -   Include "azure" in the list of providers.
- Add a configuration section for Azure OpenAI, detailing the required
environment variables (`AZURE_OPENAI_API_KEY`,
`AZURE_OPENAI_API_VERSION`) with examples.
- **Client Instantiation (`terminal-chat.tsx`, `singlepass-cli-app.tsx`,
`agent-loop.ts`, `compact-summary.ts`, `model-utils.ts`):**
- Modified various components and utility functions where the OpenAI
client is initialized.
- Added conditional logic to check if the configured `provider` is
`"azure"`.
- If the provider is Azure, the `AzureOpenAI` client from the `openai`
package is instantiated, using the configured `baseURL`, `apiKey` (from
`AZURE_OPENAI_API_KEY`), and `apiVersion` (from
`AZURE_OPENAI_API_VERSION`).
- Otherwise, the standard `OpenAI` client is instantiated as before.
-   **Dependencies:**
- Relies on the `openai` package's built-in support for `AzureOpenAI`.
No *new* external dependencies were added specifically for this Azure
implementation beyond the `openai` package itself.

---

## How to Test

*This has been tested locally and confirmed working with Azure OpenAI.*

1.  **Configure `config.json`:**
Ensure your `~/.codex/config.json` (or project-specific config) includes
Azure and sets it as the active provider:
    ```json
    {
      "providers": {
        // ... other providers
        "azure": {
          "name": "AzureOpenAI",
"baseURL": "https://YOUR_RESOURCE_NAME.openai.azure.com", // Replace
with your Azure endpoint
          "envKey": "AZURE_OPENAI_API_KEY"
        }
      },
      "provider": "azure", // Set Azure as the active provider
      "model": "o4-mini" // Use your Azure deployment name here
      // ... other config settings
    }
    ```
2.  **Set up Environment Variables:**
    ```bash
    # Set the API Key for your Azure OpenAI resource
    export AZURE_OPENAI_API_KEY="your-azure-api-key-here"

# Set the API Version (Optional - defaults to `2025-03-01-preview` if
not set)
# Ensure this version is supported by your Azure deployment and endpoint
    export AZURE_OPENAI_API_VERSION="2025-03-01-preview"
    ```
3.  **Get the Codex CLI by building from this PR branch:**
Clone your fork, checkout this branch (`feat/azure-openai`), navigate to
`codex-cli`, and build:
    ```bash
    # cd /path/to/your/fork/codex
    git checkout feat/azure-openai # Or your branch name
    cd codex-cli
    corepack enable
    pnpm install
    pnpm build
    ```
4.  **Invoke Codex:**
Run the locally built CLI using `node` from the `codex-cli` directory:
    ```bash
    node ./dist/cli.js "Explain the purpose of this PR"
    ```
*(Alternatively, if you ran `pnpm link` after building, you can use
`codex "Explain the purpose of this PR"` from anywhere)*.
5. **Verify:** Confirm that the command executes successfully and
interacts with your configured Azure OpenAI deployment.

---

## Tests

- [x] Tested locally against an Azure OpenAI deployment using API Key
authentication. Basic commands and interactions confirmed working.

---

## Checklist

- [x] Added Azure provider details to configuration files
(`providers.ts`, `config.ts`).
- [x] Implemented conditional `AzureOpenAI` client initialization based
on provider setting.
-   [x] Ensured `apiVersion` is passed correctly to the Azure client.
-   [x] Updated `README.md` with Azure OpenAI setup instructions.
- [x] Manually tested core functionality against a live Azure OpenAI
endpoint.
- [x] Add/update automated tests for the Azure code path (pending `main`
stability).

cc @theabhinavdas @nikodem-wrona @fouad-openai @tibo-openai (adjust as
needed)

---

I have read the CLA Document and I hereby sign the CLA
2025-05-09 18:11:32 -07:00
jcoens-openai
78843c3940 feat: Allow pasting newlines (#866)
Noticed that when pasting multi-line blocks, each newline was treated as
a new submission.
Updates the TUI to handle Paste events directly and map newlines to
shift+enter.

# Test

Copied this into clipboard:
```
Do nothing.
Explain this repo to me.
```

Pasted in and saw multi-line input. Hitting Enter then submitted the
full block.
2025-05-09 11:33:46 -07:00
Michael Bolin
93817643ee chore: refactor exec() into spawn_child() and consume_truncated_output() (#878)
This PR is a straight refactor so that creating the `Child` process for
a `shell` tool call and consuming its output can be separate concerns.
For the actual tool call, we will always apply
`consume_truncated_output()`, but for the top-level debug commands in
the CLI (e.g., `debug seatbelt` and `debug landlock`), we only want to
use the `spawn_child()` part of `exec()`.

We want the subcommands to match the `shell` tool call usage as
faithfully as possible. This becomes more important when we introduce a
new parameter to `spawn_child()` in
https://github.com/openai/codex/pull/879.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/878).
* #879
* __->__ #878
2025-05-09 11:03:58 -07:00
Michael Bolin
27198bfe11 fix: make McpConnectionManager tolerant of MCPs that fail to start (#854)
I added a typo in my `config.toml` such that the `command` for one of my
`mcp_servers` did not exist and I verified that the error was surfaced
in the TUI (and that I was still able to use Codex).


![image](https://github.com/user-attachments/assets/f13cc08c-f4c6-40ec-9ab4-a9d75e03152f)
2025-05-08 23:45:54 -07:00
Michael Bolin
b940adae8e fix: get responses API working again in Rust (#872)
I inadvertently regressed support for the Responses API when adding
support for the chat completions API in
https://github.com/openai/codex/pull/862. This should get both APIs
working again, but the chat completions codepath seems more complex than
necessary. I'll try to clean that up shortly, but I want to get things
working again ASAP.
2025-05-08 22:49:15 -07:00
Michael Bolin
e924070cee feat: support the chat completions API in the Rust CLI (#862)
This is a substantial PR to add support for the chat completions API,
which in turn makes it possible to use non-OpenAI model providers (just
like in the TypeScript CLI):

* It moves a number of structs from `client.rs` to `client_common.rs` so
they can be shared.
* It introduces support for the chat completions API in
`chat_completions.rs`.
* It updates `ModelProviderInfo` so that `env_key` is `Option<String>`
instead of `String` (for e.g., ollama) and adds a `wire_api` field
* It updates `client.rs` to choose between `stream_responses()` and
`stream_chat_completions()` based on the `wire_api` for the
`ModelProviderInfo`
* It updates the `exec` and TUI CLIs to no longer fail if the
`OPENAI_API_KEY` environment variable is not set
* It updates the TUI so that `EventMsg::Error` is displayed more
prominently when it occurs, particularly now that it is important to
alert users to the `CodexErr::EnvVar` variant.
* `CodexErr::EnvVar` was updated to include an optional `instructions`
field so we can preserve the behavior where we direct users to
https://platform.openai.com if `OPENAI_API_KEY` is not set.
* Cleaned up the "welcome message" in the TUI to ensure the model
provider is displayed.
* Updated the docs in `codex-rs/README.md`.

To exercise the chat completions API from OpenAI models, I added the
following to my `config.toml`:

```toml
model = "gpt-4o"
model_provider = "openai-chat-completions"

[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
```

Though to test a non-OpenAI provider, I installed ollama with mistral
locally on my Mac because ChatGPT said that would be a good match for my
hardware:

```shell
brew install ollama
ollama serve
ollama pull mistral
```

Then I added the following to my `~/.codex/config.toml`:

```toml
model = "mistral"
model_provider = "ollama"
```

Note this code could certainly use more test coverage, but I want to get
this in so folks can start playing with it.

For reference, I believe https://github.com/openai/codex/pull/247 was
roughly the comparable PR on the TypeScript side.
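
The provider shape this implies, sketched (simplified; the real structs
have more fields):

```rust
use serde::Deserialize;

#[derive(Deserialize, Clone, Copy, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
enum WireApi {
    /// OpenAI Responses API (the previous default).
    Responses,
    /// OpenAI-style chat completions, as spoken by many other providers.
    Chat,
}

#[derive(Deserialize)]
struct ModelProviderInfo {
    name: String,
    base_url: String,
    /// `None` for providers such as ollama that require no API key.
    env_key: Option<String>,
    wire_api: WireApi,
}

// At request time, client.rs picks the codepath:
//   WireApi::Responses => stream_responses(...)
//   WireApi::Chat      => stream_chat_completions(...)
```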
2025-05-08 21:46:06 -07:00
Michael Bolin
a538e6acb2 fix: use continue-on-error: true to tidy up GitHub Action (#871)
I installed the GitHub Actions extension for VS Code and it started
giving me lint warnings about this line:


a9adb4175c/.github/workflows/rust-ci.yml (L99)

Using an env var to track the state of individual steps was not great,
so I did some research about GitHub actions, which led to the discovery
of combining `continue-on-error: true` with `if .. steps.STEP.outcome ==
'failure'...`.

Apparently there is also a `failure()` function that is supposed to make
this simpler, but I saw a number of complaints online about it not
working as expected. Checking `outcome` seems more reliable at the cost
of being slightly more verbose.
2025-05-08 16:21:11 -07:00
Michael Bolin
a9adb4175c fix: enable clippy on tests (#870)
https://github.com/openai/codex/pull/855 added the clippy warning to
disallow `unwrap()`, but apparently we were not verifying that tests
were "clippy clean" in CI, so I ended up with a lot of local errors in
VS Code.

This turns on the check in CI and fixes the offenders.
2025-05-08 16:02:56 -07:00
Michael Bolin
699ec5a87f fix: remove wrapping in Rust TUI that was incompatible with scrolling math (#868)
I noticed that sometimes I would enter a new message, but it would not
show up in the conversation history. Even if I focused the conversation
history and tried to scroll it to the bottom, I could not bring it into
view. At first, I was concerned that messages were not making it to the
UI layer, but I added debug statements and verified that was not the
issue.

It turned out that, previous to this PR, lines that are wider than the
viewport take up multiple lines of vertical space because `wrap()` was
set on the `Paragraph` inside the scroll pane. Unfortunately, that broke
our "scrollbar math" that assumed each `Line` contributes one line of
height in the UI.

This PR removes the `wrap()`, but introduces a new issue, which is that
now you cannot see long lines without resizing your terminal window. For
now, I filed an issue here:

https://github.com/openai/codex/issues/869

I think the long-term fix is to fix our math so it calculates the height
of a `Line` after it is wrapped given the current width of the viewport.
2025-05-08 15:17:17 -07:00
jcoens-openai
87cf120873 Workspace lints and disallow unwrap (#855)
Sets submodules to use workspace lints. Added denying unwrap as a
workspace level lint, which found a couple of cases where we could have
propagated errors. Also manually labeled ones that were fine by my eye.
2025-05-08 09:46:18 -07:00
Michael Bolin
9fdf2fa066 fix: remove clap dependency from core crate (#860) 2025-05-07 19:33:09 -07:00
Michael Bolin
86022f097e feat: read model_provider and model_providers from config.toml (#853)
This is the first step in supporting other model providers in the Rust
CLI. Specifically, this PR adds support for the new entries in `Config`
and `ConfigOverrides` to specify a `ModelProviderInfo`, which is the
basic config needed for an LLM provider. This PR does not get us all the
way there yet because `client.rs` still categorically appends
`/responses` to the URL and expects the endpoint to support the OpenAI
Responses API. Will fix that next!
2025-05-07 17:38:28 -07:00
Michael Bolin
cfe50c7107 fix: creating an instance of Codex requires a Config (#859)
I discovered that I accidentally introduced a change in
https://github.com/openai/codex/pull/829 where we load a fresh `Config`
in the middle of `codex.rs`:


c3e10e180a/codex-rs/core/src/codex.rs (L515-L522)

This is not good because the `Config` could differ from the one that has
the user's overrides specified from the CLI. Also, in unit tests, it
means the `Config` was picking up my personal settings as opposed to
using a vanilla config, which was problematic.

This PR cleans things up by moving the common case where
`Op::ConfigureSession` is derived from `Config` (originally done in
`codex_wrapper.rs`) and making it the standard way to initialize `Codex`
by putting it in `Codex::spawn()`. Note this also eliminates quite a bit
of boilerplate from the tests and relieves the caller of the
responsibility of minting out unique IDs when invoking `submit()`.
2025-05-07 16:33:28 -07:00
Michael Bolin
c3e10e180a fix: remove CodexBuilder and Recorder (#858)
These abstractions were originally created exclusively for the REPL,
which was removed in https://github.com/openai/codex/pull/754.
Currently, they create some unnecessary Tokio tasks, so we are better off
without them. (We can always bring them back if we have a new use case.)
2025-05-07 16:11:42 -07:00
Michael Bolin
42617f8726 feat: save session transcripts when using Rust CLI (#845)
This adds support for saving transcripts when using the Rust CLI. Like
the TypeScript CLI, it saves the transcript to `~/.codex/sessions`,
though it uses JSONL for the file format (and `.jsonl` for the file
extension) so that even if Codex crashes, what was written to the
`.jsonl` file should generally still be valid JSONL content.
2025-05-07 13:49:15 -07:00
Michael Bolin
9da6ebef3f fix: add optional timeout to McpClient::send_request() (#852)
We now impose a 10s timeout on the initial `tools/list` request to an
MCP server. We do not apply a timeout for other types of requests yet,
but we should start enforcing those, as well.
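
The timeout itself is a thin `tokio::time::timeout` wrapper; a sketch
(`anyhow` is assumed here, and the real code may use its own error type):

```rust
use std::future::Future;
use std::time::Duration;

use tokio::time::timeout;

/// Cap an outgoing JSON-RPC request future at 10 seconds so a hung MCP
/// server cannot stall startup on the initial tools/list call.
async fn send_with_timeout<T>(
    fut: impl Future<Output = anyhow::Result<T>>,
) -> anyhow::Result<T> {
    timeout(Duration::from_secs(10), fut)
        .await
        .map_err(|_elapsed| anyhow::anyhow!("MCP request timed out after 10s"))?
}
```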
2025-05-07 12:56:38 -07:00
Michael Bolin
0360b4d0d7 feat: introduce the use of tui-markdown (#851)
This introduces the use of the `tui-markdown` crate to parse an
assistant message as Markdown and style it using ANSI for a better user
experience. As shown in the screenshot below, it has support for syntax
highlighting for _tagged_ fenced code blocks:

<img width="907" alt="image"
src="https://github.com/user-attachments/assets/900dc229-80bb-46e8-b1bb-efee4c70ba3c"
/>

That said, `tui-markdown` is not as configurable (or stylish!) as
https://www.npmjs.com/package/marked-terminal, which is what we use in
the TypeScript CLI. In particular:

* The styles are hardcoded and `tui_markdown::from_str()` does not take
any options whatsoever. It uses "bold white" for inline code style which
does not stand out as much as the yellow used by `marked-terminal`:


65402cbda7/tui-markdown/src/lib.rs (L464)

I asked Codex to take a first pass at this and it came up with:

https://github.com/joshka/tui-markdown/pull/80

* If a fenced code block is not tagged, then it does not get
highlighted. I would rather add some logic here:


65402cbda7/tui-markdown/src/lib.rs (L262)

that uses something like https://pypi.org/project/guesslang/ to examine
the value of `text` and try to use the appropriate syntax highlighter.

* When we have a fenced code block, we do not want to show the opening
and closing triple backticks in the output.

To unblock ourselves, we might want to bundle our own fork of
`tui-markdown` temporarily until we figure out what the shape of the API
should be and then try to upstream it.
2025-05-07 10:46:32 -07:00
jcoens-openai
a080d7b0fd Update submodules version to come from the workspace (#850)
Tie the version of submodules to the workspace version.
2025-05-07 10:08:06 -07:00
jcoens-openai
8a89d3aeda Update cargo to 2024 edition (#842)
Some effects of this change:
- New formatting changes across many files. No functionality changes
should occur from that.
- Calls to `set_env` are considered unsafe, since this only happens in
tests we wrap them in `unsafe` blocks
2025-05-07 08:37:48 -07:00
Michael Bolin
c577e94b67 chore: introduce codex-common crate (#843)
I started this PR because I wanted to share the `format_duration()`
utility function in `codex-rs/exec/src/event_processor.rs` with the TUI.
The question was: where to put it?

`core` should have as few dependencies as possible, so moving it there
would introduce a dependency on `chrono`, which seemed undesirable.
`core` already had this `cli` feature to deal with a similar situation
around sharing common utility functions, so I decided to:

* make `core` feature-free
* introduce `common`
* `common` can have as many "special interest" features as it needs,
each of which can declare their own deps
* the first two features of common are `cli` and `elapsed`

In practice, this meant updating a number of `Cargo.toml` files,
replacing this line:

```toml
codex-core = { path = "../core", features = ["cli"] }
```

with these:

```toml
codex-core = { path = "../core" }
codex-common = { path = "../common", features = ["cli"] }
```

Moving `format_duration()` into its own file gave it some "breathing
room" to add a unit test, so I had Codex generate some tests and new
support for durations over 1 minute.
2025-05-06 17:38:56 -07:00
Michael Bolin
7d8b38b37b feat: show MCP tool calls in codex exec subcommand (#841)
This is analogous to the change for the TUI in
https://github.com/openai/codex/pull/836, but for `codex exec`.

To test, I ran:

```
cargo run --bin codex-exec -- 'what is the weather in wellesley ma tomorrow'
```

and saw:


![image](https://github.com/user-attachments/assets/5714e07f-88c7-4dd9-aa0d-be54c1670533)
2025-05-06 16:52:43 -07:00
Michael Bolin
6f87f4c69f feat: drop support for q in the Rust TUI since we already support ctrl+d (#799)
Out of the box, we will make `/` the only official "escape sequence" for
commands in the Rust TUI. We will look to support `q` (or any string you
want to use as a "macro") via a plugin, but not make it part of the
default experience.

Existing `q` users will have to get by with `ctrl+d` for now.
2025-05-06 16:34:17 -07:00
Michael Bolin
aa36a15f9f fix: make all fields of Session struct private again (#840)
https://github.com/openai/codex/pull/829 noted it introduced a circular
dep between `codex.rs` and `mcp_tool_call.rs`. This attempts to clean
things up: the circular dep still exists, but at least all the fields of
`Session` are private again.
2025-05-06 16:21:35 -07:00
Michael Bolin
88e7ca5f2b feat: show MCP tool calls in TUI (#836)
Adds logic for the `McpToolCallBegin` and `McpToolCallEnd` events in
`codex-rs/tui/src/chatwidget.rs` so they get entries in the conversation
history in the TUI.

Building on top of https://github.com/openai/codex/pull/829, here is the
result of running:

```
cargo run --bin codex -- 'what is the weather in san francisco tomorrow'
```


![image](https://github.com/user-attachments/assets/db4a79bb-4988-46cb-acb2-446d5ba9e058)
2025-05-06 16:12:15 -07:00
Michael Bolin
147a940449 feat: support mcp_servers in config.toml (#829)
This adds initial support for MCP servers in the style of Claude Desktop
and Cursor. Note this PR is the bare minimum to get things working end
to end: all configured MCP servers are launched every time Codex is run,
there is no recovery for MCP servers that crash, etc.

(Also, I took some shortcuts to change some fields of `Session` to be
`pub(crate)`, which also means there are circular deps between
`codex.rs` and `mcp_tool_call.rs`, but I will clean that up in a
subsequent PR.)

`codex-rs/README.md` is updated as part of this PR to explain how to use
this feature. There is a bit of plumbing to route the new settings from
`Config` to the business logic in `codex.rs`. The most significant
chunks for new code are in `mcp_connection_manager.rs` (which defines
the `McpConnectionManager` struct) and `mcp_tool_call.rs`, which is
responsible for tool calls.

This PR also introduces new `McpToolCallBegin` and `McpToolCallEnd`
event types to the protocol, but does not add any handlers for them.
(See https://github.com/openai/codex/pull/836 for initial usage.)

To test, I added the following to my `~/.codex/config.toml`:

```toml
# Local build of https://github.com/hideya/mcp-server-weather-js
[mcp_servers.weather]
command = "/Users/mbolin/code/mcp-server-weather-js/dist/index.js"
args = []
```

And then I ran the following:

```
codex-rs$ cargo run --bin codex exec 'what is the weather in san francisco'
[2025-05-06T22:40:05] Task started: 1
[2025-05-06T22:40:18] Agent message: Here’s the latest National Weather Service forecast for San Francisco (downtown, near 37.77° N, 122.42° W):

This Afternoon (Tue):
• Sunny, high near 69 °F
• West-southwest wind around 12 mph

Tonight:
• Partly cloudy, low around 52 °F
• SW wind 7–10 mph
...
```

Note that Codex itself is not able to make network calls, so it would
not normally be able to get live weather information like this. However,
the weather MCP is [currently] not run under the Codex sandbox, so it is
able to hit `api.weather.gov` and fetch current weather information.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/829).
* #836
* __->__ #829
2025-05-06 15:47:59 -07:00
Michael Bolin
49d040215a fix: build all crates individually as part of CI (#833)
I discovered that `cargo build` worked for the entire workspace, but not
for the `mcp-client` or `core` crates.

* `mcp-client` failed to build because it underspecified the set of
features it needed from `tokio`.
* `core` failed to build because it was using a "feature" of its own
crate in the default, no-feature version.
 
This PR fixes the builds and adds a check in CI to defend against this
sort of thing going forward.
2025-05-06 12:02:49 -07:00
Michael Bolin
5f1b8f707c feat: update McpClient::new_stdio_client() to accept an env (#831)
Cleans up the signature for `new_stdio_client()` to more closely mirror
how MCP servers are declared in config files (`command`, `args`, `env`).
Also takes a cue from Claude Code where the MCP server is launched with
a restricted `env` so that it only includes "safe" things like `USER`
and `PATH` (see the `create_env_for_mcp_server()` function introduced in
this PR for details) by default, as it is common for developers to have
sensitive API keys present in their environment that should only be
forwarded to the MCP server when the user has explicitly configured it
to do so.
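
A sketch of that construction (the allow-list below is illustrative; see
`create_env_for_mcp_server()` for the real one):

```rust
use std::collections::HashMap;

/// Build the environment for a spawned MCP server: start from a small
/// allow-list of "safe" variables instead of inheriting everything, then
/// layer on whatever the user explicitly configured for that server.
fn env_for_mcp_server(extra: HashMap<String, String>) -> HashMap<String, String> {
    const SAFE_VARS: &[&str] = &["HOME", "PATH", "USER", "LANG", "TMPDIR"];
    let mut env: HashMap<String, String> = SAFE_VARS
        .iter()
        .filter_map(|k| std::env::var(k).ok().map(|v| ((*k).to_string(), v)))
        .collect();
    env.extend(extra); // user-configured entries win
    env
}
```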

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/831).
* #829
* __->__ #831
2025-05-06 11:14:47 -07:00
Michael Bolin
2cf7aeeeb6 feat: initial McpClient for Rust (#822)
This PR introduces an initial `McpClient` that we will use to give Codex
itself programmatic access to foreign MCPs. This does not wire it up in
Codex itself yet, but the new `mcp-client` crate includes a `main.rs`
for basic testing for now.

Manually tested by sending a `tools/list` request to Codex's own MCP
server:

```
codex-rs$ cargo build
codex-rs$ cargo run --bin codex-mcp-client ./target/debug/codex-mcp-server
{
  "tools": [
    {
      "description": "Run a Codex session. Accepts configuration parameters matching the Codex Config struct.",
      "inputSchema": {
        "properties": {
          "approval-policy": {
            "description": "Execution approval policy expressed as the kebab-case variant name (`unless-allow-listed`, `auto-edit`, `on-failure`, `never`).",
            "enum": [
              "auto-edit",
              "unless-allow-listed",
              "on-failure",
              "never"
            ],
            "type": "string"
          },
          "cwd": {
            "description": "Working directory for the session. If relative, it is resolved against the server process's current working directory.",
            "type": "string"
          },
          "disable-response-storage": {
            "description": "Disable server-side response storage.",
            "type": "boolean"
          },
          "model": {
            "description": "Optional override for the model name (e.g. \"o3\", \"o4-mini\")",
            "type": "string"
          },
          "prompt": {
            "description": "The *initial user prompt* to start the Codex conversation.",
            "type": "string"
          },
          "sandbox-permissions": {
            "description": "Sandbox permissions using the same string values accepted by the CLI (e.g. \"disk-write-cwd\", \"network-full-access\").",
            "items": {
              "enum": [
                "disk-full-read-access",
                "disk-write-cwd",
                "disk-write-platform-user-temp-folder",
                "disk-write-platform-global-temp-folder",
                "disk-full-write-access",
                "network-full-access"
              ],
              "type": "string"
            },
            "type": "array"
          }
        },
        "required": [
          "prompt"
        ],
        "type": "object"
      },
      "name": "codex"
    }
  ]
}
```
2025-05-05 12:52:55 -07:00
Anil Karaka
76a979007e fix: increase output limits for truncating collector (#575)
This Pull Request addresses an issue where the output of commands
executed in the raw-exec utility was being truncated due to restrictive
limits on the number of lines and bytes collected. The truncation caused
the message [Output truncated: too many lines or bytes] to appear when
processing large outputs, which could hinder the functionality of the
CLI.

Changes Made:

- Increased the maximum output limits in the `createTruncatingCollector`
  utility:
  - Bytes: increased from 10 KB to 100 KB.
  - Lines: increased from 256 lines to 1024 lines.
- Installed the `@types/node` package to resolve missing type definitions
  for `NodeJS` and `Buffer`.
- Verified and fixed any related errors in the `createTruncatingCollector`
  implementation.

Issue solved: https://github.com/openai/codex/issues/509

This PR ensures that larger outputs can be processed without truncation,
improving the usability of the CLI for commands that generate extensive
output.

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-05-05 10:26:55 -07:00
Andrey Mishchenko
7e97980cb4 Use "Title case" for ToC (#812) 2025-05-05 08:49:42 -07:00
Michael Bolin
2b72d05c5e feat: make Codex available as a tool when running it as an MCP server (#811)
This PR replaces the placeholder `"echo"` tool call in the MCP server
with a `"codex"` tool that calls Codex. Events such as
`ExecApprovalRequest` and `ApplyPatchApprovalRequest` are not handled
properly yet, but I have `approval_policy = "never"` set in my
`~/.codex/config.toml` such that those codepaths are not exercised.

The schema for this MCP tool is defined by a new `CodexToolCallParam`
struct introduced in this PR. It is fairly similar to `ConfigOverrides`,
as the param is used to help create the `Config` used to start the Codex
session, though it also includes the `prompt` used to kick off the
session.

This PR also introduces the use of the third-party `schemars` crate to
generate the JSON schema, which is verified in the
`verify_codex_tool_json_schema()` unit test.

Events that are dispatched during the Codex session are sent back to the
MCP client as MCP notifications. This gives the client a way to monitor
progress as the tool call itself may take minutes to complete depending
on the complexity of the task requested by the user.
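For example, a forwarded event might be delivered as a JSON-RPC
notification along these lines (the method name and payload shape are
assumptions for illustration; this PR does not spell them out):

```json
{
  "jsonrpc": "2.0",
  "method": "codex/event",
  "params": {
    "id": "1",
    "msg": { "type": "task_started" }
  }
}
```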

In the video below, I launched the server via:

```shell
mcp-server$ RUST_LOG=debug npx @modelcontextprotocol/inspector cargo run --
```

In the video, you can see the flow of:

* requesting the list of tools
* choosing the **codex** tool
* entering a value for **prompt** and then making the tool call

Note that I left the other fields blank because when unspecified, the
values in my `~/.codex/config.toml` were used:


https://github.com/user-attachments/assets/1975058c-b004-43ef-8c8d-800a953b8192

Note that while using the inspector, I did run into
https://github.com/modelcontextprotocol/inspector/issues/293, though the
tip about ensuring I had only one instance of the **MCP Inspector** tab
open in my browser seemed to fix things.
2025-05-05 07:16:19 -07:00
Michael Bolin
5d924d44cf fix: ensure apply_patch resolves relative paths against workdir or project cwd (#810)
https://github.com/openai/codex/pull/800 kicked off some work to be more
disciplined about honoring the `cwd` param passed in rather than
assuming `std::env::current_dir()` as the `cwd`. As part of this, we
need to ensure `apply_patch` calls honor the appropriate `cwd` as well,
which is significant if the paths in the `apply_patch` arg are not
themselves absolute. In that case:

- The `apply_patch` function call can contain an optional `workdir`
param, so:
  - If `workdir` is specified and is an absolute path, it is used to
resolve relative paths.
  - If `workdir` is specified and is a relative path, it is first
resolved against `Config.cwd`, and then any relative paths are resolved
against the result.
- If `workdir` is not specified on the function call, relative paths
are resolved against `Config.cwd`.
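Here is a minimal sketch of that resolution order (function and
parameter names are illustrative assumptions, not the actual codex-rs
API):

```rust
use std::path::{Path, PathBuf};

/// Sketch of the resolution order above.
fn resolve_patch_path(
    workdir: Option<&Path>,
    config_cwd: &Path,
    path_in_patch: &Path,
) -> PathBuf {
    if path_in_patch.is_absolute() {
        return path_in_patch.to_path_buf();
    }
    match workdir {
        // `Path::join` replaces the base entirely when `dir` is absolute,
        // so this one expression covers both the absolute-`workdir` and
        // relative-`workdir` cases described above.
        Some(dir) => config_cwd.join(dir).join(path_in_patch),
        None => config_cwd.join(path_in_patch),
    }
}
```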

Note that we had a similar issue in the TypeScript CLI that was fixed in
https://github.com/openai/codex/pull/556.

As part of the fix, this PR introduces `ApplyPatchAction` so clients can
deal with that instead of the raw `HashMap<PathBuf,
ApplyPatchFileChange>`. This enables us to enforce, by construction,
that all paths contained in the `ApplyPatchAction` are absolute paths.
2025-05-04 12:32:51 -07:00
Michael Bolin
a134bdde49 fix: is_inside_git_repo should take the directory as a param (#809)
https://github.com/openai/codex/pull/800 made `cwd` a property of
`Config` and made it so the `cwd` is not necessarily
`std::env::current_dir()`. As such, `is_inside_git_repo()` should check
`Config.cwd` rather than `std::env::current_dir()`.

This PR updates `is_inside_git_repo()` to take `Config` instead of an
arbitrary `PathBuf` to force the check to operate on a `Config` where
`cwd` has been resolved to what the user specified.
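A minimal sketch of the resulting shape (the `Config` stand-in below is
abridged and illustrative, not the exact codex-rs implementation):

```rust
use std::path::PathBuf;

/// Abridged stand-in for the real `Config`.
struct Config {
    cwd: PathBuf,
}

/// Checks `Config.cwd` (not `std::env::current_dir()`) by walking up
/// the directory tree looking for a `.git` entry.
fn is_inside_git_repo(config: &Config) -> bool {
    config.cwd.ancestors().any(|dir| dir.join(".git").exists())
}
```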
2025-05-04 11:39:10 -07:00
Michael Bolin
cd12f0c24a fix: TUI should use cwd from Config (#808)
https://github.com/openai/codex/pull/800 made `cwd` a property of
`Config`, so the TUI should use this instead of running
`std::env::current_dir()`.
2025-05-04 11:12:40 -07:00
Michael Bolin
421e159888 feat: make cwd a required field of Config so we stop assuming std::env::current_dir() in a session (#800)
In order to expose Codex via an MCP server, I realized that we should be
taking `cwd` as a parameter rather than assuming
`std::env::current_dir()` as the `cwd`. Specifically, the user may want
to start a session in a directory other than the one where the MCP
server has been started.

This PR makes `cwd: PathBuf` a required field of `Session` and threads
it all the way through, though I think there is still an issue with not
honoring `workdir` for `apply_patch`, which is something we also had to
fix in the TypeScript version: https://github.com/openai/codex/pull/556.

This also adds `-C`/`--cd` to change the cwd via the command line.

To test, I ran:

```
cargo run --bin codex -- exec -C /tmp 'show the output of ls'
```

and verified it showed the contents of my `/tmp` folder instead of
`$PWD`.
2025-05-04 10:57:12 -07:00
Andrey Mishchenko
4b61fb8bab use "Title case" in README.md (#798) 2025-05-03 10:17:44 -07:00
Michael Bolin
0442458309 doc: update the config.toml documentation for the Rust CLI in codex-rs/README.md (#795)
https://github.com/openai/codex/pull/793 had important information on
the `notify` config option that seemed worth memorializing, so this PR
updates the documentation about all of the configurable options in
`~/.codex/config.toml`.
2025-05-02 20:32:24 -07:00
Michael Bolin
a180ed44e8 feat: configurable notifications in the Rust CLI (#793)
With this change, you can specify a program that will be executed to get
notified about events generated by Codex. The notification info will be
packaged as a JSON object. The supported notification types are defined
by the `UserNotification` enum introduced in this PR. Initially, it
contains only one variant, `AgentTurnComplete`:

```rust
pub(crate) enum UserNotification {
    #[serde(rename_all = "kebab-case")]
    AgentTurnComplete {
        turn_id: String,

        /// Messages that the user sent to the agent to initiate the turn.
        input_messages: Vec<String>,

        /// The last message sent by the assistant in the turn.
        last_assistant_message: Option<String>,
    },
}
```

This is intended to support the common case when a "turn" ends, which
often means it is now your chance to give Codex further instructions.
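With the kebab-case renaming shown above, the JSON handed to the notify
program looks roughly like this (values are illustrative):

```json
{
  "type": "agent-turn-complete",
  "turn-id": "b3f2c4d8",
  "input-messages": ["Rename `foo` to `bar` and update the callsites."],
  "last-assistant-message": "Renamed `foo` to `bar` and updated 3 call sites."
}
```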

For example, I have the following in my `~/.codex/config.toml`:

```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
```

I created my own custom notifier script that calls out to
[terminal-notifier](https://github.com/julienXX/terminal-notifier) to
show a desktop push notification on macOS. Contents of `notify.py`:

```python
#!/usr/bin/env python3

import json
import subprocess
import sys


def main() -> int:
    if len(sys.argv) != 2:
        print("Usage: notify.py <NOTIFICATION_JSON>")
        return 1

    try:
        notification = json.loads(sys.argv[1])
    except json.JSONDecodeError:
        return 1

    match notification_type := notification.get("type"):
        case "agent-turn-complete":
            assistant_message = notification.get("last-assistant-message")
            if assistant_message:
                title = f"Codex: {assistant_message}"
            else:
                title = "Codex: Turn Complete!"
            # The serde rename_all above makes this key kebab-case too.
            input_messages = notification.get("input-messages", [])
            message = " ".join(input_messages)
        case _:
            print(f"not sending a push notification for: {notification_type}")
            return 0

    subprocess.check_output(
        [
            "terminal-notifier",
            "-title",
            title,
            "-message",
            message,
            "-group",
            "codex",
            "-ignoreDnD",
            "-activate",
            "com.googlecode.iterm2",
        ]
    )

    return 0


if __name__ == "__main__":
    sys.exit(main())
```

For reference, here are related PRs that tried to add this functionality
to the TypeScript version of the Codex CLI:

* https://github.com/openai/codex/pull/160
* https://github.com/openai/codex/pull/498
2025-05-02 19:48:13 -07:00
Michael Bolin
21cd953dbd feat: introduce mcp-server crate (#792)
This introduces the `mcp-server` crate, which contains a barebones MCP
server that provides an `echo` tool that echoes the user's request back
to them.

To test it out, I launched
[modelcontextprotocol/inspector](https://github.com/modelcontextprotocol/inspector)
like so:

```
mcp-server$ npx @modelcontextprotocol/inspector cargo run --
```

and opened up `http://127.0.0.1:6274` in my browser:


![image](https://github.com/user-attachments/assets/83fc55d4-25c2-4497-80cd-e9702283ff93)

I also had to make a small fix to `mcp-types`, adding
`#[serde(untagged)]` to a number of `enum`s.
2025-05-02 17:25:58 -07:00
Michael Bolin
865e518771 fix: mcp-types serialization wasn't quite working (#791)
While creating a basic MCP server in
https://github.com/openai/codex/pull/792, I discovered a number of bugs
with the initial `mcp-types` crate that I needed to fix in order to
implement the server.

For example, I discovered that when serializing a message, `"jsonrpc":
"2.0"` was not being included.

I changed the codegen so that the field is added as:

```rust
    #[serde(rename = "jsonrpc", default = "default_jsonrpc")]
    pub jsonrpc: String,
```

This ensures that the field is serialized as `"2.0"`, though the field
still has to be assigned, which is tedious. I may experiment with
`Default` or something else in the future. (I also considered creating a
custom serializer, but I'm not sure it's worth the trouble.)

While here, I also added `MCP_SCHEMA_VERSION` and `JSONRPC_VERSION` as
`pub const`s for the crate.

I also discovered that MCP rejects sending `null` for optional fields,
so I had to add `#[serde(skip_serializing_if = "Option::is_none")]` on
`Option` fields.
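Taken together, a generated type ends up looking roughly like the sketch
below (an abridged, illustrative shape; the real types come out of
`generate_mcp_types.py`):

```rust
use serde::{Deserialize, Serialize};

pub const JSONRPC_VERSION: &str = "2.0";

fn default_jsonrpc() -> String {
    JSONRPC_VERSION.to_owned()
}

/// Abridged, illustrative shape of a generated type.
#[derive(Serialize, Deserialize)]
pub struct JsonrpcNotification {
    // Always serialized as "2.0", and defaulted when deserializing.
    #[serde(rename = "jsonrpc", default = "default_jsonrpc")]
    pub jsonrpc: String,
    pub method: String,
    // MCP rejects an explicit `null`, so omit the field when unset.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub params: Option<serde_json::Value>,
}
```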

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/791).
* #792
* __->__ #791
2025-05-02 16:38:05 -07:00
Michael Bolin
83961e0299 feat: introduce mcp-types crate (#787)
This adds our own `mcp-types` crate to our Cargo workspace. We vendor in
the
[`2025-03-26/schema.json`](05f2045136/schema/2025-03-26/schema.json)
from the MCP repo and introduce a `generate_mcp_types.py` script to
codegen the `lib.rs` from the JSON schema.

Test coverage is currently light, but I plan to refine things as we
start making use of this crate.

And yes, I am aware that
https://github.com/modelcontextprotocol/rust-sdk exists, though the
published https://crates.io/crates/rmcp appears to be a competing
effort. While things are up in the air, it seems better for us to
control our own version of this code.

Incidentally, Codex did a lot of the work for this PR. I told it to
never edit `lib.rs` directly and instead to update
`generate_mcp_types.py` and then re-run it to update `lib.rs`. It
followed these instructions and once things were working end-to-end, I
iteratively asked for changes to the tests until the API looked
reasonable (and the code worked). Codex was responsible for figuring out
what to do to `generate_mcp_types.py` to achieve the requested test/API
changes.
2025-05-02 13:33:14 -07:00
anup-openai
f6b1ce2e3a Configure HTTPS agent for proxies (#775)
- Some workflows require you to route OpenAI API traffic through a proxy
- See
https://github.com/openai/openai-node/tree/v4?tab=readme-ov-file#configuring-an-https-agent-eg-for-proxies
for more details

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
Co-authored-by: Fouad Matin <fouad@openai.com>
2025-05-02 12:08:13 -07:00
Fouad Matin
b864cc3810 update: vite version (#766) 2025-05-02 09:30:08 -07:00
Michael Bolin
a4b51f6b67 feat: use Landlock for sandboxing on Linux in TypeScript CLI (#763)
Building on top of https://github.com/openai/codex/pull/757, this PR
updates Codex to use the Landlock executor binary for sandboxing in the
Node.js CLI. Note that Codex has to be invoked with either `--full-auto`
or `--auto-edit` to activate sandboxing. (Using `--suggest` or
`--dangerously-auto-approve-everything` ensures the sandboxing codepath
will not be exercised.)

When I tested this on a Linux host (specifically, `Ubuntu 24.04.1 LTS`),
things worked as expected: I ran Codex CLI with `--full-auto` and then
asked it to do `echo 'hello mbolin' into hello_world.txt` and it
succeeded without prompting me.

However, in my testing, I discovered that the sandboxing did *not* work
when using `--full-auto` in a Linux Docker container from a macOS host.
I updated the code to throw a detailed error message when this happens:


![image](https://github.com/user-attachments/assets/e5b99def-f00e-4ade-a0c5-2394d30df52e)
2025-05-01 12:34:56 -07:00
Michael Bolin
3f5975ad5a chore: make build process a single script to run (#757)
This introduces `./codex-cli/scripts/stage_release.sh`, which is a shell
script that stages a release for the Node.js module in a temp directory.
It updates the release to include these native binaries:

```
bin/codex-linux-sandbox-arm64
bin/codex-linux-sandbox-x64
```

though this PR does not update Codex CLI to use them yet.

When doing local development, run
`./codex-cli/scripts/install_native_deps.sh` to install these in your
own `bin/` folder.

This PR also updates `README.md` to document the new workflow.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/757).
* #763
* __->__ #757
2025-05-01 08:36:07 -07:00
Fouad Matin
463a230991 bump(version): 0.1.2504301751 (#768)
## `0.1.2504301751`

### 🚀 Features

- User config api key (#569)
- `@mention` files in codex (#701)
- Add `--reasoning` CLI flag (#314)
- Lower default retry wait time and increase number of tries (#720)
- Add common package registries domains to allowed-domains list (#414)

### 🪲 Bug Fixes

- Insufficient quota message (#758)
- Input keyboard shortcut opt+delete (#685)
- `/diff` should include untracked files (#686)
- Only allow running without sandbox if explicitly marked in safe
container (#699)
- Tighten up check for /usr/bin/sandbox-exec (#710)
- Check if sandbox-exec is available (#696)
- Duplicate messages in quiet mode (#680)
2025-04-30 17:55:34 -07:00
Fouad Matin
985fd44ec0 fix: input keyboard shortcut opt+delete (#685) 2025-04-30 17:17:13 -07:00
moppywhip
bc4e6db749 feat: @mention files in codex (#701)
Solves #700

## State of the World Before

Prior to this PR, when users wanted to share file contents with Codex,
they had two options:
- Manually copy and paste file contents into the chat
- Wait for the assistant to use the shell tool to view the file

The second approach required the assistant to:
1. Recognize the need to view a file
2. Execute a shell tool call
3. Wait for the tool call to complete
4. Process the file contents

This consumed extra tokens and reduced user control over which files
were shared with the model.

## State of the World After

With this PR, users can now:
- Reference files directly in their chat input using the `@path` syntax
- Have file contents automatically expanded into XML blocks before being
sent to the LLM

For example, users can type `@src/utils/config.js` in their message, and
the file contents will be included in context. Within the terminal chat
history, these file blocks will be collapsed back to `@path` format in
the UI for clean presentation.
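For instance, the expanded form sent to the model might look roughly
like this (the tag and attribute names are illustrative assumptions; the
PR does not pin down the exact format here):

```xml
<file path="src/utils/config.js">
export const DEFAULT_TIMEOUT_MS = 30000;
</file>
```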

Tag File suggestions:
<img width="857" alt="file-suggestions"
src="https://github.com/user-attachments/assets/397669dc-ad83-492d-b5f0-164fab2ff4ba"
/>

Tagging files in action:
<img width="858" alt="tagging-files"
src="https://github.com/user-attachments/assets/0de9d559-7b7f-4916-aeff-87ae9b16550a"
/>

Demo video of file tagging:
[![Demo video of file
tagging](https://img.youtube.com/vi/vL4LqtBnqt8/0.jpg)](https://www.youtube.com/watch?v=vL4LqtBnqt8)

## Implementation Details

This PR consists of 2 main components:

1. **File Tag Utilities**:
- New `file-tag-utils.ts` utility module that handles both expansion and
collapsing of file tags
- `expandFileTags()` identifies `@path` tokens and replaces them with
XML blocks containing file contents
- `collapseXmlBlocks()` reverses the process, converting XML blocks back
to `@path` format for UI display
- Tokens are only expanded if they point to valid files (directories are
ignored)
   - Expansion happens just before sending input to the model

2. **Terminal Chat Integration**:
- Leveraged the existing file system completion system for tabbing to
support the `@path` syntax
   - Added `updateFsSuggestions` helper to manage filesystem suggestions
- Added `replaceFileSystemSuggestion` to replace input with filesystem
suggestions
- Applied `collapseXmlBlocks` in the chat response rendering so that
tagged files are shown as simple `@path` tags

The PR also includes test coverage for both the UI and the file tag
utilities.

## Next Steps

Some ideas I'd like to implement if this feature gets merged:

- Line selection: `@path[50:80]` to grab specific sections of files
- Method selection: `@path#methodName` to grab just one function/class
- Visual improvements: highlight file tags in the UI to make them more
noticeable
2025-04-30 16:19:55 -07:00
Kevin Alwell
bd82101859 fix: insufficient quota message (#758)
This pull request includes a change to improve the error message
displayed when there is insufficient quota in the `AgentLoop` class. The
updated message provides more detailed information and a link for
managing or purchasing credits.

Error message improvement:

*
[`codex-cli/src/utils/agent/agent-loop.ts`](diffhunk://#diff-b15957eac2720c3f1f55aa32f172cdd0ac6969caf4e7be87983df747a9f97083L1140-R1140):
Updated the error message in the `AgentLoop` class to include the
specific error message (if available) and a link to manage or purchase
credits.


Fixes #751
2025-04-30 16:00:50 -07:00
Michael Bolin
033d379eca fix: remove unused _writableRoots arg to exec() function (#762)
I suspect this was done originally so that `execForSandbox()` had a
consistent signature for both the `SandboxType.NONE` and
`SandboxType.MACOS_SEATBELT` cases, but that is not really necessary and
turns out to make the upcoming Landlock support a bit more complicated
to implement, so I had Codex remove it and clean up the call sites.
2025-04-30 14:08:27 -07:00
Michael Bolin
e6fe8d6fa1 chore: mark Rust releases as "prerelease" (#761)
Apparently the URLs for draft releases cannot be downloaded using
unauthenticated `curl`, which means the DotSlash file only works for
users who are authenticated with `gh`. According to chat, prereleases
_can_ be fetched with unauthenticated `curl`, so let's try that.
2025-04-30 13:25:53 -07:00
Michael Bolin
b571249867 chore: script to create a Rust release (#759)
For now, keep things simple such that we never update the `version` in
the `Cargo.toml` for the workspace root on the `main` branch. Instead,
create a new branch for a release, push one commit that updates the
`version`, and then tag that branch to kick off a release.
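In other words, the release flow looks roughly like this (the branch
name and version number are illustrative):

```shell
git checkout -b release-0.0.2504301132
# Bump `version` in the workspace-root Cargo.toml, then:
git commit -am "chore: bump version to 0.0.2504301132"
git tag rust-v0.0.2504301132
git push origin rust-v0.0.2504301132
```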

To test, I ran this script and created this release job:

https://github.com/openai/codex/actions/runs/14762580641
2025-04-30 12:39:03 -07:00
Michael Bolin
24278347b7 fix: remove codex-repl from GitHub workflows (#760)
I missed this when doing https://github.com/openai/codex/pull/754.
2025-04-30 12:10:24 -07:00
Michael Bolin
8f7a54501c chore: Rust release, set prerelease:false and version=0.0.2504301132 (#755)
The generated DotSlash file has URLs that refer to
`https://github.com/openai/codex/releases/`, so let's set
`prerelease:false` (but keep `draft:true` for now) so those URLs should
work.

Also updated `version` in Cargo workspace so I will kick off a build
once this lands.
2025-04-30 11:53:03 -07:00
Michael Bolin
2f1d96e77d fix: remove errant eslint-disable so pnpm run lint passes again (#756)
My bad: introduced in https://github.com/openai/codex/pull/753.
2025-04-30 11:37:11 -07:00
Michael Bolin
84aaefa102 fix: read version from package.json instead of modifying session.ts (#753)
I am working to simplify the build process. As a first step, update
`session.ts` so it reads the `version` from `package.json` at runtime so
we no longer have to modify it during the build process. I want to get
to a place where the build looks like:

```
cd codex-cli
pnpm i
pnpm build
RELEASE_DIR=$(mktemp -d)
cp -r bin "$RELEASE_DIR/bin"
cp -r dist "$RELEASE_DIR/dist"
cp -r src "$RELEASE_DIR/src" # important if we want sourcemaps to continue to work
cp ../README.md "$RELEASE_DIR"
VERSION=$(printf '0.1.%d' $(date +%y%m%d%H%M))
jq --arg version "$VERSION" '.version = $version' package.json > "$RELEASE_DIR/package.json"
```

Then the contents of `$RELEASE_DIR` should be good to `npm publish`, no?
2025-04-30 11:03:10 -07:00
Michael Bolin
c432d9ef81 chore: remove the REPL crate/subcommand (#754)
@oai-ragona and I discussed it, and we feel the REPL crate has served
its purpose, so we're going to delete the code and future archaeologists
can find it in Git history.
2025-04-30 10:15:50 -07:00
Michael Bolin
4746ee900f fix: remove expected dot after v in rust-v tag name (#742)
I think this extra dot was not intentional, but I'm not sure. Certainly
this comment suggests it should not be there:


85999d7277/.github/workflows/rust-release.yml (L4)
2025-04-30 10:05:47 -07:00
278 changed files with 25801 additions and 3840 deletions

1
.codespellignore Normal file

@@ -0,0 +1 @@
iTerm

6
.codespellrc Normal file

@@ -0,0 +1,6 @@
[codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = .git*,vendor,*-lock.yaml,*.lock,.codespellrc,*test.ts
check-hidden = true
ignore-regex = ^\s*"image/\S+": ".*|\b(afterAll)\b
ignore-words-list = ratatui,ser

29
.devcontainer/Dockerfile Normal file

@@ -0,0 +1,29 @@
FROM ubuntu:22.04
ARG DEBIAN_FRONTEND=noninteractive
# enable 'universe' because musl-tools & clang live there
RUN apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common && \
add-apt-repository --yes universe
# now install build deps
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential curl git ca-certificates \
pkg-config clang musl-tools libssl-dev && \
rm -rf /var/lib/apt/lists/*
# non-root dev user
ARG USER=dev
ARG UID=1000
RUN useradd -m -u $UID $USER
USER $USER
# install Rust + musl target as dev user
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal && \
~/.cargo/bin/rustup target add aarch64-unknown-linux-musl
ENV PATH="/home/${USER}/.cargo/bin:${PATH}"
WORKDIR /workspace

30
.devcontainer/README.md Normal file

@@ -0,0 +1,30 @@
# Containerized Development
We provide the following options to facilitate Codex development in a container. This is particularly useful for verifying the Linux build when working on a macOS host.
## Docker
To build the Docker image locally for x64 and then run it with the repo mounted under `/workspace`:
```shell
CODEX_DOCKER_IMAGE_NAME=codex-linux-dev
docker build --platform=linux/amd64 -t "$CODEX_DOCKER_IMAGE_NAME" ./.devcontainer
docker run --platform=linux/amd64 --rm -it -e CARGO_TARGET_DIR=/workspace/codex-rs/target-amd64 -v "$PWD":/workspace -w /workspace/codex-rs "$CODEX_DOCKER_IMAGE_NAME"
```
Note that `/workspace/target` will contain the binaries built for your host platform, so we include `-e CARGO_TARGET_DIR=/workspace/codex-rs/target-amd64` in the `docker run` command so that the binaries built inside your container are written to a separate directory.
For arm64, specify `--platform=linux/arm64` instead for both `docker build` and `docker run`.
Currently, the `Dockerfile` works for both x64 and arm64 Linux, though you need to run `rustup target add x86_64-unknown-linux-musl` yourself to install the musl toolchain for x64.
## VS Code
VS Code recognizes the `devcontainer.json` file and gives you the option to develop Codex in a container. Currently, `devcontainer.json` builds and runs the `arm64` flavor of the container.
From the integrated terminal in VS Code, you can build either flavor of the `arm64` build (GNU or musl):
```shell
cargo build --target aarch64-unknown-linux-musl
cargo build --target aarch64-unknown-linux-gnu
```

29
.devcontainer/devcontainer.json Normal file

@@ -0,0 +1,29 @@
{
"name": "Codex",
"build": {
"dockerfile": "Dockerfile",
"context": "..",
"platform": "linux/arm64"
},
/* Force VS Code to run the container as arm64 in
case your host is x86 (or vice-versa). */
"runArgs": ["--platform=linux/arm64"],
"containerEnv": {
"RUST_BACKTRACE": "1",
"CARGO_TARGET_DIR": "${containerWorkspaceFolder}/codex-rs/target-arm64"
},
"remoteUser": "dev",
"customizations": {
"vscode": {
"settings": {
"terminal.integrated.defaultProfile.linux": "bash"
},
"extensions": [
"rust-lang.rust-analyzer"
],
}
}
}

1
.github/actions/codex/.gitignore vendored Normal file

@@ -0,0 +1 @@
/node_modules/


@@ -0,0 +1,8 @@
printWidth = 80
quoteProps = "consistent"
semi = true
tabWidth = 2
trailingComma = "all"
# Preserve existing behavior for markdown/text wrapping.
proseWrap = "preserve"

140
.github/actions/codex/README.md vendored Normal file

@@ -0,0 +1,140 @@
# openai/codex-action
`openai/codex-action` is a GitHub Action that facilitates the use of [Codex](https://github.com/openai/codex) on GitHub issues and pull requests. Using the action, you associate **labels** with prompts so that Codex runs with the appropriate prompt for the given context. Codex will respond by posting comments or creating PRs, whichever you specify!
Here is a sample workflow that uses `openai/codex-action`:
```yaml
name: Codex
on:
issues:
types: [opened, labeled]
pull_request:
branches: [main]
types: [labeled]
jobs:
codex:
if: ... # optional, but can be effective in conserving CI resources
runs-on: ubuntu-latest
# TODO(mbolin): Need to verify if/when `write` is necessary.
permissions:
contents: write
issues: write
pull-requests: write
steps:
# By default, Codex runs network disabled using --full-auto, so perform
# any setup that requires network (such as installing dependencies)
# before openai/codex-action.
- name: Checkout repository
uses: actions/checkout@v4
- name: Run Codex
uses: openai/codex-action@latest
with:
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
```
See sample usage in [`codex.yml`](../../workflows/codex.yml).
## Triggering the Action
Using the sample workflow above, we have:
```yaml
on:
issues:
types: [opened, labeled]
pull_request:
branches: [main]
types: [labeled]
```
which means our workflow will be triggered when any of the following events occur:
- a label is added to an issue
- a label is added to a pull request against the `main` branch
### Label-Based Triggers
To define a GitHub label that should trigger Codex, create a file named `.github/codex/labels/LABEL-NAME.md` in your repository where `LABEL-NAME` is the name of the label. The content of the file is the prompt template to use when the label is added (see more on [Prompt Template Variables](#prompt-template-variables) below).
For example, if the file `.github/codex/labels/codex-review.md` exists, then:
- Adding the `codex-review` label will trigger the workflow containing the `openai/codex-action` GitHub Action.
- When `openai/codex-action` starts, it will replace the `codex-review` label with `codex-review-in-progress`.
- When `openai/codex-action` is finished, it will replace the `codex-review-in-progress` label with `codex-review-completed`.
If Codex sees that either `codex-review-in-progress` or `codex-review-completed` is already present, it will not perform the action.
As determined by the [default config](./src/default-label-config.ts), Codex will act on the following labels by default:
- Adding the `codex-review` label to a pull request will have Codex review the PR and add it to the PR as a comment.
- Adding the `codex-triage` label to an issue will have Codex investigate the issue and report its findings as a comment.
- Adding the `codex-issue-fix` label to an issue will have Codex attempt to fix the issue and create a PR with the fix, if any.
## Action Inputs
The `openai/codex-action` GitHub Action takes the following inputs:
### `openai_api_key` (required)
Set your `OPENAI_API_KEY` as a [repository secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). See **Secrets and variables**, then **Actions**, in the settings for your GitHub repo.
Note that the secret name does not have to be `OPENAI_API_KEY`. For example, you might want to name it `CODEX_OPENAI_API_KEY` and then configure it on `openai/codex-action` as follows:
```yaml
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
```
### `github_token` (required)
This is required so that Codex can post a comment or create a PR. Set this value on the action as follows:
```yaml
github_token: ${{ secrets.GITHUB_TOKEN }}
```
### `codex_args`
A whitespace-delimited list of arguments to pass to Codex. Defaults to `--full-auto`, but if you want to override the default model to use `o3`:
```yaml
codex_args: "--full-auto --model o3"
```
For more complex configurations, use the `codex_home` input.
### `codex_home`
If set, the value to use for the `$CODEX_HOME` environment variable when running Codex. As explained [in the docs](https://github.com/openai/codex/tree/main/codex-rs#readme), this folder can contain the `config.toml` to configure Codex, custom instructions, and log files.
This should be a relative path within your repo.
## Prompt Template Variables
Label template files define prompts that will be populated and passed to Codex in response to certain events. All template variables are of the form `{CODEX_ACTION_...}` and the supported values are defined below.
### `CODEX_ACTION_ISSUE_TITLE`
If the action was triggered on a GitHub issue, this is the issue title.
Specifically it is read as the `.issue.title` from the `$GITHUB_EVENT_PATH`.
### `CODEX_ACTION_ISSUE_BODY`
If the action was triggered on a GitHub issue, this is the issue body.
Specifically it is read as the `.issue.body` from the `$GITHUB_EVENT_PATH`.
### `CODEX_ACTION_GITHUB_EVENT_PATH`
The value of the `$GITHUB_EVENT_PATH` environment variable, which is the path to the file that contains the JSON payload for the event that triggered the workflow. Codex can use `jq` to read only the fields of interest from this file.
### `CODEX_ACTION_PR_DIFF`
If the action was triggered on a pull request, this is the diff between the base and head commits of the PR. It is the output from `git diff`.
Note that the content of the diff could be quite large, so it is generally safer to point Codex at `CODEX_ACTION_GITHUB_EVENT_PATH` and let it decide how it wants to explore the change.

124
.github/actions/codex/action.yml vendored Normal file

@@ -0,0 +1,124 @@
name: "Codex [reusable action]"
description: "A reusable action that runs a Codex model."
inputs:
openai_api_key:
description: "The value to use as the OPENAI_API_KEY environment variable when running Codex."
required: true
trigger_phrase:
description: "Text to trigger Codex from a PR/issue body or comment."
required: false
default: ""
github_token:
description: "Token so Codex can comment on the PR or issue."
required: true
codex_args:
description: "A whitespace-delimited list of arguments to pass to Codex. Due to limitations in YAML, arguments with spaces are not supported. For more complex configurations, use the `codex_home` input."
required: false
default: "--config hide_agent_reasoning=true --full-auto"
codex_home:
description: "Value to use as the CODEX_HOME environment variable when running Codex."
required: false
codex_release_tag:
description: "The release tag of the Codex model to run."
required: false
default: "codex-rs-ca8e97fcbcb991e542b8689f2d4eab9d30c399d6-1-rust-v0.0.2505302325"
runs:
using: "composite"
steps:
# Do this in Bash so we do not even bother to install Bun if the sender does
# not have write access to the repo.
- name: Verify user has write access to the repo.
env:
GH_TOKEN: ${{ github.token }}
shell: bash
run: |
set -euo pipefail
PERMISSION=$(gh api \
"/repos/${GITHUB_REPOSITORY}/collaborators/${{ github.event.sender.login }}/permission" \
| jq -r '.permission')
if [[ "$PERMISSION" != "admin" && "$PERMISSION" != "write" ]]; then
exit 1
fi
- name: Download Codex
env:
GH_TOKEN: ${{ github.token }}
shell: bash
run: |
set -euo pipefail
# Determine OS/arch and corresponding Codex artifact name.
uname_s=$(uname -s)
uname_m=$(uname -m)
case "$uname_s" in
Linux*) os="linux" ;;
Darwin*) os="apple-darwin" ;;
*) echo "Unsupported operating system: $uname_s"; exit 1 ;;
esac
case "$uname_m" in
x86_64*) arch="x86_64" ;;
arm64*|aarch64*) arch="aarch64" ;;
*) echo "Unsupported architecture: $uname_m"; exit 1 ;;
esac
# linux builds differentiate between musl and gnu.
if [[ "$os" == "linux" ]]; then
if [[ "$arch" == "x86_64" ]]; then
triple="${arch}-unknown-linux-musl"
else
# Only other supported linux build is aarch64 gnu.
triple="${arch}-unknown-linux-gnu"
fi
else
# macOS
triple="${arch}-apple-darwin"
fi
# Note that if we start baking version numbers into the artifact name,
# we will need to update this action.yml file to match.
artifact="codex-exec-${triple}.tar.gz"
gh release download ${{ inputs.codex_release_tag }} --repo openai/codex \
--pattern "$artifact" --output - \
| tar xzO > /usr/local/bin/codex-exec
chmod +x /usr/local/bin/codex-exec
# Display Codex version to confirm binary integrity; ensure we point it
# at the checked-out repository via --cd so that any subsequent commands
# use the correct working directory.
codex-exec --cd "$GITHUB_WORKSPACE" --version
- name: Install Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: 1.2.11
- name: Install dependencies
shell: bash
run: |
cd ${{ github.action_path }}
bun install --production
- name: Run Codex
shell: bash
run: bun run ${{ github.action_path }}/src/main.ts
# Process args plus environment variables often have a max of 128 KiB,
# so we should fit within that limit?
env:
INPUT_CODEX_ARGS: ${{ inputs.codex_args || '' }}
INPUT_CODEX_HOME: ${{ inputs.codex_home || ''}}
INPUT_TRIGGER_PHRASE: ${{ inputs.trigger_phrase || '' }}
OPENAI_API_KEY: ${{ inputs.openai_api_key }}
GITHUB_TOKEN: ${{ inputs.github_token }}
GITHUB_EVENT_ACTION: ${{ github.event.action || '' }}
GITHUB_EVENT_LABEL_NAME: ${{ github.event.label.name || '' }}
GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number || '' }}
GITHUB_EVENT_ISSUE_BODY: ${{ github.event.issue.body || '' }}
GITHUB_EVENT_REVIEW_BODY: ${{ github.event.review.body || '' }}
GITHUB_EVENT_COMMENT_BODY: ${{ github.event.comment.body || '' }}

85
.github/actions/codex/bun.lock vendored Normal file

@@ -0,0 +1,85 @@
{
"lockfileVersion": 1,
"workspaces": {
"": {
"name": "codex-action",
"dependencies": {
"@actions/core": "^1.11.1",
"@actions/github": "^6.0.1",
},
"devDependencies": {
"@types/bun": "^1.2.11",
"@types/node": "^22.15.21",
"prettier": "^3.5.3",
"typescript": "^5.8.3",
},
},
},
"packages": {
"@actions/core": ["@actions/core@1.11.1", "", { "dependencies": { "@actions/exec": "^1.1.1", "@actions/http-client": "^2.0.1" } }, "sha512-hXJCSrkwfA46Vd9Z3q4cpEpHB1rL5NG04+/rbqW9d3+CSvtB1tYe8UTpAlixa1vj0m/ULglfEK2UKxMGxCxv5A=="],
"@actions/exec": ["@actions/exec@1.1.1", "", { "dependencies": { "@actions/io": "^1.0.1" } }, "sha512-+sCcHHbVdk93a0XT19ECtO/gIXoxvdsgQLzb2fE2/5sIZmWQuluYyjPQtrtTHdU1YzTZ7bAPN4sITq2xi1679w=="],
"@actions/github": ["@actions/github@6.0.1", "", { "dependencies": { "@actions/http-client": "^2.2.0", "@octokit/core": "^5.0.1", "@octokit/plugin-paginate-rest": "^9.2.2", "@octokit/plugin-rest-endpoint-methods": "^10.4.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "undici": "^5.28.5" } }, "sha512-xbZVcaqD4XnQAe35qSQqskb3SqIAfRyLBrHMd/8TuL7hJSz2QtbDwnNM8zWx4zO5l2fnGtseNE3MbEvD7BxVMw=="],
"@actions/http-client": ["@actions/http-client@2.2.3", "", { "dependencies": { "tunnel": "^0.0.6", "undici": "^5.25.4" } }, "sha512-mx8hyJi/hjFvbPokCg4uRd4ZX78t+YyRPtnKWwIl+RzNaVuFpQHfmlGVfsKEJN8LwTCvL+DfVgAM04XaHkm6bA=="],
"@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
"@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],
"@octokit/auth-token": ["@octokit/auth-token@4.0.0", "", {}, "sha512-tY/msAuJo6ARbK6SPIxZrPBms3xPbfwBrulZe0Wtr/DIY9lje2HeV1uoebShn6mx7SjCHif6EjMvoREj+gZ+SA=="],
"@octokit/core": ["@octokit/core@5.2.1", "", { "dependencies": { "@octokit/auth-token": "^4.0.0", "@octokit/graphql": "^7.1.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.0.0", "before-after-hook": "^2.2.0", "universal-user-agent": "^6.0.0" } }, "sha512-dKYCMuPO1bmrpuogcjQ8z7ICCH3FP6WmxpwC03yjzGfZhj9fTJg6+bS1+UAplekbN2C+M61UNllGOOoAfGCrdQ=="],
"@octokit/endpoint": ["@octokit/endpoint@9.0.6", "", { "dependencies": { "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-H1fNTMA57HbkFESSt3Y9+FBICv+0jFceJFPWDePYlR/iMGrwM5ph+Dd4XRQs+8X+PUFURLQgX9ChPfhJ/1uNQw=="],
"@octokit/graphql": ["@octokit/graphql@7.1.1", "", { "dependencies": { "@octokit/request": "^8.4.1", "@octokit/types": "^13.0.0", "universal-user-agent": "^6.0.0" } }, "sha512-3mkDltSfcDUoa176nlGoA32RGjeWjl3K7F/BwHwRMJUW/IteSa4bnSV8p2ThNkcIcZU2umkZWxwETSSCJf2Q7g=="],
"@octokit/openapi-types": ["@octokit/openapi-types@24.2.0", "", {}, "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg=="],
"@octokit/plugin-paginate-rest": ["@octokit/plugin-paginate-rest@9.2.2", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-u3KYkGF7GcZnSD/3UP0S7K5XUFT2FkOQdcfXZGZQPGv3lm4F2Xbf71lvjldr8c1H3nNbF+33cLEkWYbokGWqiQ=="],
"@octokit/plugin-rest-endpoint-methods": ["@octokit/plugin-rest-endpoint-methods@10.4.1", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-xV1b+ceKV9KytQe3zCVqjg+8GTGfDYwaT1ATU5isiUyVtlVAO3HNdzpS4sr4GBx4hxQ46s7ITtZrAsxG22+rVg=="],
"@octokit/request": ["@octokit/request@8.4.1", "", { "dependencies": { "@octokit/endpoint": "^9.0.6", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-qnB2+SY3hkCmBxZsR/MPCybNmbJe4KAlfWErXq+rBKkQJlbjdJeS85VI9r8UqeLYLvnAenU8Q1okM/0MBsAGXw=="],
"@octokit/request-error": ["@octokit/request-error@5.1.1", "", { "dependencies": { "@octokit/types": "^13.1.0", "deprecation": "^2.0.0", "once": "^1.4.0" } }, "sha512-v9iyEQJH6ZntoENr9/yXxjuezh4My67CBSu9r6Ve/05Iu5gNgnisNWOsoJHTP6k0Rr0+HQIpnH+kyammu90q/g=="],
"@octokit/types": ["@octokit/types@13.10.0", "", { "dependencies": { "@octokit/openapi-types": "^24.2.0" } }, "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA=="],
"@types/bun": ["@types/bun@1.2.13", "", { "dependencies": { "bun-types": "1.2.13" } }, "sha512-u6vXep/i9VBxoJl3GjZsl/BFIsvML8DfVDO0RYLEwtSZSp981kEO1V5NwRcO1CPJ7AmvpbnDCiMKo3JvbDEjAg=="],
"@types/node": ["@types/node@22.15.21", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-EV/37Td6c+MgKAbkcLG6vqZ2zEYHD7bvSrzqqs2RIhbA6w3x+Dqz8MZM3sP6kGTeLrdoOgKZe+Xja7tUB2DNkQ=="],
"before-after-hook": ["before-after-hook@2.2.3", "", {}, "sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ=="],
"bun-types": ["bun-types@1.2.13", "", { "dependencies": { "@types/node": "*" } }, "sha512-rRjA1T6n7wto4gxhAO/ErZEtOXyEZEmnIHQfl0Dt1QQSB4QV0iP6BZ9/YB5fZaHFQ2dwHFrmPaRQ9GGMX01k9Q=="],
"deprecation": ["deprecation@2.3.1", "", {}, "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"prettier": ["prettier@3.5.3", "", { "bin": { "prettier": "bin/prettier.cjs" } }, "sha512-QQtaxnoDJeAkDvDKWCLiwIXkTgRhwYDEQCghU9Z6q03iyek/rxRh/2lC3HB7P8sWT2xC/y5JDctPLBIGzHKbhw=="],
"tunnel": ["tunnel@0.0.6", "", {}, "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg=="],
"typescript": ["typescript@5.8.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="],
"undici": ["undici@5.29.0", "", { "dependencies": { "@fastify/busboy": "^2.0.0" } }, "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg=="],
"undici-types": ["undici-types@6.21.0", "", {}, "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ=="],
"universal-user-agent": ["universal-user-agent@6.0.1", "", {}, "sha512-yCzhz6FN2wU1NiiQRogkTQszlQSlpWaw8SvVegAc+bDxbzHgh1vX8uIe8OYyMH6DwH+sdTJsgMl36+mSMdRJIQ=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"@octokit/plugin-paginate-rest/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],
"@octokit/plugin-rest-endpoint-methods/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],
"@octokit/plugin-paginate-rest/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],
"@octokit/plugin-rest-endpoint-methods/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],
}
}

21
.github/actions/codex/package.json vendored Normal file

@@ -0,0 +1,21 @@
{
"name": "codex-action",
"version": "0.0.0",
"private": true,
"scripts": {
"format": "prettier --check src",
"format:fix": "prettier --write src",
"test": "bun test",
"typecheck": "tsc"
},
"dependencies": {
"@actions/core": "^1.11.1",
"@actions/github": "^6.0.1"
},
"devDependencies": {
"@types/bun": "^1.2.11",
"@types/node": "^22.15.21",
"prettier": "^3.5.3",
"typescript": "^5.8.3"
}
}

85
.github/actions/codex/src/add-reaction.ts vendored Normal file

@@ -0,0 +1,85 @@
import * as github from "@actions/github";
import type { EnvContext } from "./env-context";
/**
* Add an "eyes" reaction to the entity (issue, issue comment, or pull request
* review comment) that triggered the current Codex invocation.
*
* The purpose is to provide immediate feedback to the user similar to the
* *-in-progress label flow indicating that the bot has acknowledged the
* request and is working on it.
*
* We attempt to add the reaction best suited for the current GitHub event:
*
* • issues → POST /repos/{owner}/{repo}/issues/{issue_number}/reactions
* • issue_comment → POST /repos/{owner}/{repo}/issues/comments/{comment_id}/reactions
* • pull_request_review_comment → POST /repos/{owner}/{repo}/pulls/comments/{comment_id}/reactions
*
* If the specific target is unavailable (e.g. unexpected payload shape) we
* silently skip instead of failing the whole action because the reaction is
* merely cosmetic.
*/
export async function addEyesReaction(ctx: EnvContext): Promise<void> {
const octokit = ctx.getOctokit();
const { owner, repo } = github.context.repo;
const eventName = github.context.eventName;
try {
switch (eventName) {
case "issue_comment": {
const commentId = (github.context.payload as any)?.comment?.id;
if (commentId) {
await octokit.rest.reactions.createForIssueComment({
owner,
repo,
comment_id: commentId,
content: "eyes",
});
return;
}
break;
}
case "pull_request_review_comment": {
const commentId = (github.context.payload as any)?.comment?.id;
if (commentId) {
await octokit.rest.reactions.createForPullRequestReviewComment({
owner,
repo,
comment_id: commentId,
content: "eyes",
});
return;
}
break;
}
case "issues": {
const issueNumber = github.context.issue.number;
if (issueNumber) {
await octokit.rest.reactions.createForIssue({
owner,
repo,
issue_number: issueNumber,
content: "eyes",
});
return;
}
break;
}
default: {
// Fallback: try to react to the issue/PR if we have a number.
const issueNumber = github.context.issue.number;
if (issueNumber) {
await octokit.rest.reactions.createForIssue({
owner,
repo,
issue_number: issueNumber,
content: "eyes",
});
}
}
}
} catch (error) {
// Do not fail the action if reaction creation fails; log and continue.
console.warn(`Failed to add \"eyes\" reaction: ${error}`);
}
}

53
.github/actions/codex/src/comment.ts vendored Normal file

@@ -0,0 +1,53 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";
/**
* Handle `issue_comment` and `pull_request_review_comment` events once we know
* the action is supported.
*/
export async function onComment(ctx: EnvContext): Promise<void> {
const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
if (!triggerPhrase) {
console.warn("Empty trigger phrase: skipping.");
return;
}
// Attempt to get the body of the comment from the environment. Depending on
// the event type either `GITHUB_EVENT_COMMENT_BODY` (issue & PR comments) or
// `GITHUB_EVENT_REVIEW_BODY` (PR reviews) is set.
const commentBody =
ctx.tryGetNonEmpty("GITHUB_EVENT_COMMENT_BODY") ??
ctx.tryGetNonEmpty("GITHUB_EVENT_REVIEW_BODY") ??
ctx.tryGetNonEmpty("GITHUB_EVENT_ISSUE_BODY");
if (!commentBody) {
console.warn("Comment body not found in environment: skipping.");
return;
}
// Check if the trigger phrase is present.
if (!commentBody.includes(triggerPhrase)) {
console.log(
`Trigger phrase '${triggerPhrase}' not found: nothing to do for this comment.`,
);
return;
}
// Derive the prompt by removing the trigger phrase. Remove only the first
// occurrence to keep any additional occurrences that might be meaningful.
const prompt = commentBody.replace(triggerPhrase, "").trim();
if (prompt.length === 0) {
console.warn("Prompt is empty after removing trigger phrase: skipping");
return;
}
// Provide immediate feedback that we are working on the request.
await addEyesReaction(ctx);
// Run Codex and post the response as a new comment.
const lastMessage = await runCodex(prompt, ctx);
await postComment(lastMessage, ctx);
}

11
.github/actions/codex/src/config.ts vendored Normal file

@@ -0,0 +1,11 @@
import { readdirSync, statSync } from "fs";
import * as path from "path";
export interface Config {
labels: Record<string, LabelConfig>;
}
export interface LabelConfig {
/** Returns the prompt template. */
getPromptTemplate(): string;
}

44
.github/actions/codex/src/default-label-config.ts vendored Normal file

@@ -0,0 +1,44 @@
import type { Config } from "./config";
export function getDefaultConfig(): Config {
return {
labels: {
"codex-investigate-issue": {
getPromptTemplate: () =>
`
Troubleshoot whether the reported issue is valid.
Provide a concise and respectful comment summarizing the findings.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}
`.trim(),
},
"codex-code-review": {
getPromptTemplate: () =>
`
Review this PR and respond with a very concise final message, formatted in Markdown.
There should be a summary of the changes (1-2 sentences) and a few bullet points if necessary.
Then provide the **review** (1-2 sentences plus bullet points, friendly tone).
{CODEX_ACTION_GITHUB_EVENT_PATH} contains the JSON that triggered this GitHub workflow. It contains the \`base\` and \`head\` refs that define this PR. Both refs are available locally.
`.trim(),
},
"codex-attempt-fix": {
getPromptTemplate: () =>
`
Attempt to solve the reported issue.
If a code change is required, create a new branch, commit the fix, and open a pull-request that resolves the problem.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}
`.trim(),
},
},
};
}

116
.github/actions/codex/src/env-context.ts vendored Normal file

@@ -0,0 +1,116 @@
/*
* Centralised access to environment variables used by the Codex GitHub
* Action.
*
* To enable proper unit-testing we avoid reading from `process.env` at module
* initialisation time. Instead, an `EnvContext` object is created (usually from
* the real `process.env`) and passed around explicitly or, where that is not
* yet practical, imported as the shared `defaultContext` singleton. Tests can
* create their own context backed by a stubbed map of variables without having
* to mutate global state.
*/
import { fail } from "./fail";
import * as github from "@actions/github";
export interface EnvContext {
/**
* Return the value for a given environment variable or terminate the action
* via `fail` if it is missing / empty.
*/
get(name: string): string;
/**
* Attempt to read an environment variable. Returns the value when present;
* otherwise returns undefined (does not call `fail`).
*/
tryGet(name: string): string | undefined;
/**
* Attempt to read an environment variable. Returns non-empty string value or
* null if unset or empty string.
*/
tryGetNonEmpty(name: string): string | null;
/**
* Return a memoised Octokit instance authenticated via the token resolved
* from the provided argument (when defined) or the environment variables
* `GITHUB_TOKEN`/`GH_TOKEN`.
*
* Subsequent calls return the same cached instance to avoid spawning
* multiple REST clients within a single action run.
*/
getOctokit(token?: string): ReturnType<typeof github.getOctokit>;
}
/** Internal helper *not* exported. */
function _getRequiredEnv(
name: string,
env: Record<string, string | undefined>,
): string | undefined {
const value = env[name];
// Avoid leaking secrets into logs while still logging non-secret variables.
if (name.endsWith("KEY") || name.endsWith("TOKEN")) {
if (value) {
console.log(`value for ${name} was found`);
}
} else {
console.log(`${name}=${value}`);
}
return value;
}
/** Create a context backed by the supplied environment map (defaults to `process.env`). */
export function createEnvContext(
env: Record<string, string | undefined> = process.env,
): EnvContext {
// Lazily instantiated Octokit client shared across this context.
let cachedOctokit: ReturnType<typeof github.getOctokit> | null = null;
return {
get(name: string): string {
const value = _getRequiredEnv(name, env);
if (value == null) {
fail(`Missing required environment variable: ${name}`);
}
return value;
},
tryGet(name: string): string | undefined {
return _getRequiredEnv(name, env);
},
tryGetNonEmpty(name: string): string | null {
const value = _getRequiredEnv(name, env);
return value == null || value === "" ? null : value;
},
getOctokit(token?: string) {
if (cachedOctokit) {
return cachedOctokit;
}
// Determine the token to authenticate with.
const githubToken = token ?? env["GITHUB_TOKEN"] ?? env["GH_TOKEN"];
if (!githubToken) {
fail(
"Unable to locate a GitHub token. `github_token` should have been set on the action.",
);
}
cachedOctokit = github.getOctokit(githubToken!);
return cachedOctokit;
},
};
}
/**
* Shared context built from the actual `process.env`. Production code that is
* not yet refactored to receive a context explicitly may import and use this
* singleton. Tests should avoid the singleton and instead pass their own
* context to the functions they exercise.
*/
export const defaultContext: EnvContext = createEnvContext();

4
.github/actions/codex/src/fail.ts vendored Normal file

@@ -0,0 +1,4 @@
export function fail(message: string): never {
console.error(message);
process.exit(1);
}

149
.github/actions/codex/src/git-helpers.ts vendored Normal file

@@ -0,0 +1,149 @@
import { spawnSync } from "child_process";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";
function runGit(args: string[], silent = true): string {
console.info(`Running git ${args.join(" ")}`);
const res = spawnSync("git", args, {
encoding: "utf8",
stdio: silent ? ["ignore", "pipe", "pipe"] : "inherit",
});
if (res.error) {
throw res.error;
}
if (res.status !== 0) {
// Surface stderr in the thrown error so the caller can see why git failed.
throw new Error(
`git ${args.join(" ")} failed with code ${res.status}: ${res.stderr}`,
);
}
return res.stdout.trim();
}
function stageAllChanges() {
runGit(["add", "-A"]);
}
function hasStagedChanges(): boolean {
const res = spawnSync("git", ["diff", "--cached", "--quiet", "--exit-code"]);
return res.status !== 0;
}
function ensureOnBranch(
issueNumber: number,
protectedBranches: string[],
suggestedSlug?: string,
): string {
let branch = "";
try {
branch = runGit(["symbolic-ref", "--short", "-q", "HEAD"]);
} catch {
branch = "";
}
// If detached HEAD or on a protected branch, create a new branch.
if (!branch || protectedBranches.includes(branch)) {
if (suggestedSlug) {
const safeSlug = suggestedSlug
.toLowerCase()
.replace(/[^\w\s-]/g, "")
.trim()
.replace(/\s+/g, "-");
branch = `codex-fix-${issueNumber}-${safeSlug}`;
} else {
branch = `codex-fix-${issueNumber}-${Date.now()}`;
}
runGit(["switch", "-c", branch]);
}
return branch;
}
function commitIfNeeded(issueNumber: number) {
if (hasStagedChanges()) {
runGit([
"commit",
"-m",
`fix: automated fix for #${issueNumber} via Codex`,
]);
}
}
function pushBranch(branch: string, githubToken: string, ctx: EnvContext) {
const repoSlug = ctx.get("GITHUB_REPOSITORY"); // owner/repo
const remoteUrl = `https://x-access-token:${githubToken}@github.com/${repoSlug}.git`;
runGit(["push", "--force-with-lease", "-u", remoteUrl, `HEAD:${branch}`]);
}
/**
* If this returns a string, it is the URL of the created PR.
*/
export async function maybePublishPRForIssue(
issueNumber: number,
lastMessage: string,
ctx: EnvContext,
): Promise<string | undefined> {
// Only proceed if GITHUB_TOKEN available.
const githubToken =
ctx.tryGetNonEmpty("GITHUB_TOKEN") ?? ctx.tryGetNonEmpty("GH_TOKEN");
if (!githubToken) {
console.warn("No GitHub token - skipping PR creation.");
return undefined;
}
// Print `git status` for debugging.
runGit(["status"]);
// Stage any remaining changes so they can be committed and pushed.
stageAllChanges();
const octokit = ctx.getOctokit(githubToken);
const { owner, repo } = github.context.repo;
// Determine default branch to treat as protected.
let defaultBranch = "main";
try {
const repoInfo = await octokit.rest.repos.get({ owner, repo });
defaultBranch = repoInfo.data.default_branch ?? "main";
} catch (e) {
console.warn(`Failed to get default branch, assuming 'main': ${e}`);
}
const sanitizedMessage = lastMessage.replace(/\u2022/g, "-");
const [summaryLine] = sanitizedMessage.split(/\r?\n/);
const branch = ensureOnBranch(issueNumber, [defaultBranch, "master"], summaryLine);
commitIfNeeded(issueNumber);
pushBranch(branch, githubToken, ctx);
// Try to find existing PR for this branch
const headParam = `${owner}:${branch}`;
const existing = await octokit.rest.pulls.list({
owner,
repo,
head: headParam,
state: "open",
});
if (existing.data.length > 0) {
return existing.data[0].html_url;
}
// Determine base branch (default to main)
let baseBranch = "main";
try {
const repoInfo = await octokit.rest.repos.get({ owner, repo });
baseBranch = repoInfo.data.default_branch ?? "main";
} catch (e) {
console.warn(`Failed to get default branch, assuming 'main': ${e}`);
}
const pr = await octokit.rest.pulls.create({
owner,
repo,
title: summaryLine,
head: branch,
base: baseBranch,
body: sanitizedMessage,
});
return pr.data.html_url;
}

16
.github/actions/codex/src/git-user.ts vendored Normal file

@@ -0,0 +1,16 @@
export function setGitHubActionsUser(): void {
const commands = [
["git", "config", "--global", "user.name", "github-actions[bot]"],
[
"git",
"config",
"--global",
"user.email",
"41898282+github-actions[bot]@users.noreply.github.com",
],
];
for (const command of commands) {
Bun.spawnSync(command);
}
}


@@ -0,0 +1,11 @@
import * as pathMod from "path";
import { EnvContext } from "./env-context";
export function resolveWorkspacePath(path: string, ctx: EnvContext): string {
if (pathMod.isAbsolute(path)) {
return path;
} else {
const workspace = ctx.get("GITHUB_WORKSPACE");
return pathMod.join(workspace, path);
}
}

56
.github/actions/codex/src/load-config.ts vendored Normal file

@@ -0,0 +1,56 @@
import type { Config, LabelConfig } from "./config";
import { getDefaultConfig } from "./default-label-config";
import { readFileSync, readdirSync, statSync } from "fs";
import * as path from "path";
/**
* Build an in-memory configuration object by scanning the repository for
* Markdown templates located in `.github/codex/labels`.
*
* Each `*.md` file in that directory represents a label that can trigger the
* Codex GitHub Action. The filename **without** the extension is interpreted
* as the label name, e.g. `codex-review.md` ➜ `codex-review`.
*
* For every such label we derive the corresponding `doneLabel` by appending
* the suffix `-completed`.
*/
export function loadConfig(workspace: string): Config {
const labelsDir = path.join(workspace, ".github", "codex", "labels");
let entries: string[];
try {
entries = readdirSync(labelsDir);
} catch {
// If the directory is missing, return the default configuration.
return getDefaultConfig();
}
const labels: Record<string, LabelConfig> = {};
for (const entry of entries) {
if (!entry.endsWith(".md")) {
continue;
}
const fullPath = path.join(labelsDir, entry);
if (!statSync(fullPath).isFile()) {
continue;
}
const labelName = entry.slice(0, -3); // trim ".md"
labels[labelName] = new FileLabelConfig(fullPath);
}
return { labels };
}
class FileLabelConfig implements LabelConfig {
constructor(private readonly promptPath: string) {}
getPromptTemplate(): string {
return readFileSync(this.promptPath, "utf8");
}
}
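// Example (illustrative): a checkout containing
//   .github/codex/labels/codex-attempt.md
//   .github/codex/labels/codex-triage.md
// yields labels "codex-attempt" and "codex-triage", each backed by a
// FileLabelConfig that lazily reads its prompt template from the matching file.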

.github/actions/codex/src/main.ts (vendored, new executable file, +80)

@@ -0,0 +1,80 @@
#!/usr/bin/env bun
import type { Config } from "./config";
import { defaultContext, EnvContext } from "./env-context";
import { loadConfig } from "./load-config";
import { setGitHubActionsUser } from "./git-user";
import { onLabeled } from "./process-label";
import { ensureBaseAndHeadCommitsForPRAreAvailable } from "./prompt-template";
import { performAdditionalValidation } from "./verify-inputs";
import { onComment } from "./comment";
import { onReview } from "./review";
async function main(): Promise<void> {
const ctx: EnvContext = defaultContext;
// Build the configuration dynamically by scanning `.github/codex/labels`.
const GITHUB_WORKSPACE = ctx.get("GITHUB_WORKSPACE");
const config: Config = loadConfig(GITHUB_WORKSPACE);
// Optionally perform additional validation of prompt template files.
performAdditionalValidation(config, GITHUB_WORKSPACE);
const GITHUB_EVENT_NAME = ctx.get("GITHUB_EVENT_NAME");
const GITHUB_EVENT_ACTION = ctx.get("GITHUB_EVENT_ACTION");
// Set user.name and user.email to a bot before Codex runs, just in case it
// creates a commit.
setGitHubActionsUser();
switch (GITHUB_EVENT_NAME) {
case "issues": {
if (GITHUB_EVENT_ACTION === "labeled") {
await onLabeled(config, ctx);
return;
} else if (GITHUB_EVENT_ACTION === "opened") {
await onComment(ctx);
return;
}
break;
}
case "issue_comment": {
if (GITHUB_EVENT_ACTION === "created") {
await onComment(ctx);
return;
}
break;
}
case "pull_request": {
if (GITHUB_EVENT_ACTION === "labeled") {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
await onLabeled(config, ctx);
return;
}
break;
}
case "pull_request_review": {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (GITHUB_EVENT_ACTION === "submitted") {
await onReview(ctx);
return;
}
break;
}
case "pull_request_review_comment": {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (GITHUB_EVENT_ACTION === "created") {
await onComment(ctx);
return;
}
break;
}
}
console.warn(
`Unsupported action '${GITHUB_EVENT_ACTION}' for event '${GITHUB_EVENT_NAME}'.`,
);
}
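// Example (illustrative): adding a label to an issue arrives as
// GITHUB_EVENT_NAME="issues" with GITHUB_EVENT_ACTION="labeled" and routes to
// onLabeled(config, ctx); a new PR review comment arrives as
// "pull_request_review_comment" / "created" and routes to onComment(ctx).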
main();

.github/actions/codex/src/post-comment.ts (vendored, new file, +62)

@@ -0,0 +1,62 @@
import { fail } from "./fail";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";
/**
* Post a comment to the issue / pull request currently in scope.
*
* Provide the environment context so that token lookup (inside getOctokit) does
* not rely on global state.
*/
export async function postComment(
commentBody: string,
ctx: EnvContext,
): Promise<void> {
// Append a footer with a link back to the workflow run, if available.
const footer = buildWorkflowRunFooter(ctx);
const bodyWithFooter = footer ? `${commentBody}${footer}` : commentBody;
const octokit = ctx.getOctokit();
console.info("Got Octokit instance for posting comment");
const { owner, repo } = github.context.repo;
const issueNumber = github.context.issue.number;
if (!issueNumber) {
console.warn(
"No issue or pull_request number found in GitHub context; skipping comment creation.",
);
return;
}
try {
console.info("Calling octokit.rest.issues.createComment()");
await octokit.rest.issues.createComment({
owner,
repo,
issue_number: issueNumber,
body: bodyWithFooter,
});
} catch (error) {
fail(`Failed to create comment via GitHub API: ${error}`);
}
}
/**
* Helper to build a Markdown fragment linking back to the workflow run that
* generated the current comment. Returns `undefined` if required environment
* variables are missing, e.g. when running outside of GitHub Actions, so we
* can gracefully skip the footer in those cases.
*/
function buildWorkflowRunFooter(ctx: EnvContext): string | undefined {
const serverUrl =
ctx.tryGetNonEmpty("GITHUB_SERVER_URL") ?? "https://github.com";
const repository = ctx.tryGetNonEmpty("GITHUB_REPOSITORY");
const runId = ctx.tryGetNonEmpty("GITHUB_RUN_ID");
if (!repository || !runId) {
return undefined;
}
const url = `${serverUrl}/${repository}/actions/runs/${runId}`;
return `\n\n---\n*[_View workflow run_](${url})*`;
}
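// Example rendered footer (illustrative run URL):
//
//   ---
//   *[_View workflow run_](https://github.com/openai/codex/actions/runs/123456789)*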

.github/actions/codex/src/process-label.ts (vendored, new file, +195)

@@ -0,0 +1,195 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { renderPromptTemplate } from "./prompt-template";
import { postComment } from "./post-comment";
import { runCodex } from "./run-codex";
import * as github from "@actions/github";
import { Config, LabelConfig } from "./config";
import { maybePublishPRForIssue } from "./git-helpers";
export async function onLabeled(
config: Config,
ctx: EnvContext,
): Promise<void> {
const GITHUB_EVENT_LABEL_NAME = ctx.get("GITHUB_EVENT_LABEL_NAME");
const labelConfig = config.labels[GITHUB_EVENT_LABEL_NAME] as
| LabelConfig
| undefined;
if (!labelConfig) {
fail(
`Label \`${GITHUB_EVENT_LABEL_NAME}\` not found in config: ${JSON.stringify(config)}`,
);
}
await processLabelConfig(ctx, GITHUB_EVENT_LABEL_NAME, labelConfig);
}
/**
* Wrapper that handles `-in-progress` and `-completed` semantics around the core lint/fix/review
* processing. It will:
*
* - Skip execution if the `-in-progress` or `-completed` label is already present.
* - Mark the PR/issue as `-in-progress`.
* - After successful execution, mark the PR/issue as `-completed`.
*/
async function processLabelConfig(
ctx: EnvContext,
label: string,
labelConfig: LabelConfig,
): Promise<void> {
const octokit = ctx.getOctokit();
const { owner, repo, issueNumber, labelNames } =
await getCurrentLabels(octokit);
const inProgressLabel = `${label}-in-progress`;
const completedLabel = `${label}-completed`;
for (const markerLabel of [inProgressLabel, completedLabel]) {
if (labelNames.includes(markerLabel)) {
console.log(
`Label '${markerLabel}' already present on issue/PR #${issueNumber}. Skipping Codex action.`,
);
// Clean up: remove the stale marker label so a later re-label can run again.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
remove: markerLabel,
});
return;
}
}
// Mark the PR/issue as in progress.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
add: inProgressLabel,
remove: label,
});
// Run the core Codex processing.
await processLabel(ctx, label, labelConfig);
// Mark the PR/issue as completed.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
add: completedLabel,
remove: inProgressLabel,
});
}
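// Example lifecycle for a `codex-attempt` label (illustrative):
//   1. A user adds `codex-attempt`; it is swapped for `codex-attempt-in-progress`.
//   2. Codex runs; on success `codex-attempt-in-progress` is swapped for
//      `codex-attempt-completed`.
//   3. Re-adding the label while either marker is present skips the run and
//      clears that marker.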
async function processLabel(
ctx: EnvContext,
label: string,
labelConfig: LabelConfig,
): Promise<void> {
const template = labelConfig.getPromptTemplate();
const populatedTemplate = await renderPromptTemplate(template, ctx);
// Always run Codex and post the resulting message as a comment.
let commentBody = await runCodex(populatedTemplate, ctx);
// Current heuristic: only try to create a PR if "attempt" or "fix" is in the
// label name. (Yes, we plan to evolve this.)
if (label.indexOf("fix") !== -1 || label.indexOf("attempt") !== -1) {
console.info(`label ${label} indicates we should attempt to create a PR`);
const prUrl = await maybeFixIssue(ctx, commentBody);
if (prUrl) {
commentBody += `\n\n---\nOpened pull request: ${prUrl}`;
}
} else {
console.info(
`label ${label} does not indicate we should attempt to create a PR`,
);
}
await postComment(commentBody, ctx);
}
async function maybeFixIssue(
ctx: EnvContext,
lastMessage: string,
): Promise<string | undefined> {
// Attempt to create a PR out of any changes Codex produced.
const issueNumber = github.context.issue.number!; // exists for issues triggering this path
try {
return await maybePublishPRForIssue(issueNumber, lastMessage, ctx);
} catch (e) {
console.warn(`Failed to publish PR: ${e}`);
}
}
async function getCurrentLabels(
octokit: ReturnType<typeof github.getOctokit>,
): Promise<{
owner: string;
repo: string;
issueNumber: number;
labelNames: Array<string>;
}> {
const { owner, repo } = github.context.repo;
const issueNumber = github.context.issue.number;
if (!issueNumber) {
fail("No issue or pull_request number found in GitHub context.");
}
const { data: issueData } = await octokit.rest.issues.get({
owner,
repo,
issue_number: issueNumber,
});
const labelNames =
issueData.labels?.map((label: any) =>
typeof label === "string" ? label : label.name,
) ?? [];
return { owner, repo, issueNumber, labelNames };
}
async function addAndRemoveLabels(
octokit: ReturnType<typeof github.getOctokit>,
opts: {
owner: string;
repo: string;
issueNumber: number;
add?: string;
remove?: string;
},
): Promise<void> {
const { owner, repo, issueNumber, add, remove } = opts;
if (add) {
try {
await octokit.rest.issues.addLabels({
owner,
repo,
issue_number: issueNumber,
labels: [add],
});
} catch (error) {
console.warn(`Failed to add label '${add}': ${error}`);
}
}
if (remove) {
try {
await octokit.rest.issues.removeLabel({
owner,
repo,
issue_number: issueNumber,
name: remove,
});
} catch (error) {
console.warn(`Failed to remove label '${remove}': ${error}`);
}
}
}

.github/actions/codex/src/prompt-template.ts (vendored, new file, +284)

@@ -0,0 +1,284 @@
/*
* Utilities to render Codex prompt templates.
*
* A template is a Markdown (or plain-text) file that may contain one or more
* placeholders of the form `{CODEX_ACTION_<NAME>}`. At runtime these
* placeholders are substituted with dynamically generated content. Each
* placeholder is resolved **exactly once** even if it appears multiple times
* in the same template.
*/
import { readFile } from "fs/promises";
import { EnvContext } from "./env-context";
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
/**
* Lazily caches parsed `$GITHUB_EVENT_PATH` contents keyed by the file path so
* we only hit the filesystem once per unique event payload.
*/
const githubEventDataCache: Map<string, Promise<any>> = new Map();
function getGitHubEventData(ctx: EnvContext): Promise<any> {
const eventPath = ctx.get("GITHUB_EVENT_PATH");
let cached = githubEventDataCache.get(eventPath);
if (!cached) {
cached = readFile(eventPath, "utf8").then((raw) => JSON.parse(raw));
githubEventDataCache.set(eventPath, cached);
}
return cached;
}
async function runCommand(args: Array<string>): Promise<string> {
const result = Bun.spawnSync(args, {
stdout: "pipe",
stderr: "pipe",
});
if (result.success) {
return result.stdout.toString();
}
console.error(`Error running ${JSON.stringify(args)}: ${result.stderr}`);
return "";
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
// Regex that captures the variable name without the surrounding { } braces.
const VAR_REGEX = /\{(CODEX_ACTION_[A-Z0-9_]+)\}/g;
// Cache individual placeholder values so each one is resolved at most once per
// process even if many templates reference it.
const placeholderCache: Map<string, Promise<string>> = new Map();
/**
* Parse a template string, resolve all placeholders and return the rendered
* result.
*/
export async function renderPromptTemplate(
template: string,
ctx: EnvContext,
): Promise<string> {
// ---------------------------------------------------------------------
// 1) Gather all *unique* placeholders present in the template.
// ---------------------------------------------------------------------
const variables = new Set<string>();
for (const match of template.matchAll(VAR_REGEX)) {
variables.add(match[1]);
}
// ---------------------------------------------------------------------
// 2) Kick off (or reuse) async resolution for each variable.
// ---------------------------------------------------------------------
for (const variable of variables) {
if (!placeholderCache.has(variable)) {
placeholderCache.set(variable, resolveVariable(variable, ctx));
}
}
// ---------------------------------------------------------------------
// 3) Await completion so we can perform a simple synchronous replace below.
// ---------------------------------------------------------------------
const resolvedEntries: [string, string][] = [];
for (const [key, promise] of placeholderCache.entries()) {
resolvedEntries.push([key, await promise]);
}
const resolvedMap = new Map<string, string>(resolvedEntries);
// ---------------------------------------------------------------------
// 4) Replace each occurrence. We use replace with a callback to ensure
// correct substitution even if variable names overlap (they shouldn't,
// but better safe than sorry).
// ---------------------------------------------------------------------
return template.replace(VAR_REGEX, (_, varName: string) => {
return resolvedMap.get(varName) ?? "";
});
}
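// Example (illustrative): rendering the template
//   "Fix the following issue:\n### {CODEX_ACTION_ISSUE_TITLE}\n{CODEX_ACTION_ISSUE_BODY}"
// resolves both placeholders once from the event payload and might yield
//   "Fix the following issue:\n### Crash on startup\nSteps to reproduce: ..."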
export async function ensureBaseAndHeadCommitsForPRAreAvailable(
ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
const prShas = await getPrShas(ctx);
if (prShas == null) {
console.warn("Unable to resolve PR branches");
return null;
}
const event = await getGitHubEventData(ctx);
const pr = event.pull_request;
if (!pr) {
console.warn("event.pull_request is not defined - unexpected");
return null;
}
const workspace = ctx.get("GITHUB_WORKSPACE");
// Refs (branch names)
const baseRef: string | undefined = pr.base?.ref;
const headRef: string | undefined = pr.head?.ref;
// Clone URLs
const baseRemoteUrl: string | undefined = pr.base?.repo?.clone_url;
const headRemoteUrl: string | undefined = pr.head?.repo?.clone_url;
if (!baseRef || !headRef || !baseRemoteUrl || !headRemoteUrl) {
console.warn(
"Missing PR ref or remote URL information - cannot fetch commits",
);
return null;
}
// Ensure we have the base branch.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"origin",
baseRef,
]);
// Ensure we have the head branch.
if (headRemoteUrl === baseRemoteUrl) {
// Same repository: the commit is available from `origin`.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"origin",
headRef,
]);
} else {
// Fork: make sure a `pr` remote exists that points at the fork. Attempting
// to add a remote that already exists causes git to error, so we swallow
// any non-zero exit codes from that specific command.
await runCommand([
"git",
"-C",
workspace,
"remote",
"add",
"pr",
headRemoteUrl,
]);
// Whether adding succeeded or the remote already existed, attempt to fetch
// the head ref from the `pr` remote.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"pr",
headRef,
]);
}
return prShas;
}
// ---------------------------------------------------------------------------
// Internal helpers still exported for use by other modules.
// ---------------------------------------------------------------------------
export async function resolvePrDiff(ctx: EnvContext): Promise<string> {
const prShas = await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (prShas == null) {
console.warn("Unable to resolve PR branches");
return "";
}
const workspace = ctx.get("GITHUB_WORKSPACE");
const { baseSha, headSha } = prShas;
return runCommand([
"git",
"-C",
workspace,
"diff",
"--color=never",
`${baseSha}..${headSha}`,
]);
}
// ---------------------------------------------------------------------------
// Placeholder resolution
// ---------------------------------------------------------------------------
async function resolveVariable(name: string, ctx: EnvContext): Promise<string> {
switch (name) {
case "CODEX_ACTION_ISSUE_TITLE": {
const event = await getGitHubEventData(ctx);
const issue = event.issue ?? event.pull_request;
return issue?.title ?? "";
}
case "CODEX_ACTION_ISSUE_BODY": {
const event = await getGitHubEventData(ctx);
const issue = event.issue ?? event.pull_request;
return issue?.body ?? "";
}
case "CODEX_ACTION_GITHUB_EVENT_PATH": {
return ctx.get("GITHUB_EVENT_PATH");
}
case "CODEX_ACTION_BASE_REF": {
const event = await getGitHubEventData(ctx);
return event?.pull_request?.base?.ref ?? "";
}
case "CODEX_ACTION_HEAD_REF": {
const event = await getGitHubEventData(ctx);
return event?.pull_request?.head?.ref ?? "";
}
case "CODEX_ACTION_PR_DIFF": {
return resolvePrDiff(ctx);
}
// -------------------------------------------------------------------
// Add new template variables here.
// -------------------------------------------------------------------
default: {
// Unknown variable: leave it blank to avoid leaking placeholders to the
// final prompt. The alternative would be to `fail()` here, but silently
// ignoring unknown placeholders is more forgiving and better matches the
// behaviour of typical template engines.
console.warn(`Unknown template variable: ${name}`);
return "";
}
}
}
async function getPrShas(
ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
const event = await getGitHubEventData(ctx);
const pr = event.pull_request;
if (!pr) {
console.warn("event.pull_request is not defined");
return null;
}
// Prefer explicit SHAs if available to avoid relying on local branch names.
const baseSha: string | undefined = pr.base?.sha;
const headSha: string | undefined = pr.head?.sha;
if (!baseSha || !headSha) {
console.warn("one of base or head is not defined on event.pull_request");
return null;
}
return { baseSha, headSha };
}

.github/actions/codex/src/review.ts (vendored, new file, +42)

@@ -0,0 +1,42 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";
/**
* Handle `pull_request_review` events. We treat the review body the same way
* as a normal comment.
*/
export async function onReview(ctx: EnvContext): Promise<void> {
const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
if (!triggerPhrase) {
console.warn("Empty trigger phrase: skipping.");
return;
}
const reviewBody = ctx.tryGet("GITHUB_EVENT_REVIEW_BODY");
if (!reviewBody) {
console.warn("Review body not found in environment: skipping.");
return;
}
if (!reviewBody.includes(triggerPhrase)) {
console.log(
`Trigger phrase '${triggerPhrase}' not found: nothing to do for this review.`,
);
return;
}
const prompt = reviewBody.replace(triggerPhrase, "").trim();
if (prompt.length === 0) {
console.warn("Prompt is empty after removing trigger phrase: skipping.");
return;
}
await addEyesReaction(ctx);
const lastMessage = await runCodex(prompt, ctx);
await postComment(lastMessage, ctx);
}
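// Example (illustrative): with INPUT_TRIGGER_PHRASE="@codex", a review body of
//   "@codex please double-check the error handling in run-codex.ts"
// produces the prompt "please double-check the error handling in run-codex.ts".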

.github/actions/codex/src/run-codex.ts (vendored, new file, +56)

@@ -0,0 +1,56 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { tmpdir } from "os";
import { join } from "node:path";
import { readFile, mkdtemp } from "fs/promises";
import { resolveWorkspacePath } from "./github-workspace";
/**
* Runs the Codex CLI with the provided prompt and returns the output written
* to the "last message" file.
*/
export async function runCodex(
prompt: string,
ctx: EnvContext,
): Promise<string> {
const OPENAI_API_KEY = ctx.get("OPENAI_API_KEY");
const tempDirPath = await mkdtemp(join(tmpdir(), "codex-"));
const lastMessageOutput = join(tempDirPath, "codex-prompt.md");
const args = ["/usr/local/bin/codex-exec"];
const inputCodexArgs = ctx.tryGet("INPUT_CODEX_ARGS")?.trim();
if (inputCodexArgs) {
args.push(...inputCodexArgs.split(/\s+/));
}
args.push("--output-last-message", lastMessageOutput, prompt);
const env: Record<string, string> = { ...process.env, OPENAI_API_KEY };
const INPUT_CODEX_HOME = ctx.tryGet("INPUT_CODEX_HOME");
if (INPUT_CODEX_HOME) {
env.CODEX_HOME = resolveWorkspacePath(INPUT_CODEX_HOME, ctx);
}
console.log(`Running Codex: ${JSON.stringify(args)}`);
const result = Bun.spawnSync(args, {
stdout: "inherit",
stderr: "inherit",
env,
});
if (!result.success) {
fail(`Codex failed: see above for details.`);
}
// Read the output generated by Codex.
let lastMessage: string;
try {
lastMessage = await readFile(lastMessageOutput, "utf8");
} catch (err) {
fail(`Failed to read Codex output at '${lastMessageOutput}': ${err}`);
}
return lastMessage;
}
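// Example invocation (illustrative): with INPUT_CODEX_ARGS="--model o3" the
// child process is roughly:
//   /usr/local/bin/codex-exec --model o3 --output-last-message <tmpdir>/codex-prompt.md "<prompt>"
// Note that INPUT_CODEX_ARGS is split on whitespace, so quoted arguments
// containing spaces are not supported.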

.github/actions/codex/src/verify-inputs.ts (vendored, new file, +33)

@@ -0,0 +1,33 @@
// Validate the inputs passed to the composite action.
// The script currently ensures that the provided configuration file exists and
// matches the expected schema.
import type { Config } from "./config";
import { existsSync } from "fs";
import * as path from "path";
import { fail } from "./fail";
export function performAdditionalValidation(config: Config, workspace: string) {
// Additional validation: ensure referenced prompt files exist and are Markdown.
for (const [label, details] of Object.entries(config.labels)) {
// Determine which prompt key is present (the schema guarantees exactly one).
const promptPathStr =
(details as any).prompt ?? (details as any).promptPath;
if (promptPathStr) {
const promptPath = path.isAbsolute(promptPathStr)
? promptPathStr
: path.join(workspace, promptPathStr);
if (!existsSync(promptPath)) {
fail(`Prompt file for label '${label}' not found: ${promptPath}`);
}
if (!promptPath.endsWith(".md")) {
fail(
`Prompt file for label '${label}' must be a .md file (got ${promptPathStr}).`,
);
}
}
}
}

.github/actions/codex/tsconfig.json (vendored, new file, +15)

@@ -0,0 +1,15 @@
{
"compilerOptions": {
"lib": ["ESNext"],
"target": "ESNext",
"module": "ESNext",
"moduleDetection": "force",
"moduleResolution": "bundler",
"noEmit": true,
"strict": true,
"skipLibCheck": true
},
"include": ["src"]
}

.github/codex/home/config.toml (vendored, new file, +3)

@@ -0,0 +1,3 @@
model = "o3"
# Consider setting [mcp_servers] here!

.github/codex/labels/codex-attempt.md (vendored, new file, +9)

@@ -0,0 +1,9 @@
Attempt to solve the reported issue.
If a code change is required, create a new branch, commit the fix, and open a pull request that resolves the problem.
Here is the original GitHub issue that triggered this run:
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}

.github/codex/labels/codex-review.md (vendored, new file, +7)

@@ -0,0 +1,7 @@
Review this PR and respond with a very concise final message, formatted in Markdown.
There should be a summary of the changes (1-2 sentences) and a few bullet points if necessary.
Then provide the **review** (1-2 sentences plus bullet points, friendly tone).
{CODEX_ACTION_GITHUB_EVENT_PATH} contains the JSON that triggered this GitHub workflow. It contains the `base` and `head` refs that define this PR. Both refs are available locally.

.github/codex/labels/codex-triage.md (vendored, new file, +7)

@@ -0,0 +1,7 @@
Troubleshoot whether the reported issue is valid.
Provide a concise and respectful comment summarizing the findings.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}

dotslash-config.json (modified)

@@ -1,20 +1,11 @@
{
"outputs": {
"codex-repl": {
"platforms": {
"macos-aarch64": { "regex": "^codex-repl-aarch64-apple-darwin\\.zst$", "path": "codex-repl" },
"macos-x86_64": { "regex": "^codex-repl-x86_64-apple-darwin\\.zst$", "path": "codex-repl" },
"linux-x86_64": { "regex": "^codex-repl-x86_64-unknown-linux-musl\\.zst$", "path": "codex-repl" },
"linux-aarch64": { "regex": "^codex-repl-aarch64-unknown-linux-gnu\\.zst$", "path": "codex-repl" }
}
},
"codex-exec": {
"platforms": {
"macos-aarch64": { "regex": "^codex-exec-aarch64-apple-darwin\\.zst$", "path": "codex-exec" },
"macos-x86_64": { "regex": "^codex-exec-x86_64-apple-darwin\\.zst$", "path": "codex-exec" },
"linux-x86_64": { "regex": "^codex-exec-x86_64-unknown-linux-musl\\.zst$", "path": "codex-exec" },
"linux-aarch64": { "regex": "^codex-exec-aarch64-unknown-linux-gnu\\.zst$", "path": "codex-exec" }
"linux-aarch64": { "regex": "^codex-exec-aarch64-unknown-linux-musl\\.zst$", "path": "codex-exec" }
}
},
@@ -23,14 +14,14 @@
"macos-aarch64": { "regex": "^codex-aarch64-apple-darwin\\.zst$", "path": "codex" },
"macos-x86_64": { "regex": "^codex-x86_64-apple-darwin\\.zst$", "path": "codex" },
"linux-x86_64": { "regex": "^codex-x86_64-unknown-linux-musl\\.zst$", "path": "codex" },
"linux-aarch64": { "regex": "^codex-aarch64-unknown-linux-gnu\\.zst$", "path": "codex" }
"linux-aarch64": { "regex": "^codex-aarch64-unknown-linux-musl\\.zst$", "path": "codex" }
}
},
"codex-linux-sandbox": {
"platforms": {
"linux-x86_64": { "regex": "^codex-linux-sandbox-x86_64-unknown-linux-musl\\.zst$", "path": "codex-linux-sandbox" },
"linux-aarch64": { "regex": "^codex-linux-sandbox-aarch64-unknown-linux-gnu\\.zst$", "path": "codex-linux-sandbox" }
"linux-aarch64": { "regex": "^codex-linux-sandbox-aarch64-unknown-linux-musl\\.zst$", "path": "codex-linux-sandbox" }
}
}
}


@@ -68,6 +68,12 @@ jobs:
- name: Build
run: pnpm run build
- name: Ensure staging a release works.
working-directory: codex-cli
env:
GH_TOKEN: ${{ github.token }}
run: pnpm stage-release
- name: Ensure README.md contains only ASCII and certain Unicode code points
run: ./scripts/asciicheck.py README.md
- name: Check README ToC


@@ -23,7 +23,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
path-to-document: docs/CLA.md
path-to-document: https://github.com/openai/codex/blob/main/docs/CLA.md
path-to-signatures: signatures/cla.json
branch: cla-signatures
allowlist: dependabot[bot]

.github/workflows/codespell.yml (vendored, new file, +27)

@@ -0,0 +1,27 @@
# Codespell configuration is within .codespellrc
---
name: Codespell
on:
push:
branches: [main]
pull_request:
branches: [main]
permissions:
contents: read
jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Annotate locations with typos
uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1
- name: Codespell
uses: codespell-project/actions-codespell@406322ec52dd7b488e48c1c4b82e2a8b3a1bf630 # v2
with:
ignore_words_file: .codespellignore

.github/workflows/codex.yml (vendored, new file, +95)

@@ -0,0 +1,95 @@
name: Codex
on:
issues:
types: [opened, labeled]
pull_request:
branches: [main]
types: [labeled]
jobs:
codex:
# This `if` check provides complex filtering logic to avoid running Codex
# on every PR. Admittedly, one thing this does not verify is whether the
# sender has write access to the repo: that must be done as part of a
# runtime step.
#
# Note the label values should match the ones in the .github/codex/labels
# folder.
if: |
(github.event_name == 'issues' && (
(github.event.action == 'labeled' && (github.event.label.name == 'codex-attempt' || github.event.label.name == 'codex-triage'))
)) ||
(github.event_name == 'pull_request' && github.event.action == 'labeled' && github.event.label.name == 'codex-review')
runs-on: ubuntu-latest
permissions:
contents: write # can push or create branches
issues: write # for comments + labels on issues/PRs
pull-requests: write # for PR comments/labels
steps:
# TODO: Consider adding an optional mode (--dry-run?) to actions/codex
# that verifies whether Codex should actually be run for this event.
# (For example, it may be rejected because the sender does not have
# write access to the repo.) The benefit would be two-fold:
# 1. As the first step of this job, it gives us a chance to add a reaction
# or comment to the PR/issue ASAP to "ack" the request.
# 2. It saves resources by skipping the clone and setup steps below if
# Codex is not going to run.
- name: Checkout repository
uses: actions/checkout@v4
# We install the dependencies like we would for an ordinary CI job,
# particularly because Codex will not have network access to install
# these dependencies.
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 22
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10.8.1
run_install: false
- name: Get pnpm store directory
id: pnpm-cache
shell: bash
run: |
echo "store_path=$(pnpm store path --silent)" >> $GITHUB_OUTPUT
- name: Setup pnpm cache
uses: actions/cache@v4
with:
path: ${{ steps.pnpm-cache.outputs.store_path }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
- name: Install dependencies
run: pnpm install
- uses: dtolnay/rust-toolchain@1.87
with:
targets: x86_64-unknown-linux-gnu
components: clippy
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/codex-rs/target/
key: cargo-ubuntu-24.04-x86_64-unknown-linux-gnu-${{ hashFiles('**/Cargo.lock') }}
# Note it is possible that the `verify` step internal to Run Codex will
# fail, in which case the work to setup the repo was worthless :(
- name: Run Codex
uses: ./.github/actions/codex
with:
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
codex_home: ./.github/codex/home


@@ -26,7 +26,9 @@ jobs:
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@1.87
with:
components: rustfmt
- name: cargo fmt
run: cargo fmt -- --config imports_granularity=Item --check
@@ -53,14 +55,19 @@ jobs:
target: x86_64-unknown-linux-musl
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
- runner: windows-latest
target: x86_64-pc-windows-msvc
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@1.87
with:
targets: ${{ matrix.target }}
components: clippy
- uses: actions/cache@v4
with:
@@ -72,23 +79,40 @@ jobs:
${{ github.workspace }}/codex-rs/target/
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
run: |
sudo apt install -y musl-tools pkg-config
- name: Initialize failure flag
run: echo "FAILED=" >> $GITHUB_ENV
- name: cargo clippy
run: cargo clippy --target ${{ matrix.target }} --all-features -- -D warnings || echo "FAILED=${FAILED:+$FAILED, }cargo clippy" >> $GITHUB_ENV
id: clippy
continue-on-error: true
run: cargo clippy --target ${{ matrix.target }} --all-features --tests -- -D warnings
# Running `cargo build` from the workspace root builds the workspace using
# the union of all features from third-party crates. This can mask errors
# where individual crates have underspecified features. To avoid this, we
# run `cargo build` for each crate individually, though because this is
# slower, we only do this for the x86_64-unknown-linux-gnu target.
- name: cargo build individual crates
id: build
if: ${{ matrix.target == 'x86_64-unknown-linux-gnu' }}
continue-on-error: true
run: find . -name Cargo.toml -mindepth 2 -maxdepth 2 -print0 | xargs -0 -n1 -I{} bash -c 'cd "$(dirname "{}")" && cargo build'
- name: cargo test
run: cargo test --target ${{ matrix.target }} || echo "FAILED=${FAILED:+$FAILED, }cargo test" >> $GITHUB_ENV
id: test
continue-on-error: true
run: cargo test --all-features --target ${{ matrix.target }}
env:
RUST_BACKTRACE: 1
- name: Fail if any step failed
if: env.FAILED != ''
# Fail the job if any of the previous steps failed.
- name: verify all steps passed
if: |
steps.clippy.outcome == 'failure' ||
steps.build.outcome == 'failure' ||
steps.test.outcome == 'failure'
run: |
echo "See logs above, as the following steps failed:"
echo "$FAILED"
echo "One or more checks failed (clippy, build, or test). See logs for details."
exit 1

.github/workflows/rust-release.yml (modified)

@@ -9,14 +9,14 @@ name: rust-release
on:
push:
tags:
- "rust-v.*.*.*"
- "rust-v*.*.*"
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: true
env:
TAG_REGEX: '^rust-v\.[0-9]+\.[0-9]+\.[0-9]+$'
TAG_REGEX: '^rust-v[0-9]+\.[0-9]+\.[0-9]+$'
jobs:
tag-check:
@@ -37,7 +37,7 @@ jobs:
|| { echo "❌ Tag '${GITHUB_REF_NAME}' != ${TAG_REGEX}"; exit 1; }
# 2. Extract versions
tag_ver="${GITHUB_REF_NAME#rust-v.}"
tag_ver="${GITHUB_REF_NAME#rust-v}"
cargo_ver="$(grep -m1 '^version' codex-rs/Cargo.toml \
| sed -E 's/version *= *"([^"]+)".*/\1/')"
@@ -69,12 +69,14 @@ jobs:
target: x86_64-unknown-linux-musl
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@1.87
with:
targets: ${{ matrix.target }}
@@ -88,7 +90,7 @@ jobs:
${{ github.workspace }}/codex-rs/target/
key: cargo-release-${{ matrix.runner }}-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
run: |
sudo apt install -y musl-tools pkg-config
@@ -102,11 +104,13 @@ jobs:
dest="dist/${{ matrix.target }}"
mkdir -p "$dest"
cp target/${{ matrix.target }}/release/codex-repl "$dest/codex-repl-${{ matrix.target }}"
cp target/${{ matrix.target }}/release/codex-exec "$dest/codex-exec-${{ matrix.target }}"
cp target/${{ matrix.target }}/release/codex "$dest/codex-${{ matrix.target }}"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'x86_64-unknown-linux-gnu' || matrix.target == 'aarch64-unknown-linux-gnu' }}
# After https://github.com/openai/codex/pull/1228 is merged and a new
# release is cut with artifacts built after that PR, the `-gnu`
# variants can go away as we will only use the `-musl` variants.
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'x86_64-unknown-linux-gnu' || matrix.target == 'aarch64-unknown-linux-gnu' || matrix.target == 'aarch64-unknown-linux-musl' }}
name: Stage Linux-only artifacts
shell: bash
run: |
@@ -116,13 +120,42 @@ jobs:
- name: Compress artifacts
shell: bash
run: |
# Path that contains the uncompressed binaries for the current
# ${{ matrix.target }}
dest="dist/${{ matrix.target }}"
zstd -T0 -19 --rm "$dest"/*
# For compatibility with environments that lack the `zstd` tool we
# additionally create a `.tar.gz` alongside every single binary that
# we publish. The end result is:
# codex-<target>.zst (existing)
# codex-<target>.tar.gz (new)
# ...same naming for codex-exec-* and codex-linux-sandbox-*
# 1. Produce a .tar.gz for every file in the directory *before* we
# run `zstd --rm`, because that flag deletes the original files.
for f in "$dest"/*; do
base="$(basename "$f")"
# Skip files that are already archives (shouldn't happen, but be
# safe).
if [[ "$base" == *.tar.gz ]]; then
continue
fi
# Create per-binary tar.gz
tar -C "$dest" -czf "$dest/${base}.tar.gz" "$base"
# Also create .zst (existing behaviour) *and* remove the original
# uncompressed binary to keep the directory small.
zstd -T0 -19 --rm "$dest/$base"
done
- uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target }}
path: codex-rs/dist/${{ matrix.target }}/*
# Upload the per-binary .zst files as well as the new .tar.gz
# equivalents we generated in the previous step.
path: |
codex-rs/dist/${{ matrix.target }}/*
release:
needs: build
@@ -143,11 +176,9 @@ jobs:
with:
tag_name: ${{ env.RELEASE_TAG }}
files: dist/**
# TODO(ragona): I'm going to leave these as prerelease/draft for now.
# It gives us 1) clarity that these are not yet a stable version, and
# 2) allows a human step to review the release before publishing the draft.
# For now, tag releases as "prerelease" because we are not claiming
# the Rust CLI is stable yet.
prerelease: true
draft: true
- uses: facebook/dotslash-publish-release@v2
env:

.gitignore (vendored, 4 lines changed)

@@ -77,3 +77,7 @@ yarn.lock
package.json-e
session.ts-e
CHANGELOG.ignore.md
# nix related
.direnv
.envrc

AGENTS.md (new file, +5)

@@ -0,0 +1,5 @@
# Rust/codex-rs
In the codex-rs folder where the rust code lives:
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR`. You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.

CHANGELOG.md (modified)

@@ -2,6 +2,68 @@
You can install any of these versions: `npm install -g codex@version`
## `0.1.2505172129`
### 🪲 Bug Fixes
- Add node version check (#1007)
- Persist token after refresh (#1006)
## `0.1.2505171619`
- `codex --login` + `codex --free` (#998)
## `0.1.2505161800`
- Sign in with chatgpt credits (#974)
- Add support for OpenAI tool type, local_shell (#961)
## `0.1.2505161243`
- Sign in with chatgpt (#963)
- Session history viewer (#912)
- Apply patch issue when using different cwd (#942)
- Diff command for filenames with special characters (#954)
## `0.1.2505160811`
- `codex-mini-latest` (#951)
## `0.1.2505140839`
### 🪲 Bug Fixes
- Gpt-4.1 apply_patch handling (#930)
- Add support for fileOpener in config.json (#911)
- Patch in #366 and #367 for marked-terminal (#916)
- Remember to set lastIndex = 0 on shared RegExp (#918)
- Always load version from package.json at runtime (#909)
- Tweak the label for citations for better rendering (#919)
- Tighten up some logic around session timestamps and ids (#922)
- Change EventMsg enum so every variant takes a single struct (#925)
- Reasoning default to medium, show workdir when supplied (#931)
- Test_dev_null_write() was not using echo as intended (#923)
## `0.1.2504301751`
### 🚀 Features
- User config api key (#569)
- `@mention` files in codex (#701)
- Add `--reasoning` CLI flag (#314)
- Lower default retry wait time and increase number of tries (#720)
- Add common package registries domains to allowed-domains list (#414)
### 🪲 Bug Fixes
- Insufficient quota message (#758)
- Input keyboard shortcut opt+delete (#685)
- `/diff` should include untracked files (#686)
- Only allow running without sandbox if explicitly marked in safe container (#699)
- Tighten up check for /usr/bin/sandbox-exec (#710)
- Check if sandbox-exec is available (#696)
- Duplicate messages in quiet mode (#680)
## `0.1.2504251709`
### 🚀 Features

README.md (173 lines changed)

@@ -8,48 +8,48 @@
---
<details>
<summary><strong>Table&nbsp;of&nbsp;Contents</strong></summary>
<summary><strong>Table of contents</strong></summary>
<!-- Begin ToC -->
- [Experimental Technology Disclaimer](#experimental-technology-disclaimer)
- [Experimental technology disclaimer](#experimental-technology-disclaimer)
- [Quickstart](#quickstart)
- [Why Codex?](#why-codex)
- [Security Model & Permissions](#security-model--permissions)
- [Security model & permissions](#security-model--permissions)
- [Platform sandboxing details](#platform-sandboxing-details)
- [System Requirements](#system-requirements)
- [CLI Reference](#cli-reference)
- [Memory & Project Docs](#memory--project-docs)
- [System requirements](#system-requirements)
- [CLI reference](#cli-reference)
- [Memory & project docs](#memory--project-docs)
- [Non-interactive / CI mode](#non-interactive--ci-mode)
- [Tracing / Verbose Logging](#tracing--verbose-logging)
- [Tracing / verbose logging](#tracing--verbose-logging)
- [Recipes](#recipes)
- [Installation](#installation)
- [Configuration Guide](#configuration-guide)
- [Basic Configuration Parameters](#basic-configuration-parameters)
- [Custom AI Provider Configuration](#custom-ai-provider-configuration)
- [History Configuration](#history-configuration)
- [Configuration Examples](#configuration-examples)
- [Full Configuration Example](#full-configuration-example)
- [Custom Instructions](#custom-instructions)
- [Environment Variables Setup](#environment-variables-setup)
- [Configuration guide](#configuration-guide)
- [Basic configuration parameters](#basic-configuration-parameters)
- [Custom AI provider configuration](#custom-ai-provider-configuration)
- [History configuration](#history-configuration)
- [Configuration examples](#configuration-examples)
- [Full configuration example](#full-configuration-example)
- [Custom instructions](#custom-instructions)
- [Environment variables setup](#environment-variables-setup)
- [FAQ](#faq)
- [Zero Data Retention (ZDR) Usage](#zero-data-retention-zdr-usage)
- [Codex Open Source Fund](#codex-open-source-fund)
- [Zero data retention (ZDR) usage](#zero-data-retention-zdr-usage)
- [Codex open source fund](#codex-open-source-fund)
- [Contributing](#contributing)
- [Development workflow](#development-workflow)
- [Git Hooks with Husky](#git-hooks-with-husky)
- [Git hooks with Husky](#git-hooks-with-husky)
- [Debugging](#debugging)
- [Writing high-impact code changes](#writing-high-impact-code-changes)
- [Opening a pull request](#opening-a-pull-request)
- [Review process](#review-process)
- [Community values](#community-values)
- [Getting help](#getting-help)
- [Contributor License Agreement (CLA)](#contributor-license-agreement-cla)
- [Contributor license agreement (CLA)](#contributor-license-agreement-cla)
- [Quick fixes](#quick-fixes)
- [Releasing `codex`](#releasing-codex)
- [Alternative Build Options](#alternative-build-options)
- [Nix Flake Development](#nix-flake-development)
- [Security & Responsible AI](#security--responsible-ai)
- [Alternative build options](#alternative-build-options)
- [Nix flake development](#nix-flake-development)
- [Security & responsible AI](#security--responsible-ai)
- [License](#license)
<!-- End ToC -->
@@ -58,7 +58,7 @@
---
## Experimental Technology Disclaimer
## Experimental technology disclaimer
Codex CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:
@@ -98,12 +98,14 @@ export OPENAI_API_KEY="your-api-key-here"
>
> - openai (default)
> - openrouter
> - azure
> - gemini
> - ollama
> - mistral
> - deepseek
> - xai
> - groq
> - arceeai
> - any other provider that is compatible with the OpenAI API
>
> If you use a provider other than OpenAI, you will need to set the API key for the provider in the config file or in the environment variable as:
@@ -158,7 +160,7 @@ And it's **fully open-source** so you can see and contribute to how it develops!
---
## Security Model & Permissions
## Security model & permissions
Codex lets you decide _how much autonomy_ the agent receives and auto-approval policy via the
`--approval-mode` flag (or the interactive onboarding prompt):
@@ -198,7 +200,7 @@ The hardening mechanism Codex uses depends on your OS:
---
## System Requirements
## System requirements
| Requirement | Details |
| --------------------------- | --------------------------------------------------------------- |
@@ -211,7 +213,7 @@ The hardening mechanism Codex uses depends on your OS:
---
## CLI Reference
## CLI reference
| Command | Purpose | Example |
| ------------------------------------ | ----------------------------------- | ------------------------------------ |
@@ -224,15 +226,15 @@ Key flags: `--model/-m`, `--approval-mode/-a`, `--quiet/-q`, and `--notify`.
---
## Memory & Project Docs
## Memory & project docs
Codex merges Markdown instructions in this order:
You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places, and merges them top-down:
1. `~/.codex/instructions.md` - personal global guidance
2. `codex.md` at repo root - shared project notes
3. `codex.md` in cwd - sub-package specifics
1. `~/.codex/AGENTS.md` - personal global guidance
2. `AGENTS.md` at repo root - shared project notes
3. `AGENTS.md` in the current working directory - sub-folder/feature specifics
Disable with `--no-project-doc` or `CODEX_DISABLE_PROJECT_DOC=1`.
Disable loading of these files with `--no-project-doc` or the environment variable `CODEX_DISABLE_PROJECT_DOC=1`.
---
@@ -250,7 +252,7 @@ Run Codex head-less in pipelines. Example GitHub Action step:
Set `CODEX_QUIET_MODE=1` to silence interactive UI noise.
## Tracing / Verbose Logging
## Tracing / verbose logging
Setting the environment variable `DEBUG=true` prints full API request and response details:
@@ -308,6 +310,9 @@ corepack enable
pnpm install
pnpm build
# Linux-only: download prebuilt sandboxing binaries (requires gh and zstd).
./scripts/install_native_deps.sh
# Get the usage and the options
node ./dist/cli.js --help
@@ -322,11 +327,11 @@ pnpm link
---
## Configuration Guide
## Configuration guide
Codex configuration files can be placed in the `~/.codex/` directory, supporting both YAML and JSON formats.
### Basic Configuration Parameters
### Basic configuration parameters
| Parameter | Type | Default | Description | Available Options |
| ------------------- | ------- | ---------- | -------------------------------- | ---------------------------------------------------------------------------------------------- |
@@ -335,7 +340,7 @@ Codex configuration files can be placed in the `~/.codex/` directory, supporting
| `fullAutoErrorMode` | string | `ask-user` | Error handling in full-auto mode | `ask-user` (prompt for user input)<br>`ignore-and-continue` (ignore and proceed) |
| `notify` | boolean | `true` | Enable desktop notifications | `true`/`false` |
### Custom AI Provider Configuration
### Custom AI provider configuration
In the `providers` object, you can configure multiple AI service providers. Each provider requires the following parameters:
@@ -345,7 +350,7 @@ In the `providers` object, you can configure multiple AI service providers. Each
| `baseURL` | string | API service URL | `"https://api.openai.com/v1"` |
| `envKey` | string | Environment variable name (for API key) | `"OPENAI_API_KEY"` |
### History Configuration
### History configuration
In the `history` object, you can configure conversation history settings:
@@ -355,7 +360,7 @@ In the `history` object, you can configure conversation history settings:
| `saveHistory` | boolean | Whether to save history | `true` |
| `sensitivePatterns` | array | Patterns of sensitive information to filter in history | `[]` |
### Configuration Examples
### Configuration examples
1. YAML format (save as `~/.codex/config.yaml`):
@@ -377,7 +382,7 @@ notify: true
}
```
### Full Configuration Example
### Full configuration example
Below is a comprehensive example of `config.json` with multiple custom providers:
@@ -391,6 +396,11 @@ Below is a comprehensive example of `config.json` with multiple custom providers
"baseURL": "https://api.openai.com/v1",
"envKey": "OPENAI_API_KEY"
},
"azure": {
"name": "AzureOpenAI",
"baseURL": "https://YOUR_PROJECT_NAME.openai.azure.com/openai",
"envKey": "AZURE_OPENAI_API_KEY"
},
"openrouter": {
"name": "OpenRouter",
"baseURL": "https://openrouter.ai/api/v1",
@@ -425,6 +435,11 @@ Below is a comprehensive example of `config.json` with multiple custom providers
"name": "Groq",
"baseURL": "https://api.groq.com/openai/v1",
"envKey": "GROQ_API_KEY"
},
"arceeai": {
"name": "ArceeAI",
"baseURL": "https://conductor.arcee.ai/v1",
"envKey": "ARCEEAI_API_KEY"
}
},
"history": {
@@ -435,16 +450,16 @@ Below is a comprehensive example of `config.json` with multiple custom providers
}
```
### Custom Instructions
### Custom instructions
You can create a `~/.codex/instructions.md` file to define custom instructions:
You can create a `~/.codex/AGENTS.md` file to define custom guidance for the agent:
```markdown
- Always respond with emojis
- Only use git commands when explicitly requested
```
### Environment Variables Setup
### Environment variables setup
For each AI provider, you need to set the corresponding API key in your environment variables. For example:
@@ -452,6 +467,10 @@ For each AI provider, you need to set the corresponding API key in your environm
# OpenAI
export OPENAI_API_KEY="your-api-key-here"
# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-azure-api-key-here"
export AZURE_OPENAI_API_VERSION="2025-03-01-preview" (Optional)
# OpenRouter
export OPENROUTER_API_KEY="your-openrouter-key-here"
@@ -497,7 +516,7 @@ Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.mic
---
## Zero Data Retention (ZDR) Usage
## Zero data retention (ZDR) usage
Codex CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled. If your OpenAI organization has Zero Data Retention enabled and you still encounter errors such as:
@@ -509,7 +528,7 @@ You may need to upgrade to a more recent version with: `npm i -g @openai/codex@l
---
## Codex Open Source Fund
## Codex open source fund
We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.
@@ -534,7 +553,7 @@ More broadly we welcome contributions - whether you are opening your very first
- We use **Vitest** for unit tests, **ESLint** + **Prettier** for style, and **TypeScript** for type-checking.
- Before pushing, run the full test/type/lint suite:
### Git Hooks with Husky
### Git hooks with Husky
This project uses [Husky](https://typicode.github.io/husky/) to enforce code quality checks:
@@ -608,7 +627,7 @@ If you run into problems setting up the project, would like feedback on an idea,
Together we can make Codex CLI an incredible tool. **Happy hacking!** :rocket:
### Contributor License Agreement (CLA)
### Contributor license agreement (CLA)
All contributors **must** accept the CLA. The process is lightweight:
@@ -633,29 +652,42 @@ The **DCO check** blocks merges until every commit in the PR carries the footer
### Releasing `codex`
To publish a new version of the CLI, run the release scripts defined in `codex-cli/package.json`:
To publish a new version of the CLI you first need to stage the npm package. A
helper script in `codex-cli/scripts/` does all the heavy lifting. Inside the
`codex-cli` folder run:
1. Open the `codex-cli` directory
2. Make sure you're on a branch like `git checkout -b bump-version`
3. Bump the version and `CLI_VERSION` to current datetime: `pnpm release:version`
4. Commit the version bump (with DCO sign-off):
```bash
git add codex-cli/src/utils/session.ts codex-cli/package.json
git commit -s -m "chore(release): codex-cli v$(node -p \"require('./codex-cli/package.json').version\")"
```
5. Copy README, build, and publish to npm: `pnpm release`
6. Push to branch: `git push origin HEAD`
```bash
# Classic, JS implementation that includes small, native binaries for Linux sandboxing.
pnpm stage-release
# Optionally specify the temp directory to reuse between runs.
RELEASE_DIR=$(mktemp -d)
pnpm stage-release --tmp "$RELEASE_DIR"
# "Fat" package that additionally bundles the native Rust CLI binaries for
# Linux. End-users can then opt-in at runtime by setting CODEX_RUST=1.
pnpm stage-release --native
```
Go to the folder where the release is staged and verify that it works as intended. If so, run the following from the temp folder:
```bash
cd "$RELEASE_DIR"
npm publish
```
### Alternative build options
#### Nix flake development
Prerequisite: Nix >= 2.4 with flakes enabled (`experimental-features = nix-command flakes` in `~/.config/nix/nix.conf`).
Enter a Nix development shell:
```bash
nix develop
# Use either one of the commands according to which implementation you want to work with
nix develop .#codex-cli # For entering codex-cli specific shell
nix develop .#codex-rs # For entering codex-rs specific shell
```
This shell includes Node.js, installs dependencies, builds the CLI, and provides a `codex` command alias.
@@ -663,19 +695,34 @@ This shell includes Node.js, installs dependencies, builds the CLI, and provides
Build and run the CLI directly:
```bash
nix build
# Use either one of the commands according to which implementation you want to work with
nix build .#codex-cli # For building codex-cli
nix build .#codex-rs # For building codex-rs
./result/bin/codex --help
```
Run the CLI via the flake app:
```bash
nix run .#codex
# Use either one of the commands according to which implementation you want to work with
nix run .#codex-cli # For running codex-cli
nix run .#codex-rs # For running codex-rs
```
#### Use direnv with flakes
If you have direnv installed, you can use the following `.envrc` to automatically enter the Nix shell when you `cd` into the project directory:
```bash
cd codex-rs
echo "use flake ..#codex-rs" >> .envrc && direnv allow
cd codex-cli
echo "use flake ..#codex-cli" >> .envrc && direnv allow
```
---
## Security & Responsible AI
## Security & responsible AI
Have you discovered a vulnerability or have concerns about model output? Please e-mail **security@openai.com** and we will respond promptly.


@@ -1,6 +1,6 @@
module.exports = {
root: true,
env: { browser: true, es2020: true },
env: { browser: true, node: true, es2020: true },
extends: [
"eslint:recommended",
"plugin:@typescript-eslint/recommended",

3
codex-cli/.gitignore vendored Normal file
View File

@@ -0,0 +1,3 @@
# Added by ./scripts/install_native_deps.sh
/bin/codex-linux-sandbox-arm64
/bin/codex-linux-sandbox-x64

codex-cli/bin/codex.js (modified)

@@ -1,17 +1,90 @@
#!/usr/bin/env node
// Unified entry point for the Codex CLI.
/*
* Behavior
* =========
* 1. By default we import the JavaScript implementation located in
* dist/cli.js.
*
* 2. Developers can opt-in to a pre-compiled Rust binary by setting the
* environment variable CODEX_RUST to a truthy value (`1`, `true`, etc.).
* When that variable is present we resolve the correct binary for the
* current platform / architecture and execute it via child_process.
*
* If CODEX_RUST=1 is specified and there is no native binary for the
* current platform / architecture, an error is thrown.
*/
// Unified entry point for Codex CLI on all platforms
// Dynamically loads the compiled ESM bundle in dist/cli.js
import { spawnSync } from "child_process";
import fs from "fs";
import path from "path";
import { fileURLToPath, pathToFileURL } from "url";
import path from 'path';
import { fileURLToPath, pathToFileURL } from 'url';
// Determine whether the user explicitly wants the Rust CLI.
// Determine this script's directory
// __dirname equivalent in ESM
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// For the @native release of the Node module, the `use-native` file is added,
// indicating we should default to the native binary. For other releases,
// setting CODEX_RUST=1 will opt-in to the native binary, if included.
const wantsNative = fs.existsSync(path.join(__dirname, "use-native")) ||
(process.env.CODEX_RUST != null
? ["1", "true", "yes"].includes(process.env.CODEX_RUST.toLowerCase())
: false);
// Try native binary if requested.
if (wantsNative) {
const { platform, arch } = process;
let targetTriple = null;
switch (platform) {
case "linux":
switch (arch) {
case "x64":
targetTriple = "x86_64-unknown-linux-musl";
break;
case "arm64":
targetTriple = "aarch64-unknown-linux-musl";
break;
default:
break;
}
break;
case "darwin":
switch (arch) {
case "x64":
targetTriple = "x86_64-apple-darwin";
break;
case "arm64":
targetTriple = "aarch64-apple-darwin";
break;
default:
break;
}
break;
default:
break;
}
if (!targetTriple) {
throw new Error(`Unsupported platform: ${platform} (${arch})`);
}
const binaryPath = path.join(__dirname, "..", "bin", `codex-${targetTriple}`);
const result = spawnSync(binaryPath, process.argv.slice(2), {
stdio: "inherit",
});
const exitCode = typeof result.status === "number" ? result.status : 1;
process.exit(exitCode);
}
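// Example (illustrative): on an Apple Silicon Mac (platform "darwin", arch
// "arm64") this resolves ../bin/codex-aarch64-apple-darwin and forwards all
// CLI arguments to it, exiting with the child's status code.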
// Fallback: execute the original JavaScript CLI.
// Resolve the path to the compiled CLI bundle
const cliPath = path.resolve(__dirname, '../dist/cli.js');
const cliPath = path.resolve(__dirname, "../dist/cli.js");
const cliUrl = pathToFileURL(cliPath).href;
// Load and execute the CLI
@@ -21,7 +94,6 @@ const cliUrl = pathToFileURL(cliPath).href;
} catch (err) {
// eslint-disable-next-line no-console
console.error(err);
// eslint-disable-next-line no-undef
process.exit(1);
}
})();
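For reference, a couple of ways to exercise the dispatch logic above (shell examples; the package layout is the one described in the comments):

```
# Force the native Rust binary for one invocation (throws if no binary
# exists for the current platform/arch):
CODEX_RUST=1 codex --help

# Default behavior: the JavaScript implementation in dist/cli.js runs,
# unless the package ships a bin/use-native marker file.
codex --help
```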

View File

@@ -72,6 +72,9 @@ if (isDevBuild) {
esbuild
.build({
entryPoints: ["src/cli.tsx"],
// Do not bundle the contents of package.json at build time: always read it
// at runtime.
external: ["../package.json"],
bundle: true,
format: "esm",
platform: "node",
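The `external` entry above keeps `../package.json` out of the bundle, so the published package's metadata is read at runtime rather than frozen at build time. A minimal sketch of such a runtime read (illustrative, not the repo's exact code):

```
import { createRequire } from "module";

// Resolves ../package.json relative to this module when the CLI runs,
// instead of inlining its contents into dist/cli.js at build time.
const require = createRequire(import.meta.url);
const { version } = require("../package.json");
console.log(`codex-cli ${version}`);
```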

codex-cli/default.nix Normal file
View File

@@ -0,0 +1,43 @@
{ pkgs, monorep-deps ? [], ... }:
let
node = pkgs.nodejs_22;
in
rec {
package = pkgs.buildNpmPackage {
pname = "codex-cli";
version = "0.1.0";
src = ./.;
npmDepsHash = "sha256-3tAalmh50I0fhhd7XreM+jvl0n4zcRhqygFNB1Olst8";
nodejs = node;
npmInstallFlags = [ "--frozen-lockfile" ];
meta = with pkgs.lib; {
description = "OpenAI Codex commandline interface";
license = licenses.asl20;
homepage = "https://github.com/openai/codex";
};
};
devShell = pkgs.mkShell {
name = "codex-cli-dev";
buildInputs = monorep-deps ++ [
node
pkgs.pnpm
];
shellHook = ''
echo "Entering development shell for codex-cli"
# cd codex-cli
if [ -f package-lock.json ]; then
pnpm ci || echo "pnpm ci failed"
else
pnpm install || echo "pnpm install failed"
fi
pnpm run build || echo "pnpm build failed"
export PATH=$PWD/node_modules/.bin:$PATH
alias codex="node $PWD/dist/cli.js"
'';
};
app = {
type = "app";
program = "${package}/bin/codex";
};
}
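Assuming this `default.nix` is wired into the repository flake under an output named `codex-cli` (an assumption; only the `codex-rs` shell appears at the top of this section), entering the dev shell and running the packaged CLI would look like:

```
nix develop .#codex-cli   # drops into the devShell defined above
nix run .#codex-cli       # runs the `app` output (${package}/bin/codex)
```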

View File

@@ -1,7 +1,7 @@
name: "impossible-pong"
description: |
Update index.html with the following features:
- Add an overlayed styled popup to start the game on first load
- Add an overlaid styled popup to start the game on first load
- Between each point, show a 3 second countdown (this should be skipped if a player wins)
- After each game the AI wins, display text at the bottom of the screen with lighthearted insults for the player
- Add a leaderboard to the right of the court that shows how many games each player has won.

View File

@@ -13,7 +13,7 @@ act,prompt,for_devs
"Advertiser","I want you to act as an advertiser. You will create a campaign to promote a product or service of your choice. You will choose a target audience, develop key messages and slogans, select the media channels for promotion, and decide on any additional activities needed to reach your goals. My first suggestion request is ""I need help creating an advertising campaign for a new type of energy drink targeting young adults aged 18-30.""",FALSE
"Storyteller","I want you to act as a storyteller. You will come up with entertaining stories that are engaging, imaginative and captivating for the audience. It can be fairy tales, educational stories or any other type of stories which has the potential to capture people's attention and imagination. Depending on the target audience, you may choose specific themes or topics for your storytelling session e.g., if it's children then you can talk about animals; If it's adults then history-based tales might engage them better etc. My first request is ""I need an interesting story on perseverance.""",FALSE
"Football Commentator","I want you to act as a football commentator. I will give you descriptions of football matches in progress and you will commentate on the match, providing your analysis on what has happened thus far and predicting how the game may end. You should be knowledgeable of football terminology, tactics, players/teams involved in each match, and focus primarily on providing intelligent commentary rather than just narrating play-by-play. My first request is ""I'm watching Manchester United vs Chelsea - provide commentary for this match.""",FALSE
"Stand-up Comedian","I want you to act as a stand-up comedian. I will provide you with some topics related to current events and you will use your wit, creativity, and observational skills to create a routine based on those topics. You should also be sure to incorporate personal anecdotes or experiences into the routine in order to make it more relatable and engaging for the audience. My first request is ""I want an humorous take on politics.""",FALSE
"Stand-up Comedian","I want you to act as a stand-up comedian. I will provide you with some topics related to current events and you will use your with, creativity, and observational skills to create a routine based on those topics. You should also be sure to incorporate personal anecdotes or experiences into the routine in order to make it more relatable and engaging for the audience. My first request is ""I want an humorous take on politics.""",FALSE
"Motivational Coach","I want you to act as a motivational coach. I will provide you with some information about someone's goals and challenges, and it will be your job to come up with strategies that can help this person achieve their goals. This could involve providing positive affirmations, giving helpful advice or suggesting activities they can do to reach their end goal. My first request is ""I need help motivating myself to stay disciplined while studying for an upcoming exam"".",FALSE
"Composer","I want you to act as a composer. I will provide the lyrics to a song and you will create music for it. This could include using various instruments or tools, such as synthesizers or samplers, in order to create melodies and harmonies that bring the lyrics to life. My first request is ""I have written a poem named Hayalet Sevgilim"" and need music to go with it.""""""",FALSE
"Debater","I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. My first request is ""I want an opinion piece about Deno.""",FALSE
@@ -23,7 +23,7 @@ act,prompt,for_devs
"Movie Critic","I want you to act as a movie critic. You will develop an engaging and creative movie review. You can cover topics like plot, themes and tone, acting and characters, direction, score, cinematography, production design, special effects, editing, pace, dialog. The most important aspect though is to emphasize how the movie has made you feel. What has really resonated with you. You can also be critical about the movie. Please avoid spoilers. My first request is ""I need to write a movie review for the movie Interstellar""",FALSE
"Relationship Coach","I want you to act as a relationship coach. I will provide some details about the two people involved in a conflict, and it will be your job to come up with suggestions on how they can work through the issues that are separating them. This could include advice on communication techniques or different strategies for improving their understanding of one another's perspectives. My first request is ""I need help solving conflicts between my spouse and myself.""",FALSE
"Poet","I want you to act as a poet. You will create poems that evoke emotions and have the power to stir people's soul. Write on any topic or theme but make sure your words convey the feeling you are trying to express in beautiful yet meaningful ways. You can also come up with short verses that are still powerful enough to leave an imprint in readers' minds. My first request is ""I need a poem about love.""",FALSE
"Rapper","I want you to act as a rapper. You will come up with powerful and meaningful lyrics, beats and rhythm that can 'wow' the audience. Your lyrics should have an intriguing meaning and message which people can relate too. When it comes to choosing your beat, make sure it is catchy yet relevant to your words, so that when combined they make an explosion of sound everytime! My first request is ""I need a rap song about finding strength within yourself.""",FALSE
"Rapper","I want you to act as a rapper. You will come up with powerful and meaningful lyrics, beats and rhythm that can 'wow' the audience. Your lyrics should have an intriguing meaning and message which people can relate too. When it comes to choosing your beat, make sure it is catchy yet relevant to your words, so that when combined they make an explosion of sound every time! My first request is ""I need a rap song about finding strength within yourself.""",FALSE
"Motivational Speaker","I want you to act as a motivational speaker. Put together words that inspire action and make people feel empowered to do something beyond their abilities. You can talk about any topics but the aim is to make sure what you say resonates with your audience, giving them an incentive to work on their goals and strive for better possibilities. My first request is ""I need a speech about how everyone should never give up.""",FALSE
"Philosophy Teacher","I want you to act as a philosophy teacher. I will provide some topics related to the study of philosophy, and it will be your job to explain these concepts in an easy-to-understand manner. This could include providing examples, posing questions or breaking down complex ideas into smaller pieces that are easier to comprehend. My first request is ""I need help understanding how different philosophical theories can be applied in everyday life.""",FALSE
"Philosopher","I want you to act as a philosopher. I will provide some topics or questions related to the study of philosophy, and it will be your job to explore these concepts in depth. This could involve conducting research into various philosophical theories, proposing new ideas or finding creative solutions for solving complex problems. My first request is ""I need help developing an ethical framework for decision making.""",FALSE

View File

@@ -1,6 +1,6 @@
{
"name": "@openai/codex",
"version": "0.1.2504251709",
"version": "0.0.0-dev",
"license": "Apache-2.0",
"bin": {
"codex": "bin/codex.js"
@@ -20,12 +20,10 @@
"typecheck": "tsc --noEmit",
"build": "node build.mjs",
"build:dev": "NODE_ENV=development node build.mjs --dev && NODE_OPTIONS=--enable-source-maps node dist/cli-dev.js",
"release:readme": "cp ../README.md ./README.md",
"release:version": "TS=$(date +%y%m%d%H%M) && sed -E -i'' -e \"s/\\\"0\\.1\\.[0-9]{10}\\\"/\\\"0.1.${TS}\\\"/g\" package.json src/utils/session.ts",
"release:build-and-publish": "pnpm run build && npm publish",
"release": "pnpm run release:readme && pnpm run release:version && pnpm install && pnpm run release:build-and-publish"
"stage-release": "./scripts/stage_release.sh"
},
"files": [
"bin",
"dist"
],
"dependencies": {
@@ -33,10 +31,12 @@
"chalk": "^5.2.0",
"diff": "^7.0.0",
"dotenv": "^16.1.4",
"express": "^5.1.0",
"fast-deep-equal": "^3.1.3",
"fast-npm-meta": "^0.4.2",
"figures": "^6.1.0",
"file-type": "^20.1.0",
"https-proxy-agent": "^7.0.6",
"ink": "^5.2.0",
"js-yaml": "^4.1.0",
"marked": "^15.0.7",
@@ -55,6 +55,7 @@
"devDependencies": {
"@eslint/js": "^9.22.0",
"@types/diff": "^7.0.2",
"@types/express": "^5.0.1",
"@types/js-yaml": "^4.0.9",
"@types/marked-terminal": "^6.1.1",
"@types/react": "^18.0.32",
@@ -76,7 +77,8 @@
"semver": "^7.7.1",
"ts-node": "^10.9.1",
"typescript": "^5.0.3",
"vitest": "^3.0.9",
"vite": "^6.3.4",
"vitest": "^3.1.2",
"whatwg-url": "^14.2.0",
"which": "^5.0.0"
},

View File

@@ -0,0 +1,99 @@
#!/usr/bin/env bash
# Install native runtime dependencies for codex-cli.
#
# By default the script copies the sandbox binaries that are required at
# runtime. When called with the --full-native flag, it additionally
# bundles pre-built Rust CLI binaries so that the resulting npm package can run
# the native implementation when users set CODEX_RUST=1.
#
# Usage
# install_native_deps.sh [RELEASE_ROOT] [--full-native]
#
# The optional RELEASE_ROOT is the path that contains package.json. Omitting
# it installs the binaries into the repository's own bin/ folder to support
# local development.
set -euo pipefail
# ------------------
# Parse arguments
# ------------------
DEST_DIR=""
INCLUDE_RUST=0
for arg in "$@"; do
case "$arg" in
--full-native)
INCLUDE_RUST=1
;;
*)
if [[ -z "$DEST_DIR" ]]; then
DEST_DIR="$arg"
else
echo "Unexpected argument: $arg" >&2
exit 1
fi
;;
esac
done
# ----------------------------------------------------------------------------
# Determine where the binaries should be installed.
# ----------------------------------------------------------------------------
if [[ -n "$DEST_DIR" ]]; then
# The caller supplied a release root directory. (Use the parsed value
# rather than "$1", which could be the --full-native flag.)
CODEX_CLI_ROOT="$DEST_DIR"
BIN_DIR="$CODEX_CLI_ROOT/bin"
else
# No argument; fall back to the repo's own bin directory.
# Resolve the path of this script, then walk up to the repo root.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CODEX_CLI_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
BIN_DIR="$CODEX_CLI_ROOT/bin"
fi
# Make sure the destination directory exists.
mkdir -p "$BIN_DIR"
# ----------------------------------------------------------------------------
# Download and decompress the artifacts from the GitHub Actions workflow.
# ----------------------------------------------------------------------------
# Until we start publishing stable GitHub releases, we have to grab the binaries
# from the GitHub Action that created them. Update the URL below to point to the
# appropriate workflow run:
WORKFLOW_URL="https://github.com/openai/codex/actions/runs/15483730027"
WORKFLOW_ID="${WORKFLOW_URL##*/}"
ARTIFACTS_DIR="$(mktemp -d)"
trap 'rm -rf "$ARTIFACTS_DIR"' EXIT
# NB: The GitHub CLI `gh` must be installed and authenticated.
gh run download --dir "$ARTIFACTS_DIR" --repo openai/codex "$WORKFLOW_ID"
# Decompress the artifacts for Linux sandboxing.
zstd -d "$ARTIFACTS_DIR/x86_64-unknown-linux-musl/codex-linux-sandbox-x86_64-unknown-linux-musl.zst" \
-o "$BIN_DIR/codex-linux-sandbox-x64"
zstd -d "$ARTIFACTS_DIR/aarch64-unknown-linux-musl/codex-linux-sandbox-aarch64-unknown-linux-musl.zst" \
-o "$BIN_DIR/codex-linux-sandbox-arm64"
if [[ "$INCLUDE_RUST" -eq 1 ]]; then
# x64 Linux
zstd -d "$ARTIFACTS_DIR/x86_64-unknown-linux-musl/codex-x86_64-unknown-linux-musl.zst" \
-o "$BIN_DIR/codex-x86_64-unknown-linux-musl"
# ARM64 Linux
zstd -d "$ARTIFACTS_DIR/aarch64-unknown-linux-musl/codex-aarch64-unknown-linux-musl.zst" \
-o "$BIN_DIR/codex-aarch64-unknown-linux-musl"
# x64 macOS
zstd -d "$ARTIFACTS_DIR/x86_64-apple-darwin/codex-x86_64-apple-darwin.zst" \
-o "$BIN_DIR/codex-x86_64-apple-darwin"
# ARM64 macOS
zstd -d "$ARTIFACTS_DIR/aarch64-apple-darwin/codex-aarch64-apple-darwin.zst" \
-o "$BIN_DIR/codex-aarch64-apple-darwin"
fi
echo "Installed native dependencies into $BIN_DIR"

View File

@@ -0,0 +1,147 @@
#!/usr/bin/env bash
# -----------------------------------------------------------------------------
# stage_release.sh
# -----------------------------------------------------------------------------
# Stages an npm release for @openai/codex.
#
# The script used to accept a single optional positional argument that indicated
# the temporary directory in which to stage the package. We now support a
# flag-based interface so that we can extend the command with further options
# without breaking the call-site contract.
#
# --tmp <dir> : Use <dir> instead of a freshly created temp directory.
# --native : Bundle the pre-built Rust CLI binaries for Linux alongside
# the JavaScript implementation (a so-called "fat" package).
# -h|--help : Print usage.
#
# When --native is supplied we copy the linux-sandbox binaries (as before) and
# additionally fetch / unpack the two Rust targets that we currently support:
# - x86_64-unknown-linux-musl
# - aarch64-unknown-linux-musl
#
# NOTE: This script is intended to be run from the repository root via
# `pnpm --filter codex-cli stage-release ...` or inside codex-cli with the
# helper script entry in package.json (`pnpm stage-release ...`).
# -----------------------------------------------------------------------------
set -euo pipefail
# Helper - usage / flag parsing
usage() {
cat <<EOF
Usage: $(basename "$0") [--tmp DIR] [--native]
Options
--tmp DIR Use DIR to stage the release (defaults to a fresh mktemp dir)
--native Bundle Rust binaries for Linux (fat package)
-h, --help Show this help
Note: the legacy positional TMPDIR argument is no longer accepted; pass
--tmp DIR instead.
EOF
exit "${1:-0}"
}
TMPDIR=""
INCLUDE_NATIVE=0
# Manual flag parser - Bash getopts does not handle GNU long options well.
while [[ $# -gt 0 ]]; do
case "$1" in
--tmp)
shift || { echo "--tmp requires an argument"; usage 1; }
TMPDIR="$1"
;;
--tmp=*)
TMPDIR="${1#*=}"
;;
--native)
INCLUDE_NATIVE=1
;;
-h|--help)
usage 0
;;
--*)
echo "Unknown option: $1" >&2
usage 1
;;
*)
echo "Unexpected extra argument: $1" >&2
usage 1
;;
esac
shift
done
# Fallback when the caller did not specify a directory.
# If no directory was specified create a fresh temporary one.
if [[ -z "$TMPDIR" ]]; then
TMPDIR="$(mktemp -d)"
fi
# Ensure the directory exists, then resolve to an absolute path.
mkdir -p "$TMPDIR"
TMPDIR="$(cd "$TMPDIR" && pwd)"
# Main build logic
echo "Staging release in $TMPDIR"
# The script lives in codex-cli/scripts/ - change into codex-cli root so that
# relative paths keep working.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CODEX_CLI_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
pushd "$CODEX_CLI_ROOT" >/dev/null
# 1. Build the JS artifacts ---------------------------------------------------
pnpm install
pnpm build
# Paths inside the staged package
mkdir -p "$TMPDIR/bin"
cp -r bin/codex.js "$TMPDIR/bin/codex.js"
cp -r dist "$TMPDIR/dist"
cp -r src "$TMPDIR/src" # keep source for TS sourcemaps
cp ../README.md "$TMPDIR" || true # README is one level up - ignore if missing
# Derive a timestamp-based version (keep same scheme as before)
VERSION="$(printf '0.1.%d' "$(date +%y%m%d%H%M)")"
# Modify package.json - bump version and optionally add the native directory to
# the files array so that the binaries are published to npm.
jq --arg version "$VERSION" \
'.version = $version' \
package.json > "$TMPDIR/package.json"
# 2. Native runtime deps (sandbox plus optional Rust binaries)
if [[ "$INCLUDE_NATIVE" -eq 1 ]]; then
./scripts/install_native_deps.sh "$TMPDIR" --full-native
touch "${TMPDIR}/bin/use-native"
else
./scripts/install_native_deps.sh "$TMPDIR"
fi
popd >/dev/null
echo "Staged version $VERSION for release in $TMPDIR"
if [[ "$INCLUDE_NATIVE" -eq 1 ]]; then
echo "Test Rust:"
echo " node ${TMPDIR}/bin/codex.js --help"
else
echo "Test Node:"
echo " node ${TMPDIR}/bin/codex.js --help"
fi
# Print final hint for convenience
if [[ "$INCLUDE_NATIVE" -eq 1 ]]; then
echo "Next: cd \"$TMPDIR\" && npm publish --tag native"
else
echo "Next: cd \"$TMPDIR\" && npm publish"
fi
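Typical invocations, per the NOTE in the header (the `--` separator forwards flags through pnpm to the script):

```
# Stage a JS-only package in a fresh temp directory
pnpm --filter codex-cli stage-release

# Stage a "fat" package that bundles the Rust binaries, then publish it
# under the `native` dist-tag as the final hint suggests
pnpm --filter codex-cli stage-release -- --native
```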

View File

@@ -1,12 +1,13 @@
import type { ApprovalPolicy } from "./approvals";
import type { AppConfig } from "./utils/config";
import type { TerminalChatSession } from "./utils/session.js";
import type { ResponseItem } from "openai/resources/responses/responses";
import TerminalChat from "./components/chat/terminal-chat";
import TerminalChatPastRollout from "./components/chat/terminal-chat-past-rollout";
import { checkInGit } from "./utils/check-in-git";
import { CLI_VERSION, type TerminalChatSession } from "./utils/session.js";
import { onExit } from "./utils/terminal";
import { CLI_VERSION } from "./version";
import { ConfirmInput } from "@inkjs/ui";
import { Box, Text, useApp, useStdin } from "ink";
import React, { useMemo, useState } from "react";
@@ -49,6 +50,7 @@ export default function App({
<TerminalChatPastRollout
session={rollout.session}
items={rollout.items}
fileOpener={config.fileOpener}
/>
);
}

View File

@@ -281,12 +281,14 @@ export function resolvePathAgainstWorkdir(
candidatePath: string,
workdir: string | undefined,
): string {
if (path.isAbsolute(candidatePath)) {
return candidatePath;
// Normalize candidatePath to prevent path traversal attacks
const normalizedCandidatePath = path.normalize(candidatePath);
if (path.isAbsolute(normalizedCandidatePath)) {
return normalizedCandidatePath;
} else if (workdir != null) {
return path.resolve(workdir, candidatePath);
return path.resolve(workdir, normalizedCandidatePath);
} else {
return path.resolve(candidatePath);
return path.resolve(normalizedCandidatePath);
}
}
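To make the normalization step concrete, `path.normalize` collapses `.` and `..` segments before the absolute/relative branch runs (Node.js, illustrative values):

```
import path from "path";

path.normalize("src/../etc/passwd");   // -> "etc/passwd"
path.normalize("/var/www/../../etc");  // -> "/etc"
// An absolute result is returned as-is; a relative one is resolved
// against workdir (when set) or the current working directory.
```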
@@ -363,6 +365,11 @@ export function isSafeCommand(
reason: "View file contents",
group: "Reading files",
};
case "nl":
return {
reason: "View file with line numbers",
group: "Reading files",
};
case "rg":
return {
reason: "Ripgrep search",
@@ -446,11 +453,15 @@ export function isSafeCommand(
}
break;
case "sed":
// We allow two types of sed invocations:
// 1. `sed -n 1,200p FILE`
// 2. `sed -n 1,200p` because the file is passed via stdin, e.g.,
// `nl -ba README.md | sed -n '1,200p'`
if (
cmd1 === "-n" &&
isValidSedNArg(cmd2) &&
typeof cmd3 === "string" &&
command.length === 4
(command.length === 3 ||
(typeof cmd3 === "string" && command.length === 4))
) {
return {
reason: "Sed print subset",

View File

@@ -1,6 +1,19 @@
#!/usr/bin/env node
import "dotenv/config";
// Exit early if on an older version of Node.js (< 22)
const major = process.versions.node.split(".").map(Number)[0]!;
if (major < 22) {
// eslint-disable-next-line no-console
console.error(
"\n" +
"Codex CLI requires Node.js version 22 or newer.\n" +
`You are running Node.js v${process.versions.node}.\n` +
"Please upgrade Node.js: https://nodejs.org/en/download/\n",
);
process.exit(1);
}
// Hack to suppress deprecation warnings (punycode)
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(process as any).noDeprecation = true;
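A quick way to confirm which runtime the version gate above will see:

```
node -p "process.versions.node"   # must print 22.x or newer for Codex CLI
```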
@@ -14,16 +27,20 @@ import type { ReasoningEffort } from "openai/resources.mjs";
import App from "./app";
import { runSinglePass } from "./cli-singlepass";
import SessionsOverlay from "./components/sessions-overlay.js";
import { AgentLoop } from "./utils/agent/agent-loop";
import { ReviewDecision } from "./utils/agent/review";
import { AutoApprovalMode } from "./utils/auto-approval-mode";
import { checkForUpdates } from "./utils/check-updates";
import {
getApiKey,
loadConfig,
PRETTY_PRINT,
INSTRUCTIONS_FILEPATH,
} from "./utils/config";
import {
getApiKey as fetchApiKey,
maybeRedeemCredits,
} from "./utils/get-api-key";
import { createInputItem } from "./utils/input-utils";
import { initLogger } from "./utils/logger/log";
import { isModelSupportedForResponses } from "./utils/model-utils.js";
@@ -34,6 +51,7 @@ import { spawnSync } from "child_process";
import fs from "fs";
import { render } from "ink";
import meow from "meow";
import os from "os";
import path from "path";
import React from "react";
@@ -56,10 +74,13 @@ const cli = meow(
--version Print version and exit
-h, --help Show usage and exit
-m, --model <model> Model to use for completions (default: o4-mini)
-m, --model <model> Model to use for completions (default: codex-mini-latest)
-p, --provider <provider> Provider to use for completions (default: openai)
-i, --image <path> Path(s) to image files to include as input
-v, --view <rollout> Inspect a previously saved rollout instead of starting a session
--history Browse previous sessions
--login Start a new sign in flow
--free Retry redeeming free credits
-q, --quiet Non-interactive mode that only prints the assistant's final output
-c, --config Open the instructions file in your editor
-w, --writable-root <path> Writable folder for sandbox in full-auto mode (can be specified multiple times)
@@ -68,7 +89,7 @@ const cli = meow(
--auto-edit Automatically approve file edits; still prompt for commands
--full-auto Automatically approve edits and commands when executed in the sandbox
--no-project-doc Do not automatically include the repository's 'codex.md'
--no-project-doc Do not automatically include the repository's 'AGENTS.md'
--project-doc <file> Include an additional markdown file at <file> as context
--full-stdout Do not truncate stdout/stderr from command outputs
--notify Enable desktop notifications for responses
@@ -79,6 +100,8 @@ const cli = meow(
--flex-mode Use "flex-mode" processing mode for the request (only supported
with models o3 and o4-mini)
--reasoning <effort> Set the reasoning effort level (low, medium, high) (default: high)
Dangerous options
--dangerously-auto-approve-everything
Skip all confirmation prompts and execute commands without
@@ -102,6 +125,9 @@ const cli = meow(
help: { type: "boolean", aliases: ["h"] },
version: { type: "boolean", description: "Print version and exit" },
view: { type: "string" },
history: { type: "boolean", description: "Browse previous sessions" },
login: { type: "boolean", description: "Force a new sign in flow" },
free: { type: "boolean", description: "Retry redeeming free credits" },
model: { type: "string", aliases: ["m"] },
provider: { type: "string", aliases: ["p"] },
image: { type: "string", isMultiple: true, aliases: ["i"] },
@@ -144,7 +170,7 @@ const cli = meow(
},
noProjectDoc: {
type: "boolean",
description: "Disable automatic inclusion of project-level codex.md",
description: "Disable automatic inclusion of project-level AGENTS.md",
},
projectDoc: {
type: "string",
@@ -259,11 +285,82 @@ let config = loadConfig(undefined, undefined, {
isFullContext: fullContextMode,
});
const prompt = cli.input[0];
// `prompt` can be updated later when the user resumes a previous session
// via the `--history` flag. Therefore it must be declared with `let` rather
// than `const`.
let prompt = cli.input[0];
const model = cli.flags.model ?? config.model;
const imagePaths = cli.flags.image;
const provider = cli.flags.provider ?? config.provider ?? "openai";
const apiKey = getApiKey(provider);
const client = {
issuer: "https://auth.openai.com",
client_id: "app_EMoamEEZ73f0CkXaXp7hrann",
};
let apiKey = "";
let savedTokens:
| {
id_token?: string;
access_token?: string;
refresh_token: string;
}
| undefined;
// Try to load existing auth file if present
try {
const home = os.homedir();
const authDir = path.join(home, ".codex");
const authFile = path.join(authDir, "auth.json");
if (fs.existsSync(authFile)) {
const data = JSON.parse(fs.readFileSync(authFile, "utf-8"));
savedTokens = data.tokens;
const lastRefreshTime = data.last_refresh
? new Date(data.last_refresh).getTime()
: 0;
const expired = Date.now() - lastRefreshTime > 28 * 24 * 60 * 60 * 1000;
if (data.OPENAI_API_KEY && !expired) {
apiKey = data.OPENAI_API_KEY;
}
}
} catch {
// ignore errors
}
if (cli.flags.login) {
apiKey = await fetchApiKey(client.issuer, client.client_id);
try {
const home = os.homedir();
const authDir = path.join(home, ".codex");
const authFile = path.join(authDir, "auth.json");
if (fs.existsSync(authFile)) {
const data = JSON.parse(fs.readFileSync(authFile, "utf-8"));
savedTokens = data.tokens;
}
} catch {
/* ignore */
}
} else if (!apiKey) {
apiKey = await fetchApiKey(client.issuer, client.client_id);
}
// Ensure the API key is available as an environment variable for legacy code
process.env["OPENAI_API_KEY"] = apiKey;
if (cli.flags.free) {
// eslint-disable-next-line no-console
console.log(`${chalk.bold("codex --free")} attempting to redeem credits...`);
if (!savedTokens?.refresh_token) {
apiKey = await fetchApiKey(client.issuer, client.client_id, true);
// fetchApiKey includes credit redemption as the end of the flow
} else {
await maybeRedeemCredits(
client.issuer,
client.client_id,
savedTokens.refresh_token,
savedTokens.id_token,
);
}
}
// Set of providers that don't require API keys
const NO_API_KEY_REQUIRED = new Set(["ollama"]);
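For orientation, the auth bootstrap in this hunk expects `~/.codex/auth.json` to look roughly like the following (field names taken from the reads above; a shape sketch, not an official schema):

```
// Hypothetical TypeScript shape of ~/.codex/auth.json as consumed above:
interface AuthFile {
  OPENAI_API_KEY?: string; // reused unless last_refresh is > 28 days old
  last_refresh?: string;   // timestamp of the most recent token refresh
  tokens?: {
    id_token?: string;
    access_token?: string;
    refresh_token: string; // needed for `codex --free` credit redemption
  };
}
```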
@@ -306,8 +403,8 @@ config = {
model: model ?? config.model,
notify: Boolean(cli.flags.notify),
reasoningEffort:
(cli.flags.reasoning as ReasoningEffort | undefined) ?? "high",
flexMode: Boolean(cli.flags.flexMode),
(cli.flags.reasoning as ReasoningEffort | undefined) ?? "medium",
flexMode: cli.flags.flexMode || (config.flexMode ?? false),
provider,
disableResponseStorage,
};
@@ -321,15 +418,19 @@ try {
}
// For --flex-mode, validate and exit if incorrect.
if (cli.flags.flexMode) {
if (config.flexMode) {
const allowedFlexModels = new Set(["o3", "o4-mini"]);
if (!allowedFlexModels.has(config.model)) {
// eslint-disable-next-line no-console
console.error(
`The --flex-mode option is only supported when using the 'o3' or 'o4-mini' models. ` +
`Current model: '${config.model}'.`,
);
process.exit(1);
if (cli.flags.flexMode) {
// eslint-disable-next-line no-console
console.error(
`The --flex-mode option is only supported when using the 'o3' or 'o4-mini' models. ` +
`Current model: '${config.model}'.`,
);
process.exit(1);
} else {
config.flexMode = false;
}
}
}
@@ -349,6 +450,46 @@ if (
let rollout: AppRollout | undefined;
// For --history, show session selector and optionally update prompt or rollout.
if (cli.flags.history) {
const result: { path: string; mode: "view" | "resume" } | null =
await new Promise((resolve) => {
const instance = render(
React.createElement(SessionsOverlay, {
onView: (p: string) => {
instance.unmount();
resolve({ path: p, mode: "view" });
},
onResume: (p: string) => {
instance.unmount();
resolve({ path: p, mode: "resume" });
},
onExit: () => {
instance.unmount();
resolve(null);
},
}),
);
});
if (!result) {
process.exit(0);
}
if (result.mode === "view") {
try {
const content = fs.readFileSync(result.path, "utf-8");
rollout = JSON.parse(content) as AppRollout;
} catch (error) {
// eslint-disable-next-line no-console
console.error("Error reading session file:", error);
process.exit(1);
}
} else {
prompt = `Resume this session: ${result.path}`;
}
}
// For --view, optionally load an existing rollout from disk, display it and exit.
if (cli.flags.view) {
const viewPath = cli.flags.view;

View File

@@ -1,6 +1,7 @@
import type { TerminalHeaderProps } from "./terminal-header.js";
import type { GroupedResponseItem } from "./use-message-grouping.js";
import type { ResponseItem } from "openai/resources/responses/responses.mjs";
import type { FileOpenerScheme } from "src/utils/config.js";
import TerminalChatResponseItem from "./terminal-chat-response-item.js";
import TerminalHeader from "./terminal-header.js";
@@ -19,11 +20,13 @@ type MessageHistoryProps = {
confirmationPrompt: React.ReactNode;
loading: boolean;
headerProps: TerminalHeaderProps;
fileOpener: FileOpenerScheme | undefined;
};
const MessageHistory: React.FC<MessageHistoryProps> = ({
batch,
headerProps,
fileOpener,
}) => {
const messages = batch.map(({ item }) => item!);
@@ -68,7 +71,10 @@ const MessageHistory: React.FC<MessageHistoryProps> = ({
message.type === "message" && message.role === "user" ? 0 : 1
}
>
<TerminalChatResponseItem item={message} />
<TerminalChatResponseItem
item={message}
fileOpener={fileOpener}
/>
</Box>
);
}}

View File

@@ -137,6 +137,9 @@ export interface MultilineTextEditorProps {
// Called when the internal text buffer updates.
readonly onChange?: (text: string) => void;
// Optional initial cursor position (character offset)
readonly initialCursorOffset?: number;
}
// Expose a minimal imperative API so parent components (e.g. TerminalChatInput)
@@ -169,6 +172,7 @@ const MultilineTextEditorInner = (
onSubmit,
focus = true,
onChange,
initialCursorOffset,
}: MultilineTextEditorProps,
ref: React.Ref<MultilineTextEditorHandle | null>,
): React.ReactElement => {
@@ -176,7 +180,7 @@ const MultilineTextEditorInner = (
// Editor State
// ---------------------------------------------------------------------------
const buffer = useRef(new TextBuffer(initialText));
const buffer = useRef(new TextBuffer(initialText, initialCursorOffset));
const [version, setVersion] = useState(0);
// Keep track of the current terminal size so that the editor grows/shrinks

View File

@@ -1,5 +1,6 @@
import type { MultilineTextEditorHandle } from "./multiline-editor";
import type { ReviewDecision } from "../../utils/agent/review.js";
import type { FileSystemSuggestion } from "../../utils/file-system-suggestions.js";
import type { HistoryEntry } from "../../utils/storage/command-history.js";
import type {
ResponseInputItem,
@@ -11,6 +12,7 @@ import { TerminalChatCommandReview } from "./terminal-chat-command-review.js";
import TextCompletions from "./terminal-chat-completions.js";
import { loadConfig } from "../../utils/config.js";
import { getFileSystemSuggestions } from "../../utils/file-system-suggestions.js";
import { expandFileTags } from "../../utils/file-tag-utils";
import { createInputItem } from "../../utils/input-utils.js";
import { log } from "../../utils/logger/log.js";
import { setSessionId } from "../../utils/session.js";
@@ -52,6 +54,7 @@ export default function TerminalChatInput({
openApprovalOverlay,
openHelpOverlay,
openDiffOverlay,
openSessionsOverlay,
onCompact,
interruptAgent,
active,
@@ -75,6 +78,7 @@ export default function TerminalChatInput({
openApprovalOverlay: () => void;
openHelpOverlay: () => void;
openDiffOverlay: () => void;
openSessionsOverlay: () => void;
onCompact: () => void;
interruptAgent: () => void;
active: boolean;
@@ -92,16 +96,120 @@ export default function TerminalChatInput({
const [historyIndex, setHistoryIndex] = useState<number | null>(null);
const [draftInput, setDraftInput] = useState<string>("");
const [skipNextSubmit, setSkipNextSubmit] = useState<boolean>(false);
const [fsSuggestions, setFsSuggestions] = useState<Array<string>>([]);
const [fsSuggestions, setFsSuggestions] = useState<
Array<FileSystemSuggestion>
>([]);
const [selectedCompletion, setSelectedCompletion] = useState<number>(-1);
// Multiline text editor key to force remount after submission
const [editorKey, setEditorKey] = useState(0);
const [editorState, setEditorState] = useState<{
key: number;
initialCursorOffset?: number;
}>({ key: 0 });
// Imperative handle from the multiline editor so we can query caret position
const editorRef = useRef<MultilineTextEditorHandle | null>(null);
// Track the caret row across keystrokes
const prevCursorRow = useRef<number | null>(null);
const prevCursorWasAtLastRow = useRef<boolean>(false);
// --- Helper for updating input, remounting editor, and moving cursor to end ---
const applyFsSuggestion = useCallback((newInputText: string) => {
setInput(newInputText);
setEditorState((s) => ({
key: s.key + 1,
initialCursorOffset: newInputText.length,
}));
}, []);
// --- Helper for updating file system suggestions ---
function updateFsSuggestions(
txt: string,
alwaysUpdateSelection: boolean = false,
) {
// Clear file system completions if a space is typed
if (txt.endsWith(" ")) {
setFsSuggestions([]);
setSelectedCompletion(-1);
} else {
// Determine the current token (last whitespace-separated word)
const words = txt.trim().split(/\s+/);
const lastWord = words[words.length - 1] ?? "";
const shouldUpdateSelection =
lastWord.startsWith("@") || alwaysUpdateSelection;
// Strip optional leading '@' for the path prefix
let pathPrefix: string;
if (lastWord.startsWith("@")) {
pathPrefix = lastWord.slice(1);
// If only '@' is typed, list everything in the current directory
pathPrefix = pathPrefix.length === 0 ? "./" : pathPrefix;
} else {
pathPrefix = lastWord;
}
if (shouldUpdateSelection) {
const completions = getFileSystemSuggestions(pathPrefix);
setFsSuggestions(completions);
if (completions.length > 0) {
setSelectedCompletion((prev) =>
prev < 0 || prev >= completions.length ? 0 : prev,
);
} else {
setSelectedCompletion(-1);
}
} else if (fsSuggestions.length > 0) {
// Token cleared → clear menu
setFsSuggestions([]);
setSelectedCompletion(-1);
}
}
}
/**
* Result of replacing text with a file system suggestion
*/
interface ReplacementResult {
/** The new text with the suggestion applied */
text: string;
/** The selected suggestion if a replacement was made */
suggestion: FileSystemSuggestion | null;
/** Whether a replacement was actually made */
wasReplaced: boolean;
}
// --- Helper for replacing input with file system suggestion ---
function getFileSystemSuggestion(
txt: string,
requireAtPrefix: boolean = false,
): ReplacementResult {
if (fsSuggestions.length === 0 || selectedCompletion < 0) {
return { text: txt, suggestion: null, wasReplaced: false };
}
const words = txt.trim().split(/\s+/);
const lastWord = words[words.length - 1] ?? "";
// Check if @ prefix is required and the last word doesn't have it
if (requireAtPrefix && !lastWord.startsWith("@")) {
return { text: txt, suggestion: null, wasReplaced: false };
}
const selected = fsSuggestions[selectedCompletion];
if (!selected) {
return { text: txt, suggestion: null, wasReplaced: false };
}
const replacement = lastWord.startsWith("@")
? `@${selected.path}`
: selected.path;
words[words.length - 1] = replacement;
return {
text: words.join(" "),
suggestion: selected,
wasReplaced: true,
};
}
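A worked trace of the two helpers above (illustrative paths):

```
// input: "refactor @src/ut"
//   updateFsSuggestions: lastWord = "@src/ut" -> pathPrefix = "src/ut"
//     -> fsSuggestions = [{ path: "src/utils/", isDirectory: true }, ...]
// Tab pressed:
//   getFileSystemSuggestion: "@src/ut" -> "@src/utils/" (keeps '@' prefix)
//   -> { text: "refactor @src/utils/", wasReplaced: true }
// Enter on a directory suggestion re-opens the menu instead of submitting
// (see the wasReplaced && suggestion?.isDirectory branch in onSubmit).
```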
// Load command history on component mount
useEffect(() => {
async function loadHistory() {
@@ -174,6 +282,9 @@ export default function TerminalChatInput({
case "/history":
openOverlay();
break;
case "/sessions":
openSessionsOverlay();
break;
case "/help":
openHelpOverlay();
break;
@@ -223,21 +334,12 @@ export default function TerminalChatInput({
}
if (_key.tab && selectedCompletion >= 0) {
const words = input.trim().split(/\s+/);
const selected = fsSuggestions[selectedCompletion];
if (words.length > 0 && selected) {
words[words.length - 1] = selected;
const newText = words.join(" ");
setInput(newText);
// Force remount of the editor with the new text
setEditorKey((k) => k + 1);
// We need to move the cursor to the end after editor remounts
setTimeout(() => {
editorRef.current?.moveCursorToEnd?.();
}, 0);
const { text: newText, wasReplaced } =
getFileSystemSuggestion(input);
// Only proceed if the text was actually changed
if (wasReplaced) {
applyFsSuggestion(newText);
setFsSuggestions([]);
setSelectedCompletion(-1);
}
@@ -277,7 +379,7 @@ export default function TerminalChatInput({
setInput(history[newIndex]?.command ?? "");
// Re-mount the editor so it picks up the new initialText
setEditorKey((k) => k + 1);
setEditorState((s) => ({ key: s.key + 1 }));
return; // handled
}
@@ -296,28 +398,23 @@ export default function TerminalChatInput({
if (newIndex >= history.length) {
setHistoryIndex(null);
setInput(draftInput);
setEditorKey((k) => k + 1);
setEditorState((s) => ({ key: s.key + 1 }));
} else {
setHistoryIndex(newIndex);
setInput(history[newIndex]?.command ?? "");
setEditorKey((k) => k + 1);
setEditorState((s) => ({ key: s.key + 1 }));
}
return; // handled
}
// Otherwise let it propagate
}
if (_key.tab) {
const words = input.split(/\s+/);
const mostRecentWord = words[words.length - 1];
if (mostRecentWord === undefined || mostRecentWord === "") {
return;
}
const completions = getFileSystemSuggestions(mostRecentWord);
setFsSuggestions(completions);
if (completions.length > 0) {
setSelectedCompletion(0);
}
// Defer filesystem suggestion logic to onSubmit if enter key is pressed
if (!_key.return) {
// Pressing tab should trigger the file system suggestions
const shouldUpdateSelection = _key.tab;
const targetInput = _key.delete ? input.slice(0, -1) : input + _input;
updateFsSuggestions(targetInput, shouldUpdateSelection);
}
}
@@ -392,6 +489,10 @@ export default function TerminalChatInput({
setInput("");
openOverlay();
return;
} else if (inputValue === "/sessions") {
setInput("");
openSessionsOverlay();
return;
} else if (inputValue === "/help") {
setInput("");
openHelpOverlay();
@@ -492,7 +593,7 @@ export default function TerminalChatInput({
try {
const os = await import("node:os");
const { CLI_VERSION } = await import("../../utils/session.js");
const { CLI_VERSION } = await import("../../version.js");
const { buildBugReportUrl } = await import(
"../../utils/bug-report.js"
);
@@ -599,7 +700,10 @@ export default function TerminalChatInput({
);
text = text.trim();
const inputItem = await createInputItem(text, images);
// Expand @file tokens into XML blocks for the model
const expandedText = await expandFileTags(text);
const inputItem = await createInputItem(expandedText, images);
submitInput([inputItem]);
// Get config for history persistence.
@@ -633,6 +737,7 @@ export default function TerminalChatInput({
openModelOverlay,
openHelpOverlay,
openDiffOverlay,
openSessionsOverlay,
history,
onCompact,
skipNextSubmit,
@@ -673,28 +778,30 @@ export default function TerminalChatInput({
setHistoryIndex(null);
}
setInput(txt);
// Clear tab completions if a space is typed
if (txt.endsWith(" ")) {
setFsSuggestions([]);
setSelectedCompletion(-1);
} else if (fsSuggestions.length > 0) {
// Update file suggestions as user types
const words = txt.trim().split(/\s+/);
const mostRecentWord =
words.length > 0 ? words[words.length - 1] : "";
if (mostRecentWord !== undefined) {
setFsSuggestions(getFileSystemSuggestions(mostRecentWord));
}
}
}}
key={editorKey}
key={editorState.key}
initialCursorOffset={editorState.initialCursorOffset}
initialText={input}
height={6}
focus={active}
onSubmit={(txt) => {
onSubmit(txt);
setEditorKey((k) => k + 1);
// If final token is an @path, replace with filesystem suggestion if available
const {
text: replacedText,
suggestion,
wasReplaced,
} = getFileSystemSuggestion(txt, true);
// If we replaced @path token with a directory, don't submit
if (wasReplaced && suggestion?.isDirectory) {
applyFsSuggestion(replacedText);
// Update suggestions for the new directory
updateFsSuggestions(replacedText, true);
return;
}
onSubmit(replacedText);
setEditorState((s) => ({ key: s.key + 1 }));
setInput("");
setHistoryIndex(null);
setDraftInput("");
@@ -741,7 +848,7 @@ export default function TerminalChatInput({
</Text>
) : fsSuggestions.length > 0 ? (
<TextCompletions
completions={fsSuggestions}
completions={fsSuggestions.map((suggestion) => suggestion.path)}
selectedCompletion={selectedCompletion}
displayLimit={5}
/>

View File

@@ -1,5 +1,6 @@
import type { TerminalChatSession } from "../../utils/session.js";
import type { ResponseItem } from "openai/resources/responses/responses";
import type { FileOpenerScheme } from "src/utils/config.js";
import TerminalChatResponseItem from "./terminal-chat-response-item";
import { Box, Text } from "ink";
@@ -8,9 +9,11 @@ import React from "react";
export default function TerminalChatPastRollout({
session,
items,
fileOpener,
}: {
session: TerminalChatSession;
items: Array<ResponseItem>;
fileOpener: FileOpenerScheme | undefined;
}): React.ReactElement {
const { version, id: sessionId, model } = session;
return (
@@ -51,9 +54,13 @@ export default function TerminalChatPastRollout({
{React.useMemo(
() =>
items.map((item, key) => (
<TerminalChatResponseItem key={key} item={item} />
<TerminalChatResponseItem
key={key}
item={item}
fileOpener={fileOpener}
/>
)),
[items],
[items, fileOpener],
)}
</Box>
</Box>

View File

@@ -8,23 +8,30 @@ import type {
ResponseOutputMessage,
ResponseReasoningItem,
} from "openai/resources/responses/responses";
import type { FileOpenerScheme } from "src/utils/config";
import { useTerminalSize } from "../../hooks/use-terminal-size";
import { collapseXmlBlocks } from "../../utils/file-tag-utils";
import { parseToolCall, parseToolCallOutput } from "../../utils/parsers";
import chalk, { type ForegroundColorName } from "chalk";
import { Box, Text } from "ink";
import { parse, setOptions } from "marked";
import TerminalRenderer from "marked-terminal";
import path from "path";
import React, { useEffect, useMemo } from "react";
import { formatCommandForDisplay } from "src/format-command.js";
import supportsHyperlinks from "supports-hyperlinks";
export default function TerminalChatResponseItem({
item,
fullStdout = false,
setOverlayMode,
fileOpener,
}: {
item: ResponseItem;
fullStdout?: boolean;
setOverlayMode?: React.Dispatch<React.SetStateAction<OverlayModeType>>;
fileOpener: FileOpenerScheme | undefined;
}): React.ReactElement {
switch (item.type) {
case "message":
@@ -32,10 +39,15 @@ export default function TerminalChatResponseItem({
<TerminalChatResponseMessage
setOverlayMode={setOverlayMode}
message={item}
fileOpener={fileOpener}
/>
);
// @ts-expect-error new item types aren't in SDK yet
case "local_shell_call":
case "function_call":
return <TerminalChatResponseToolCall message={item} />;
// @ts-expect-error new item types aren't in SDK yet
case "local_shell_call_output":
case "function_call_output":
return (
<TerminalChatResponseToolCallOutput
@@ -49,7 +61,9 @@ export default function TerminalChatResponseItem({
// @ts-expect-error `reasoning` is not in the responses API yet
if (item.type === "reasoning") {
return <TerminalChatResponseReasoning message={item} />;
return (
<TerminalChatResponseReasoning message={item} fileOpener={fileOpener} />
);
}
return <TerminalChatResponseGenericMessage message={item} />;
@@ -77,8 +91,10 @@ export default function TerminalChatResponseItem({
export function TerminalChatResponseReasoning({
message,
fileOpener,
}: {
message: ResponseReasoningItem & { duration_ms?: number };
fileOpener: FileOpenerScheme | undefined;
}): React.ReactElement | null {
// Only render when there is a reasoning summary
if (!message.summary || message.summary.length === 0) {
@@ -91,7 +107,7 @@ export function TerminalChatResponseReasoning({
return (
<Box key={key} flexDirection="column">
{s.headline && <Text bold>{s.headline}</Text>}
<Markdown>{s.text}</Markdown>
<Markdown fileOpener={fileOpener}>{s.text}</Markdown>
</Box>
);
})}
@@ -107,9 +123,11 @@ const colorsByRole: Record<string, ForegroundColorName> = {
function TerminalChatResponseMessage({
message,
setOverlayMode,
fileOpener,
}: {
message: ResponseInputMessageItem | ResponseOutputMessage;
setOverlayMode?: React.Dispatch<React.SetStateAction<OverlayModeType>>;
fileOpener: FileOpenerScheme | undefined;
}) {
// auto switch to model mode if the system message contains "has been deprecated"
useEffect(() => {
@@ -128,7 +146,7 @@ function TerminalChatResponseMessage({
<Text bold color={colorsByRole[message.role] || "gray"}>
{message.role === "assistant" ? "codex" : message.role}
</Text>
<Markdown>
<Markdown fileOpener={fileOpener}>
{message.content
.map(
(c) =>
@@ -137,7 +155,7 @@ function TerminalChatResponseMessage({
: c.type === "refusal"
? c.refusal
: c.type === "input_text"
? c.text
? collapseXmlBlocks(c.text)
: c.type === "input_image"
? "<Image>"
: c.type === "input_file"
@@ -153,16 +171,28 @@ function TerminalChatResponseMessage({
function TerminalChatResponseToolCall({
message,
}: {
message: ResponseFunctionToolCallItem;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
message: ResponseFunctionToolCallItem | any;
}) {
const details = parseToolCall(message);
let workdir: string | undefined;
let cmdReadableText: string | undefined;
if (message.type === "function_call") {
const details = parseToolCall(message);
workdir = details?.workdir;
cmdReadableText = details?.cmdReadableText;
} else if (message.type === "local_shell_call") {
const action = message.action;
workdir = action.working_directory;
cmdReadableText = formatCommandForDisplay(action.command);
}
return (
<Box flexDirection="column" gap={1}>
<Text color="magentaBright" bold>
command
{workdir ? <Text dimColor>{` (${workdir})`}</Text> : ""}
</Text>
<Text>
<Text dimColor>$</Text> {details?.cmdReadableText}
<Text dimColor>$</Text> {cmdReadableText}
</Text>
</Box>
);
@@ -172,7 +202,8 @@ function TerminalChatResponseToolCallOutput({
message,
fullStdout,
}: {
message: ResponseFunctionToolCallOutputItem;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
message: ResponseFunctionToolCallOutputItem | any;
fullStdout: boolean;
}) {
const { output, metadata } = parseToolCallOutput(message.output);
@@ -239,26 +270,91 @@ export function TerminalChatResponseGenericMessage({
export type MarkdownProps = TerminalRendererOptions & {
children: string;
fileOpener: FileOpenerScheme | undefined;
/** Base path for resolving relative file citation paths. */
cwd?: string;
};
export function Markdown({
children,
fileOpener,
cwd,
...options
}: MarkdownProps): React.ReactElement {
const size = useTerminalSize();
const rendered = React.useMemo(() => {
const linkifiedMarkdown = rewriteFileCitations(children, fileOpener, cwd);
// Configure marked for this specific render
setOptions({
// @ts-expect-error missing parser, space props
renderer: new TerminalRenderer({ ...options, width: size.columns }),
});
const parsed = parse(children, { async: false }).trim();
const parsed = parse(linkifiedMarkdown, { async: false }).trim();
// Remove the truncation logic
return parsed;
// eslint-disable-next-line react-hooks/exhaustive-deps -- options is an object of primitives
}, [children, size.columns, size.rows]);
}, [
children,
size.columns,
size.rows,
fileOpener,
supportsHyperlinks.stdout,
chalk.level,
]);
return <Text>{rendered}</Text>;
}
/** Regex to match citations for source files (hence the `F:` prefix). */
const citationRegex = new RegExp(
[
// Opening marker
"【",
// Capture group 1: file ID or name (anything except '†')
"F:([^†]+)",
// Field separator
"†",
// Capture group 2: start line (digits)
"L(\\d+)",
// Non-capturing group for optional end line
"(?:",
// Capture group 3: end line (digits or '?')
"-L(\\d+|\\?)",
// End of optional group (may not be present)
")?",
// Closing marker
"】",
].join(""),
"g", // Global flag
);
function rewriteFileCitations(
markdown: string,
fileOpener: FileOpenerScheme | undefined,
cwd: string = process.cwd(),
): string {
citationRegex.lastIndex = 0;
return markdown.replace(citationRegex, (_match, file, start, _end) => {
const absPath = path.resolve(cwd, file);
if (!fileOpener) {
return `[${file}](${absPath})`;
}
const uri = `${fileOpener}://file${absPath}:${start}`;
const label = `${file}:${start}`;
// In practice, sometimes multiple citations for the same file, but with a
// different line number, are shown sequentially, so we:
// - include the line number in the label to disambiguate them
// - add a space after the link to make it easier to read
return `[${label}](${uri}) `;
});
}
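
For reference, a minimal sketch of how `rewriteFileCitations` transforms model output, assuming a `vscode` opener and an illustrative repo path:

```ts
// Hypothetical input/output for the citation rewriting above.
const citation = /【F:([^†]+)†L(\d+)(?:-L(\d+|\?))?】/g;
const input = "See 【F:src/app.ts†L42】 for the fix.";
const output = input.replace(
  citation,
  (_m, file, start) => `[${file}:${start}](vscode://file/repo/${file}:${start}) `,
);
// => "See [src/app.ts:42](vscode://file/repo/src/app.ts:42)  for the fix."
```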

View File

@@ -1,3 +1,4 @@
import type { AppRollout } from "../../app.js";
import type { ApplyPatchCommand, ApprovalPolicy } from "../../approvals.js";
import type { CommandConfirmation } from "../../utils/agent/agent-loop.js";
import type { AppConfig } from "../../utils/config.js";
@@ -5,6 +6,7 @@ import type { ColorName } from "chalk";
import type { ResponseItem } from "openai/resources/responses/responses.mjs";
import TerminalChatInput from "./terminal-chat-input.js";
import TerminalChatPastRollout from "./terminal-chat-past-rollout.js";
import { TerminalChatToolCallCommand } from "./terminal-chat-tool-call-command.js";
import TerminalMessageHistory from "./terminal-message-history.js";
import { formatCommandForDisplay } from "../../format-command.js";
@@ -13,7 +15,7 @@ import { useTerminalSize } from "../../hooks/use-terminal-size.js";
import { AgentLoop } from "../../utils/agent/agent-loop.js";
import { ReviewDecision } from "../../utils/agent/review.js";
import { generateCompactSummary } from "../../utils/compact-summary.js";
import { getBaseUrl, getApiKey, saveConfig } from "../../utils/config.js";
import { saveConfig } from "../../utils/config.js";
import { extractAppliedPatches as _extractAppliedPatches } from "../../utils/extract-applied-patches.js";
import { getGitDiff } from "../../utils/get-diff.js";
import { createInputItem } from "../../utils/input-utils.js";
@@ -23,24 +25,27 @@ import {
calculateContextPercentRemaining,
uniqueById,
} from "../../utils/model-utils.js";
import { CLI_VERSION } from "../../utils/session.js";
import { createOpenAIClient } from "../../utils/openai-client.js";
import { shortCwd } from "../../utils/short-path.js";
import { saveRollout } from "../../utils/storage/save-rollout.js";
import { CLI_VERSION } from "../../version.js";
import ApprovalModeOverlay from "../approval-mode-overlay.js";
import DiffOverlay from "../diff-overlay.js";
import HelpOverlay from "../help-overlay.js";
import HistoryOverlay from "../history-overlay.js";
import ModelOverlay from "../model-overlay.js";
import SessionsOverlay from "../sessions-overlay.js";
import chalk from "chalk";
import fs from "fs/promises";
import { Box, Text } from "ink";
import { spawn } from "node:child_process";
import OpenAI from "openai";
import React, { useEffect, useMemo, useRef, useState } from "react";
import { inspect } from "util";
export type OverlayModeType =
| "none"
| "history"
| "sessions"
| "model"
| "approval"
| "help"
@@ -78,10 +83,7 @@ async function generateCommandExplanation(
): Promise<string> {
try {
// Create a temporary OpenAI client
const oai = new OpenAI({
apiKey: getApiKey(config.provider),
baseURL: getBaseUrl(config.provider),
});
const oai = createOpenAIClient(config);
// Format the command for display
const commandForDisplay = formatCommandForDisplay(command);
@@ -194,6 +196,7 @@ export default function TerminalChat({
submitConfirmation,
} = useConfirmation();
const [overlayMode, setOverlayMode] = useState<OverlayModeType>("none");
const [viewRollout, setViewRollout] = useState<AppRollout | null>(null);
// Store the diff text when opening the diff overlay so the view isn't
// recomputed on every rerender while it is open.
@@ -457,6 +460,16 @@ export default function TerminalChat({
[items, model],
);
if (viewRollout) {
return (
<TerminalChatPastRollout
fileOpener={config.fileOpener}
session={viewRollout.session}
items={viewRollout.items}
/>
);
}
return (
<Box flexDirection="column">
<Box flexDirection="column">
@@ -483,6 +496,7 @@ export default function TerminalChat({
initialImagePaths,
flexModeEnabled: Boolean(config.flexMode),
}}
fileOpener={config.fileOpener}
/>
) : (
<Box>
@@ -511,6 +525,7 @@ export default function TerminalChat({
openModelOverlay={() => setOverlayMode("model")}
openApprovalOverlay={() => setOverlayMode("approval")}
openHelpOverlay={() => setOverlayMode("help")}
openSessionsOverlay={() => setOverlayMode("sessions")}
openDiffOverlay={() => {
const { isGitRepo, diff } = getGitDiff();
let text: string;
@@ -570,6 +585,25 @@ export default function TerminalChat({
{overlayMode === "history" && (
<HistoryOverlay items={items} onExit={() => setOverlayMode("none")} />
)}
{overlayMode === "sessions" && (
<SessionsOverlay
onView={async (p) => {
try {
const txt = await fs.readFile(p, "utf-8");
const data = JSON.parse(txt) as AppRollout;
setViewRollout(data);
setOverlayMode("none");
} catch {
setOverlayMode("none");
}
}}
onResume={(p) => {
setOverlayMode("none");
setInitialPrompt(`Resume this session: ${p}`);
}}
onExit={() => setOverlayMode("none")}
/>
)}
{overlayMode === "model" && (
<ModelOverlay
currentModel={model}

View File

@@ -2,6 +2,7 @@ import type { OverlayModeType } from "./terminal-chat.js";
import type { TerminalHeaderProps } from "./terminal-header.js";
import type { GroupedResponseItem } from "./use-message-grouping.js";
import type { ResponseItem } from "openai/resources/responses/responses.mjs";
import type { FileOpenerScheme } from "src/utils/config.js";
import TerminalChatResponseItem from "./terminal-chat-response-item.js";
import TerminalHeader from "./terminal-header.js";
@@ -23,6 +24,7 @@ type TerminalMessageHistoryProps = {
headerProps: TerminalHeaderProps;
fullStdout: boolean;
setOverlayMode: React.Dispatch<React.SetStateAction<OverlayModeType>>;
fileOpener: FileOpenerScheme | undefined;
};
const TerminalMessageHistory: React.FC<TerminalMessageHistoryProps> = ({
@@ -33,6 +35,7 @@ const TerminalMessageHistory: React.FC<TerminalMessageHistoryProps> = ({
thinkingSeconds: _thinkingSeconds,
fullStdout,
setOverlayMode,
fileOpener,
}) => {
// Flatten batch entries to response items.
const messages = useMemo(() => batch.map(({ item }) => item!), [batch]);
@@ -59,16 +62,25 @@ const TerminalMessageHistory: React.FC<TerminalMessageHistoryProps> = ({
key={`${message.id}-${index}`}
flexDirection="column"
marginLeft={
message.type === "message" && message.role === "user" ? 0 : 4
message.type === "message" &&
(message.role === "user" || message.role === "assistant")
? 0
: 4
}
marginTop={
message.type === "message" && message.role === "user" ? 0 : 1
}
marginBottom={
message.type === "message" && message.role === "assistant"
? 1
: 0
}
>
<TerminalChatResponseItem
item={message}
fullStdout={fullStdout}
setOverlayMode={setOverlayMode}
fileOpener={fileOpener}
/>
</Box>
);

View File

@@ -0,0 +1,130 @@
import type { TypeaheadItem } from "./typeahead-overlay.js";
import TypeaheadOverlay from "./typeahead-overlay.js";
import fs from "fs/promises";
import { Box, Text, useInput } from "ink";
import os from "os";
import path from "path";
import React, { useEffect, useState } from "react";
const SESSIONS_ROOT = path.join(os.homedir(), ".codex", "sessions");
export type SessionMeta = {
path: string;
timestamp: string;
userMessages: number;
toolCalls: number;
firstMessage: string;
};
async function loadSessions(): Promise<Array<SessionMeta>> {
try {
const entries = await fs.readdir(SESSIONS_ROOT);
const sessions: Array<SessionMeta> = [];
for (const entry of entries) {
if (!entry.endsWith(".json")) {
continue;
}
const filePath = path.join(SESSIONS_ROOT, entry);
try {
// eslint-disable-next-line no-await-in-loop
const content = await fs.readFile(filePath, "utf-8");
const data = JSON.parse(content) as {
session?: { timestamp?: string };
items?: Array<{
type: string;
role: string;
content: Array<{ text: string }>;
}>;
};
const items = Array.isArray(data.items) ? data.items : [];
const firstUser = items.find(
(i) => i?.type === "message" && i.role === "user",
);
const firstText =
firstUser?.content?.[0]?.text?.replace(/\n/g, " ").slice(0, 16) ?? "";
const userMessages = items.filter(
(i) => i?.type === "message" && i.role === "user",
).length;
const toolCalls = items.filter(
(i) => i?.type === "function_call",
).length;
sessions.push({
path: filePath,
timestamp: data.session?.timestamp || "",
userMessages,
toolCalls,
firstMessage: firstText,
});
} catch {
/* ignore invalid session */
}
}
sessions.sort((a, b) => b.timestamp.localeCompare(a.timestamp));
return sessions;
} catch {
return [];
}
}
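
As a concrete illustration, a rollout file under `~/.codex/sessions` would be summarized roughly like this (all values are made up):

```ts
// Hypothetical result of loadSessions() for a single rollout file.
const example: SessionMeta = {
  path: "/home/alice/.codex/sessions/rollout-2025-06-09.json",
  timestamp: "2025-06-09T16:27:56Z",
  userMessages: 3,
  toolCalls: 7,
  firstMessage: "Fix the flaky te", // first 16 chars of the first user message
};
```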
type Props = {
onView: (sessionPath: string) => void;
onResume: (sessionPath: string) => void;
onExit: () => void;
};
export default function SessionsOverlay({
onView,
onResume,
onExit,
}: Props): JSX.Element {
const [items, setItems] = useState<Array<TypeaheadItem>>([]);
const [mode, setMode] = useState<"view" | "resume">("view");
useEffect(() => {
(async () => {
const sessions = await loadSessions();
const formatted = sessions.map((s) => {
const ts = s.timestamp
? new Date(s.timestamp).toLocaleString(undefined, {
dateStyle: "short",
timeStyle: "short",
})
: "";
const first = s.firstMessage?.slice(0, 50);
const label = `${ts} · ${s.userMessages} msgs/${s.toolCalls} tools · ${first}`;
return { label, value: s.path } as TypeaheadItem;
});
setItems(formatted);
})();
}, []);
useInput((_input, key) => {
if (key.tab) {
setMode((m) => (m === "view" ? "resume" : "view"));
}
});
return (
<TypeaheadOverlay
title={mode === "view" ? "View session" : "Resume session"}
description={
<Box flexDirection="column">
<Text>
{mode === "view" ? "press enter to view" : "press enter to resume"}
</Text>
<Text dimColor>tab to toggle mode · esc to cancel</Text>
</Box>
}
initialItems={items}
onSelect={(value) => {
if (mode === "view") {
onView(value);
} else {
onResume(value);
}
}}
onExit={onExit}
/>
);
}

View File

@@ -5,13 +5,7 @@ import type { FileOperation } from "../utils/singlepass/file_ops";
import Spinner from "./vendor/ink-spinner"; // Thirdparty / vendor components
import TextInput from "./vendor/ink-text-input";
import {
OPENAI_TIMEOUT_MS,
OPENAI_ORGANIZATION,
OPENAI_PROJECT,
getBaseUrl,
getApiKey,
} from "../utils/config";
import { createOpenAIClient } from "../utils/openai-client";
import {
generateDiffSummary,
generateEditSummary,
@@ -26,7 +20,6 @@ import { EditedFilesSchema } from "../utils/singlepass/file_ops";
import * as fsSync from "fs";
import * as fsPromises from "fs/promises";
import { Box, Text, useApp, useInput } from "ink";
import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import path from "path";
import React, { useEffect, useState, useRef } from "react";
@@ -399,20 +392,7 @@ export function SinglePassApp({
files,
});
const headers: Record<string, string> = {};
if (OPENAI_ORGANIZATION) {
headers["OpenAI-Organization"] = OPENAI_ORGANIZATION;
}
if (OPENAI_PROJECT) {
headers["OpenAI-Project"] = OPENAI_PROJECT;
}
const openai = new OpenAI({
apiKey: getApiKey(config.provider),
baseURL: getBaseUrl(config.provider),
timeout: OPENAI_TIMEOUT_MS,
defaultHeaders: headers,
});
const openai = createOpenAIClient(config);
const chatResp = await openai.beta.chat.completions.parse({
model: config.model,
...(config.flexMode ? { service_tier: "flex" } : {}),

View File

@@ -100,11 +100,14 @@ export default class TextBuffer {
private clipboard: string | null = null;
constructor(text = "") {
constructor(text = "", initialCursorIdx = 0) {
this.lines = text.split("\n");
if (this.lines.length === 0) {
this.lines = [""];
}
// No need to reset the cursor on failure: the class already defaults the cursor position to (0, 0)
this.setCursorIdx(initialCursorIdx);
}
/* =======================================================================
@@ -122,6 +125,39 @@ export default class TextBuffer {
this.cursorCol = clamp(this.cursorCol, 0, this.lineLen(this.cursorRow));
}
/**
* Sets the cursor position based on a character offset from the start of the document.
* @param idx The character offset to move to (0-based)
* @returns true if successful, false if the index was invalid
*/
private setCursorIdx(idx: number): boolean {
// Reset preferred column since this is an explicit horizontal movement
this.preferredCol = null;
let remainingChars = idx;
let row = 0;
// Count characters line by line until we find the right position
while (row < this.lines.length) {
const lineLength = this.lineLen(row);
// Add 1 for the newline character (except for the last line)
const totalChars = lineLength + (row < this.lines.length - 1 ? 1 : 0);
if (remainingChars <= lineLength) {
this.cursorRow = row;
this.cursorCol = remainingChars;
return true;
}
// Move to next line, subtract this line's characters plus newline
remainingChars -= totalChars;
row++;
}
// If we get here, the index was too large
return false;
}
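
A worked example of the offset-to-cursor mapping above, as a standalone sketch (it counts UTF-16 units, whereas the real `lineLen` may count code points):

```ts
// Standalone sketch of setCursorIdx's offset walk (assumed semantics).
function cursorFromIdx(text: string, idx: number): { row: number; col: number } | null {
  const lines = text.split("\n");
  let remaining = idx;
  for (let row = 0; row < lines.length; row++) {
    const len = lines[row]!.length;
    if (remaining <= len) {
      return { row, col: remaining };
    }
    remaining -= len + 1; // skip this line plus its trailing newline
  }
  return null; // index past the end of the document
}

console.log(cursorFromIdx("ab\ncd", 3)); // { row: 1, col: 0 } (just after the newline)
```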
/* =====================================================================
* History helpers
* =================================================================== */
@@ -489,6 +525,22 @@ export default class TextBuffer {
end++;
}
/*
* After consuming the actual word we also want to swallow any immediate
* separator run that *follows* it so that a forward word-delete mirrors
* the behaviour of common shells/editors (and matches the expectations
* encoded in our test-suite).
*
* Example: given the text "foo bar baz" and the caret placed at the
* beginning of "bar" (index 4), we want Alt+Delete to turn the string
* into "foo␠baz" (single space). Without this extra loop we would stop
* right before the separating space, producing "foo␠␠baz".
*/
while (end < arr.length && !isWordChar(arr[end])) {
end++;
}
this.lines[this.cursorRow] =
cpSlice(line, 0, this.cursorCol) + cpSlice(line, end);
// caret stays in place
@@ -823,12 +875,42 @@ export default class TextBuffer {
// no `key.backspace` flag set. Treat that byte exactly like an ordinary
// Backspace for parity with textarea.rs and to make interactive tests
// feedable through the simpler `(ch, {}, vp)` path.
// ------------------------------------------------------------------
// Word-wise deletions
//
// macOS (and many terminals on Linux/BSD) maps the physical “Delete” key
// to a *backspace* operation emitting either the raw DEL (0x7f) byte
// or setting `key.backspace = true` in Ink's parsed event. Holding the
// Option/Alt modifier therefore *also* sends backspace semantics even
// though users colloquially refer to the shortcut as “⌥+Delete”.
//
// Historically we treated **modifier + Delete** as a *forward* word
// deletion. This behaviour, however, diverges from the default found
// in shells (zsh, bash, fish, etc.) and native macOS text fields where
// ⌥+Delete removes the word *to the left* of the caret. Update the
// mapping so that both
//
// • ⌥/Alt/Meta + Backspace and
// • ⌥/Alt/Meta + Delete
//
// perform a **backward** word deletion. We keep the ability to delete
// the *next* word by requiring an additional Shift modifier, a common
// binding on full-size keyboards that expose a dedicated Forward Delete
// key.
// ------------------------------------------------------------------
else if (
// ⌥/Alt/Meta + (Backspace|Delete|DEL byte) → backward word delete
(key["meta"] || key["ctrl"] || key["alt"]) &&
(key["backspace"] || input === "\x7f")
!key["shift"] &&
(key["backspace"] || input === "\x7f" || key["delete"])
) {
this.deleteWordLeft();
} else if ((key["meta"] || key["ctrl"] || key["alt"]) && key["delete"]) {
} else if (
// ⇧+⌥/Alt/Meta + (Backspace|Delete|DEL byte) → forward word delete
(key["meta"] || key["ctrl"] || key["alt"]) &&
key["shift"] &&
(key["backspace"] || input === "\x7f" || key["delete"])
) {
this.deleteWordRight();
} else if (
key["backspace"] ||

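Summarizing the new mapping as a small predicate (a sketch; the key-flag names follow Ink's parsed-key shape as used above):

```ts
// Sketch of the word-deletion dispatch described in the comment above.
type KeyFlags = {
  meta?: boolean; ctrl?: boolean; alt?: boolean; shift?: boolean;
  backspace?: boolean; delete?: boolean;
};

function wordDeleteDirection(input: string, key: KeyFlags): "left" | "right" | null {
  const hasModifier = Boolean(key.meta || key.ctrl || key.alt);
  const isDeleteish = Boolean(key.backspace || key.delete || input === "\x7f");
  if (!hasModifier || !isDeleteish) {
    return null; // not a word-wise deletion chord
  }
  return key.shift ? "right" : "left"; // Shift flips to a forward word delete
}
```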
View File

@@ -8,29 +8,34 @@ import type {
ResponseItem,
ResponseCreateParams,
FunctionTool,
Tool,
} from "openai/resources/responses/responses.mjs";
import type { Reasoning } from "openai/resources.mjs";
import { CLI_VERSION } from "../../version.js";
import {
OPENAI_TIMEOUT_MS,
OPENAI_ORGANIZATION,
OPENAI_PROJECT,
getApiKey,
getBaseUrl,
AZURE_OPENAI_API_VERSION,
} from "../config.js";
import { log } from "../logger/log.js";
import { parseToolCallArguments } from "../parsers.js";
import { responsesCreateViaChatCompletions } from "../responses.js";
import {
ORIGIN,
CLI_VERSION,
getSessionId,
setCurrentModel,
setSessionId,
} from "../session.js";
import { applyPatchToolInstructions } from "./apply-patch.js";
import { handleExecCommand } from "./handle-exec-command.js";
import { HttpsProxyAgent } from "https-proxy-agent";
import { spawnSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import OpenAI, { APIConnectionTimeoutError } from "openai";
import OpenAI, { APIConnectionTimeoutError, AzureOpenAI } from "openai";
import os from "os";
// Wait time before retrying after rate limit errors (ms).
const RATE_LIMIT_RETRY_WAIT_MS = parseInt(
@@ -38,6 +43,9 @@ const RATE_LIMIT_RETRY_WAIT_MS = parseInt(
10,
);
// See https://github.com/openai/openai-node/tree/v4?tab=readme-ov-file#configuring-an-https-agent-eg-for-proxies
const PROXY_URL = process.env["HTTPS_PROXY"];
export type CommandConfirmation = {
review: ReviewDecision;
applyPatch?: ApplyPatchCommand | undefined;
@@ -76,7 +84,7 @@ type AgentLoopParams = {
onLastResponseId: (lastResponseId: string) => void;
};
const shellTool: FunctionTool = {
const shellFunctionTool: FunctionTool = {
type: "function",
name: "shell",
description: "Runs a shell command, and returns its output.",
@@ -100,6 +108,11 @@ const shellTool: FunctionTool = {
},
};
const localShellTool: Tool = {
//@ts-expect-error - waiting on sdk
type: "local_shell",
};
export class AgentLoop {
private model: string;
private provider: string;
@@ -293,7 +306,7 @@ export class AgentLoop {
this.sessionId = getSessionId() || randomUUID().replaceAll("-", "");
// Configure OpenAI client with optional timeout (ms) from environment
const timeoutMs = OPENAI_TIMEOUT_MS;
const apiKey = getApiKey(this.provider);
const apiKey = this.config.apiKey ?? process.env["OPENAI_API_KEY"] ?? "";
const baseURL = getBaseUrl(this.provider);
this.oai = new OpenAI({
@@ -314,9 +327,29 @@ export class AgentLoop {
: {}),
...(OPENAI_PROJECT ? { "OpenAI-Project": OPENAI_PROJECT } : {}),
},
httpAgent: PROXY_URL ? new HttpsProxyAgent(PROXY_URL) : undefined,
...(timeoutMs !== undefined ? { timeout: timeoutMs } : {}),
});
if (this.provider.toLowerCase() === "azure") {
this.oai = new AzureOpenAI({
apiKey,
baseURL,
apiVersion: AZURE_OPENAI_API_VERSION,
defaultHeaders: {
originator: ORIGIN,
version: CLI_VERSION,
session_id: this.sessionId,
...(OPENAI_ORGANIZATION
? { "OpenAI-Organization": OPENAI_ORGANIZATION }
: {}),
...(OPENAI_PROJECT ? { "OpenAI-Project": OPENAI_PROJECT } : {}),
},
httpAgent: PROXY_URL ? new HttpsProxyAgent(PROXY_URL) : undefined,
...(timeoutMs !== undefined ? { timeout: timeoutMs } : {}),
});
}
setSessionId(this.sessionId);
setCurrentModel(this.model);
@@ -433,6 +466,73 @@ export class AgentLoop {
return [outputItem, ...additionalItems];
}
private async handleLocalShellCall(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
item: any,
): Promise<Array<ResponseInputItem>> {
// If the agent has been canceled in the meantime we should not perform any
// additional work. Returning an empty array ensures that we neither execute
// the requested tool call nor enqueue any follow-up input items. This keeps
// the cancellation semantics intuitive for users: once they interrupt a
// task, no further actions related to that task should be taken.
if (this.canceled) {
return [];
}
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const outputItem: any = {
type: "local_shell_call_output",
// `call_id` is mandatory; ensure we never send `undefined`, which would
// trigger the "No tool output found…" 400 from the API.
call_id: item.call_id,
output: "no function found",
};
// We intentionally *do not* remove this `callId` from the `pendingAborts`
// set right away. The output produced below is only queued up for the
// *next* request to the OpenAI API; it has not been delivered yet. If
// the user presses ESC-ESC (i.e. invokes `cancel()`) in the small window
// between queuing the result and the actual network call, we need to be
// able to surface a synthetic `function_call_output` marked as
// "aborted". Keeping the ID in the set until the run concludes
// successfully lets the next `run()` differentiate between an aborted
// tool call (needs the synthetic output) and a completed one (cleared
// below in the `flush()` helper).
// Used to tell the model to stop if needed.
const additionalItems: Array<ResponseInputItem> = [];
if (item.action.type !== "exec") {
throw new Error("Invalid action type");
}
const args = {
cmd: item.action.command,
workdir: item.action.working_directory,
timeoutInMillis: item.action.timeout_ms,
};
const {
outputText,
metadata,
additionalItems: additionalItemsFromExec,
} = await handleExecCommand(
args,
this.config,
this.approvalPolicy,
this.additionalWritableRoots,
this.getCommandConfirmation,
this.execAbortController?.signal,
);
outputItem.output = JSON.stringify({ output: outputText, metadata });
if (additionalItemsFromExec) {
additionalItems.push(...additionalItemsFromExec);
}
return [outputItem, ...additionalItems];
}
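
For orientation, the item handled above looks roughly like this on the wire (the SDK type is still pending, hence the `any`; field names mirror the code above rather than a published schema):

```ts
// Illustrative local_shell_call payload consumed by handleLocalShellCall.
const exampleLocalShellCall = {
  type: "local_shell_call",
  call_id: "call_abc123",
  action: {
    type: "exec",
    command: ["rg", "--files-with-matches", "TODO"],
    working_directory: "/workspace/project",
    timeout_ms: 10_000,
  },
};
```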
public async run(
input: Array<ResponseInputItem>,
previousResponseId: string = "",
@@ -517,6 +617,11 @@ export class AgentLoop {
// `disableResponseStorage === true`.
let transcriptPrefixLen = 0;
let tools: Array<Tool> = [shellFunctionTool];
if (this.model.startsWith("codex")) {
tools = [localShellTool];
}
const stripInternalFields = (
item: ResponseInputItem,
): ResponseInputItem => {
@@ -620,6 +725,8 @@ export class AgentLoop {
if (
(item as ResponseInputItem).type === "function_call" ||
(item as ResponseInputItem).type === "reasoning" ||
//@ts-expect-error - waiting on sdk
(item as ResponseInputItem).type === "local_shell_call" ||
((item as ResponseInputItem).type === "message" &&
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(item as any).role === "user")
@@ -658,7 +765,7 @@ export class AgentLoop {
// prompts) and so that freshly generated `function_call_output`s are
// shown immediately.
// Figure out what subset of `turnInput` constitutes *new* information
// for the UI so that we dont spam the interface with repeats of the
// for the UI so that we don't spam the interface with repeats of the
// entire transcript on every iteration when response storage is
// disabled.
const deltaInput = this.disableResponseStorage
@@ -675,13 +782,19 @@ export class AgentLoop {
for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
try {
let reasoning: Reasoning | undefined;
if (this.model.startsWith("o")) {
reasoning = { effort: this.config.reasoningEffort ?? "high" };
if (this.model === "o3" || this.model === "o4-mini") {
reasoning.summary = "auto";
}
let modelSpecificInstructions: string | undefined;
if (this.model.startsWith("o") || this.model.startsWith("codex")) {
reasoning = { effort: this.config.reasoningEffort ?? "medium" };
reasoning.summary = "auto";
}
const mergedInstructions = [prefix, this.instructions]
if (this.model.startsWith("gpt-4.1")) {
modelSpecificInstructions = applyPatchToolInstructions;
}
const mergedInstructions = [
prefix,
modelSpecificInstructions,
this.instructions,
]
.filter(Boolean)
.join("\n");
@@ -714,7 +827,7 @@ export class AgentLoop {
store: true,
previous_response_id: lastResponseId || undefined,
}),
tools: [shellTool],
tools: tools,
// Explicitly tell the model it is allowed to pick whatever
// tool it deems appropriate. Omitting this sometimes leads to
// the model ignoring the available tools and responding with
@@ -739,7 +852,13 @@ export class AgentLoop {
const errCtx = error as any;
const status =
errCtx?.status ?? errCtx?.httpStatus ?? errCtx?.statusCode;
const isServerError = typeof status === "number" && status >= 500;
// Treat classical 5xx *and* explicit OpenAI `server_error` types
// as transient server-side failures that qualify for a retry. The
// SDK often omits the numeric status for these, reporting only
// the `type` field.
const isServerError =
(typeof status === "number" && status >= 500) ||
errCtx?.type === "server_error";
if (
(isTimeout || isServerError || isConnectionError) &&
attempt < MAX_RETRIES
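
The retry classification above boils down to something like this sketch:

```ts
// Sketch: a response error is a retryable server-side failure if it carries
// a 5xx status or an explicit `server_error` type (the SDK often omits the
// numeric status for the latter).
function isRetryableServerError(errCtx: {
  status?: number;
  httpStatus?: number;
  statusCode?: number;
  type?: string;
}): boolean {
  const status = errCtx.status ?? errCtx.httpStatus ?? errCtx.statusCode;
  return (typeof status === "number" && status >= 500) || errCtx.type === "server_error";
}
```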
@@ -928,7 +1047,10 @@ export class AgentLoop {
if (maybeReasoning.type === "reasoning") {
maybeReasoning.duration_ms = Date.now() - thinkingStart;
}
if (item.type === "function_call") {
if (
item.type === "function_call" ||
item.type === "local_shell_call"
) {
// Track outstanding tool call so we can abort later if needed.
// The item comes from the streaming response, therefore it has
// either `id` (chat) or `call_id` (responses); we normalise
@@ -1051,7 +1173,11 @@ export class AgentLoop {
let reasoning: Reasoning | undefined;
if (this.model.startsWith("o")) {
reasoning = { effort: "high" };
if (this.model === "o3" || this.model === "o4-mini") {
if (
this.model === "o3" ||
this.model === "o4-mini" ||
this.model === "codex-mini-latest"
) {
reasoning.summary = "auto";
}
}
@@ -1090,7 +1216,7 @@ export class AgentLoop {
store: true,
previous_response_id: lastResponseId || undefined,
}),
tools: [shellTool],
tools: tools,
tool_choice: "auto",
});
@@ -1137,7 +1263,7 @@ export class AgentLoop {
content: [
{
type: "input_text",
text: "⚠️ Insufficient quota. Please check your billing details and retry.",
text: `\u26a0 Insufficient quota: ${err instanceof Error && err.message ? err.message.trim() : "No remaining quota."} Manage or purchase credits at https://platform.openai.com/account/billing.`,
},
],
});
@@ -1452,6 +1578,17 @@ export class AgentLoop {
// eslint-disable-next-line no-await-in-loop
const result = await this.handleFunctionCall(item);
turnInput.push(...result);
//@ts-expect-error - waiting on sdk
} else if (item.type === "local_shell_call") {
//@ts-expect-error - waiting on sdk
if (alreadyProcessedResponses.has(item.id)) {
continue;
}
//@ts-expect-error - waiting on sdk
alreadyProcessedResponses.add(item.id);
// eslint-disable-next-line no-await-in-loop
const result = await this.handleLocalShellCall(item);
turnInput.push(...result);
}
emitItem(item as ResponseItem);
}
@@ -1459,6 +1596,19 @@ export class AgentLoop {
}
}
// Dynamic developer message prefix: includes user, workdir, and rg suggestion.
const userName = os.userInfo().username;
const workdir = process.cwd();
const dynamicLines: Array<string> = [
`User: ${userName}`,
`Workdir: ${workdir}`,
];
if (spawnSync("rg", ["--version"], { stdio: "ignore" }).status === 0) {
dynamicLines.push(
"- Always use rg instead of grep/ls -R because it is much faster and respects gitignore",
);
}
const dynamicPrefix = dynamicLines.join("\n");
const prefix = `You are operating as and within the Codex CLI, a terminal-based agentic coding assistant built by OpenAI. It wraps OpenAI models to enable natural language interaction with a local codebase. You are expected to be precise, safe, and helpful.
You can:
@@ -1494,7 +1644,6 @@ You MUST adhere to the following criteria when executing the task:
- If there is a .pre-commit-config.yaml, use \`pre-commit run --files ...\` to check that your changes pass the pre-commit checks. However, do not fix pre-existing errors on lines you didn't touch.
- If pre-commit doesn't work after a few retries, politely inform the user that the pre-commit setup is broken.
- Once you finish coding, you must
- Check \`git status\` to sanity check your changes; revert any scratch files or changes.
- Remove all inline comments you added as much as possible, even if they look normal. Check using \`git diff\`. Inline comments must be generally avoided, unless active maintainers of the repo, after long careful study of the code and the issue, will still misinterpret the code without the comments.
- Check if you accidentally add copyright or license headers. If so, remove them.
- Try to run pre-commit if it is available.
@@ -1504,7 +1653,9 @@ You MUST adhere to the following criteria when executing the task:
- Respond in a friendly tone as a remote teammate, who is knowledgeable, capable and eager to help with coding.
- When your task involves writing or modifying files:
- Do NOT tell the user to "save the file" or "copy the code into a file" if you already created or modified the file using \`apply_patch\`. Instead, reference the file as already saved.
- Do NOT show the full contents of large files you have already written, unless the user explicitly asks for them.`;
- Do NOT show the full contents of large files you have already written, unless the user explicitly asks for them.
${dynamicPrefix}`;
function filterToApiMessages(
items: Array<ResponseInputItem>,

View File

@@ -550,7 +550,15 @@ export function text_to_patch(
!(lines[0] ?? "").startsWith(PATCH_PREFIX.trim()) ||
lines[lines.length - 1] !== PATCH_SUFFIX.trim()
) {
throw new DiffError("Invalid patch text");
let reason = "Invalid patch text: ";
if (lines.length < 2) {
reason += "Patch text must have at least two lines.";
} else if (!(lines[0] ?? "").startsWith(PATCH_PREFIX.trim())) {
reason += "Patch text must start with the correct patch prefix.";
} else if (lines[lines.length - 1] !== PATCH_SUFFIX.trim()) {
reason += "Patch text must end with the correct patch suffix.";
}
throw new DiffError(reason);
}
const parser = new Parser(orig, lines);
parser.index = 1;
@@ -762,3 +770,46 @@ if (import.meta.url === `file://${process.argv[1]}`) {
}
});
}
export const applyPatchToolInstructions = `
To edit files, ALWAYS use the \`shell\` tool with \`apply_patch\` CLI. \`apply_patch\` effectively allows you to execute a diff/patch against a file, but the format of the diff specification is unique to this task, so pay careful attention to these instructions. To use the \`apply_patch\` CLI, you should call the shell tool with the following structure:
\`\`\`bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n[YOUR_PATCH]\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
\`\`\`
Where [YOUR_PATCH] is the actual content of your patch, specified in the following V4A diff format.
*** [ACTION] File: [path/to/file] -> ACTION can be one of Add, Update, or Delete.
For each snippet of code that needs to be changed, repeat the following:
[context_before] -> See below for further instructions on context.
- [old_code] -> Precede the old code with a minus sign.
+ [new_code] -> Precede the new, replacement code with a plus sign.
[context_after] -> See below for further instructions on context.
For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
- If 3 lines of context is insufficient to uniquely identify the snippet of code within the file, use the @@ operator to indicate the class or function to which the snippet belongs. For instance, we might have:
@@ class BaseClass
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
- If a code block is repeated so many times in a class or function such that even a single \`@@\` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple \`@@\` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
Note, then, that we do not use line numbers in this diff format, as the context is enough to uniquely identify code. An example of a message that you might pass as "input" to this function, in order to apply a patch, is shown below.
\`\`\`bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n*** Update File: pygorithm/searching/binary_search.py\\n@@ class BaseClass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n@@ class Subclass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
\`\`\`
File references can only be relative, NEVER ABSOLUTE. After the apply_patch command is run, it will always say "Done!", regardless of whether the patch was successfully applied or not. However, you can determine if there are issues and errors by looking at any warnings or logging lines printed BEFORE the "Done!" is output.
`;

View File

@@ -1,17 +1,21 @@
import type { AppConfig } from "../config.js";
import type { ExecInput, ExecResult } from "./sandbox/interface.js";
import type { SpawnOptions } from "child_process";
import type { ParseEntry } from "shell-quote";
import { process_patch } from "./apply-patch.js";
import { SandboxType } from "./sandbox/interface.js";
import { execWithLandlock } from "./sandbox/landlock.js";
import { execWithSeatbelt } from "./sandbox/macos-seatbelt.js";
import { exec as rawExec } from "./sandbox/raw-exec.js";
import { formatCommandForDisplay } from "../../format-command.js";
import { log } from "../logger/log.js";
import fs from "fs";
import os from "os";
import path from "path";
import { parse } from "shell-quote";
import { resolvePathAgainstWorkdir } from "src/approvals.js";
import { PATCH_SUFFIX } from "src/parse-apply-patch.js";
const DEFAULT_TIMEOUT_MS = 10_000; // 10 seconds
@@ -40,38 +44,61 @@ export function exec(
additionalWritableRoots,
}: ExecInput & { additionalWritableRoots: ReadonlyArray<string> },
sandbox: SandboxType,
config: AppConfig,
abortSignal?: AbortSignal,
): Promise<ExecResult> {
// This is a temporary measure to understand what are the common base commands
// until we start persisting and uploading rollouts
const execForSandbox =
sandbox === SandboxType.MACOS_SEATBELT ? execWithSeatbelt : rawExec;
const opts: SpawnOptions = {
timeout: timeoutInMillis || DEFAULT_TIMEOUT_MS,
...(requiresShell(cmd) ? { shell: true } : {}),
...(workdir ? { cwd: workdir } : {}),
};
// Merge default writable roots with any user-specified ones.
const writableRoots = [
process.cwd(),
os.tmpdir(),
...additionalWritableRoots,
];
return execForSandbox(cmd, opts, writableRoots, abortSignal);
switch (sandbox) {
case SandboxType.NONE: {
// SandboxType.NONE uses the raw exec implementation.
return rawExec(cmd, opts, config, abortSignal);
}
case SandboxType.MACOS_SEATBELT: {
// Merge default writable roots with any user-specified ones.
const writableRoots = [
process.cwd(),
os.tmpdir(),
...additionalWritableRoots,
];
return execWithSeatbelt(cmd, opts, writableRoots, config, abortSignal);
}
case SandboxType.LINUX_LANDLOCK: {
return execWithLandlock(
cmd,
opts,
additionalWritableRoots,
config,
abortSignal,
);
}
}
}
export function execApplyPatch(
patchText: string,
workdir: string | undefined = undefined,
): ExecResult {
// This is a temporary measure to understand what are the common base commands
// until we start persisting and uploading rollouts
// This find/replace is required for some models, like 4.1, where the patch
// text is wrapped in quotes that break the apply_patch command.
let applyPatchInput = patchText
.replace(/('|")?<<('|")EOF('|")/, "")
.replace(/\*\*\* End Patch\nEOF('|")?/, "*** End Patch")
.trim();
if (!applyPatchInput.endsWith(PATCH_SUFFIX)) {
applyPatchInput += "\n" + PATCH_SUFFIX;
}
log(`Applying patch: \`\`\`${applyPatchInput}\`\`\`\n\n`);
try {
const result = process_patch(
patchText,
applyPatchInput,
(p) => fs.readFileSync(resolvePathAgainstWorkdir(p, workdir), "utf8"),
(p, c) => {
const resolvedPath = resolvePathAgainstWorkdir(p, workdir);

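A minimal sketch of the wrapper-stripping step above, assuming `PATCH_SUFFIX` is the `*** End Patch` sentinel imported in the real code:

```ts
// Hypothetical model output where the patch arrived wrapped in a heredoc.
const PATCH_SUFFIX = "*** End Patch"; // assumption for this sketch
const raw =
  "<<'EOF'\n*** Begin Patch\n*** Update File: a.txt\n@@\n-foo\n+bar\n*** End Patch\nEOF'";
let cleaned = raw
  .replace(/('|")?<<('|")EOF('|")/, "")
  .replace(/\*\*\* End Patch\nEOF('|")?/, "*** End Patch")
  .trim();
if (!cleaned.endsWith(PATCH_SUFFIX)) {
  cleaned += "\n" + PATCH_SUFFIX;
}
// cleaned now starts at "*** Begin Patch" and ends with "*** End Patch".
```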
View File

@@ -94,6 +94,7 @@ export async function handleExecCommand(
/* applyPatch */ undefined,
/* runInSandbox */ false,
additionalWritableRoots,
config,
abortSignal,
).then(convertSummaryToResult);
}
@@ -142,6 +143,7 @@ export async function handleExecCommand(
applyPatch,
runInSandbox,
additionalWritableRoots,
config,
abortSignal,
);
// If the operation was aborted in the meantime, propagate the cancellation
@@ -179,6 +181,7 @@ export async function handleExecCommand(
applyPatch,
false,
additionalWritableRoots,
config,
abortSignal,
);
return convertSummaryToResult(summary);
@@ -213,6 +216,7 @@ async function execCommand(
applyPatchCommand: ApplyPatchCommand | undefined,
runInSandbox: boolean,
additionalWritableRoots: ReadonlyArray<string>,
config: AppConfig,
abortSignal?: AbortSignal,
): Promise<ExecCommandSummary> {
let { workdir } = execInput;
@@ -252,6 +256,7 @@ async function execCommand(
: await exec(
{ ...execInput, additionalWritableRoots },
await getSandbox(runInSandbox),
config,
abortSignal,
);
const duration = Date.now() - start;
@@ -303,6 +308,11 @@ async function getSandbox(runInSandbox: boolean): Promise<SandboxType> {
"Sandbox was mandated, but 'sandbox-exec' was not found in PATH!",
);
}
} else if (process.platform === "linux") {
// TODO: Need to verify that the Landlock sandbox is working. For example,
// using Landlock in a Linux Docker container from a macOS host may not
// work.
return SandboxType.LINUX_LANDLOCK;
} else if (CODEX_UNSAFE_ALLOW_NO_SANDBOX) {
// Allow running without a sandbox if the user has explicitly marked the
// environment as already being sufficiently locked-down.

View File

@@ -1,7 +1,6 @@
// Maximum output cap: either MAX_OUTPUT_LINES lines or MAX_OUTPUT_BYTES bytes,
// whichever limit is reached first.
const MAX_OUTPUT_BYTES = 1024 * 10; // 10 KB
const MAX_OUTPUT_LINES = 256;
import { DEFAULT_SHELL_MAX_BYTES, DEFAULT_SHELL_MAX_LINES } from "../../config";
/**
* Creates a collector that accumulates data Buffers from a stream up to
@@ -10,8 +9,8 @@ const MAX_OUTPUT_LINES = 256;
*/
export function createTruncatingCollector(
stream: NodeJS.ReadableStream,
byteLimit: number = MAX_OUTPUT_BYTES,
lineLimit: number = MAX_OUTPUT_LINES,
byteLimit: number = DEFAULT_SHELL_MAX_BYTES,
lineLimit: number = DEFAULT_SHELL_MAX_LINES,
): {
getString: () => string;
hit: boolean;

View File

@@ -0,0 +1,175 @@
import type { ExecResult } from "./interface.js";
import type { AppConfig } from "../../config.js";
import type { SpawnOptions } from "child_process";
import { exec } from "./raw-exec.js";
import { execFile } from "child_process";
import fs from "fs";
import path from "path";
import { log } from "src/utils/logger/log.js";
import { fileURLToPath } from "url";
/**
* Runs Landlock with the following permissions:
* - can read any file on disk
* - can write to process.cwd()
* - can write to the platform user temp folder
* - can write to any user-provided writable root
*/
export async function execWithLandlock(
cmd: Array<string>,
opts: SpawnOptions,
userProvidedWritableRoots: ReadonlyArray<string>,
config: AppConfig,
abortSignal?: AbortSignal,
): Promise<ExecResult> {
const sandboxExecutable = await getSandboxExecutable();
const extraSandboxPermissions = userProvidedWritableRoots.flatMap(
(root: string) => ["--sandbox-permission", `disk-write-folder=${root}`],
);
const fullCommand = [
sandboxExecutable,
"--sandbox-permission",
"disk-full-read-access",
"--sandbox-permission",
"disk-write-cwd",
"--sandbox-permission",
"disk-write-platform-user-temp-folder",
...extraSandboxPermissions,
"--",
...cmd,
];
return exec(fullCommand, opts, config, abortSignal);
}
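
For example, with `cmd = ["ls", "-la"]` and one user-provided writable root, the helper invocation assembled above would look roughly like this (paths and binary name are illustrative):

```ts
// Resulting argv passed to exec() for the Landlock helper (illustrative).
const fullCommandExample = [
  "/repo/bin/codex-linux-sandbox-x64",
  "--sandbox-permission", "disk-full-read-access",
  "--sandbox-permission", "disk-write-cwd",
  "--sandbox-permission", "disk-write-platform-user-temp-folder",
  "--sandbox-permission", "disk-write-folder=/data/scratch",
  "--",
  "ls", "-la",
];
```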
/**
* Lazily initialized promise that resolves to the absolute path of the
* architecture-specific Landlock helper binary.
*/
let sandboxExecutablePromise: Promise<string> | null = null;
async function detectSandboxExecutable(): Promise<string> {
// Find the executable relative to the package.json file.
const __filename = fileURLToPath(import.meta.url);
let dir: string = path.dirname(__filename);
// Ascend until package.json is found or we reach the filesystem root.
// eslint-disable-next-line no-constant-condition
while (true) {
try {
// eslint-disable-next-line no-await-in-loop
await fs.promises.access(
path.join(dir, "package.json"),
fs.constants.F_OK,
);
break; // Found the package.json ⇒ dir is our project root.
} catch {
// keep searching
}
const parent = path.dirname(dir);
if (parent === dir) {
throw new Error("Unable to locate package.json");
}
dir = parent;
}
const sandboxExecutable = getLinuxSandboxExecutableForCurrentArchitecture();
const candidate = path.join(dir, "bin", sandboxExecutable);
try {
await fs.promises.access(candidate, fs.constants.X_OK);
} catch {
throw new Error(`${candidate} not found or not executable`);
}
// Will throw if the executable is not working in this environment.
await verifySandboxExecutable(candidate);
return candidate;
}
const ERROR_WHEN_LANDLOCK_NOT_SUPPORTED = `\
The combination of seccomp/landlock that Codex uses for sandboxing is not
supported in this environment.
If you are running in a Docker container, you may want to try adding
restrictions to your Docker container such that it provides your desired
sandboxing guarantees and then run Codex with the
--dangerously-auto-approve-everything option inside the container.
If you are running on an older Linux kernel that does not support newer
features of seccomp/landlock, you will have to update your kernel to a newer
version.
`;
/**
* Now that we have the path to the executable, make sure that it works in
* this environment. For example, when running a Linux Docker container from
* macOS like so:
*
* docker run -it alpine:latest /bin/sh
*
* Running `codex-linux-sandbox-x64 -- true` in the container fails with:
*
* ```
* Error: sandbox error: seccomp setup error
*
* Caused by:
* 0: seccomp setup error
* 1: Error calling `seccomp`: Invalid argument (os error 22)
* 2: Invalid argument (os error 22)
* ```
*/
function verifySandboxExecutable(sandboxExecutable: string): Promise<void> {
// Note we are running `true` rather than `bash -lc true` because we want to
// ensure we run an executable, not a shell built-in. Note that `true` should
// always be available in a POSIX environment.
return new Promise((resolve, reject) => {
const args = ["--", "true"];
execFile(sandboxExecutable, args, (error, stdout, stderr) => {
if (error) {
log(
`Sandbox check failed for ${sandboxExecutable} ${args.join(" ")}: ${error}`,
);
log(`stdout: ${stdout}`);
log(`stderr: ${stderr}`);
reject(new Error(ERROR_WHEN_LANDLOCK_NOT_SUPPORTED));
} else {
resolve();
}
});
});
}
/**
* Returns the absolute path to the architecture-specific Landlock helper
* binary. (Could be a rejected promise if not found.)
*/
function getSandboxExecutable(): Promise<string> {
if (!sandboxExecutablePromise) {
sandboxExecutablePromise = detectSandboxExecutable();
}
return sandboxExecutablePromise;
}
/** @return name of the native executable to use for Linux sandboxing. */
function getLinuxSandboxExecutableForCurrentArchitecture(): string {
switch (process.arch) {
case "arm64":
return "codex-linux-sandbox-arm64";
case "x64":
return "codex-linux-sandbox-x64";
// Fall back to the x86_64 build for anything else; it will obviously
// fail on incompatible systems but gives a sane error message rather
// than crashing earlier.
default:
return "codex-linux-sandbox-x64";
}
}

View File

@@ -1,4 +1,5 @@
import type { ExecResult } from "./interface.js";
import type { AppConfig } from "../../config.js";
import type { SpawnOptions } from "child_process";
import { exec } from "./raw-exec.js";
@@ -24,6 +25,7 @@ export function execWithSeatbelt(
cmd: Array<string>,
opts: SpawnOptions,
writableRoots: ReadonlyArray<string>,
config: AppConfig,
abortSignal?: AbortSignal,
): Promise<ExecResult> {
let scopedWritePolicy: string;
@@ -72,7 +74,7 @@ export function execWithSeatbelt(
"--",
...cmd,
];
return exec(fullCommand, opts, writableRoots, abortSignal);
return exec(fullCommand, opts, config, abortSignal);
}
const READ_ONLY_SEATBELT_POLICY = `

View File

@@ -1,4 +1,5 @@
import type { ExecResult } from "./interface";
import type { AppConfig } from "../../config";
import type {
ChildProcess,
SpawnOptions,
@@ -20,7 +21,7 @@ import * as os from "os";
export function exec(
command: Array<string>,
options: SpawnOptions,
_writableRoots: ReadonlyArray<string>,
config: AppConfig,
abortSignal?: AbortSignal,
): Promise<ExecResult> {
// Adapt command for the current platform (e.g., convert 'ls' to 'dir' on Windows)
@@ -143,9 +144,21 @@ export function exec(
// ExecResult object so the rest of the agent loop can carry on gracefully.
return new Promise<ExecResult>((resolve) => {
// Get shell output limits from config if available
const maxBytes = config?.tools?.shell?.maxBytes;
const maxLines = config?.tools?.shell?.maxLines;
// Collect stdout and stderr up to configured limits.
const stdoutCollector = createTruncatingCollector(child.stdout!);
const stderrCollector = createTruncatingCollector(child.stderr!);
const stdoutCollector = createTruncatingCollector(
child.stdout!,
maxBytes,
maxLines,
);
const stderrCollector = createTruncatingCollector(
child.stderr!,
maxBytes,
maxLines,
);
child.on("exit", (code, signal) => {
const stdout = stdoutCollector.getString();

View File

@@ -1,7 +1,7 @@
import type { AgentName } from "package-manager-detector";
import { detectInstallerByPath } from "./package-manager-detector";
import { CLI_VERSION } from "./session";
import { CLI_VERSION } from "../version";
import boxen from "boxen";
import chalk from "chalk";
import { getLatestVersion } from "fast-npm-meta";

View File

@@ -1,12 +1,14 @@
import type { AppConfig } from "./config.js";
import type { ResponseItem } from "openai/resources/responses/responses.mjs";
import { getBaseUrl, getApiKey } from "./config.js";
import OpenAI from "openai";
import { createOpenAIClient } from "./openai-client.js";
/**
* Generate a condensed summary of the conversation items.
* @param items The list of conversation items to summarize
* @param model The model to use for generating the summary
* @param flexMode Whether to use the flex-mode service tier
* @param config The configuration object
* @returns A concise structured summary string
*/
/**
@@ -23,10 +25,7 @@ export async function generateCompactSummary(
flexMode = false,
config: AppConfig,
): Promise<string> {
const oai = new OpenAI({
apiKey: getApiKey(config.provider),
baseURL: getBaseUrl(config.provider),
});
const oai = createOpenAIClient(config);
const conversationText = items
.filter(

View File

@@ -43,11 +43,15 @@ if (!isVitest) {
loadDotenv({ path: USER_WIDE_CONFIG_PATH });
}
export const DEFAULT_AGENTIC_MODEL = "o4-mini";
export const DEFAULT_AGENTIC_MODEL = "codex-mini-latest";
export const DEFAULT_FULL_CONTEXT_MODEL = "gpt-4.1";
export const DEFAULT_APPROVAL_MODE = AutoApprovalMode.SUGGEST;
export const DEFAULT_INSTRUCTIONS = "";
// Default shell output limits
export const DEFAULT_SHELL_MAX_BYTES = 1024 * 10; // 10 KB
export const DEFAULT_SHELL_MAX_LINES = 256;
export const CONFIG_DIR = join(homedir(), ".codex");
export const CONFIG_JSON_FILEPATH = join(CONFIG_DIR, "config.json");
export const CONFIG_YAML_FILEPATH = join(CONFIG_DIR, "config.yaml");
@@ -64,12 +68,15 @@ export const OPENAI_TIMEOUT_MS =
export const OPENAI_BASE_URL = process.env["OPENAI_BASE_URL"] || "";
export let OPENAI_API_KEY = process.env["OPENAI_API_KEY"] || "";
export const AZURE_OPENAI_API_VERSION =
process.env["AZURE_OPENAI_API_VERSION"] || "2025-03-01-preview";
export const DEFAULT_REASONING_EFFORT = "high";
export const OPENAI_ORGANIZATION = process.env["OPENAI_ORGANIZATION"] || "";
export const OPENAI_PROJECT = process.env["OPENAI_PROJECT"] || "";
// Can be set `true` when Codex is running in an environment that is marked as already
// considered sufficiently locked-down so that we allow running wihtout an explicit sandbox.
// considered sufficiently locked-down so that we allow running without an explicit sandbox.
export const CODEX_UNSAFE_ALLOW_NO_SANDBOX = Boolean(
process.env["CODEX_UNSAFE_ALLOW_NO_SANDBOX"] || "",
);
@@ -113,7 +120,7 @@ export function getApiKey(provider: string = "openai"): string | undefined {
return process.env[providerInfo.envKey];
}
// Checking `PROVIDER_API_KEY feels more intuitive with a custom provider.
// Checking `PROVIDER_API_KEY` feels more intuitive with a custom provider.
const customApiKey = process.env[`${provider.toUpperCase()}_API_KEY`];
if (customApiKey) {
return customApiKey;
@@ -128,6 +135,8 @@ export function getApiKey(provider: string = "openai"): string | undefined {
return undefined;
}
export type FileOpenerScheme = "vscode" | "cursor" | "windsurf";
// Represents config as persisted in config.json.
export type StoredConfig = {
model?: string;
@@ -139,15 +148,28 @@ export type StoredConfig = {
notify?: boolean;
/** Disable server-side response storage (send full transcript each request) */
disableResponseStorage?: boolean;
flexMode?: boolean;
providers?: Record<string, { name: string; baseURL: string; envKey: string }>;
history?: {
maxSize?: number;
saveHistory?: boolean;
sensitivePatterns?: Array<string>;
};
tools?: {
shell?: {
maxBytes?: number;
maxLines?: number;
};
};
/** User-defined safe commands */
safeCommands?: Array<string>;
reasoningEffort?: ReasoningEffort;
/**
* URI-based file opener. This is used when linking code references in
* terminal output.
*/
fileOpener?: FileOpenerScheme;
};
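
Putting the new fields together, a stored config exercising them might look like this (written as a TS literal for illustration; all values are examples):

```ts
// Hypothetical ~/.codex/config.json contents using the new
// shell-output limits and fileOpener knobs.
const exampleStoredConfig: StoredConfig = {
  model: "codex-mini-latest",
  flexMode: false,
  fileOpener: "vscode",
  tools: {
    shell: {
      maxBytes: 10 * 1024, // 10 KB, the default
      maxLines: 256,
    },
  },
};
```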
// Minimal config written on first run. An *empty* model string ensures that
@@ -186,18 +208,35 @@ export type AppConfig = {
saveHistory: boolean;
sensitivePatterns: Array<string>;
};
tools?: {
shell?: {
maxBytes: number;
maxLines: number;
};
};
fileOpener?: FileOpenerScheme;
};
// Formatting (quiet mode-only).
export const PRETTY_PRINT = Boolean(process.env["PRETTY_PRINT"] || "");
// ---------------------------------------------------------------------------
// Project doc support (codex.md)
// Project doc support (AGENTS.md / codex.md)
// ---------------------------------------------------------------------------
export const PROJECT_DOC_MAX_BYTES = 32 * 1024; // 32 kB
const PROJECT_DOC_FILENAMES = ["codex.md", ".codex.md", "CODEX.md"];
// We support multiple filenames for project-level agent instructions. As of
// 2025 the recommended convention is to use `AGENTS.md`, however we keep
// the legacy `codex.md` variants for backwards-compatibility so that existing
// repositories continue to work without changes. The list is ordered so that
// the first match wins: newer conventions first, older fallbacks later.
const PROJECT_DOC_FILENAMES = [
"AGENTS.md", // preferred
"codex.md", // legacy
".codex.md",
"CODEX.md",
];
const PROJECT_DOC_SEPARATOR = "\n\n--- project-doc ---\n\n";
export function discoverProjectDocPath(startDir: string): string | null {
@@ -238,7 +277,8 @@ export function discoverProjectDocPath(startDir: string): string | null {
}
/**
* Load the project documentation markdown (codex.md) if present. If the file
* Load the project documentation markdown (`AGENTS.md` or the legacy
* `codex.md`) if present. If the file
* exceeds {@link PROJECT_DOC_MAX_BYTES} it will be truncated and a warning is
* logged.
*
@@ -388,8 +428,17 @@ export const loadConfig = (
instructions: combinedInstructions,
notify: storedConfig.notify === true,
approvalMode: storedConfig.approvalMode,
tools: {
shell: {
maxBytes:
storedConfig.tools?.shell?.maxBytes ?? DEFAULT_SHELL_MAX_BYTES,
maxLines:
storedConfig.tools?.shell?.maxLines ?? DEFAULT_SHELL_MAX_LINES,
},
},
disableResponseStorage: storedConfig.disableResponseStorage === true,
reasoningEffort: storedConfig.reasoningEffort,
fileOpener: storedConfig.fileOpener,
};
// -----------------------------------------------------------------------
@@ -451,6 +500,10 @@ export const loadConfig = (
}
// Notification setting: enable desktop notifications when set in config
config.notify = storedConfig.notify === true;
// Flex-mode setting: enable the flex-mode service tier when set in config
if (storedConfig.flexMode !== undefined) {
config.flexMode = storedConfig.flexMode;
}
// Add default history config if not provided
if (storedConfig.history !== undefined) {
@@ -505,6 +558,7 @@ export const saveConfig = (
providers: config.providers,
approvalMode: config.approvalMode,
disableResponseStorage: config.disableResponseStorage,
flexMode: config.flexMode,
reasoningEffort: config.reasoningEffort,
};
@@ -517,6 +571,18 @@ export const saveConfig = (
};
}
// Add tools settings if they exist
if (config.tools) {
configToSave.tools = {
shell: config.tools.shell
? {
maxBytes: config.tools.shell.maxBytes,
maxLines: config.tools.shell.maxLines,
}
: undefined,
};
}
if (ext === ".yaml" || ext === ".yml") {
writeFileSync(targetPath, dumpYaml(configToSave), "utf-8");
} else {

View File

@@ -2,7 +2,24 @@ import fs from "fs";
import os from "os";
import path from "path";
export function getFileSystemSuggestions(pathPrefix: string): Array<string> {
/**
* Represents a file system suggestion with path and directory information
*/
export interface FileSystemSuggestion {
/** The full path of the suggestion */
path: string;
/** Whether the suggestion is a directory */
isDirectory: boolean;
}
/**
* Gets file system suggestions based on a path prefix
* @param pathPrefix The path prefix to search for
* @returns Array of file system suggestions
*/
export function getFileSystemSuggestions(
pathPrefix: string,
): Array<FileSystemSuggestion> {
if (!pathPrefix) {
return [];
}
@@ -31,10 +48,10 @@ export function getFileSystemSuggestions(pathPrefix: string): Array<string> {
.map((item) => {
const fullPath = path.join(readDir, item);
const isDirectory = fs.statSync(fullPath).isDirectory();
if (isDirectory) {
return path.join(fullPath, sep);
}
return fullPath;
return {
path: isDirectory ? path.join(fullPath, sep) : fullPath,
isDirectory,
};
});
} catch {
return [];

View File

@@ -0,0 +1,62 @@
import fs from "fs";
import path from "path";
/**
* Replaces @path tokens in the input string with <path>file contents</path> XML blocks for LLM context.
* Only replaces if the path points to a file; directories are ignored.
*/
export async function expandFileTags(raw: string): Promise<string> {
const re = /@([\w./~-]+)/g;
let out = raw;
type MatchInfo = { index: number; length: number; path: string };
const matches: Array<MatchInfo> = [];
for (const m of raw.matchAll(re) as IterableIterator<RegExpMatchArray>) {
const idx = m.index;
const captured = m[1];
if (idx !== undefined && captured) {
matches.push({ index: idx, length: m[0].length, path: captured });
}
}
// Process in reverse to avoid index shifting.
for (let i = matches.length - 1; i >= 0; i--) {
const { index, length, path: p } = matches[i]!;
const resolved = path.resolve(process.cwd(), p);
try {
const st = fs.statSync(resolved);
if (st.isFile()) {
const content = fs.readFileSync(resolved, "utf-8");
const rel = path.relative(process.cwd(), resolved);
const xml = `<${rel}>\n${content}\n</${rel}>`;
out = out.slice(0, index) + xml + out.slice(index + length);
}
} catch {
// If the path is invalid, leave the token as-is
}
}
return out;
}
/**
* Collapses <path>content</path> XML blocks back to @path format.
* This is the reverse operation of expandFileTags.
* Only collapses blocks where the path points to a valid file; invalid paths remain unchanged.
*/
export function collapseXmlBlocks(text: string): string {
return text.replace(
/<([^\n>]+)>([\s\S]*?)<\/\1>/g,
(match, path1: string) => {
const filePath = path.normalize(path1.trim());
try {
// Only convert to @path format if it's a valid file
return fs.statSync(path.resolve(process.cwd(), filePath)).isFile()
? "@" + filePath
: match;
} catch {
return match; // Keep XML block if path is invalid
}
},
);
}
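
A quick round-trip example of the two helpers above, assuming a file `notes.txt` exists in the current directory with the contents `hello` (run inside an async context):

```ts
// expandFileTags inlines the file; collapseXmlBlocks undoes it.
const expanded = await expandFileTags("please review @notes.txt");
// => "please review <notes.txt>\nhello\n</notes.txt>"
const collapsed = collapseXmlBlocks(expanded);
// => "please review @notes.txt"
```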

View File

@@ -0,0 +1,75 @@
import SelectInput from "../components/select-input/select-input.js";
import Spinner from "../components/vendor/ink-spinner.js";
import TextInput from "../components/vendor/ink-text-input.js";
import { Box, Text } from "ink";
import React, { useState } from "react";
export type Choice = { type: "signin" } | { type: "apikey"; key: string };
export function ApiKeyPrompt({
onDone,
}: {
onDone: (choice: Choice) => void;
}): JSX.Element {
const [step, setStep] = useState<"select" | "paste">("select");
const [apiKey, setApiKey] = useState("");
if (step === "select") {
return (
<Box flexDirection="column" gap={1}>
<Box flexDirection="column">
<Text>
Sign in with ChatGPT to generate an API key or paste one you already
have.
</Text>
<Text dimColor>[use arrows to move, enter to select]</Text>
</Box>
<SelectInput
items={[
{ label: "Sign in with ChatGPT", value: "signin" },
{
label: "Paste an API key (or set as OPENAI_API_KEY)",
value: "paste",
},
]}
onSelect={(item: { value: string }) => {
if (item.value === "signin") {
onDone({ type: "signin" });
} else {
setStep("paste");
}
}}
/>
</Box>
);
}
return (
<Box flexDirection="column">
<Text>Paste your OpenAI API key and press &lt;Enter&gt;:</Text>
<TextInput
value={apiKey}
onChange={setApiKey}
onSubmit={(value: string) => {
if (value.trim() !== "") {
onDone({ type: "apikey", key: value.trim() });
}
}}
placeholder="sk-..."
mask="*"
/>
</Box>
);
}
export function WaitingForAuth(): JSX.Element {
return (
<Box flexDirection="row" marginTop={1}>
<Spinner type="ball" />
<Text>
{" "}
Waiting for authentication <Text dimColor>ctrl + c to quit</Text>
</Text>
</Box>
);
}

View File

@@ -0,0 +1,766 @@
import type { Choice } from "./get-api-key-components";
import type { Request, Response } from "express";
import { ApiKeyPrompt, WaitingForAuth } from "./get-api-key-components";
import chalk from "chalk";
import express from "express";
import fs from "fs/promises";
import { render } from "ink";
import crypto from "node:crypto";
import { URL } from "node:url";
import open from "open";
import os from "os";
import path from "path";
import React from "react";
function promptUserForChoice(): Promise<Choice> {
return new Promise<Choice>((resolve) => {
const instance = render(
<ApiKeyPrompt
onDone={(choice: Choice) => {
resolve(choice);
instance.unmount();
}}
/>,
);
});
}
interface OidcConfiguration {
issuer: string;
authorization_endpoint: string;
token_endpoint: string;
}
async function getOidcConfiguration(
issuer: string,
): Promise<OidcConfiguration> {
const discoveryUrl = new URL(issuer);
discoveryUrl.pathname = "/.well-known/openid-configuration";
if (issuer === "https://auth.openai.com") {
// Account for legacy quirk in production tenant
discoveryUrl.pathname = "/v2.0" + discoveryUrl.pathname;
}
const res = await fetch(discoveryUrl.toString());
if (!res.ok) {
throw new Error("Failed to fetch OIDC configuration");
}
return (await res.json()) as OidcConfiguration;
}
interface IDTokenClaims {
"exp": number;
"https://api.openai.com/auth": {
organization_id: string;
project_id: string;
completed_platform_onboarding: boolean;
is_org_owner: boolean;
chatgpt_subscription_active_start: string;
chatgpt_subscription_active_until: string;
chatgpt_plan_type: string;
};
}
interface AccessTokenClaims {
"https://api.openai.com/auth": {
chatgpt_plan_type: string;
};
}
function generatePKCECodes(): {
code_verifier: string;
code_challenge: string;
} {
const code_verifier = crypto.randomBytes(64).toString("hex");
const code_challenge = crypto
.createHash("sha256")
.update(code_verifier)
.digest("base64url");
return { code_verifier, code_challenge };
}
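
generatePKCECodes implements the S256 method from RFC 7636: the challenge is the base64url-encoded SHA-256 digest of the verifier. A sketch of the matching check an authorization server performs at the token endpoint (illustrative only, not part of the diff):

import crypto from "node:crypto";

// True when the verifier presented at the token endpoint matches the
// challenge sent with the authorize request.
function matchesChallenge(
  code_verifier: string,
  code_challenge: string,
): boolean {
  const derived = crypto
    .createHash("sha256")
    .update(code_verifier)
    .digest("base64url");
  return derived === code_challenge; // production servers use a constant-time compare
}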
async function maybeRedeemCredits(
issuer: string,
clientId: string,
refreshToken: string,
idToken?: string,
): Promise<void> {
try {
let currentIdToken = idToken;
let idClaims: IDTokenClaims | undefined;
if (
currentIdToken &&
typeof currentIdToken === "string" &&
currentIdToken.split(".")[1]
) {
idClaims = JSON.parse(
Buffer.from(currentIdToken.split(".")[1]!, "base64url").toString(
"utf8",
),
) as IDTokenClaims;
} else {
currentIdToken = "";
}
// Validate the idToken's expiration; if it has expired, attempt a
// token exchange for a fresh idToken.
if (!idClaims || !idClaims.exp || Date.now() >= idClaims.exp * 1000) {
// eslint-disable-next-line no-console
console.log(chalk.dim("Refreshing credentials..."));
try {
const refreshRes = await fetch("https://auth.openai.com/oauth/token", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
client_id: clientId,
grant_type: "refresh_token",
refresh_token: refreshToken,
scope: "openid profile email",
}),
});
if (!refreshRes.ok) {
// eslint-disable-next-line no-console
console.warn(
`Failed to refresh credentials: ${refreshRes.status} ${refreshRes.statusText}\n${chalk.dim(await refreshRes.text())}`,
);
// eslint-disable-next-line no-console
console.warn(
`Please sign in again to redeem credits: ${chalk.bold("codex --login")}`,
);
return;
}
const refreshData = (await refreshRes.json()) as {
id_token: string;
refresh_token?: string;
};
currentIdToken = refreshData.id_token;
idClaims = JSON.parse(
Buffer.from(currentIdToken.split(".")[1]!, "base64url").toString(
"utf8",
),
) as IDTokenClaims;
if (refreshData.refresh_token) {
try {
const home = os.homedir();
const authDir = path.join(home, ".codex");
const authFile = path.join(authDir, "auth.json");
const existingJson = JSON.parse(
await fs.readFile(authFile, "utf-8"),
);
existingJson.tokens.id_token = currentIdToken;
existingJson.tokens.refresh_token = refreshData.refresh_token;
existingJson.last_refresh = new Date().toISOString();
await fs.writeFile(
authFile,
JSON.stringify(existingJson, null, 2),
{ mode: 0o600 },
);
} catch (err) {
// eslint-disable-next-line no-console
console.warn("Unable to update refresh token in auth file:", err);
}
}
} catch (err) {
// eslint-disable-next-line no-console
console.warn("Unable to refresh ID token via token-exchange:", err);
return;
}
}
// Confirm the subscription is active for more than 7 days
const subStart =
idClaims["https://api.openai.com/auth"]
?.chatgpt_subscription_active_start;
if (
typeof subStart === "string" &&
Date.now() - new Date(subStart).getTime() < 7 * 24 * 60 * 60 * 1000
) {
// eslint-disable-next-line no-console
console.warn(
"Sorry, your subscription must be active for more than 7 days to redeem credits.\nMore info: " +
chalk.dim("https://help.openai.com/en/articles/11381614") +
chalk.bold(
"\nPlease try again on " +
new Date(
new Date(subStart).getTime() + 7 * 24 * 60 * 60 * 1000,
).toLocaleDateString() +
" " +
new Date(
new Date(subStart).getTime() + 7 * 24 * 60 * 60 * 1000,
).toLocaleTimeString(),
),
);
return;
}
const completed = Boolean(
idClaims["https://api.openai.com/auth"]?.completed_platform_onboarding,
);
const isOwner = Boolean(
idClaims["https://api.openai.com/auth"]?.is_org_owner,
);
const needsSetup = !completed && isOwner;
const planType = idClaims["https://api.openai.com/auth"]
?.chatgpt_plan_type as string | undefined;
if (needsSetup || !(planType === "plus" || planType === "pro")) {
// eslint-disable-next-line no-console
console.warn(
"Users with Plus or Pro subscriptions can redeem free API credits.\nMore info: " +
chalk.dim("https://help.openai.com/en/articles/11381614"),
);
return;
}
const apiHost =
issuer === "https://auth.openai.com"
? "https://api.openai.com"
: "https://api.openai.org";
const redeemRes = await fetch(`${apiHost}/v1/billing/redeem_credits`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ id_token: currentIdToken }),
});
if (!redeemRes.ok) {
// eslint-disable-next-line no-console
console.warn(
`Credit redemption request failed: ${redeemRes.status} ${redeemRes.statusText}`,
);
return;
}
try {
const redeemData = (await redeemRes.json()) as {
granted_chatgpt_subscriber_api_credits?: number;
};
const granted = redeemData?.granted_chatgpt_subscriber_api_credits ?? 0;
if (granted > 0) {
// eslint-disable-next-line no-console
console.log(
chalk.green(
`${chalk.bold(
`Thanks for being a ChatGPT ${
planType === "plus" ? "Plus" : "Pro"
} subscriber!`,
)}\nIf you haven't already redeemed, you should receive ${
planType === "plus" ? "$5" : "$50"
} in API credits\nCredits: ${chalk.dim(chalk.underline("https://platform.openai.com/settings/organization/billing/credit-grants"))}\nMore info: ${chalk.dim(chalk.underline("https://help.openai.com/en/articles/11381614"))}`,
),
);
} else {
// eslint-disable-next-line no-console
console.log(
chalk.green(
`It looks like no credits were granted:\n${JSON.stringify(
redeemData,
null,
2,
)}\nCredits: ${chalk.dim(
chalk.underline(
"https://platform.openai.com/settings/organization/billing/credit-grants",
),
)}\nMore info: ${chalk.dim(
chalk.underline("https://help.openai.com/en/articles/11381614"),
)}`,
),
);
}
} catch (parseErr) {
// eslint-disable-next-line no-console
console.warn("Unable to parse credit redemption response:", parseErr);
}
} catch (err) {
// eslint-disable-next-line no-console
console.warn("Unable to redeem ChatGPT subscriber API credits:", err);
}
}
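
maybeRedeemCredits and handleCallback both decode JWT payloads inline with the same split/Buffer idiom; a helper like the following would factor it out (decodeJwtPayload is a hypothetical name, and it skips signature verification exactly as the inline code does):

function decodeJwtPayload<T>(token: string): T {
  const payload = token.split(".")[1];
  if (!payload) {
    throw new Error("Malformed JWT: missing payload segment");
  }
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8")) as T;
}

// e.g. const idClaims = decodeJwtPayload<IDTokenClaims>(currentIdToken);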
async function handleCallback(
req: Request,
issuer: string,
oidcConfig: OidcConfiguration,
codeVerifier: string,
clientId: string,
redirectUri: string,
expectedState: string,
): Promise<{ access_token: string; success_url: string }> {
const state = (req.query as Record<string, string>)["state"] as
| string
| undefined;
if (!state || state !== expectedState) {
throw new Error("Invalid state parameter");
}
const code = (req.query as Record<string, string>)["code"] as
| string
| undefined;
if (!code) {
throw new Error("Missing authorization code");
}
const params = new URLSearchParams();
params.append("grant_type", "authorization_code");
params.append("code", code);
params.append("redirect_uri", redirectUri);
params.append("client_id", clientId);
params.append("code_verifier", codeVerifier);
oidcConfig.token_endpoint = `${issuer}/oauth/token`;
const tokenRes = await fetch(oidcConfig.token_endpoint, {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: params.toString(),
});
if (!tokenRes.ok) {
throw new Error("Failed to exchange authorization code for tokens");
}
const tokenData = (await tokenRes.json()) as {
id_token: string;
access_token: string;
refresh_token: string;
};
const idTokenParts = tokenData.id_token.split(".");
if (idTokenParts.length !== 3) {
throw new Error("Invalid ID token");
}
const accessTokenParts = tokenData.access_token.split(".");
if (accessTokenParts.length !== 3) {
throw new Error("Invalid access token");
}
const idTokenClaims = JSON.parse(
Buffer.from(idTokenParts[1]!, "base64url").toString("utf8"),
) as IDTokenClaims;
const accessTokenClaims = JSON.parse(
Buffer.from(accessTokenParts[1]!, "base64url").toString("utf8"),
) as AccessTokenClaims;
const org_id = idTokenClaims["https://api.openai.com/auth"]?.organization_id;
if (!org_id) {
throw new Error("Missing organization in id_token claims");
}
const project_id = idTokenClaims["https://api.openai.com/auth"]?.project_id;
if (!project_id) {
throw new Error("Missing project in id_token claims");
}
const randomId = crypto.randomBytes(6).toString("hex");
const exchangeParams = new URLSearchParams({
grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
client_id: clientId,
requested_token: "openai-api-key",
subject_token: tokenData.id_token,
subject_token_type: "urn:ietf:params:oauth:token-type:id_token",
name: `Codex CLI [auto-generated] (${new Date().toISOString().slice(0, 10)}) [${
randomId
}]`,
});
const exchangeRes = await fetch(oidcConfig.token_endpoint, {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: exchangeParams.toString(),
});
if (!exchangeRes.ok) {
throw new Error(`Failed to create API key: ${await exchangeRes.text()}`);
}
const exchanged = (await exchangeRes.json()) as {
access_token: string;
// NOTE(mbolin): I did not see the "key" property set in practice. Note
// this property is not read by the code.
key: string;
};
// Determine whether the organization still requires additional
// setup (e.g., adding a payment method) based on the ID-token
// claim provided by the auth service.
const completedOnboarding = Boolean(
idTokenClaims["https://api.openai.com/auth"]?.completed_platform_onboarding,
);
const chatgptPlanType =
accessTokenClaims["https://api.openai.com/auth"]?.chatgpt_plan_type;
const isOrgOwner = Boolean(
idTokenClaims["https://api.openai.com/auth"]?.is_org_owner,
);
const needsSetup = !completedOnboarding && isOrgOwner;
// Build the success URL on the same host/port as the callback and
// include the required query parameters for the front-end page.
// console.log("Redirecting to success page");
const successUrl = new URL("/success", redirectUri);
if (issuer === "https://auth.openai.com") {
successUrl.searchParams.set("platform_url", "https://platform.openai.com");
} else {
successUrl.searchParams.set(
"platform_url",
"https://platform.api.openai.org",
);
}
successUrl.searchParams.set("id_token", tokenData.id_token);
successUrl.searchParams.set("needs_setup", needsSetup ? "true" : "false");
successUrl.searchParams.set("org_id", org_id);
successUrl.searchParams.set("project_id", project_id);
successUrl.searchParams.set("plan_type", chatgptPlanType);
try {
const home = os.homedir();
const authDir = path.join(home, ".codex");
await fs.mkdir(authDir, { recursive: true });
const authFile = path.join(authDir, "auth.json");
const authData = {
tokens: tokenData,
last_refresh: new Date().toISOString(),
OPENAI_API_KEY: exchanged.access_token,
};
await fs.writeFile(authFile, JSON.stringify(authData, null, 2), {
mode: 0o600,
});
} catch (err) {
// eslint-disable-next-line no-console
console.warn("Unable to save auth file:", err);
}
await maybeRedeemCredits(
issuer,
clientId,
tokenData.refresh_token,
tokenData.id_token,
);
return {
access_token: exchanged.access_token,
success_url: successUrl.toString(),
};
}
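
Because handleCallback persists the exchanged key to ~/.codex/auth.json with mode 0o600, later invocations can recover it without re-running the flow. A read-back sketch against the authData shape written above (the helper name is hypothetical):

import fs from "fs/promises";
import os from "os";
import path from "path";

async function readSavedApiKey(): Promise<string | undefined> {
  try {
    const authFile = path.join(os.homedir(), ".codex", "auth.json");
    const auth = JSON.parse(await fs.readFile(authFile, "utf-8")) as {
      OPENAI_API_KEY?: string;
    };
    return auth.OPENAI_API_KEY;
  } catch {
    return undefined; // no auth.json yet, or it is unreadable
  }
}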
const LOGIN_SUCCESS_HTML = String.raw`
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Sign into Codex CLI</title>
<link rel="icon" href='data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" width="32" height="32" fill="none" viewBox="0 0 32 32"%3E%3Cpath stroke="%23000" stroke-linecap="round" stroke-width="2.484" d="M22.356 19.797H17.17M9.662 12.29l1.979 3.576a.511.511 0 0 1-.005.504l-1.974 3.409M30.758 16c0 8.15-6.607 14.758-14.758 14.758-8.15 0-14.758-6.607-14.758-14.758C1.242 7.85 7.85 1.242 16 1.242c8.15 0 14.758 6.608 14.758 14.758Z"/%3E%3C/svg%3E' type="image/svg+xml">
<style>
.container {
margin: auto;
height: 100%;
display: flex;
align-items: center;
justify-content: center;
position: relative;
background: white;
font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
}
.inner-container {
width: 400px;
flex-direction: column;
justify-content: flex-start;
align-items: center;
gap: 20px;
display: inline-flex;
}
.content {
align-self: stretch;
flex-direction: column;
justify-content: flex-start;
align-items: center;
gap: 20px;
display: flex;
}
.svg-wrapper {
position: relative;
}
.title {
text-align: center;
color: var(--text-primary, #0D0D0D);
font-size: 28px;
font-weight: 400;
line-height: 36.40px;
word-wrap: break-word;
}
.setup-box {
width: 600px;
padding: 16px 20px;
background: var(--bg-primary, white);
box-shadow: 0px 4px 16px rgba(0, 0, 0, 0.05);
border-radius: 16px;
outline: 1px var(--border-default, rgba(13, 13, 13, 0.10)) solid;
outline-offset: -1px;
justify-content: flex-start;
align-items: center;
gap: 16px;
display: inline-flex;
}
.setup-content {
flex: 1 1 0;
justify-content: flex-start;
align-items: center;
gap: 24px;
display: flex;
}
.setup-text {
flex: 1 1 0;
flex-direction: column;
justify-content: flex-start;
align-items: flex-start;
gap: 4px;
display: inline-flex;
}
.setup-title {
align-self: stretch;
color: var(--text-primary, #0D0D0D);
font-size: 14px;
font-weight: 510;
line-height: 20px;
word-wrap: break-word;
}
.setup-description {
align-self: stretch;
color: var(--text-secondary, #5D5D5D);
font-size: 14px;
font-weight: 400;
line-height: 20px;
word-wrap: break-word;
}
.redirect-box {
justify-content: flex-start;
align-items: center;
gap: 8px;
display: flex;
}
.close-button,
.redirect-button {
height: 28px;
padding: 8px 16px;
background: var(--interactive-bg-primary-default, #0D0D0D);
border-radius: 999px;
justify-content: center;
align-items: center;
gap: 4px;
display: flex;
}
.close-button,
.redirect-text {
color: var(--interactive-label-primary-default, white);
font-size: 14px;
font-weight: 510;
line-height: 20px;
word-wrap: break-word;
text-decoration: none;
}
</style>
</head>
<body>
<div class="container">
<div class="inner-container">
<div class="content">
<div data-svg-wrapper class="svg-wrapper">
<svg width="56" height="56" viewBox="0 0 56 56" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4.6665 28.0003C4.6665 15.1137 15.1132 4.66699 27.9998 4.66699C40.8865 4.66699 51.3332 15.1137 51.3332 28.0003C51.3332 40.887 40.8865 51.3337 27.9998 51.3337C15.1132 51.3337 4.6665 40.887 4.6665 28.0003ZM37.5093 18.5088C36.4554 17.7672 34.9999 18.0203 34.2583 19.0742L24.8508 32.4427L20.9764 28.1808C20.1095 27.2272 18.6338 27.1569 17.6803 28.0238C16.7267 28.8906 16.6565 30.3664 17.5233 31.3199L23.3566 37.7366C23.833 38.2606 24.5216 38.5399 25.2284 38.4958C25.9353 38.4517 26.5838 38.089 26.9914 37.5098L38.0747 21.7598C38.8163 20.7059 38.5632 19.2504 37.5093 18.5088Z" fill="var(--green-400, #04B84C)"/>
</svg>
</div>
<div class="title">Signed in to Codex CLI</div>
</div>
<div class="close-box" style="display: none;">
<div class="setup-description">You may now close this page</div>
</div>
<div class="setup-box" style="display: none;">
<div class="setup-content">
<div class="setup-text">
<div class="setup-title">Finish setting up your API organization</div>
<div class="setup-description">Add a payment method to use your organization.</div>
</div>
<div class="redirect-box">
<div data-hasendicon="false" data-hasstarticon="false" data-ishovered="false" data-isinactive="false" data-ispressed="false" data-size="large" data-type="primary" class="redirect-button">
<div class="redirect-text">Redirecting in 3s...</div>
</div>
</div>
</div>
</div>
</div>
</div>
<script>
(function () {
const params = new URLSearchParams(window.location.search);
const needsSetup = params.get('needs_setup') === 'true';
const platformUrl = params.get('platform_url') || 'https://platform.openai.com';
const orgId = params.get('org_id');
const projectId = params.get('project_id');
const planType = params.get('plan_type');
const idToken = params.get('id_token');
// Show different message and optional redirect when setup is required
if (needsSetup) {
const setupBox = document.querySelector('.setup-box');
setupBox.style.display = 'flex';
const redirectUrlObj = new URL('/org-setup', platformUrl);
redirectUrlObj.searchParams.set('p', planType);
redirectUrlObj.searchParams.set('t', idToken);
redirectUrlObj.searchParams.set('with_org', orgId);
redirectUrlObj.searchParams.set('project_id', projectId);
const redirectUrl = redirectUrlObj.toString();
const message = document.querySelector('.redirect-text');
let countdown = 3;
function tick() {
message.textContent =
'Redirecting in ' + countdown + 's…';
if (countdown === 0) {
window.location.replace(redirectUrl);
} else {
countdown -= 1;
setTimeout(tick, 1000);
}
}
tick();
} else {
const closeBox = document.querySelector('.close-box');
closeBox.style.display = 'flex';
}
})();
</script>
</body>
</html>`;
async function signInFlow(issuer: string, clientId: string): Promise<string> {
const app = express();
let codeVerifier = "";
let redirectUri = "";
let server: ReturnType<typeof app.listen>;
const state = crypto.randomBytes(32).toString("hex");
const apiKeyPromise = new Promise<string>((resolve, reject) => {
let _apiKey: string | undefined;
app.get("/success", (_req: Request, res: Response) => {
res.type("text/html").send(LOGIN_SUCCESS_HTML);
if (_apiKey) {
resolve(_apiKey);
} else {
// eslint-disable-next-line no-console
console.error(
"Sorry, it seems like the authentication flow failed. Please try again, or submit an issue on our GitHub if it continues.",
);
process.exit(1);
}
});
// Callback route -------------------------------------------------------
app.get("/auth/callback", async (req: Request, res: Response) => {
try {
const oidcConfig = await getOidcConfiguration(issuer);
oidcConfig.token_endpoint = `${issuer}/oauth/token`;
oidcConfig.authorization_endpoint = `${issuer}/oauth/authorize`;
const { access_token, success_url } = await handleCallback(
req,
issuer,
oidcConfig,
codeVerifier,
clientId,
redirectUri,
state,
);
_apiKey = access_token;
res.redirect(success_url);
} catch (err) {
reject(err);
}
});
server = app.listen(1455, "127.0.0.1", async () => {
const address = server.address();
if (typeof address === "string" || !address) {
// eslint-disable-next-line no-console
console.log(
"It seems like you might already be trying to sign in (port :1455 already in use)",
);
process.exit(1);
return;
}
const port = address.port;
redirectUri = `http://localhost:${port}/auth/callback`;
try {
const oidcConfig = await getOidcConfiguration(issuer);
oidcConfig.token_endpoint = `${issuer}/oauth/token`;
oidcConfig.authorization_endpoint = `${issuer}/oauth/authorize`;
const pkce = generatePKCECodes();
codeVerifier = pkce.code_verifier;
const authUrl = new URL(oidcConfig.authorization_endpoint);
authUrl.searchParams.append("response_type", "code");
authUrl.searchParams.append("client_id", clientId);
authUrl.searchParams.append("redirect_uri", redirectUri);
authUrl.searchParams.append(
"scope",
"openid profile email offline_access",
);
authUrl.searchParams.append("code_challenge", pkce.code_challenge);
authUrl.searchParams.append("code_challenge_method", "S256");
authUrl.searchParams.append("id_token_add_organizations", "true");
authUrl.searchParams.append("state", state);
// Open the browser immediately.
open(authUrl.toString());
setTimeout(() => {
// eslint-disable-next-line no-console
console.log(
`\nOpening login page in your browser: ${authUrl.toString()}\n`,
);
}, 500);
} catch (err) {
reject(err);
}
});
});
// Ensure the server is closed afterwards.
return apiKeyPromise.finally(() => {
if (server) {
server.close();
}
});
}
export async function getApiKey(
issuer: string,
clientId: string,
forceLogin: boolean = false,
): Promise<string> {
if (!forceLogin && process.env["OPENAI_API_KEY"]) {
return process.env["OPENAI_API_KEY"]!;
}
const choice = await promptUserForChoice();
if (choice.type === "apikey") {
process.env["OPENAI_API_KEY"] = choice.key;
return choice.key;
}
const spinner = render(<WaitingForAuth />);
try {
const key = await signInFlow(issuer, clientId);
spinner.clear();
spinner.unmount();
process.env["OPENAI_API_KEY"] = key;
return key;
} catch (err) {
spinner.clear();
spinner.unmount();
throw err;
}
}
export { maybeRedeemCredits };
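
A call-site sketch for the exported entry point (the issuer and client-id literals below are placeholders; the real values come from the CLI's configuration):

// Hypothetical call site. Honors OPENAI_API_KEY from the environment...
const apiKey = await getApiKey("https://auth.openai.com", "YOUR_CLIENT_ID");
// ...unless forceLogin is passed, which always runs the sign-in flow:
const fresh = await getApiKey("https://auth.openai.com", "YOUR_CLIENT_ID", true);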


@@ -1,4 +1,4 @@
import { execSync } from "node:child_process";
import { execSync, execFileSync } from "node:child_process";
// The objects thrown by `child_process.execSync()` are `Error` instances that
// include additional, undocumented properties such as `status` (exit code) and
@@ -89,12 +89,18 @@ export function getGitDiff(): {
//
// `git diff --color --no-index /dev/null <file>` exits with status 1
// when differences are found, so we capture stdout from the thrown
// error object instead of letting it propagate.
execSync(`git diff --color --no-index -- "${nullDevice}" "${file}"`, {
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
maxBuffer: 10 * 1024 * 1024,
});
// error object instead of letting it propagate. Using `execFileSync`
// avoids shell interpolation issues with special characters in the
// path.
execFileSync(
"git",
["diff", "--color", "--no-index", "--", nullDevice, file],
{
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
maxBuffer: 10 * 1024 * 1024,
},
);
} catch (err) {
if (
isExecSyncError(err) &&
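
A sketch of the shell-interpolation hazard the execFileSync change closes, using a hypothetical file name (double quotes do not stop command substitution in a shell):

// The shell expands $(whoami) before git ever sees the argument:
execSync(`git diff --color --no-index -- /dev/null "report-$(whoami).txt"`);
// execFileSync passes the name through verbatim as a single argv entry:
execFileSync("git", [
  "diff", "--color", "--no-index", "--",
  "/dev/null", "report-$(whoami).txt",
]);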


@@ -19,6 +19,10 @@ export const openAiModelInfo = {
label: "o3 (2025-04-16)",
maxContextLength: 200000,
},
"codex-mini-latest": {
label: "codex-mini-latest",
maxContextLength: 200000,
},
"o4-mini": {
label: "o4 Mini",
maxContextLength: 200000,


@@ -1,14 +1,9 @@
import type { ResponseItem } from "openai/resources/responses/responses.mjs";
import { approximateTokensUsed } from "./approximate-tokens-used.js";
import {
OPENAI_ORGANIZATION,
OPENAI_PROJECT,
getBaseUrl,
getApiKey,
} from "./config";
import { getApiKey } from "./config.js";
import { type SupportedModelId, openAiModelInfo } from "./model-info.js";
import OpenAI from "openai";
import { createOpenAIClient } from "./openai-client.js";
const MODEL_LIST_TIMEOUT_MS = 2_000; // 2 seconds
export const RECOMMENDED_MODELS: Array<string> = ["o4-mini", "o3"];
@@ -27,19 +22,7 @@ async function fetchModels(provider: string): Promise<Array<string>> {
}
try {
const headers: Record<string, string> = {};
if (OPENAI_ORGANIZATION) {
headers["OpenAI-Organization"] = OPENAI_ORGANIZATION;
}
if (OPENAI_PROJECT) {
headers["OpenAI-Project"] = OPENAI_PROJECT;
}
const openai = new OpenAI({
apiKey: getApiKey(provider),
baseURL: getBaseUrl(provider),
defaultHeaders: headers,
});
const openai = createOpenAIClient({ provider });
const list = await openai.models.list();
const models: Array<string> = [];
for await (const model of list as AsyncIterable<{ id?: string }>) {


@@ -0,0 +1,51 @@
import type { AppConfig } from "./config.js";
import {
getBaseUrl,
getApiKey,
AZURE_OPENAI_API_VERSION,
OPENAI_TIMEOUT_MS,
OPENAI_ORGANIZATION,
OPENAI_PROJECT,
} from "./config.js";
import OpenAI, { AzureOpenAI } from "openai";
type OpenAIClientConfig = {
provider: string;
};
/**
* Creates an OpenAI client instance based on the provided configuration.
* Handles both standard OpenAI and Azure OpenAI configurations.
*
* @param config The configuration containing provider information
* @returns An instance of either OpenAI or AzureOpenAI client
*/
export function createOpenAIClient(
config: OpenAIClientConfig | AppConfig,
): OpenAI | AzureOpenAI {
const headers: Record<string, string> = {};
if (OPENAI_ORGANIZATION) {
headers["OpenAI-Organization"] = OPENAI_ORGANIZATION;
}
if (OPENAI_PROJECT) {
headers["OpenAI-Project"] = OPENAI_PROJECT;
}
if (config.provider?.toLowerCase() === "azure") {
return new AzureOpenAI({
apiKey: getApiKey(config.provider),
baseURL: getBaseUrl(config.provider),
apiVersion: AZURE_OPENAI_API_VERSION,
timeout: OPENAI_TIMEOUT_MS,
defaultHeaders: headers,
});
}
return new OpenAI({
apiKey: getApiKey(config.provider),
baseURL: getBaseUrl(config.provider),
timeout: OPENAI_TIMEOUT_MS,
defaultHeaders: headers,
});
}
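
Call sites only need a provider name (a full AppConfig also satisfies the signature); a usage sketch:

// Standard OpenAI-compatible endpoint:
const openai = createOpenAIClient({ provider: "openai" });
// Routed through AzureOpenAI with AZURE_OPENAI_API_VERSION applied:
const azure = createOpenAIClient({ provider: "azure" });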


@@ -35,6 +35,7 @@ export function parseToolCallOutput(toolCallOutput: string): {
export type CommandReviewDetails = {
cmd: Array<string>;
cmdReadableText: string;
workdir: string | undefined;
};
/**
@@ -51,12 +52,13 @@ export function parseToolCall(
return undefined;
}
const { cmd } = toolCallArgs;
const { cmd, workdir } = toolCallArgs;
const cmdReadableText = formatCommandForDisplay(cmd);
return {
cmd,
cmdReadableText,
workdir,
};
}


@@ -12,6 +12,11 @@ export const providers: Record<
baseURL: "https://openrouter.ai/api/v1",
envKey: "OPENROUTER_API_KEY",
},
azure: {
name: "AzureOpenAI",
baseURL: "https://YOUR_PROJECT_NAME.openai.azure.com/openai",
envKey: "AZURE_OPENAI_API_KEY",
},
gemini: {
name: "Gemini",
baseURL: "https://generativelanguage.googleapis.com/v1beta/openai",
@@ -42,4 +47,9 @@ export const providers: Record<
baseURL: "https://api.groq.com/openai/v1",
envKey: "GROQ_API_KEY",
},
arceeai: {
name: "ArceeAI",
baseURL: "https://conductor.arcee.ai/v1",
envKey: "ARCEEAI_API_KEY",
},
};
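
A sketch of how a provider entry is typically consumed (the lookup itself is illustrative, not part of the diff):

const entry = providers["azure"];
if (entry) {
  const apiKey = process.env[entry.envKey]; // e.g. AZURE_OPENAI_API_KEY
  // Note the Azure baseURL is a template: YOUR_PROJECT_NAME must be
  // replaced with the actual resource name before use.
  const baseURL = entry.baseURL;
}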


@@ -487,7 +487,7 @@ async function* streamResponses(
let isToolCall = false;
for await (const chunk of completion as AsyncIterable<OpenAI.ChatCompletionChunk>) {
// console.error('\nCHUNK: ', JSON.stringify(chunk));
const choice = chunk.choices[0];
const choice = chunk.choices?.[0];
if (!choice) {
continue;
}


@@ -1,4 +1,3 @@
export const CLI_VERSION = "0.1.2504251709"; // Must be in sync with package.json.
export const ORIGIN = "codex_cli_ts";
export type TerminalChatSession = {


@@ -20,6 +20,7 @@ export const SLASH_COMMANDS: Array<SlashCommand> = [
"Clear conversation history but keep a summary in context. Optional: /compact [instructions for summarization]",
},
{ command: "/history", description: "Open command history" },
{ command: "/sessions", description: "Browse previous sessions" },
{ command: "/help", description: "Show list of commands" },
{ command: "/model", description: "Open model selection panel" },
{ command: "/approval", description: "Open approval mode selection panel" },

codex-cli/src/version.ts

@@ -0,0 +1,8 @@
// Note that "../package.json" is marked external in build.mjs. This ensures
// that the contents of package.json will always be read at runtime, which is
// preferable because we do not have to make a temporary change to
// package.json in the source tree to update the version number in the code.
import pkg from "../package.json" with { type: "json" };
// Read the version directly from package.json.
export const CLI_VERSION: string = (pkg as { version: string }).version;
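
The comment above assumes the bundler is told to leave package.json unbundled; with esbuild that is a single external entry. A sketch of what build.mjs might contain (assumed entry point and output names, not the file's actual contents):

// build.mjs (sketch)
import * as esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["src/cli.tsx"], // assumption
  bundle: true,
  platform: "node",
  format: "esm",
  outfile: "dist/cli.js", // assumption
  external: ["../package.json"], // resolved and read at runtime instead
});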


@@ -132,8 +132,6 @@ describe("cancel clears previous_response_id", () => {
] as any);
const bodies = _test.getBodies();
// eslint-disable-next-line no-console
console.log(JSON.stringify(bodies, null, 2));
expect(bodies.length).toBeGreaterThanOrEqual(2);
// The *last* invocation belongs to the second run (after cancellation).


@@ -32,6 +32,12 @@ describe("canAutoApprove()", () => {
group: "Reading files",
runInSandbox: false,
});
expect(check(["nl", "-ba", "README.md"])).toEqual({
type: "auto-approve",
reason: "View file with line numbers",
group: "Reading files",
runInSandbox: false,
});
expect(check(["pwd"])).toEqual({
type: "auto-approve",
reason: "Print working directory",
@@ -147,4 +153,41 @@ describe("canAutoApprove()", () => {
type: "ask-user",
});
});
test("sed", () => {
// `sed` used to read lines from a file.
expect(check(["sed", "-n", "1,200p", "filename.txt"])).toEqual({
type: "auto-approve",
reason: "Sed print subset",
group: "Reading files",
runInSandbox: false,
});
// Bad quoting! The model is doing the wrong thing here, so this should not
// be auto-approved.
expect(check(["sed", "-n", "'1,200p'", "filename.txt"])).toEqual({
type: "ask-user",
});
// Extra arg: here we are extra conservative and do not auto-approve.
expect(check(["sed", "-n", "1,200p", "file1.txt", "file2.txt"])).toEqual({
type: "ask-user",
});
// `sed` used to read lines from a file with a shell command.
expect(check(["bash", "-lc", "sed -n '1,200p' filename.txt"])).toEqual({
type: "auto-approve",
reason: "Sed print subset",
group: "Reading files",
runInSandbox: false,
});
// Pipe the output of `nl` to `sed`.
expect(
check(["bash", "-lc", "nl -ba README.md | sed -n '1,200p'"]),
).toEqual({
type: "auto-approve",
reason: "View file with line numbers",
group: "Reading files",
runInSandbox: false,
});
});
});


@@ -1,5 +1,6 @@
import { exec as rawExec } from "../src/utils/agent/sandbox/raw-exec.js";
import { describe, it, expect } from "vitest";
import type { AppConfig } from "src/utils/config.js";
// Import the low-level exec implementation so we can verify that AbortSignal
// correctly terminates a spawned process. We bypass the higher-level wrappers
@@ -12,9 +13,13 @@ describe("exec cancellation", () => {
// Spawn a node process that would normally run for 5 seconds before
// printing anything. We should abort long before that happens.
const cmd = ["node", "-e", "setTimeout(() => console.log('late'), 5000);"];
const config: AppConfig = {
model: "test-model",
instructions: "test-instructions",
};
const start = Date.now();
const promise = rawExec(cmd, {}, [], abortController.signal);
const promise = rawExec(cmd, {}, config, abortController.signal);
// Abort almost immediately.
abortController.abort();
@@ -36,9 +41,14 @@ describe("exec cancellation", () => {
it("allows the process to finish when not aborted", async () => {
const abortController = new AbortController();
const config: AppConfig = {
model: "test-model",
instructions: "test-instructions",
};
const cmd = ["node", "-e", "console.log('finished')"];
const result = await rawExec(cmd, {}, [], abortController.signal);
const result = await rawExec(cmd, {}, config, abortController.signal);
expect(result.exitCode).toBe(0);
expect(result.stdout.trim()).toBe("finished");


@@ -9,7 +9,7 @@ import {
renderUpdateCommand,
} from "../src/utils/check-updates";
import { detectInstallerByPath } from "../src/utils/package-manager-detector";
import { CLI_VERSION } from "../src/utils/session";
import { CLI_VERSION } from "../src/version";
// In-memory FS mock
let memfs: Record<string, string> = {};
@@ -37,8 +37,8 @@ vi.mock("node:fs/promises", async (importOriginal) => {
// Mock package name & CLI version
const MOCK_PKG = "my-pkg";
vi.mock("../src/version", () => ({ CLI_VERSION: "1.0.0" }));
vi.mock("../package.json", () => ({ name: MOCK_PKG }));
vi.mock("../src/utils/session", () => ({ CLI_VERSION: "1.0.0" }));
vi.mock("../src/utils/package-manager-detector", async (importOriginal) => {
return {
...(await importOriginal()),


@@ -55,6 +55,7 @@ describe("/clear command", () => {
openApprovalOverlay: () => {},
openHelpOverlay: () => {},
openDiffOverlay: () => {},
openSessionsOverlay: () => {},
onCompact: () => {},
interruptAgent: () => {},
active: true,


@@ -1,6 +1,11 @@
import type * as fsType from "fs";
import { loadConfig, saveConfig } from "../src/utils/config.js"; // parent import first
import {
loadConfig,
saveConfig,
DEFAULT_SHELL_MAX_BYTES,
DEFAULT_SHELL_MAX_LINES,
} from "../src/utils/config.js";
import { AutoApprovalMode } from "../src/utils/auto-approval-mode.js";
import { tmpdir } from "os";
import { join } from "path";
@@ -62,7 +67,7 @@ test("loads default config if files don't exist", () => {
});
// Keep the test focused on just checking that default model and instructions are loaded
// so we need to make sure we check just these properties
expect(config.model).toBe("o4-mini");
expect(config.model).toBe("codex-mini-latest");
expect(config.instructions).toBe("");
});
@@ -275,3 +280,84 @@ test("handles empty user instructions when saving with project doc separator", (
});
expect(loadedConfig.instructions).toBe("");
});
test("loads default shell config when not specified", () => {
// Setup config without shell settings
memfs[testConfigPath] = JSON.stringify(
{
model: "mymodel",
},
null,
2,
);
memfs[testInstructionsPath] = "test instructions";
// Load config and verify default shell settings
const loadedConfig = loadConfig(testConfigPath, testInstructionsPath, {
disableProjectDoc: true,
});
// Check shell settings were loaded with defaults
expect(loadedConfig.tools).toBeDefined();
expect(loadedConfig.tools?.shell).toBeDefined();
expect(loadedConfig.tools?.shell?.maxBytes).toBe(DEFAULT_SHELL_MAX_BYTES);
expect(loadedConfig.tools?.shell?.maxLines).toBe(DEFAULT_SHELL_MAX_LINES);
});
test("loads and saves custom shell config", () => {
// Setup config with custom shell settings
const customMaxBytes = 12_410;
const customMaxLines = 500;
memfs[testConfigPath] = JSON.stringify(
{
model: "mymodel",
tools: {
shell: {
maxBytes: customMaxBytes,
maxLines: customMaxLines,
},
},
},
null,
2,
);
memfs[testInstructionsPath] = "test instructions";
// Load config and verify custom shell settings
const loadedConfig = loadConfig(testConfigPath, testInstructionsPath, {
disableProjectDoc: true,
});
// Check shell settings were loaded correctly
expect(loadedConfig.tools?.shell?.maxBytes).toBe(customMaxBytes);
expect(loadedConfig.tools?.shell?.maxLines).toBe(customMaxLines);
// Modify shell settings and save
const updatedMaxBytes = 20_000;
const updatedMaxLines = 1_000;
const updatedConfig = {
...loadedConfig,
tools: {
shell: {
maxBytes: updatedMaxBytes,
maxLines: updatedMaxLines,
},
},
};
saveConfig(updatedConfig, testConfigPath, testInstructionsPath);
// Verify saved config contains updated shell settings
expect(memfs[testConfigPath]).toContain(`"maxBytes": ${updatedMaxBytes}`);
expect(memfs[testConfigPath]).toContain(`"maxLines": ${updatedMaxLines}`);
// Load again and verify updated values
const reloadedConfig = loadConfig(testConfigPath, testInstructionsPath, {
disableProjectDoc: true,
});
expect(reloadedConfig.tools?.shell?.maxBytes).toBe(updatedMaxBytes);
expect(reloadedConfig.tools?.shell?.maxLines).toBe(updatedMaxLines);
});


@@ -29,7 +29,7 @@ describe.each([
])("AgentLoop with disableResponseStorage=%s", ({ flag, title }) => {
/* build a fresh config for each case */
const cfg: AppConfig = {
model: "o4-mini",
model: "codex-mini-latest",
provider: "openai",
instructions: "",
disableResponseStorage: flag,


@@ -21,7 +21,10 @@ describe("disableResponseStorage persistence", () => {
mkdirSync(codexDir, { recursive: true });
// seed YAML with ZDR enabled
writeFileSync(yamlPath, "model: o4-mini\ndisableResponseStorage: true\n");
writeFileSync(
yamlPath,
"model: codex-mini-latest\ndisableResponseStorage: true\n",
);
});
afterAll((): void => {

Some files were not shown because too many files have changed in this diff Show More