Mirror of https://github.com/openai/codex.git (synced 2026-02-02 06:57:03 +00:00)
Compare commits (5 commits)

| Author | SHA1 | Date |
|---|---|---|
| | bd2a53d1cd | |
| | e744548aae | |
| | 8b23e160c4 | |
| | 3df732caa1 | |
| | 3a4f5435e8 | |
@@ -1,3 +1 @@
iTerm
iTerm2
psuedo
@@ -1,6 +1,6 @@
[codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = .git*,vendor,*-lock.yaml,*.lock,.codespellrc,*test.ts,*.jsonl,frame*.txt
skip = .git*,vendor,*-lock.yaml,*.lock,.codespellrc,*test.ts
check-hidden = true
ignore-regex = ^\s*"image/\S+": ".*|\b(afterAll)\b
ignore-words-list = ratatui,ser,iTerm,iterm2,iterm
ignore-words-list = ratatui,ser
22 .github/ISSUE_TEMPLATE/2-bug-report.yml vendored
@@ -20,14 +20,6 @@ body:
    attributes:
      label: What version of Codex is running?
      description: Copy the output of `codex --version`
    validations:
      required: true
  - type: input
    id: plan
    attributes:
      label: What subscription do you have?
    validations:
      required: true
  - type: input
    id: model
    attributes:
@@ -40,18 +32,11 @@ body:
      description: |
        For MacOS and Linux: copy the output of `uname -mprs`
        For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
  - type: textarea
    id: actual
    attributes:
      label: What issue are you seeing?
      description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: What steps can reproduce the bug?
      description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
      description: Explain the bug and provide a code snippet that can reproduce it.
    validations:
      required: true
  - type: textarea
@@ -59,6 +44,11 @@ body:
    attributes:
      label: What is the expected behavior?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: actual
    attributes:
      label: What do you see instead?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: notes
    attributes:
25 .github/ISSUE_TEMPLATE/4-feature-request.yml vendored
@@ -1,25 +0,0 @@
name: 🎁 Feature Request
description: Propose a new feature for Codex
labels:
  - enhancement
body:
  - type: markdown
    attributes:
      value: |
        Is Codex missing a feature that you'd like to see? Feel free to propose it here.

        Before you submit a feature:
        1. Search existing issues for similar features. If you find one, 👍 it rather than opening a new one.
        2. The Codex team will try to balance the varying needs of the community when prioritizing or rejecting new features. Not all features will be accepted. See [Contributing](https://github.com/openai/codex#contributing) for more details.

  - type: textarea
    id: feature
    attributes:
      label: What feature would you like to see?
    validations:
      required: true
  - type: textarea
    id: notes
    attributes:
      label: Additional information
      description: Is there anything else you think we should know?
62 .github/ISSUE_TEMPLATE/5-vs-code-extension.yml vendored
@@ -1,62 +0,0 @@
name: 🧑💻 VS Code Extension
description: Report an issue with the VS Code extension
labels:
  - extension
  - needs triage
body:
  - type: markdown
    attributes:
      value: |
        Before submitting a new issue, please search for existing issues to see if your issue has already been reported.
        If it has, please add a 👍 reaction (no need to leave a comment) to the existing issue instead of creating a new one.

  - type: input
    id: version
    attributes:
      label: What version of the VS Code extension are you using?
    validations:
      required: true
  - type: input
    id: plan
    attributes:
      label: What subscription do you have?
    validations:
      required: true
  - type: input
    id: ide
    attributes:
      label: Which IDE are you using?
      description: Like `VS Code`, `Cursor`, `Windsurf`, etc.
    validations:
      required: true
  - type: input
    id: platform
    attributes:
      label: What platform is your computer?
      description: |
        For MacOS and Linux: copy the output of `uname -mprs`
        For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
  - type: textarea
    id: actual
    attributes:
      label: What issue are you seeing?
      description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: What steps can reproduce the bug?
      description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: What is the expected behavior?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: notes
    attributes:
      label: Additional information
      description: Is there anything else you think we should know?
@@ -1,2 +1 @@
/bin/
/node_modules/
8 .github/actions/codex/.prettierrc.toml vendored Normal file
@@ -0,0 +1,8 @@
printWidth = 80
quoteProps = "consistent"
semi = true
tabWidth = 2
trailingComma = "all"

# Preserve existing behavior for markdown/text wrapping.
proseWrap = "preserve"
140 .github/actions/codex/README.md vendored Normal file
@@ -0,0 +1,140 @@
# openai/codex-action

`openai/codex-action` is a GitHub Action that facilitates the use of [Codex](https://github.com/openai/codex) on GitHub issues and pull requests. Using the action, associate **labels** to run Codex with the appropriate prompt for the given context. Codex will respond by posting comments or creating PRs, whichever you specify!

Here is a sample workflow that uses `openai/codex-action`:

```yaml
name: Codex

on:
  issues:
    types: [opened, labeled]
  pull_request:
    branches: [main]
    types: [labeled]

jobs:
  codex:
    if: ... # optional, but can be effective in conserving CI resources
    runs-on: ubuntu-latest
    # TODO(mbolin): Need to verify if/when `write` is necessary.
    permissions:
      contents: write
      issues: write
      pull-requests: write
    steps:
      # By default, Codex runs network disabled using --full-auto, so perform
      # any setup that requires network (such as installing dependencies)
      # before openai/codex-action.
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Codex
        uses: openai/codex-action@latest
        with:
          openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
```

See sample usage in [`codex.yml`](../../workflows/codex.yml).

## Triggering the Action

Using the sample workflow above, we have:

```yaml
on:
  issues:
    types: [opened, labeled]
  pull_request:
    branches: [main]
    types: [labeled]
```

which means our workflow will be triggered when any of the following events occur:

- a label is added to an issue
- a label is added to a pull request against the `main` branch

### Label-Based Triggers

To define a GitHub label that should trigger Codex, create a file named `.github/codex/labels/LABEL-NAME.md` in your repository, where `LABEL-NAME` is the name of the label. The content of the file is the prompt template to use when the label is added (see more on [Prompt Template Variables](#prompt-template-variables) below).

For example, if the file `.github/codex/labels/codex-review.md` exists, then:

- Adding the `codex-review` label will trigger the workflow containing the `openai/codex-action` GitHub Action.
- When `openai/codex-action` starts, it will replace the `codex-review` label with `codex-review-in-progress`.
- When `openai/codex-action` is finished, it will replace the `codex-review-in-progress` label with `codex-review-completed`.

If Codex sees that either `codex-review-in-progress` or `codex-review-completed` is already present, it will not perform the action.

As determined by the [default config](./src/default-label-config.ts), Codex will act on the following labels by default:

- Adding the `codex-review` label to a pull request will have Codex review the PR and add it to the PR as a comment.
- Adding the `codex-triage` label to an issue will have Codex investigate the issue and report its findings as a comment.
- Adding the `codex-issue-fix` label to an issue will have Codex attempt to fix the issue and create a PR with the fix, if any.
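For illustration, a label template file might look like the sketch below. The file name and wording here are assumptions for the example; the `{CODEX_ACTION_...}` placeholders are documented under [Prompt Template Variables](#prompt-template-variables) below, and the shape mirrors the built-in templates in [`default-label-config.ts`](./src/default-label-config.ts):

```markdown
<!-- .github/codex/labels/codex-triage.md (illustrative sketch) -->

Troubleshoot whether the reported issue is valid.

Provide a concise and respectful comment summarizing the findings.

### {CODEX_ACTION_ISSUE_TITLE}

{CODEX_ACTION_ISSUE_BODY}
```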
## Action Inputs

The `openai/codex-action` GitHub Action takes the following inputs.

### `openai_api_key` (required)

Set your `OPENAI_API_KEY` as a [repository secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). See **Secrets and variables**, then **Actions**, in the settings for your GitHub repo.

Note that the secret name does not have to be `OPENAI_API_KEY`. For example, you might want to name it `CODEX_OPENAI_API_KEY` and then configure it on `openai/codex-action` as follows:

```yaml
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
```

### `github_token` (required)

This is required so that Codex can post a comment or create a PR. Set this value on the action as follows:

```yaml
github_token: ${{ secrets.GITHUB_TOKEN }}
```

### `codex_args`

A whitespace-delimited list of arguments to pass to Codex. Defaults to `--full-auto`, but if you want to override the default model to use `o3`:

```yaml
codex_args: "--full-auto --model o3"
```

For more complex configurations, use the `codex_home` input.

### `codex_home`

If set, the value to use for the `$CODEX_HOME` environment variable when running Codex. As explained [in the docs](https://github.com/openai/codex/tree/main/codex-rs#readme), this folder can contain the `config.toml` to configure Codex, custom instructions, and log files.

This should be a relative path within your repo.
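As a sketch of what that might look like: check a folder into the repo, point `codex_home` at it, and place a `config.toml` inside. The path and the exact keys below are assumptions for this example; `hide_agent_reasoning` is the same setting the action passes via `--config` by default, and the model override mirrors the `codex_args` example above.

```yaml
codex_home: .github/codex/home
```

```toml
# Hypothetical .github/codex/home/config.toml
model = "o3"                # assumed key, mirroring the `--model o3` example
hide_agent_reasoning = true # same setting as the action's default codex_args
```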
## Prompt Template Variables

As shown above, `"prompt"` and `"promptPath"` are used to define prompt templates that will be populated and passed to Codex in response to certain events. All template variables are of the form `{CODEX_ACTION_...}` and the supported values are defined below.

### `CODEX_ACTION_ISSUE_TITLE`

If the action was triggered on a GitHub issue, this is the issue title.

Specifically, it is read as `.issue.title` from `$GITHUB_EVENT_PATH`.

### `CODEX_ACTION_ISSUE_BODY`

If the action was triggered on a GitHub issue, this is the issue body.

Specifically, it is read as `.issue.body` from `$GITHUB_EVENT_PATH`.

### `CODEX_ACTION_GITHUB_EVENT_PATH`

The value of the `$GITHUB_EVENT_PATH` environment variable, which is the path to the file that contains the JSON payload for the event that triggered the workflow. Codex can use `jq` to read only the fields of interest from this file.
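For example, a prompt template might instruct Codex to run a query such as the following against the payload (the field picked here is just an assumed illustration):

```sh
# Read one field of interest from the event payload rather than the whole file.
jq -r '.pull_request.base.ref' "$GITHUB_EVENT_PATH"
```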
### `CODEX_ACTION_PR_DIFF`

If the action was triggered on a pull request, this is the diff between the base and head commits of the PR. It is the output from `git diff`.

Note that the content of the diff could be quite large, so it is generally safer to point Codex at `CODEX_ACTION_GITHUB_EVENT_PATH` and let it decide how it wants to explore the change.
127 .github/actions/codex/action.yml vendored Normal file
@@ -0,0 +1,127 @@
name: "Codex [reusable action]"
description: "A reusable action that runs a Codex model."

inputs:
  openai_api_key:
    description: "The value to use as the OPENAI_API_KEY environment variable when running Codex."
    required: true
  trigger_phrase:
    description: "Text to trigger Codex from a PR/issue body or comment."
    required: false
    default: ""
  github_token:
    description: "Token so Codex can comment on the PR or issue."
    required: true
  codex_args:
    description: "A whitespace-delimited list of arguments to pass to Codex. Due to limitations in YAML, arguments with spaces are not supported. For more complex configurations, use the `codex_home` input."
    required: false
    default: "--config hide_agent_reasoning=true --full-auto"
  codex_home:
    description: "Value to use as the CODEX_HOME environment variable when running Codex."
    required: false
  codex_release_tag:
    description: "The release tag of the Codex model to run, e.g., 'rust-v0.3.0'. Defaults to the latest release."
    required: false
    default: ""

runs:
  using: "composite"
  steps:
    # Do this in Bash so we do not even bother to install Bun if the sender does
    # not have write access to the repo.
    - name: Verify user has write access to the repo.
      env:
        GH_TOKEN: ${{ github.token }}
      shell: bash
      run: |
        set -euo pipefail

        PERMISSION=$(gh api \
          "/repos/${GITHUB_REPOSITORY}/collaborators/${{ github.event.sender.login }}/permission" \
          | jq -r '.permission')

        if [[ "$PERMISSION" != "admin" && "$PERMISSION" != "write" ]]; then
          exit 1
        fi

    - name: Download Codex
      env:
        GH_TOKEN: ${{ github.token }}
      shell: bash
      run: |
        set -euo pipefail

        # Determine OS/arch and corresponding Codex artifact name.
        uname_s=$(uname -s)
        uname_m=$(uname -m)

        case "$uname_s" in
          Linux*) os="linux" ;;
          Darwin*) os="apple-darwin" ;;
          *) echo "Unsupported operating system: $uname_s"; exit 1 ;;
        esac

        case "$uname_m" in
          x86_64*) arch="x86_64" ;;
          arm64*|aarch64*) arch="aarch64" ;;
          *) echo "Unsupported architecture: $uname_m"; exit 1 ;;
        esac

        # linux builds differentiate between musl and gnu.
        if [[ "$os" == "linux" ]]; then
          if [[ "$arch" == "x86_64" ]]; then
            triple="${arch}-unknown-linux-musl"
          else
            # Only other supported linux build is aarch64 gnu.
            triple="${arch}-unknown-linux-gnu"
          fi
        else
          # macOS
          triple="${arch}-apple-darwin"
        fi

        # Note that if we start baking version numbers into the artifact name,
        # we will need to update this action.yml file to match.
        artifact="codex-exec-${triple}.tar.gz"

        TAG_ARG="${{ inputs.codex_release_tag }}"
        # The usage is `gh release download [<tag>] [flags]`, so if TAG_ARG
        # is empty, we do not pass it so we can default to the latest release.
        gh release download ${TAG_ARG:+$TAG_ARG} --repo openai/codex \
          --pattern "$artifact" --output - \
          | tar xzO > /usr/local/bin/codex-exec
        chmod +x /usr/local/bin/codex-exec

        # Display Codex version to confirm binary integrity; ensure we point it
        # at the checked-out repository via --cd so that any subsequent commands
        # use the correct working directory.
        codex-exec --cd "$GITHUB_WORKSPACE" --version

    - name: Install Bun
      uses: oven-sh/setup-bun@v2
      with:
        bun-version: 1.2.11

    - name: Install dependencies
      shell: bash
      run: |
        cd ${{ github.action_path }}
        bun install --production

    - name: Run Codex
      shell: bash
      run: bun run ${{ github.action_path }}/src/main.ts
      # Process args plus environment variables are commonly capped at about
      # 128 KiB total, so the values below should stay within that limit.
      env:
        INPUT_CODEX_ARGS: ${{ inputs.codex_args || '' }}
        INPUT_CODEX_HOME: ${{ inputs.codex_home || '' }}
        INPUT_TRIGGER_PHRASE: ${{ inputs.trigger_phrase || '' }}
        OPENAI_API_KEY: ${{ inputs.openai_api_key }}
        GITHUB_TOKEN: ${{ inputs.github_token }}
        GITHUB_EVENT_ACTION: ${{ github.event.action || '' }}
        GITHUB_EVENT_LABEL_NAME: ${{ github.event.label.name || '' }}
        GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number || '' }}
        GITHUB_EVENT_ISSUE_BODY: ${{ github.event.issue.body || '' }}
        GITHUB_EVENT_REVIEW_BODY: ${{ github.event.review.body || '' }}
        GITHUB_EVENT_COMMENT_BODY: ${{ github.event.comment.body || '' }}
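The `trigger_phrase` input above pairs with the comment handling in `src/comment.ts`; a hypothetical workflow step wiring it up might look like this (the phrase itself is an assumption for the example):

```yaml
- name: Run Codex
  uses: openai/codex-action@latest
  with:
    openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
    github_token: ${{ secrets.GITHUB_TOKEN }}
    trigger_phrase: "@codex"
```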
91 .github/actions/codex/bun.lock vendored Normal file
@@ -0,0 +1,91 @@
{
  "lockfileVersion": 1,
  "workspaces": {
    "": {
      "name": "codex-action",
      "dependencies": {
        "@actions/core": "^1.11.1",
        "@actions/github": "^6.0.1",
      },
      "devDependencies": {
        "@types/bun": "^1.2.19",
        "@types/node": "^24.1.0",
        "prettier": "^3.6.2",
        "typescript": "^5.8.3",
      },
    },
  },
  "packages": {
    "@actions/core": ["@actions/core@1.11.1", "", { "dependencies": { "@actions/exec": "^1.1.1", "@actions/http-client": "^2.0.1" } }, "sha512-hXJCSrkwfA46Vd9Z3q4cpEpHB1rL5NG04+/rbqW9d3+CSvtB1tYe8UTpAlixa1vj0m/ULglfEK2UKxMGxCxv5A=="],

    "@actions/exec": ["@actions/exec@1.1.1", "", { "dependencies": { "@actions/io": "^1.0.1" } }, "sha512-+sCcHHbVdk93a0XT19ECtO/gIXoxvdsgQLzb2fE2/5sIZmWQuluYyjPQtrtTHdU1YzTZ7bAPN4sITq2xi1679w=="],

    "@actions/github": ["@actions/github@6.0.1", "", { "dependencies": { "@actions/http-client": "^2.2.0", "@octokit/core": "^5.0.1", "@octokit/plugin-paginate-rest": "^9.2.2", "@octokit/plugin-rest-endpoint-methods": "^10.4.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "undici": "^5.28.5" } }, "sha512-xbZVcaqD4XnQAe35qSQqskb3SqIAfRyLBrHMd/8TuL7hJSz2QtbDwnNM8zWx4zO5l2fnGtseNE3MbEvD7BxVMw=="],

    "@actions/http-client": ["@actions/http-client@2.2.3", "", { "dependencies": { "tunnel": "^0.0.6", "undici": "^5.25.4" } }, "sha512-mx8hyJi/hjFvbPokCg4uRd4ZX78t+YyRPtnKWwIl+RzNaVuFpQHfmlGVfsKEJN8LwTCvL+DfVgAM04XaHkm6bA=="],

    "@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],

    "@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],

    "@octokit/auth-token": ["@octokit/auth-token@4.0.0", "", {}, "sha512-tY/msAuJo6ARbK6SPIxZrPBms3xPbfwBrulZe0Wtr/DIY9lje2HeV1uoebShn6mx7SjCHif6EjMvoREj+gZ+SA=="],

    "@octokit/core": ["@octokit/core@5.2.1", "", { "dependencies": { "@octokit/auth-token": "^4.0.0", "@octokit/graphql": "^7.1.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.0.0", "before-after-hook": "^2.2.0", "universal-user-agent": "^6.0.0" } }, "sha512-dKYCMuPO1bmrpuogcjQ8z7ICCH3FP6WmxpwC03yjzGfZhj9fTJg6+bS1+UAplekbN2C+M61UNllGOOoAfGCrdQ=="],

    "@octokit/endpoint": ["@octokit/endpoint@9.0.6", "", { "dependencies": { "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-H1fNTMA57HbkFESSt3Y9+FBICv+0jFceJFPWDePYlR/iMGrwM5ph+Dd4XRQs+8X+PUFURLQgX9ChPfhJ/1uNQw=="],

    "@octokit/graphql": ["@octokit/graphql@7.1.1", "", { "dependencies": { "@octokit/request": "^8.4.1", "@octokit/types": "^13.0.0", "universal-user-agent": "^6.0.0" } }, "sha512-3mkDltSfcDUoa176nlGoA32RGjeWjl3K7F/BwHwRMJUW/IteSa4bnSV8p2ThNkcIcZU2umkZWxwETSSCJf2Q7g=="],

    "@octokit/openapi-types": ["@octokit/openapi-types@24.2.0", "", {}, "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg=="],

    "@octokit/plugin-paginate-rest": ["@octokit/plugin-paginate-rest@9.2.2", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-u3KYkGF7GcZnSD/3UP0S7K5XUFT2FkOQdcfXZGZQPGv3lm4F2Xbf71lvjldr8c1H3nNbF+33cLEkWYbokGWqiQ=="],

    "@octokit/plugin-rest-endpoint-methods": ["@octokit/plugin-rest-endpoint-methods@10.4.1", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-xV1b+ceKV9KytQe3zCVqjg+8GTGfDYwaT1ATU5isiUyVtlVAO3HNdzpS4sr4GBx4hxQ46s7ITtZrAsxG22+rVg=="],

    "@octokit/request": ["@octokit/request@8.4.1", "", { "dependencies": { "@octokit/endpoint": "^9.0.6", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-qnB2+SY3hkCmBxZsR/MPCybNmbJe4KAlfWErXq+rBKkQJlbjdJeS85VI9r8UqeLYLvnAenU8Q1okM/0MBsAGXw=="],

    "@octokit/request-error": ["@octokit/request-error@5.1.1", "", { "dependencies": { "@octokit/types": "^13.1.0", "deprecation": "^2.0.0", "once": "^1.4.0" } }, "sha512-v9iyEQJH6ZntoENr9/yXxjuezh4My67CBSu9r6Ve/05Iu5gNgnisNWOsoJHTP6k0Rr0+HQIpnH+kyammu90q/g=="],

    "@octokit/types": ["@octokit/types@13.10.0", "", { "dependencies": { "@octokit/openapi-types": "^24.2.0" } }, "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA=="],

    "@types/bun": ["@types/bun@1.2.19", "", { "dependencies": { "bun-types": "1.2.19" } }, "sha512-d9ZCmrH3CJ2uYKXQIUuZ/pUnTqIvLDS0SK7pFmbx8ma+ziH/FRMoAq5bYpRG7y+w1gl+HgyNZbtqgMq4W4e2Lg=="],

    "@types/node": ["@types/node@24.1.0", "", { "dependencies": { "undici-types": "~7.8.0" } }, "sha512-ut5FthK5moxFKH2T1CUOC6ctR67rQRvvHdFLCD2Ql6KXmMuCrjsSsRI9UsLCm9M18BMwClv4pn327UvB7eeO1w=="],

    "@types/react": ["@types/react@19.1.8", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-AwAfQ2Wa5bCx9WP8nZL2uMZWod7J7/JSplxbTmBQ5ms6QpqNYm672H0Vu9ZVKVngQ+ii4R/byguVEUZQyeg44g=="],

    "before-after-hook": ["before-after-hook@2.2.3", "", {}, "sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ=="],

    "bun-types": ["bun-types@1.2.19", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-uAOTaZSPuYsWIXRpj7o56Let0g/wjihKCkeRqUBhlLVM/Bt+Fj9xTo+LhC1OV1XDaGkz4hNC80et5xgy+9KTHQ=="],

    "csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],

    "deprecation": ["deprecation@2.3.1", "", {}, "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ=="],

    "once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],

    "prettier": ["prettier@3.6.2", "", { "bin": { "prettier": "bin/prettier.cjs" } }, "sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ=="],

    "tunnel": ["tunnel@0.0.6", "", {}, "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg=="],

    "typescript": ["typescript@5.8.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="],

    "undici": ["undici@5.29.0", "", { "dependencies": { "@fastify/busboy": "^2.0.0" } }, "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg=="],

    "undici-types": ["undici-types@7.8.0", "", {}, "sha512-9UJ2xGDvQ43tYyVMpuHlsgApydB8ZKfVYTsLDhXkFL/6gfkp+U8xTGdh8pMJv1SpZna0zxG1DwsKZsreLbXBxw=="],

    "universal-user-agent": ["universal-user-agent@6.0.1", "", {}, "sha512-yCzhz6FN2wU1NiiQRogkTQszlQSlpWaw8SvVegAc+bDxbzHgh1vX8uIe8OYyMH6DwH+sdTJsgMl36+mSMdRJIQ=="],

    "wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],

    "@octokit/plugin-paginate-rest/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],

    "@octokit/plugin-rest-endpoint-methods/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],

    "bun-types/@types/node": ["@types/node@24.0.13", "", { "dependencies": { "undici-types": "~7.8.0" } }, "sha512-Qm9OYVOFHFYg3wJoTSrz80hoec5Lia/dPp84do3X7dZvLikQvM1YpmvTBEdIr/e+U8HTkFjLHLnl78K/qjf+jQ=="],

    "@octokit/plugin-paginate-rest/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],

    "@octokit/plugin-rest-endpoint-methods/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],
  }
}
21 .github/actions/codex/package.json vendored Normal file
@@ -0,0 +1,21 @@
{
  "name": "codex-action",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "format": "prettier --check src",
    "format:fix": "prettier --write src",
    "test": "bun test",
    "typecheck": "tsc"
  },
  "dependencies": {
    "@actions/core": "^1.11.1",
    "@actions/github": "^6.0.1"
  },
  "devDependencies": {
    "@types/bun": "^1.2.19",
    "@types/node": "^24.1.0",
    "prettier": "^3.6.2",
    "typescript": "^5.8.3"
  }
}
85 .github/actions/codex/src/add-reaction.ts vendored Normal file
@@ -0,0 +1,85 @@
import * as github from "@actions/github";
import type { EnvContext } from "./env-context";

/**
 * Add an "eyes" reaction to the entity (issue, issue comment, or pull request
 * review comment) that triggered the current Codex invocation.
 *
 * The purpose is to provide immediate feedback to the user – similar to the
 * *-in-progress label flow – indicating that the bot has acknowledged the
 * request and is working on it.
 *
 * We attempt to add the reaction best suited for the current GitHub event:
 *
 * • issues → POST /repos/{owner}/{repo}/issues/{issue_number}/reactions
 * • issue_comment → POST /repos/{owner}/{repo}/issues/comments/{comment_id}/reactions
 * • pull_request_review_comment → POST /repos/{owner}/{repo}/pulls/comments/{comment_id}/reactions
 *
 * If the specific target is unavailable (e.g. unexpected payload shape) we
 * silently skip instead of failing the whole action because the reaction is
 * merely cosmetic.
 */
export async function addEyesReaction(ctx: EnvContext): Promise<void> {
  const octokit = ctx.getOctokit();
  const { owner, repo } = github.context.repo;
  const eventName = github.context.eventName;

  try {
    switch (eventName) {
      case "issue_comment": {
        const commentId = (github.context.payload as any)?.comment?.id;
        if (commentId) {
          await octokit.rest.reactions.createForIssueComment({
            owner,
            repo,
            comment_id: commentId,
            content: "eyes",
          });
          return;
        }
        break;
      }
      case "pull_request_review_comment": {
        const commentId = (github.context.payload as any)?.comment?.id;
        if (commentId) {
          await octokit.rest.reactions.createForPullRequestReviewComment({
            owner,
            repo,
            comment_id: commentId,
            content: "eyes",
          });
          return;
        }
        break;
      }
      case "issues": {
        const issueNumber = github.context.issue.number;
        if (issueNumber) {
          await octokit.rest.reactions.createForIssue({
            owner,
            repo,
            issue_number: issueNumber,
            content: "eyes",
          });
          return;
        }
        break;
      }
      default: {
        // Fallback: try to react to the issue/PR if we have a number.
        const issueNumber = github.context.issue.number;
        if (issueNumber) {
          await octokit.rest.reactions.createForIssue({
            owner,
            repo,
            issue_number: issueNumber,
            content: "eyes",
          });
        }
      }
    }
  } catch (error) {
    // Do not fail the action if reaction creation fails – log and continue.
    console.warn(`Failed to add \"eyes\" reaction: ${error}`);
  }
}
53 .github/actions/codex/src/comment.ts vendored Normal file
@@ -0,0 +1,53 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";

/**
 * Handle `issue_comment` and `pull_request_review_comment` events once we know
 * the action is supported.
 */
export async function onComment(ctx: EnvContext): Promise<void> {
  const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
  if (!triggerPhrase) {
    console.warn("Empty trigger phrase: skipping.");
    return;
  }

  // Attempt to get the body of the comment from the environment. Depending on
  // the event type either `GITHUB_EVENT_COMMENT_BODY` (issue & PR comments) or
  // `GITHUB_EVENT_REVIEW_BODY` (PR reviews) is set.
  const commentBody =
    ctx.tryGetNonEmpty("GITHUB_EVENT_COMMENT_BODY") ??
    ctx.tryGetNonEmpty("GITHUB_EVENT_REVIEW_BODY") ??
    ctx.tryGetNonEmpty("GITHUB_EVENT_ISSUE_BODY");

  if (!commentBody) {
    console.warn("Comment body not found in environment: skipping.");
    return;
  }

  // Check if the trigger phrase is present.
  if (!commentBody.includes(triggerPhrase)) {
    console.log(
      `Trigger phrase '${triggerPhrase}' not found: nothing to do for this comment.`,
    );
    return;
  }

  // Derive the prompt by removing the trigger phrase. Remove only the first
  // occurrence to keep any additional occurrences that might be meaningful.
  const prompt = commentBody.replace(triggerPhrase, "").trim();

  if (prompt.length === 0) {
    console.warn("Prompt is empty after removing trigger phrase: skipping");
    return;
  }

  // Provide immediate feedback that we are working on the request.
  await addEyesReaction(ctx);

  // Run Codex and post the response as a new comment.
  const lastMessage = await runCodex(prompt, ctx);
  await postComment(lastMessage, ctx);
}
11 .github/actions/codex/src/config.ts vendored Normal file
@@ -0,0 +1,11 @@
import { readdirSync, statSync } from "fs";
import * as path from "path";

export interface Config {
  labels: Record<string, LabelConfig>;
}

export interface LabelConfig {
  /** Returns the prompt template. */
  getPromptTemplate(): string;
}
44 .github/actions/codex/src/default-label-config.ts vendored Normal file
@@ -0,0 +1,44 @@
import type { Config } from "./config";

export function getDefaultConfig(): Config {
  return {
    labels: {
      "codex-investigate-issue": {
        getPromptTemplate: () =>
          `
Troubleshoot whether the reported issue is valid.

Provide a concise and respectful comment summarizing the findings.

### {CODEX_ACTION_ISSUE_TITLE}

{CODEX_ACTION_ISSUE_BODY}
`.trim(),
      },
      "codex-code-review": {
        getPromptTemplate: () =>
          `
Review this PR and respond with a very concise final message, formatted in Markdown.

There should be a summary of the changes (1-2 sentences) and a few bullet points if necessary.

Then provide the **review** (1-2 sentences plus bullet points, friendly tone).

{CODEX_ACTION_GITHUB_EVENT_PATH} contains the JSON that triggered this GitHub workflow. It contains the \`base\` and \`head\` refs that define this PR. Both refs are available locally.
`.trim(),
      },
      "codex-attempt-fix": {
        getPromptTemplate: () =>
          `
Attempt to solve the reported issue.

If a code change is required, create a new branch, commit the fix, and open a pull-request that resolves the problem.

### {CODEX_ACTION_ISSUE_TITLE}

{CODEX_ACTION_ISSUE_BODY}
`.trim(),
      },
    },
  };
}
116 .github/actions/codex/src/env-context.ts vendored Normal file
@@ -0,0 +1,116 @@
/*
 * Centralised access to environment variables used by the Codex GitHub
 * Action.
 *
 * To enable proper unit-testing we avoid reading from `process.env` at module
 * initialisation time. Instead an `EnvContext` object is created (usually from
 * the real `process.env`) and passed around explicitly or – where that is not
 * yet practical – imported as the shared `defaultContext` singleton. Tests can
 * create their own context backed by a stubbed map of variables without having
 * to mutate global state.
 */

import { fail } from "./fail";
import * as github from "@actions/github";

export interface EnvContext {
  /**
   * Return the value for a given environment variable or terminate the action
   * via `fail` if it is missing / empty.
   */
  get(name: string): string;

  /**
   * Attempt to read an environment variable. Returns the value when present;
   * otherwise returns undefined (does not call `fail`).
   */
  tryGet(name: string): string | undefined;

  /**
   * Attempt to read an environment variable. Returns non-empty string value or
   * null if unset or empty string.
   */
  tryGetNonEmpty(name: string): string | null;

  /**
   * Return a memoised Octokit instance authenticated via the token resolved
   * from the provided argument (when defined) or the environment variables
   * `GITHUB_TOKEN`/`GH_TOKEN`.
   *
   * Subsequent calls return the same cached instance to avoid spawning
   * multiple REST clients within a single action run.
   */
  getOctokit(token?: string): ReturnType<typeof github.getOctokit>;
}

/** Internal helper – *not* exported. */
function _getRequiredEnv(
  name: string,
  env: Record<string, string | undefined>,
): string | undefined {
  const value = env[name];

  // Avoid leaking secrets into logs while still logging non-secret variables.
  if (name.endsWith("KEY") || name.endsWith("TOKEN")) {
    if (value) {
      console.log(`value for ${name} was found`);
    }
  } else {
    console.log(`${name}=${value}`);
  }

  return value;
}

/** Create a context backed by the supplied environment map (defaults to `process.env`). */
export function createEnvContext(
  env: Record<string, string | undefined> = process.env,
): EnvContext {
  // Lazily instantiated Octokit client – shared across this context.
  let cachedOctokit: ReturnType<typeof github.getOctokit> | null = null;

  return {
    get(name: string): string {
      const value = _getRequiredEnv(name, env);
      if (value == null) {
        fail(`Missing required environment variable: ${name}`);
      }
      return value;
    },

    tryGet(name: string): string | undefined {
      return _getRequiredEnv(name, env);
    },

    tryGetNonEmpty(name: string): string | null {
      const value = _getRequiredEnv(name, env);
      return value == null || value === "" ? null : value;
    },

    getOctokit(token?: string) {
      if (cachedOctokit) {
        return cachedOctokit;
      }

      // Determine the token to authenticate with.
      const githubToken = token ?? env["GITHUB_TOKEN"] ?? env["GH_TOKEN"];

      if (!githubToken) {
        fail(
          "Unable to locate a GitHub token. `github_token` should have been set on the action.",
        );
      }

      cachedOctokit = github.getOctokit(githubToken!);
      return cachedOctokit;
    },
  };
}

/**
 * Shared context built from the actual `process.env`. Production code that is
 * not yet refactored to receive a context explicitly may import and use this
 * singleton. Tests should avoid the singleton and instead pass their own
 * context to the functions they exercise.
 */
export const defaultContext: EnvContext = createEnvContext();
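As the file comment above notes, tests can construct a context from a stubbed map instead of `process.env`; a minimal sketch of that pattern, with variable values made up for the example:

```ts
import { createEnvContext } from "./env-context";

// Context backed by a stubbed environment; no global state is touched.
const ctx = createEnvContext({
  GITHUB_TOKEN: "test-token",
  GITHUB_REPOSITORY: "openai/codex",
});

ctx.get("GITHUB_REPOSITORY"); // "openai/codex"
ctx.tryGetNonEmpty("NOT_SET"); // null
```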
4 .github/actions/codex/src/fail.ts vendored Normal file
@@ -0,0 +1,4 @@
export function fail(message: string): never {
  console.error(message);
  process.exit(1);
}
149 .github/actions/codex/src/git-helpers.ts vendored Normal file
@@ -0,0 +1,149 @@
import { spawnSync } from "child_process";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";

function runGit(args: string[], silent = true): string {
  console.info(`Running git ${args.join(" ")}`);
  const res = spawnSync("git", args, {
    encoding: "utf8",
    stdio: silent ? ["ignore", "pipe", "pipe"] : "inherit",
  });
  if (res.error) {
    throw res.error;
  }
  if (res.status !== 0) {
    // Return stderr so caller may handle; else throw.
    throw new Error(
      `git ${args.join(" ")} failed with code ${res.status}: ${res.stderr}`,
    );
  }
  return res.stdout.trim();
}

function stageAllChanges() {
  runGit(["add", "-A"]);
}

function hasStagedChanges(): boolean {
  const res = spawnSync("git", ["diff", "--cached", "--quiet", "--exit-code"]);
  return res.status !== 0;
}

function ensureOnBranch(
  issueNumber: number,
  protectedBranches: string[],
  suggestedSlug?: string,
): string {
  let branch = "";
  try {
    branch = runGit(["symbolic-ref", "--short", "-q", "HEAD"]);
  } catch {
    branch = "";
  }

  // If detached HEAD or on a protected branch, create a new branch.
  if (!branch || protectedBranches.includes(branch)) {
    if (suggestedSlug) {
      const safeSlug = suggestedSlug
        .toLowerCase()
        .replace(/[^\w\s-]/g, "")
        .trim()
        .replace(/\s+/g, "-");
      branch = `codex-fix-${issueNumber}-${safeSlug}`;
    } else {
      branch = `codex-fix-${issueNumber}-${Date.now()}`;
    }
    runGit(["switch", "-c", branch]);
  }
  return branch;
}

function commitIfNeeded(issueNumber: number) {
  if (hasStagedChanges()) {
    runGit([
      "commit",
      "-m",
      `fix: automated fix for #${issueNumber} via Codex`,
    ]);
  }
}

function pushBranch(branch: string, githubToken: string, ctx: EnvContext) {
  const repoSlug = ctx.get("GITHUB_REPOSITORY"); // owner/repo
  const remoteUrl = `https://x-access-token:${githubToken}@github.com/${repoSlug}.git`;

  runGit(["push", "--force-with-lease", "-u", remoteUrl, `HEAD:${branch}`]);
}

/**
 * If this returns a string, it is the URL of the created PR.
 */
export async function maybePublishPRForIssue(
  issueNumber: number,
  lastMessage: string,
  ctx: EnvContext,
): Promise<string | undefined> {
  // Only proceed if GITHUB_TOKEN available.
  const githubToken =
    ctx.tryGetNonEmpty("GITHUB_TOKEN") ?? ctx.tryGetNonEmpty("GH_TOKEN");
  if (!githubToken) {
    console.warn("No GitHub token - skipping PR creation.");
    return undefined;
  }

  // Print `git status` for debugging.
  runGit(["status"]);

  // Stage any remaining changes so they can be committed and pushed.
  stageAllChanges();

  const octokit = ctx.getOctokit(githubToken);

  const { owner, repo } = github.context.repo;

  // Determine default branch to treat as protected.
  let defaultBranch = "main";
  try {
    const repoInfo = await octokit.rest.repos.get({ owner, repo });
    defaultBranch = repoInfo.data.default_branch ?? "main";
  } catch (e) {
    console.warn(`Failed to get default branch, assuming 'main': ${e}`);
  }

  const sanitizedMessage = lastMessage.replace(/\u2022/g, "-");
  const [summaryLine] = sanitizedMessage.split(/\r?\n/);
  const branch = ensureOnBranch(
    issueNumber,
    [defaultBranch, "master"],
    summaryLine,
  );
  commitIfNeeded(issueNumber);
  pushBranch(branch, githubToken, ctx);

  // Try to find existing PR for this branch
  const headParam = `${owner}:${branch}`;
  const existing = await octokit.rest.pulls.list({
    owner,
    repo,
    head: headParam,
    state: "open",
  });
  if (existing.data.length > 0) {
    return existing.data[0].html_url;
  }

  // Determine base branch (default to main)
  let baseBranch = "main";
  try {
    const repoInfo = await octokit.rest.repos.get({ owner, repo });
    baseBranch = repoInfo.data.default_branch ?? "main";
  } catch (e) {
    console.warn(`Failed to get default branch, assuming 'main': ${e}`);
  }

  const pr = await octokit.rest.pulls.create({
    owner,
    repo,
    title: summaryLine,
    head: branch,
    base: baseBranch,
    body: sanitizedMessage,
  });
  return pr.data.html_url;
}
16 .github/actions/codex/src/git-user.ts vendored Normal file
@@ -0,0 +1,16 @@
export function setGitHubActionsUser(): void {
  const commands = [
    ["git", "config", "--global", "user.name", "github-actions[bot]"],
    [
      "git",
      "config",
      "--global",
      "user.email",
      "41898282+github-actions[bot]@users.noreply.github.com",
    ],
  ];

  for (const command of commands) {
    Bun.spawnSync(command);
  }
}
11 .github/actions/codex/src/github-workspace.ts vendored Normal file
@@ -0,0 +1,11 @@
import * as pathMod from "path";
import { EnvContext } from "./env-context";

export function resolveWorkspacePath(path: string, ctx: EnvContext): string {
  if (pathMod.isAbsolute(path)) {
    return path;
  } else {
    const workspace = ctx.get("GITHUB_WORKSPACE");
    return pathMod.join(workspace, path);
  }
}
56 .github/actions/codex/src/load-config.ts vendored Normal file
@@ -0,0 +1,56 @@
import type { Config, LabelConfig } from "./config";

import { getDefaultConfig } from "./default-label-config";
import { readFileSync, readdirSync, statSync } from "fs";
import * as path from "path";

/**
 * Build an in-memory configuration object by scanning the repository for
 * Markdown templates located in `.github/codex/labels`.
 *
 * Each `*.md` file in that directory represents a label that can trigger the
 * Codex GitHub Action. The filename **without** the extension is interpreted
 * as the label name, e.g. `codex-review.md` ➜ `codex-review`.
 *
 * For every such label we derive the corresponding `doneLabel` by appending
 * the suffix `-completed`.
 */
export function loadConfig(workspace: string): Config {
  const labelsDir = path.join(workspace, ".github", "codex", "labels");

  let entries: string[];
  try {
    entries = readdirSync(labelsDir);
  } catch {
    // If the directory is missing, return the default configuration.
    return getDefaultConfig();
  }

  const labels: Record<string, LabelConfig> = {};

  for (const entry of entries) {
    if (!entry.endsWith(".md")) {
      continue;
    }

    const fullPath = path.join(labelsDir, entry);

    if (!statSync(fullPath).isFile()) {
      continue;
    }

    const labelName = entry.slice(0, -3); // trim ".md"

    labels[labelName] = new FileLabelConfig(fullPath);
  }

  return { labels };
}

class FileLabelConfig implements LabelConfig {
  constructor(private readonly promptPath: string) {}

  getPromptTemplate(): string {
    return readFileSync(this.promptPath, "utf8");
  }
}
80 .github/actions/codex/src/main.ts vendored Executable file
@@ -0,0 +1,80 @@
#!/usr/bin/env bun

import type { Config } from "./config";

import { defaultContext, EnvContext } from "./env-context";
import { loadConfig } from "./load-config";
import { setGitHubActionsUser } from "./git-user";
import { onLabeled } from "./process-label";
import { ensureBaseAndHeadCommitsForPRAreAvailable } from "./prompt-template";
import { performAdditionalValidation } from "./verify-inputs";
import { onComment } from "./comment";
import { onReview } from "./review";

async function main(): Promise<void> {
  const ctx: EnvContext = defaultContext;

  // Build the configuration dynamically by scanning `.github/codex/labels`.
  const GITHUB_WORKSPACE = ctx.get("GITHUB_WORKSPACE");
  const config: Config = loadConfig(GITHUB_WORKSPACE);

  // Optionally perform additional validation of prompt template files.
  performAdditionalValidation(config, GITHUB_WORKSPACE);

  const GITHUB_EVENT_NAME = ctx.get("GITHUB_EVENT_NAME");
  const GITHUB_EVENT_ACTION = ctx.get("GITHUB_EVENT_ACTION");

  // Set user.name and user.email to a bot before Codex runs, just in case it
  // creates a commit.
  setGitHubActionsUser();

  switch (GITHUB_EVENT_NAME) {
    case "issues": {
      if (GITHUB_EVENT_ACTION === "labeled") {
        await onLabeled(config, ctx);
        return;
      } else if (GITHUB_EVENT_ACTION === "opened") {
        await onComment(ctx);
        return;
      }
      break;
    }
    case "issue_comment": {
      if (GITHUB_EVENT_ACTION === "created") {
        await onComment(ctx);
        return;
      }
      break;
    }
    case "pull_request": {
      if (GITHUB_EVENT_ACTION === "labeled") {
        await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
        await onLabeled(config, ctx);
        return;
      }
      break;
    }
    case "pull_request_review": {
      await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
      if (GITHUB_EVENT_ACTION === "submitted") {
        await onReview(ctx);
        return;
      }
      break;
    }
    case "pull_request_review_comment": {
      await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
      if (GITHUB_EVENT_ACTION === "created") {
        await onComment(ctx);
        return;
      }
      break;
    }
  }

  console.warn(
    `Unsupported action '${GITHUB_EVENT_ACTION}' for event '${GITHUB_EVENT_NAME}'.`,
  );
}

main();
62 .github/actions/codex/src/post-comment.ts vendored Normal file
@@ -0,0 +1,62 @@
import { fail } from "./fail";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";

/**
 * Post a comment to the issue / pull request currently in scope.
 *
 * Provide the environment context so that token lookup (inside getOctokit) does
 * not rely on global state.
 */
export async function postComment(
  commentBody: string,
  ctx: EnvContext,
): Promise<void> {
  // Append a footer with a link back to the workflow run, if available.
  const footer = buildWorkflowRunFooter(ctx);
  const bodyWithFooter = footer ? `${commentBody}${footer}` : commentBody;

  const octokit = ctx.getOctokit();
  console.info("Got Octokit instance for posting comment");
  const { owner, repo } = github.context.repo;
  const issueNumber = github.context.issue.number;

  if (!issueNumber) {
    console.warn(
      "No issue or pull_request number found in GitHub context; skipping comment creation.",
    );
    return;
  }

  try {
    console.info("Calling octokit.rest.issues.createComment()");
    await octokit.rest.issues.createComment({
      owner,
      repo,
      issue_number: issueNumber,
      body: bodyWithFooter,
    });
  } catch (error) {
    fail(`Failed to create comment via GitHub API: ${error}`);
  }
}

/**
 * Helper to build a Markdown fragment linking back to the workflow run that
 * generated the current comment. Returns `undefined` if required environment
 * variables are missing – e.g. when running outside of GitHub Actions – so we
 * can gracefully skip the footer in those cases.
 */
function buildWorkflowRunFooter(ctx: EnvContext): string | undefined {
  const serverUrl =
    ctx.tryGetNonEmpty("GITHUB_SERVER_URL") ?? "https://github.com";
  const repository = ctx.tryGetNonEmpty("GITHUB_REPOSITORY");
  const runId = ctx.tryGetNonEmpty("GITHUB_RUN_ID");

  if (!repository || !runId) {
    return undefined;
  }

  const url = `${serverUrl}/${repository}/actions/runs/${runId}`;
  return `\n\n---\n*[_View workflow run_](${url})*`;
}
195 .github/actions/codex/src/process-label.ts vendored Normal file
@@ -0,0 +1,195 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { renderPromptTemplate } from "./prompt-template";

import { postComment } from "./post-comment";
import { runCodex } from "./run-codex";

import * as github from "@actions/github";
import { Config, LabelConfig } from "./config";
import { maybePublishPRForIssue } from "./git-helpers";

export async function onLabeled(
  config: Config,
  ctx: EnvContext,
): Promise<void> {
  const GITHUB_EVENT_LABEL_NAME = ctx.get("GITHUB_EVENT_LABEL_NAME");
  const labelConfig = config.labels[GITHUB_EVENT_LABEL_NAME] as
    | LabelConfig
    | undefined;
  if (!labelConfig) {
    fail(
      `Label \`${GITHUB_EVENT_LABEL_NAME}\` not found in config: ${JSON.stringify(config)}`,
    );
  }

  await processLabelConfig(ctx, GITHUB_EVENT_LABEL_NAME, labelConfig);
}

/**
 * Wrapper that handles `-in-progress` and `-completed` semantics around the core lint/fix/review
 * processing. It will:
 *
 * - Skip execution if the `-in-progress` or `-completed` label is already present.
 * - Mark the PR/issue as `-in-progress`.
 * - After successful execution, mark the PR/issue as `-completed`.
 */
async function processLabelConfig(
  ctx: EnvContext,
  label: string,
  labelConfig: LabelConfig,
): Promise<void> {
  const octokit = ctx.getOctokit();
  const { owner, repo, issueNumber, labelNames } =
    await getCurrentLabels(octokit);

  const inProgressLabel = `${label}-in-progress`;
  const completedLabel = `${label}-completed`;
  for (const markerLabel of [inProgressLabel, completedLabel]) {
    if (labelNames.includes(markerLabel)) {
      console.log(
        `Label '${markerLabel}' already present on issue/PR #${issueNumber}. Skipping Codex action.`,
      );

      // Clean up: remove the triggering label to avoid confusion and re-runs.
      await addAndRemoveLabels(octokit, {
        owner,
        repo,
        issueNumber,
        remove: markerLabel,
      });

      return;
    }
  }

  // Mark the PR/issue as in progress.
  await addAndRemoveLabels(octokit, {
    owner,
    repo,
    issueNumber,
    add: inProgressLabel,
    remove: label,
  });

  // Run the core Codex processing.
  await processLabel(ctx, label, labelConfig);

  // Mark the PR/issue as completed.
  await addAndRemoveLabels(octokit, {
    owner,
    repo,
    issueNumber,
    add: completedLabel,
    remove: inProgressLabel,
  });
}

async function processLabel(
  ctx: EnvContext,
  label: string,
  labelConfig: LabelConfig,
): Promise<void> {
  const template = labelConfig.getPromptTemplate();
  const populatedTemplate = await renderPromptTemplate(template, ctx);

  // Always run Codex and post the resulting message as a comment.
  let commentBody = await runCodex(populatedTemplate, ctx);

  // Current heuristic: only try to create a PR if "attempt" or "fix" is in the
  // label name. (Yes, we plan to evolve this.)
  if (label.indexOf("fix") !== -1 || label.indexOf("attempt") !== -1) {
    console.info(`label ${label} indicates we should attempt to create a PR`);
    const prUrl = await maybeFixIssue(ctx, commentBody);
    if (prUrl) {
      commentBody += `\n\n---\nOpened pull request: ${prUrl}`;
    }
  } else {
    console.info(
      `label ${label} does not indicate we should attempt to create a PR`,
    );
  }

  await postComment(commentBody, ctx);
}

async function maybeFixIssue(
  ctx: EnvContext,
  lastMessage: string,
): Promise<string | undefined> {
  // Attempt to create a PR out of any changes Codex produced.
  const issueNumber = github.context.issue.number!; // exists for issues triggering this path
  try {
    return await maybePublishPRForIssue(issueNumber, lastMessage, ctx);
  } catch (e) {
    console.warn(`Failed to publish PR: ${e}`);
  }
}

async function getCurrentLabels(
  octokit: ReturnType<typeof github.getOctokit>,
): Promise<{
  owner: string;
  repo: string;
  issueNumber: number;
  labelNames: Array<string>;
}> {
  const { owner, repo } = github.context.repo;
  const issueNumber = github.context.issue.number;

  if (!issueNumber) {
    fail("No issue or pull_request number found in GitHub context.");
  }

  const { data: issueData } = await octokit.rest.issues.get({
    owner,
    repo,
    issue_number: issueNumber,
  });

  const labelNames =
    issueData.labels?.map((label: any) =>
      typeof label === "string" ? label : label.name,
    ) ?? [];

  return { owner, repo, issueNumber, labelNames };
}

async function addAndRemoveLabels(
  octokit: ReturnType<typeof github.getOctokit>,
  opts: {
    owner: string;
    repo: string;
    issueNumber: number;
    add?: string;
    remove?: string;
  },
): Promise<void> {
  const { owner, repo, issueNumber, add, remove } = opts;

  if (add) {
    try {
      await octokit.rest.issues.addLabels({
        owner,
        repo,
        issue_number: issueNumber,
        labels: [add],
      });
    } catch (error) {
      console.warn(`Failed to add label '${add}': ${error}`);
    }
  }

  if (remove) {
    try {
      await octokit.rest.issues.removeLabel({
        owner,
        repo,
        issue_number: issueNumber,
        name: remove,
      });
    } catch (error) {
      console.warn(`Failed to remove label '${remove}': ${error}`);
    }
  }
}
284
.github/actions/codex/src/prompt-template.ts
vendored
Normal file
@@ -0,0 +1,284 @@
/*
 * Utilities to render Codex prompt templates.
 *
 * A template is a Markdown (or plain-text) file that may contain one or more
 * placeholders of the form `{CODEX_ACTION_<NAME>}`. At runtime these
 * placeholders are substituted with dynamically generated content. Each
 * placeholder is resolved **exactly once** even if it appears multiple times
 * in the same template.
 */
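//
// For example, a template like
//   "Summarize {CODEX_ACTION_ISSUE_TITLE}\n\n{CODEX_ACTION_ISSUE_BODY}"
// would render with the triggering issue's title and body substituted, each
// placeholder resolved a single time even if repeated.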

import { readFile } from "fs/promises";

import { EnvContext } from "./env-context";

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/**
 * Lazily caches parsed `$GITHUB_EVENT_PATH` contents keyed by the file path so
 * we only hit the filesystem once per unique event payload.
 */
const githubEventDataCache: Map<string, Promise<any>> = new Map();

function getGitHubEventData(ctx: EnvContext): Promise<any> {
  const eventPath = ctx.get("GITHUB_EVENT_PATH");
  let cached = githubEventDataCache.get(eventPath);
  if (!cached) {
    cached = readFile(eventPath, "utf8").then((raw) => JSON.parse(raw));
    githubEventDataCache.set(eventPath, cached);
  }
  return cached;
}

async function runCommand(args: Array<string>): Promise<string> {
  const result = Bun.spawnSync(args, {
    stdout: "pipe",
    stderr: "pipe",
  });

  if (result.success) {
    return result.stdout.toString();
  }

  console.error(`Error running ${JSON.stringify(args)}: ${result.stderr}`);
  return "";
}

// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------

// Regex that captures the variable name without the surrounding { } braces.
const VAR_REGEX = /\{(CODEX_ACTION_[A-Z0-9_]+)\}/g;

// Cache individual placeholder values so each one is resolved at most once per
// process even if many templates reference it.
const placeholderCache: Map<string, Promise<string>> = new Map();

/**
 * Parse a template string, resolve all placeholders and return the rendered
 * result.
 */
export async function renderPromptTemplate(
  template: string,
  ctx: EnvContext,
): Promise<string> {
  // ---------------------------------------------------------------------
  // 1) Gather all *unique* placeholders present in the template.
  // ---------------------------------------------------------------------
  const variables = new Set<string>();
  for (const match of template.matchAll(VAR_REGEX)) {
    variables.add(match[1]);
  }

  // ---------------------------------------------------------------------
  // 2) Kick off (or reuse) async resolution for each variable.
  // ---------------------------------------------------------------------
  for (const variable of variables) {
    if (!placeholderCache.has(variable)) {
      placeholderCache.set(variable, resolveVariable(variable, ctx));
    }
  }

  // ---------------------------------------------------------------------
  // 3) Await completion so we can perform a simple synchronous replace below.
  // ---------------------------------------------------------------------
  const resolvedEntries: [string, string][] = [];
  for (const [key, promise] of placeholderCache.entries()) {
    resolvedEntries.push([key, await promise]);
  }
  const resolvedMap = new Map<string, string>(resolvedEntries);

  // ---------------------------------------------------------------------
  // 4) Replace each occurrence. We use replace with a callback to ensure
  //    correct substitution even if variable names overlap (they shouldn't,
  //    but better safe than sorry).
  // ---------------------------------------------------------------------
  return template.replace(VAR_REGEX, (_, varName: string) => {
    return resolvedMap.get(varName) ?? "";
  });
}

export async function ensureBaseAndHeadCommitsForPRAreAvailable(
  ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
  const prShas = await getPrShas(ctx);
  if (prShas == null) {
    console.warn("Unable to resolve PR branches");
    return null;
  }

  const event = await getGitHubEventData(ctx);
  const pr = event.pull_request;
  if (!pr) {
    console.warn("event.pull_request is not defined - unexpected");
    return null;
  }

  const workspace = ctx.get("GITHUB_WORKSPACE");

  // Refs (branch names)
  const baseRef: string | undefined = pr.base?.ref;
  const headRef: string | undefined = pr.head?.ref;

  // Clone URLs
  const baseRemoteUrl: string | undefined = pr.base?.repo?.clone_url;
  const headRemoteUrl: string | undefined = pr.head?.repo?.clone_url;

  if (!baseRef || !headRef || !baseRemoteUrl || !headRemoteUrl) {
    console.warn(
      "Missing PR ref or remote URL information - cannot fetch commits",
    );
    return null;
  }

  // Ensure we have the base branch.
  await runCommand([
    "git",
    "-C",
    workspace,
    "fetch",
    "--no-tags",
    "origin",
    baseRef,
  ]);

  // Ensure we have the head branch.
  if (headRemoteUrl === baseRemoteUrl) {
    // Same repository – the commit is available from `origin`.
    await runCommand([
      "git",
      "-C",
      workspace,
      "fetch",
      "--no-tags",
      "origin",
      headRef,
    ]);
  } else {
    // Fork – make sure a `pr` remote exists that points at the fork. Attempting
    // to add a remote that already exists causes git to error, so we swallow
    // any non-zero exit codes from that specific command.
    await runCommand([
      "git",
      "-C",
      workspace,
      "remote",
      "add",
      "pr",
      headRemoteUrl,
    ]);

    // Whether adding succeeded or the remote already existed, attempt to fetch
    // the head ref from the `pr` remote.
    await runCommand([
      "git",
      "-C",
      workspace,
      "fetch",
      "--no-tags",
      "pr",
      headRef,
    ]);
  }

  return prShas;
}

// ---------------------------------------------------------------------------
// Internal helpers – still exported for use by other modules.
// ---------------------------------------------------------------------------

export async function resolvePrDiff(ctx: EnvContext): Promise<string> {
  const prShas = await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
  if (prShas == null) {
    console.warn("Unable to resolve PR branches");
    return "";
  }

  const workspace = ctx.get("GITHUB_WORKSPACE");
  const { baseSha, headSha } = prShas;
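  // Note: `baseSha..headSha` compares the two commits directly; a three-dot
  // range (`...`) would instead diff against their merge base.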
  return runCommand([
    "git",
    "-C",
    workspace,
    "diff",
    "--color=never",
    `${baseSha}..${headSha}`,
  ]);
}

// ---------------------------------------------------------------------------
// Placeholder resolution
// ---------------------------------------------------------------------------

async function resolveVariable(name: string, ctx: EnvContext): Promise<string> {
  switch (name) {
    case "CODEX_ACTION_ISSUE_TITLE": {
      const event = await getGitHubEventData(ctx);
      const issue = event.issue ?? event.pull_request;
      return issue?.title ?? "";
    }

    case "CODEX_ACTION_ISSUE_BODY": {
      const event = await getGitHubEventData(ctx);
      const issue = event.issue ?? event.pull_request;
      return issue?.body ?? "";
    }

    case "CODEX_ACTION_GITHUB_EVENT_PATH": {
      return ctx.get("GITHUB_EVENT_PATH");
    }

    case "CODEX_ACTION_BASE_REF": {
      const event = await getGitHubEventData(ctx);
      return event?.pull_request?.base?.ref ?? "";
    }

    case "CODEX_ACTION_HEAD_REF": {
      const event = await getGitHubEventData(ctx);
      return event?.pull_request?.head?.ref ?? "";
    }

    case "CODEX_ACTION_PR_DIFF": {
      return resolvePrDiff(ctx);
    }

    // -------------------------------------------------------------------
    // Add new template variables here.
    // -------------------------------------------------------------------
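    // (For example, a hypothetical CODEX_ACTION_REPO_FULL_NAME variable could
    // simply return ctx.get("GITHUB_REPOSITORY").)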

    default: {
      // Unknown variable – leave it blank to avoid leaking placeholders to the
      // final prompt. The alternative would be to `fail()` here, but silently
      // ignoring unknown placeholders is more forgiving and better matches the
      // behaviour of typical template engines.
      console.warn(`Unknown template variable: ${name}`);
      return "";
    }
  }
}

async function getPrShas(
  ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
  const event = await getGitHubEventData(ctx);
  const pr = event.pull_request;
  if (!pr) {
    console.warn("event.pull_request is not defined");
    return null;
  }

  // Prefer explicit SHAs if available to avoid relying on local branch names.
  const baseSha: string | undefined = pr.base?.sha;
  const headSha: string | undefined = pr.head?.sha;

  if (!baseSha || !headSha) {
    console.warn("one of base or head is not defined on event.pull_request");
    return null;
  }

  return { baseSha, headSha };
}
42
.github/actions/codex/src/review.ts
vendored
Normal file
@@ -0,0 +1,42 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";

/**
 * Handle `pull_request_review` events. We treat the review body the same way
 * as a normal comment.
 */
export async function onReview(ctx: EnvContext): Promise<void> {
  const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
  if (!triggerPhrase) {
    console.warn("Empty trigger phrase: skipping.");
    return;
  }

  const reviewBody = ctx.tryGet("GITHUB_EVENT_REVIEW_BODY");

  if (!reviewBody) {
    console.warn("Review body not found in environment: skipping.");
    return;
  }

  if (!reviewBody.includes(triggerPhrase)) {
    console.log(
      `Trigger phrase '${triggerPhrase}' not found: nothing to do for this review.`,
    );
    return;
  }

  const prompt = reviewBody.replace(triggerPhrase, "").trim();

  if (prompt.length === 0) {
    console.warn("Prompt is empty after removing trigger phrase: skipping.");
    return;
  }

  await addEyesReaction(ctx);

  const lastMessage = await runCodex(prompt, ctx);
  await postComment(lastMessage, ctx);
}
56
.github/actions/codex/src/run-codex.ts
vendored
Normal file
@@ -0,0 +1,56 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { tmpdir } from "os";
import { join } from "node:path";
import { readFile, mkdtemp } from "fs/promises";
import { resolveWorkspacePath } from "./github-workspace";

/**
 * Runs the Codex CLI with the provided prompt and returns the output written
 * to the "last message" file.
 */
export async function runCodex(
  prompt: string,
  ctx: EnvContext,
): Promise<string> {
  const OPENAI_API_KEY = ctx.get("OPENAI_API_KEY");

  const tempDirPath = await mkdtemp(join(tmpdir(), "codex-"));
  const lastMessageOutput = join(tempDirPath, "codex-prompt.md");

  const args = ["/usr/local/bin/codex-exec"];

  const inputCodexArgs = ctx.tryGet("INPUT_CODEX_ARGS")?.trim();
  if (inputCodexArgs) {
    args.push(...inputCodexArgs.split(/\s+/));
  }

  args.push("--output-last-message", lastMessageOutput, prompt);
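
  // For illustration: with a hypothetical INPUT_CODEX_ARGS of "--full-auto",
  // args is now ["/usr/local/bin/codex-exec", "--full-auto",
  // "--output-last-message", <lastMessageOutput>, <prompt>].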

  const env: Record<string, string> = { ...process.env, OPENAI_API_KEY };
  const INPUT_CODEX_HOME = ctx.tryGet("INPUT_CODEX_HOME");
  if (INPUT_CODEX_HOME) {
    env.CODEX_HOME = resolveWorkspacePath(INPUT_CODEX_HOME, ctx);
  }

  console.log(`Running Codex: ${JSON.stringify(args)}`);
  const result = Bun.spawnSync(args, {
    stdout: "inherit",
    stderr: "inherit",
    env,
  });

  if (!result.success) {
    fail(`Codex failed: see above for details.`);
  }

  // Read the output generated by Codex.
  let lastMessage: string;
  try {
    lastMessage = await readFile(lastMessageOutput, "utf8");
  } catch (err) {
    fail(`Failed to read Codex output at '${lastMessageOutput}': ${err}`);
  }

  return lastMessage;
}
33
.github/actions/codex/src/verify-inputs.ts
vendored
Normal file
@@ -0,0 +1,33 @@
// Validate the inputs passed to the composite action.
// The script currently ensures that the provided configuration file exists and
// matches the expected schema.

import type { Config } from "./config";

import { existsSync } from "fs";
import * as path from "path";
import { fail } from "./fail";

export function performAdditionalValidation(config: Config, workspace: string) {
  // Additional validation: ensure referenced prompt files exist and are Markdown.
  for (const [label, details] of Object.entries(config.labels)) {
    // Determine which prompt key is present (the schema guarantees exactly one).
    const promptPathStr =
      (details as any).prompt ?? (details as any).promptPath;

    if (promptPathStr) {
      const promptPath = path.isAbsolute(promptPathStr)
        ? promptPathStr
        : path.join(workspace, promptPathStr);

      if (!existsSync(promptPath)) {
        fail(`Prompt file for label '${label}' not found: ${promptPath}`);
      }
      if (!promptPath.endsWith(".md")) {
        fail(
          `Prompt file for label '${label}' must be a .md file (got ${promptPathStr}).`,
        );
      }
    }
  }
}
15
.github/actions/codex/tsconfig.json
vendored
Normal file
@@ -0,0 +1,15 @@
{
  "compilerOptions": {
    "lib": ["ESNext"],
    "target": "ESNext",
    "module": "ESNext",
    "moduleDetection": "force",
    "moduleResolution": "bundler",

    "noEmit": true,
    "strict": true,
    "skipLibCheck": true
  },

  "include": ["src"]
}
44
.github/actions/linux-code-sign/action.yml
vendored
@@ -1,44 +0,0 @@
name: linux-code-sign
description: Sign Linux artifacts with cosign.
inputs:
  target:
    description: Target triple for the artifacts to sign.
    required: true
  artifacts-dir:
    description: Absolute path to the directory containing built binaries to sign.
    required: true

runs:
  using: composite
  steps:
    - name: Install cosign
      uses: sigstore/cosign-installer@v3.7.0

    - name: Cosign Linux artifacts
      shell: bash
      env:
        COSIGN_EXPERIMENTAL: "1"
        COSIGN_YES: "true"
        COSIGN_OIDC_CLIENT_ID: "sigstore"
        COSIGN_OIDC_ISSUER: "https://oauth2.sigstore.dev/auth"
      run: |
        set -euo pipefail

        dest="${{ inputs.artifacts-dir }}"
        if [[ ! -d "$dest" ]]; then
          echo "Destination $dest does not exist"
          exit 1
        fi

        for binary in codex codex-responses-api-proxy; do
          artifact="${dest}/${binary}"
          if [[ ! -f "$artifact" ]]; then
            echo "Binary $artifact not found"
            exit 1
          fi

          cosign sign-blob \
            --yes \
            --bundle "${artifact}.sigstore" \
            "$artifact"
        done
246
.github/actions/macos-code-sign/action.yml
vendored
@@ -1,246 +0,0 @@
name: macos-code-sign
description: Configure, sign, notarize, and clean up macOS code signing artifacts.
inputs:
  target:
    description: Rust compilation target triple (e.g. aarch64-apple-darwin).
    required: true
  sign-binaries:
    description: Whether to sign and notarize the macOS binaries.
    required: false
    default: "true"
  sign-dmg:
    description: Whether to sign and notarize the macOS dmg.
    required: false
    default: "true"
  apple-certificate:
    description: Base64-encoded Apple signing certificate (P12).
    required: true
  apple-certificate-password:
    description: Password for the signing certificate.
    required: true
  apple-notarization-key-p8:
    description: Base64-encoded Apple notarization key (P8).
    required: true
  apple-notarization-key-id:
    description: Apple notarization key ID.
    required: true
  apple-notarization-issuer-id:
    description: Apple notarization issuer ID.
    required: true
runs:
  using: composite
  steps:
    - name: Configure Apple code signing
      shell: bash
      env:
        KEYCHAIN_PASSWORD: actions
        APPLE_CERTIFICATE: ${{ inputs.apple-certificate }}
        APPLE_CERTIFICATE_PASSWORD: ${{ inputs.apple-certificate-password }}
      run: |
        set -euo pipefail

        if [[ -z "${APPLE_CERTIFICATE:-}" ]]; then
          echo "APPLE_CERTIFICATE is required for macOS signing"
          exit 1
        fi

        if [[ -z "${APPLE_CERTIFICATE_PASSWORD:-}" ]]; then
          echo "APPLE_CERTIFICATE_PASSWORD is required for macOS signing"
          exit 1
        fi

        cert_path="${RUNNER_TEMP}/apple_signing_certificate.p12"
        echo "$APPLE_CERTIFICATE" | base64 -d > "$cert_path"

        keychain_path="${RUNNER_TEMP}/codex-signing.keychain-db"
        security create-keychain -p "$KEYCHAIN_PASSWORD" "$keychain_path"
        security set-keychain-settings -lut 21600 "$keychain_path"
        security unlock-keychain -p "$KEYCHAIN_PASSWORD" "$keychain_path"

        keychain_args=()
        cleanup_keychain() {
          if ((${#keychain_args[@]} > 0)); then
            security list-keychains -s "${keychain_args[@]}" || true
            security default-keychain -s "${keychain_args[0]}" || true
          else
            security list-keychains -s || true
          fi
          if [[ -f "$keychain_path" ]]; then
            security delete-keychain "$keychain_path" || true
          fi
        }

        while IFS= read -r keychain; do
          [[ -n "$keychain" ]] && keychain_args+=("$keychain")
        done < <(security list-keychains | sed 's/^[[:space:]]*//;s/[[:space:]]*$//;s/"//g')

        if ((${#keychain_args[@]} > 0)); then
          security list-keychains -s "$keychain_path" "${keychain_args[@]}"
        else
          security list-keychains -s "$keychain_path"
        fi

        security default-keychain -s "$keychain_path"
        security import "$cert_path" -k "$keychain_path" -P "$APPLE_CERTIFICATE_PASSWORD" -T /usr/bin/codesign -T /usr/bin/security
        security set-key-partition-list -S apple-tool:,apple: -s -k "$KEYCHAIN_PASSWORD" "$keychain_path" > /dev/null

        codesign_hashes=()
        while IFS= read -r hash; do
          [[ -n "$hash" ]] && codesign_hashes+=("$hash")
        done < <(security find-identity -v -p codesigning "$keychain_path" \
          | sed -n 's/.*\([0-9A-F]\{40\}\).*/\1/p' \
          | sort -u)

        if ((${#codesign_hashes[@]} == 0)); then
          echo "No signing identities found in $keychain_path"
          cleanup_keychain
          rm -f "$cert_path"
          exit 1
        fi

        if ((${#codesign_hashes[@]} > 1)); then
          echo "Multiple signing identities found in $keychain_path:"
          printf ' %s\n' "${codesign_hashes[@]}"
          cleanup_keychain
          rm -f "$cert_path"
          exit 1
        fi

        APPLE_CODESIGN_IDENTITY="${codesign_hashes[0]}"

        rm -f "$cert_path"

        echo "APPLE_CODESIGN_IDENTITY=$APPLE_CODESIGN_IDENTITY" >> "$GITHUB_ENV"
        echo "APPLE_CODESIGN_KEYCHAIN=$keychain_path" >> "$GITHUB_ENV"
        echo "::add-mask::$APPLE_CODESIGN_IDENTITY"

    - name: Sign macOS binaries
      if: ${{ inputs.sign-binaries == 'true' }}
      shell: bash
      run: |
        set -euo pipefail

        if [[ -z "${APPLE_CODESIGN_IDENTITY:-}" ]]; then
          echo "APPLE_CODESIGN_IDENTITY is required for macOS signing"
          exit 1
        fi

        keychain_args=()
        if [[ -n "${APPLE_CODESIGN_KEYCHAIN:-}" && -f "${APPLE_CODESIGN_KEYCHAIN}" ]]; then
          keychain_args+=(--keychain "${APPLE_CODESIGN_KEYCHAIN}")
        fi

        for binary in codex codex-responses-api-proxy; do
          path="codex-rs/target/${{ inputs.target }}/release/${binary}"
          codesign --force --options runtime --timestamp --sign "$APPLE_CODESIGN_IDENTITY" "${keychain_args[@]}" "$path"
        done

    - name: Notarize macOS binaries
      if: ${{ inputs.sign-binaries == 'true' }}
      shell: bash
      env:
        APPLE_NOTARIZATION_KEY_P8: ${{ inputs.apple-notarization-key-p8 }}
        APPLE_NOTARIZATION_KEY_ID: ${{ inputs.apple-notarization-key-id }}
        APPLE_NOTARIZATION_ISSUER_ID: ${{ inputs.apple-notarization-issuer-id }}
      run: |
        set -euo pipefail

        for var in APPLE_NOTARIZATION_KEY_P8 APPLE_NOTARIZATION_KEY_ID APPLE_NOTARIZATION_ISSUER_ID; do
          if [[ -z "${!var:-}" ]]; then
            echo "$var is required for notarization"
            exit 1
          fi
        done

        notary_key_path="${RUNNER_TEMP}/notarytool.key.p8"
        echo "$APPLE_NOTARIZATION_KEY_P8" | base64 -d > "$notary_key_path"
        cleanup_notary() {
          rm -f "$notary_key_path"
        }
        trap cleanup_notary EXIT

        source "$GITHUB_ACTION_PATH/notary_helpers.sh"

        notarize_binary() {
          local binary="$1"
          local source_path="codex-rs/target/${{ inputs.target }}/release/${binary}"
          local archive_path="${RUNNER_TEMP}/${binary}.zip"

          if [[ ! -f "$source_path" ]]; then
            echo "Binary $source_path not found"
            exit 1
          fi

          rm -f "$archive_path"
          ditto -c -k --keepParent "$source_path" "$archive_path"

          notarize_submission "$binary" "$archive_path" "$notary_key_path"
        }

        notarize_binary "codex"
        notarize_binary "codex-responses-api-proxy"

    - name: Sign and notarize macOS dmg
      if: ${{ inputs.sign-dmg == 'true' }}
      shell: bash
      env:
        APPLE_NOTARIZATION_KEY_P8: ${{ inputs.apple-notarization-key-p8 }}
        APPLE_NOTARIZATION_KEY_ID: ${{ inputs.apple-notarization-key-id }}
        APPLE_NOTARIZATION_ISSUER_ID: ${{ inputs.apple-notarization-issuer-id }}
      run: |
        set -euo pipefail

        for var in APPLE_CODESIGN_IDENTITY APPLE_NOTARIZATION_KEY_P8 APPLE_NOTARIZATION_KEY_ID APPLE_NOTARIZATION_ISSUER_ID; do
          if [[ -z "${!var:-}" ]]; then
            echo "$var is required"
            exit 1
          fi
        done

        notary_key_path="${RUNNER_TEMP}/notarytool.key.p8"
        echo "$APPLE_NOTARIZATION_KEY_P8" | base64 -d > "$notary_key_path"
        cleanup_notary() {
          rm -f "$notary_key_path"
        }
        trap cleanup_notary EXIT

        source "$GITHUB_ACTION_PATH/notary_helpers.sh"

        dmg_path="codex-rs/target/${{ inputs.target }}/release/codex-${{ inputs.target }}.dmg"

        if [[ ! -f "$dmg_path" ]]; then
          echo "dmg $dmg_path not found"
          exit 1
        fi

        keychain_args=()
        if [[ -n "${APPLE_CODESIGN_KEYCHAIN:-}" && -f "${APPLE_CODESIGN_KEYCHAIN}" ]]; then
          keychain_args+=(--keychain "${APPLE_CODESIGN_KEYCHAIN}")
        fi

        codesign --force --timestamp --sign "$APPLE_CODESIGN_IDENTITY" "${keychain_args[@]}" "$dmg_path"
        notarize_submission "codex-${{ inputs.target }}.dmg" "$dmg_path" "$notary_key_path"
        xcrun stapler staple "$dmg_path"

    - name: Remove signing keychain
      if: ${{ always() }}
      shell: bash
      env:
        APPLE_CODESIGN_KEYCHAIN: ${{ env.APPLE_CODESIGN_KEYCHAIN }}
      run: |
        set -euo pipefail
        if [[ -n "${APPLE_CODESIGN_KEYCHAIN:-}" ]]; then
          keychain_args=()
          while IFS= read -r keychain; do
            [[ "$keychain" == "$APPLE_CODESIGN_KEYCHAIN" ]] && continue
            [[ -n "$keychain" ]] && keychain_args+=("$keychain")
          done < <(security list-keychains | sed 's/^[[:space:]]*//;s/[[:space:]]*$//;s/"//g')
          if ((${#keychain_args[@]} > 0)); then
            security list-keychains -s "${keychain_args[@]}"
            security default-keychain -s "${keychain_args[0]}"
          fi

          if [[ -f "$APPLE_CODESIGN_KEYCHAIN" ]]; then
            security delete-keychain "$APPLE_CODESIGN_KEYCHAIN"
          fi
        fi
@@ -1,46 +0,0 @@
#!/usr/bin/env bash

notarize_submission() {
  local label="$1"
  local path="$2"
  local notary_key_path="$3"

  if [[ -z "${APPLE_NOTARIZATION_KEY_ID:-}" || -z "${APPLE_NOTARIZATION_ISSUER_ID:-}" ]]; then
    echo "APPLE_NOTARIZATION_KEY_ID and APPLE_NOTARIZATION_ISSUER_ID are required for notarization"
    exit 1
  fi

  if [[ -z "$notary_key_path" || ! -f "$notary_key_path" ]]; then
    echo "Notary key file $notary_key_path not found"
    exit 1
  fi

  if [[ ! -f "$path" ]]; then
    echo "Notarization payload $path not found"
    exit 1
  fi

  local submission_json
  submission_json=$(xcrun notarytool submit "$path" \
    --key "$notary_key_path" \
    --key-id "$APPLE_NOTARIZATION_KEY_ID" \
    --issuer "$APPLE_NOTARIZATION_ISSUER_ID" \
    --output-format json \
    --wait)

  local status submission_id
  status=$(printf '%s\n' "$submission_json" | jq -r '.status // "Unknown"')
  submission_id=$(printf '%s\n' "$submission_json" | jq -r '.id // ""')

  if [[ -z "$submission_id" ]]; then
    echo "Failed to retrieve submission ID for $label"
    exit 1
  fi

  echo "::notice title=Notarization::$label submission ${submission_id} completed with status ${status}"

  if [[ "$status" != "Accepted" ]]; then
    echo "Notarization failed for ${label} (submission ${submission_id}, status ${status})"
    exit 1
  fi
}
57
.github/actions/windows-code-sign/action.yml
vendored
@@ -1,57 +0,0 @@
name: windows-code-sign
description: Sign Windows binaries with Azure Trusted Signing.
inputs:
  target:
    description: Target triple for the artifacts to sign.
    required: true
  client-id:
    description: Azure Trusted Signing client ID.
    required: true
  tenant-id:
    description: Azure tenant ID for Trusted Signing.
    required: true
  subscription-id:
    description: Azure subscription ID for Trusted Signing.
    required: true
  endpoint:
    description: Azure Trusted Signing endpoint.
    required: true
  account-name:
    description: Azure Trusted Signing account name.
    required: true
  certificate-profile-name:
    description: Certificate profile name for signing.
    required: true

runs:
  using: composite
  steps:
    - name: Azure login for Trusted Signing (OIDC)
      uses: azure/login@v2
      with:
        client-id: ${{ inputs.client-id }}
        tenant-id: ${{ inputs.tenant-id }}
        subscription-id: ${{ inputs.subscription-id }}

    - name: Sign Windows binaries with Azure Trusted Signing
      uses: azure/trusted-signing-action@v0
      with:
        endpoint: ${{ inputs.endpoint }}
        trusted-signing-account-name: ${{ inputs.account-name }}
        certificate-profile-name: ${{ inputs.certificate-profile-name }}
        exclude-environment-credential: true
        exclude-workload-identity-credential: true
        exclude-managed-identity-credential: true
        exclude-shared-token-cache-credential: true
        exclude-visual-studio-credential: true
        exclude-visual-studio-code-credential: true
        exclude-azure-cli-credential: false
        exclude-azure-powershell-credential: true
        exclude-azure-developer-cli-credential: true
        exclude-interactive-browser-credential: true
        cache-dependencies: false
        files: |
          ${{ github.workspace }}/codex-rs/target/${{ inputs.target }}/release/codex.exe
          ${{ github.workspace }}/codex-rs/target/${{ inputs.target }}/release/codex-responses-api-proxy.exe
          ${{ github.workspace }}/codex-rs/target/${{ inputs.target }}/release/codex-windows-sandbox-setup.exe
          ${{ github.workspace }}/codex-rs/target/${{ inputs.target }}/release/codex-command-runner.exe
BIN
.github/codex-cli-login.png
vendored
Binary file not shown.
Before Width: | Height: | Size: 2.9 MiB
BIN
.github/codex-cli-permissions.png
vendored
Binary file not shown.
Before Width: | Height: | Size: 408 KiB
BIN
.github/codex-cli-splash.png
vendored
Binary file not shown.
Before Width: | Height: | Size: 3.1 MiB
2
.github/codex/home/config.toml
vendored
@@ -1,3 +1,3 @@
model = "gpt-5.1"
model = "o3"

# Consider setting [mcp_servers] here!
139
.github/codex/labels/codex-rust-review.md
vendored
@@ -1,139 +0,0 @@
Review this PR and respond with a very concise final message, formatted in Markdown.

There should be a summary of the changes (1-2 sentences) and a few bullet points if necessary.

Then provide the **review** (1-2 sentences plus bullet points, friendly tone).

Things to look out for when doing the review:

## General Principles

- **Make sure the pull request body explains the motivation behind the change.** If the author has failed to do this, call it out, and if you think you can deduce the motivation behind the change, propose copy.
- Ideally, the PR body also contains a small summary of the change. For small changes, the PR title may be sufficient.
- Each PR should ideally do one conceptual thing. For example, if a PR does a refactoring as well as introducing a new feature, push back and suggest the refactoring be done in a separate PR. This makes things easier for the reviewer, as refactoring changes can often be far-reaching, yet quick to review.
- When introducing new code, be on the lookout for code that duplicates existing code. When found, propose a way to refactor the existing code such that it should be reused.

## Code Organization

- Each crate in the Cargo workspace in `codex-rs` has a specific purpose: make a note if you believe new code is not introduced in the correct crate.
- When possible, try to keep the `core` crate as small as possible. Non-core but shared logic is often a good candidate for `codex-rs/common`.
- Be wary of large files and offer suggestions for how to break things into more reasonably-sized files.
- Rust files should generally be organized such that the public parts of the API appear near the top of the file and helper functions go below. This is analogous to the "inverted pyramid" structure that is favored in journalism.

## Assertions in Tests

Assert the equality of entire objects instead of doing "piecemeal comparisons" that perform `assert_eq!()` on individual fields.

Note that unit tests also function as "executable documentation." As shown in the following example, "piecemeal comparisons" are often more verbose, provide less coverage, and are not as useful as executable documentation.

For example, suppose you have the following enum:

```rust
#[derive(Debug, PartialEq)]
enum Message {
    Request {
        id: String,
        method: String,
        params: Option<serde_json::Value>,
    },
    Notification {
        method: String,
        params: Option<serde_json::Value>,
    },
}
```

This is an example of a _piecemeal_ comparison:

```rust
// BAD: Piecemeal Comparison

#[test]
fn test_get_latest_messages() {
    let messages = get_latest_messages();
    assert_eq!(messages.len(), 2);

    let m0 = &messages[0];
    match m0 {
        Message::Request { id, method, params } => {
            assert_eq!(id, "123");
            assert_eq!(method, "subscribe");
            assert_eq!(
                *params,
                Some(json!({
                    "conversation_id": "x42z86"
                }))
            )
        }
        Message::Notification { .. } => {
            panic!("expected Request");
        }
    }

    let m1 = &messages[1];
    match m1 {
        Message::Request { .. } => {
            panic!("expected Notification");
        }
        Message::Notification { method, params } => {
            assert_eq!(method, "log");
            assert_eq!(
                *params,
                Some(json!({
                    "level": "info",
                    "message": "subscribed"
                }))
            )
        }
    }
}
```

This is a _deep_ comparison:

```rust
// GOOD: Verify the entire structure with a single assert_eq!().

use pretty_assertions::assert_eq;

#[test]
fn test_get_latest_messages() {
    let messages = get_latest_messages();

    assert_eq!(
        vec![
            Message::Request {
                id: "123".to_string(),
                method: "subscribe".to_string(),
                params: Some(json!({
                    "conversation_id": "x42z86"
                })),
            },
            Message::Notification {
                method: "log".to_string(),
                params: Some(json!({
                    "level": "info",
                    "message": "subscribed"
                })),
            },
        ],
        messages,
    );
}
```

## More Tactical Rust Things To Look Out For

- Do not use `unsafe` (unless you have a really, really good reason like using an operating system API directly and no safe wrapper exists). For example, there are cases where it is tempting to use `unsafe` in order to use `std::env::set_var()`, but this is indeed `unsafe` and has led to race conditions on multiple occasions. (When this happens, find a mechanism other than environment variables to use for configuration.)
- Encourage the use of small enums or the newtype pattern in Rust if it helps readability without adding significant cognitive load or lines of code (a sketch follows this list).
- If you see opportunities for the changes in a diff to use more idiomatic Rust, please make specific recommendations. For example, favor the use of expressions over `return`.
- When modifying a `Cargo.toml` file, make sure that dependency lists stay alphabetically sorted. Also consider whether a new dependency is added to the appropriate place (e.g., `[dependencies]` versus `[dev-dependencies]`).
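
A minimal sketch of the newtype pattern recommended above (the `ConversationId` type here is hypothetical, not from the codebase):

```rust
/// Wrapping a raw `String` in a newtype documents intent and prevents a
/// conversation id from being confused with any other string.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ConversationId(String);

impl ConversationId {
    pub fn new(id: impl Into<String>) -> Self {
        Self(id.into())
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}
```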

## Pull Request Body

- If the nature of the change seems to have a visual component (which is often the case for changes to `codex-rs/tui`), recommend including a screenshot or video to demonstrate the change, if appropriate.
- References to existing GitHub issues and PRs are encouraged, where appropriate, though you likely do not have network access, so may not be able to help here.

# PR Information

{CODEX_ACTION_GITHUB_EVENT_PATH} contains the JSON that triggered this GitHub workflow. It contains the `base` and `head` refs that define this PR. Both refs are available locally.
4
.github/dependabot.yaml
vendored
@@ -24,7 +24,3 @@ updates:
    directory: /
    schedule:
      interval: weekly
  - package-ecosystem: rust-toolchain
    directory: codex-rs
    schedule:
      interval: weekly
90
.github/dotslash-config.json
vendored
@@ -1,83 +1,27 @@
{
  "outputs": {
    "codex-exec": {
      "platforms": {
        "macos-aarch64": { "regex": "^codex-exec-aarch64-apple-darwin\\.zst$", "path": "codex-exec" },
        "macos-x86_64": { "regex": "^codex-exec-x86_64-apple-darwin\\.zst$", "path": "codex-exec" },
        "linux-x86_64": { "regex": "^codex-exec-x86_64-unknown-linux-musl\\.zst$", "path": "codex-exec" },
        "linux-aarch64": { "regex": "^codex-exec-aarch64-unknown-linux-musl\\.zst$", "path": "codex-exec" }
      }
    },

    "codex": {
      "platforms": {
        "macos-aarch64": {
          "regex": "^codex-aarch64-apple-darwin\\.zst$",
          "path": "codex"
        },
        "macos-x86_64": {
          "regex": "^codex-x86_64-apple-darwin\\.zst$",
          "path": "codex"
        },
        "linux-x86_64": {
          "regex": "^codex-x86_64-unknown-linux-musl\\.zst$",
          "path": "codex"
        },
        "linux-aarch64": {
          "regex": "^codex-aarch64-unknown-linux-musl\\.zst$",
          "path": "codex"
        },
        "windows-x86_64": {
          "regex": "^codex-x86_64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex.exe"
        },
        "windows-aarch64": {
          "regex": "^codex-aarch64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex.exe"
        }
        "macos-aarch64": { "regex": "^codex-aarch64-apple-darwin\\.zst$", "path": "codex" },
        "macos-x86_64": { "regex": "^codex-x86_64-apple-darwin\\.zst$", "path": "codex" },
        "linux-x86_64": { "regex": "^codex-x86_64-unknown-linux-musl\\.zst$", "path": "codex" },
        "linux-aarch64": { "regex": "^codex-aarch64-unknown-linux-musl\\.zst$", "path": "codex" }
      }
    },
    "codex-responses-api-proxy": {

    "codex-linux-sandbox": {
      "platforms": {
        "macos-aarch64": {
          "regex": "^codex-responses-api-proxy-aarch64-apple-darwin\\.zst$",
          "path": "codex-responses-api-proxy"
        },
        "macos-x86_64": {
          "regex": "^codex-responses-api-proxy-x86_64-apple-darwin\\.zst$",
          "path": "codex-responses-api-proxy"
        },
        "linux-x86_64": {
          "regex": "^codex-responses-api-proxy-x86_64-unknown-linux-musl\\.zst$",
          "path": "codex-responses-api-proxy"
        },
        "linux-aarch64": {
          "regex": "^codex-responses-api-proxy-aarch64-unknown-linux-musl\\.zst$",
          "path": "codex-responses-api-proxy"
        },
        "windows-x86_64": {
          "regex": "^codex-responses-api-proxy-x86_64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex-responses-api-proxy.exe"
        },
        "windows-aarch64": {
          "regex": "^codex-responses-api-proxy-aarch64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex-responses-api-proxy.exe"
        }
      }
    },
    "codex-command-runner": {
      "platforms": {
        "windows-x86_64": {
          "regex": "^codex-command-runner-x86_64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex-command-runner.exe"
        },
        "windows-aarch64": {
          "regex": "^codex-command-runner-aarch64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex-command-runner.exe"
        }
      }
    },
    "codex-windows-sandbox-setup": {
      "platforms": {
        "windows-x86_64": {
          "regex": "^codex-windows-sandbox-setup-x86_64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex-windows-sandbox-setup.exe"
        },
        "windows-aarch64": {
          "regex": "^codex-windows-sandbox-setup-aarch64-pc-windows-msvc\\.exe\\.zst$",
          "path": "codex-windows-sandbox-setup.exe"
        }
        "linux-x86_64": { "regex": "^codex-linux-sandbox-x86_64-unknown-linux-musl\\.zst$", "path": "codex-linux-sandbox" },
        "linux-aarch64": { "regex": "^codex-linux-sandbox-aarch64-unknown-linux-musl\\.zst$", "path": "codex-linux-sandbox" }
      }
    }
  }
}
18
.github/prompts/issue-deduplicator.txt
vendored
@@ -1,18 +0,0 @@
You are an assistant that triages new GitHub issues by identifying potential duplicates.

You will receive the following JSON files located in the current working directory:

- `codex-current-issue.json`: JSON object describing the newly created issue (fields: number, title, body).
- `codex-existing-issues.json`: JSON array of recent issues (each element includes number, title, body, createdAt).

Instructions:

- Load both files as JSON and review their contents carefully. The codex-existing-issues.json file is large; ensure you explore all of it.
- Compare the current issue against the existing issues to find up to five that appear to describe the same underlying problem or request.
- Only consider an issue a potential duplicate if there is a clear overlap in symptoms, feature requests, reproduction steps, or error messages.
- Prioritize newer issues when similarity is comparable.
- Ignore pull requests and issues whose similarity is tenuous.
- When unsure, prefer returning fewer matches.

Output requirements:

- Respond with a JSON array of issue numbers (integers), ordered from most likely duplicate to least.
- Include at most five numbers.
- If you find no plausible duplicates, respond with `[]`.
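
For example, a response of `[4821, 4532]` (hypothetical issue numbers) marks 4821 as the most likely duplicate.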
26
.github/prompts/issue-labeler.txt
vendored
@@ -1,26 +0,0 @@
You are an assistant that reviews GitHub issues for the repository.

Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
Follow these rules:

- Only pick labels out of the list below.
- Prefer a small set of precise labels over many broad ones.
- If none of the labels fit, respond with an empty JSON array: []
- Output must be a JSON array of label names (strings) with no additional commentary.

Labels to apply:

1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
3. extension — VS Code (or other IDE) extension-specific issues.
4. windows-os — Bugs or friction specific to Windows environments (PowerShell behavior, path handling, copy/paste, OS-specific auth or tooling failures).
5. mcp — Topics involving Model Context Protocol servers/clients.
6. codex-web — Issues targeting the Codex web UI/Cloud experience.
8. azure — Problems or requests tied to Azure OpenAI deployments.
9. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.

Issue information is available in environment variables:

ISSUE_NUMBER
ISSUE_TITLE
ISSUE_BODY
REPO_FULL_NAME
8
.github/pull_request_template.md
vendored
@@ -1,8 +0,0 @@
# External (non-OpenAI) Pull Request Requirements

Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md

If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes.

Include a link to a bug report or enhancement request.
26
.github/workflows/cargo-deny.yml
vendored
@@ -1,26 +0,0 @@
name: cargo-deny

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  cargo-deny:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./codex-rs
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable

      - name: Run cargo-deny
        uses: EmbarkStudios/cargo-deny-action@v1
        with:
          rust-version: stable
          manifest-path: ./codex-rs/Cargo.toml
79
.github/workflows/ci.yml
vendored
@@ -1,7 +1,7 @@
name: ci

on:
  pull_request: {}
  pull_request: { branches: [main] }
  push: { branches: [main] }

jobs:
@@ -12,45 +12,67 @@ jobs:
      NODE_OPTIONS: --max-old-space-size=4096
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 22

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 10.8.1
          run_install: false

      - name: Setup Node.js
        uses: actions/setup-node@v6
      - name: Get pnpm store directory
        id: pnpm-cache
        shell: bash
        run: |
          echo "store_path=$(pnpm store path --silent)" >> $GITHUB_OUTPUT

      - name: Setup pnpm cache
        uses: actions/cache@v4
        with:
          node-version: 22
          path: ${{ steps.pnpm-cache.outputs.store_path }}
          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
          restore-keys: |
            ${{ runner.os }}-pnpm-store-

      - name: Install dependencies
        run: pnpm install --frozen-lockfile
        run: pnpm install

      # stage_npm_packages.py requires DotSlash when staging releases.
      - uses: facebook/install-dotslash@v2
      # Run all tasks using workspace filters

      - name: Stage npm package
        id: stage_npm_package
      - name: Check TypeScript code formatting
        working-directory: codex-cli
        run: pnpm run format

      - name: Check Markdown and config file formatting
        run: pnpm run format

      - name: Run tests
        run: pnpm run test

      - name: Lint
        run: |
          pnpm --filter @openai/codex exec -- eslint src tests --ext ts --ext tsx \
            --report-unused-disable-directives \
            --rule "no-console:error" \
            --rule "no-debugger:error" \
            --max-warnings=-1

      - name: Type-check
        run: pnpm run typecheck

      - name: Build
        run: pnpm run build

      - name: Ensure staging a release works.
        working-directory: codex-cli
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          set -euo pipefail
          # Use a rust-release version that includes all native binaries.
          CODEX_VERSION=0.74.0
          OUTPUT_DIR="${RUNNER_TEMP}"
          python3 ./scripts/stage_npm_packages.py \
            --release-version "$CODEX_VERSION" \
            --package codex \
            --output-dir "$OUTPUT_DIR"
          PACK_OUTPUT="${OUTPUT_DIR}/codex-npm-${CODEX_VERSION}.tgz"
          echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"

      - name: Upload staged npm package artifact
        uses: actions/upload-artifact@v6
        with:
          name: codex-npm-staging
          path: ${{ steps.stage_npm_package.outputs.pack_output }}
        run: pnpm stage-release

      - name: Ensure root README.md contains only ASCII and certain Unicode code points
        run: ./scripts/asciicheck.py README.md
@@ -61,6 +83,3 @@ jobs:
        run: ./scripts/asciicheck.py codex-cli/README.md
      - name: Check codex-cli/README ToC
        run: python3 scripts/readme_toc.py codex-cli/README.md

      - name: Prettier (run `pnpm run format:fix` to fix)
        run: pnpm run format
28
.github/workflows/cla.yml
vendored
@@ -13,37 +13,17 @@ permissions:

jobs:
  cla:
    # Only run the CLA assistant for the canonical openai repo so forks are not blocked
    # and contributors who signed previously do not receive duplicate CLA notifications.
    if: ${{ github.repository_owner == 'openai' }}
    runs-on: ubuntu-latest
    steps:
      - uses: contributor-assistant/github-action@v2.6.1
        # Run on close only if the PR was merged. This will lock the PR to preserve
        # the CLA agreement. We don't want to lock PRs that have been closed without
        # merging because the contributor may want to respond with additional comments.
        # This action has a "lock-pullrequest-aftermerge" option that can be set to false,
        # but that would unconditionally skip locking even in cases where the PR was merged.
        if: |
          (
            github.event_name == 'pull_request_target' &&
            (
              github.event.action == 'opened' ||
              github.event.action == 'synchronize' ||
              (github.event.action == 'closed' && github.event.pull_request.merged == true)
            )
          ) ||
          (
            github.event_name == 'issue_comment' &&
            (
              github.event.comment.body == 'recheck' ||
              github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA'
            )
          )
          github.event_name == 'pull_request_target' ||
          github.event.comment.body == 'recheck' ||
          github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          path-to-document: https://github.com/openai/codex/blob/main/docs/CLA.md
          path-to-signatures: signatures/cla.json
          branch: cla-signatures
          allowlist: codex,dependabot,dependabot[bot],github-actions[bot]
          allowlist: dependabot[bot]
105
.github/workflows/close-stale-contributor-prs.yml
vendored
@@ -1,105 +0,0 @@
|
||||
name: Close stale contributor PRs
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
schedule:
|
||||
- cron: "0 6 * * *"
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
pull-requests: write
|
||||
|
||||
jobs:
|
||||
close-stale-contributor-prs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Close inactive PRs from contributors
|
||||
uses: actions/github-script@v8
|
||||
with:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
script: |
|
||||
const DAYS_INACTIVE = 14;
|
||||
const cutoff = new Date(Date.now() - DAYS_INACTIVE * 24 * 60 * 60 * 1000);
|
||||
const { owner, repo } = context.repo;
|
||||
const dryRun = false;
|
||||
const stalePrs = [];
|
||||
|
||||
core.info(`Dry run mode: ${dryRun}`);
|
||||
|
||||
const prs = await github.paginate(github.rest.pulls.list, {
|
||||
owner,
|
||||
repo,
|
||||
state: "open",
|
||||
per_page: 100,
|
||||
sort: "updated",
|
||||
direction: "asc",
|
||||
});
|
||||
|
||||
for (const pr of prs) {
|
||||
const lastUpdated = new Date(pr.updated_at);
|
||||
if (lastUpdated > cutoff) {
|
||||
core.info(`PR ${pr.number} is fresh`);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (!pr.user || pr.user.type !== "User") {
|
||||
core.info(`PR ${pr.number} wasn't created by a user`);
|
||||
continue;
|
||||
}
|
||||
|
||||
let permission;
|
||||
try {
|
||||
const permissionResponse = await github.rest.repos.getCollaboratorPermissionLevel({
|
||||
owner,
|
||||
repo,
|
||||
username: pr.user.login,
|
||||
});
|
||||
permission = permissionResponse.data.permission;
|
||||
} catch (error) {
|
||||
if (error.status === 404) {
|
||||
core.info(`Author ${pr.user.login} is not a collaborator; skipping #${pr.number}`);
|
||||
continue;
|
||||
}
|
||||
throw error;
|
||||
}
|
||||
|
||||
const hasContributorAccess = ["admin", "maintain", "write"].includes(permission);
|
||||
if (!hasContributorAccess) {
|
||||
core.info(`Author ${pr.user.login} has ${permission} access; skipping #${pr.number}`);
|
||||
continue;
|
||||
}
|
||||
|
||||
stalePrs.push(pr);
|
||||
}
|
||||
|
||||
if (!stalePrs.length) {
|
||||
core.info("No stale contributor pull requests found.");
|
||||
return;
|
||||
}
|
||||
|
||||
for (const pr of stalePrs) {
|
||||
const issue_number = pr.number;
|
||||
const closeComment = `Closing this pull request because it has had no updates for more than ${DAYS_INACTIVE} days. If you plan to continue working on it, feel free to reopen or open a new PR.`;
|
||||
|
||||
if (dryRun) {
|
||||
core.info(`[dry-run] Would close contributor PR #${issue_number} from ${pr.user.login}`);
|
||||
continue;
|
||||
}
|
||||
|
||||
await github.rest.issues.createComment({
|
||||
owner,
|
||||
repo,
|
||||
issue_number,
|
||||
body: closeComment,
|
||||
});
|
||||
|
||||
await github.rest.pulls.update({
|
||||
owner,
|
||||
repo,
|
||||
pull_number: issue_number,
|
||||
state: "closed",
|
||||
});
|
||||
|
||||
core.info(`Closed contributor PR #${issue_number} from ${pr.user.login}`);
|
||||
}
|
||||
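Editor's note: the staleness query at the heart of the removed workflow above can be approximated from a shell with the gh CLI. A minimal sketch, not part of the diff; OWNER/REPO is a placeholder and GNU date, gh, and jq are assumed:

# Sketch: list open PRs untouched for 14+ days, mirroring DAYS_INACTIVE above.
cutoff="$(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%SZ)"
gh pr list --repo OWNER/REPO --state open --limit 100 \
  --json number,updatedAt,author \
  | jq -r --arg cutoff "$cutoff" \
      '.[] | select(.updatedAt < $cutoff) | "#\(.number)\t\(.author.login)"'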
4 .github/workflows/codespell.yml vendored
@@ -18,10 +18,10 @@ jobs:
    steps:
      - name: Checkout
        uses: actions/checkout@v6
        uses: actions/checkout@v4
      - name: Annotate locations with typos
        uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1
      - name: Codespell
        uses: codespell-project/actions-codespell@8f01853be192eb0f849a5c7d721450e7a467c579 # v2.2
        uses: codespell-project/actions-codespell@406322ec52dd7b488e48c1c4b82e2a8b3a1bf630 # v2
        with:
          ignore_words_file: .codespellignore
95 .github/workflows/codex.yml vendored Normal file
@@ -0,0 +1,95 @@
name: Codex

on:
  issues:
    types: [opened, labeled]
  pull_request:
    branches: [main]
    types: [labeled]

jobs:
  codex:
    # This `if` check provides complex filtering logic to avoid running Codex
    # on every PR. Admittedly, one thing this does not verify is whether the
    # sender has write access to the repo: that must be done as part of a
    # runtime step.
    #
    # Note the label values should match the ones in the .github/codex/labels
    # folder.
    if: |
      (github.event_name == 'issues' && (
        (github.event.action == 'labeled' && (github.event.label.name == 'codex-attempt' || github.event.label.name == 'codex-triage'))
      )) ||
      (github.event_name == 'pull_request' && github.event.action == 'labeled' && github.event.label.name == 'codex-review')
    runs-on: ubuntu-latest
    permissions:
      contents: write # can push or create branches
      issues: write # for comments + labels on issues/PRs
      pull-requests: write # for PR comments/labels
    steps:
      # TODO: Consider adding an optional mode (--dry-run?) to actions/codex
      # that verifies whether Codex should actually be run for this event.
      # (For example, it may be rejected because the sender does not have
      # write access to the repo.) The benefit would be two-fold:
      # 1. As the first step of this job, it gives us a chance to add a reaction
      #    or comment to the PR/issue ASAP to "ack" the request.
      # 2. It saves resources by skipping the clone and setup steps below if
      #    Codex is not going to run.

      - name: Checkout repository
        uses: actions/checkout@v4

      # We install the dependencies like we would for an ordinary CI job,
      # particularly because Codex will not have network access to install
      # these dependencies.
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 22

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 10.8.1
          run_install: false

      - name: Get pnpm store directory
        id: pnpm-cache
        shell: bash
        run: |
          echo "store_path=$(pnpm store path --silent)" >> $GITHUB_OUTPUT

      - name: Setup pnpm cache
        uses: actions/cache@v4
        with:
          path: ${{ steps.pnpm-cache.outputs.store_path }}
          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
          restore-keys: |
            ${{ runner.os }}-pnpm-store-

      - name: Install dependencies
        run: pnpm install

      - uses: dtolnay/rust-toolchain@1.88
        with:
          targets: x86_64-unknown-linux-gnu
          components: clippy

      - uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            ${{ github.workspace }}/codex-rs/target/
          key: cargo-ubuntu-24.04-x86_64-unknown-linux-gnu-${{ hashFiles('**/Cargo.lock') }}

      # Note it is possible that the `verify` step internal to Run Codex will
      # fail, in which case the work to setup the repo was worthless :(
      - name: Run Codex
        uses: ./.github/actions/codex
        with:
          openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
          codex_home: ./.github/codex/home
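Editor's note: the comment in the `if` block above defers the sender's write-access check to a runtime step. A hedged sketch of what such a check could look like via the REST collaborator-permission endpoint; OWNER/REPO and USERNAME are placeholders, and the legacy permission field reports maintainers as write:

# Sketch: runtime write-access gate the workflow comment alludes to.
perm="$(gh api "repos/OWNER/REPO/collaborators/USERNAME/permission" --jq '.permission')"
case "$perm" in
  admin|write) echo "sender may trigger Codex" ;;
  *) echo "rejecting: permission is '$perm'" >&2; exit 1 ;;
esac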
139 .github/workflows/issue-deduplicator.yml vendored
@@ -1,139 +0,0 @@
name: Issue Deduplicator

on:
  issues:
    types:
      - opened
      - labeled

jobs:
  gather-duplicates:
    name: Identify potential duplicates
    if: ${{ github.event.action == 'opened' || (github.event.action == 'labeled' && github.event.label.name == 'codex-deduplicate') }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
    outputs:
      codex_output: ${{ steps.codex.outputs.final-message }}
    steps:
      - uses: actions/checkout@v6

      - name: Prepare Codex inputs
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          set -eo pipefail

          CURRENT_ISSUE_FILE=codex-current-issue.json
          EXISTING_ISSUES_FILE=codex-existing-issues.json

          gh issue list --repo "${{ github.repository }}" \
            --json number,title,body,createdAt \
            --limit 1000 \
            --state all \
            --search "sort:created-desc" \
            | jq '.' \
            > "$EXISTING_ISSUES_FILE"

          gh issue view "${{ github.event.issue.number }}" \
            --repo "${{ github.repository }}" \
            --json number,title,body \
            | jq '.' \
            > "$CURRENT_ISSUE_FILE"

      - id: codex
        uses: openai/codex-action@main
        with:
          openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
          allow-users: "*"
          prompt: |
            You are an assistant that triages new GitHub issues by identifying potential duplicates.

            You will receive the following JSON files located in the current working directory:
            - `codex-current-issue.json`: JSON object describing the newly created issue (fields: number, title, body).
            - `codex-existing-issues.json`: JSON array of recent issues (each element includes number, title, body, createdAt).

            Instructions:
            - Compare the current issue against the existing issues to find up to five that appear to describe the same underlying problem or request.
            - Focus on the underlying intent and context of each issue—such as reported symptoms, feature requests, reproduction steps, or error messages—rather than relying solely on string similarity or synthetic metrics.
            - After your analysis, validate your results in 1-2 lines explaining your decision to return the selected matches.
            - When unsure, prefer returning fewer matches.
            - Include at most five numbers.

          output-schema: |
            {
              "type": "object",
              "properties": {
                "issues": {
                  "type": "array",
                  "items": {
                    "type": "string"
                  }
                },
                "reason": { "type": "string" }
              },
              "required": ["issues", "reason"],
              "additionalProperties": false
            }

  comment-on-issue:
    name: Comment with potential duplicates
    needs: gather-duplicates
    if: ${{ needs.gather-duplicates.result != 'skipped' }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    steps:
      - name: Comment on issue
        uses: actions/github-script@v8
        env:
          CODEX_OUTPUT: ${{ needs.gather-duplicates.outputs.codex_output }}
        with:
          github-token: ${{ github.token }}
          script: |
            const raw = process.env.CODEX_OUTPUT ?? '';
            let parsed;
            try {
              parsed = JSON.parse(raw);
            } catch (error) {
              core.info(`Codex output was not valid JSON. Raw output: ${raw}`);
              core.info(`Parse error: ${error.message}`);
              return;
            }

            const issues = Array.isArray(parsed?.issues) ? parsed.issues : [];
            const currentIssueNumber = String(context.payload.issue.number);

            console.log(`Current issue number: ${currentIssueNumber}`);
            console.log(issues);

            const filteredIssues = issues.filter((value) => String(value) !== currentIssueNumber);

            if (filteredIssues.length === 0) {
              core.info('Codex reported no potential duplicates.');
              return;
            }

            const lines = [
              'Potential duplicates detected. Please review them and close your issue if it is a duplicate.',
              '',
              ...filteredIssues.map((value) => `- #${String(value)}`),
              '',
              '*Powered by [Codex Action](https://github.com/openai/codex-action)*'];

            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.issue.number,
              body: lines.join("\n"),
            });

      - name: Remove codex-deduplicate label
        if: ${{ always() && github.event.action == 'labeled' && github.event.label.name == 'codex-deduplicate' }}
        env:
          GH_TOKEN: ${{ github.token }}
          GH_REPO: ${{ github.repository }}
        run: |
          gh issue edit "${{ github.event.issue.number }}" --remove-label codex-deduplicate || true
          echo "Attempted to remove label: codex-deduplicate"
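Editor's note: the `output-schema` above constrains what the action emits, but the consuming job still re-parses defensively. The same guard can be expressed in shell with jq; a minimal sketch assuming the step output has been placed in $CODEX_OUTPUT:

# Sketch: re-validate Codex's structured output before using it.
if printf '%s' "$CODEX_OUTPUT" | jq -e \
    'type == "object" and (.issues | type == "array") and (.reason | type == "string")' \
    >/dev/null 2>&1; then
  printf '%s' "$CODEX_OUTPUT" | jq -r '.issues[]'
else
  echo "output did not match the declared schema; skipping" >&2
fi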
130 .github/workflows/issue-labeler.yml vendored
@@ -1,130 +0,0 @@
name: Issue Labeler

on:
  issues:
    types:
      - opened
      - labeled

jobs:
  gather-labels:
    name: Generate label suggestions
    if: ${{ github.event.action == 'opened' || (github.event.action == 'labeled' && github.event.label.name == 'codex-label') }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
    outputs:
      codex_output: ${{ steps.codex.outputs.final-message }}
    steps:
      - uses: actions/checkout@v6

      - id: codex
        uses: openai/codex-action@main
        with:
          openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
          allow-users: "*"
          prompt: |
            You are an assistant that reviews GitHub issues for the repository.

            Your job is to choose the most appropriate labels for the issue described later in this prompt.
            Follow these rules:

            - Add one (and only one) of the following three labels to distinguish the type of issue. Default to "bug" if unsure.
              1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
              2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
              3. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).

            - If applicable, add one of the following labels to specify which sub-product or product surface the issue relates to.
              1. CLI — the Codex command line interface.
              2. extension — VS Code (or other IDE) extension-specific issues.
              3. codex-web — Issues targeting the Codex web UI/Cloud experience.
              4. github-action — Issues with the Codex GitHub action.
              5. iOS — Issues with the Codex iOS app.

            - Additionally add zero or more of the following labels that are relevant to the issue content. Prefer a small set of precise labels over many broad ones.
              1. windows-os — Bugs or friction specific to Windows environments (always when PowerShell is mentioned, path handling, copy/paste, OS-specific auth or tooling failures).
              2. mcp — Topics involving Model Context Protocol servers/clients.
              3. mcp-server — Problems related to the codex mcp-server command, where codex runs as an MCP server.
              4. azure — Problems or requests tied to Azure OpenAI deployments.
              5. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
              6. code-review — Issues related to the code review feature or functionality.
              7. auth - Problems related to authentication, login, or access tokens.
              8. codex-exec - Problems related to the "codex exec" command or functionality.
              9. context-management - Problems related to compaction, context windows, or available context reporting.
              10. custom-model - Problems that involve using custom model providers, local models, or OSS models.
              11. rate-limits - Problems related to token limits, rate limits, or token usage reporting.
              12. sandbox - Issues related to local sandbox environments or tool call approvals to override sandbox restrictions.
              13. tool-calls - Problems related to specific tool call invocations including unexpected errors, failures, or hangs.
              14. TUI - Problems with the terminal user interface (TUI) including keyboard shortcuts, copy & pasting, menus, or screen update issues.

            Issue number: ${{ github.event.issue.number }}

            Issue title:
            ${{ github.event.issue.title }}

            Issue body:
            ${{ github.event.issue.body }}

            Repository full name:
            ${{ github.repository }}

          output-schema: |
            {
              "type": "object",
              "properties": {
                "labels": {
                  "type": "array",
                  "items": {
                    "type": "string"
                  }
                }
              },
              "required": ["labels"],
              "additionalProperties": false
            }

  apply-labels:
    name: Apply labels from Codex output
    needs: gather-labels
    if: ${{ needs.gather-labels.result != 'skipped' }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    env:
      GH_TOKEN: ${{ github.token }}
      GH_REPO: ${{ github.repository }}
      ISSUE_NUMBER: ${{ github.event.issue.number }}
      CODEX_OUTPUT: ${{ needs.gather-labels.outputs.codex_output }}
    steps:
      - name: Apply labels
        run: |
          json=${CODEX_OUTPUT//$'\r'/}
          if [ -z "$json" ]; then
            echo "Codex produced no output. Skipping label application."
            exit 0
          fi

          if ! printf '%s' "$json" | jq -e 'type == "object" and (.labels | type == "array")' >/dev/null 2>&1; then
            echo "Codex output did not include a labels array. Raw output: $json"
            exit 0
          fi

          labels=$(printf '%s' "$json" | jq -r '.labels[] | tostring')
          if [ -z "$labels" ]; then
            echo "Codex returned an empty array. Nothing to do."
            exit 0
          fi

          cmd=(gh issue edit "$ISSUE_NUMBER")
          while IFS= read -r label; do
            cmd+=(--add-label "$label")
          done <<< "$labels"

          "${cmd[@]}" || true

      - name: Remove codex-label trigger
        if: ${{ always() && github.event.action == 'labeled' && github.event.label.name == 'codex-label' }}
        run: |
          gh issue edit "$ISSUE_NUMBER" --remove-label codex-label || true
          echo "Attempted to remove label: codex-label"
504 .github/workflows/rust-ci.yml vendored
@@ -1,544 +1,118 @@
name: rust-ci
on:
  pull_request: {}
  pull_request:
    branches:
      - main
    paths:
      - "codex-rs/**"
      - ".github/**"
  push:
    branches:
      - main

  workflow_dispatch:

# CI builds in debug (dev) for faster signal.
# For CI, we build in debug (`--profile dev`) rather than release mode so we
# get signal faster.

jobs:
  # --- Detect what changed to detect which tests to run (always runs) -------------------------------------
  changed:
    name: Detect changed areas
    runs-on: ubuntu-24.04
    outputs:
      codex: ${{ steps.detect.outputs.codex }}
      workflows: ${{ steps.detect.outputs.workflows }}
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - name: Detect changed paths (no external action)
        id: detect
        shell: bash
        run: |
          set -euo pipefail

          if [[ "${{ github.event_name }}" == "pull_request" ]]; then
            BASE_SHA='${{ github.event.pull_request.base.sha }}'
            HEAD_SHA='${{ github.event.pull_request.head.sha }}'
            echo "Base SHA: $BASE_SHA"
            echo "Head SHA: $HEAD_SHA"
            # List files changed between base and PR head
            mapfile -t files < <(git diff --name-only --no-renames "$BASE_SHA" "$HEAD_SHA")
          else
            # On push / manual runs, default to running everything
            files=("codex-rs/force" ".github/force")
          fi

          codex=false
          workflows=false
          for f in "${files[@]}"; do
            [[ $f == codex-rs/* ]] && codex=true
            [[ $f == .github/* ]] && workflows=true
          done

          echo "codex=$codex" >> "$GITHUB_OUTPUT"
          echo "workflows=$workflows" >> "$GITHUB_OUTPUT"

  # --- CI that doesn't need specific targets ---------------------------------
  # CI that don't need specific targets
  general:
    name: Format / etc
    runs-on: ubuntu-24.04
    needs: changed
    if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
    defaults:
      run:
        working-directory: codex-rs

    steps:
      - uses: actions/checkout@v6
      - uses: dtolnay/rust-toolchain@1.90
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@1.88
        with:
          components: rustfmt
      - name: cargo fmt
        run: cargo fmt -- --config imports_granularity=Item --check
      - name: Verify codegen for mcp-types
        run: ./mcp-types/check_lib_rs.py

  cargo_shear:
    name: cargo shear
    runs-on: ubuntu-24.04
    needs: changed
    if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
    defaults:
      run:
        working-directory: codex-rs
    steps:
      - uses: actions/checkout@v6
      - uses: dtolnay/rust-toolchain@1.90
      - uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
        with:
          tool: cargo-shear
          version: 1.5.1
      - name: cargo shear
        run: cargo shear

  # --- CI to validate on different os/targets --------------------------------
  lint_build:
    name: Lint/Build — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.profile == 'release' && ' (release)' || '' }}
  # CI to validate on different os/targets
  lint_build_test:
    name: ${{ matrix.runner }} - ${{ matrix.target }}
    runs-on: ${{ matrix.runner }}
    timeout-minutes: 30
    needs: changed
    # Keep job-level if to avoid spinning up runners when not needed
    if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
    defaults:
      run:
        working-directory: codex-rs
    env:
      # Speed up repeated builds across CI runs by caching compiled objects (non-Windows).
      USE_SCCACHE: ${{ startsWith(matrix.runner, 'windows') && 'false' || 'true' }}
      CARGO_INCREMENTAL: "0"
      SCCACHE_CACHE_SIZE: 10G

    strategy:
      fail-fast: false
      matrix:
        # Note: While Codex CLI does not support Windows today, we include
        # Windows in CI to ensure the code at least builds there.
        include:
          - runner: macos-14
            target: aarch64-apple-darwin
            profile: dev
          - runner: macos-14
            target: x86_64-apple-darwin
            profile: dev
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            profile: dev
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-gnu
            profile: dev
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            profile: dev
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-gnu
            profile: dev
          - runner: windows-latest
            target: x86_64-pc-windows-msvc
            profile: dev
          - runner: windows-11-arm
            target: aarch64-pc-windows-msvc
            profile: dev

          # Also run representative release builds on Mac and Linux because
          # there could be release-only build errors we want to catch.
          # Hopefully this also pre-populates the build cache to speed up
          # releases.
          - runner: macos-14
            target: aarch64-apple-darwin
            profile: release
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            profile: release
          - runner: windows-latest
            target: x86_64-pc-windows-msvc
            profile: release
          - runner: windows-11-arm
            target: aarch64-pc-windows-msvc
            profile: release

    steps:
      - uses: actions/checkout@v6
      - uses: dtolnay/rust-toolchain@1.90
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@1.88
        with:
          targets: ${{ matrix.target }}
          components: clippy

      - name: Compute lockfile hash
        id: lockhash
        working-directory: codex-rs
        shell: bash
        run: |
          set -euo pipefail
          echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
          echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"

      # Explicit cache restore: split cargo home vs target, so we can
      # avoid caching the large target dir on the gnu-dev job.
      - name: Restore cargo home cache
        id: cache_cargo_home_restore
        uses: actions/cache/restore@v5
      - uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
          key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
          restore-keys: |
            cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-

      # Install and restore sccache cache
      - name: Install sccache
        if: ${{ env.USE_SCCACHE == 'true' }}
        uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
        with:
          tool: sccache
          version: 0.7.5

      - name: Configure sccache backend
        if: ${{ env.USE_SCCACHE == 'true' }}
        shell: bash
        run: |
          set -euo pipefail
          if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
            echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
            echo "Using sccache GitHub backend"
          else
            echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
            echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
            echo "Using sccache local disk + actions/cache fallback"
          fi

      - name: Enable sccache wrapper
        if: ${{ env.USE_SCCACHE == 'true' }}
        shell: bash
        run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"

      - name: Restore sccache cache (fallback)
        if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
        id: cache_sccache_restore
        uses: actions/cache/restore@v5
        with:
          path: ${{ github.workspace }}/.sccache/
          key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
          restore-keys: |
            sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
            sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-

      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
        name: Prepare APT cache directories (musl)
        shell: bash
        run: |
          set -euo pipefail
          sudo mkdir -p /var/cache/apt/archives /var/lib/apt/lists
          sudo chown -R "$USER:$USER" /var/cache/apt /var/lib/apt/lists

      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
        name: Restore APT cache (musl)
        id: cache_apt_restore
        uses: actions/cache/restore@v5
        with:
          path: |
            /var/cache/apt
          key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
            ${{ github.workspace }}/codex-rs/target/
          key: cargo-${{ matrix.runner }}-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}

      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
        name: Install musl build tools
        env:
          DEBIAN_FRONTEND: noninteractive
        shell: bash
        run: |
          set -euo pipefail
          sudo apt-get -y update -o Acquire::Retries=3
          sudo apt-get -y install --no-install-recommends musl-tools pkg-config

      - name: Install cargo-chef
        if: ${{ matrix.profile == 'release' }}
        uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
        with:
          tool: cargo-chef
          version: 0.1.71

      - name: Pre-warm dependency cache (cargo-chef)
        if: ${{ matrix.profile == 'release' }}
        shell: bash
        run: |
          set -euo pipefail
          RECIPE="${RUNNER_TEMP}/chef-recipe.json"
          cargo chef prepare --recipe-path "$RECIPE"
          cargo chef cook --recipe-path "$RECIPE" --target ${{ matrix.target }} --release --all-features
          sudo apt install -y musl-tools pkg-config

      - name: cargo clippy
        id: clippy
        run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} -- -D warnings
        continue-on-error: true
        run: cargo clippy --target ${{ matrix.target }} --all-features --tests -- -D warnings

      # Running `cargo build` from the workspace root builds the workspace using
      # the union of all features from third-party crates. This can mask errors
      # where individual crates have underspecified features. To avoid this, we
      # run `cargo check` for each crate individually, though because this is
      # run `cargo build` for each crate individually, though because this is
      # slower, we only do this for the x86_64-unknown-linux-gnu target.
      - name: cargo check individual crates
        id: cargo_check_all_crates
        if: ${{ matrix.target == 'x86_64-unknown-linux-gnu' && matrix.profile != 'release' }}
      - name: cargo build individual crates
        id: build
        if: ${{ matrix.target == 'x86_64-unknown-linux-gnu' }}
        continue-on-error: true
        run: |
          find . -name Cargo.toml -mindepth 2 -maxdepth 2 -print0 \
            | xargs -0 -n1 -I{} bash -c 'cd "$(dirname "{}")" && cargo check --profile ${{ matrix.profile }}'
        run: find . -name Cargo.toml -mindepth 2 -maxdepth 2 -print0 | xargs -0 -n1 -I{} bash -c 'cd "$(dirname "{}")" && cargo build'

      # Save caches explicitly; make non-fatal so cache packaging
      # never fails the overall job. Only save when key wasn't hit.
      - name: Save cargo home cache
        if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
      - name: cargo test
        id: test
        continue-on-error: true
        uses: actions/cache/save@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
          key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}

      - name: Save sccache cache (fallback)
        if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
        continue-on-error: true
        uses: actions/cache/save@v5
        with:
          path: ${{ github.workspace }}/.sccache/
          key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}

      - name: sccache stats
        if: always() && env.USE_SCCACHE == 'true'
        continue-on-error: true
        run: sccache --show-stats || true

      - name: sccache summary
        if: always() && env.USE_SCCACHE == 'true'
        shell: bash
        run: |
          {
            echo "### sccache stats — ${{ matrix.target }} (${{ matrix.profile }})";
            echo;
            echo '```';
            sccache --show-stats || true;
            echo '```';
          } >> "$GITHUB_STEP_SUMMARY"

      - name: Save APT cache (musl)
        if: always() && !cancelled() && (matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl') && steps.cache_apt_restore.outputs.cache-hit != 'true'
        continue-on-error: true
        uses: actions/cache/save@v5
        with:
          path: |
            /var/cache/apt
          key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
        run: cargo test --all-features --target ${{ matrix.target }}
        env:
          RUST_BACKTRACE: 1

      # Fail the job if any of the previous steps failed.
      - name: verify all steps passed
        if: |
          steps.clippy.outcome == 'failure' ||
          steps.cargo_check_all_crates.outcome == 'failure'
          steps.build.outcome == 'failure' ||
          steps.test.outcome == 'failure'
        run: |
          echo "One or more checks failed (clippy or cargo_check_all_crates). See logs for details."
          echo "One or more checks failed (clippy, build, or test). See logs for details."
          exit 1

  tests:
    name: Tests — ${{ matrix.runner }} - ${{ matrix.target }}
    runs-on: ${{ matrix.runner }}
    timeout-minutes: 30
    needs: changed
    if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
    defaults:
      run:
        working-directory: codex-rs
    env:
      # Speed up repeated builds across CI runs by caching compiled objects (non-Windows).
      USE_SCCACHE: ${{ startsWith(matrix.runner, 'windows') && 'false' || 'true' }}
      CARGO_INCREMENTAL: "0"
      SCCACHE_CACHE_SIZE: 10G

    strategy:
      fail-fast: false
      matrix:
        include:
          - runner: macos-14
            target: aarch64-apple-darwin
            profile: dev
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-gnu
            profile: dev
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-gnu
            profile: dev
          - runner: windows-latest
            target: x86_64-pc-windows-msvc
            profile: dev
          - runner: windows-11-arm
            target: aarch64-pc-windows-msvc
            profile: dev

    steps:
      - uses: actions/checkout@v6

      # We have been running out of space when running this job on Linux for
      # x86_64-unknown-linux-gnu, so remove some unnecessary dependencies.
      - name: Remove unnecessary dependencies to save space
        if: ${{ startsWith(matrix.runner, 'ubuntu') }}
        shell: bash
        run: |
          set -euo pipefail
          sudo rm -rf \
            /usr/local/lib/android \
            /usr/share/dotnet \
            /usr/local/share/boost \
            /usr/local/lib/node_modules \
            /opt/ghc
          sudo apt-get remove -y docker.io docker-compose podman buildah

      # Some integration tests rely on DotSlash being installed.
      # See https://github.com/openai/codex/pull/7617.
      - name: Install DotSlash
        uses: facebook/install-dotslash@v2

      - uses: dtolnay/rust-toolchain@1.90
        with:
          targets: ${{ matrix.target }}

      - name: Compute lockfile hash
        id: lockhash
        working-directory: codex-rs
        shell: bash
        run: |
          set -euo pipefail
          echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
          echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"

      - name: Restore cargo home cache
        id: cache_cargo_home_restore
        uses: actions/cache/restore@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
          key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
          restore-keys: |
            cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-

      - name: Install sccache
        if: ${{ env.USE_SCCACHE == 'true' }}
        uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
        with:
          tool: sccache
          version: 0.7.5

      - name: Configure sccache backend
        if: ${{ env.USE_SCCACHE == 'true' }}
        shell: bash
        run: |
          set -euo pipefail
          if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
            echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
            echo "Using sccache GitHub backend"
          else
            echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
            echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
            echo "Using sccache local disk + actions/cache fallback"
          fi

      - name: Enable sccache wrapper
        if: ${{ env.USE_SCCACHE == 'true' }}
        shell: bash
        run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"

      - name: Restore sccache cache (fallback)
        if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
        id: cache_sccache_restore
        uses: actions/cache/restore@v5
        with:
          path: ${{ github.workspace }}/.sccache/
          key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
          restore-keys: |
            sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
            sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-

      - uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
        with:
          tool: nextest
          version: 0.9.103

      - name: tests
        id: test
        run: cargo nextest run --all-features --no-fail-fast --target ${{ matrix.target }} --cargo-profile ci-test
        env:
          RUST_BACKTRACE: 1
          NEXTEST_STATUS_LEVEL: leak

      - name: Save cargo home cache
        if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
        continue-on-error: true
        uses: actions/cache/save@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
          key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}

      - name: Save sccache cache (fallback)
        if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
        continue-on-error: true
        uses: actions/cache/save@v5
        with:
          path: ${{ github.workspace }}/.sccache/
          key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}

      - name: sccache stats
        if: always() && env.USE_SCCACHE == 'true'
        continue-on-error: true
        run: sccache --show-stats || true

      - name: sccache summary
        if: always() && env.USE_SCCACHE == 'true'
        shell: bash
        run: |
          {
            echo "### sccache stats — ${{ matrix.target }} (tests)";
            echo;
            echo '```';
            sccache --show-stats || true;
            echo '```';
          } >> "$GITHUB_STEP_SUMMARY"

      - name: verify tests passed
        if: steps.test.outcome == 'failure'
        run: |
          echo "Tests failed. See logs for details."
          exit 1

  # --- Gatherer job that you mark as the ONLY required status -----------------
  results:
    name: CI results (required)
    needs: [changed, general, cargo_shear, lint_build, tests]
    if: always()
    runs-on: ubuntu-24.04
    steps:
      - name: Summarize
        shell: bash
        run: |
          echo "general: ${{ needs.general.result }}"
          echo "shear : ${{ needs.cargo_shear.result }}"
          echo "lint : ${{ needs.lint_build.result }}"
          echo "tests : ${{ needs.tests.result }}"

          # If nothing relevant changed (PR touching only root README, etc.),
          # declare success regardless of other jobs.
          if [[ '${{ needs.changed.outputs.codex }}' != 'true' && '${{ needs.changed.outputs.workflows }}' != 'true' && '${{ github.event_name }}' != 'push' ]]; then
            echo 'No relevant changes -> CI not required.'
            exit 0
          fi

          # Otherwise require the jobs to have succeeded
          [[ '${{ needs.general.result }}' == 'success' ]] || { echo 'general failed'; exit 1; }
          [[ '${{ needs.cargo_shear.result }}' == 'success' ]] || { echo 'cargo_shear failed'; exit 1; }
          [[ '${{ needs.lint_build.result }}' == 'success' ]] || { echo 'lint_build failed'; exit 1; }
          [[ '${{ needs.tests.result }}' == 'success' ]] || { echo 'tests failed'; exit 1; }

      - name: sccache summary note
        if: always()
        run: |
          echo "Per-job sccache stats are attached to each matrix job's Step Summary."
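Editor's note: the `changed` job above can be reproduced locally to predict whether this CI will consider a branch relevant. A sketch under the assumption that BASE and HEAD are placeholder refs you substitute (e.g. origin/main and your branch):

# Sketch: replicate the "Detect changed paths" logic outside of Actions.
codex=false
workflows=false
while IFS= read -r f; do
  [[ $f == codex-rs/* ]] && codex=true
  [[ $f == .github/* ]] && workflows=true
done < <(git diff --name-only --no-renames BASE HEAD)
echo "codex=$codex workflows=$workflows"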
51 .github/workflows/rust-release-prepare.yml vendored
@@ -1,51 +0,0 @@
name: rust-release-prepare
on:
  workflow_dispatch:
  schedule:
    - cron: "0 */4 * * *"

concurrency:
  group: ${{ github.workflow }}
  cancel-in-progress: false

permissions:
  contents: write
  pull-requests: write

jobs:
  prepare:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          ref: main
          fetch-depth: 0

      - name: Update models.json
        env:
          OPENAI_API_KEY: ${{ secrets.CODEX_OPENAI_API_KEY }}
        run: |
          set -euo pipefail

          client_version="99.99.99"
          terminal_info="github-actions"
          user_agent="codex_cli_rs/99.99.99 (Linux $(uname -r); $(uname -m)) ${terminal_info}"
          base_url="${OPENAI_BASE_URL:-https://chatgpt.com/backend-api/codex}"

          headers=(
            -H "Authorization: Bearer ${OPENAI_API_KEY}"
            -H "User-Agent: ${user_agent}"
          )

          url="${base_url%/}/models?client_version=${client_version}"
          curl --http1.1 --fail --show-error --location "${headers[@]}" "${url}" | jq '.' > codex-rs/core/models.json

      - name: Open pull request (if changed)
        uses: peter-evans/create-pull-request@v7
        with:
          commit-message: "Update models.json"
          title: "Update models.json"
          body: "Automated update of models.json."
          branch: "bot/update-models-json"
          reviewers: "pakrym-oai,aibrahim-oai"
          delete-branch: true
366 .github/workflows/rust-release.yml vendored
@@ -19,7 +19,7 @@ jobs:
  tag-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: actions/checkout@v4

      - name: Validate tag matches Cargo.toml version
        shell: bash
@@ -47,12 +47,9 @@ jobs:

  build:
    needs: tag-check
    name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
    name: ${{ matrix.runner }} - ${{ matrix.target }}
    runs-on: ${{ matrix.runner }}
    timeout-minutes: 30
    permissions:
      contents: read
      id-token: write
    defaults:
      run:
        working-directory: codex-rs
@@ -61,9 +58,9 @@ jobs:
      fail-fast: false
      matrix:
        include:
          - runner: macos-15-xlarge
          - runner: macos-14
            target: aarch64-apple-darwin
          - runner: macos-15-xlarge
          - runner: macos-14
            target: x86_64-apple-darwin
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
@@ -73,18 +70,14 @@ jobs:
            target: aarch64-unknown-linux-musl
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-gnu
          - runner: windows-latest
            target: x86_64-pc-windows-msvc
          - runner: windows-11-arm
            target: aarch64-pc-windows-msvc

    steps:
      - uses: actions/checkout@v6
      - uses: dtolnay/rust-toolchain@1.90
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@1.88
        with:
          targets: ${{ matrix.target }}

      - uses: actions/cache@v5
      - uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
@@ -92,113 +85,15 @@ jobs:
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            ${{ github.workspace }}/codex-rs/target/
          key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
          key: cargo-release-${{ matrix.runner }}-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}

      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
        name: Install musl build tools
        run: |
          sudo apt-get update
          sudo apt-get install -y musl-tools pkg-config
          sudo apt install -y musl-tools pkg-config

      - name: Cargo build
        shell: bash
        run: |
          if [[ "${{ contains(matrix.target, 'windows') }}" == 'true' ]]; then
            cargo build --target ${{ matrix.target }} --release --bin codex --bin codex-responses-api-proxy --bin codex-windows-sandbox-setup --bin codex-command-runner
          else
            cargo build --target ${{ matrix.target }} --release --bin codex --bin codex-responses-api-proxy
          fi

      - if: ${{ contains(matrix.target, 'linux') }}
        name: Cosign Linux artifacts
        uses: ./.github/actions/linux-code-sign
        with:
          target: ${{ matrix.target }}
          artifacts-dir: ${{ github.workspace }}/codex-rs/target/${{ matrix.target }}/release

      - if: ${{ contains(matrix.target, 'windows') }}
        name: Sign Windows binaries with Azure Trusted Signing
        uses: ./.github/actions/windows-code-sign
        with:
          target: ${{ matrix.target }}
          client-id: ${{ secrets.AZURE_TRUSTED_SIGNING_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TRUSTED_SIGNING_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_TRUSTED_SIGNING_SUBSCRIPTION_ID }}
          endpoint: ${{ secrets.AZURE_TRUSTED_SIGNING_ENDPOINT }}
          account-name: ${{ secrets.AZURE_TRUSTED_SIGNING_ACCOUNT_NAME }}
          certificate-profile-name: ${{ secrets.AZURE_TRUSTED_SIGNING_CERTIFICATE_PROFILE_NAME }}

      - if: ${{ runner.os == 'macOS' }}
        name: MacOS code signing (binaries)
        uses: ./.github/actions/macos-code-sign
        with:
          target: ${{ matrix.target }}
          sign-binaries: "true"
          sign-dmg: "false"
          apple-certificate: ${{ secrets.APPLE_CERTIFICATE_P12 }}
          apple-certificate-password: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}
          apple-notarization-key-p8: ${{ secrets.APPLE_NOTARIZATION_KEY_P8 }}
          apple-notarization-key-id: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
          apple-notarization-issuer-id: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}

      - if: ${{ runner.os == 'macOS' }}
        name: Build macOS dmg
        shell: bash
        run: |
          set -euo pipefail

          target="${{ matrix.target }}"
          release_dir="target/${target}/release"
          dmg_root="${RUNNER_TEMP}/codex-dmg-root"
          volname="Codex (${target})"
          dmg_path="${release_dir}/codex-${target}.dmg"

          # The previous "MacOS code signing (binaries)" step signs + notarizes the
          # built artifacts in `${release_dir}`. This step packages *those same*
          # signed binaries into a dmg.
          codex_binary_path="${release_dir}/codex"
          proxy_binary_path="${release_dir}/codex-responses-api-proxy"

          rm -rf "$dmg_root"
          mkdir -p "$dmg_root"

          if [[ ! -f "$codex_binary_path" ]]; then
            echo "Binary $codex_binary_path not found"
            exit 1
          fi
          if [[ ! -f "$proxy_binary_path" ]]; then
            echo "Binary $proxy_binary_path not found"
            exit 1
          fi

          ditto "$codex_binary_path" "${dmg_root}/codex"
          ditto "$proxy_binary_path" "${dmg_root}/codex-responses-api-proxy"

          rm -f "$dmg_path"
          hdiutil create \
            -volname "$volname" \
            -srcfolder "$dmg_root" \
            -format UDZO \
            -ov \
            "$dmg_path"

          if [[ ! -f "$dmg_path" ]]; then
            echo "dmg $dmg_path not found after build"
            exit 1
          fi

      - if: ${{ runner.os == 'macOS' }}
        name: MacOS code signing (dmg)
        uses: ./.github/actions/macos-code-sign
        with:
          target: ${{ matrix.target }}
          sign-binaries: "false"
          sign-dmg: "true"
          apple-certificate: ${{ secrets.APPLE_CERTIFICATE_P12 }}
          apple-certificate-password: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}
          apple-notarization-key-p8: ${{ secrets.APPLE_NOTARIZATION_KEY_P8 }}
          apple-notarization-key-id: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
          apple-notarization-issuer-id: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
        run: cargo build --target ${{ matrix.target }} --release --bin codex --bin codex-exec --bin codex-linux-sandbox

      - name: Stage artifacts
        shell: bash
@@ -206,29 +101,18 @@ jobs:
          dest="dist/${{ matrix.target }}"
          mkdir -p "$dest"

          if [[ "${{ matrix.runner }}" == windows* ]]; then
            cp target/${{ matrix.target }}/release/codex.exe "$dest/codex-${{ matrix.target }}.exe"
            cp target/${{ matrix.target }}/release/codex-responses-api-proxy.exe "$dest/codex-responses-api-proxy-${{ matrix.target }}.exe"
            cp target/${{ matrix.target }}/release/codex-windows-sandbox-setup.exe "$dest/codex-windows-sandbox-setup-${{ matrix.target }}.exe"
            cp target/${{ matrix.target }}/release/codex-command-runner.exe "$dest/codex-command-runner-${{ matrix.target }}.exe"
          else
            cp target/${{ matrix.target }}/release/codex "$dest/codex-${{ matrix.target }}"
            cp target/${{ matrix.target }}/release/codex-responses-api-proxy "$dest/codex-responses-api-proxy-${{ matrix.target }}"
          fi
          cp target/${{ matrix.target }}/release/codex-exec "$dest/codex-exec-${{ matrix.target }}"
          cp target/${{ matrix.target }}/release/codex "$dest/codex-${{ matrix.target }}"

          if [[ "${{ matrix.target }}" == *linux* ]]; then
            cp target/${{ matrix.target }}/release/codex.sigstore "$dest/codex-${{ matrix.target }}.sigstore"
            cp target/${{ matrix.target }}/release/codex-responses-api-proxy.sigstore "$dest/codex-responses-api-proxy-${{ matrix.target }}.sigstore"
          fi

          if [[ "${{ matrix.target }}" == *apple-darwin ]]; then
            cp target/${{ matrix.target }}/release/codex-${{ matrix.target }}.dmg "$dest/codex-${{ matrix.target }}.dmg"
          fi

      - if: ${{ matrix.runner == 'windows-11-arm' }}
        name: Install zstd
        shell: powershell
        run: choco install -y zstandard
      # After https://github.com/openai/codex/pull/1228 is merged and a new
      # release is cut with an artifacts built after that PR, the `-gnu`
      # variants can go away as we will only use the `-musl` variants.
      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'x86_64-unknown-linux-gnu' || matrix.target == 'aarch64-unknown-linux-gnu' || matrix.target == 'aarch64-unknown-linux-musl' }}
        name: Stage Linux-only artifacts
        shell: bash
        run: |
          dest="dist/${{ matrix.target }}"
          cp target/${{ matrix.target }}/release/codex-linux-sandbox "$dest/codex-linux-sandbox-${{ matrix.target }}"

      - name: Compress artifacts
        shell: bash
@@ -237,21 +121,12 @@ jobs:
          # ${{ matrix.target }}
          dest="dist/${{ matrix.target }}"

          # We want to ship the raw Windows executables in the GitHub Release
          # in addition to the compressed archives. Keep the originals for
          # Windows targets; remove them elsewhere to limit the number of
          # artifacts that end up in the GitHub Release.
          keep_originals=false
          if [[ "${{ matrix.runner }}" == windows* ]]; then
            keep_originals=true
          fi

          # For compatibility with environments that lack the `zstd` tool we
          # additionally create a `.tar.gz` for all platforms and `.zip` for
          # Windows alongside every single binary that we publish. The end result is:
          # additionally create a `.tar.gz` alongside every single binary that
          # we publish. The end result is:
          #   codex-<target>.zst (existing)
          #   codex-<target>.tar.gz (new)
          #   codex-<target>.zip (only for Windows)
          #   ...same naming for codex-exec-* and codex-linux-sandbox-*

          # 1. Produce a .tar.gz for every file in the directory *before* we
          #    run `zstd --rm`, because that flag deletes the original files.
@@ -259,35 +134,19 @@ jobs:
            base="$(basename "$f")"
            # Skip files that are already archives (shouldn't happen, but be
            # safe).
            if [[ "$base" == *.tar.gz || "$base" == *.zip || "$base" == *.dmg ]]; then
              continue
            fi

            # Don't try to compress signature bundles.
            if [[ "$base" == *.sigstore ]]; then
            if [[ "$base" == *.tar.gz ]]; then
              continue
            fi

            # Create per-binary tar.gz
            tar -C "$dest" -czf "$dest/${base}.tar.gz" "$base"

            # Create zip archive for Windows binaries
            # Must run from inside the dest dir so 7z won't
            # embed the directory path inside the zip.
            if [[ "${{ matrix.runner }}" == windows* ]]; then
              (cd "$dest" && 7z a "${base}.zip" "$base")
            fi

            # Also create .zst (existing behaviour) *and* remove the original
            # uncompressed binary to keep the directory small.
            zstd_args=(-T0 -19)
            if [[ "${keep_originals}" == false ]]; then
              zstd_args+=(--rm)
            fi
            zstd "${zstd_args[@]}" "$dest/$base"
            zstd -T0 -19 --rm "$dest/$base"
          done

      - uses: actions/upload-artifact@v6
      - uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.target }}
          # Upload the per-binary .zst files as well as the new .tar.gz
@@ -295,49 +154,19 @@ jobs:
          path: |
            codex-rs/dist/${{ matrix.target }}/*

  shell-tool-mcp:
    name: shell-tool-mcp
    needs: tag-check
    uses: ./.github/workflows/shell-tool-mcp.yml
    with:
      release-tag: ${{ github.ref_name }}
      publish: true
    secrets: inherit

  release:
    needs:
      - build
      - shell-tool-mcp
    needs: build
    name: release
    runs-on: ubuntu-latest
    permissions:
      contents: write
      actions: read
    outputs:
      version: ${{ steps.release_name.outputs.name }}
      tag: ${{ github.ref_name }}
      should_publish_npm: ${{ steps.npm_publish_settings.outputs.should_publish }}
      npm_tag: ${{ steps.npm_publish_settings.outputs.npm_tag }}

    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - uses: actions/download-artifact@v7
      - uses: actions/download-artifact@v4
        with:
          path: dist

      - name: List
        run: ls -R dist/

      # This is a temporary fix: we should modify shell-tool-mcp.yml so these
      # files do not end up in dist/ in the first place.
      - name: Delete entries from dist/ that should not go in the release
        run: |
          rm -rf dist/shell-tool-mcp*

          ls -R dist/

      - name: Define release name
        id: release_name
        run: |
@@ -346,59 +175,15 @@ jobs:
          version="${GITHUB_REF_NAME#rust-v}"
          echo "name=${version}" >> $GITHUB_OUTPUT

      - name: Determine npm publish settings
        id: npm_publish_settings
        env:
          VERSION: ${{ steps.release_name.outputs.name }}
        run: |
          set -euo pipefail
          version="${VERSION}"

          if [[ "${version}" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            echo "should_publish=true" >> "$GITHUB_OUTPUT"
            echo "npm_tag=" >> "$GITHUB_OUTPUT"
          elif [[ "${version}" =~ ^[0-9]+\.[0-9]+\.[0-9]+-alpha\.[0-9]+$ ]]; then
            echo "should_publish=true" >> "$GITHUB_OUTPUT"
            echo "npm_tag=alpha" >> "$GITHUB_OUTPUT"
          else
            echo "should_publish=false" >> "$GITHUB_OUTPUT"
            echo "npm_tag=" >> "$GITHUB_OUTPUT"
          fi

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          run_install: false

      - name: Setup Node.js for npm packaging
        uses: actions/setup-node@v6
        with:
          node-version: 22

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      # stage_npm_packages.py requires DotSlash when staging releases.
      - uses: facebook/install-dotslash@v2
      - name: Stage npm packages
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          ./scripts/stage_npm_packages.py \
            --release-version "${{ steps.release_name.outputs.name }}" \
            --package codex \
            --package codex-responses-api-proxy \
            --package codex-sdk

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v2
        with:
          name: ${{ steps.release_name.outputs.name }}
          tag_name: ${{ github.ref_name }}
          files: dist/**
          # Mark as prerelease only when the version has a suffix after x.y.z
          # (e.g. -alpha, -beta). Otherwise publish a normal release.
          prerelease: ${{ contains(steps.release_name.outputs.name, '-') }}
          # For now, tag releases as "prerelease" because we are not claiming
          # the Rust CLI is stable yet.
          prerelease: true

      - uses: facebook/dotslash-publish-release@v2
        env:
@@ -406,90 +191,3 @@ jobs:
        with:
          tag: ${{ github.ref_name }}
          config: .github/dotslash-config.json

  # Publish to npm using OIDC authentication.
  # July 31, 2025: https://github.blog/changelog/2025-07-31-npm-trusted-publishing-with-oidc-is-generally-available/
  # npm docs: https://docs.npmjs.com/trusted-publishers
  publish-npm:
    # Publish to npm for stable releases and alpha pre-releases with numeric suffixes.
    if: ${{ needs.release.outputs.should_publish_npm == 'true' }}
    name: publish-npm
    needs: release
    runs-on: ubuntu-latest
    permissions:
      id-token: write # Required for OIDC
      contents: read

    steps:
      - name: Setup Node.js
        uses: actions/setup-node@v6
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
          scope: "@openai"

      # Trusted publishing requires npm CLI version 11.5.1 or later.
      - name: Update npm
        run: npm install -g npm@latest

      - name: Download npm tarballs from release
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -euo pipefail
          version="${{ needs.release.outputs.version }}"
          tag="${{ needs.release.outputs.tag }}"
          mkdir -p dist/npm
          gh release download "$tag" \
            --repo "${GITHUB_REPOSITORY}" \
            --pattern "codex-npm-${version}.tgz" \
            --dir dist/npm
          gh release download "$tag" \
            --repo "${GITHUB_REPOSITORY}" \
            --pattern "codex-responses-api-proxy-npm-${version}.tgz" \
            --dir dist/npm
          gh release download "$tag" \
            --repo "${GITHUB_REPOSITORY}" \
            --pattern "codex-sdk-npm-${version}.tgz" \
            --dir dist/npm

      # No NODE_AUTH_TOKEN needed because we use OIDC.
      - name: Publish to npm
        env:
          VERSION: ${{ needs.release.outputs.version }}
          NPM_TAG: ${{ needs.release.outputs.npm_tag }}
        run: |
          set -euo pipefail
          tag_args=()
          if [[ -n "${NPM_TAG}" ]]; then
            tag_args+=(--tag "${NPM_TAG}")
          fi

          tarballs=(
            "codex-npm-${VERSION}.tgz"
            "codex-responses-api-proxy-npm-${VERSION}.tgz"
            "codex-sdk-npm-${VERSION}.tgz"
          )

          for tarball in "${tarballs[@]}"; do
            npm publish "${GITHUB_WORKSPACE}/dist/npm/${tarball}" "${tag_args[@]}"
          done

  update-branch:
    name: Update latest-alpha-cli branch
    permissions:
      contents: write
    needs: release
    runs-on: ubuntu-latest

    steps:
      - name: Update latest-alpha-cli branch
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -euo pipefail
          gh api \
            repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
            -X PATCH \
            -f sha="${GITHUB_SHA}" \
            -F force=true
43 .github/workflows/sdk.yml vendored

@@ -1,43 +0,0 @@
name: sdk

on:
  push:
    branches: [main]
  pull_request: {}

jobs:
  sdks:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          run_install: false

      - name: Setup Node.js
        uses: actions/setup-node@v6
        with:
          node-version: 22
          cache: pnpm

      - uses: dtolnay/rust-toolchain@1.90

      - name: build codex
        run: cargo build --bin codex
        working-directory: codex-rs

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Build SDK packages
        run: pnpm -r --filter ./sdk/typescript run build

      - name: Lint SDK packages
        run: pnpm -r --filter ./sdk/typescript run lint

      - name: Test SDK packages
        run: pnpm -r --filter ./sdk/typescript run test
48 .github/workflows/shell-tool-mcp-ci.yml vendored

@@ -1,48 +0,0 @@
name: shell-tool-mcp CI

on:
  push:
    paths:
      - "shell-tool-mcp/**"
      - ".github/workflows/shell-tool-mcp-ci.yml"
      - "pnpm-lock.yaml"
      - "pnpm-workspace.yaml"
  pull_request:
    paths:
      - "shell-tool-mcp/**"
      - ".github/workflows/shell-tool-mcp-ci.yml"
      - "pnpm-lock.yaml"
      - "pnpm-workspace.yaml"

env:
  NODE_VERSION: 22

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          run_install: false

      - name: Setup Node.js
        uses: actions/setup-node@v6
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "pnpm"

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Format check
        run: pnpm --filter @openai/codex-shell-tool-mcp run format

      - name: Run tests
        run: pnpm --filter @openai/codex-shell-tool-mcp test

      - name: Build
        run: pnpm --filter @openai/codex-shell-tool-mcp run build
405 .github/workflows/shell-tool-mcp.yml vendored

@@ -1,405 +0,0 @@
name: shell-tool-mcp

on:
  workflow_call:
    inputs:
      release-version:
        description: Version to publish (x.y.z or x.y.z-alpha.N). Defaults to GITHUB_REF_NAME when it starts with rust-v.
        required: false
        type: string
      release-tag:
        description: Tag name to use when downloading release artifacts (defaults to rust-v<version>).
        required: false
        type: string
      publish:
        description: Whether to publish to npm when the version is releasable.
        required: false
        default: true
        type: boolean

env:
  NODE_VERSION: 22

jobs:
  metadata:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.compute.outputs.version }}
      release_tag: ${{ steps.compute.outputs.release_tag }}
      should_publish: ${{ steps.compute.outputs.should_publish }}
      npm_tag: ${{ steps.compute.outputs.npm_tag }}
    steps:
      - name: Compute version and tags
        id: compute
        run: |
          set -euo pipefail

          version="${{ inputs.release-version }}"
          release_tag="${{ inputs.release-tag }}"

          if [[ -z "$version" ]]; then
            if [[ -n "$release_tag" && "$release_tag" =~ ^rust-v.+ ]]; then
              version="${release_tag#rust-v}"
            elif [[ "${GITHUB_REF_NAME:-}" =~ ^rust-v.+ ]]; then
              version="${GITHUB_REF_NAME#rust-v}"
              release_tag="${GITHUB_REF_NAME}"
            else
              echo "release-version is required when GITHUB_REF_NAME is not a rust-v tag."
              exit 1
            fi
          fi

          if [[ -z "$release_tag" ]]; then
            release_tag="rust-v${version}"
          fi

          npm_tag=""
          should_publish="false"
          if [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            should_publish="true"
          elif [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+-alpha\.[0-9]+$ ]]; then
            should_publish="true"
            npm_tag="alpha"
          fi

          echo "version=${version}" >> "$GITHUB_OUTPUT"
          echo "release_tag=${release_tag}" >> "$GITHUB_OUTPUT"
          echo "npm_tag=${npm_tag}" >> "$GITHUB_OUTPUT"
          echo "should_publish=${should_publish}" >> "$GITHUB_OUTPUT"

  rust-binaries:
    name: Build Rust - ${{ matrix.target }}
    needs: metadata
    runs-on: ${{ matrix.runner }}
    timeout-minutes: 30
    defaults:
      run:
        working-directory: codex-rs
    strategy:
      fail-fast: false
      matrix:
        include:
          - runner: macos-15-xlarge
            target: aarch64-apple-darwin
          - runner: macos-15-xlarge
            target: x86_64-apple-darwin
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            install_musl: true
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            install_musl: true
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - uses: dtolnay/rust-toolchain@1.90
        with:
          targets: ${{ matrix.target }}

      - if: ${{ matrix.install_musl }}
        name: Install musl build dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y musl-tools pkg-config

      - name: Build exec server binaries
        run: cargo build --release --target ${{ matrix.target }} --bin codex-exec-mcp-server --bin codex-execve-wrapper

      - name: Stage exec server binaries
        run: |
          dest="${GITHUB_WORKSPACE}/artifacts/vendor/${{ matrix.target }}"
          mkdir -p "$dest"
          cp "target/${{ matrix.target }}/release/codex-exec-mcp-server" "$dest/"
          cp "target/${{ matrix.target }}/release/codex-execve-wrapper" "$dest/"

      - uses: actions/upload-artifact@v6
        with:
          name: shell-tool-mcp-rust-${{ matrix.target }}
          path: artifacts/**
          if-no-files-found: error

  bash-linux:
    name: Build Bash (Linux) - ${{ matrix.variant }} - ${{ matrix.target }}
    needs: metadata
    runs-on: ${{ matrix.runner }}
    timeout-minutes: 30
    container:
      image: ${{ matrix.image }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            variant: ubuntu-24.04
            image: ubuntu:24.04
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            variant: ubuntu-22.04
            image: ubuntu:22.04
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            variant: debian-12
            image: debian:12
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            variant: debian-11
            image: debian:11
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            variant: centos-9
            image: quay.io/centos/centos:stream9
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            variant: ubuntu-24.04
            image: arm64v8/ubuntu:24.04
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            variant: ubuntu-22.04
            image: arm64v8/ubuntu:22.04
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            variant: ubuntu-20.04
            image: arm64v8/ubuntu:20.04
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            variant: debian-12
            image: arm64v8/debian:12
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            variant: debian-11
            image: arm64v8/debian:11
          - runner: ubuntu-24.04-arm
            target: aarch64-unknown-linux-musl
            variant: centos-9
            image: quay.io/centos/centos:stream9
    steps:
      - name: Install build prerequisites
        shell: bash
        run: |
          set -euo pipefail
          if command -v apt-get >/dev/null 2>&1; then
            apt-get update
            DEBIAN_FRONTEND=noninteractive apt-get install -y git build-essential bison autoconf gettext
          elif command -v dnf >/dev/null 2>&1; then
            dnf install -y git gcc gcc-c++ make bison autoconf gettext
          elif command -v yum >/dev/null 2>&1; then
            yum install -y git gcc gcc-c++ make bison autoconf gettext
          else
            echo "Unsupported package manager in container"
            exit 1
          fi

      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Build patched Bash
        shell: bash
        run: |
          set -euo pipefail
          git clone --depth 1 https://github.com/bminor/bash /tmp/bash
          cd /tmp/bash
          git fetch --depth 1 origin a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
          git checkout a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
          git apply "${GITHUB_WORKSPACE}/shell-tool-mcp/patches/bash-exec-wrapper.patch"
          ./configure --without-bash-malloc
          cores="$(command -v nproc >/dev/null 2>&1 && nproc || getconf _NPROCESSORS_ONLN)"
          make -j"${cores}"

          dest="${GITHUB_WORKSPACE}/artifacts/vendor/${{ matrix.target }}/bash/${{ matrix.variant }}"
          mkdir -p "$dest"
          cp bash "$dest/bash"

      - uses: actions/upload-artifact@v6
        with:
          name: shell-tool-mcp-bash-${{ matrix.target }}-${{ matrix.variant }}
          path: artifacts/**
          if-no-files-found: error

  bash-darwin:
    name: Build Bash (macOS) - ${{ matrix.variant }} - ${{ matrix.target }}
    needs: metadata
    runs-on: ${{ matrix.runner }}
    timeout-minutes: 30
    strategy:
      fail-fast: false
      matrix:
        include:
          - runner: macos-15-xlarge
            target: aarch64-apple-darwin
            variant: macos-15
          - runner: macos-14
            target: aarch64-apple-darwin
            variant: macos-14
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Build patched Bash
        shell: bash
        run: |
          set -euo pipefail
          git clone --depth 1 https://github.com/bminor/bash /tmp/bash
          cd /tmp/bash
          git fetch --depth 1 origin a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
          git checkout a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
          git apply "${GITHUB_WORKSPACE}/shell-tool-mcp/patches/bash-exec-wrapper.patch"
          ./configure --without-bash-malloc
          cores="$(getconf _NPROCESSORS_ONLN)"
          make -j"${cores}"

          dest="${GITHUB_WORKSPACE}/artifacts/vendor/${{ matrix.target }}/bash/${{ matrix.variant }}"
          mkdir -p "$dest"
          cp bash "$dest/bash"

      - uses: actions/upload-artifact@v6
        with:
          name: shell-tool-mcp-bash-${{ matrix.target }}-${{ matrix.variant }}
          path: artifacts/**
          if-no-files-found: error

  package:
    name: Package npm module
    needs:
      - metadata
      - rust-binaries
      - bash-linux
      - bash-darwin
    runs-on: ubuntu-latest
    env:
      PACKAGE_VERSION: ${{ needs.metadata.outputs.version }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 10.8.1
          run_install: false

      - name: Setup Node.js
        uses: actions/setup-node@v6
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Install JavaScript dependencies
        run: pnpm install --frozen-lockfile

      - name: Build (shell-tool-mcp)
        run: pnpm --filter @openai/codex-shell-tool-mcp run build

      - name: Download build artifacts
        uses: actions/download-artifact@v7
        with:
          path: artifacts

      - name: Assemble staging directory
        id: staging
        shell: bash
        run: |
          set -euo pipefail
          staging="${STAGING_DIR}"
          mkdir -p "$staging" "$staging/vendor"
          cp shell-tool-mcp/README.md "$staging/"
          cp shell-tool-mcp/package.json "$staging/"
          cp -R shell-tool-mcp/bin "$staging/"

          found_vendor="false"
          shopt -s nullglob
          for vendor_dir in artifacts/*/vendor; do
            rsync -av "$vendor_dir/" "$staging/vendor/"
            found_vendor="true"
          done
          if [[ "$found_vendor" == "false" ]]; then
            echo "No vendor payloads were downloaded."
            exit 1
          fi

          node - <<'NODE'
          import fs from "node:fs";
          import path from "node:path";

          const stagingDir = process.env.STAGING_DIR;
          const version = process.env.PACKAGE_VERSION;
          const pkgPath = path.join(stagingDir, "package.json");
          const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
          pkg.version = version;
          fs.writeFileSync(pkgPath, JSON.stringify(pkg, null, 2) + "\n");
          NODE

          echo "dir=$staging" >> "$GITHUB_OUTPUT"
        env:
          STAGING_DIR: ${{ runner.temp }}/shell-tool-mcp

      - name: Ensure binaries are executable
        run: |
          set -euo pipefail
          staging="${{ steps.staging.outputs.dir }}"
          chmod +x \
            "$staging"/vendor/*/codex-exec-mcp-server \
            "$staging"/vendor/*/codex-execve-wrapper \
            "$staging"/vendor/*/bash/*/bash

      - name: Create npm tarball
        shell: bash
        run: |
          set -euo pipefail
          mkdir -p dist/npm
          staging="${{ steps.staging.outputs.dir }}"
          pack_info=$(cd "$staging" && npm pack --ignore-scripts --json --pack-destination "${GITHUB_WORKSPACE}/dist/npm")
          filename=$(PACK_INFO="$pack_info" node -e 'const data = JSON.parse(process.env.PACK_INFO); console.log(data[0].filename);')
          mv "dist/npm/${filename}" "dist/npm/codex-shell-tool-mcp-npm-${PACKAGE_VERSION}.tgz"

      - uses: actions/upload-artifact@v6
        with:
          name: codex-shell-tool-mcp-npm
          path: dist/npm/codex-shell-tool-mcp-npm-${{ env.PACKAGE_VERSION }}.tgz
          if-no-files-found: error

  publish:
    name: Publish npm package
    needs:
      - metadata
      - package
    if: ${{ inputs.publish && needs.metadata.outputs.should_publish == 'true' }}
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 10.8.1
          run_install: false

      - name: Setup Node.js
        uses: actions/setup-node@v6
        with:
          node-version: ${{ env.NODE_VERSION }}
          registry-url: https://registry.npmjs.org
          scope: "@openai"

      - name: Update npm
        run: npm install -g npm@latest

      - name: Download npm tarball
        uses: actions/download-artifact@v7
        with:
          name: codex-shell-tool-mcp-npm
          path: dist/npm

      - name: Publish to npm
        env:
          NPM_TAG: ${{ needs.metadata.outputs.npm_tag }}
          VERSION: ${{ needs.metadata.outputs.version }}
        shell: bash
        run: |
          set -euo pipefail
          tag_args=()
          if [[ -n "${NPM_TAG}" ]]; then
            tag_args+=(--tag "${NPM_TAG}")
          fi
          npm publish "dist/npm/codex-shell-tool-mcp-npm-${VERSION}.tgz" "${tag_args[@]}"
9 .gitignore vendored

@@ -30,7 +30,6 @@ result
# cli tools
CLAUDE.md
.claude/
AGENTS.override.md

# caches
.cache/
@@ -64,9 +63,6 @@ apply_patch/
# coverage
coverage/

# personal files
personal/

# os
.DS_Store
Thumbs.db
@@ -85,8 +81,3 @@ CHANGELOG.ignore.md
# nix related
.direnv
.envrc

# Python bytecode files
__pycache__/
*.pyc
1 .husky/pre-commit Normal file

@@ -0,0 +1 @@
pnpm lint-staged
@@ -1,7 +1,3 @@
/codex-cli/dist
/codex-cli/node_modules
pnpm-lock.yaml

prompt.md
*_prompt.md
*_instructions.md
11 .vscode/extensions.json vendored

@@ -1,11 +0,0 @@
{
  "recommendations": [
    "rust-lang.rust-analyzer",
    "tamasfe.even-better-toml",
    "vadimcn.vscode-lldb",

    // Useful if touching files in .github/workflows, though most
    // contributors will not be doing that?
    // "github.vscode-github-actions",
  ]
}
36 .vscode/launch.json vendored

@@ -1,22 +1,18 @@
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "lldb",
      "request": "launch",
      "name": "Cargo launch",
      "cargo": {
        "cwd": "${workspaceFolder}/codex-rs",
        "args": ["build", "--bin=codex-tui"]
      },
      "args": []
    },
    {
      "type": "lldb",
      "request": "attach",
      "name": "Attach to running codex CLI",
      "pid": "${command:pickProcess}",
      "sourceLanguages": ["rust"]
    }
  ]
  "version": "0.2.0",
  "configurations": [
    {
      "type": "lldb",
      "request": "launch",
      "name": "Cargo launch",
      "cargo": {
        "cwd": "${workspaceFolder}/codex-rs",
        "args": [
          "build",
          "--bin=codex-tui"
        ]
      },
      "args": []
    }
  ]
}
11 .vscode/settings.json vendored

@@ -3,17 +3,8 @@
  "rust-analyzer.check.command": "clippy",
  "rust-analyzer.check.extraArgs": ["--all-features", "--tests"],
  "rust-analyzer.rustfmt.extraArgs": ["--config", "imports_granularity=Item"],
  "rust-analyzer.cargo.targetDir": "${workspaceFolder}/codex-rs/target/rust-analyzer",
  "[rust]": {
    "editor.defaultFormatter": "rust-lang.rust-analyzer",
    "editor.formatOnSave": true,
  },
  "[toml]": {
    "editor.defaultFormatter": "tamasfe.even-better-toml",
    "editor.formatOnSave": true,
  },
  // Array order for options in ~/.codex/config.toml such as `notify` and the
  // `args` for an MCP server is significant, so we disable reordering.
  "evenBetterToml.formatter.reorderArrays": false,
  "evenBetterToml.formatter.reorderKeys": true,
}
}
102 AGENTS.md

@@ -2,104 +2,8 @@

In the codex-rs folder where the rust code lives:

- Crate names are prefixed with `codex-`. For example, the `core` folder's crate is named `codex-core`.
- When using format! and you can inline variables into {}, always do that.
- Install any commands the repo relies on (for example `just`, `rg`, or `cargo-insta`) if they aren't already available before running instructions here.
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` or `CODEX_SANDBOX_ENV_VAR`.
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
- Similarly, when you spawn a process using Seatbelt (`/usr/bin/sandbox-exec`), `CODEX_SANDBOX=seatbelt` will be set on the child process. Integration tests that want to run Seatbelt themselves cannot be run under Seatbelt, so checks for `CODEX_SANDBOX=seatbelt` are also often used to early exit out of tests, as appropriate.
- Always collapse if statements per https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_if
- Always inline format! args when possible per https://rust-lang.github.io/rust-clippy/master/index.html#uninlined_format_args
- Use method references over closures when possible per https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure_for_method_calls (see the sketch after this list)
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR`. You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
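A tiny sketch of those three Clippy conventions together (the function and names are invented for illustration):

```rust
fn describe(items: &[String], verbose: bool) {
    // Inline format! args into the braces (uninlined_format_args).
    let count = items.len();
    let label = format!("{count} items");

    // One collapsed condition instead of nested ifs (collapsible_if).
    if verbose && count > 0 {
        println!("{label}");
    }

    // Method reference instead of a closure (redundant_closure_for_method_calls).
    let lengths: Vec<usize> = items.iter().map(String::len).collect();
    let _ = lengths;
}
```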
Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
Before creating a pull request with changes to `codex-rs`, run `just fmt` (in `codex-rs` directory) to format the code and `just fix` (in `codex-rs` directory) to fix any linter issues in the code, and ensure the test suite passes by running `cargo test --all-features` in the `codex-rs` directory.

1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.

When running interactively, ask the user before running `just fix` to finalize. `just fmt` does not require approval. Project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.

## TUI style conventions

See `codex-rs/tui/styles.md`.

## TUI code conventions

- Use concise styling helpers from ratatui's Stylize trait.
  - Basic spans: use "text".into()
  - Styled spans: use "text".red(), "text".green(), "text".magenta(), "text".dim(), etc.
  - Prefer these over constructing styles with `Span::styled` and `Style` directly.
- Example: patch summary file lines
  - Desired: vec![" └ ".into(), "M".red(), " ".dim(), "tui/src/app.rs".dim()]

### TUI Styling (ratatui)

- Prefer Stylize helpers: use "text".dim(), .bold(), .cyan(), .italic(), .underlined() instead of manual Style where possible.
- Prefer simple conversions: use "text".into() for spans and vec![…].into() for lines; when inference is ambiguous (e.g., Paragraph::new/Cell::from), use Line::from(spans) or Span::from(text).
- Computed styles: if the Style is computed at runtime, using `Span::styled` is OK (`Span::from(text).set_style(style)` is also acceptable).
- Avoid hardcoded white: do not use `.white()`; prefer the default foreground (no color).
- Chaining: combine helpers by chaining for readability (e.g., url.cyan().underlined()).
- Single items: prefer "text".into(); use Line::from(text) or Span::from(text) only when the target type isn't obvious from context, or when using .into() would require extra type annotations.
- Building lines: use vec![…].into() to construct a Line when the target type is obvious and no extra type annotations are needed; otherwise use Line::from(vec![…]).
- Avoid churn: don't refactor between equivalent forms (Span::styled ↔ set_style, Line::from ↔ .into()) without a clear readability or functional gain; follow file-local conventions and do not introduce type annotations solely to satisfy .into().
- Compactness: prefer the form that stays on one line after rustfmt; if only one of Line::from(vec![…]) or vec![…].into() avoids wrapping, choose that. If both wrap, pick the one with fewer wrapped lines.
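Putting those conventions together, a minimal sketch of the desired patch-summary line (assuming a recent ratatui with `Stylize` in scope; the status letter and path are illustrative):

```rust
use ratatui::style::Stylize;
use ratatui::text::Line;

fn patch_summary_line() -> Line<'static> {
    // Plain spans via .into(), styled spans via Stylize helpers,
    // and vec![…].into() to build the Line.
    vec![" └ ".into(), "M".red(), " ".dim(), "tui/src/app.rs".dim()].into()
}
```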
### Text wrapping

- Always use textwrap::wrap to wrap plain strings.
- If you have a ratatui Line and you want to wrap it, use the helpers in tui/src/wrapping.rs, e.g. word_wrap_lines / word_wrap_line.
- If you need to indent wrapped lines, use the initial_indent / subsequent_indent options from RtOptions if you can, rather than writing custom logic.
- If you have a list of lines and you need to prefix them all with some prefix (optionally different on the first vs subsequent lines), use the `prefix_lines` helper from line_utils.
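For the plain-string case, a rough sketch using the `textwrap` crate directly (widths and text are arbitrary; the Line-wrapping helpers and RtOptions live in the repo's own tui/src/wrapping.rs):

```rust
use textwrap::Options;

fn main() {
    // Wrap a plain string to 24 columns.
    for line in textwrap::wrap("a plain string that is too long for one row", 24) {
        println!("{line}");
    }

    // initial_indent / subsequent_indent handle prefixed continuation
    // lines without hand-rolled indentation logic.
    let opts = Options::new(24).initial_indent(" └ ").subsequent_indent("   ");
    for line in textwrap::wrap("a prefixed entry that wraps onto a continuation row", opts) {
        println!("{line}");
    }
}
```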
## Tests

### Snapshot tests

This repo uses snapshot tests (via `insta`), especially in `codex-rs/tui`, to validate rendered output. When UI or text output changes intentionally, update the snapshots as follows:

- Run tests to generate any updated snapshots:
  - `cargo test -p codex-tui`
- Check what's pending:
  - `cargo insta pending-snapshots -p codex-tui`
- Review changes by reading the generated `*.snap.new` files directly in the repo, or preview a specific file:
  - `cargo insta show -p codex-tui path/to/file.snap.new`
- Only if you intend to accept all new snapshots in this crate, run:
  - `cargo insta accept -p codex-tui`

If you don't have the tool:

- `cargo install cargo-insta`
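For reference, a snapshot test in this style is an ordinary test that calls an `insta` assertion macro (the rendered value here is a made-up stand-in for real TUI output):

```rust
#[test]
fn renders_summary() {
    let rendered = format!("{} files changed", 3);
    // The first run writes a pending *.snap.new file;
    // `cargo insta accept` promotes it to the committed snapshot.
    insta::assert_snapshot!(rendered);
}
```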
### Test assertions

- Tests should use pretty_assertions::assert_eq for clearer diffs. Import this at the top of the test module if it isn't already.
- Prefer deep equals comparisons whenever possible. Perform `assert_eq!()` on entire objects, rather than individual fields.
- Avoid mutating process environment in tests; prefer passing environment-derived flags or dependencies from above.
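A sketch of what that looks like in a test module (the struct and values are invented for illustration):

```rust
#[cfg(test)]
mod tests {
    use pretty_assertions::assert_eq;

    #[derive(Debug, PartialEq)]
    struct SessionSummary {
        model: String,
        turns: u32,
    }

    #[test]
    fn compares_whole_objects() {
        let actual = SessionSummary { model: "o3".into(), turns: 2 };
        let expected = SessionSummary { model: "o3".into(), turns: 2 };
        // Deep-equal the entire struct; pretty_assertions prints a readable diff on failure.
        assert_eq!(actual, expected);
    }
}
```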
### Integration tests (core)

- Prefer the utilities in `core_test_support::responses` when writing end-to-end Codex tests.
  - All `mount_sse*` helpers return a `ResponseMock`; hold onto it so you can assert against outbound `/responses` POST bodies.
  - Use `ResponseMock::single_request()` when a test should only issue one POST, or `ResponseMock::requests()` to inspect every captured `ResponsesRequest`.
  - `ResponsesRequest` exposes helpers (`body_json`, `input`, `function_call_output`, `custom_tool_call_output`, `call_output`, `header`, `path`, `query_param`) so assertions can target structured payloads instead of manual JSON digging.
  - Build SSE payloads with the provided `ev_*` constructors and the `sse(...)` helper.
  - Prefer `wait_for_event` over `wait_for_event_with_timeout`.
  - Prefer `mount_sse_once` over `mount_sse_once_match` or `mount_sse_sequence`.

- Typical pattern:

```rust
let mock = responses::mount_sse_once(&server, responses::sse(vec![
    responses::ev_response_created("resp-1"),
    responses::ev_function_call(call_id, "shell", &serde_json::to_string(&args)?),
    responses::ev_completed("resp-1"),
])).await;

codex.submit(Op::UserTurn { ... }).await?;

// Assert request body if needed.
let request = mock.single_request();
// assert using request.function_call_output(call_id) or request.json_body() or other helpers.
```

When making individual changes prefer running tests on individual files or projects first.
212 CHANGELOG.md

@@ -1 +1,211 @@
The changelog can be found on the [releases page](https://github.com/openai/codex/releases).
# Changelog

You can install any of these versions: `npm install -g codex@version`

## `0.1.2505172129`

### 🪲 Bug Fixes

- Add node version check (#1007)
- Persist token after refresh (#1006)

## `0.1.2505171619`

- `codex --login` + `codex --free` (#998)

## `0.1.2505161800`

- Sign in with chatgpt credits (#974)
- Add support for OpenAI tool type, local_shell (#961)

## `0.1.2505161243`

- Sign in with chatgpt (#963)
- Session history viewer (#912)
- Apply patch issue when using different cwd (#942)
- Diff command for filenames with special characters (#954)

## `0.1.2505160811`

- `codex-mini-latest` (#951)

## `0.1.2505140839`

### 🪲 Bug Fixes

- Gpt-4.1 apply_patch handling (#930)
- Add support for fileOpener in config.json (#911)
- Patch in #366 and #367 for marked-terminal (#916)
- Remember to set lastIndex = 0 on shared RegExp (#918)
- Always load version from package.json at runtime (#909)
- Tweak the label for citations for better rendering (#919)
- Tighten up some logic around session timestamps and ids (#922)
- Change EventMsg enum so every variant takes a single struct (#925)
- Reasoning default to medium, show workdir when supplied (#931)
- Test_dev_null_write() was not using echo as intended (#923)

## `0.1.2504301751`

### 🚀 Features

- User config api key (#569)
- `@mention` files in codex (#701)
- Add `--reasoning` CLI flag (#314)
- Lower default retry wait time and increase number of tries (#720)
- Add common package registries domains to allowed-domains list (#414)

### 🪲 Bug Fixes

- Insufficient quota message (#758)
- Input keyboard shortcut opt+delete (#685)
- `/diff` should include untracked files (#686)
- Only allow running without sandbox if explicitly marked in safe container (#699)
- Tighten up check for /usr/bin/sandbox-exec (#710)
- Check if sandbox-exec is available (#696)
- Duplicate messages in quiet mode (#680)

## `0.1.2504251709`

### 🚀 Features

- Add openai model info configuration (#551)
- Added provider to run quiet mode function (#571)
- Create parent directories when creating new files (#552)
- Print bug report URL in terminal instead of opening browser (#510) (#528)
- Add support for custom provider configuration in the user config (#537)
- Add support for OpenAI-Organization and OpenAI-Project headers (#626)
- Add specific instructions for creating API keys in error msg (#581)
- Enhance toCodePoints to prevent potential unicode 14 errors (#615)
- More native keyboard navigation in multiline editor (#655)
- Display error on selection of invalid model (#594)

### 🪲 Bug Fixes

- Model selection (#643)
- Nits in apply patch (#640)
- Input keyboard shortcuts (#676)
- `apply_patch` unicode characters (#625)
- Don't clear turn input before retries (#611)
- More loosely match context for apply_patch (#610)
- Update bug report template - there is no --revision flag (#614)
- Remove outdated copy of text input and external editor feature (#670)
- Remove unreachable "disableResponseStorage" logic flow introduced in #543 (#573)
- Non-openai mode - fix for gemini content: null, fix 429 to throw before stream (#563)
- Only allow going up in history when not already in history if input is empty (#654)
- Do not grant "node" user sudo access when using run_in_container.sh (#627)
- Update scripts/build_container.sh to use pnpm instead of npm (#631)
- Update lint-staged config to use pnpm --filter (#582)
- Non-openai mode - don't default temp and top_p (#572)
- Fix error catching when checking for updates (#597)
- Close stdin when running an exec tool call (#636)

## `0.1.2504221401`

### 🚀 Features

- Show actionable errors when api keys are missing (#523)
- Add CLI `--version` flag (#492)

### 🪲 Bug Fixes

- Agent loop for ZDR (`disableResponseStorage`) (#543)
- Fix relative `workdir` check for `apply_patch` (#556)
- Minimal mid-stream #429 retry loop using existing back-off (#506)
- Inconsistent usage of base URL and API key (#507)
- Remove requirement for api key for ollama (#546)
- Support `[provider]_BASE_URL` (#542)

## `0.1.2504220136`

### 🚀 Features

- Add support for ZDR orgs (#481)
- Include fractional portion of chunk that exceeds stdout/stderr limit (#497)

## `0.1.2504211509`

### 🚀 Features

- Support multiple providers via Responses-Completion transformation (#247)
- Add user-defined safe commands configuration and approval logic #380 (#386)
- Allow switching approval modes when prompted to approve an edit/command (#400)
- Add support for `/diff` command autocomplete in TerminalChatInput (#431)
- Auto-open model selector if user selects deprecated model (#427)
- Read approvalMode from config file (#298)
- `/diff` command to view git diff (#426)
- Tab completions for file paths (#279)
- Add /command autocomplete (#317)
- Allow multi-line input (#438)

### 🪲 Bug Fixes

- `full-auto` support in quiet mode (#374)
- Enable shell option for child process execution (#391)
- Configure husky and lint-staged for pnpm monorepo (#384)
- Command pipe execution by improving shell detection (#437)
- Name of the file not matching the name of the component (#354)
- Allow proper exit from new Switch approval mode dialog (#453)
- Ensure /clear resets context and exclude system messages from approximateTokenUsed count (#443)
- `/clear` now clears terminal screen and resets context left indicator (#425)
- Correct fish completion function name in CLI script (#485)
- Auto-open model-selector when model is not found (#448)
- Remove unnecessary isLoggingEnabled() checks (#420)
- Improve test reliability for `raw-exec` (#434)
- Unintended tear down of agent loop (#483)
- Remove extraneous type casts (#462)

## `0.1.2504181820`

### 🚀 Features

- Add `/bug` report command (#312)
- Notify when a newer version is available (#333)

### 🪲 Bug Fixes

- Update context left display logic in TerminalChatInput component (#307)
- Improper spawn of sh on Windows Powershell (#318)
- `/bug` report command, thinking indicator (#381)
- Include pnpm lock file (#377)

## `0.1.2504172351`

### 🚀 Features

- Add Nix flake for reproducible development environments (#225)

### 🪲 Bug Fixes

- Handle invalid commands (#304)
- Raw-exec-process-group.test improve reliability and error handling (#280)
- Canonicalize the writeable paths used in seatbelt policy (#275)

## `0.1.2504172304`

### 🚀 Features

- Add shell completion subcommand (#138)
- Add command history persistence (#152)
- Shell command explanation option (#173)
- Support bun fallback runtime for codex CLI (#282)
- Add notifications for MacOS using Applescript (#160)
- Enhance image path detection in input processing (#189)
- `--config`/`-c` flag to open global instructions in nvim (#158)
- Update position of cursor when navigating input history with arrow keys to the end of the text (#255)

### 🪲 Bug Fixes

- Correct word deletion logic for trailing spaces (Ctrl+Backspace) (#131)
- Improve Windows compatibility for CLI commands and sandbox (#261)
- Correct typos in thinking texts (transcendent & parroting) (#108)
- Add empty vite config file to prevent resolving to parent (#273)
- Update regex to better match the retry error messages (#266)
- Add missing "as" in prompt prefix in agent loop (#186)
- Allow continuing after interrupting assistant (#178)
- Standardize filename to kebab-case 🐍➡️🥙 (#302)
- Small update to bug report template (#288)
- Duplicated message on model change (#276)
- Typos in prompts and comments (#195)
- Check workdir before spawn (#221)

<!-- generated - do not edit -->
4 NOTICE

@@ -1,6 +1,2 @@
OpenAI Codex
Copyright 2025 OpenAI

This project includes code derived from [Ratatui](https://github.com/ratatui/ratatui), licensed under the MIT license.
Copyright (c) 2016-2022 Florian Dehau
Copyright (c) 2023-2025 The Ratatui Developers
573 README.md

@@ -1,44 +1,345 @@
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install --cask codex</code></p>
<h1 align="center">OpenAI Codex CLI</h1>
<p align="center">Lightweight coding agent that runs in your terminal</p>

<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
</br>
</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href="https://developers.openai.com/codex/ide">install in your IDE</a>
</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href="https://chatgpt.com/codex">chatgpt.com/codex</a></p>
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>

<p align="center">
  <img src="./.github/codex-cli-splash.png" alt="Codex CLI splash" width="80%" />
</p>
This is the home of the **Codex CLI**, which is a coding agent from OpenAI that runs locally on your computer. If you are looking for the _cloud-based agent_ from OpenAI, **Codex [Web]**, see <https://chatgpt.com/codex>.

<!--  -->

---

<details>
<summary><strong>Table of contents</strong></summary>

<!-- Begin ToC -->

- [Experimental technology disclaimer](#experimental-technology-disclaimer)
- [Quickstart](#quickstart)
  - [OpenAI API Users](#openai-api-users)
  - [OpenAI Plus/Pro Users](#openai-pluspro-users)
- [Why Codex?](#why-codex)
- [Security model & permissions](#security-model--permissions)
  - [Platform sandboxing details](#platform-sandboxing-details)
- [System requirements](#system-requirements)
- [CLI reference](#cli-reference)
- [Memory & project docs](#memory--project-docs)
- [Non-interactive / CI mode](#non-interactive--ci-mode)
- [Model Context Protocol (MCP)](#model-context-protocol-mcp)
- [Tracing / verbose logging](#tracing--verbose-logging)
- [Recipes](#recipes)
- [Installation](#installation)
  - [DotSlash](#dotslash)
- [Configuration](#configuration)
- [FAQ](#faq)
- [Zero data retention (ZDR) usage](#zero-data-retention-zdr-usage)
- [Codex open source fund](#codex-open-source-fund)
- [Contributing](#contributing)
  - [Development workflow](#development-workflow)
  - [Writing high-impact code changes](#writing-high-impact-code-changes)
  - [Opening a pull request](#opening-a-pull-request)
  - [Review process](#review-process)
  - [Community values](#community-values)
  - [Getting help](#getting-help)
  - [Contributor license agreement (CLA)](#contributor-license-agreement-cla)
    - [Quick fixes](#quick-fixes)
  - [Releasing `codex`](#releasing-codex)
- [Security & responsible AI](#security--responsible-ai)
- [License](#license)

<!-- End ToC -->

</details>

---
## Experimental technology disclaimer

Codex CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:

- Bug reports
- Feature requests
- Pull requests
- Good vibes

Help us improve by filing issues or submitting PRs (see the section below for how to contribute)!

## Quickstart

### Installing and running Codex CLI

Install globally with your preferred package manager. If you use npm:
Install globally with your preferred package manager:

```shell
npm install -g @openai/codex
npm install -g @openai/codex # Alternatively: `brew install codex`
```

Alternatively, if you use Homebrew:
Or go to the [latest GitHub Release](https://github.com/openai/codex/releases/latest) and download the appropriate binary for your platform.

### OpenAI API Users

Next, set your OpenAI API key as an environment variable:

```shell
brew install --cask codex
export OPENAI_API_KEY="your-api-key-here"
```

Then simply run `codex` to get started:
> [!NOTE]
> This command sets the key only for your current terminal session. You can add the `export` line to your shell's configuration file (e.g., `~/.zshrc`), but we recommend setting it for the session.

### OpenAI Plus/Pro Users

If you have a paid OpenAI account, run the following to start the login process:

```
codex login
```

If you complete the process successfully, you should have a `~/.codex/auth.json` file that contains the credentials that Codex will use.

If you encounter problems with the login flow, please comment on <https://github.com/openai/codex/issues/1243>.

<details>
<summary><strong>Use <code>--profile</code> to use other models</strong></summary>

Codex also allows you to use other providers that support the OpenAI Chat Completions (or Responses) API.

To do so, you must first define custom [providers](./config.md#model_providers) in `~/.codex/config.toml`. For example, the provider for a standard Ollama setup would be defined as follows:

```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```

The `base_url` will have `/chat/completions` appended to it to build the full URL for the request.

For providers that also require an `Authorization` header of the form `Bearer: SECRET`, an `env_key` can be specified, which indicates the environment variable to read to use as the value of `SECRET` when making a request:

```toml
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
```

Providers that speak the Responses API are also supported by adding `wire_api = "responses"` as part of the definition. Accessing OpenAI models via Azure is an example of such a provider, though it also requires specifying additional `query_params` that need to be appended to the request URL:

```toml
[model_providers.azure]
name = "Azure"
# Make sure you set the appropriate subdomain for this URL.
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY" # Or "OPENAI_API_KEY", whichever you use.
# Newer versions appear to support the responses API, see https://github.com/openai/codex/pull/1321
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```

Once you have defined a provider you wish to use, you can configure it as your default provider as follows:

```toml
model_provider = "azure"
```

> [!TIP]
> If you find yourself experimenting with a variety of models and providers, then you likely want to invest in defining a _profile_ for each configuration like so:

```toml
[profiles.o3]
model_provider = "azure"
model = "o3"

[profiles.mistral]
model_provider = "ollama"
model = "mistral"
```

This way, you can specify one command-line argument (e.g., `--profile o3`, `--profile mistral`) to override multiple settings together.

</details>
<br />

Run interactively:

```shell
codex
```

If you're running into upgrade issues with Homebrew, see the [FAQ entry on brew upgrade codex](./docs/faq.md#brew-upgrade-codex-isnt-upgrading-me).
Or, run with a prompt as input (and optionally in `Full Auto` mode):

<details>
<summary>You can also go to the <a href="https://github.com/openai/codex/releases/latest">latest GitHub Release</a> and download the appropriate binary for your platform.</summary>
```shell
codex "explain this codebase to me"
```

Each GitHub Release contains many executables, but in practice, you likely want one of these:
```shell
codex --full-auto "create the fanciest todo-list app"
```

That's it - Codex will scaffold a file, run it inside a sandbox, install any missing dependencies, and show you the live result. Approve the changes and they'll be committed to your working directory.

---
## Why Codex?

Codex CLI is built for developers who already **live in the terminal** and want ChatGPT-level reasoning **plus** the power to actually run code, manipulate files, and iterate - all under version control. In short, it's _chat-driven development_ that understands and executes your repo.

- **Zero setup** - bring your OpenAI API key and it just works!
- **Full auto-approval, while safe + secure** by running network-disabled and directory-sandboxed
- **Multimodal** - pass in screenshots or diagrams to implement features ✨

And it's **fully open-source** so you can see and contribute to how it develops!

---

## Security model & permissions

Codex lets you decide _how much autonomy_ you want to grant the agent. The following options can be configured independently:

- [`approval_policy`](./codex-rs/config.md#approval_policy) determines when you should be prompted to approve whether Codex can execute a command
- [`sandbox`](./codex-rs/config.md#sandbox) determines the _sandbox policy_ that Codex uses to execute untrusted commands

By default, Codex runs with `--ask-for-approval untrusted` and `--sandbox read-only`, which means that:

- The user is prompted to approve every command not on the set of "trusted" commands built into Codex (`cat`, `ls`, etc.)
- Approved commands are run outside of a sandbox, because user approval implies "trust" in this case.

Running Codex with the `--full-auto` convenience flag changes the configuration to `--ask-for-approval on-failure` and `--sandbox workspace-write`, which means that:

- Codex does not initially ask for user approval before running an individual command.
- When it does run a command, though, the command runs under a sandbox in which:
  - It can read any file on the system.
  - It can only write files under the current directory (or the directory specified via `--cd`).
  - Network requests are completely disabled.
- Only if the command exits with a non-zero exit code will it ask the user for approval. If granted, it will re-attempt the command outside of the sandbox. (A common case is when Codex cannot `npm install` a dependency because that requires network access.)

Again, these two options can be configured independently. For example, if you want Codex to perform an "exploration" where you are happy for it to read anything it wants but you never want to be prompted, you could run Codex with `--ask-for-approval never` and `--sandbox read-only`.

### Platform sandboxing details

The mechanism Codex uses to implement the sandbox policy depends on your OS:

- **macOS 12+** uses **Apple Seatbelt** and runs commands using `sandbox-exec` with a profile (`-p`) that corresponds to the `--sandbox` that was specified.
- **Linux** uses a combination of Landlock/seccomp APIs to enforce the `sandbox` configuration.

Note that when running Linux in a containerized environment such as Docker, sandboxing may not work if the host/container configuration does not support the necessary Landlock/seccomp APIs. In such cases, we recommend configuring your Docker container so that it provides the sandbox guarantees you are looking for and then running `codex` with `--sandbox danger-full-access` (or, more simply, the `--dangerously-bypass-approvals-and-sandbox` flag) within your container.

---
## System requirements

| Requirement                 | Details                                                         |
| --------------------------- | --------------------------------------------------------------- |
| Operating systems           | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 **via WSL2** |
| Git (optional, recommended) | 2.23+ for built-in PR helpers                                   |
| RAM                         | 4-GB minimum (8-GB recommended)                                 |

---

## CLI reference

| Command            | Purpose                            | Example                         |
| ------------------ | ---------------------------------- | ------------------------------- |
| `codex`            | Interactive TUI                    | `codex`                         |
| `codex "..."`      | Initial prompt for interactive TUI | `codex "fix lint errors"`       |
| `codex exec "..."` | Non-interactive "automation mode"  | `codex exec "explain utils.ts"` |

Key flags: `--model/-m`, `--ask-for-approval/-a`.

---

## Memory & project docs

You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places, and merges them top-down:

1. `~/.codex/AGENTS.md` - personal global guidance
2. `AGENTS.md` at repo root - shared project notes
3. `AGENTS.md` in the current working directory - sub-folder/feature specifics

---

## Non-interactive / CI mode

Run Codex headless in pipelines. Example GitHub Action step:

```yaml
- name: Update changelog via Codex
  run: |
    npm install -g @openai/codex
    export OPENAI_API_KEY="${{ secrets.OPENAI_KEY }}"
    codex exec --full-auto "update CHANGELOG for next release"
```

## Model Context Protocol (MCP)

The Codex CLI can be configured to leverage MCP servers by defining an [`mcp_servers`](./codex-rs/config.md#mcp_servers) section in `~/.codex/config.toml`. It is intended to mirror how tools such as Claude and Cursor define `mcpServers` in their respective JSON config files, though the Codex format is slightly different since it uses TOML rather than JSON, e.g.:

```toml
# IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
[mcp_servers.server-name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```

> [!TIP]
> It is somewhat experimental, but the Codex CLI can also be run as an MCP _server_ via `codex mcp`. If you launch it with an MCP client such as `npx @modelcontextprotocol/inspector codex mcp` and send it a `tools/list` request, you will see that there is only one tool, `codex`, that accepts a grab-bag of inputs, including a catch-all `config` map for anything you might want to override. Feel free to play around with it and provide feedback via GitHub issues.

## Tracing / verbose logging

Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.

The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info` and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:

```
tail -F ~/.codex/log/codex-tui.log
```

By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.

See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.

---

## Recipes

Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the [prompting guide](https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md) for more tips and usage patterns.

| ✨  | What you type                                                                    | What happens                                                               |
| --- | ------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1   | `codex "Refactor the Dashboard component to React Hooks"`                       | Codex rewrites the class component, runs `npm test`, and shows the diff.   |
| 2   | `codex "Generate SQL migrations for adding a users table"`                      | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3   | `codex "Write unit tests for utils/date.ts"`                                    | Generates tests, executes them, and iterates until they pass.              |
| 4   | `codex "Bulk-rename *.jpeg -> *.jpg with git mv"`                               | Safely renames files and updates imports/usages.                           |
| 5   | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"`                      | Outputs a step-by-step human explanation.                                  |
| 6   | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase.                            |
| 7   | `codex "Look for vulnerabilities and create a security review report"`          | Finds and explains security bugs.                                          |

---
## Installation
|
||||
|
||||
<details open>
|
||||
<summary><strong>Install Codex CLI using your preferred package manager.</strong></summary>
|
||||
|
||||
From `brew` (recommended, downloads only the binary for your platform):
|
||||
|
||||
```bash
|
||||
brew install codex
|
||||
```
|
||||
|
||||
From `npm` (generally more readily available, but downloads binaries for all supported platforms):
|
||||
|
||||
```bash
|
||||
npm i -g @openai/codex
|
||||
```
|
||||
|
||||
Or go to the [latest GitHub Release](https://github.com/openai/codex/releases/latest) and download the appropriate binary for your platform.
|
||||
|
||||
Admittedly, each GitHub Release contains many executables, but in practice, you likely want one of these:
|
||||
|
||||
- macOS
|
||||
- Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
|
||||
@@ -49,61 +350,211 @@ Each GitHub Release contains many executables, but in practice, you likely want
|
||||
|
||||
Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
|
||||
|
||||
### DotSlash
|
||||
|
||||
The GitHub Release also contains a [DotSlash](https://dotslash-cli.com/) file for the Codex CLI named `codex`. Using a DotSlash file makes it possible to make a lightweight commit to source control to ensure all contributors use the same version of an executable, regardless of what platform they use for development.
|
||||
|
||||
</details>

<details>
<summary><strong>Build from source</strong></summary>

```bash
# Clone the repository and navigate to the root of the Cargo workspace.
git clone https://github.com/openai/codex.git
cd codex/codex-rs

# Install the Rust toolchain, if necessary.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source "$HOME/.cargo/env"
rustup component add rustfmt
rustup component add clippy

# Build Codex.
cargo build

# Launch the TUI with a sample prompt.
cargo run --bin codex -- "explain this codebase to me"

# After making changes, ensure the code is clean.
cargo fmt -- --config imports_granularity=Item
cargo clippy --tests

# Run the tests.
cargo test
```

</details>

### Using Codex with your ChatGPT plan

<p align="center">
  <img src="./.github/codex-cli-login.png" alt="Codex CLI login" width="80%" />
</p>

Run `codex` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. [Learn more about what's included in your ChatGPT plan](https://help.openai.com/en/articles/11369540-codex-in-chatgpt).

You can also use Codex with an API key, but this requires [additional setup](./docs/authentication.md#usage-based-billing-alternative-use-an-openai-api-key). If you previously used an API key for usage-based billing, see the [migration steps](./docs/authentication.md#migrating-from-usage-based-billing-api-key). If you're having trouble with login, please comment on [this issue](https://github.com/openai/codex/issues/1243).

### Model Context Protocol (MCP)

Codex can access MCP servers. To configure them, refer to the [config docs](./docs/config.md#mcp_servers); a minimal sketch is shown below.
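
As a rough sketch of the shape (the server name, command, and environment variable below are hypothetical; the config docs linked above are authoritative), an entry in `~/.codex/config.toml` might look like:

```toml
# Hypothetical MCP server entry; substitute a real server command.
[mcp_servers.example]
command = "npx"
args = ["-y", "some-mcp-server"]
env = { "SOME_API_KEY" = "value" }
```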

### Configuration

Codex CLI supports a rich set of configuration options, with preferences stored in `~/.codex/config.toml`. For full configuration options, see [Configuration](./docs/config.md).

### Execpolicy

See the [Execpolicy quickstart](./docs/execpolicy.md) to set up rules that govern what commands Codex can execute.

### Docs & FAQ

- [**Getting started**](./docs/getting-started.md)
  - [CLI usage](./docs/getting-started.md#cli-usage)
  - [Slash Commands](./docs/slash_commands.md)
  - [Running with a prompt as input](./docs/getting-started.md#running-with-a-prompt-as-input)
  - [Example prompts](./docs/getting-started.md#example-prompts)
  - [Custom prompts](./docs/prompts.md)
  - [Memory with AGENTS.md](./docs/getting-started.md#memory-with-agentsmd)
- [**Configuration**](./docs/config.md)
  - [Example config](./docs/example-config.md)
- [**Sandbox & approvals**](./docs/sandbox.md)
- [**Execpolicy quickstart**](./docs/execpolicy.md)
- [**Authentication**](./docs/authentication.md)
  - [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)
  - [Login on a "Headless" machine](./docs/authentication.md#connecting-on-a-headless-machine)
- **Automating Codex**
  - [GitHub Action](https://github.com/openai/codex-action)
  - [TypeScript SDK](./sdk/typescript/README.md)
  - [Non-interactive mode (`codex exec`)](./docs/exec.md)
- [**Advanced**](./docs/advanced.md)
  - [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
  - [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](./docs/zdr.md)
- [**Contributing**](./docs/contributing.md)
- [**Install & build**](./docs/install.md)
  - [System Requirements](./docs/install.md#system-requirements)
  - [DotSlash](./docs/install.md#dotslash)
  - [Build from source](./docs/install.md#build-from-source)
- [**FAQ**](./docs/faq.md)
- [**Open source fund**](./docs/open-source-fund.md)

---

## Configuration

Codex supports a rich set of configuration options documented in [`codex-rs/config.md`](./codex-rs/config.md). By default, Codex loads its configuration from `~/.codex/config.toml`, and `--config` can be used to set or override ad-hoc config values for individual invocations of `codex`, as in the example below.
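
For instance, a one-off override for a single run might look like this (`model` is a documented config key; the value is illustrative):

```bash
# Override the configured model for this invocation only.
codex --config model="gpt-4.1" "explain this codebase to me"
```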
---

## FAQ

<details>
<summary>OpenAI released a model called Codex in 2021 - is this related?</summary>

In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.

</details>

<details>
<summary>Which models are supported?</summary>

Any model available via the [Responses API](https://platform.openai.com/docs/api-reference/responses). The default is `o4-mini`, but pass `--model gpt-4.1` or set `model: gpt-4.1` in your config file to override.

</details>

<details>
<summary>Why does <code>o3</code> or <code>o4-mini</code> not work for me?</summary>

It's possible that your [API account needs to be verified](https://help.openai.com/en/articles/10910291-api-organization-verification) in order to start streaming responses and seeing chain of thought summaries from the API. If you're still running into issues, please let us know!

</details>

<details>
<summary>How do I stop Codex from editing my files?</summary>

Codex runs model-generated commands in a sandbox. If a proposed command or file change doesn't look right, you can simply type **n** to deny the command or give the model feedback.

</details>

<details>
<summary>Does it work on Windows?</summary>

Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - Codex has been tested on macOS and Linux with Node 22.

</details>

---

## Zero data retention (ZDR) usage

Codex CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled. If your OpenAI organization has Zero Data Retention enabled and you still encounter errors such as:

```
OpenAI rejected the request. Error details: Status: 400, Code: unsupported_parameter, Type: invalid_request_error, Message: 400 Previous response cannot be used for this organization due to Zero Data Retention.
```

ensure you are running `codex` with `--config disable_response_storage=true`, or add this line to `~/.codex/config.toml` to avoid specifying the command line option each time:

```toml
disable_response_storage = true
```

See [the configuration documentation on `disable_response_storage`](./codex-rs/config.md#disable_response_storage) for details.

---

## Codex open source fund

We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.

- Grants of up to **$25,000** in API credits are awarded.
- Applications are reviewed **on a rolling basis**.

**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**

---

## Contributing

This project is under active development and the code will likely change significantly. We'll update this message once things stabilize!

More broadly, we welcome contributions - whether you are opening your very first pull request or you're a seasoned maintainer. At the same time, we care about reliability and long-term maintainability, so the bar for merging code is intentionally **high**. The guidelines below spell out what "high-quality" means in practice and should make the whole process transparent and friendly.

### Development workflow

- Create a _topic branch_ from `main` - e.g. `feat/interactive-prompt`.
- Keep your changes focused. Multiple unrelated fixes should be opened as separate PRs.
- Following the [development setup](#development-workflow) instructions above, ensure your change is free of lint warnings and test failures.

### Writing high-impact code changes

1. **Start with an issue.** Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
2. **Add or update tests.** Every new feature or bug-fix should come with test coverage that fails before your change and passes afterwards. 100% coverage is not required, but aim for meaningful assertions.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`codex --help`), or relevant example projects.
4. **Keep commits atomic.** Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.

### Opening a pull request

- Fill in the PR template (or include similar information) - **What? Why? How?**
- Run **all** checks locally (`cargo test && cargo clippy --tests && cargo fmt -- --config imports_granularity=Item`). CI failures that could have been caught locally slow down the process.
- Make sure your branch is up-to-date with `main` and that you have resolved merge conflicts.
- Mark the PR as **Ready for review** only when you believe it is in a merge-able state.

### Review process

1. One maintainer will be assigned as a primary reviewer.
2. We may ask for changes - please do not take this personally. We value the work, we just also value consistency and long-term maintainability.
3. When there is consensus that the PR meets the bar, a maintainer will squash-and-merge.

### Community values

- **Be kind and inclusive.** Treat others with respect; we follow the [Contributor Covenant](https://www.contributor-covenant.org/).
- **Assume good intent.** Written communication is hard - err on the side of generosity.
- **Teach & learn.** If you spot something confusing, open an issue or PR with improvements.

### Getting help

If you run into problems setting up the project, would like feedback on an idea, or just want to say _hi_ - please open a Discussion or jump into the relevant issue. We are happy to help.

Together we can make Codex CLI an incredible tool. **Happy hacking!** :rocket:

### Contributor license agreement (CLA)

All contributors **must** accept the CLA. The process is lightweight:

1. Open your pull request.
2. Paste the following comment (or reply `recheck` if you've signed before):

   ```text
   I have read the CLA Document and I hereby sign the CLA
   ```

3. The CLA-Assistant bot records your signature in the repo and marks the status check as passed.

No special Git commands, email attachments, or commit footers are required.

#### Quick fixes

| Scenario          | Command                                          |
| ----------------- | ------------------------------------------------ |
| Amend last commit | `git commit --amend -s --no-edit && git push -f` |

The **DCO check** blocks merges until every commit in the PR carries the footer (with squash this is just the one).

### Releasing `codex`

_For admins only._

Make sure you are on `main` and have no local changes. Then run:

```shell
VERSION=0.2.0  # Can also be 0.2.0-alpha.1 or any valid Rust version.
./codex-rs/scripts/create_github_release.sh "$VERSION"
```

This makes a local commit on top of `main` with `version` set to `$VERSION` in `codex-rs/Cargo.toml` (note that on `main`, we leave the version as `version = "0.0.0"`). It then pushes the commit using the tag `rust-v${VERSION}`, which in turn kicks off [the release workflow](.github/workflows/rust-release.yml) and creates a new GitHub Release named `$VERSION`.

If everything looks good in the generated GitHub Release, uncheck the **pre-release** box so it becomes the latest release.

Create a PR to update [`Formula/c/codex.rb`](https://github.com/Homebrew/homebrew-core/blob/main/Formula/c/codex.rb) on Homebrew.

---

## Security & responsible AI

Have you discovered a vulnerability or have concerns about model output? Please e-mail **security@openai.com** and we will respond promptly.

---

**Changelog configuration** (excerpt, truncated in the source):

```toml
header = """
# Changelog

You can install any of these versions: `npm install -g @openai/codex@<version>`
"""

body = """
```
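
This header/body template looks like a [git-cliff](https://git-cliff.org/) configuration; assuming that is what it is, the changelog would be regenerated with something like:

```bash
# Hypothetical invocation; depends on the repo actually using git-cliff.
git cliff -o CHANGELOG.md
```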

**codex-cli/.editorconfig** (new file):

```ini
root = true

[*]
indent_style = space
indent_size = 2

[*.{js,ts,jsx,tsx}]
indent_style = space
indent_size = 2
```

**codex-cli/.eslintrc.cjs** (new file):

```js
module.exports = {
  root: true,
  env: { browser: true, node: true, es2020: true },
  extends: [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "plugin:react-hooks/recommended",
  ],
  ignorePatterns: [
    ".eslintrc.cjs",
    "build.mjs",
    "dist",
    "vite.config.ts",
    "src/components/vendor",
  ],
  parser: "@typescript-eslint/parser",
  parserOptions: {
    tsconfigRootDir: __dirname,
    project: ["./tsconfig.json"],
  },
  plugins: ["import", "react-hooks", "react-refresh"],
  rules: {
    // Imports
    "@typescript-eslint/consistent-type-imports": "error",
    "import/no-cycle": ["error", { maxDepth: 1 }],
    "import/no-duplicates": "error",
    "import/order": [
      "error",
      {
        groups: ["type"],
        "newlines-between": "always",
        alphabetize: {
          order: "asc",
          caseInsensitive: false,
        },
      },
    ],
    // We use the import/ plugin instead.
    "sort-imports": "off",

    "@typescript-eslint/array-type": ["error", { default: "generic" }],
    // FIXME(mbolin): Introduce this.
    // "@typescript-eslint/explicit-function-return-type": "error",
    "@typescript-eslint/explicit-module-boundary-types": "error",
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/switch-exhaustiveness-check": [
      "error",
      {
        allowDefaultCaseForExhaustiveSwitch: false,
        requireDefaultForNonUnion: true,
      },
    ],

    // Use typescript-eslint/no-unused-vars, no-unused-vars reports
    // false positives with typescript
    "no-unused-vars": "off",
    "@typescript-eslint/no-unused-vars": [
      "error",
      {
        argsIgnorePattern: "^_",
        varsIgnorePattern: "^_",
        caughtErrorsIgnorePattern: "^_",
      },
    ],

    curly: "error",

    eqeqeq: ["error", "always", { null: "never" }],
    "react-refresh/only-export-components": [
      "error",
      { allowConstantExport: true },
    ],
    "no-await-in-loop": "error",
    "no-bitwise": "error",
    "no-caller": "error",
    // This is fine during development, but should not be checked in.
    "no-console": "error",
    // This is fine during development, but should not be checked in.
    "no-debugger": "error",
    "no-duplicate-case": "error",
    "no-eval": "error",
    "no-ex-assign": "error",
    "no-return-await": "error",
    "no-param-reassign": "error",
    "no-script-url": "error",
    "no-self-compare": "error",
    "no-unsafe-finally": "error",
    "no-var": "error",
    "react-hooks/rules-of-hooks": "error",
    "react-hooks/exhaustive-deps": "error",
  },
  overrides: [
    {
      // apply only to files under tests/
      files: ["tests/**/*.{ts,tsx,js,jsx}"],
      rules: {
        "@typescript-eslint/no-explicit-any": "off",
        "import/order": "off",
        "@typescript-eslint/explicit-module-boundary-types": "off",
        "@typescript-eslint/ban-ts-comment": "off",
        "@typescript-eslint/no-var-requires": "off",
        "no-await-in-loop": "off",
        "no-control-regex": "off",
      },
    },
  ],
};
```

**codex-cli/.gitignore** (previously just `/vendor/`):

```
# Added by ./scripts/install_native_deps.sh
/bin/codex-aarch64-apple-darwin
/bin/codex-aarch64-unknown-linux-musl
/bin/codex-linux-sandbox-arm64
/bin/codex-linux-sandbox-x64
/bin/codex-x86_64-apple-darwin
/bin/codex-x86_64-unknown-linux-musl
```

**codex-cli/HUSKY.md** (new file):

````markdown
# Husky Git Hooks

This project uses [Husky](https://typicode.github.io/husky/) to enforce code quality checks before commits and pushes.

## What's Included

- **Pre-commit Hook**: Runs lint-staged to check files that are about to be committed.

  - Lints and formats TypeScript/TSX files using ESLint and Prettier
  - Formats JSON, MD, and YML files using Prettier

- **Pre-push Hook**: Runs tests and type checking before pushing to the remote repository.
  - Executes `npm test` to run all tests
  - Executes `npm run typecheck` to check TypeScript types

## Benefits

- Ensures consistent code style across the project
- Prevents pushing code with failing tests or type errors
- Reduces the need for style-related code review comments
- Improves overall code quality

## For Contributors

You don't need to do anything special to use these hooks. They will automatically run when you commit or push code.

If you need to bypass the hooks in exceptional cases:

```bash
# Skip pre-commit hooks
git commit -m "Your message" --no-verify

# Skip pre-push hooks
git push --no-verify
```

Note: Please use these bypass options sparingly and only when absolutely necessary.

## Troubleshooting

If you encounter any issues with the hooks:

1. Make sure you have the latest dependencies installed: `npm install`
2. Ensure the hook scripts are executable (Unix systems): `chmod +x .husky/pre-commit .husky/pre-push`
3. Check if there are any ESLint or Prettier configuration issues in your code
````
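
For reference, a typical Husky-style pre-commit hook matching the description above might look like this (a sketch; the actual `.husky/` hook files are not shown in this diff):

```bash
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

# Lint/format only the files staged for this commit.
npx lint-staged
```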

**System requirements** (README excerpt; this hunk raises the Node.js requirement to 22+):

| Requirement                 | Details                                                         |
| --------------------------- | --------------------------------------------------------------- |
| Operating systems           | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 **via WSL2** |
| Node.js                     | **22 or newer** (LTS recommended)                               |
| Git (optional, recommended) | 2.23+ for built-in PR helpers                                   |
| RAM                         | 4-GB minimum (8-GB recommended)                                 |

**codex-cli/bin/codex.js** (mode changed from normal to executable). The hunk interleaves two complete versions of the launcher; untangled, the earlier vendor-directory variant reads:

```js
#!/usr/bin/env node
// Unified entry point for the Codex CLI.

import { spawn } from "node:child_process";
import { existsSync } from "fs";
import fs from "fs";
import path from "path";
import { fileURLToPath } from "url";

// __dirname equivalent in ESM
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

const { platform, arch } = process;

// For the @native release of the Node module, the `use-native` file is added,
// indicating we should default to the native binary. For other releases,
// setting CODEX_RUST=1 will opt-in to the native binary, if included.
const wantsNative =
  fs.existsSync(path.join(__dirname, "use-native")) ||
  (process.env.CODEX_RUST != null
    ? ["1", "true", "yes"].includes(process.env.CODEX_RUST.toLowerCase())
    : false);

let targetTriple = null;
switch (platform) {
  case "linux":
  case "android":
    switch (arch) {
      case "x64":
        targetTriple = "x86_64-unknown-linux-musl";
        break;
      case "arm64":
        targetTriple = "aarch64-unknown-linux-musl";
        break;
      default:
        break;
    }
    break;
  case "darwin":
    switch (arch) {
      case "x64":
        targetTriple = "x86_64-apple-darwin";
        break;
      case "arm64":
        targetTriple = "aarch64-apple-darwin";
        break;
      default:
        break;
    }
    break;
  case "win32":
    switch (arch) {
      case "x64":
        targetTriple = "x86_64-pc-windows-msvc";
        break;
      case "arm64":
        targetTriple = "aarch64-pc-windows-msvc";
        break;
      default:
        break;
    }
    break;
  default:
    break;
}

if (!targetTriple) {
  throw new Error(`Unsupported platform: ${platform} (${arch})`);
}

const vendorRoot = path.join(__dirname, "..", "vendor");
const archRoot = path.join(vendorRoot, targetTriple);
const codexBinaryName = process.platform === "win32" ? "codex.exe" : "codex";
const binaryPath = path.join(archRoot, "codex", codexBinaryName);

function getUpdatedPath(newDirs) {
  const pathSep = process.platform === "win32" ? ";" : ":";
  const existingPath = process.env.PATH || "";
  const updatedPath = [
    ...newDirs,
    ...existingPath.split(pathSep).filter(Boolean),
  ].join(pathSep);
  return updatedPath;
}

/**
 * Use heuristics to detect the package manager that was used to install Codex
 * in order to give the user a hint about how to update it.
 */
function detectPackageManager() {
  const userAgent = process.env.npm_config_user_agent || "";
  if (/\bbun\//.test(userAgent)) {
    return "bun";
  }

  const execPath = process.env.npm_execpath || "";
  if (execPath.includes("bun")) {
    return "bun";
  }

  if (
    __dirname.includes(".bun/install/global") ||
    __dirname.includes(".bun\\install\\global")
  ) {
    return "bun";
  }

  return userAgent ? "npm" : null;
}

const additionalDirs = [];
const pathDir = path.join(archRoot, "path");
if (existsSync(pathDir)) {
  additionalDirs.push(pathDir);
}
const updatedPath = getUpdatedPath(additionalDirs);

const env = { ...process.env, PATH: updatedPath };
const packageManagerEnvVar =
  detectPackageManager() === "bun"
    ? "CODEX_MANAGED_BY_BUN"
    : "CODEX_MANAGED_BY_NPM";
env[packageManagerEnvVar] = "1";

// Use an asynchronous spawn instead of spawnSync so that Node is able to
// respond to signals (e.g. Ctrl-C / SIGINT) while the native binary is
// executing. This allows us to forward those signals to the child process
// and guarantees that when either the child terminates or the parent
// receives a fatal signal, both processes exit in a predictable manner.
const child = spawn(binaryPath, process.argv.slice(2), {
  stdio: "inherit",
  env,
});

child.on("error", (err) => {
  // Typically triggered when the binary is missing or not executable.
  // Re-throwing here will terminate the parent with a non-zero exit code
  // while still printing a helpful stack trace.
  // eslint-disable-next-line no-console
  console.error(err);
  process.exit(1);
});

// Forward common termination signals to the child so that it shuts down
// gracefully. In the handler we temporarily disable the default behavior of
// exiting immediately; once the child has been signaled we simply wait for
// its exit event which will in turn terminate the parent (see below).
const forwardSignal = (signal) => {
  if (child.killed) {
    return;
  }
  try {
    child.kill(signal);
  } catch {
    /* ignore */
  }
};

["SIGINT", "SIGTERM", "SIGHUP"].forEach((sig) => {
  process.on(sig, () => forwardSignal(sig));
});

// When the child exits, mirror its termination reason in the parent so that
// shell scripts and other tooling observe the correct exit status.
// Wrap the lifetime of the child process in a Promise so that we can await
// its termination in a structured way. The Promise resolves with an object
// describing how the child exited: either via exit code or due to a signal.
const childResult = await new Promise((resolve) => {
  child.on("exit", (code, signal) => {
    if (signal) {
      resolve({ type: "signal", signal });
    } else {
      resolve({ type: "code", exitCode: code ?? 1 });
    }
  });
});

if (childResult.type === "signal") {
  // Re-emit the same signal so that the parent terminates with the expected
  // semantics (this also sets the correct exit code of 128 + n).
  process.kill(process.pid, childResult.signal);
} else {
  process.exit(childResult.exitCode);
}
```

and the replacement variant, which gates the native binary behind a `CODEX_RUST` opt-in and falls back to the bundled JavaScript CLI:

```js
#!/usr/bin/env node
/*
 * Behavior
 * =========
 * 1. By default we import the JavaScript implementation located in
 *    dist/cli.js.
 *
 * 2. Developers can opt-in to a pre-compiled Rust binary by setting the
 *    environment variable CODEX_RUST to a truthy value (`1`, `true`, etc.).
 *    When that variable is present we resolve the correct binary for the
 *    current platform / architecture and execute it via child_process.
 *
 * If the CODEX_RUST=1 is specified and there is no native binary for the
 * current platform / architecture, an error is thrown.
 */

import path from "path";
import { fileURLToPath, pathToFileURL } from "url";

// Determine whether the user explicitly wants the Rust CLI.
const wantsNative =
  process.env.CODEX_RUST != null
    ? ["1", "true", "yes"].includes(process.env.CODEX_RUST.toLowerCase())
    : false;

// __dirname equivalent in ESM
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Try native binary if requested.
if (wantsNative && process.platform !== "win32") {
  const { platform, arch } = process;

  let targetTriple = null;
  switch (platform) {
    case "linux":
    case "android":
      switch (arch) {
        case "x64":
          targetTriple = "x86_64-unknown-linux-musl";
          break;
        case "arm64":
          targetTriple = "aarch64-unknown-linux-musl";
          break;
        default:
          break;
      }
      break;
    case "darwin":
      switch (arch) {
        case "x64":
          targetTriple = "x86_64-apple-darwin";
          break;
        case "arm64":
          targetTriple = "aarch64-apple-darwin";
          break;
        default:
          break;
      }
      break;
    default:
      break;
  }

  if (!targetTriple) {
    throw new Error(`Unsupported platform: ${platform} (${arch})`);
  }

  const binaryPath = path.join(__dirname, "..", "bin", `codex-${targetTriple}`);

  // Use an asynchronous spawn instead of spawnSync so that Node is able to
  // respond to signals (e.g. Ctrl-C / SIGINT) while the native binary is
  // executing. This allows us to forward those signals to the child process
  // and guarantees that when either the child terminates or the parent
  // receives a fatal signal, both processes exit in a predictable manner.
  const { spawn } = await import("child_process");

  const child = spawn(binaryPath, process.argv.slice(2), {
    stdio: "inherit",
  });

  child.on("error", (err) => {
    // Typically triggered when the binary is missing or not executable.
    // Re-throwing here will terminate the parent with a non-zero exit code
    // while still printing a helpful stack trace.
    // eslint-disable-next-line no-console
    console.error(err);
    process.exit(1);
  });

  // Forward common termination signals to the child so that it shuts down
  // gracefully. In the handler we temporarily disable the default behavior of
  // exiting immediately; once the child has been signaled we simply wait for
  // its exit event which will in turn terminate the parent (see below).
  const forwardSignal = (signal) => {
    if (child.killed) {
      return;
    }
    try {
      child.kill(signal);
    } catch {
      /* ignore */
    }
  };

  ["SIGINT", "SIGTERM", "SIGHUP"].forEach((sig) => {
    process.on(sig, () => forwardSignal(sig));
  });

  // When the child exits, mirror its termination reason in the parent so that
  // shell scripts and other tooling observe the correct exit status.
  // Wrap the lifetime of the child process in a Promise so that we can await
  // its termination in a structured way. The Promise resolves with an object
  // describing how the child exited: either via exit code or due to a signal.
  const childResult = await new Promise((resolve) => {
    child.on("exit", (code, signal) => {
      if (signal) {
        resolve({ type: "signal", signal });
      } else {
        resolve({ type: "code", exitCode: code ?? 1 });
      }
    });
  });

  if (childResult.type === "signal") {
    // Re-emit the same signal so that the parent terminates with the expected
    // semantics (this also sets the correct exit code of 128 + n).
    process.kill(process.pid, childResult.signal);
  } else {
    process.exit(childResult.exitCode);
  }
} else {
  // Fallback: execute the original JavaScript CLI.

  // Resolve the path to the compiled CLI bundle
  const cliPath = path.resolve(__dirname, "../dist/cli.js");
  const cliUrl = pathToFileURL(cliPath).href;

  // Load and execute the CLI
  try {
    await import(cliUrl);
  } catch (err) {
    // eslint-disable-next-line no-console
    console.error(err);
    process.exit(1);
  }
}
```
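
Per the header comment in the simpler variant, opting into the native binary is just an environment variable:

```bash
# Opt in to the pre-compiled Rust binary for a single invocation;
# unsupported platforms throw rather than silently falling back.
CODEX_RUST=1 codex --version
```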

**DotSlash file for `rg`** (deleted in this diff):

```
#!/usr/bin/env dotslash

{
  "name": "rg",
  "platforms": {
    "macos-aarch64": {
      "size": 1787248,
      "hash": "blake3",
      "digest": "8d9942032585ea8ee805937634238d9aee7b210069f4703c88fbe568e26fb78a",
      "format": "tar.gz",
      "path": "ripgrep-14.1.1-aarch64-apple-darwin/rg",
      "providers": [
        {
          "url": "https://github.com/BurntSushi/ripgrep/releases/download/14.1.1/ripgrep-14.1.1-aarch64-apple-darwin.tar.gz"
        }
      ]
    },
    "linux-aarch64": {
      "size": 2047405,
      "hash": "blake3",
      "digest": "0b670b8fa0a3df2762af2fc82cc4932f684ca4c02dbd1260d4f3133fd4b2a515",
      "format": "tar.gz",
      "path": "ripgrep-14.1.1-aarch64-unknown-linux-gnu/rg",
      "providers": [
        {
          "url": "https://github.com/BurntSushi/ripgrep/releases/download/14.1.1/ripgrep-14.1.1-aarch64-unknown-linux-gnu.tar.gz"
        }
      ]
    },
    "macos-x86_64": {
      "size": 2082672,
      "hash": "blake3",
      "digest": "e9b862fc8da3127f92791f0ff6a799504154ca9d36c98bf3e60a81c6b1f7289e",
      "format": "tar.gz",
      "path": "ripgrep-14.1.1-x86_64-apple-darwin/rg",
      "providers": [
        {
          "url": "https://github.com/BurntSushi/ripgrep/releases/download/14.1.1/ripgrep-14.1.1-x86_64-apple-darwin.tar.gz"
        }
      ]
    },
    "linux-x86_64": {
      "size": 2566310,
      "hash": "blake3",
      "digest": "f73cca4e54d78c31f832c7f6e2c0b4db8b04fa3eaa747915727d570893dbee76",
      "format": "tar.gz",
      "path": "ripgrep-14.1.1-x86_64-unknown-linux-musl/rg",
      "providers": [
        {
          "url": "https://github.com/BurntSushi/ripgrep/releases/download/14.1.1/ripgrep-14.1.1-x86_64-unknown-linux-musl.tar.gz"
        }
      ]
    },
    "windows-x86_64": {
      "size": 2058893,
      "hash": "blake3",
      "digest": "a8ce1a6fed4f8093ee997e57f33254e94b2cd18e26358b09db599c89882eadbd",
      "format": "zip",
      "path": "ripgrep-14.1.1-x86_64-pc-windows-msvc/rg.exe",
      "providers": [
        {
          "url": "https://github.com/BurntSushi/ripgrep/releases/download/14.1.1/ripgrep-14.1.1-x86_64-pc-windows-msvc.zip"
        }
      ]
    },
    "windows-aarch64": {
      "size": 1667740,
      "hash": "blake3",
      "digest": "47b971a8c4fca1d23a4e7c19bd4d88465ebc395598458133139406d3bf85f3fa",
      "format": "zip",
      "path": "rg.exe",
      "providers": [
        {
          "url": "https://github.com/microsoft/ripgrep-prebuilt/releases/download/v13.0.0-13/ripgrep-v13.0.0-13-aarch64-pc-windows-msvc.zip"
        }
      ]
    }
  }
}
```
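
With [DotSlash](https://dotslash-cli.com/) installed, executing such a file transparently fetches and caches the right ripgrep artifact for the current platform (a sketch):

```bash
# The shebang hands the file to dotslash, which downloads the
# platform-appropriate archive on first use and then runs `rg`.
chmod +x rg
./rg --version
```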

**codex-cli/build.mjs** (new file):

```js
import * as esbuild from "esbuild";
import * as fs from "fs";
import * as path from "path";

const OUT_DIR = "dist";

/**
 * ink attempts to import react-devtools-core in an ESM-unfriendly way:
 *
 * https://github.com/vadimdemedes/ink/blob/eab6ef07d4030606530d58d3d7be8079b4fb93bb/src/reconciler.ts#L22-L45
 *
 * to make this work, we have to strip the import out of the build.
 */
const ignoreReactDevToolsPlugin = {
  name: "ignore-react-devtools",
  setup(build) {
    // When an import for 'react-devtools-core' is encountered,
    // return an empty module.
    build.onResolve({ filter: /^react-devtools-core$/ }, (args) => {
      return { path: args.path, namespace: "ignore-devtools" };
    });
    build.onLoad({ filter: /.*/, namespace: "ignore-devtools" }, () => {
      return { contents: "", loader: "js" };
    });
  },
};

// ----------------------------------------------------------------------------
// Build mode detection (production vs development)
//
// • production (default): minified, external telemetry shebang handling.
// • development (--dev|NODE_ENV=development|CODEX_DEV=1):
//   – no minification
//   – inline source maps for better stacktraces
//   – shebang tweaked to enable Node's source-map support at runtime
// ----------------------------------------------------------------------------

const isDevBuild =
  process.argv.includes("--dev") ||
  process.env.CODEX_DEV === "1" ||
  process.env.NODE_ENV === "development";

const plugins = [ignoreReactDevToolsPlugin];

// Build hygiene: ensure we drop the previous dist dir and any leftover files.
const outPath = path.resolve(OUT_DIR);
if (fs.existsSync(outPath)) {
  fs.rmSync(outPath, { recursive: true, force: true });
}

// Add a shebang that enables source-map support for dev builds so that stack
// traces point to the original TypeScript lines without requiring callers to
// remember to set NODE_OPTIONS manually.
if (isDevBuild) {
  const devShebangLine =
    "#!/usr/bin/env -S NODE_OPTIONS=--enable-source-maps node\n";
  const devShebangPlugin = {
    name: "dev-shebang",
    setup(build) {
      build.onEnd(async () => {
        const outFile = path.resolve(
          isDevBuild ? `${OUT_DIR}/cli-dev.js` : `${OUT_DIR}/cli.js`,
        );
        let code = await fs.promises.readFile(outFile, "utf8");
        if (code.startsWith("#!")) {
          code = code.replace(/^#!.*\n/, devShebangLine);
          await fs.promises.writeFile(outFile, code, "utf8");
        }
      });
    },
  };
  plugins.push(devShebangPlugin);
}

esbuild
  .build({
    entryPoints: ["src/cli.tsx"],
    // Do not bundle the contents of package.json at build time: always read it
    // at runtime.
    external: ["../package.json"],
    bundle: true,
    format: "esm",
    platform: "node",
    tsconfig: "tsconfig.json",
    outfile: isDevBuild ? `${OUT_DIR}/cli-dev.js` : `${OUT_DIR}/cli.js`,
    minify: !isDevBuild,
    sourcemap: isDevBuild ? "inline" : true,
    plugins,
    inject: ["./require-shim.js"],
  })
  .catch(() => process.exit(1));
```
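
Per the flag handling above, a development build (unminified, with inline source maps) can be produced with either of:

```bash
node build.mjs --dev
# or, equivalently:
CODEX_DEV=1 node build.mjs
```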

**codex-cli/default.nix** (new file; the `shellHook` echo messages are corrected to match the `pnpm` commands they guard):

```nix
{ pkgs, monorep-deps ? [], ... }:
let
  node = pkgs.nodejs_22;
in
rec {
  package = pkgs.buildNpmPackage {
    pname = "codex-cli";
    version = "0.1.0";
    src = ./.;
    npmDepsHash = "sha256-3tAalmh50I0fhhd7XreM+jvl0n4zcRhqygFNB1Olst8";
    nodejs = node;
    npmInstallFlags = [ "--frozen-lockfile" ];
    meta = with pkgs.lib; {
      description = "OpenAI Codex command-line interface";
      license = licenses.asl20;
      homepage = "https://github.com/openai/codex";
    };
  };
  devShell = pkgs.mkShell {
    name = "codex-cli-dev";
    buildInputs = monorep-deps ++ [
      node
      pkgs.pnpm
    ];
    shellHook = ''
      echo "Entering development shell for codex-cli"
      # cd codex-cli
      if [ -f package-lock.json ]; then
        pnpm ci || echo "pnpm ci failed"
      else
        pnpm install || echo "pnpm install failed"
      fi
      npm run build || echo "npm build failed"
      export PATH=$PWD/node_modules/.bin:$PATH
      alias codex="node $PWD/dist/cli.js"
    '';
  };
  app = {
    type = "app";
    program = "${package}/bin/codex";
  };
}
```

**codex-cli/examples/README.md** (new file):

````markdown
# Quick start examples

This directory bundles some self-contained examples using the Codex CLI. If you have never used the Codex CLI before and want to see it complete a sample task, start by running **camerascii**. You'll see your webcam feed turned into animated ASCII art in a few minutes.

If you want to get started using the Codex CLI directly, skip this and refer to the prompting guide.

## Structure

Each example contains the following:

```
example-name/
├── run.sh      # helper script that launches a new Codex session for the task
├── task.yaml   # task spec containing a prompt passed to Codex
├── template/   # (optional) starter files copied into each run
└── runs/       # work directories created by run.sh
```

**run.sh**: a convenience wrapper that does three things:

- Creates `runs/run_N`, where _N_ is the sequential number of the run.
- Copies the contents of `template/` into that folder (if present).
- Launches the Codex CLI with the description from `task.yaml`.

**template/**: any existing files or markdown instructions you would like Codex to see before it starts working.

**runs/**: the directories produced by `run.sh`.

## Running an example

1. **Run the helper script**:
   ```
   cd camerascii
   ./run.sh
   ```
2. **Interact with the Codex CLI**: the CLI will open with the prompt: "_Take a look at the screenshot details and implement a webpage that uses a webcam to style the video feed accordingly…_" Confirm the commands Codex CLI requests to generate `index.html`.

3. **Check its work**: when Codex is done, open `runs/run_1/index.html` in a browser. Your webcam feed should now be rendered as a cascade of ASCII glyphs. If the outcome isn't what you expect, try running it again, or adjust the task prompt.

## Other examples

Besides **camerascii**, you can experiment with:

- **build-codex-demo**: recreate the original 2021 Codex YouTube demo.
- **impossible-pong**: where Codex creates progressively harder levels.
- **prompt-analyzer**: make a data science app for clustering [prompts](https://github.com/f/awesome-chatgpt-prompts).
````

**codex-cli/examples/build-codex-demo/run.sh** (new executable file):

```bash
#!/bin/bash

# run.sh — Create a new run_N directory for a Codex task, optionally bootstrapped from a template,
# then launch Codex with the task description from task.yaml.
#
# Usage:
#   ./run.sh                 # Prompts to confirm new run
#   ./run.sh --auto-confirm  # Skips confirmation
#
# Assumes:
# - yq and jq are installed
# - ../task.yaml exists (with .name and .description fields)
# - ../template/ exists (optional, for bootstrapping new runs)

# Enable auto-confirm mode if flag is passed
auto_mode=false
[[ "$1" == "--auto-confirm" ]] && auto_mode=true

# Move into the working directory
cd runs || exit 1

# Grab task name for logging
task_name=$(yq -o=json '.' ../task.yaml | jq -r '.name')
echo "Checking for runs for task: $task_name"

# Find existing run_N directories
shopt -s nullglob
run_dirs=(run_[0-9]*)
shopt -u nullglob

if [ ${#run_dirs[@]} -eq 0 ]; then
  echo "There are 0 runs."
  new_run_number=1
else
  max_run_number=0
  for d in "${run_dirs[@]}"; do
    [[ "$d" =~ ^run_([0-9]+)$ ]] && (( ${BASH_REMATCH[1]} > max_run_number )) && max_run_number=${BASH_REMATCH[1]}
  done
  new_run_number=$((max_run_number + 1))
  echo "There are $max_run_number runs."
fi

# Confirm creation unless in auto mode
if [ "$auto_mode" = false ]; then
  read -p "Create run_$new_run_number? (Y/N): " choice
  [[ "$choice" != [Yy] ]] && echo "Exiting." && exit 1
fi

# Create the run directory
mkdir "run_$new_run_number"

# Check if the template directory exists and copy its contents
if [ -d "../template" ]; then
  cp -r ../template/* "run_$new_run_number"
  echo "Initialized run_$new_run_number from template/"
else
  echo "Template directory does not exist. Skipping initialization from template."
fi

cd "run_$new_run_number"

# Launch Codex
echo "Launching..."
description=$(yq -o=json '.' ../../task.yaml | jq -r '.description')
codex "$description"
```

**codex-cli/examples/build-codex-demo/task.yaml** (new file):

````yaml
name: "build-codex-demo"
description: |
  I want you to reimplement the original OpenAI Codex demo.

  Functionality:
  - User types a prompt and hits enter to send
  - The prompt is added to the conversation history
  - The backend calls the OpenAI API with stream: true
  - Tokens are streamed back and appended to the code viewer
  - Syntax highlighting updates in real time
  - When a full HTML file is received, it is rendered in a sandboxed iframe
  - The iframe replaces the previous preview with the new HTML after the stream is complete (i.e. keep the old preview until a new stream is complete)
  - Append each assistant and user message to preserve context across turns
  - Errors are displayed to the user gracefully
  - Ensure the fixed layout is responsive and faithful to the screenshot design
  - Be sure to parse the output from the OpenAI call to strip the ```html tags the code is returned within
  - Use the system prompt shared in the API call below to ensure the AI only returns HTML

  Support a simple local backend that can:
  - Read local env for OPENAI_API_KEY
  - Expose an endpoint that streams completions from OpenAI
  - Backend should be a simple node.js app
  - App should be easy to run locally for development and testing
  - Minimal setup preferred — keep dependencies light unless justified

  Description of layout and design:
  - Two stacked panels, vertically aligned:
  - Top Panel: Main interactive area with two main parts
  - Left Side: Visual output canvas. Mostly blank space with a small image preview in the upper-left
  - Right Side: Code display area
  - Light background with code shown in a monospace font
  - Comments in green; code aligns vertically like an IDE/snippet view
  - Bottom Panel: Prompt/command bar
  - A single-line text box with a placeholder prompt
  - A green arrow (submit button) on the right side
  - Scrolling should only be supported in the code editor and output canvas

  Visual style
  - Minimalist UI, light and clean
  - Neutral white/gray background
  - Subtle shadow or border around both panels, giving them card-like elevation
  - Code section is color-coded, likely for syntax highlighting
  - Interactive feel with the text input styled like a chat/message interface

  Here's the latest OpenAI API and prompt to use:
  ```
  import OpenAI from "openai";

  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });

  const response = await openai.responses.create({
    model: "gpt-4.1",
    input: [
      {
        "role": "system",
        "content": [
          {
            "type": "input_text",
            "text": "You are a coding agent that specializes in frontend code. Whenever you are prompted, return only the full HTML file."
          }
        ]
      }
    ],
    text: {
      "format": {
        "type": "text"
      }
    },
    reasoning: {},
    tools: [],
    temperature: 1,
    top_p: 1
  });

  console.log(response.output_text);
  ```
  Additional things to note:
  - Strip any html and tags from the OpenAI response before rendering
  - Assume the OpenAI API model response always wraps HTML in markdown-style triple backticks like ```html <code> ```
  - The display code window should have syntax highlighting and line numbers.
  - Make sure to only display the code, not the backticks or ```html that wrap the code from the model.
  - Do not inject raw markdown; only parse and insert pure HTML into the iframe
  - Only the code viewer and output panel should scroll
  - Keep the previous preview visible until the full new HTML has streamed in

  Add a README.md with what you've implemented and how to run it.
````

**codex-cli/examples/camerascii/run.sh** (new executable file; identical to the build-codex-demo script except that it creates `runs/` if missing):

```bash
#!/bin/bash

# run.sh — Create a new run_N directory for a Codex task, optionally bootstrapped from a template,
# then launch Codex with the task description from task.yaml.
#
# Usage:
#   ./run.sh                 # Prompts to confirm new run
#   ./run.sh --auto-confirm  # Skips confirmation
#
# Assumes:
# - yq and jq are installed
# - ../task.yaml exists (with .name and .description fields)
# - ../template/ exists (optional, for bootstrapping new runs)

# Enable auto-confirm mode if flag is passed
auto_mode=false
[[ "$1" == "--auto-confirm" ]] && auto_mode=true

# Create the runs directory if it doesn't exist
mkdir -p runs

# Move into the working directory
cd runs || exit 1

# Grab task name for logging
task_name=$(yq -o=json '.' ../task.yaml | jq -r '.name')
echo "Checking for runs for task: $task_name"

# Find existing run_N directories
shopt -s nullglob
run_dirs=(run_[0-9]*)
shopt -u nullglob

if [ ${#run_dirs[@]} -eq 0 ]; then
  echo "There are 0 runs."
  new_run_number=1
else
  max_run_number=0
  for d in "${run_dirs[@]}"; do
    [[ "$d" =~ ^run_([0-9]+)$ ]] && (( ${BASH_REMATCH[1]} > max_run_number )) && max_run_number=${BASH_REMATCH[1]}
  done
  new_run_number=$((max_run_number + 1))
  echo "There are $max_run_number runs."
fi

# Confirm creation unless in auto mode
if [ "$auto_mode" = false ]; then
  read -p "Create run_$new_run_number? (Y/N): " choice
  [[ "$choice" != [Yy] ]] && echo "Exiting." && exit 1
fi

# Create the run directory
mkdir "run_$new_run_number"

# Check if the template directory exists and copy its contents
if [ -d "../template" ]; then
  cp -r ../template/* "run_$new_run_number"
  echo "Initialized run_$new_run_number from template/"
else
  echo "Template directory does not exist. Skipping initialization from template."
fi

cd "run_$new_run_number"

# Launch Codex
echo "Launching..."
description=$(yq -o=json '.' ../../task.yaml | jq -r '.description')
codex "$description"
```

**codex-cli/examples/camerascii/runs/.gitkeep** (new, empty file)

**codex-cli/examples/camerascii/task.yaml** (new file):

```yaml
name: "camerascii"
description: |
  Take a look at the screenshot details and implement a webpage that uses the webcam
  to style the video feed accordingly (i.e. as ASCII art). Add some of the relevant features
  from the screenshot to the webpage in index.html.
```

**codex-cli/examples/camerascii/template/screenshot_details.md** (new file):

```markdown
### Screenshot Description

The image is a full-page screenshot of a single post on the social-media site X (formerly Twitter).

1. **Header row**
   * At the very top-left is a small circular avatar. The photo shows the side profile of a person whose face is softly lit in bluish-purple tones; only the head and part of the neck are visible.
   * In the far upper-right corner sit two standard X / Twitter interface icons: a circle containing a diagonal line (the "Mute / Block" indicator) and a three-dot overflow menu.

2. **Tweet body text**
   * Below the header, in regular type, the author writes:

     "Okay, OpenAI's o3 is insane. Spent an hour messing with it and built an image-to-ASCII art converter, the exact tool I've always wanted. And it works so well"

3. **Embedded media**
   * The majority of the screenshot is occupied by an embedded 12-second video of the converter UI. The video window has rounded corners and a dark theme.
   * **Left panel (tool controls)** – a slim vertical sidebar with the following labeled sections and blue-accented UI controls:
     * Theme selector ("Dark" is chosen).
     * A small checkbox labeled "Ignore White".
     * **Upload Image** button area that shows the chosen file name.
     * **Image Processing** sliders:
       * "ASCII Width" (value ≈ 143)
       * "Brightness" (-65)
       * "Contrast" (58)
       * "Blur (px)" (0.5)
     * A square checkbox for "Invert Colors".
     * **Dithering** subsection with a checkbox ("Enable Dithering") and a dropdown for the algorithm (value: "Noise").
     * **Character Set** dropdown (value: "Detailed (Default)").
     * **Display** slider labeled "Zoom (%)" (value ≈ 170) and a "Reset" button.

   * **Main preview area (right side)** – a dark gray canvas that renders the selected image as white ASCII characters. The preview clearly depicts a stylized **palm tree**: a skinny trunk rises from the bottom centre, and a crown of splayed fronds fills the upper right quadrant.
   * A small black badge showing **"0:12"** overlays the bottom-left corner of the media frame, indicating the video's duration.
   * In the top-right area of the media window are two pill-shaped buttons: a heart-shaped "Save" button and a cog-shaped "Settings" button.

Overall, the screenshot shows the user excitedly announcing the success of their custom "Image to ASCII" converter created with OpenAI's "o3", accompanied by a short video demonstration of the tool converting a palm-tree photo into ASCII art.
```
68
codex-cli/examples/impossible-pong/run.sh
Executable file
68
codex-cli/examples/impossible-pong/run.sh
Executable file
@@ -0,0 +1,68 @@
#!/bin/bash

# run.sh — Create a new run_N directory for a Codex task, optionally bootstrapped from a template,
# then launch Codex with the task description from task.yaml.
#
# Usage:
#   ./run.sh                 # Prompts to confirm new run
#   ./run.sh --auto-confirm  # Skips confirmation
#
# Assumes:
#   - yq and jq are installed
#   - ../task.yaml exists (with .name and .description fields)
#   - ../template/ exists (optional, for bootstrapping new runs)

# Enable auto-confirm mode if flag is passed
auto_mode=false
[[ "$1" == "--auto-confirm" ]] && auto_mode=true

# Create the runs directory if it doesn't exist
mkdir -p runs

# Move into the working directory
cd runs || exit 1

# Grab task name for logging
task_name=$(yq -o=json '.' ../task.yaml | jq -r '.name')
echo "Checking for runs for task: $task_name"

# Find existing run_N directories
shopt -s nullglob
run_dirs=(run_[0-9]*)
shopt -u nullglob

if [ ${#run_dirs[@]} -eq 0 ]; then
  echo "There are 0 runs."
  new_run_number=1
else
  max_run_number=0
  for d in "${run_dirs[@]}"; do
    [[ "$d" =~ ^run_([0-9]+)$ ]] && (( ${BASH_REMATCH[1]} > max_run_number )) && max_run_number=${BASH_REMATCH[1]}
  done
  new_run_number=$((max_run_number + 1))
  echo "There are $max_run_number runs."
fi

# Confirm creation unless in auto mode
if [ "$auto_mode" = false ]; then
  read -p "Create run_$new_run_number? (Y/N): " choice
  [[ "$choice" != [Yy] ]] && echo "Exiting." && exit 1
fi

# Create the run directory
mkdir "run_$new_run_number"

# Check if the template directory exists and copy its contents
if [ -d "../template" ]; then
  cp -r ../template/* "run_$new_run_number"
  echo "Initialized run_$new_run_number from template/"
else
  echo "Template directory does not exist. Skipping initialization from template."
fi

cd "run_$new_run_number"

# Launch Codex
echo "Launching..."
description=$(yq -o=json '.' ../../task.yaml | jq -r '.description')
codex "$description"
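The only YAML parsing the script does is the `yq` → `jq` pipeline above. As a sanity check, here is a minimal sketch of what those two calls evaluate to for the `task.yaml` shown below — assuming the Go implementation of yq (v4, which supports `-o=json`) and `jq` are on `PATH`:

```bash
# Sketch only – run from inside runs/, as the script does.
yq -o=json '.' ../task.yaml | jq -r '.name'         # -> impossible-pong
yq -o=json '.' ../task.yaml | jq -r '.description'  # -> the multi-line task text
```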
0
codex-cli/examples/impossible-pong/runs/.gitkeep
Normal file
11
codex-cli/examples/impossible-pong/task.yaml
Normal file
@@ -0,0 +1,11 @@
name: "impossible-pong"
description: |
  Update index.html with the following features:
  - Add an overlaid styled popup to start the game on first load
  - Between each point, show a 3 second countdown (this should be skipped if a player wins)
  - After each game the AI wins, display text at the bottom of the screen with lighthearted insults for the player
  - Add a leaderboard to the right of the court that shows how many games each player has won.
  - When a player wins, a styled popup appears with the winner's name and the option to play again. The leaderboard should update.
  - Add an "even more insane" difficulty mode that adds spin to the ball that makes it harder to predict.
  - Add an "even more(!!) insane" difficulty mode where the ball does a spin mid court and then picks a random (reasonable) direction to go in (this should only advantage the AI player)
  - Let the user choose which difficulty mode they want to play in on the popup that appears when the game starts.
233
codex-cli/examples/impossible-pong/template/index.html
Normal file
@@ -0,0 +1,233 @@
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <title>Pong</title>
  <style>
    body {
      margin: 0;
      background: #000;
      color: white;
      font-family: sans-serif;
      overflow: hidden;
    }
    #controls {
      display: flex;
      justify-content: center;
      align-items: center;
      gap: 12px;
      padding: 10px;
      background: #111;
      position: fixed;
      top: 0;
      width: 100%;
      z-index: 2;
    }
    canvas {
      display: block;
      margin: 60px auto 0 auto;
      background: #000;
    }
    button, select {
      background: #222;
      color: white;
      border: 1px solid #555;
      padding: 6px 12px;
      cursor: pointer;
    }
    button:hover {
      background: #333;
    }
    #score {
      font-weight: bold;
    }
  </style>
</head>
<body>

<div id="controls">
  <button id="startPauseBtn">Pause</button>
  <button id="resetBtn">Reset</button>
  <label>Mode:
    <select id="modeSelect">
      <option value="player">Player vs AI</option>
      <option value="ai">AI vs AI</option>
    </select>
  </label>
  <label>Difficulty:
    <select id="difficultySelect">
      <option value="basic">Basic</option>
      <option value="fast">Gets Fast</option>
      <option value="insane">Insane</option>
    </select>
  </label>
  <div id="score">Player: 0 | AI: 0</div>
</div>

<canvas id="pong" width="800" height="600"></canvas>

<script>
  const canvas = document.getElementById('pong');
  const ctx = canvas.getContext('2d');
  const startPauseBtn = document.getElementById('startPauseBtn');
  const resetBtn = document.getElementById('resetBtn');
  const modeSelect = document.getElementById('modeSelect');
  const difficultySelect = document.getElementById('difficultySelect');
  const scoreDisplay = document.getElementById('score');

  const paddleWidth = 10, paddleHeight = 100;
  const ballRadius = 8;

  let player = { x: 0, y: canvas.height / 2 - paddleHeight / 2 };
  let ai = { x: canvas.width - paddleWidth, y: canvas.height / 2 - paddleHeight / 2 };
  let ball = { x: canvas.width / 2, y: canvas.height / 2, vx: 5, vy: 3 };

  let isPaused = false;
  let mode = 'player';
  let difficulty = 'basic';

  const tennisSteps = ['0', '15', '30', '40', 'Adv', 'Win'];
  let scores = { player: 0, ai: 0 };

  function tennisDisplay() {
    if (scores.player >= 3 && scores.ai >= 3) {
      if (scores.player === scores.ai) return 'Deuce';
      if (scores.player === scores.ai + 1) return 'Advantage Player';
      if (scores.ai === scores.player + 1) return 'Advantage AI';
    }
    return `Player: ${tennisSteps[Math.min(scores.player, 4)]} | AI: ${tennisSteps[Math.min(scores.ai, 4)]}`;
  }

  function updateScore(winner) {
    scores[winner]++;
    const diff = scores[winner] - scores[opponent(winner)];
    if (scores[winner] >= 4 && diff >= 2) {
      alert(`${winner === 'player' ? 'Player' : 'AI'} wins the game!`);
      scores = { player: 0, ai: 0 };
    }
  }

  function opponent(winner) {
    return winner === 'player' ? 'ai' : 'player';
  }

  function drawRect(x, y, w, h, color = "#fff") {
    ctx.fillStyle = color;
    ctx.fillRect(x, y, w, h);
  }

  function drawCircle(x, y, r, color = "#fff") {
    ctx.fillStyle = color;
    ctx.beginPath();
    ctx.arc(x, y, r, 0, Math.PI * 2);
    ctx.closePath();
    ctx.fill();
  }

  function resetBall() {
    ball.x = canvas.width / 2;
    ball.y = canvas.height / 2;
    let baseSpeed = difficulty === 'insane' ? 8 : 5;
    ball.vx = baseSpeed * (Math.random() > 0.5 ? 1 : -1);
    ball.vy = 3 * (Math.random() > 0.5 ? 1 : -1);
  }

  function update() {
    if (isPaused) return;

    ball.x += ball.vx;
    ball.y += ball.vy;

    // Wall bounce
    if (ball.y < 0 || ball.y > canvas.height) ball.vy *= -1;

    // Paddle collision
    let paddle = ball.x < canvas.width / 2 ? player : ai;
    if (
      ball.x - ballRadius < paddle.x + paddleWidth &&
      ball.x + ballRadius > paddle.x &&
      ball.y > paddle.y &&
      ball.y < paddle.y + paddleHeight
    ) {
      ball.vx *= -1;

      if (difficulty === 'fast') {
        ball.vx *= 1.05;
        ball.vy *= 1.05;
      } else if (difficulty === 'insane') {
        ball.vx *= 1.1;
        ball.vy *= 1.1;
      }
    }

    // Scoring
    if (ball.x < 0) {
      updateScore('ai');
      resetBall();
    } else if (ball.x > canvas.width) {
      updateScore('player');
      resetBall();
    }

    // Paddle AI
    if (mode === 'ai') {
      player.y += (ball.y - (player.y + paddleHeight / 2)) * 0.1;
    }

    ai.y += (ball.y - (ai.y + paddleHeight / 2)) * 0.1;

    // Clamp paddles
    player.y = Math.max(0, Math.min(canvas.height - paddleHeight, player.y));
    ai.y = Math.max(0, Math.min(canvas.height - paddleHeight, ai.y));
  }

  function drawCourtBoundaries() {
    drawRect(0, 0, canvas.width, 4); // Top
    drawRect(0, canvas.height - 4, canvas.width, 4); // Bottom
  }

  function draw() {
    drawRect(0, 0, canvas.width, canvas.height, "#000");
    drawCourtBoundaries();
    drawRect(player.x, player.y, paddleWidth, paddleHeight);
    drawRect(ai.x, ai.y, paddleWidth, paddleHeight);
    drawCircle(ball.x, ball.y, ballRadius);
    scoreDisplay.textContent = tennisDisplay();
  }

  function loop() {
    update();
    draw();
    requestAnimationFrame(loop);
  }

  startPauseBtn.onclick = () => {
    isPaused = !isPaused;
    startPauseBtn.textContent = isPaused ? "Resume" : "Pause";
  };

  resetBtn.onclick = () => {
    scores = { player: 0, ai: 0 };
    resetBall();
  };

  modeSelect.onchange = (e) => {
    mode = e.target.value;
  };

  difficultySelect.onchange = (e) => {
    difficulty = e.target.value;
    resetBall();
  };

  document.addEventListener("mousemove", (e) => {
    if (mode === 'player') {
      const rect = canvas.getBoundingClientRect();
      player.y = e.clientY - rect.top - paddleHeight / 2;
    }
  });

  loop();
</script>
</body>
</html>
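Codex edits `index.html` in place inside each `run_N` directory, so it can be handy to eyeball the baseline template first. One low-friction way to do that (a sketch, assuming Python 3 is available) is a throwaway static file server:

```bash
cd codex-cli/examples/impossible-pong/template
python3 -m http.server 8000   # then open http://localhost:8000/index.html
```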
68
codex-cli/examples/prompt-analyzer/run.sh
Executable file
@@ -0,0 +1,68 @@
#!/bin/bash

# run.sh — Create a new run_N directory for a Codex task, optionally bootstrapped from a template,
# then launch Codex with the task description from task.yaml.
#
# Usage:
#   ./run.sh                 # Prompts to confirm new run
#   ./run.sh --auto-confirm  # Skips confirmation
#
# Assumes:
#   - yq and jq are installed
#   - ../task.yaml exists (with .name and .description fields)
#   - ../template/ exists (optional, for bootstrapping new runs)

# Enable auto-confirm mode if flag is passed
auto_mode=false
[[ "$1" == "--auto-confirm" ]] && auto_mode=true

# Create the runs directory if it doesn't exist
mkdir -p runs

# Move into the working directory
cd runs || exit 1

# Grab task name for logging
task_name=$(yq -o=json '.' ../task.yaml | jq -r '.name')
echo "Checking for runs for task: $task_name"

# Find existing run_N directories
shopt -s nullglob
run_dirs=(run_[0-9]*)
shopt -u nullglob

if [ ${#run_dirs[@]} -eq 0 ]; then
  echo "There are 0 runs."
  new_run_number=1
else
  max_run_number=0
  for d in "${run_dirs[@]}"; do
    [[ "$d" =~ ^run_([0-9]+)$ ]] && (( ${BASH_REMATCH[1]} > max_run_number )) && max_run_number=${BASH_REMATCH[1]}
  done
  new_run_number=$((max_run_number + 1))
  echo "There are $max_run_number runs."
fi

# Confirm creation unless in auto mode
if [ "$auto_mode" = false ]; then
  read -p "Create run_$new_run_number? (Y/N): " choice
  [[ "$choice" != [Yy] ]] && echo "Exiting." && exit 1
fi

# Create the run directory
mkdir "run_$new_run_number"

# Check if the template directory exists and copy its contents
if [ -d "../template" ]; then
  cp -r ../template/* "run_$new_run_number"
  echo "Initialized run_$new_run_number from template/"
else
  echo "Template directory does not exist. Skipping initialization from template."
fi

cd "run_$new_run_number"

# Launch Codex
echo "Launching..."
description=$(yq -o=json '.' ../../task.yaml | jq -r '.description')
codex "$description"
0
codex-cli/examples/prompt-analyzer/runs/.gitkeep
Normal file
17
codex-cli/examples/prompt-analyzer/task.yaml
Normal file
@@ -0,0 +1,17 @@
name: "prompt-analyzer"
description: |
  I have some existing work here (embedding prompts, clustering them, generating
  summaries with GPT). I want to make it more interactive and reusable.

  Objective: create an interactive cluster explorer
  - Build a lightweight streamlit app UI
  - Allow users to upload a CSV of prompts
  - Display clustered prompts with auto-generated cluster names and summaries
  - Click "cluster" and see progress stream in a small window (primarily for aesthetic reasons)
  - Let users browse examples by cluster, view outliers, and inspect individual prompts
  - See generated analysis rendered in the app, along with the plots displayed nicely
  - Support selecting clustering algorithms (e.g. DBSCAN, KMeans, etc) and "recluster"
  - Include token count + histogram of prompt lengths
  - Add interactive filters in UI (e.g. filter by token length, keyword, or cluster)

  When you're done, update the README.md with a changelog and instructions for how to run the app.
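Putting the pieces together, a hypothetical end-to-end invocation for this example (assuming `yq`, `jq`, and the `codex` CLI are installed) looks like:

```bash
cd codex-cli/examples/prompt-analyzer
./run.sh --auto-confirm   # seeds runs/run_N from template/ and launches codex with .description
```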
231
codex-cli/examples/prompt-analyzer/template/Clustering.ipynb
Normal file
@@ -0,0 +1,231 @@
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## K-means Clustering in Python using OpenAI\n",
    "\n",
    "We use a simple k-means algorithm to demonstrate how clustering can be done. Clustering can help discover valuable, hidden groupings within the data. The dataset is created in the [Get_embeddings_from_dataset Notebook](Get_embeddings_from_dataset.ipynb)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(1000, 1536)"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# imports\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from ast import literal_eval\n",
    "\n",
    "# load data\n",
    "datafile_path = \"./data/fine_food_reviews_with_embeddings_1k.csv\"\n",
    "\n",
    "df = pd.read_csv(datafile_path)\n",
    "df[\"embedding\"] = df.embedding.apply(literal_eval).apply(np.array)  # convert string to numpy array\n",
    "matrix = np.vstack(df.embedding.values)\n",
    "matrix.shape\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Find the clusters using K-means"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We show the simplest use of K-means. You can pick the number of clusters that fits your use case best."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/homebrew/lib/python3.11/site-packages/sklearn/cluster/_kmeans.py:870: FutureWarning: The default value of `n_init` will change from 10 to 'auto' in 1.4. Set the value of `n_init` explicitly to suppress the warning\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "Cluster\n",
       "0    4.105691\n",
       "1    4.191176\n",
       "2    4.215613\n",
       "3    4.306590\n",
       "Name: Score, dtype: float64"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.cluster import KMeans\n",
    "\n",
    "n_clusters = 4\n",
    "\n",
    "kmeans = KMeans(n_clusters=n_clusters, init=\"k-means++\", random_state=42)\n",
    "kmeans.fit(matrix)\n",
    "labels = kmeans.labels_\n",
    "df[\"Cluster\"] = labels\n",
    "\n",
    "df.groupby(\"Cluster\").Score.mean().sort_values()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.manifold import TSNE\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "tsne = TSNE(n_components=2, perplexity=15, random_state=42, init=\"random\", learning_rate=200)\n",
    "vis_dims2 = tsne.fit_transform(matrix)\n",
    "\n",
    "x = [x for x, y in vis_dims2]\n",
    "y = [y for x, y in vis_dims2]\n",
    "\n",
    "for category, color in enumerate([\"purple\", \"green\", \"red\", \"blue\"]):\n",
    "    xs = np.array(x)[df.Cluster == category]\n",
    "    ys = np.array(y)[df.Cluster == category]\n",
    "    plt.scatter(xs, ys, color=color, alpha=0.3)\n",
    "\n",
    "    avg_x = xs.mean()\n",
    "    avg_y = ys.mean()\n",
    "\n",
    "    plt.scatter(avg_x, avg_y, marker=\"x\", color=color, s=100)\n",
    "plt.title(\"Clusters identified visualized in language 2d using t-SNE\")\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Visualization of clusters in a 2d projection. In this run, the green cluster (#1) seems quite different from the others. Let's see a few samples from each cluster."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Text samples in the clusters & naming the clusters\n",
    "\n",
    "Let's show random samples from each cluster. We'll use gpt-4 to name the clusters, based on a random sample of 5 reviews from that cluster."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "import os\n",
    "\n",
    "client = OpenAI(api_key=os.environ.get(\"OPENAI_API_KEY\", \"<your OpenAI API key if not set as env var>\"))\n",
    "\n",
    "# Reading a review which belongs to each group.\n",
    "rev_per_cluster = 5\n",
    "\n",
    "for i in range(n_clusters):\n",
    "    print(f\"Cluster {i} Theme:\", end=\" \")\n",
    "\n",
    "    reviews = \"\\n\".join(\n",
    "        df[df.Cluster == i]\n",
    "        .combined.str.replace(\"Title: \", \"\")\n",
    "        .str.replace(\"\\n\\nContent: \", \": \")\n",
    "        .sample(rev_per_cluster, random_state=42)\n",
    "        .values\n",
    "    )\n",
    "\n",
    "    messages = [\n",
    "        {\"role\": \"user\", \"content\": f'What do the following customer reviews have in common?\\n\\nCustomer reviews:\\n\"\"\"\\n{reviews}\\n\"\"\"\\n\\nTheme:'}\n",
    "    ]\n",
    "\n",
    "    response = client.chat.completions.create(\n",
    "        model=\"gpt-4\",\n",
    "        messages=messages,\n",
    "        temperature=0,\n",
    "        max_tokens=64,\n",
    "        top_p=1,\n",
    "        frequency_penalty=0,\n",
    "        presence_penalty=0)\n",
    "    print(response.choices[0].message.content.replace(\"\\n\", \"\"))\n",
    "\n",
    "    sample_cluster_rows = df[df.Cluster == i].sample(rev_per_cluster, random_state=42)\n",
    "    for j in range(rev_per_cluster):\n",
    "        print(sample_cluster_rows.Score.values[j], end=\", \")\n",
    "        print(sample_cluster_rows.Summary.values[j], end=\": \")\n",
    "        print(sample_cluster_rows.Text.str[:70].values[j])\n",
    "\n",
    "    print(\"-\" * 100)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It's important to note that clusters will not necessarily match what you intend to use them for. A larger number of clusters will focus on more specific patterns, whereas a small number of clusters will usually focus on the largest discrepancies in the data."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "openai",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.3"
  },
  "vscode": {
   "interpreter": {
    "hash": "365536dcbde60510dc9073d6b991cd35db2d9bac356a11f5b64279a5e6708b97"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
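The notebook expects `./data/fine_food_reviews_with_embeddings_1k.csv` on disk and a valid `OPENAI_API_KEY` for the naming cell. Assuming Jupyter is installed, it can also be executed headlessly with nbconvert:

```bash
jupyter nbconvert --to notebook --execute --inplace Clustering.ipynb
```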
103
codex-cli/examples/prompt-analyzer/template/README.md
Normal file
@@ -0,0 +1,103 @@
# Prompt‑Clustering Utility

This repository contains a small utility (`cluster_prompts.py`) that embeds a
list of prompts with the OpenAI Embedding API, discovers natural groupings with
unsupervised clustering, lets ChatGPT name & describe each cluster, and finally
produces a concise Markdown report plus a couple of diagnostic plots.

The default input file (`prompts.csv`) ships with the repo so you can try the
script immediately, but you can of course point it at your own file.

---

## 1. Setup

1. Install the Python dependencies (preferably inside a virtual env):

   ```bash
   pip install pandas numpy scikit-learn matplotlib openai
   ```

2. Export your OpenAI API key (**required**):

   ```bash
   export OPENAI_API_KEY="sk-..."
   ```

---

## 2. Basic usage

```bash
# Minimal command – runs on prompts.csv and writes analysis.md + plots/
python cluster_prompts.py
```

This will

* create embeddings with the `text-embedding-3-small` model,
* pick a suitable number *k* via silhouette score (K‑Means),
* ask `gpt-4o-mini` to label & describe each cluster,
* store the results in `analysis.md`,
* and save two plots to `plots/` (`cluster_sizes.png` and `tsne.png`).

The script prints a short success message once done.

---

## 3. Command‑line options

| flag | default | description |
|------|---------|-------------|
| `--csv` | `prompts.csv` | path to the input CSV (must contain a `prompt` column; an `act` column is used as context if present) |
| `--cache` | _(none)_ | embedding cache path (JSON). Speeds up repeated runs – new texts are appended automatically. |
| `--cluster-method` | `kmeans` | `kmeans` (with automatic *k*) or `dbscan` |
| `--k-max` | `10` | upper bound for *k* when `kmeans` is selected |
| `--dbscan-min-samples` | `3` | min samples parameter for DBSCAN |
| `--embedding-model` | `text-embedding-3-small` | any OpenAI embedding model |
| `--chat-model` | `gpt-4o-mini` | chat model used to generate cluster names / descriptions |
| `--output-md` | `analysis.md` | where to write the Markdown report |
| `--plots-dir` | `plots` | directory for generated PNGs |

Example with customised options:

```bash
python cluster_prompts.py \
  --csv my_prompts.csv \
  --cache .cache/embeddings.json \
  --cluster-method dbscan \
  --embedding-model text-embedding-3-large \
  --chat-model gpt-4o \
  --output-md my_analysis.md \
  --plots-dir my_plots
```

---

## 4. Interpreting the output

### analysis.md

* Overview table: cluster label, generated name, member count and description.
* Detailed section for every cluster with five representative example prompts.
* Separate lists for
  * **Noise / outliers** (label `-1` when DBSCAN is used) and
  * **Potentially ambiguous prompts** (only with K‑Means) – these are items that
    lie almost equally close to two centroids and might belong to multiple
    groups.

### plots/cluster_sizes.png

Quick bar‑chart visualisation of how many prompts ended up in each cluster.

---

## 5. Troubleshooting

* **Rate‑limits / quota errors** – lower the number of prompts per run or switch
  to an account with a larger quota.
* **Authentication errors** – make sure `OPENAI_API_KEY` is exported in the
  shell where you run the script.
* **Inadequate clusters** – try the other clustering method, adjust `--k-max`
  or tune DBSCAN parameters (`eps` range is inferred, `min_samples` exposed via
  CLI).
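For that last point, a concrete rerun, sketched using only the flags documented in the options table above, might look like:

```bash
# Try DBSCAN with a denser core requirement...
python cluster_prompts.py --cluster-method dbscan --dbscan-min-samples 5

# ...or widen the K-Means search space instead.
python cluster_prompts.py --cluster-method kmeans --k-max 20
```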
23
codex-cli/examples/prompt-analyzer/template/analysis.md
Normal file
@@ -0,0 +1,23 @@
# Prompt Clustering Report

Generated by `cluster_prompts.py` – 2025-04-16


## Overview

* Total prompts: **213**
* Clustering method: **kmeans**
* k (K‑Means): **2**
* Silhouette score: **0.042**
* Final clusters (excluding noise): **2**


| label | name | #prompts | description |
|-------|------|---------:|-------------|
| 0 | Creative Guidance Roles | 121 | This cluster encompasses a variety of roles where individuals provide expert advice, suggestions, and creative ideas across different fields. Each role, be it interior decorator, comedian, IT architect, or artist advisor, focuses on enhancing the expertise and creativity of others by tailoring advice to specific requests and contexts. |
| 1 | Role Customization Requests | 92 | This cluster contains various requests for role-specific assistance across different domains, including web development, language processing, IT troubleshooting, and creative endeavors. Each snippet illustrates a unique role that a user wishes to engage with, focusing on specific tasks without requiring explanations. |

---
## Plots

The directory `plots/` contains a bar chart of the cluster sizes and a t‑SNE scatter plot coloured by cluster.
@@ -0,0 +1,22 @@
# Prompt Clustering Report

Generated by `cluster_prompts.py` – 2025-04-16


## Overview

* Total prompts: **213**
* Clustering method: **dbscan**
* Final clusters (excluding noise): **1**


| label | name | #prompts | description |
|-------|------|---------:|-------------|
| -1 | Noise / Outlier | 10 | Prompts that do not cleanly belong to any cluster. |
| 0 | Role Simulation Tasks | 203 | This cluster consists of varied role-playing scenarios where users request an AI to assume specific professional roles, such as composer, dream interpreter, doctor, or IT architect. Each snippet showcases tasks that involve creating content, providing advice, or performing analytical functions based on user-defined themes or prompts. |

---

## Plots

The directory `plots/` contains a bar chart of the cluster sizes and a t‑SNE scatter plot coloured by cluster.
547
codex-cli/examples/prompt-analyzer/template/cluster_prompts.py
Normal file
@@ -0,0 +1,547 @@
#!/usr/bin/env python3
"""End‑to‑end pipeline for analysing a collection of text prompts.

The script performs the following steps:

1. Read a CSV file that must contain a column named ``prompt``. If an
   ``act`` column is present it is used purely for reporting purposes.
2. Create embeddings via the OpenAI API (``text-embedding-3-small`` by
   default). The user can optionally provide a JSON cache path so the
   expensive embedding step is only executed for new / unseen texts.
3. Cluster the resulting vectors either with K‑Means (automatically picking
   *k* through the silhouette score) or with DBSCAN. Outliers are flagged
   as cluster ``-1`` when DBSCAN is selected.
4. Ask a Chat Completion model (``gpt-4o-mini`` by default) to come up with a
   short name and description for every cluster.
5. Write a human‑readable Markdown report (default: ``analysis.md``).
6. Generate a couple of diagnostic plots (cluster sizes and a t‑SNE scatter
   plot) and store them in ``plots/``.

The script is intentionally opinionated yet configurable via a handful of CLI
options – run ``python cluster_prompts.py --help`` for details.
"""

from __future__ import annotations

import argparse
import json
import sys
from pathlib import Path
from typing import Any, Sequence

import numpy as np
import pandas as pd

# External, heavy‑weight libraries are imported lazily so that users running the
# ``--help`` command do not pay the startup cost.


def parse_cli() -> argparse.Namespace:  # noqa: D401
    """Parse command‑line arguments."""

    parser = argparse.ArgumentParser(
        prog="cluster_prompts.py",
        description="Embed, cluster and analyse text prompts via the OpenAI API.",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )

    parser.add_argument("--csv", type=Path, default=Path("prompts.csv"), help="Input CSV file.")
    parser.add_argument(
        "--cache",
        type=Path,
        default=None,
        help="Optional JSON cache for embeddings (will be created if it does not exist).",
    )
    parser.add_argument(
        "--embedding-model",
        default="text-embedding-3-small",
        help="OpenAI embedding model to use.",
    )
    parser.add_argument(
        "--chat-model",
        default="gpt-4o-mini",
        help="OpenAI chat model for cluster descriptions.",
    )

    # Clustering parameters
    parser.add_argument(
        "--cluster-method",
        choices=["kmeans", "dbscan"],
        default="kmeans",
        help="Clustering algorithm to use.",
    )
    parser.add_argument(
        "--k-max",
        type=int,
        default=10,
        help="Upper bound for k when the kmeans method is selected.",
    )
    parser.add_argument(
        "--dbscan-min-samples",
        type=int,
        default=3,
        help="min_samples parameter for DBSCAN (only relevant when dbscan is selected).",
    )

    # Output paths
    parser.add_argument(
        "--output-md", type=Path, default=Path("analysis.md"), help="Markdown report path."
    )
    parser.add_argument(
        "--plots-dir", type=Path, default=Path("plots"), help="Directory that will hold PNG plots."
    )

    return parser.parse_args()


# ---------------------------------------------------------------------------
# Embedding helpers
# ---------------------------------------------------------------------------


def _lazy_import_openai():  # noqa: D401
    """Import *openai* only when needed to keep startup lightweight."""

    try:
        import openai  # type: ignore

        return openai
    except ImportError as exc:  # pragma: no cover – we do not test missing deps.
        raise SystemExit(
            "The 'openai' package is required but not installed.\n"
            "Run 'pip install openai' and try again."
        ) from exc


def embed_texts(texts: Sequence[str], model: str, batch_size: int = 100) -> list[list[float]]:
    """Embed *texts* with OpenAI and return a list of vectors.

    Uses batching for efficiency but remains on the safe side regarding current
    OpenAI rate limits (can be adjusted by changing *batch_size*).
    """

    openai = _lazy_import_openai()
    client = openai.OpenAI()

    embeddings: list[list[float]] = []

    for batch_start in range(0, len(texts), batch_size):
        batch = texts[batch_start : batch_start + batch_size]

        response = client.embeddings.create(input=batch, model=model)
        # The API returns the vectors in the same order as the input list.
        embeddings.extend(data.embedding for data in response.data)

    return embeddings


def load_or_create_embeddings(
    prompts: pd.Series, *, cache_path: Path | None, model: str
) -> pd.DataFrame:
    """Return a *DataFrame* with one row per prompt and the embedding columns.

    * If *cache_path* is provided and exists, known embeddings are loaded from
      the JSON cache so they don't have to be re‑generated.
    * Missing embeddings are requested from the OpenAI API and subsequently
      appended to the cache.
    * The returned DataFrame has the same index as *prompts*.
    """

    cache: dict[str, list[float]] = {}
    if cache_path and cache_path.exists():
        try:
            cache = json.loads(cache_path.read_text())
        except json.JSONDecodeError:  # pragma: no cover – unlikely.
            print("⚠️ Cache file exists but is not valid JSON – ignoring.", file=sys.stderr)

    missing_mask = ~prompts.isin(cache)

    if missing_mask.any():
        texts_to_embed = prompts[missing_mask].tolist()
        print(f"Embedding {len(texts_to_embed)} new prompt(s)…", flush=True)
        new_embeddings = embed_texts(texts_to_embed, model=model)

        # Update cache (regardless of whether we persist it to disk later on).
        cache.update(dict(zip(texts_to_embed, new_embeddings)))

        if cache_path:
            cache_path.parent.mkdir(parents=True, exist_ok=True)
            cache_path.write_text(json.dumps(cache))

    # Build a consistent embeddings matrix
    vectors = prompts.map(cache.__getitem__).tolist()  # type: ignore[arg-type]
    mat = np.array(vectors, dtype=np.float32)
    return pd.DataFrame(mat, index=prompts.index)


# ---------------------------------------------------------------------------
# Clustering helpers
# ---------------------------------------------------------------------------


def _lazy_import_sklearn_cluster():
    """Lazy import helper for scikit‑learn *cluster* sub‑module."""

    # Importing scikit‑learn is slow; defer until needed.
    from sklearn.cluster import DBSCAN, KMeans  # type: ignore
    from sklearn.metrics import silhouette_score  # type: ignore
    from sklearn.preprocessing import StandardScaler  # type: ignore

    return KMeans, DBSCAN, silhouette_score, StandardScaler


def cluster_kmeans(matrix: np.ndarray, k_max: int) -> np.ndarray:
    """Auto‑select *k* (in ``[2, k_max]``) via silhouette score and cluster."""

    KMeans, _, silhouette_score, _ = _lazy_import_sklearn_cluster()

    best_k = None
    best_score = -1.0
    best_labels: np.ndarray | None = None

    for k in range(2, k_max + 1):
        model = KMeans(n_clusters=k, random_state=42, n_init="auto")
        labels = model.fit_predict(matrix)
        try:
            score = silhouette_score(matrix, labels)
        except ValueError:
            # Occurs when a cluster ended up with 1 sample – skip.
            continue

        if score > best_score:
            best_k = k
            best_score = score
            best_labels = labels

    if best_labels is None:  # pragma: no cover – highly unlikely.
        raise RuntimeError("Unable to find a suitable number of clusters.")

    print(f"K‑Means selected k={best_k} (silhouette={best_score:.3f}).", flush=True)
    return best_labels


def cluster_dbscan(matrix: np.ndarray, min_samples: int) -> np.ndarray:
    """Cluster with DBSCAN; *eps* is estimated via the k‑distance method."""

    _, DBSCAN, _, StandardScaler = _lazy_import_sklearn_cluster()

    # Scale features – DBSCAN is sensitive to feature scale.
    scaler = StandardScaler()
    matrix_scaled = scaler.fit_transform(matrix)

    # Heuristic: use a high percentile of the distances to the
    # ``min_samples``-th nearest neighbour as eps – a variant of the common
    # k-distance rule of thumb.
    from sklearn.neighbors import NearestNeighbors  # type: ignore  # lazy import

    neigh = NearestNeighbors(n_neighbors=min_samples)
    neigh.fit(matrix_scaled)
    distances, _ = neigh.kneighbors(matrix_scaled)
    kth_distances = distances[:, -1]
    eps = float(np.percentile(kth_distances, 90))  # choose a high‑ish value.

    print(f"DBSCAN min_samples={min_samples}, eps={eps:.3f}", flush=True)
    model = DBSCAN(eps=eps, min_samples=min_samples)
    return model.fit_predict(matrix_scaled)


# ---------------------------------------------------------------------------
# Cluster labelling helpers (LLM)
# ---------------------------------------------------------------------------


def label_clusters(
    df: pd.DataFrame, labels: np.ndarray, chat_model: str, max_examples: int = 12
) -> dict[int, dict[str, str]]:
    """Generate a name & description for each cluster label via ChatGPT.

    Returns a mapping ``label -> {"name": str, "description": str}``.
    """

    openai = _lazy_import_openai()
    client = openai.OpenAI()

    out: dict[int, dict[str, str]] = {}

    for lbl in sorted(set(labels)):
        if lbl == -1:
            # Noise (DBSCAN) – skip LLM call.
            out[lbl] = {
                "name": "Noise / Outlier",
                "description": "Prompts that do not cleanly belong to any cluster.",
            }
            continue

        # Pick a handful of example prompts to send to the model.
        examples_series = df.loc[labels == lbl, "prompt"].sample(
            min(max_examples, (labels == lbl).sum()), random_state=42
        )
        examples = examples_series.tolist()

        user_content = (
            "The following text snippets are all part of the same semantic cluster.\n"
            "Please propose \n"
            "1. A very short *title* for the cluster (≤ 4 words).\n"
            "2. A concise 2–3 sentence *description* that explains the common theme.\n\n"
            "Answer **strictly** as valid JSON with the keys 'name' and 'description'.\n\n"
            "Snippets:\n"
        )
        user_content += "\n".join(f"- {t}" for t in examples)

        messages = [
            {
                "role": "system",
                "content": "You are an expert analyst, competent in summarising text clusters succinctly.",
            },
            {"role": "user", "content": user_content},
        ]

        try:
            resp = client.chat.completions.create(model=chat_model, messages=messages)
            reply = resp.choices[0].message.content.strip()

            # Extract the JSON object even if the assistant wrapped it in markdown
            # code fences or added other text.

            # Remove common markdown fences.
            reply_clean = reply.strip()
            # Take the substring between the first "{" and the last "}".
            m_start = reply_clean.find("{")
            m_end = reply_clean.rfind("}")
            if m_start == -1 or m_end == -1:
                raise ValueError("No JSON object found in model reply.")

            json_str = reply_clean[m_start : m_end + 1]
            data = json.loads(json_str)  # type: ignore[arg-type]

            out[lbl] = {
                "name": str(data.get("name", "Unnamed"))[:60],
                "description": str(data.get("description", "")).strip(),
            }
        except Exception as exc:  # pragma: no cover – network / runtime errors.
            print(f"⚠️ Failed to label cluster {lbl}: {exc}", file=sys.stderr)
            out[lbl] = {"name": f"Cluster {lbl}", "description": "<LLM call failed>"}

    return out


# ---------------------------------------------------------------------------
# Reporting helpers
# ---------------------------------------------------------------------------


def generate_markdown_report(
    df: pd.DataFrame,
    labels: np.ndarray,
    meta: dict[int, dict[str, str]],
    outputs: dict[str, Any],
    path_md: Path,
):
    """Write a self‑contained Markdown analysis to *path_md*."""

    path_md.parent.mkdir(parents=True, exist_ok=True)

    cluster_ids = sorted(set(labels))
    counts = {lbl: int((labels == lbl).sum()) for lbl in cluster_ids}

    lines: list[str] = []

    lines.append("# Prompt Clustering Report\n")
    lines.append(f"Generated by `cluster_prompts.py` – {pd.Timestamp.now()}\n")

    # High‑level stats
    total = len(labels)
    num_clusters = len(cluster_ids) - (1 if -1 in cluster_ids else 0)
    lines.append("\n## Overview\n")
    lines.append(f"* Total prompts: **{total}**")
    lines.append(f"* Clustering method: **{outputs['method']}**")
    if outputs.get("k"):
        lines.append(f"* k (K‑Means): **{outputs['k']}**")
        lines.append(f"* Silhouette score: **{outputs['silhouette']:.3f}**")
    lines.append(f"* Final clusters (excluding noise): **{num_clusters}**\n")

    # Summary table
    lines.append("\n| label | name | #prompts | description |")
    lines.append("|-------|------|---------:|-------------|")
    for lbl in cluster_ids:
        meta_lbl = meta[lbl]
        lines.append(f"| {lbl} | {meta_lbl['name']} | {counts[lbl]} | {meta_lbl['description']} |")

    # Detailed section per cluster
    for lbl in cluster_ids:
        lines.append("\n---\n")
        meta_lbl = meta[lbl]
        lines.append(f"### Cluster {lbl}: {meta_lbl['name']} ({counts[lbl]} prompts)\n")
        lines.append(f"{meta_lbl['description']}\n")

        # Show a handful of illustrative prompts.
        sample_n = min(5, counts[lbl])
        examples = df.loc[labels == lbl, "prompt"].sample(sample_n, random_state=42).tolist()
        lines.append("\nExamples:\n")
        lines.extend([f"* {t}" for t in examples])

    # Outliers / ambiguous prompts, if any.
    if -1 in cluster_ids:
        lines.append("\n---\n")
        lines.append(f"### Noise / outliers ({counts[-1]} prompts)\n")
        examples = (
            df.loc[labels == -1, "prompt"].sample(min(10, counts[-1]), random_state=42).tolist()
        )
        lines.extend([f"* {t}" for t in examples])

    # Optional ambiguous set (for kmeans)
    ambiguous = outputs.get("ambiguous", [])
    if ambiguous:
        lines.append("\n---\n")
        lines.append(f"### Potentially ambiguous prompts ({len(ambiguous)})\n")
        lines.extend([f"* {t}" for t in ambiguous])

    # Plot references
    lines.append("\n---\n")
    lines.append("## Plots\n")
    lines.append(
        "The directory `plots/` contains a bar chart of the cluster sizes and a t‑SNE scatter plot coloured by cluster.\n"
    )

    path_md.write_text("\n".join(lines))


# ---------------------------------------------------------------------------
# Plotting helpers
# ---------------------------------------------------------------------------


def create_plots(
    matrix: np.ndarray,
    labels: np.ndarray,
    for_devs: pd.Series | None,
    plots_dir: Path,
):
    """Generate cluster size and t‑SNE plots."""

    import matplotlib.pyplot as plt  # type: ignore – heavy, lazy import.
    from sklearn.manifold import TSNE  # type: ignore – heavy, lazy import.

    plots_dir.mkdir(parents=True, exist_ok=True)

    # Bar chart with cluster sizes
    unique, counts = np.unique(labels, return_counts=True)
    order = np.argsort(-counts)  # descending
    unique, counts = unique[order], counts[order]

    plt.figure(figsize=(8, 4))
    plt.bar([str(u) for u in unique], counts, color="steelblue")
    plt.xlabel("Cluster label")
    plt.ylabel("# prompts")
    plt.title("Cluster sizes")
    plt.tight_layout()
    bar_path = plots_dir / "cluster_sizes.png"
    plt.savefig(bar_path, dpi=150)
    plt.close()

    # t‑SNE scatter
    tsne = TSNE(
        n_components=2, perplexity=min(30, len(matrix) // 3), random_state=42, init="random"
    )
    xy = tsne.fit_transform(matrix)

    plt.figure(figsize=(7, 6))
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, cmap="tab20", s=20, alpha=0.8)
    plt.title("t‑SNE projection")
    plt.xticks([])
    plt.yticks([])

    if for_devs is not None:
        # Overlay dev prompts as black edge markers
        dev_mask = for_devs.astype(bool).values
        plt.scatter(
            xy[dev_mask, 0],
            xy[dev_mask, 1],
            facecolors="none",
            edgecolors="black",
            linewidths=0.6,
            s=40,
            label="for_devs = TRUE",
        )
        plt.legend(loc="best")

    tsne_path = plots_dir / "tsne.png"
    plt.tight_layout()
    plt.savefig(tsne_path, dpi=150)
    plt.close()


# ---------------------------------------------------------------------------
# Main entry point
# ---------------------------------------------------------------------------


def main() -> None:  # noqa: D401
    args = parse_cli()

    # Read CSV – require a 'prompt' column.
    df = pd.read_csv(args.csv)
    if "prompt" not in df.columns:
        raise SystemExit("Input CSV must contain a 'prompt' column.")

    # Keep relevant columns only for clarity.
    df = df[[c for c in df.columns if c in {"act", "prompt", "for_devs"}]]

    # ---------------------------------------------------------------------
    # 1. Embeddings (may be cached)
    # ---------------------------------------------------------------------
    embeddings_df = load_or_create_embeddings(
        df["prompt"], cache_path=args.cache, model=args.embedding_model
    )

    # ---------------------------------------------------------------------
    # 2. Clustering
    # ---------------------------------------------------------------------
    mat = embeddings_df.values.astype(np.float32)

    if args.cluster_method == "kmeans":
        labels = cluster_kmeans(mat, k_max=args.k_max)
    else:
        labels = cluster_dbscan(mat, min_samples=args.dbscan_min_samples)

    # Identify potentially ambiguous prompts (only meaningful for kmeans).
    outputs: dict[str, Any] = {"method": args.cluster_method}
    if args.cluster_method == "kmeans":
        from sklearn.cluster import KMeans  # type: ignore – lazy

        best_k = len(set(labels))
        # Re‑fit KMeans with the chosen k to get distances.
        kmeans = KMeans(n_clusters=best_k, random_state=42, n_init="auto").fit(mat)
        outputs["k"] = best_k
        # Silhouette score (again) – not super efficient but okay.
        from sklearn.metrics import silhouette_score  # type: ignore

        outputs["silhouette"] = silhouette_score(mat, labels)

        distances = kmeans.transform(mat)
        # Ambiguous if the nearest centroid is almost as far away as the
        # second-nearest one (distance ratio above 0.9).
        sorted_dist = np.sort(distances, axis=1)
        ratio = sorted_dist[:, 0] / (sorted_dist[:, 1] + 1e-9)
        ambiguous_mask = ratio > 0.9  # tunable threshold – near-equidistant centroids.
        outputs["ambiguous"] = df.loc[ambiguous_mask, "prompt"].tolist()

    # ---------------------------------------------------------------------
    # 3. LLM naming / description
    # ---------------------------------------------------------------------
    meta = label_clusters(df, labels, chat_model=args.chat_model)

    # ---------------------------------------------------------------------
    # 4. Plots
    # ---------------------------------------------------------------------
    create_plots(mat, labels, df.get("for_devs"), args.plots_dir)

    # ---------------------------------------------------------------------
    # 5. Markdown report
    # ---------------------------------------------------------------------
    generate_markdown_report(df, labels, meta, outputs, path_md=args.output_md)

    print(f"✅ Done. Report written to {args.output_md} – plots in {args.plots_dir}/", flush=True)


if __name__ == "__main__":
    # Guard the main block to allow safe import elsewhere.
    main()
Binary file not shown. (image, 19 KiB)
BIN
codex-cli/examples/prompt-analyzer/template/plots/tsne.png
Normal file
Binary file not shown. (image, 100 KiB)
Binary file not shown. (image, 20 KiB)
Some files were not shown because too many files have changed in this diff.