Mirror of https://github.com/openai/codex.git, synced 2026-02-02 06:57:03 +00:00
Compare commits
239 Commits
22  .github/ISSUE_TEMPLATE/2-bug-report.yml (vendored)

@@ -20,6 +20,14 @@ body:
    attributes:
      label: What version of Codex is running?
      description: Copy the output of `codex --version`
    validations:
      required: true
  - type: input
    id: plan
    attributes:
      label: What subscription do you have?
    validations:
      required: true
  - type: input
    id: model
    attributes:
@@ -32,11 +40,18 @@ body:
      description: |
        For MacOS and Linux: copy the output of `uname -mprs`
        For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
  - type: textarea
    id: actual
    attributes:
      label: What issue are you seeing?
      description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: What steps can reproduce the bug?
      description: Explain the bug and provide a code snippet that can reproduce it.
      description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
    validations:
      required: true
  - type: textarea
@@ -44,11 +59,6 @@ body:
    attributes:
      label: What is the expected behavior?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: actual
    attributes:
      label: What do you see instead?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: notes
    attributes:
6  .github/ISSUE_TEMPLATE/4-feature-request.yml (vendored)

@@ -2,7 +2,6 @@ name: 🎁 Feature Request
description: Propose a new feature for Codex
labels:
  - enhancement
  - needs triage
body:
  - type: markdown
    attributes:
@@ -19,11 +18,6 @@ body:
      label: What feature would you like to see?
    validations:
      required: true
  - type: textarea
    id: author
    attributes:
      label: Are you interested in implementing this feature?
      description: Please wait for acknowledgement before implementing or opening a PR.
  - type: textarea
    id: notes
    attributes:
24  .github/ISSUE_TEMPLATE/5-vs-code-extension.yml (vendored)

@@ -14,11 +14,21 @@ body:
    id: version
    attributes:
      label: What version of the VS Code extension are you using?
    validations:
      required: true
  - type: input
    id: plan
    attributes:
      label: What subscription do you have?
    validations:
      required: true
  - type: input
    id: ide
    attributes:
      label: Which IDE are you using?
      description: Like `VS Code`, `Cursor`, `Windsurf`, etc.
    validations:
      required: true
  - type: input
    id: platform
    attributes:
@@ -26,11 +36,18 @@ body:
      description: |
        For MacOS and Linux: copy the output of `uname -mprs`
        For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
  - type: textarea
    id: actual
    attributes:
      label: What issue are you seeing?
      description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: What steps can reproduce the bug?
      description: Explain the bug and provide a code snippet that can reproduce it.
      description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
    validations:
      required: true
  - type: textarea
@@ -38,11 +55,6 @@ body:
    attributes:
      label: What is the expected behavior?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: actual
    attributes:
      label: What do you see instead?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: notes
    attributes:
18  .github/prompts/issue-deduplicator.txt (vendored, new file)

@@ -0,0 +1,18 @@
You are an assistant that triages new GitHub issues by identifying potential duplicates.

You will receive the following JSON files located in the current working directory:
- `codex-current-issue.json`: JSON object describing the newly created issue (fields: number, title, body).
- `codex-existing-issues.json`: JSON array of recent issues (each element includes number, title, body, createdAt).

Instructions:
- Load both files as JSON and review their contents carefully. The codex-existing-issues.json file is large, ensure you explore all of it.
- Compare the current issue against the existing issues to find up to five that appear to describe the same underlying problem or request.
- Only consider an issue a potential duplicate if there is a clear overlap in symptoms, feature requests, reproduction steps, or error messages.
- Prioritize newer issues when similarity is comparable.
- Ignore pull requests and issues whose similarity is tenuous.
- When unsure, prefer returning fewer matches.

Output requirements:
- Respond with a JSON array of issue numbers (integers), ordered from most likely duplicate to least.
- Include at most five numbers.
- If you find no plausible duplicates, respond with `[]`.
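The output contract above is machine-checkable; a minimal sketch (assuming `jq` is installed, with a hypothetical sample response) that validates the required shape:

```shell
# Hypothetical response from the deduplicator prompt (not taken from the repo).
response='[4123, 4077]'

# Valid output: a JSON array of at most five numbers; [] means no duplicates.
if printf '%s' "$response" \
  | jq -e 'type == "array" and length <= 5 and all(.[]; type == "number")' >/dev/null; then
  echo "valid"
else
  echo "invalid"
fi
```

An empty array `[]` also passes this check, matching the no-duplicates case.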
26  .github/prompts/issue-labeler.txt (vendored, new file)

@@ -0,0 +1,26 @@
You are an assistant that reviews GitHub issues for the repository.

Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
Follow these rules:
- Only pick labels out of the list below.
- Prefer a small set of precise labels over many broad ones.
- If none of the labels fit, respond with an empty JSON array: []
- Output must be a JSON array of label names (strings) with no additional commentary.

Labels to apply:
1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
3. extension — VS Code (or other IDE) extension-specific issues.
4. windows-os — Bugs or friction specific to Windows environments (PowerShell behavior, path handling, copy/paste, OS-specific auth or tooling failures).
5. mcp — Topics involving Model Context Protocol servers/clients.
6. codex-web — Issues targeting the Codex web UI/Cloud experience.
8. azure — Problems or requests tied to Azure OpenAI deployments.
9. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.

Issue information is available in environment variables:

ISSUE_NUMBER
ISSUE_TITLE
ISSUE_BODY
REPO_FULL_NAME
13  .github/workflows/ci.yml (vendored)

@@ -27,7 +27,7 @@ jobs:
      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      # build_npm_package.py requires DotSlash when staging releases.
      # stage_npm_packages.py requires DotSlash when staging releases.
      - uses: facebook/install-dotslash@v2

      - name: Stage npm package
@@ -37,10 +37,12 @@ jobs:
        run: |
          set -euo pipefail
          CODEX_VERSION=0.40.0
          PACK_OUTPUT="${RUNNER_TEMP}/codex-npm.tgz"
          python3 ./codex-cli/scripts/build_npm_package.py \
          OUTPUT_DIR="${RUNNER_TEMP}"
          python3 ./scripts/stage_npm_packages.py \
            --release-version "$CODEX_VERSION" \
            --pack-output "$PACK_OUTPUT"
            --package codex \
            --output-dir "$OUTPUT_DIR"
          PACK_OUTPUT="${OUTPUT_DIR}/codex-npm-${CODEX_VERSION}.tgz"
          echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"

      - name: Upload staged npm package artifact
@@ -58,3 +60,6 @@ jobs:
        run: ./scripts/asciicheck.py codex-cli/README.md
      - name: Check codex-cli/README ToC
        run: python3 scripts/readme_toc.py codex-cli/README.md

      - name: Prettier (run `pnpm run format:fix` to fix)
        run: pnpm run format
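For orientation, the updated staging step derives the tarball path from the version and output directory rather than hard-coding it; a minimal sketch of that naming (the directory value here is an illustrative stand-in for `${RUNNER_TEMP}`):

```shell
# Illustrative stand-in values; the naming scheme itself comes from the diff above.
CODEX_VERSION=0.40.0
OUTPUT_DIR=/tmp/runner-temp

# stage_npm_packages.py writes codex-npm-<version>.tgz into --output-dir,
# and the workflow reconstructs that path to publish it as a step output.
PACK_OUTPUT="${OUTPUT_DIR}/codex-npm-${CODEX_VERSION}.tgz"
echo "pack_output=$PACK_OUTPUT"
```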
140  .github/workflows/issue-deduplicator.yml (vendored, new file)

@@ -0,0 +1,140 @@
name: Issue Deduplicator

on:
  issues:
    types:
      - opened
      - labeled

jobs:
  gather-duplicates:
    name: Identify potential duplicates
    if: ${{ github.event.action == 'opened' || (github.event.action == 'labeled' && github.event.label.name == 'codex-deduplicate') }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
    outputs:
      codex_output: ${{ steps.codex.outputs.final-message }}
    steps:
      - uses: actions/checkout@v4

      - name: Prepare Codex inputs
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          set -eo pipefail

          CURRENT_ISSUE_FILE=codex-current-issue.json
          EXISTING_ISSUES_FILE=codex-existing-issues.json

          gh issue list --repo "${{ github.repository }}" \
            --json number,title,body,createdAt \
            --limit 1000 \
            --state all \
            --search "sort:created-desc" \
            | jq '.' \
            > "$EXISTING_ISSUES_FILE"

          gh issue view "${{ github.event.issue.number }}" \
            --repo "${{ github.repository }}" \
            --json number,title,body \
            | jq '.' \
            > "$CURRENT_ISSUE_FILE"

      - id: codex
        uses: openai/codex-action@main
        with:
          openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
          allow-users: "*"
          model: gpt-5
          prompt: |
            You are an assistant that triages new GitHub issues by identifying potential duplicates.

            You will receive the following JSON files located in the current working directory:
            - `codex-current-issue.json`: JSON object describing the newly created issue (fields: number, title, body).
            - `codex-existing-issues.json`: JSON array of recent issues (each element includes number, title, body, createdAt).

            Instructions:
            - Compare the current issue against the existing issues to find up to five that appear to describe the same underlying problem or request.
            - Focus on the underlying intent and context of each issue—such as reported symptoms, feature requests, reproduction steps, or error messages—rather than relying solely on string similarity or synthetic metrics.
            - After your analysis, validate your results in 1-2 lines explaining your decision to return the selected matches.
            - When unsure, prefer returning fewer matches.
            - Include at most five numbers.

          output-schema: |
            {
              "type": "object",
              "properties": {
                "issues": {
                  "type": "array",
                  "items": {
                    "type": "string"
                  }
                },
                "reason": { "type": "string" }
              },
              "required": ["issues", "reason"],
              "additionalProperties": false
            }

  comment-on-issue:
    name: Comment with potential duplicates
    needs: gather-duplicates
    if: ${{ needs.gather-duplicates.result != 'skipped' }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    steps:
      - name: Comment on issue
        uses: actions/github-script@v7
        env:
          CODEX_OUTPUT: ${{ needs.gather-duplicates.outputs.codex_output }}
        with:
          github-token: ${{ github.token }}
          script: |
            const raw = process.env.CODEX_OUTPUT ?? '';
            let parsed;
            try {
              parsed = JSON.parse(raw);
            } catch (error) {
              core.info(`Codex output was not valid JSON. Raw output: ${raw}`);
              core.info(`Parse error: ${error.message}`);
              return;
            }

            const issues = Array.isArray(parsed?.issues) ? parsed.issues : [];
            const currentIssueNumber = String(context.payload.issue.number);

            console.log(`Current issue number: ${currentIssueNumber}`);
            console.log(issues);

            const filteredIssues = issues.filter((value) => String(value) !== currentIssueNumber);

            if (filteredIssues.length === 0) {
              core.info('Codex reported no potential duplicates.');
              return;
            }

            const lines = [
              'Potential duplicates detected. Please review them and close your issue if it is a duplicate.',
              '',
              ...filteredIssues.map((value) => `- #${String(value)}`),
              '',
              '*Powered by [Codex Action](https://github.com/openai/codex-action)*'];

            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.issue.number,
              body: lines.join("\n"),
            });

      - name: Remove codex-deduplicate label
        if: ${{ always() && github.event.action == 'labeled' && github.event.label.name == 'codex-deduplicate' }}
        env:
          GH_TOKEN: ${{ github.token }}
          GH_REPO: ${{ github.repository }}
        run: |
          gh issue edit "${{ github.event.issue.number }}" --remove-label codex-deduplicate || true
          echo "Attempted to remove label: codex-deduplicate"
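The comment step's core logic (drop the current issue from Codex's list, then render the survivors as bullet links) can be exercised outside Actions; a shell sketch with `jq` and hypothetical issue numbers:

```shell
# Hypothetical Codex output matching the output-schema above.
CODEX_OUTPUT='{"issues":["101","202","303"],"reason":"overlapping symptoms"}'
CURRENT_ISSUE=202

# Mirror of the github-script filter: exclude the current issue,
# then format each remaining number as a markdown list item.
printf '%s' "$CODEX_OUTPUT" \
  | jq -r --arg cur "$CURRENT_ISSUE" \
      '.issues[] | select(tostring != $cur) | "- #\(.)"'
```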
115  .github/workflows/issue-labeler.yml (vendored, new file)

@@ -0,0 +1,115 @@
name: Issue Labeler

on:
  issues:
    types:
      - opened
      - labeled

jobs:
  gather-labels:
    name: Generate label suggestions
    if: ${{ github.event.action == 'opened' || (github.event.action == 'labeled' && github.event.label.name == 'codex-label') }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
    outputs:
      codex_output: ${{ steps.codex.outputs.final-message }}
    steps:
      - uses: actions/checkout@v4

      - id: codex
        uses: openai/codex-action@main
        with:
          openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
          allow-users: "*"
          prompt: |
            You are an assistant that reviews GitHub issues for the repository.

            Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
            Follow these rules:
            - Only pick labels out of the list below.
            - Prefer a small set of precise labels over many broad ones.

            Labels to apply:
            1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
            2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
            3. extension — VS Code (or other IDE) extension-specific issues.
            4. windows-os — Bugs or friction specific to Windows environments (always when PowerShell is mentioned, path handling, copy/paste, OS-specific auth or tooling failures).
            5. mcp — Topics involving Model Context Protocol servers/clients.
            6. codex-web — Issues targeting the Codex web UI/Cloud experience.
            8. azure — Problems or requests tied to Azure OpenAI deployments.
            9. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
            10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.

            Issue number: ${{ github.event.issue.number }}

            Issue title:
            ${{ github.event.issue.title }}

            Issue body:
            ${{ github.event.issue.body }}

            Repository full name:
            ${{ github.repository }}

          output-schema: |
            {
              "type": "object",
              "properties": {
                "labels": {
                  "type": "array",
                  "items": {
                    "type": "string"
                  }
                }
              },
              "required": ["labels"],
              "additionalProperties": false
            }

  apply-labels:
    name: Apply labels from Codex output
    needs: gather-labels
    if: ${{ needs.gather-labels.result != 'skipped' }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
    env:
      GH_TOKEN: ${{ github.token }}
      GH_REPO: ${{ github.repository }}
      ISSUE_NUMBER: ${{ github.event.issue.number }}
      CODEX_OUTPUT: ${{ needs.gather-labels.outputs.codex_output }}
    steps:
      - name: Apply labels
        run: |
          json=${CODEX_OUTPUT//$'\r'/}
          if [ -z "$json" ]; then
            echo "Codex produced no output. Skipping label application."
            exit 0
          fi

          if ! printf '%s' "$json" | jq -e 'type == "object" and (.labels | type == "array")' >/dev/null 2>&1; then
            echo "Codex output did not include a labels array. Raw output: $json"
            exit 0
          fi

          labels=$(printf '%s' "$json" | jq -r '.labels[] | tostring')
          if [ -z "$labels" ]; then
            echo "Codex returned an empty array. Nothing to do."
            exit 0
          fi

          cmd=(gh issue edit "$ISSUE_NUMBER")
          while IFS= read -r label; do
            cmd+=(--add-label "$label")
          done <<< "$labels"

          "${cmd[@]}" || true

      - name: Remove codex-label trigger
        if: ${{ always() && github.event.action == 'labeled' && github.event.label.name == 'codex-label' }}
        run: |
          gh issue edit "$ISSUE_NUMBER" --remove-label codex-label || true
          echo "Attempted to remove label: codex-label"
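The Apply-labels parsing above can be dry-run locally; a sketch with a hypothetical Codex output that prints the `--add-label` arguments instead of invoking `gh`:

```shell
# Hypothetical output matching the labeler's output-schema.
CODEX_OUTPUT='{"labels":["bug","windows-os"]}'

# Same validation as the workflow: strip CRs, require an object with a labels array.
json=${CODEX_OUTPUT//$'\r'/}
if printf '%s' "$json" | jq -e 'type == "object" and (.labels | type == "array")' >/dev/null 2>&1; then
  # Print the flags the workflow would append to `gh issue edit`.
  printf '%s' "$json" | jq -r '.labels[] | "--add-label \(.)"'
fi
```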
42  .github/workflows/rust-ci.yml (vendored)

@@ -148,15 +148,26 @@ jobs:
          targets: ${{ matrix.target }}
          components: clippy

      - uses: actions/cache@v4
      # Explicit cache restore: split cargo home vs target, so we can
      # avoid caching the large target dir on the gnu-dev job.
      - name: Restore cargo home cache
        id: cache_cargo_home_restore
        uses: actions/cache/restore@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            ${{ github.workspace }}/codex-rs/target/
          key: cargo-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}
          key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}

      - name: Restore target cache (except gnu-dev)
        id: cache_target_restore
        if: ${{ !(matrix.target == 'x86_64-unknown-linux-gnu' && matrix.profile != 'release') }}
        uses: actions/cache/restore@v4
        with:
          path: ${{ github.workspace }}/codex-rs/target/
          key: cargo-target-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}

      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
        name: Install musl build tools
@@ -194,6 +205,31 @@ jobs:
        env:
          RUST_BACKTRACE: 1

      # Save caches explicitly; make non-fatal so cache packaging
      # never fails the overall job. Only save when key wasn't hit.
      - name: Save cargo home cache
        if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
        continue-on-error: true
        uses: actions/cache/save@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
          key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}

      - name: Save target cache (except gnu-dev)
        if: >-
          always() && !cancelled() &&
          (steps.cache_target_restore.outputs.cache-hit != 'true') &&
          !(matrix.target == 'x86_64-unknown-linux-gnu' && matrix.profile != 'release')
        continue-on-error: true
        uses: actions/cache/save@v4
        with:
          path: ${{ github.workspace }}/codex-rs/target/
          key: cargo-target-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}

      # Fail the job if any of the previous steps failed.
      - name: verify all steps passed
        if: |
241  .github/workflows/rust-release.yml (vendored)

@@ -47,7 +47,7 @@ jobs:

  build:
    needs: tag-check
    name: ${{ matrix.runner }} - ${{ matrix.target }}
    name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
    runs-on: ${{ matrix.runner }}
    timeout-minutes: 30
    defaults:
@@ -94,11 +94,181 @@ jobs:
      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
        name: Install musl build tools
        run: |
          sudo apt install -y musl-tools pkg-config
          sudo apt-get update
          sudo apt-get install -y musl-tools pkg-config

      - name: Cargo build
        run: cargo build --target ${{ matrix.target }} --release --bin codex --bin codex-responses-api-proxy

      - if: ${{ matrix.runner == 'macos-14' }}
        name: Configure Apple code signing
        shell: bash
        env:
          KEYCHAIN_PASSWORD: actions
          APPLE_CERTIFICATE: ${{ secrets.APPLE_CERTIFICATE_P12 }}
          APPLE_CERTIFICATE_PASSWORD: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}
        run: |
          set -euo pipefail

          if [[ -z "${APPLE_CERTIFICATE:-}" ]]; then
            echo "APPLE_CERTIFICATE is required for macOS signing"
            exit 1
          fi

          if [[ -z "${APPLE_CERTIFICATE_PASSWORD:-}" ]]; then
            echo "APPLE_CERTIFICATE_PASSWORD is required for macOS signing"
            exit 1
          fi

          cert_path="${RUNNER_TEMP}/apple_signing_certificate.p12"
          echo "$APPLE_CERTIFICATE" | base64 -d > "$cert_path"

          keychain_path="${RUNNER_TEMP}/codex-signing.keychain-db"
          security create-keychain -p "$KEYCHAIN_PASSWORD" "$keychain_path"
          security set-keychain-settings -lut 21600 "$keychain_path"
          security unlock-keychain -p "$KEYCHAIN_PASSWORD" "$keychain_path"

          keychain_args=()
          cleanup_keychain() {
            if ((${#keychain_args[@]} > 0)); then
              security list-keychains -s "${keychain_args[@]}" || true
              security default-keychain -s "${keychain_args[0]}" || true
            else
              security list-keychains -s || true
            fi
            if [[ -f "$keychain_path" ]]; then
              security delete-keychain "$keychain_path" || true
            fi
          }

          while IFS= read -r keychain; do
            [[ -n "$keychain" ]] && keychain_args+=("$keychain")
          done < <(security list-keychains | sed 's/^[[:space:]]*//;s/[[:space:]]*$//;s/"//g')

          if ((${#keychain_args[@]} > 0)); then
            security list-keychains -s "$keychain_path" "${keychain_args[@]}"
          else
            security list-keychains -s "$keychain_path"
          fi

          security default-keychain -s "$keychain_path"
          security import "$cert_path" -k "$keychain_path" -P "$APPLE_CERTIFICATE_PASSWORD" -T /usr/bin/codesign -T /usr/bin/security
          security set-key-partition-list -S apple-tool:,apple: -s -k "$KEYCHAIN_PASSWORD" "$keychain_path" > /dev/null

          codesign_hashes=()
          while IFS= read -r hash; do
            [[ -n "$hash" ]] && codesign_hashes+=("$hash")
          done < <(security find-identity -v -p codesigning "$keychain_path" \
            | sed -n 's/.*\([0-9A-F]\{40\}\).*/\1/p' \
            | sort -u)

          if ((${#codesign_hashes[@]} == 0)); then
            echo "No signing identities found in $keychain_path"
            cleanup_keychain
            rm -f "$cert_path"
            exit 1
          fi

          if ((${#codesign_hashes[@]} > 1)); then
            echo "Multiple signing identities found in $keychain_path:"
            printf ' %s\n' "${codesign_hashes[@]}"
            cleanup_keychain
            rm -f "$cert_path"
            exit 1
          fi

          APPLE_CODESIGN_IDENTITY="${codesign_hashes[0]}"

          rm -f "$cert_path"

          echo "APPLE_CODESIGN_IDENTITY=$APPLE_CODESIGN_IDENTITY" >> "$GITHUB_ENV"
          echo "APPLE_CODESIGN_KEYCHAIN=$keychain_path" >> "$GITHUB_ENV"
          echo "::add-mask::$APPLE_CODESIGN_IDENTITY"

      - if: ${{ matrix.runner == 'macos-14' }}
        name: Sign macOS binaries
        shell: bash
        run: |
          set -euo pipefail

          if [[ -z "${APPLE_CODESIGN_IDENTITY:-}" ]]; then
            echo "APPLE_CODESIGN_IDENTITY is required for macOS signing"
            exit 1
          fi

          keychain_args=()
          if [[ -n "${APPLE_CODESIGN_KEYCHAIN:-}" && -f "${APPLE_CODESIGN_KEYCHAIN}" ]]; then
            keychain_args+=(--keychain "${APPLE_CODESIGN_KEYCHAIN}")
          fi

          for binary in codex codex-responses-api-proxy; do
            path="target/${{ matrix.target }}/release/${binary}"
            codesign --force --options runtime --timestamp --sign "$APPLE_CODESIGN_IDENTITY" "${keychain_args[@]}" "$path"
          done

      - if: ${{ matrix.runner == 'macos-14' }}
        name: Notarize macOS binaries
        shell: bash
        env:
          APPLE_NOTARIZATION_KEY_P8: ${{ secrets.APPLE_NOTARIZATION_KEY_P8 }}
          APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
          APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
        run: |
          set -euo pipefail

          for var in APPLE_NOTARIZATION_KEY_P8 APPLE_NOTARIZATION_KEY_ID APPLE_NOTARIZATION_ISSUER_ID; do
            if [[ -z "${!var:-}" ]]; then
              echo "$var is required for notarization"
              exit 1
            fi
          done

          notary_key_path="${RUNNER_TEMP}/notarytool.key.p8"
          echo "$APPLE_NOTARIZATION_KEY_P8" | base64 -d > "$notary_key_path"
          cleanup_notary() {
            rm -f "$notary_key_path"
          }
          trap cleanup_notary EXIT

          notarize_binary() {
            local binary="$1"
            local source_path="target/${{ matrix.target }}/release/${binary}"
            local archive_path="${RUNNER_TEMP}/${binary}.zip"

            if [[ ! -f "$source_path" ]]; then
              echo "Binary $source_path not found"
              exit 1
            fi

            rm -f "$archive_path"
            ditto -c -k --keepParent "$source_path" "$archive_path"

            submission_json=$(xcrun notarytool submit "$archive_path" \
              --key "$notary_key_path" \
              --key-id "$APPLE_NOTARIZATION_KEY_ID" \
              --issuer "$APPLE_NOTARIZATION_ISSUER_ID" \
              --output-format json \
              --wait)

            status=$(printf '%s\n' "$submission_json" | jq -r '.status // "Unknown"')
            submission_id=$(printf '%s\n' "$submission_json" | jq -r '.id // ""')

            if [[ -z "$submission_id" ]]; then
              echo "Failed to retrieve submission ID for $binary"
              exit 1
            fi

            echo "::notice title=Notarization::$binary submission ${submission_id} completed with status ${status}"

            if [[ "$status" != "Accepted" ]]; then
              echo "Notarization failed for ${binary} (submission ${submission_id}, status ${status})"
              exit 1
            fi
          }

          notarize_binary "codex"
          notarize_binary "codex-responses-api-proxy"

      - name: Stage artifacts
        shell: bash
        run: |
@@ -157,6 +327,29 @@ jobs:
            zstd -T0 -19 --rm "$dest/$base"
          done

      - name: Remove signing keychain
        if: ${{ always() && matrix.runner == 'macos-14' }}
        shell: bash
        env:
          APPLE_CODESIGN_KEYCHAIN: ${{ env.APPLE_CODESIGN_KEYCHAIN }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
if [[ -n "${APPLE_CODESIGN_KEYCHAIN:-}" ]]; then
|
||||
keychain_args=()
|
||||
while IFS= read -r keychain; do
|
||||
[[ "$keychain" == "$APPLE_CODESIGN_KEYCHAIN" ]] && continue
|
||||
[[ -n "$keychain" ]] && keychain_args+=("$keychain")
|
||||
done < <(security list-keychains | sed 's/^[[:space:]]*//;s/[[:space:]]*$//;s/"//g')
|
||||
if ((${#keychain_args[@]} > 0)); then
|
||||
security list-keychains -s "${keychain_args[@]}"
|
||||
security default-keychain -s "${keychain_args[0]}"
|
||||
fi
|
||||
|
||||
if [[ -f "$APPLE_CODESIGN_KEYCHAIN" ]]; then
|
||||
security delete-keychain "$APPLE_CODESIGN_KEYCHAIN"
|
||||
fi
|
||||
fi
|
||||
|
||||
- uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: ${{ matrix.target }}
|
||||
@@ -216,31 +409,30 @@ jobs:
|
||||
echo "npm_tag=" >> "$GITHUB_OUTPUT"
|
||||
fi
|
||||
|
||||
# build_npm_package.py requires DotSlash when staging releases.
|
||||
- uses: facebook/install-dotslash@v2
|
||||
- name: Stage codex CLI npm package
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
TMP_DIR="${RUNNER_TEMP}/npm-stage"
|
||||
./codex-cli/scripts/build_npm_package.py \
|
||||
--package codex \
|
||||
--release-version "${{ steps.release_name.outputs.name }}" \
|
||||
--staging-dir "${TMP_DIR}" \
|
||||
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-npm-${{ steps.release_name.outputs.name }}.tgz"
|
||||
- name: Setup pnpm
|
||||
uses: pnpm/action-setup@v4
|
||||
with:
|
||||
run_install: false
|
||||
|
||||
- name: Stage responses API proxy npm package
|
||||
- name: Setup Node.js for npm packaging
|
||||
uses: actions/setup-node@v5
|
||||
with:
|
||||
node-version: 22
|
||||
|
||||
- name: Install dependencies
|
||||
run: pnpm install --frozen-lockfile
|
||||
|
||||
# stage_npm_packages.py requires DotSlash when staging releases.
|
||||
- uses: facebook/install-dotslash@v2
|
||||
- name: Stage npm packages
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
TMP_DIR="${RUNNER_TEMP}/npm-stage-responses"
|
||||
./codex-cli/scripts/build_npm_package.py \
|
||||
--package codex-responses-api-proxy \
|
||||
./scripts/stage_npm_packages.py \
|
||||
--release-version "${{ steps.release_name.outputs.name }}" \
|
||||
--staging-dir "${TMP_DIR}" \
|
||||
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-responses-api-proxy-npm-${{ steps.release_name.outputs.name }}.tgz"
|
||||
--package codex \
|
||||
--package codex-responses-api-proxy \
|
||||
--package codex-sdk
|
||||
|
||||
- name: Create GitHub Release
|
||||
uses: softprops/action-gh-release@v2
|
||||
@@ -300,6 +492,10 @@ jobs:
|
||||
--repo "${GITHUB_REPOSITORY}" \
|
||||
--pattern "codex-responses-api-proxy-npm-${version}.tgz" \
|
||||
--dir dist/npm
|
||||
gh release download "$tag" \
|
||||
--repo "${GITHUB_REPOSITORY}" \
|
||||
--pattern "codex-sdk-npm-${version}.tgz" \
|
||||
--dir dist/npm
|
||||
|
||||
# No NODE_AUTH_TOKEN needed because we use OIDC.
|
||||
- name: Publish to npm
|
||||
@@ -316,6 +512,7 @@ jobs:
|
||||
tarballs=(
|
||||
"codex-npm-${VERSION}.tgz"
|
||||
"codex-responses-api-proxy-npm-${VERSION}.tgz"
|
||||
"codex-sdk-npm-${VERSION}.tgz"
|
||||
)
|
||||
|
||||
for tarball in "${tarballs[@]}"; do
|
||||
|
||||
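The notarization step above extracts `status` and `id` from `notarytool`'s JSON output with the `jq` fallbacks `.status // "Unknown"` and `.id // ""`. A minimal Python sketch of that same parsing logic, for illustration only — the sample payloads below are hypothetical, not real `notarytool` output:

```python
import json


def parse_submission(raw: str) -> tuple[str, str]:
    """Mirror the jq expressions `.status // "Unknown"` and `.id // ""`."""
    data = json.loads(raw)
    status = data.get("status") or "Unknown"
    submission_id = data.get("id") or ""
    return status, submission_id


# Hypothetical payloads shaped like `notarytool submit --output-format json` results.
accepted = '{"id": "abc-123", "status": "Accepted"}'
missing = "{}"

print(parse_submission(accepted))  # ('Accepted', 'abc-123')
print(parse_submission(missing))   # ('Unknown', '')
```

The workflow fails the job unless the parsed status is exactly `Accepted`, which is why the fallback defaults matter: a malformed response still produces a well-defined (and failing) status.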
1 .gitignore vendored
@@ -30,6 +30,7 @@ result
# cli tools
CLAUDE.md
.claude/
AGENTS.override.md

# caches
.cache/
36 AGENTS.md
@@ -8,11 +8,17 @@ In the codex-rs folder where the rust code lives:
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` or `CODEX_SANDBOX_ENV_VAR`.
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
- Similarly, when you spawn a process using Seatbelt (`/usr/bin/sandbox-exec`), `CODEX_SANDBOX=seatbelt` will be set on the child process. Integration tests that want to run Seatbelt themselves cannot be run under Seatbelt, so checks for `CODEX_SANDBOX=seatbelt` are also often used to early exit out of tests, as appropriate.
- Always collapse if statements per https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_if
- Always inline format! args when possible per https://rust-lang.github.io/rust-clippy/master/index.html#uninlined_format_args
- Use method references over closures when possible per https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure_for_method_calls
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.

Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace‑wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:

1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.

When running interactively, ask the user before running `just fix` to finalize. `just fmt` does not require approval. Project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.

## TUI style conventions

@@ -28,6 +34,7 @@ See `codex-rs/tui/styles.md`.
- Desired: vec![" └ ".into(), "M".red(), " ".dim(), "tui/src/app.rs".dim()]

### TUI Styling (ratatui)

- Prefer Stylize helpers: use "text".dim(), .bold(), .cyan(), .italic(), .underlined() instead of manual Style where possible.
- Prefer simple conversions: use "text".into() for spans and vec![…].into() for lines; when inference is ambiguous (e.g., Paragraph::new/Cell::from), use Line::from(spans) or Span::from(text).
- Computed styles: if the Style is computed at runtime, using `Span::styled` is OK (`Span::from(text).set_style(style)` is also acceptable).
@@ -39,6 +46,7 @@ See `codex-rs/tui/styles.md`.
- Compactness: prefer the form that stays on one line after rustfmt; if only one of Line::from(vec![…]) or vec![…].into() avoids wrapping, choose that. If both wrap, pick the one with fewer wrapped lines.

### Text wrapping

- Always use textwrap::wrap to wrap plain strings.
- If you have a ratatui Line and you want to wrap it, use the helpers in tui/src/wrapping.rs, e.g. word_wrap_lines / word_wrap_line.
- If you need to indent wrapped lines, use the initial_indent / subsequent_indent options from RtOptions if you can, rather than writing custom logic.
@@ -60,8 +68,34 @@ This repo uses snapshot tests (via `insta`), especially in `codex-rs/tui`, to va
- `cargo insta accept -p codex-tui`

If you don’t have the tool:

- `cargo install cargo-insta`

### Test assertions

- Tests should use pretty_assertions::assert_eq for clearer diffs. Import this at the top of the test module if it isn't already.

### Integration tests (core)

- Prefer the utilities in `core_test_support::responses` when writing end-to-end Codex tests.

  - All `mount_sse*` helpers return a `ResponseMock`; hold onto it so you can assert against outbound `/responses` POST bodies.
  - Use `ResponseMock::single_request()` when a test should only issue one POST, or `ResponseMock::requests()` to inspect every captured `ResponsesRequest`.
  - `ResponsesRequest` exposes helpers (`body_json`, `input`, `function_call_output`, `custom_tool_call_output`, `call_output`, `header`, `path`, `query_param`) so assertions can target structured payloads instead of manual JSON digging.
  - Build SSE payloads with the provided `ev_*` constructors and the `sse(...)` helper.

- Typical pattern:

```rust
let mock = responses::mount_sse_once(&server, responses::sse(vec![
    responses::ev_response_created("resp-1"),
    responses::ev_function_call(call_id, "shell", &serde_json::to_string(&args)?),
    responses::ev_completed("resp-1"),
])).await;

codex.submit(Op::UserTurn { ... }).await?;

// Assert request body if needed.
let request = mock.single_request();
// assert using request.function_call_output(call_id) or request.json_body() or other helpers.
```
@@ -1,4 +1,3 @@

<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>

<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
@@ -62,8 +61,7 @@ You can also use Codex with an API key, but this requires [additional setup](./d

### Model Context Protocol (MCP)

Codex CLI supports [MCP servers](./docs/advanced.md#model-context-protocol-mcp). Enable by adding an `mcp_servers` section to your `~/.codex/config.toml`.

Codex can access MCP servers. To configure them, refer to the [config docs](./docs/config.md#mcp_servers).

### Configuration

@@ -83,8 +81,11 @@ Codex CLI supports a rich set of configuration options, with preferences stored
- [**Authentication**](./docs/authentication.md)
  - [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)
  - [Login on a "Headless" machine](./docs/authentication.md#connecting-on-a-headless-machine)
- **Automating Codex**
  - [GitHub Action](https://github.com/openai/codex-action)
  - [TypeScript SDK](./sdk/typescript/README.md)
  - [Non-interactive mode (`codex exec`)](./docs/exec.md)
- [**Advanced**](./docs/advanced.md)
  - [Non-interactive / CI mode](./docs/advanced.md#non-interactive--ci-mode)
  - [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
  - [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](./docs/zdr.md)
35 codex-cli/bin/codex.js Executable file → Normal file
@@ -80,6 +80,32 @@ function getUpdatedPath(newDirs) {
  return updatedPath;
}

/**
 * Use heuristics to detect the package manager that was used to install Codex
 * in order to give the user a hint about how to update it.
 */
function detectPackageManager() {
  const userAgent = process.env.npm_config_user_agent || "";
  if (/\bbun\//.test(userAgent)) {
    return "bun";
  }

  const execPath = process.env.npm_execpath || "";
  if (execPath.includes("bun")) {
    return "bun";
  }

  if (
    process.env.BUN_INSTALL ||
    process.env.BUN_INSTALL_GLOBAL_DIR ||
    process.env.BUN_INSTALL_BIN_DIR
  ) {
    return "bun";
  }

  return userAgent ? "npm" : null;
}

const additionalDirs = [];
const pathDir = path.join(archRoot, "path");
if (existsSync(pathDir)) {
@@ -87,9 +113,16 @@ if (existsSync(pathDir)) {
}
const updatedPath = getUpdatedPath(additionalDirs);

const env = { ...process.env, PATH: updatedPath };
const packageManagerEnvVar =
  detectPackageManager() === "bun"
    ? "CODEX_MANAGED_BY_BUN"
    : "CODEX_MANAGED_BY_NPM";
env[packageManagerEnvVar] = "1";

const child = spawn(binaryPath, process.argv.slice(2), {
  stdio: "inherit",
  env: { ...process.env, PATH: updatedPath, CODEX_MANAGED_BY_NPM: "1" },
  env,
});

child.on("error", (err) => {
@@ -1,11 +1,19 @@
# npm releases

Run the following:

To build the 0.2.x or later version of the npm module, which runs the Rust version of the CLI, build it as follows:
Use the staging helper in the repo root to generate npm tarballs for a release. For
example, to stage the CLI, responses proxy, and SDK packages for version `0.6.0`:

```bash
./codex-cli/scripts/build_npm_package.py --release-version 0.6.0
./scripts/stage_npm_packages.py \
  --release-version 0.6.0 \
  --package codex \
  --package codex-responses-api-proxy \
  --package codex-sdk
```

Note this will create `./codex-cli/vendor/` as a side-effect.
This downloads the native artifacts once, hydrates `vendor/` for each package, and writes
tarballs to `dist/npm/`.

If you need to invoke `build_npm_package.py` directly, run
`codex-cli/scripts/install_native_deps.py` first and pass `--vendor-src` pointing to the
directory that contains the populated `vendor/` tree.
@@ -3,7 +3,6 @@

import argparse
import json
import re
import shutil
import subprocess
import sys
@@ -14,19 +13,25 @@ SCRIPT_DIR = Path(__file__).resolve().parent
CODEX_CLI_ROOT = SCRIPT_DIR.parent
REPO_ROOT = CODEX_CLI_ROOT.parent
RESPONSES_API_PROXY_NPM_ROOT = REPO_ROOT / "codex-rs" / "responses-api-proxy" / "npm"
GITHUB_REPO = "openai/codex"
CODEX_SDK_ROOT = REPO_ROOT / "sdk" / "typescript"

# The docs are not clear on what the expected value/format of
# workflow/workflowName is:
# https://cli.github.com/manual/gh_run_list
WORKFLOW_NAME = ".github/workflows/rust-release.yml"
PACKAGE_NATIVE_COMPONENTS: dict[str, list[str]] = {
    "codex": ["codex", "rg"],
    "codex-responses-api-proxy": ["codex-responses-api-proxy"],
    "codex-sdk": ["codex"],
}
COMPONENT_DEST_DIR: dict[str, str] = {
    "codex": "codex",
    "codex-responses-api-proxy": "codex-responses-api-proxy",
    "rg": "path",
}


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Build or stage the Codex CLI npm package.")
    parser.add_argument(
        "--package",
        choices=("codex", "codex-responses-api-proxy"),
        choices=("codex", "codex-responses-api-proxy", "codex-sdk"),
        default="codex",
        help="Which npm package to stage (default: codex).",
    )
@@ -37,14 +42,9 @@ def parse_args() -> argparse.Namespace:
    parser.add_argument(
        "--release-version",
        help=(
            "Version to stage for npm release. When provided, the script also resolves the "
            "matching rust-release workflow unless --workflow-url is supplied."
            "Version to stage for npm release."
        ),
    )
    parser.add_argument(
        "--workflow-url",
        help="Optional GitHub Actions workflow run URL used to download native binaries.",
    )
    parser.add_argument(
        "--staging-dir",
        type=Path,
@@ -64,6 +64,11 @@ def parse_args() -> argparse.Namespace:
        type=Path,
        help="Path where the generated npm tarball should be written.",
    )
    parser.add_argument(
        "--vendor-src",
        type=Path,
        help="Directory containing pre-installed native binaries to bundle (vendor root).",
    )
    return parser.parse_args()


@@ -86,29 +91,19 @@ def main() -> int:
    try:
        stage_sources(staging_dir, version, package)

        workflow_url = args.workflow_url
        resolved_head_sha: str | None = None
        if not workflow_url:
            if release_version:
                workflow = resolve_release_workflow(version)
                workflow_url = workflow["url"]
                resolved_head_sha = workflow.get("headSha")
            else:
                workflow_url = resolve_latest_alpha_workflow_url()
        elif release_version:
            try:
                workflow = resolve_release_workflow(version)
                resolved_head_sha = workflow.get("headSha")
            except Exception:
                resolved_head_sha = None
        vendor_src = args.vendor_src.resolve() if args.vendor_src else None
        native_components = PACKAGE_NATIVE_COMPONENTS.get(package, [])

        if release_version and resolved_head_sha:
            print(f"should `git checkout {resolved_head_sha}`")
        if native_components:
            if vendor_src is None:
                components_str = ", ".join(native_components)
                raise RuntimeError(
                    "Native components "
                    f"({components_str}) required for package '{package}'. Provide --vendor-src "
                    "pointing to a directory containing pre-installed binaries."
                )

        if not workflow_url:
            raise RuntimeError("Unable to determine workflow URL for native binaries.")

        install_native_binaries(staging_dir, workflow_url, package)
            copy_native_binaries(vendor_src, staging_dir, native_components)

        if release_version:
            staging_dir_str = str(staging_dir)
@@ -119,12 +114,20 @@ def main() -> int:
                f"  node {staging_dir_str}/bin/codex.js --version\n"
                f"  node {staging_dir_str}/bin/codex.js --help\n\n"
            )
        else:
        elif package == "codex-responses-api-proxy":
            print(
                f"Staged version {version} for release in {staging_dir_str}\n\n"
                "Verify the responses API proxy:\n"
                f"  node {staging_dir_str}/bin/codex-responses-api-proxy.js --help\n\n"
            )
        else:
            print(
                f"Staged version {version} for release in {staging_dir_str}\n\n"
                "Verify the SDK contents:\n"
                f"  ls {staging_dir_str}/dist\n"
                f"  ls {staging_dir_str}/vendor\n"
                "  node -e \"import('./dist/index.js').then(() => console.log('ok'))\"\n\n"
            )
        else:
            print(f"Staged package in {staging_dir}")
@@ -152,10 +155,9 @@ def prepare_staging_dir(staging_dir: Path | None) -> tuple[Path, bool]:


def stage_sources(staging_dir: Path, version: str, package: str) -> None:
    bin_dir = staging_dir / "bin"
    bin_dir.mkdir(parents=True, exist_ok=True)

    if package == "codex":
        bin_dir = staging_dir / "bin"
        bin_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(CODEX_CLI_ROOT / "bin" / "codex.js", bin_dir / "codex.js")
        rg_manifest = CODEX_CLI_ROOT / "bin" / "rg"
        if rg_manifest.exists():
@@ -167,6 +169,8 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:

        package_json_path = CODEX_CLI_ROOT / "package.json"
    elif package == "codex-responses-api-proxy":
        bin_dir = staging_dir / "bin"
        bin_dir.mkdir(parents=True, exist_ok=True)
        launcher_src = RESPONSES_API_PROXY_NPM_ROOT / "bin" / "codex-responses-api-proxy.js"
        shutil.copy2(launcher_src, bin_dir / "codex-responses-api-proxy.js")

@@ -175,6 +179,9 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
        shutil.copy2(readme_src, staging_dir / "README.md")

        package_json_path = RESPONSES_API_PROXY_NPM_ROOT / "package.json"
    elif package == "codex-sdk":
        package_json_path = CODEX_SDK_ROOT / "package.json"
        stage_codex_sdk_sources(staging_dir)
    else:
        raise RuntimeError(f"Unknown package '{package}'.")

@@ -182,91 +189,85 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
    package_json = json.load(fh)
    package_json["version"] = version

    if package == "codex-sdk":
        scripts = package_json.get("scripts")
        if isinstance(scripts, dict):
            scripts.pop("prepare", None)

        files = package_json.get("files")
        if isinstance(files, list):
            if "vendor" not in files:
                files.append("vendor")
        else:
            package_json["files"] = ["dist", "vendor"]

    with open(staging_dir / "package.json", "w", encoding="utf-8") as out:
        json.dump(package_json, out, indent=2)
        out.write("\n")


def install_native_binaries(staging_dir: Path, workflow_url: str, package: str) -> None:
    package_components = {
        "codex": ["codex", "rg"],
        "codex-responses-api-proxy": ["codex-responses-api-proxy"],
    }

    components = package_components.get(package)
    if components is None:
        raise RuntimeError(f"Unknown package '{package}'.")

    cmd = ["./scripts/install_native_deps.py", "--workflow-url", workflow_url]
    for component in components:
        cmd.extend(["--component", component])
    cmd.append(str(staging_dir))
    subprocess.check_call(cmd, cwd=CODEX_CLI_ROOT)
def run_command(cmd: list[str], cwd: Path | None = None) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


def resolve_latest_alpha_workflow_url() -> str:
    version = determine_latest_alpha_version()
    workflow = resolve_release_workflow(version)
    return workflow["url"]
def stage_codex_sdk_sources(staging_dir: Path) -> None:
    package_root = CODEX_SDK_ROOT

    run_command(["pnpm", "install", "--frozen-lockfile"], cwd=package_root)
    run_command(["pnpm", "run", "build"], cwd=package_root)

    dist_src = package_root / "dist"
    if not dist_src.exists():
        raise RuntimeError("codex-sdk build did not produce a dist directory.")

    shutil.copytree(dist_src, staging_dir / "dist")

    readme_src = package_root / "README.md"
    if readme_src.exists():
        shutil.copy2(readme_src, staging_dir / "README.md")

    license_src = REPO_ROOT / "LICENSE"
    if license_src.exists():
        shutil.copy2(license_src, staging_dir / "LICENSE")


def determine_latest_alpha_version() -> str:
    releases = list_releases()
    best_key: tuple[int, int, int, int] | None = None
    best_version: str | None = None
    pattern = re.compile(r"^rust-v(\d+)\.(\d+)\.(\d+)-alpha\.(\d+)$")
    for release in releases:
        tag = release.get("tag_name", "")
        match = pattern.match(tag)
        if not match:
def copy_native_binaries(vendor_src: Path, staging_dir: Path, components: list[str]) -> None:
    vendor_src = vendor_src.resolve()
    if not vendor_src.exists():
        raise RuntimeError(f"Vendor source directory not found: {vendor_src}")

    components_set = {component for component in components if component in COMPONENT_DEST_DIR}
    if not components_set:
        return

    vendor_dest = staging_dir / "vendor"
    if vendor_dest.exists():
        shutil.rmtree(vendor_dest)
    vendor_dest.mkdir(parents=True, exist_ok=True)

    for target_dir in vendor_src.iterdir():
        if not target_dir.is_dir():
            continue
        key = tuple(int(match.group(i)) for i in range(1, 5))
        if best_key is None or key > best_key:
            best_key = key
            best_version = (
                f"{match.group(1)}.{match.group(2)}.{match.group(3)}-alpha.{match.group(4)}"
            )

    if best_version is None:
        raise RuntimeError("No alpha releases found when resolving workflow URL.")
    return best_version
        dest_target_dir = vendor_dest / target_dir.name
        dest_target_dir.mkdir(parents=True, exist_ok=True)

        for component in components_set:
            dest_dir_name = COMPONENT_DEST_DIR.get(component)
            if dest_dir_name is None:
                continue


def list_releases() -> list[dict]:
    stdout = subprocess.check_output(
        ["gh", "api", f"/repos/{GITHUB_REPO}/releases?per_page=100"],
        text=True,
    )
    try:
        releases = json.loads(stdout or "[]")
    except json.JSONDecodeError as exc:
        raise RuntimeError("Unable to parse releases JSON.") from exc
    if not isinstance(releases, list):
        raise RuntimeError("Unexpected response when listing releases.")
    return releases
            src_component_dir = target_dir / dest_dir_name
            if not src_component_dir.exists():
                raise RuntimeError(
                    f"Missing native component '{component}' in vendor source: {src_component_dir}"
                )


def resolve_release_workflow(version: str) -> dict:
    stdout = subprocess.check_output(
        [
            "gh",
            "run",
            "list",
            "--branch",
            f"rust-v{version}",
            "--json",
            "workflowName,url,headSha",
            "--workflow",
            WORKFLOW_NAME,
            "--jq",
            "first(.[])",
        ],
        text=True,
    )
    workflow = json.loads(stdout or "[]")
    if not workflow:
        raise RuntimeError(f"Unable to find rust-release workflow for version {version}.")
    return workflow
            dest_component_dir = dest_target_dir / dest_dir_name
            if dest_component_dir.exists():
                shutil.rmtree(dest_component_dir)
            shutil.copytree(src_component_dir, dest_component_dir)


def run_npm_pack(staging_dir: Path, output_path: Path) -> Path:
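The `copy_native_binaries` changes above assume a vendor tree of the shape `<vendor>/<target-triple>/<component-dir>/…`, with `rg` mapped to a `path/` directory via `COMPONENT_DEST_DIR`. A self-contained sketch of that layout and copy step — the target triple and file contents below are illustrative, not taken from a real vendor tree:

```python
import shutil
import tempfile
from pathlib import Path

# Same mapping as the script: the rg binary is staged under a "path" directory.
COMPONENT_DEST_DIR = {
    "codex": "codex",
    "codex-responses-api-proxy": "codex-responses-api-proxy",
    "rg": "path",
}


def copy_components(vendor_src: Path, staging_dir: Path, components: list[str]) -> None:
    """Copy each component dir from every target triple into staging/vendor/."""
    vendor_dest = staging_dir / "vendor"
    for target_dir in vendor_src.iterdir():
        if not target_dir.is_dir():
            continue
        for component in components:
            dest_name = COMPONENT_DEST_DIR[component]
            src = target_dir / dest_name
            if not src.exists():
                raise RuntimeError(f"missing native component: {src}")
            # copytree creates vendor_dest and intermediate directories as needed.
            shutil.copytree(src, vendor_dest / target_dir.name / dest_name)


root = Path(tempfile.mkdtemp())
# Illustrative target triple; a real tree holds one directory per release target.
src = root / "vendor-src" / "aarch64-apple-darwin" / "codex"
src.mkdir(parents=True)
(src / "codex").write_text("binary placeholder")

copy_components(root / "vendor-src", root / "staging", ["codex"])
print((root / "staging" / "vendor" / "aarch64-apple-darwin" / "codex" / "codex").exists())  # True
```

This mirrors why the workflow only downloads native artifacts once: every package's staging directory is hydrated from the same pre-populated `vendor/` source.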
1794 codex-rs/Cargo.lock generated
File diff suppressed because it is too large
@@ -3,8 +3,10 @@ members = [
    "backend-client",
    "ansi-escape",
    "app-server",
    "app-server-protocol",
    "apply-patch",
    "arg0",
    "codex-infty",
    "codex-backend-openapi-models",
    "cloud-tasks",
    "cloud-tasks-client",
@@ -13,6 +15,7 @@ members = [
    "core",
    "exec",
    "execpolicy",
    "feedback",
    "file-search",
    "git-tooling",
    "linux-sandbox",
@@ -31,6 +34,7 @@ members = [
    "git-apply",
    "utils/json-to-toml",
    "utils/readiness",
    "utils/string",
]
resolver = "2"

@@ -47,12 +51,14 @@ edition = "2024"
app_test_support = { path = "app-server/tests/common" }
codex-ansi-escape = { path = "ansi-escape" }
codex-app-server = { path = "app-server" }
codex-app-server-protocol = { path = "app-server-protocol" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-chatgpt = { path = "chatgpt" }
codex-common = { path = "common" }
codex-core = { path = "core" }
codex-exec = { path = "exec" }
codex-feedback = { path = "feedback" }
codex-file-search = { path = "file-search" }
codex-git-tooling = { path = "git-tooling" }
codex-linux-sandbox = { path = "linux-sandbox" }
@@ -69,6 +75,7 @@ codex-rmcp-client = { path = "rmcp-client" }
codex-tui = { path = "tui" }
codex-utils-json-to-toml = { path = "utils/json-to-toml" }
codex-utils-readiness = { path = "utils/readiness" }
codex-utils-string = { path = "utils/string" }
core_test_support = { path = "core/tests/common" }
mcp-types = { path = "mcp-types" }
mcp_test_support = { path = "mcp-server/tests/common" }
@@ -80,9 +87,11 @@ anyhow = "1"
arboard = "3"
askama = "0.12"
assert_cmd = "2"
assert_matches = "1.5.0"
async-channel = "2.3.1"
async-stream = "0.3.6"
async-trait = "0.1.89"
axum = { version = "0.8", default-features = false }
base64 = "0.22.1"
bytes = "1.10.1"
chrono = "0.4.42"
@@ -95,11 +104,12 @@ derive_more = "2"
diffy = "0.4.2"
dirs = "6"
dotenvy = "0.15.7"
dunce = "1.0.4"
env-flags = "0.1.1"
env_logger = "0.11.5"
escargot = "0.5"
eventsource-stream = "0.2.3"
futures = "0.3"
futures = { version = "0.3", default-features = false }
icu_decimal = "2.0.0"
icu_locale_core = "2.0.0"
ignore = "0.4.23"
@@ -107,6 +117,7 @@ image = { version = "^0.25.8", default-features = false }
indexmap = "2.6.0"
insta = "1.43.2"
itertools = "0.14.0"
keyring = "3.6"
landlock = "0.4.1"
lazy_static = "1"
libc = "0.2.175"
@@ -114,6 +125,7 @@ log = "0.4"
maplit = "1.0.2"
mime_guess = "2.0.5"
multimap = "0.10.0"
notify = "8.2.0"
nucleo-matcher = "0.3.1"
openssl-sys = "*"
opentelemetry = "0.30.0"
@@ -123,6 +135,7 @@ opentelemetry-semantic-conventions = "0.30.0"
opentelemetry_sdk = "0.30.0"
os_info = "3.12.0"
owo-colors = "4.2.0"
paste = "1.0.15"
path-absolutize = "3.1.1"
path-clean = "1.0.1"
pathdiff = "0.2"
@@ -134,11 +147,14 @@ rand = "0.9"
ratatui = "0.29.0"
regex-lite = "0.1.7"
reqwest = "0.12"
rmcp = { version = "0.8.0", default-features = false }
schemars = "0.8.22"
seccompiler = "0.5.0"
sentry = "0.34.0"
serde = "1"
serde_json = "1"
serde_with = "3.14"
serial_test = "3.2.0"
sha1 = "0.10.6"
sha2 = "0.10"
shlex = "1.3.0"
@@ -164,8 +180,9 @@ tracing = "0.1.41"
tracing-appender = "0.2.3"
tracing-subscriber = "0.3.20"
tracing-test = "0.2.5"
tree-sitter = "0.25.9"
tree-sitter-bash = "0.25.0"
tree-sitter = "0.25.10"
tree-sitter-bash = "0.25"
tree-sitter-highlight = "0.25.10"
ts-rs = "11"
unicode-segmentation = "1.12.0"
unicode-width = "0.2"
@@ -233,5 +250,9 @@ strip = "symbols"
codegen-units = 1

[patch.crates-io]
# Uncomment to debug local changes.
# ratatui = { path = "../../ratatui" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }

# Uncomment to debug local changes.
# rmcp = { path = "../../rust-sdk/crates/rmcp" }
@@ -23,9 +23,15 @@ Codex supports a rich set of configuration options. Note that the Rust CLI uses

### Model Context Protocol Support

Codex CLI functions as an MCP client that can connect to MCP servers on startup. See the [`mcp_servers`](../docs/config.md#mcp_servers) section in the configuration documentation for details.
#### MCP client

It is still experimental, but you can also launch Codex as an MCP _server_ by running `codex mcp-server`. Use the [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) to try it out:
Codex CLI functions as an MCP client that allows the Codex CLI and IDE extension to connect to MCP servers on startup. See the [`configuration documentation`](../docs/config.md#mcp_servers) for details.

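As an aside (not part of this diff), an `mcp_servers` entry in `config.toml` generally takes a command plus arguments; the server name and command below are hypothetical, and the authoritative schema lives in docs/config.md:

```toml
# Hypothetical sketch of an MCP server entry; see docs/config.md#mcp_servers
# for the real schema and supported keys.
[mcp_servers.example]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-everything"]
```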
#### MCP server (experimental)

Codex can be launched as an MCP _server_ by running `codex mcp-server`. This allows _other_ MCP clients to use Codex as a tool for another agent.

Use the [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) to try it out:

```shell
npx @modelcontextprotocol/inspector codex mcp-server
@@ -71,9 +77,13 @@ To test to see what happens when a command is run under the sandbox provided by

```
# macOS
codex debug seatbelt [--full-auto] [COMMAND]...
codex sandbox macos [--full-auto] [COMMAND]...

# Linux
codex sandbox linux [--full-auto] [COMMAND]...

# Legacy aliases
codex debug seatbelt [--full-auto] [COMMAND]...
codex debug landlock [--full-auto] [COMMAND]...
```

@@ -3,11 +3,30 @@ use ansi_to_tui::IntoText;
use ratatui::text::Line;
use ratatui::text::Text;

// Expand tabs in a best-effort way for transcript rendering.
// Tabs can interact poorly with left-gutter prefixes in our TUI and CLI
// transcript views (e.g., `nl` separates line numbers from content with a tab).
// Replacing tabs with spaces avoids odd visual artifacts without changing
// semantics for our use cases.
fn expand_tabs(s: &str) -> std::borrow::Cow<'_, str> {
    if s.contains('\t') {
        // Keep it simple: replace each tab with 4 spaces.
        // We do not try to align to tab stops since most usages (like `nl`)
        // look acceptable with a fixed substitution and this avoids stateful math
        // across spans.
        std::borrow::Cow::Owned(s.replace('\t', "    "))
    } else {
        std::borrow::Cow::Borrowed(s)
    }
}

/// This function should be used when the contents of `s` are expected to match
/// a single line. If multiple lines are found, a warning is logged and only the
/// first line is returned.
pub fn ansi_escape_line(s: &str) -> Line<'static> {
    let text = ansi_escape(s);
    // Normalize tabs to spaces to avoid odd gutter collisions in transcript mode.
    let s = expand_tabs(s);
    let text = ansi_escape(&s);
    match text.lines.as_slice() {
        [] => "".into(),
        [only] => only.clone(),
24
codex-rs/app-server-protocol/Cargo.toml
Normal file
@@ -0,0 +1,24 @@
[package]
edition = "2024"
name = "codex-app-server-protocol"
version = { workspace = true }

[lib]
name = "codex_app_server_protocol"
path = "src/lib.rs"

[lints]
workspace = true

[dependencies]
codex-protocol = { workspace = true }
paste = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
strum_macros = { workspace = true }
ts-rs = { workspace = true }
uuid = { workspace = true, features = ["serde", "v7"] }

[dev-dependencies]
anyhow = { workspace = true }
pretty_assertions = { workspace = true }
67
codex-rs/app-server-protocol/src/jsonrpc_lite.rs
Normal file
@@ -0,0 +1,67 @@
//! We do not do true JSON-RPC 2.0, as we neither send nor expect the
//! "jsonrpc": "2.0" field.

use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;

pub const JSONRPC_VERSION: &str = "2.0";

#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, Hash, Eq, TS)]
#[serde(untagged)]
pub enum RequestId {
    String(String),
    #[ts(type = "number")]
    Integer(i64),
}

pub type Result = serde_json::Value;

/// Refers to any valid JSON-RPC object that can be decoded off the wire, or encoded to be sent.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
#[serde(untagged)]
pub enum JSONRPCMessage {
    Request(JSONRPCRequest),
    Notification(JSONRPCNotification),
    Response(JSONRPCResponse),
    Error(JSONRPCError),
}

/// A request that expects a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCRequest {
    pub id: RequestId,
    pub method: String,
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub params: Option<serde_json::Value>,
}

/// A notification which does not expect a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCNotification {
    pub method: String,
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub params: Option<serde_json::Value>,
}

/// A successful (non-error) response to a request.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCResponse {
    pub id: RequestId,
    pub result: Result,
}

/// A response to a request that indicates an error occurred.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCError {
    pub error: JSONRPCErrorError,
    pub id: RequestId,
}

#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCErrorError {
    pub code: i64,
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub data: Option<serde_json::Value>,
    pub message: String,
}
5
codex-rs/app-server-protocol/src/lib.rs
Normal file
@@ -0,0 +1,5 @@
mod jsonrpc_lite;
mod protocol;

pub use jsonrpc_lite::*;
pub use protocol::*;
@@ -1,77 +1,28 @@
use std::collections::HashMap;
use std::fmt::Display;
use std::path::PathBuf;

use crate::config_types::ReasoningEffort;
use crate::config_types::ReasoningSummary;
use crate::config_types::SandboxMode;
use crate::config_types::Verbosity;
use crate::protocol::AskForApproval;
use crate::protocol::EventMsg;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::TurnAbortReason;
use mcp_types::JSONRPCNotification;
use mcp_types::RequestId;
use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
use crate::RequestId;
use codex_protocol::ConversationId;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::TurnAbortReason;
use paste::paste;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;

#[derive(Debug, Clone, Copy, PartialEq, Eq, TS, Hash)]
#[ts(type = "string")]
pub struct ConversationId {
    uuid: Uuid,
}

impl ConversationId {
    pub fn new() -> Self {
        Self {
            uuid: Uuid::now_v7(),
        }
    }

    pub fn from_string(s: &str) -> Result<Self, uuid::Error> {
        Ok(Self {
            uuid: Uuid::parse_str(s)?,
        })
    }
}

impl Default for ConversationId {
    fn default() -> Self {
        Self::new()
    }
}

impl Display for ConversationId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.uuid)
    }
}

impl Serialize for ConversationId {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        serializer.collect_str(&self.uuid)
    }
}

impl<'de> Deserialize<'de> for ConversationId {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: serde::Deserializer<'de>,
    {
        let value = String::deserialize(deserializer)?;
        let uuid = Uuid::parse_str(&value).map_err(serde::de::Error::custom)?;
        Ok(Self { uuid })
    }
}

#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, TS)]
#[ts(type = "string")]
pub struct GitSha(pub String);
@@ -89,117 +40,137 @@ pub enum AuthMode {
    ChatGPT,
}

/// Request from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ClientRequest {
/// Generates an `enum ClientRequest` where each variant is a request that the
/// client can send to the server. Each variant has associated `params` and
/// `response` types. Also generates a `export_client_responses()` function to
/// export all response types to TypeScript.
macro_rules! client_request_definitions {
    (
        $(
            $(#[$variant_meta:meta])*
            $variant:ident {
                params: $(#[$params_meta:meta])* $params:ty,
                response: $response:ty,
            }
        ),* $(,)?
    ) => {
        /// Request from the client to the server.
        #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
        #[serde(tag = "method", rename_all = "camelCase")]
        pub enum ClientRequest {
            $(
                $(#[$variant_meta])*
                $variant {
                    #[serde(rename = "id")]
                    request_id: RequestId,
                    $(#[$params_meta])*
                    params: $params,
                },
            )*
        }

        pub fn export_client_responses(
            out_dir: &::std::path::Path,
        ) -> ::std::result::Result<(), ::ts_rs::ExportError> {
            $(
                <$response as ::ts_rs::TS>::export_all_to(out_dir)?;
            )*
            Ok(())
        }
    };
}

client_request_definitions! {
    Initialize {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: InitializeParams,
        response: InitializeResponse,
    },
    NewConversation {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: NewConversationParams,
        response: NewConversationResponse,
    },
    /// List recorded Codex conversations (rollouts) with optional pagination and search.
    ListConversations {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ListConversationsParams,
        response: ListConversationsResponse,
    },
    /// Resume a recorded Codex conversation from a rollout file.
    ResumeConversation {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ResumeConversationParams,
        response: ResumeConversationResponse,
    },
    ArchiveConversation {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ArchiveConversationParams,
        response: ArchiveConversationResponse,
    },
    SendUserMessage {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: SendUserMessageParams,
        response: SendUserMessageResponse,
    },
    SendUserTurn {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: SendUserTurnParams,
        response: SendUserTurnResponse,
    },
    InterruptConversation {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: InterruptConversationParams,
        response: InterruptConversationResponse,
    },
    AddConversationListener {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: AddConversationListenerParams,
        response: AddConversationSubscriptionResponse,
    },
    RemoveConversationListener {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: RemoveConversationListenerParams,
        response: RemoveConversationSubscriptionResponse,
    },
    GitDiffToRemote {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: GitDiffToRemoteParams,
        response: GitDiffToRemoteResponse,
    },
    LoginApiKey {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: LoginApiKeyParams,
        response: LoginApiKeyResponse,
    },
    LoginChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: LoginChatGptResponse,
    },
    CancelLoginChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: CancelLoginChatGptParams,
        response: CancelLoginChatGptResponse,
    },
    LogoutChatGpt {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: LogoutChatGptResponse,
    },
    GetAuthStatus {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: GetAuthStatusParams,
        response: GetAuthStatusResponse,
    },
    GetUserSavedConfig {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: GetUserSavedConfigResponse,
    },
    SetDefaultModel {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: SetDefaultModelParams,
        response: SetDefaultModelResponse,
    },
    GetUserAgent {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: GetUserAgentResponse,
    },
    UserInfo {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
        response: UserInfoResponse,
    },
    FuzzyFileSearch {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: FuzzyFileSearchParams,
        response: FuzzyFileSearchResponse,
    },
    /// Execute a command (argv vector) under the server's sandbox.
    ExecOneOffCommand {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ExecOneOffCommandParams,
        response: ExecOneOffCommandResponse,
    },
}

@@ -429,7 +400,7 @@ pub struct ExecOneOffCommandParams {

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecArbitraryCommandResponse {
pub struct ExecOneOffCommandResponse {
    pub exit_code: i32,
    pub stdout: String,
    pub stderr: String,
@@ -633,30 +604,74 @@ pub enum InputItem {
    },
}

// TODO(mbolin): Need test to ensure these constants match the enum variants.
/// Generates an `enum ServerRequest` where each variant is a request that the
/// server can send to the client along with the corresponding params and
/// response types. It also generates helper types used by the app/server
/// infrastructure (payload enum, request constructor, and export helpers).
macro_rules! server_request_definitions {
    (
        $(
            $(#[$variant_meta:meta])*
            $variant:ident
        ),* $(,)?
    ) => {
        paste! {
            /// Request initiated from the server and sent to the client.
            #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
            #[serde(tag = "method", rename_all = "camelCase")]
            pub enum ServerRequest {
                $(
                    $(#[$variant_meta])*
                    $variant {
                        #[serde(rename = "id")]
                        request_id: RequestId,
                        params: [<$variant Params>],
                    },
                )*
            }

pub const APPLY_PATCH_APPROVAL_METHOD: &str = "applyPatchApproval";
pub const EXEC_COMMAND_APPROVAL_METHOD: &str = "execCommandApproval";
            #[derive(Debug, Clone, PartialEq)]
            pub enum ServerRequestPayload {
                $( $variant([<$variant Params>]), )*
            }

/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
            impl ServerRequestPayload {
                pub fn request_with_id(self, request_id: RequestId) -> ServerRequest {
                    match self {
                        $(Self::$variant(params) => ServerRequest::$variant { request_id, params },)*
                    }
                }
            }
        }

        pub fn export_server_responses(
            out_dir: &::std::path::Path,
        ) -> ::std::result::Result<(), ::ts_rs::ExportError> {
            paste! {
                $(<[<$variant Response>] as ::ts_rs::TS>::export_all_to(out_dir)?;)*
            }
            Ok(())
        }
    };
}

impl TryFrom<JSONRPCRequest> for ServerRequest {
    type Error = serde_json::Error;

    fn try_from(value: JSONRPCRequest) -> Result<Self, Self::Error> {
        serde_json::from_value(serde_json::to_value(value)?)
    }
}

server_request_definitions! {
    /// Request to approve a patch.
    ApplyPatchApproval {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ApplyPatchApprovalParams,
    },
    ApplyPatchApproval,
    /// Request to exec a command.
    ExecCommandApproval {
        #[serde(rename = "id")]
        request_id: RequestId,
        params: ExecCommandApprovalParams,
    },
    ExecCommandApproval,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
    pub conversation_id: ConversationId,
    /// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
@@ -673,6 +688,7 @@ pub struct ApplyPatchApprovalParams {
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
    pub conversation_id: ConversationId,
    /// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
@@ -682,6 +698,7 @@ pub struct ExecCommandApprovalParams {
    pub cwd: PathBuf,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
    pub parsed_cmd: Vec<ParsedCommand>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
@@ -710,6 +727,7 @@ pub struct FuzzyFileSearchParams {
pub struct FuzzyFileSearchResult {
    pub root: String,
    pub path: String,
    pub file_name: String,
    pub score: u32,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub indices: Option<Vec<u32>>,
@@ -746,6 +764,7 @@ pub struct SessionConfiguredNotification {
    pub history_log_id: u64,

    /// Current number of entries in the history log.
    #[ts(type = "number")]
    pub history_entry_count: usize,

    /// Optional initial messages (as events) for resumed sessions.
@@ -842,12 +861,6 @@ mod tests {
        Ok(())
    }

    #[test]
    fn test_conversation_id_default_is_not_zeroes() {
        let id = ConversationId::default();
        assert_ne!(id.uuid, Uuid::nil());
    }

    #[test]
    fn conversation_id_serializes_as_plain_string() -> Result<()> {
        let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
@@ -883,4 +896,48 @@ mod tests {
        );
        Ok(())
    }

    #[test]
    fn serialize_server_request() -> Result<()> {
        let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
        let params = ExecCommandApprovalParams {
            conversation_id,
            call_id: "call-42".to_string(),
            command: vec!["echo".to_string(), "hello".to_string()],
            cwd: PathBuf::from("/tmp"),
            reason: Some("because tests".to_string()),
            parsed_cmd: vec![ParsedCommand::Unknown {
                cmd: "echo hello".to_string(),
            }],
        };
        let request = ServerRequest::ExecCommandApproval {
            request_id: RequestId::Integer(7),
            params: params.clone(),
        };

        assert_eq!(
            json!({
                "method": "execCommandApproval",
                "id": 7,
                "params": {
                    "conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
                    "callId": "call-42",
                    "command": ["echo", "hello"],
                    "cwd": "/tmp",
                    "reason": "because tests",
                    "parsedCmd": [
                        {
                            "type": "unknown",
                            "cmd": "echo hello"
                        }
                    ]
                }
            }),
            serde_json::to_value(&request)?,
        );

        let payload = ServerRequestPayload::ExecCommandApproval(params);
        assert_eq!(payload.request_with_id(RequestId::Integer(7)), request);
        Ok(())
    }
}
@@ -22,10 +22,8 @@ codex-core = { workspace = true }
codex-file-search = { workspace = true }
codex-login = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-utils-json-to-toml = { workspace = true }
# We should only be using mcp-types for JSON-RPC types: it would be nice to
# split this out into a separate crate at some point.
mcp-types = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = [

15
codex-rs/app-server/README.md
Normal file
@@ -0,0 +1,15 @@
# codex-app-server

`codex app-server` is the harness Codex uses to power rich interfaces such as the [Codex VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.

## Protocol

Similar to [MCP](https://modelcontextprotocol.io/), `codex app-server` supports bidirectional communication, streaming JSONL over stdio. The protocol is JSON-RPC 2.0, though the `"jsonrpc":"2.0"` header is omitted.
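As an illustration (not part of the diff itself), the framing can be sketched as newline-delimited JSON objects that omit the `jsonrpc` field; the classification rules below are assumptions based on the `JSONRPCMessage` types in this change, and the params shown are hypothetical:

```python
import json

# Minimal sketch of the JSON-RPC-lite framing: one JSON object per line,
# with no "jsonrpc": "2.0" member (unlike strict JSON-RPC 2.0).
def encode_request(request_id, method, params=None):
    msg = {"id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

def classify(line):
    # A message with "method" is a request (has "id") or a notification;
    # otherwise it is an error or a plain response.
    msg = json.loads(line)
    if "method" in msg:
        return "request" if "id" in msg else "notification"
    return "error" if "error" in msg else "response"
```

This mirrors the untagged `JSONRPCMessage` enum: the decoder distinguishes variants purely by which fields are present.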

## Message Schema

Currently, you can dump a TypeScript version of the schema using `codex generate-ts`. It is specific to the version of Codex you used to run `generate-ts`, so the two are guaranteed to be compatible.

```
codex generate-ts --out DIR
```
@@ -3,10 +3,57 @@ use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::fuzzy_file_search::run_fuzzy_file_search;
use crate::outgoing_message::OutgoingMessageSender;
use crate::outgoing_message::OutgoingNotification;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::ApplyPatchApprovalParams;
use codex_app_server_protocol::ApplyPatchApprovalResponse;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::ArchiveConversationResponse;
use codex_app_server_protocol::AuthStatusChangeNotification;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ConversationSummary;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
use codex_app_server_protocol::ExecOneOffCommandParams;
use codex_app_server_protocol::ExecOneOffCommandResponse;
use codex_app_server_protocol::FuzzyFileSearchParams;
use codex_app_server_protocol::FuzzyFileSearchResponse;
use codex_app_server_protocol::GetUserAgentResponse;
use codex_app_server_protocol::GetUserSavedConfigResponse;
use codex_app_server_protocol::GitDiffToRemoteResponse;
use codex_app_server_protocol::InputItem as WireInputItem;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::LoginApiKeyResponse;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::RemoveConversationSubscriptionResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::Result as JsonRpcResult;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::SendUserTurnResponse;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::SessionConfiguredNotification;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::SetDefaultModelResponse;
use codex_app_server_protocol::UserInfoResponse;
use codex_app_server_protocol::UserSavedConfig;
use codex_core::AuthManager;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::Cursor as RolloutCursor;
use codex_core::INTERACTIVE_SESSION_SOURCES;
use codex_core::NewConversation;
use codex_core::RolloutRecorder;
use codex_core::SessionMeta;
@@ -36,58 +83,12 @@ use codex_core::protocol::ReviewDecision;
use codex_login::ServerOptions as LoginServerOptions;
use codex_login::ShutdownHandle;
use codex_login::run_login_server;
use codex_protocol::mcp_protocol::APPLY_PATCH_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ApplyPatchApprovalParams;
use codex_protocol::mcp_protocol::ApplyPatchApprovalResponse;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::ArchiveConversationResponse;
use codex_protocol::mcp_protocol::AuthStatusChangeNotification;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::ConversationSummary;
use codex_protocol::mcp_protocol::EXEC_COMMAND_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::ExecArbitraryCommandResponse;
use codex_protocol::mcp_protocol::ExecCommandApprovalParams;
use codex_protocol::mcp_protocol::ExecCommandApprovalResponse;
use codex_protocol::mcp_protocol::ExecOneOffCommandParams;
use codex_protocol::mcp_protocol::FuzzyFileSearchParams;
use codex_protocol::mcp_protocol::FuzzyFileSearchResponse;
use codex_protocol::mcp_protocol::GetUserAgentResponse;
use codex_protocol::mcp_protocol::GetUserSavedConfigResponse;
use codex_protocol::mcp_protocol::GitDiffToRemoteResponse;
use codex_protocol::mcp_protocol::InputItem as WireInputItem;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::ListConversationsResponse;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::LoginApiKeyResponse;
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::RemoveConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SessionConfiguredNotification;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use codex_protocol::mcp_protocol::UserInfoResponse;
use codex_protocol::mcp_protocol::UserSavedConfig;
use codex_protocol::ConversationId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::InputMessageKind;
use codex_protocol::protocol::USER_MESSAGE_BEGIN;
use codex_utils_json_to_toml::json_to_toml;
use mcp_types::JSONRPCErrorError;
use mcp_types::RequestId;
use std::collections::HashMap;
use std::ffi::OsStr;
use std::path::PathBuf;
@@ -193,28 +194,43 @@ impl CodexMessageProcessor {
             ClientRequest::LoginApiKey { request_id, params } => {
                 self.login_api_key(request_id, params).await;
             }
-            ClientRequest::LoginChatGpt { request_id } => {
+            ClientRequest::LoginChatGpt {
+                request_id,
+                params: _,
+            } => {
                 self.login_chatgpt(request_id).await;
             }
             ClientRequest::CancelLoginChatGpt { request_id, params } => {
                 self.cancel_login_chatgpt(request_id, params.login_id).await;
             }
-            ClientRequest::LogoutChatGpt { request_id } => {
+            ClientRequest::LogoutChatGpt {
+                request_id,
+                params: _,
+            } => {
                 self.logout_chatgpt(request_id).await;
             }
             ClientRequest::GetAuthStatus { request_id, params } => {
                 self.get_auth_status(request_id, params).await;
             }
-            ClientRequest::GetUserSavedConfig { request_id } => {
+            ClientRequest::GetUserSavedConfig {
+                request_id,
+                params: _,
+            } => {
                 self.get_user_saved_config(request_id).await;
             }
             ClientRequest::SetDefaultModel { request_id, params } => {
                 self.set_default_model(request_id, params).await;
             }
-            ClientRequest::GetUserAgent { request_id } => {
+            ClientRequest::GetUserAgent {
+                request_id,
+                params: _,
+            } => {
                 self.get_user_agent(request_id).await;
             }
-            ClientRequest::UserInfo { request_id } => {
+            ClientRequest::UserInfo {
+                request_id,
+                params: _,
+            } => {
                 self.get_user_info(request_id).await;
             }
             ClientRequest::FuzzyFileSearch { request_id, params } => {
@@ -371,7 +387,7 @@ impl CodexMessageProcessor {
             self.outgoing
                 .send_response(
                     request_id,
-                    codex_protocol::mcp_protocol::CancelLoginChatGptResponse {},
+                    codex_app_server_protocol::CancelLoginChatGptResponse {},
                 )
                 .await;
         } else {
@@ -407,7 +423,7 @@ impl CodexMessageProcessor {
             self.outgoing
                 .send_response(
                     request_id,
-                    codex_protocol::mcp_protocol::LogoutChatGptResponse {},
+                    codex_app_server_protocol::LogoutChatGptResponse {},
                 )
                 .await;
@@ -425,7 +441,7 @@ impl CodexMessageProcessor {
     async fn get_auth_status(
         &self,
         request_id: RequestId,
-        params: codex_protocol::mcp_protocol::GetAuthStatusParams,
+        params: codex_app_server_protocol::GetAuthStatusParams,
     ) {
         let include_token = params.include_token.unwrap_or(false);
         let do_refresh = params.refresh_token.unwrap_or(false);
@@ -440,7 +456,7 @@ impl CodexMessageProcessor {
         let requires_openai_auth = self.config.model_provider.requires_openai_auth;

         let response = if !requires_openai_auth {
-            codex_protocol::mcp_protocol::GetAuthStatusResponse {
+            codex_app_server_protocol::GetAuthStatusResponse {
                 auth_method: None,
                 auth_token: None,
                 requires_openai_auth: Some(false),
@@ -460,13 +476,13 @@ impl CodexMessageProcessor {
                         (None, None)
                     }
                 };
-                codex_protocol::mcp_protocol::GetAuthStatusResponse {
+                codex_app_server_protocol::GetAuthStatusResponse {
                     auth_method: reported_auth_method,
                     auth_token: token_opt,
                     requires_openai_auth: Some(true),
                 }
             }
-            None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
+            None => codex_app_server_protocol::GetAuthStatusResponse {
                 auth_method: None,
                 auth_token: None,
                 requires_openai_auth: Some(true),
@@ -484,7 +500,7 @@ impl CodexMessageProcessor {
     }

     async fn get_user_saved_config(&self, request_id: RequestId) {
-        let toml_value = match load_config_as_toml(&self.config.codex_home) {
+        let toml_value = match load_config_as_toml(&self.config.codex_home).await {
             Ok(val) => val,
             Err(err) => {
                 let error = JSONRPCErrorError {
@@ -617,7 +633,7 @@ impl CodexMessageProcessor {
             .await
         {
             Ok(output) => {
-                let response = ExecArbitraryCommandResponse {
+                let response = ExecOneOffCommandResponse {
                     exit_code: output.exit_code,
                     stdout: output.stdout.text,
                     stderr: output.stderr.text,
@@ -637,18 +653,19 @@ impl CodexMessageProcessor {
     }

     async fn process_new_conversation(&self, request_id: RequestId, params: NewConversationParams) {
-        let config = match derive_config_from_params(params, self.codex_linux_sandbox_exe.clone()) {
-            Ok(config) => config,
-            Err(err) => {
-                let error = JSONRPCErrorError {
-                    code: INVALID_REQUEST_ERROR_CODE,
-                    message: format!("error deriving config: {err}"),
-                    data: None,
-                };
-                self.outgoing.send_error(request_id, error).await;
-                return;
-            }
-        };
+        let config =
+            match derive_config_from_params(params, self.codex_linux_sandbox_exe.clone()).await {
+                Ok(config) => config,
+                Err(err) => {
+                    let error = JSONRPCErrorError {
+                        code: INVALID_REQUEST_ERROR_CODE,
+                        message: format!("error deriving config: {err}"),
+                        data: None,
+                    };
+                    self.outgoing.send_error(request_id, error).await;
+                    return;
+                }
+            };

         match self.conversation_manager.new_conversation(config).await {
             Ok(conversation_id) => {
@@ -693,6 +710,7 @@ impl CodexMessageProcessor {
             &self.config.codex_home,
             page_size,
             cursor_ref,
+            INTERACTIVE_SESSION_SOURCES,
         )
         .await
         {
@@ -735,7 +753,7 @@ impl CodexMessageProcessor {
         // Derive a Config using the same logic as new conversation, honoring overrides if provided.
         let config = match params.overrides {
             Some(overrides) => {
-                derive_config_from_params(overrides, self.codex_linux_sandbox_exe.clone())
+                derive_config_from_params(overrides, self.codex_linux_sandbox_exe.clone()).await
             }
             None => Ok(self.config.as_ref().clone()),
         };
@@ -793,7 +811,7 @@ impl CodexMessageProcessor {
         });

         // Reply with conversation id + model and initial messages (when present)
-        let response = codex_protocol::mcp_protocol::ResumeConversationResponse {
+        let response = codex_app_server_protocol::ResumeConversationResponse {
             conversation_id,
             model: session_configured.model.clone(),
             initial_messages,
@@ -1253,9 +1271,8 @@ async fn apply_bespoke_event_handling(
                 reason,
                 grant_root,
             };
-            let value = serde_json::to_value(&params).unwrap_or_default();
             let rx = outgoing
-                .send_request(APPLY_PATCH_APPROVAL_METHOD, Some(value))
+                .send_request(ServerRequestPayload::ApplyPatchApproval(params))
                 .await;
             // TODO(mbolin): Enforce a timeout so this task does not live indefinitely?
             tokio::spawn(async move {
@@ -1267,6 +1284,7 @@ async fn apply_bespoke_event_handling(
             command,
             cwd,
             reason,
+            parsed_cmd,
         }) => {
             let params = ExecCommandApprovalParams {
                 conversation_id,
@@ -1274,10 +1292,10 @@ async fn apply_bespoke_event_handling(
                 command,
                 cwd,
                 reason,
                 parsed_cmd,
             };
-            let value = serde_json::to_value(&params).unwrap_or_default();
             let rx = outgoing
-                .send_request(EXEC_COMMAND_APPROVAL_METHOD, Some(value))
+                .send_request(ServerRequestPayload::ExecCommandApproval(params))
                 .await;

             // TODO(mbolin): Enforce a timeout so this task does not live indefinitely?
@@ -1305,7 +1323,7 @@ async fn apply_bespoke_event_handling(
     }
 }

-fn derive_config_from_params(
+async fn derive_config_from_params(
     params: NewConversationParams,
     codex_linux_sandbox_exe: Option<PathBuf>,
 ) -> std::io::Result<Config> {
@@ -1343,12 +1361,12 @@ fn derive_config_from_params(
         .map(|(k, v)| (k, json_to_toml(v)))
         .collect();

-    Config::load_with_cli_overrides(cli_overrides, overrides)
+    Config::load_with_cli_overrides(cli_overrides, overrides).await
 }

 async fn on_patch_approval_response(
     event_id: String,
-    receiver: oneshot::Receiver<mcp_types::Result>,
+    receiver: oneshot::Receiver<JsonRpcResult>,
     codex: Arc<CodexConversation>,
 ) {
     let response = receiver.await;
@@ -1390,7 +1408,7 @@ async fn on_patch_approval_response(

 async fn on_exec_approval_response(
     event_id: String,
-    receiver: oneshot::Receiver<mcp_types::Result>,
+    receiver: oneshot::Receiver<JsonRpcResult>,
     conversation: Arc<CodexConversation>,
 ) {
     let response = receiver.await;
@@ -1,11 +1,12 @@
 use std::num::NonZero;
 use std::num::NonZeroUsize;
+use std::path::Path;
 use std::path::PathBuf;
 use std::sync::Arc;
 use std::sync::atomic::AtomicBool;

+use codex_app_server_protocol::FuzzyFileSearchResult;
 use codex_file_search as file_search;
-use codex_protocol::mcp_protocol::FuzzyFileSearchResult;
 use tokio::task::JoinSet;
 use tracing::warn;
@@ -56,9 +57,16 @@ pub(crate) async fn run_fuzzy_file_search(
         match res {
             Ok(Ok((root, res))) => {
                 for m in res.matches {
+                    let path = m.path;
+                    //TODO(shijie): Move file name generation to file_search lib.
+                    let file_name = Path::new(&path)
+                        .file_name()
+                        .map(|name| name.to_string_lossy().into_owned())
+                        .unwrap_or_else(|| path.clone());
                     let result = FuzzyFileSearchResult {
                         root: root.clone(),
-                        path: m.path,
+                        path,
+                        file_name,
                         score: m.score,
                         indices: m.indices,
                     };
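The hunk above derives a display `file_name` from each match path, falling back to the whole path string when there is no final component. A minimal stdlib-only sketch of that fallback (the helper name `file_name_for` is ours, not the crate's):

```rust
use std::path::Path;

// Hypothetical helper mirroring the diff's fallback: take the final path
// component when one exists, otherwise reuse the whole path string.
fn file_name_for(path: &str) -> String {
    Path::new(path)
        .file_name()
        .map(|name| name.to_string_lossy().into_owned())
        .unwrap_or_else(|| path.to_string())
}

fn main() {
    assert_eq!(file_name_for("src/fuzzy_file_search.rs"), "fuzzy_file_search.rs");
    // `Path::file_name` returns `None` for "..", so the fallback kicks in.
    assert_eq!(file_name_for(".."), "..");
}
```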
@@ -8,7 +8,7 @@ use codex_common::CliConfigOverrides;
 use codex_core::config::Config;
 use codex_core::config::ConfigOverrides;

-use mcp_types::JSONRPCMessage;
+use codex_app_server_protocol::JSONRPCMessage;
 use tokio::io::AsyncBufReadExt;
 use tokio::io::AsyncWriteExt;
 use tokio::io::BufReader;
@@ -81,6 +81,7 @@ pub async fn run_main(
         )
     })?;
     let config = Config::load_with_cli_overrides(cli_kv_overrides, ConfigOverrides::default())
+        .await
         .map_err(|e| {
             std::io::Error::new(ErrorKind::InvalidData, format!("error loading config: {e}"))
         })?;
@@ -111,17 +112,17 @@ pub async fn run_main(
     let stdout_writer_handle = tokio::spawn(async move {
         let mut stdout = io::stdout();
         while let Some(outgoing_message) = outgoing_rx.recv().await {
-            let msg: JSONRPCMessage = outgoing_message.into();
-            match serde_json::to_string(&msg) {
-                Ok(json) => {
+            let Ok(value) = serde_json::to_value(outgoing_message) else {
+                error!("Failed to convert OutgoingMessage to JSON value");
+                continue;
+            };
+            match serde_json::to_string(&value) {
+                Ok(mut json) => {
+                    json.push('\n');
                     if let Err(e) = stdout.write_all(json.as_bytes()).await {
                         error!("Failed to write to stdout: {e}");
                         break;
                     }
-                    if let Err(e) = stdout.write_all(b"\n").await {
-                        error!("Failed to write newline to stdout: {e}");
-                        break;
-                    }
                 }
                 Err(e) => error!("Failed to serialize JSONRPCMessage: {e}"),
             }
@@ -3,20 +3,21 @@ use std::path::PathBuf;
 use crate::codex_message_processor::CodexMessageProcessor;
 use crate::error_code::INVALID_REQUEST_ERROR_CODE;
 use crate::outgoing_message::OutgoingMessageSender;
-use codex_protocol::mcp_protocol::ClientInfo;
-use codex_protocol::mcp_protocol::ClientRequest;
-use codex_protocol::mcp_protocol::InitializeResponse;
+use codex_app_server_protocol::ClientInfo;
+use codex_app_server_protocol::ClientRequest;
+use codex_app_server_protocol::InitializeResponse;
+use codex_app_server_protocol::JSONRPCError;
+use codex_app_server_protocol::JSONRPCErrorError;
+use codex_app_server_protocol::JSONRPCNotification;
+use codex_app_server_protocol::JSONRPCRequest;
+use codex_app_server_protocol::JSONRPCResponse;
 use codex_core::AuthManager;
 use codex_core::ConversationManager;
 use codex_core::config::Config;
 use codex_core::default_client::USER_AGENT_SUFFIX;
 use codex_core::default_client::get_codex_user_agent;
-use mcp_types::JSONRPCError;
-use mcp_types::JSONRPCErrorError;
-use mcp_types::JSONRPCNotification;
-use mcp_types::JSONRPCRequest;
-use mcp_types::JSONRPCResponse;
+use codex_protocol::protocol::SessionSource;
 use std::sync::Arc;

 pub(crate) struct MessageProcessor {
@@ -34,8 +35,11 @@ impl MessageProcessor {
         config: Arc<Config>,
     ) -> Self {
         let outgoing = Arc::new(outgoing);
-        let auth_manager = AuthManager::shared(config.codex_home.clone());
-        let conversation_manager = Arc::new(ConversationManager::new(auth_manager.clone()));
+        let auth_manager = AuthManager::shared(config.codex_home.clone(), false);
+        let conversation_manager = Arc::new(ConversationManager::new(
+            auth_manager.clone(),
+            SessionSource::VSCode,
+        ));
         let codex_message_processor = CodexMessageProcessor::new(
             auth_manager,
             conversation_manager,
@@ -2,16 +2,12 @@ use std::collections::HashMap;
 use std::sync::atomic::AtomicI64;
 use std::sync::atomic::Ordering;

-use codex_protocol::mcp_protocol::ServerNotification;
-use mcp_types::JSONRPC_VERSION;
-use mcp_types::JSONRPCError;
-use mcp_types::JSONRPCErrorError;
-use mcp_types::JSONRPCMessage;
-use mcp_types::JSONRPCNotification;
-use mcp_types::JSONRPCRequest;
-use mcp_types::JSONRPCResponse;
-use mcp_types::RequestId;
-use mcp_types::Result;
+use codex_app_server_protocol::JSONRPCErrorError;
+use codex_app_server_protocol::RequestId;
+use codex_app_server_protocol::Result;
+use codex_app_server_protocol::ServerNotification;
+use codex_app_server_protocol::ServerRequest;
+use codex_app_server_protocol::ServerRequestPayload;
 use serde::Serialize;
 use tokio::sync::Mutex;
 use tokio::sync::mpsc;
@@ -38,8 +34,7 @@ impl OutgoingMessageSender {

     pub(crate) async fn send_request(
         &self,
-        method: &str,
-        params: Option<serde_json::Value>,
+        request: ServerRequestPayload,
     ) -> oneshot::Receiver<Result> {
         let id = RequestId::Integer(self.next_request_id.fetch_add(1, Ordering::Relaxed));
         let outgoing_message_id = id.clone();
@@ -49,11 +44,8 @@ impl OutgoingMessageSender {
             request_id_to_callback.insert(id, tx_approve);
         }

-        let outgoing_message = OutgoingMessage::Request(OutgoingRequest {
-            id: outgoing_message_id,
-            method: method.to_string(),
-            params,
-        });
+        let outgoing_message =
+            OutgoingMessage::Request(request.request_with_id(outgoing_message_id));
         let _ = self.sender.send(outgoing_message);
         rx_approve
     }
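`send_request` above allocates request ids with `fetch_add(1, Ordering::Relaxed)` on an `AtomicI64`; relaxed ordering is enough here because only the uniqueness of each id matters, not its ordering relative to other memory operations. A small sketch of that pattern (the `IdGen` type is illustrative, not from the codebase):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// Illustrative counter: each call returns a fresh id, even across threads,
// because fetch_add is a single atomic read-modify-write.
struct IdGen {
    next_request_id: AtomicI64,
}

impl IdGen {
    fn next_id(&self) -> i64 {
        // fetch_add returns the previous value, so ids start at 0.
        self.next_request_id.fetch_add(1, Ordering::Relaxed)
    }
}

fn main() {
    let ids = IdGen { next_request_id: AtomicI64::new(0) };
    assert_eq!(ids.next_id(), 0);
    assert_eq!(ids.next_id(), 1);
    assert_eq!(ids.next_id(), 2);
}
```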
@@ -116,8 +108,10 @@ impl OutgoingMessageSender {
     }

 /// Outgoing message from the server to the client.
 #[derive(Debug, Clone, Serialize)]
+#[serde(untagged)]
 pub(crate) enum OutgoingMessage {
-    Request(OutgoingRequest),
+    Request(ServerRequest),
     Notification(OutgoingNotification),
     /// AppServerNotification is specific to the case where this is run as an
     /// "app server" as opposed to an MCP server.
@@ -126,64 +120,6 @@ pub(crate) enum OutgoingMessage {
     Error(OutgoingError),
 }

-impl From<OutgoingMessage> for JSONRPCMessage {
-    fn from(val: OutgoingMessage) -> Self {
-        use OutgoingMessage::*;
-        match val {
-            Request(OutgoingRequest { id, method, params }) => {
-                JSONRPCMessage::Request(JSONRPCRequest {
-                    jsonrpc: JSONRPC_VERSION.into(),
-                    id,
-                    method,
-                    params,
-                })
-            }
-            Notification(OutgoingNotification { method, params }) => {
-                JSONRPCMessage::Notification(JSONRPCNotification {
-                    jsonrpc: JSONRPC_VERSION.into(),
-                    method,
-                    params,
-                })
-            }
-            AppServerNotification(notification) => {
-                let method = notification.to_string();
-                let params = match notification.to_params() {
-                    Ok(params) => Some(params),
-                    Err(err) => {
-                        warn!("failed to serialize notification params: {err}");
-                        None
-                    }
-                };
-                JSONRPCMessage::Notification(JSONRPCNotification {
-                    jsonrpc: JSONRPC_VERSION.into(),
-                    method,
-                    params,
-                })
-            }
-            Response(OutgoingResponse { id, result }) => {
-                JSONRPCMessage::Response(JSONRPCResponse {
-                    jsonrpc: JSONRPC_VERSION.into(),
-                    id,
-                    result,
-                })
-            }
-            Error(OutgoingError { id, error }) => JSONRPCMessage::Error(JSONRPCError {
-                jsonrpc: JSONRPC_VERSION.into(),
-                id,
-                error,
-            }),
-        }
-    }
-}
-
-#[derive(Debug, Clone, PartialEq, Serialize)]
-pub(crate) struct OutgoingRequest {
-    pub id: RequestId,
-    pub method: String,
-    #[serde(default, skip_serializing_if = "Option::is_none")]
-    pub params: Option<serde_json::Value>,
-}
-
 #[derive(Debug, Clone, PartialEq, Serialize)]
 pub(crate) struct OutgoingNotification {
     pub method: String,
@@ -205,7 +141,7 @@ pub(crate) struct OutgoingError {

 #[cfg(test)]
 mod tests {
-    use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
+    use codex_app_server_protocol::LoginChatGptCompleteNotification;
     use pretty_assertions::assert_eq;
     use serde_json::json;
     use uuid::Uuid;
@@ -221,18 +157,17 @@ mod tests {
             error: None,
         });

-        let jsonrpc_notification: JSONRPCMessage =
-            OutgoingMessage::AppServerNotification(notification).into();
+        let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
         assert_eq!(
-            JSONRPCMessage::Notification(JSONRPCNotification {
-                jsonrpc: "2.0".into(),
-                method: "loginChatGptComplete".into(),
-                params: Some(json!({
+            json!({
+                "method": "loginChatGptComplete",
+                "params": {
                     "loginId": Uuid::nil(),
                     "success": true,
-                })),
-            }),
-            jsonrpc_notification,
+                },
+            }),
+            serde_json::to_value(jsonrpc_notification)
+                .expect("ensure the strum macros serialize the method field correctly"),
+            "ensure the strum macros serialize the method field correctly"
         );
     }
@@ -9,8 +9,7 @@ path = "lib.rs"

 [dependencies]
 anyhow = { workspace = true }
 assert_cmd = { workspace = true }
-codex-protocol = { workspace = true }
-mcp-types = { workspace = true }
+codex-app-server-protocol = { workspace = true }
 serde = { workspace = true }
 serde_json = { workspace = true }
 tokio = { workspace = true, features = [
@@ -2,8 +2,8 @@ mod mcp_process;
 mod mock_model_server;
 mod responses;

+use codex_app_server_protocol::JSONRPCResponse;
 pub use mcp_process::McpProcess;
-use mcp_types::JSONRPCResponse;
 pub use mock_model_server::create_mock_chat_completions_server;
 pub use responses::create_apply_patch_sse_response;
 pub use responses::create_final_assistant_message_sse_response;
@@ -1,3 +1,4 @@
+use std::collections::VecDeque;
 use std::path::Path;
 use std::process::Stdio;
 use std::sync::atomic::AtomicI64;
@@ -11,29 +12,30 @@ use tokio::process::ChildStdout;

 use anyhow::Context;
 use assert_cmd::prelude::*;
-use codex_protocol::mcp_protocol::AddConversationListenerParams;
-use codex_protocol::mcp_protocol::ArchiveConversationParams;
-use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
-use codex_protocol::mcp_protocol::ClientInfo;
-use codex_protocol::mcp_protocol::ClientNotification;
-use codex_protocol::mcp_protocol::GetAuthStatusParams;
-use codex_protocol::mcp_protocol::InitializeParams;
-use codex_protocol::mcp_protocol::InterruptConversationParams;
-use codex_protocol::mcp_protocol::ListConversationsParams;
-use codex_protocol::mcp_protocol::LoginApiKeyParams;
-use codex_protocol::mcp_protocol::NewConversationParams;
-use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
-use codex_protocol::mcp_protocol::ResumeConversationParams;
-use codex_protocol::mcp_protocol::SendUserMessageParams;
-use codex_protocol::mcp_protocol::SendUserTurnParams;
-use codex_protocol::mcp_protocol::SetDefaultModelParams;
+use codex_app_server_protocol::AddConversationListenerParams;
+use codex_app_server_protocol::ArchiveConversationParams;
+use codex_app_server_protocol::CancelLoginChatGptParams;
+use codex_app_server_protocol::ClientInfo;
+use codex_app_server_protocol::ClientNotification;
+use codex_app_server_protocol::GetAuthStatusParams;
+use codex_app_server_protocol::InitializeParams;
+use codex_app_server_protocol::InterruptConversationParams;
+use codex_app_server_protocol::ListConversationsParams;
+use codex_app_server_protocol::LoginApiKeyParams;
+use codex_app_server_protocol::NewConversationParams;
+use codex_app_server_protocol::RemoveConversationListenerParams;
+use codex_app_server_protocol::ResumeConversationParams;
+use codex_app_server_protocol::SendUserMessageParams;
+use codex_app_server_protocol::SendUserTurnParams;
+use codex_app_server_protocol::ServerRequest;
+use codex_app_server_protocol::SetDefaultModelParams;
-use mcp_types::JSONRPC_VERSION;
-use mcp_types::JSONRPCMessage;
-use mcp_types::JSONRPCNotification;
-use mcp_types::JSONRPCRequest;
-use mcp_types::JSONRPCResponse;
-use mcp_types::RequestId;
+use codex_app_server_protocol::JSONRPCError;
+use codex_app_server_protocol::JSONRPCMessage;
+use codex_app_server_protocol::JSONRPCNotification;
+use codex_app_server_protocol::JSONRPCRequest;
+use codex_app_server_protocol::JSONRPCResponse;
+use codex_app_server_protocol::RequestId;
 use std::process::Command as StdCommand;
 use tokio::process::Command;
@@ -46,6 +48,7 @@ pub struct McpProcess {
     process: Child,
     stdin: ChildStdin,
     stdout: BufReader<ChildStdout>,
+    pending_user_messages: VecDeque<JSONRPCNotification>,
 }

 impl McpProcess {
@@ -116,6 +119,7 @@ impl McpProcess {
             process,
             stdin,
             stdout,
+            pending_user_messages: VecDeque::new(),
         })
     }
@@ -317,7 +321,6 @@ impl McpProcess {
         let request_id = self.next_request_id.fetch_add(1, Ordering::Relaxed);

         let message = JSONRPCMessage::Request(JSONRPCRequest {
-            jsonrpc: JSONRPC_VERSION.into(),
             id: RequestId::Integer(request_id),
             method: method.to_string(),
             params,
@@ -331,12 +334,8 @@ impl McpProcess {
         id: RequestId,
         result: serde_json::Value,
     ) -> anyhow::Result<()> {
-        self.send_jsonrpc_message(JSONRPCMessage::Response(JSONRPCResponse {
-            jsonrpc: JSONRPC_VERSION.into(),
-            id,
-            result,
-        }))
-        .await
+        self.send_jsonrpc_message(JSONRPCMessage::Response(JSONRPCResponse { id, result }))
+            .await
     }

     pub async fn send_notification(
@@ -345,7 +344,6 @@ impl McpProcess {
     ) -> anyhow::Result<()> {
         let value = serde_json::to_value(notification)?;
         self.send_jsonrpc_message(JSONRPCMessage::Notification(JSONRPCNotification {
-            jsonrpc: JSONRPC_VERSION.into(),
             method: value
                 .get("method")
                 .and_then(|m| m.as_str())
@@ -373,18 +371,21 @@ impl McpProcess {
         Ok(message)
     }

-    pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<JSONRPCRequest> {
+    pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<ServerRequest> {
         eprintln!("in read_stream_until_request_message()");

         loop {
             let message = self.read_jsonrpc_message().await?;

             match message {
-                JSONRPCMessage::Notification(_) => {
-                    eprintln!("notification: {message:?}");
+                JSONRPCMessage::Notification(notification) => {
+                    eprintln!("notification: {notification:?}");
+                    self.enqueue_user_message(notification);
                 }
                 JSONRPCMessage::Request(jsonrpc_request) => {
-                    return Ok(jsonrpc_request);
+                    return jsonrpc_request.try_into().with_context(
+                        || "failed to deserialize ServerRequest from JSONRPCRequest",
+                    );
                 }
                 JSONRPCMessage::Error(_) => {
                     anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
@@ -405,8 +406,9 @@ impl McpProcess {
         loop {
             let message = self.read_jsonrpc_message().await?;
             match message {
-                JSONRPCMessage::Notification(_) => {
-                    eprintln!("notification: {message:?}");
+                JSONRPCMessage::Notification(notification) => {
+                    eprintln!("notification: {notification:?}");
+                    self.enqueue_user_message(notification);
                 }
                 JSONRPCMessage::Request(_) => {
                     anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -426,12 +428,13 @@ impl McpProcess {
     pub async fn read_stream_until_error_message(
         &mut self,
         request_id: RequestId,
-    ) -> anyhow::Result<mcp_types::JSONRPCError> {
+    ) -> anyhow::Result<JSONRPCError> {
         loop {
             let message = self.read_jsonrpc_message().await?;
             match message {
-                JSONRPCMessage::Notification(_) => {
-                    eprintln!("notification: {message:?}");
+                JSONRPCMessage::Notification(notification) => {
+                    eprintln!("notification: {notification:?}");
+                    self.enqueue_user_message(notification);
                 }
                 JSONRPCMessage::Request(_) => {
                     anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -454,6 +457,10 @@ impl McpProcess {
     ) -> anyhow::Result<JSONRPCNotification> {
         eprintln!("in read_stream_until_notification_message({method})");

+        if let Some(notification) = self.take_pending_notification_by_method(method) {
+            return Ok(notification);
+        }
+
         loop {
             let message = self.read_jsonrpc_message().await?;
             match message {
@@ -461,6 +468,7 @@ impl McpProcess {
                 if notification.method == method {
                     return Ok(notification);
                 }
+                self.enqueue_user_message(notification);
             }
             JSONRPCMessage::Request(_) => {
                 anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -474,4 +482,21 @@ impl McpProcess {
             }
         }
     }
+
+    fn take_pending_notification_by_method(&mut self, method: &str) -> Option<JSONRPCNotification> {
+        if let Some(pos) = self
+            .pending_user_messages
+            .iter()
+            .position(|notification| notification.method == method)
+        {
+            return self.pending_user_messages.remove(pos);
+        }
+        None
+    }
+
+    fn enqueue_user_message(&mut self, notification: JSONRPCNotification) {
+        if notification.method == "codex/event/user_message" {
+            self.pending_user_messages.push_back(notification);
+        }
+    }
 }
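The new `pending_user_messages` field buffers `codex/event/user_message` notifications so a later reader can consume them out of arrival order; `take_pending_notification_by_method` removes the first queued match while keeping the rest in order. That take-first-match step can be sketched with a plain `VecDeque` of method strings (the simplified `take_first` helper is ours, not the test harness's):

```rust
use std::collections::VecDeque;

// Simplified version of take_pending_notification_by_method: find the
// first queued entry with a matching method, remove it in place, and
// leave the remaining entries in their original order.
fn take_first(queue: &mut VecDeque<String>, method: &str) -> Option<String> {
    let pos = queue.iter().position(|m| m == method)?;
    queue.remove(pos)
}

fn main() {
    let mut queue: VecDeque<String> = ["a", "b", "a"].iter().map(|s| s.to_string()).collect();
    assert_eq!(take_first(&mut queue, "b").as_deref(), Some("b"));
    // Only the matched entry is gone; the rest keep their order.
    assert_eq!(queue, VecDeque::from(vec!["a".to_string(), "a".to_string()]));
    assert_eq!(take_first(&mut queue, "missing"), None);
}
```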
@@ -2,13 +2,13 @@ use std::path::Path;

 use app_test_support::McpProcess;
 use app_test_support::to_response;
+use codex_app_server_protocol::ArchiveConversationParams;
+use codex_app_server_protocol::ArchiveConversationResponse;
+use codex_app_server_protocol::JSONRPCResponse;
+use codex_app_server_protocol::NewConversationParams;
+use codex_app_server_protocol::NewConversationResponse;
+use codex_app_server_protocol::RequestId;
 use codex_core::ARCHIVED_SESSIONS_SUBDIR;
-use codex_protocol::mcp_protocol::ArchiveConversationParams;
-use codex_protocol::mcp_protocol::ArchiveConversationResponse;
-use codex_protocol::mcp_protocol::NewConversationParams;
-use codex_protocol::mcp_protocol::NewConversationResponse;
-use mcp_types::JSONRPCResponse;
-use mcp_types::RequestId;
 use tempfile::TempDir;
 use tokio::time::timeout;
@@ -2,13 +2,13 @@ use std::path::Path;

 use app_test_support::McpProcess;
 use app_test_support::to_response;
-use codex_protocol::mcp_protocol::AuthMode;
-use codex_protocol::mcp_protocol::GetAuthStatusParams;
-use codex_protocol::mcp_protocol::GetAuthStatusResponse;
-use codex_protocol::mcp_protocol::LoginApiKeyParams;
-use codex_protocol::mcp_protocol::LoginApiKeyResponse;
-use mcp_types::JSONRPCResponse;
-use mcp_types::RequestId;
+use codex_app_server_protocol::AuthMode;
+use codex_app_server_protocol::GetAuthStatusParams;
+use codex_app_server_protocol::GetAuthStatusResponse;
+use codex_app_server_protocol::JSONRPCResponse;
+use codex_app_server_protocol::LoginApiKeyParams;
+use codex_app_server_protocol::LoginApiKeyResponse;
+use codex_app_server_protocol::RequestId;
 use pretty_assertions::assert_eq;
 use tempfile::TempDir;
 use tokio::time::timeout;
@@ -5,25 +5,32 @@ use app_test_support::create_final_assistant_message_sse_response;
 use app_test_support::create_mock_chat_completions_server;
 use app_test_support::create_shell_sse_response;
 use app_test_support::to_response;
+use codex_app_server_protocol::AddConversationListenerParams;
+use codex_app_server_protocol::AddConversationSubscriptionResponse;
+use codex_app_server_protocol::ExecCommandApprovalParams;
+use codex_app_server_protocol::InputItem;
+use codex_app_server_protocol::JSONRPCNotification;
+use codex_app_server_protocol::JSONRPCResponse;
+use codex_app_server_protocol::NewConversationParams;
+use codex_app_server_protocol::NewConversationResponse;
+use codex_app_server_protocol::RemoveConversationListenerParams;
+use codex_app_server_protocol::RemoveConversationSubscriptionResponse;
+use codex_app_server_protocol::RequestId;
+use codex_app_server_protocol::SendUserMessageParams;
+use codex_app_server_protocol::SendUserMessageResponse;
+use codex_app_server_protocol::SendUserTurnParams;
+use codex_app_server_protocol::SendUserTurnResponse;
+use codex_app_server_protocol::ServerRequest;
 use codex_core::protocol::AskForApproval;
 use codex_core::protocol::SandboxPolicy;
 use codex_core::protocol_config_types::ReasoningEffort;
 use codex_core::protocol_config_types::ReasoningSummary;
 use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
-use codex_protocol::mcp_protocol::AddConversationListenerParams;
-use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
-use codex_protocol::mcp_protocol::EXEC_COMMAND_APPROVAL_METHOD;
-use codex_protocol::mcp_protocol::NewConversationParams;
-use codex_protocol::mcp_protocol::NewConversationResponse;
-use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
-use codex_protocol::mcp_protocol::RemoveConversationSubscriptionResponse;
-use codex_protocol::mcp_protocol::SendUserMessageParams;
-use codex_protocol::mcp_protocol::SendUserMessageResponse;
-use codex_protocol::mcp_protocol::SendUserTurnParams;
-use codex_protocol::mcp_protocol::SendUserTurnResponse;
-use mcp_types::JSONRPCNotification;
-use mcp_types::JSONRPCResponse;
-use mcp_types::RequestId;
 use codex_protocol::config_types::SandboxMode;
 use codex_protocol::parse_command::ParsedCommand;
 use codex_protocol::protocol::Event;
 use codex_protocol::protocol::EventMsg;
 use codex_protocol::protocol::InputMessageKind;
 use pretty_assertions::assert_eq;
 use std::env;
 use tempfile::TempDir;
@@ -115,7 +122,7 @@ async fn test_codex_jsonrpc_conversation_flow() {
     let send_user_id = mcp
         .send_send_user_message_request(SendUserMessageParams {
             conversation_id,
-            items: vec![codex_protocol::mcp_protocol::InputItem::Text {
+            items: vec![codex_app_server_protocol::InputItem::Text {
                 text: "text".to_string(),
             }],
         })
@@ -265,7 +272,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
     let send_user_id = mcp
         .send_send_user_message_request(SendUserMessageParams {
             conversation_id,
-            items: vec![codex_protocol::mcp_protocol::InputItem::Text {
+            items: vec![codex_app_server_protocol::InputItem::Text {
                 text: "run python".to_string(),
             }],
         })
@@ -290,11 +297,31 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
         .await
         .expect("waiting for exec approval request timeout")
         .expect("exec approval request");
||||
assert_eq!(request.method, EXEC_COMMAND_APPROVAL_METHOD);
|
||||
let ServerRequest::ExecCommandApproval { request_id, params } = request else {
|
||||
panic!("expected ExecCommandApproval request, got: {request:?}");
|
||||
};
|
||||
|
||||
assert_eq!(
|
||||
ExecCommandApprovalParams {
|
||||
conversation_id,
|
||||
call_id: "call1".to_string(),
|
||||
command: vec![
|
||||
"python3".to_string(),
|
||||
"-c".to_string(),
|
||||
"print(42)".to_string(),
|
||||
],
|
||||
cwd: working_directory.clone(),
|
||||
reason: None,
|
||||
parsed_cmd: vec![ParsedCommand::Unknown {
|
||||
cmd: "python3 -c 'print(42)'".to_string()
|
||||
}],
|
||||
},
|
||||
params
|
||||
);
|
||||
|
||||
// Approve so the first turn can complete
|
||||
mcp.send_response(
|
||||
request.id,
|
||||
request_id,
|
||||
serde_json::json!({ "decision": codex_core::protocol::ReviewDecision::Approved }),
|
||||
)
|
||||
.await
|
||||
@@ -313,7 +340,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
|
||||
let send_turn_id = mcp
|
||||
.send_send_user_turn_request(SendUserTurnParams {
|
||||
conversation_id,
|
||||
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
|
||||
items: vec![codex_app_server_protocol::InputItem::Text {
|
||||
text: "run python again".to_string(),
|
||||
}],
|
||||
cwd: working_directory.clone(),
|
||||
@@ -349,6 +376,234 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
|
||||
}
|
||||
|
||||
// Helper: minimal config.toml pointing at mock provider.
|
||||
|
||||
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() {
    if env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
        println!(
            "Skipping test because it cannot execute when network is disabled in a Codex sandbox."
        );
        return;
    }

    let tmp = TempDir::new().expect("tmp dir");
    let codex_home = tmp.path().join("codex_home");
    std::fs::create_dir(&codex_home).expect("create codex home dir");
    let workspace_root = tmp.path().join("workspace");
    std::fs::create_dir(&workspace_root).expect("create workspace root");
    let first_cwd = workspace_root.join("turn1");
    let second_cwd = workspace_root.join("turn2");
    std::fs::create_dir(&first_cwd).expect("create first cwd");
    std::fs::create_dir(&second_cwd).expect("create second cwd");

    let responses = vec![
        create_shell_sse_response(
            vec![
                "bash".to_string(),
                "-lc".to_string(),
                "echo first turn".to_string(),
            ],
            None,
            Some(5000),
            "call-first",
        )
        .expect("create first shell response"),
        create_final_assistant_message_sse_response("done first")
            .expect("create first final assistant message"),
        create_shell_sse_response(
            vec![
                "bash".to_string(),
                "-lc".to_string(),
                "echo second turn".to_string(),
            ],
            None,
            Some(5000),
            "call-second",
        )
        .expect("create second shell response"),
        create_final_assistant_message_sse_response("done second")
            .expect("create second final assistant message"),
    ];
    let server = create_mock_chat_completions_server(responses).await;
    create_config_toml(&codex_home, &server.uri()).expect("write config");

    let mut mcp = McpProcess::new(&codex_home)
        .await
        .expect("spawn mcp process");
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
        .await
        .expect("init timeout")
        .expect("init failed");

    let new_conv_id = mcp
        .send_new_conversation_request(NewConversationParams {
            cwd: Some(first_cwd.to_string_lossy().into_owned()),
            approval_policy: Some(AskForApproval::Never),
            sandbox: Some(SandboxMode::WorkspaceWrite),
            ..Default::default()
        })
        .await
        .expect("send newConversation");
    let new_conv_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(new_conv_id)),
    )
    .await
    .expect("newConversation timeout")
    .expect("newConversation resp");
    let NewConversationResponse {
        conversation_id,
        model,
        ..
    } = to_response::<NewConversationResponse>(new_conv_resp)
        .expect("deserialize newConversation response");

    let add_listener_id = mcp
        .send_add_conversation_listener_request(AddConversationListenerParams { conversation_id })
        .await
        .expect("send addConversationListener");
    timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(add_listener_id)),
    )
    .await
    .expect("addConversationListener timeout")
    .expect("addConversationListener resp");

    let first_turn_id = mcp
        .send_send_user_turn_request(SendUserTurnParams {
            conversation_id,
            items: vec![InputItem::Text {
                text: "first turn".to_string(),
            }],
            cwd: first_cwd.clone(),
            approval_policy: AskForApproval::Never,
            sandbox_policy: SandboxPolicy::WorkspaceWrite {
                writable_roots: vec![first_cwd.clone()],
                network_access: false,
                exclude_tmpdir_env_var: false,
                exclude_slash_tmp: false,
            },
            model: model.clone(),
            effort: Some(ReasoningEffort::Medium),
            summary: ReasoningSummary::Auto,
        })
        .await
        .expect("send first sendUserTurn");
    timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(first_turn_id)),
    )
    .await
    .expect("sendUserTurn 1 timeout")
    .expect("sendUserTurn 1 resp");
    timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_notification_message("codex/event/task_complete"),
    )
    .await
    .expect("task_complete 1 timeout")
    .expect("task_complete 1 notification");

    let second_turn_id = mcp
        .send_send_user_turn_request(SendUserTurnParams {
            conversation_id,
            items: vec![InputItem::Text {
                text: "second turn".to_string(),
            }],
            cwd: second_cwd.clone(),
            approval_policy: AskForApproval::Never,
            sandbox_policy: SandboxPolicy::DangerFullAccess,
            model: model.clone(),
            effort: Some(ReasoningEffort::Medium),
            summary: ReasoningSummary::Auto,
        })
        .await
        .expect("send second sendUserTurn");
    timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(second_turn_id)),
    )
    .await
    .expect("sendUserTurn 2 timeout")
    .expect("sendUserTurn 2 resp");

    let mut env_message: Option<String> = None;
    let second_cwd_str = second_cwd.to_string_lossy().into_owned();
    for _ in 0..10 {
        let notification = timeout(
            DEFAULT_READ_TIMEOUT,
            mcp.read_stream_until_notification_message("codex/event/user_message"),
        )
        .await
        .expect("user_message timeout")
        .expect("user_message notification");
        let params = notification
            .params
            .clone()
            .expect("user_message should include params");
        let event: Event = serde_json::from_value(params).expect("deserialize user_message event");
        if let EventMsg::UserMessage(user) = event.msg
            && matches!(user.kind, Some(InputMessageKind::EnvironmentContext))
            && user.message.contains(&second_cwd_str)
        {
            env_message = Some(user.message);
            break;
        }
    }
    let env_message = env_message.expect("expected environment context update");
    assert!(
        env_message.contains("<sandbox_mode>danger-full-access</sandbox_mode>"),
        "env context should reflect new sandbox mode: {env_message}"
    );
    assert!(
        env_message.contains("<network_access>enabled</network_access>"),
        "env context should enable network access for danger-full-access policy: {env_message}"
    );
    assert!(
        env_message.contains(&second_cwd_str),
        "env context should include updated cwd: {env_message}"
    );

    let exec_begin_notification = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_notification_message("codex/event/exec_command_begin"),
    )
    .await
    .expect("exec_command_begin timeout")
    .expect("exec_command_begin notification");
    let params = exec_begin_notification
        .params
        .clone()
        .expect("exec_command_begin params");
    let event: Event = serde_json::from_value(params).expect("deserialize exec begin event");
    let exec_begin = match event.msg {
        EventMsg::ExecCommandBegin(exec_begin) => exec_begin,
        other => panic!("expected ExecCommandBegin event, got {other:?}"),
    };
    assert_eq!(
        exec_begin.cwd, second_cwd,
        "exec turn should run from updated cwd"
    );
    assert_eq!(
        exec_begin.command,
        vec![
            "bash".to_string(),
            "-lc".to_string(),
            "echo second turn".to_string()
        ],
        "exec turn should run expected command"
    );

    timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_notification_message("codex/event/task_complete"),
    )
    .await
    .expect("task_complete 2 timeout")
    .expect("task_complete 2 notification");
}

fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
    let config_toml = codex_home.join("config.toml");
    std::fs::write(
@@ -3,18 +3,18 @@ use std::path::Path;

use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::GetUserSavedConfigResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::Profile;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxSettings;
use codex_app_server_protocol::Tools;
use codex_app_server_protocol::UserSavedConfig;
use codex_core::protocol::AskForApproval;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::mcp_protocol::GetUserSavedConfigResponse;
use codex_protocol::mcp_protocol::Profile;
use codex_protocol::mcp_protocol::SandboxSettings;
use codex_protocol::mcp_protocol::Tools;
use codex_protocol::mcp_protocol::UserSavedConfig;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

@@ -4,15 +4,15 @@ use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::InputItem;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;

@@ -1,6 +1,8 @@
use anyhow::Context;
use anyhow::Result;
use app_test_support::McpProcess;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;
@@ -9,30 +11,41 @@ use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_fuzzy_file_search_sorts_and_includes_indices() {
async fn test_fuzzy_file_search_sorts_and_includes_indices() -> Result<()> {
    // Prepare a temporary Codex home and a separate root with test files.
    let codex_home = TempDir::new().expect("create temp codex home");
    let root = TempDir::new().expect("create temp search root");
    let codex_home = TempDir::new().context("create temp codex home")?;
    let root = TempDir::new().context("create temp search root")?;

    // Create files designed to have deterministic ordering for query "abc".
    std::fs::write(root.path().join("abc"), "x").expect("write file abc");
    std::fs::write(root.path().join("abcde"), "x").expect("write file abcx");
    std::fs::write(root.path().join("abexy"), "x").expect("write file abcx");
    std::fs::write(root.path().join("zzz.txt"), "x").expect("write file zzz");
    // Create files designed to have deterministic ordering for query "abe".
    std::fs::write(root.path().join("abc"), "x").context("write file abc")?;
    std::fs::write(root.path().join("abcde"), "x").context("write file abcde")?;
    std::fs::write(root.path().join("abexy"), "x").context("write file abexy")?;
    std::fs::write(root.path().join("zzz.txt"), "x").context("write file zzz")?;
    let sub_dir = root.path().join("sub");
    std::fs::create_dir_all(&sub_dir).context("create sub dir")?;
    let sub_abce_path = sub_dir.join("abce");
    std::fs::write(&sub_abce_path, "x").context("write file sub/abce")?;
    let sub_abce_rel = sub_abce_path
        .strip_prefix(root.path())
        .context("strip root prefix from sub/abce")?
        .to_string_lossy()
        .to_string();

    // Start MCP server and initialize.
    let mut mcp = McpProcess::new(codex_home.path()).await.expect("spawn mcp");
    let mut mcp = McpProcess::new(codex_home.path())
        .await
        .context("spawn mcp")?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
        .await
        .expect("init timeout")
        .expect("init failed");
        .context("init timeout")?
        .context("init failed")?;

    let root_path = root.path().to_string_lossy().to_string();
    // Send fuzzyFileSearch request.
    let request_id = mcp
        .send_fuzzy_file_search_request("abe", vec![root_path.clone()], None)
        .await
        .expect("send fuzzyFileSearch");
        .context("send fuzzyFileSearch")?;

    // Read response and verify shape and ordering.
    let resp: JSONRPCResponse = timeout(
@@ -40,39 +53,65 @@ async fn test_fuzzy_file_search_sorts_and_includes_indices() {
        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
    )
    .await
    .expect("fuzzyFileSearch timeout")
    .expect("fuzzyFileSearch resp");
    .context("fuzzyFileSearch timeout")?
    .context("fuzzyFileSearch resp")?;

    let value = resp.result;
    // The path separator on Windows affects the score.
    let expected_score = if cfg!(windows) { 69 } else { 72 };

    assert_eq!(
        value,
        json!({
            "files": [
                { "root": root_path.clone(), "path": "abexy", "score": 88, "indices": [0, 1, 2] },
                { "root": root_path.clone(), "path": "abcde", "score": 74, "indices": [0, 1, 4] },
                {
                    "root": root_path.clone(),
                    "path": "abexy",
                    "file_name": "abexy",
                    "score": 88,
                    "indices": [0, 1, 2],
                },
                {
                    "root": root_path.clone(),
                    "path": "abcde",
                    "file_name": "abcde",
                    "score": 74,
                    "indices": [0, 1, 4],
                },
                {
                    "root": root_path.clone(),
                    "path": sub_abce_rel,
                    "file_name": "abce",
                    "score": expected_score,
                    "indices": [4, 5, 7],
                },
            ]
        })
    );

    Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_fuzzy_file_search_accepts_cancellation_token() {
    let codex_home = TempDir::new().expect("create temp codex home");
    let root = TempDir::new().expect("create temp search root");
async fn test_fuzzy_file_search_accepts_cancellation_token() -> Result<()> {
    let codex_home = TempDir::new().context("create temp codex home")?;
    let root = TempDir::new().context("create temp search root")?;

    std::fs::write(root.path().join("alpha.txt"), "contents").expect("write alpha");
    std::fs::write(root.path().join("alpha.txt"), "contents").context("write alpha")?;

    let mut mcp = McpProcess::new(codex_home.path()).await.expect("spawn mcp");
    let mut mcp = McpProcess::new(codex_home.path())
        .await
        .context("spawn mcp")?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
        .await
        .expect("init timeout")
        .expect("init failed");
        .context("init timeout")?
        .context("init failed")?;

    let root_path = root.path().to_string_lossy().to_string();
    let request_id = mcp
        .send_fuzzy_file_search_request("alp", vec![root_path.clone()], None)
        .await
        .expect("send fuzzyFileSearch");
        .context("send fuzzyFileSearch")?;

    let request_id_2 = mcp
        .send_fuzzy_file_search_request(
@@ -81,24 +120,27 @@ async fn test_fuzzy_file_search_accepts_cancellation_token() {
            Some(request_id.to_string()),
        )
        .await
        .expect("send fuzzyFileSearch");
        .context("send fuzzyFileSearch")?;

    let resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(request_id_2)),
    )
    .await
    .expect("fuzzyFileSearch timeout")
    .expect("fuzzyFileSearch resp");
    .context("fuzzyFileSearch timeout")?
    .context("fuzzyFileSearch resp")?;

    let files = resp
        .result
        .get("files")
        .and_then(|value| value.as_array())
        .cloned()
        .expect("files array");
        .context("files key missing")?
        .as_array()
        .context("files not array")?
        .clone();

    assert_eq!(files.len(), 1);
    assert_eq!(files[0]["root"], root_path);
    assert_eq!(files[0]["path"], "alpha.txt");

    Ok(())
}

@@ -3,17 +3,17 @@

use std::path::Path;

use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_core::protocol::TurnAbortReason;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use core_test_support::skip_if_no_network;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;

@@ -100,7 +100,7 @@ async fn shell_command_interruption() -> anyhow::Result<()> {
    let send_user_id = mcp
        .send_send_user_message_request(SendUserMessageParams {
            conversation_id,
            items: vec![codex_protocol::mcp_protocol::InputItem::Text {
            items: vec![codex_app_server_protocol::InputItem::Text {
                text: "run first sleep command".to_string(),
            }],
        })

@@ -3,16 +3,16 @@ use std::path::Path;

use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::ListConversationsResponse;
use codex_protocol::mcp_protocol::NewConversationParams; // reused for overrides shape
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::ResumeConversationResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SessionConfiguredNotification;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
use codex_app_server_protocol::NewConversationParams; // reused for overrides shape
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::ResumeConversationResponse;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::SessionConfiguredNotification;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;

@@ -3,15 +3,15 @@ use std::time::Duration;

use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::CancelLoginChatGptResponse;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::LogoutChatGptResponse;
use codex_app_server_protocol::RequestId;
use codex_login::login_with_api_key;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptResponse;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::LogoutChatGptResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;

@@ -4,17 +4,17 @@ use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::InputItem;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_protocol::ConversationId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

@@ -2,11 +2,11 @@ use std::path::Path;

use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::SetDefaultModelResponse;
use codex_core::config::ConfigToml;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

@@ -1,8 +1,8 @@
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::GetUserAgentResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::GetUserAgentResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

@@ -5,14 +5,14 @@ use app_test_support::McpProcess;
use app_test_support::to_response;
use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::UserInfoResponse;
use codex_core::auth::AuthDotJson;
use codex_core::auth::get_auth_file;
use codex_core::auth::write_auth_json;
use codex_core::token_data::IdTokenInfo;
use codex_core::token_data::TokenData;
use codex_protocol::mcp_protocol::UserInfoResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;

@@ -23,5 +23,6 @@ tree-sitter-bash = { workspace = true }

[dev-dependencies]
assert_cmd = { workspace = true }
assert_matches = { workspace = true }
pretty_assertions = { workspace = true }
tempfile = { workspace = true }

@@ -843,6 +843,7 @@ pub fn print_summary(
#[cfg(test)]
mod tests {
    use super::*;
    use assert_matches::assert_matches;
    use pretty_assertions::assert_eq;
    use std::fs;
    use std::string::ToString;
@@ -894,10 +895,10 @@ mod tests {

    fn assert_not_match(script: &str) {
        let args = args_bash(script);
        assert!(matches!(
        assert_matches!(
            maybe_parse_apply_patch(&args),
            MaybeApplyPatch::NotApplyPatch
        ));
        );
    }

    #[test]
@@ -905,10 +906,10 @@ mod tests {
        let patch = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch".to_string();
        let args = vec![patch];
        let dir = tempdir().unwrap();
        assert!(matches!(
        assert_matches!(
            maybe_parse_apply_patch_verified(&args, dir.path()),
            MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
        ));
        );
    }

    #[test]
@@ -916,10 +917,10 @@ mod tests {
        let script = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch";
        let args = args_bash(script);
        let dir = tempdir().unwrap();
        assert!(matches!(
        assert_matches!(
            maybe_parse_apply_patch_verified(&args, dir.path()),
            MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
        ));
        );
    }

    #[test]

@@ -29,7 +29,8 @@ pub async fn run_apply_command(
|
||||
.parse_overrides()
|
||||
.map_err(anyhow::Error::msg)?,
|
||||
ConfigOverrides::default(),
|
||||
)?;
|
||||
)
|
||||
.await?;
|
||||
|
||||
init_chatgpt_token_from_auth(&config.codex_home).await?;
|
||||
|
||||
|
||||
@@ -28,11 +28,14 @@ codex-login = { workspace = true }
codex-mcp-server = { workspace = true }
codex-process-hardening = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-protocol-ts = { workspace = true }
codex-responses-api-proxy = { workspace = true }
codex-tui = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-cloud-tasks = { path = "../cloud-tasks" }
ctor = { workspace = true }
crossterm = { workspace = true }
owo-colors = { workspace = true }
serde_json = { workspace = true }
supports-color = { workspace = true }
@@ -43,10 +46,16 @@ tokio = { workspace = true, features = [
    "rt-multi-thread",
    "signal",
] }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
codex-infty = { path = "../codex-infty" }
chrono = { workspace = true }
serde = { workspace = true, features = ["derive"] }
tracing = "0.1.41"
tracing-appender = "0.2.3"
tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
textwrap = { workspace = true }

[dev-dependencies]
assert_matches = { workspace = true }
assert_cmd = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }
@@ -73,7 +73,8 @@ async fn run_command_under_sandbox(
            codex_linux_sandbox_exe,
            ..Default::default()
        },
-    )?;
+    )
+    .await?;

    // In practice, this should be `std::env::current_dir()` because this CLI
    // does not support `--cwd`, but let's use the config value for consistency.
codex-rs/cli/src/infty/args.rs (Normal file, 115 lines)
@@ -0,0 +1,115 @@
use std::path::PathBuf;

use anyhow::Result;
use clap::Parser;
use clap::Subcommand;
use codex_common::CliConfigOverrides;
use codex_protocol::config_types::ReasoningEffort;

use super::commands;

#[derive(Debug, Parser)]
pub struct InftyCli {
    #[clap(flatten)]
    pub config_overrides: CliConfigOverrides,

    /// Override the default runs root (`~/.codex/infty`).
    #[arg(long = "runs-root", value_name = "DIR")]
    pub runs_root: Option<PathBuf>,

    #[command(subcommand)]
    command: InftyCommand,
}

#[derive(Debug, Subcommand)]
enum InftyCommand {
    /// Create a new run store and spawn solver/director sessions.
    Create(CreateArgs),

    /// List stored runs.
    List(ListArgs),

    /// Show metadata for a stored run.
    Show(ShowArgs),
    // resumable runs are disabled; Drive command removed
}

#[derive(Debug, Parser)]
pub(crate) struct CreateArgs {
    /// Explicit run id. If omitted, a timestamp-based id is generated.
    #[arg(long = "run-id", value_name = "RUN_ID")]
    pub run_id: Option<String>,

    /// Optional objective to send to the solver immediately after creation.
    #[arg(long)]
    pub objective: Option<String>,

    /// Timeout in seconds when waiting for the solver reply to --objective.
    #[arg(long = "timeout-secs", default_value_t = super::commands::DEFAULT_TIMEOUT_SECS)]
    pub timeout_secs: u64,

    /// Override only the Director's model (solver and verifiers keep defaults).
    #[arg(long = "director-model", value_name = "MODEL")]
    pub director_model: Option<String>,

    /// Override only the Director's reasoning effort (minimal|low|medium|high).
    #[arg(
        long = "director-effort",
        value_name = "LEVEL",
        value_parser = parse_reasoning_effort
    )]
    pub director_effort: Option<ReasoningEffort>,
}

#[derive(Debug, Parser)]
pub(crate) struct ListArgs {
    /// Emit JSON describing the stored runs.
    #[arg(long)]
    pub json: bool,
}

#[derive(Debug, Parser)]
pub(crate) struct ShowArgs {
    /// Run id to display.
    #[arg(value_name = "RUN_ID")]
    pub run_id: String,

    /// Emit JSON metadata instead of human-readable text.
    #[arg(long)]
    pub json: bool,
}

// resumable runs are disabled; DriveArgs removed

impl InftyCli {
    pub async fn run(self) -> Result<()> {
        let InftyCli {
            config_overrides,
            runs_root,
            command,
        } = self;

        match command {
            InftyCommand::Create(args) => {
                commands::run_create(config_overrides, runs_root, args).await?;
            }
            InftyCommand::List(args) => commands::run_list(runs_root, args)?,
            InftyCommand::Show(args) => commands::run_show(runs_root, args)?,
            // Drive removed
        }

        Ok(())
    }
}

fn parse_reasoning_effort(s: &str) -> Result<ReasoningEffort, String> {
    match s.trim().to_ascii_lowercase().as_str() {
        "minimal" => Ok(ReasoningEffort::Minimal),
        "low" => Ok(ReasoningEffort::Low),
        "medium" => Ok(ReasoningEffort::Medium),
        "high" => Ok(ReasoningEffort::High),
        _ => Err(format!(
            "invalid reasoning effort: {s}. Expected one of: minimal|low|medium|high"
        )),
    }
}
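`parse_reasoning_effort` is wired into clap via `value_parser`, so `--director-effort HIGH` or `--director-effort " low "` both normalize before matching. A self-contained sketch of that normalization (the `ReasoningEffort` enum here is a local stand-in for the one in `codex_protocol::config_types`):

```rust
// Stand-in for codex_protocol::config_types::ReasoningEffort.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum ReasoningEffort {
    Minimal,
    Low,
    Medium,
    High,
}

// Mirrors the diff's parser: trim whitespace and lowercase before matching,
// so "HIGH", " high ", and "high" all parse to the same variant.
fn parse_reasoning_effort(s: &str) -> Result<ReasoningEffort, String> {
    match s.trim().to_ascii_lowercase().as_str() {
        "minimal" => Ok(ReasoningEffort::Minimal),
        "low" => Ok(ReasoningEffort::Low),
        "medium" => Ok(ReasoningEffort::Medium),
        "high" => Ok(ReasoningEffort::High),
        _ => Err(format!(
            "invalid reasoning effort: {s}. Expected one of: minimal|low|medium|high"
        )),
    }
}

fn main() {
    println!("{:?}", parse_reasoning_effort(" High "));
    println!("{:?}", parse_reasoning_effort("turbo"));
}
```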
codex-rs/cli/src/infty/commands.rs (Normal file, 438 lines)
@@ -0,0 +1,438 @@
use std::fs;
use std::fs::OpenOptions;
use std::io;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use std::time::Instant;

use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use chrono::SecondsFormat;
use chrono::Utc;
use codex_common::CliConfigOverrides;
use codex_core::CodexAuth;
use codex_core::auth::read_codex_api_key_from_env;
use codex_core::auth::read_openai_api_key_from_env;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_infty::InftyOrchestrator;
use codex_infty::RoleConfig;
use codex_infty::RunExecutionOptions;
use codex_infty::RunParams;
use codex_infty::RunStore;
use owo_colors::OwoColorize;
use serde::Serialize;
use std::sync::OnceLock;
use supports_color::Stream;
use tracing_appender::non_blocking;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::prelude::*;

use super::args::CreateArgs;
use super::args::ListArgs;
use super::args::ShowArgs;
use super::progress::TerminalProgressReporter;
use super::summary::print_run_summary_box;

const DEFAULT_VERIFIER_ROLES: [&str; 3] = ["verifier-alpha", "verifier-beta", "verifier-gamma"];

pub(crate) const DEFAULT_TIMEOUT_SECS: u64 = 6000;

#[derive(Debug, Serialize)]
struct RunSummary {
    run_id: String,
    path: String,
    created_at: String,
    updated_at: String,
    roles: Vec<String>,
}

pub(crate) async fn run_create(
    config_overrides: CliConfigOverrides,
    runs_root_override: Option<PathBuf>,
    args: CreateArgs,
) -> Result<()> {
    let config = load_config(config_overrides).await?;
    init_infty_logging(&config)?;
    let auth = load_auth(&config)?;
    let runs_root = resolve_runs_root(runs_root_override)?;
    let color_enabled = supports_color::on(Stream::Stdout).is_some();

    let mut run_id = if let Some(id) = args.run_id {
        id
    } else {
        generate_run_id()
    };
    run_id = run_id.trim().to_string();
    validate_run_id(&run_id)?;

    let run_path = runs_root.join(&run_id);
    if run_path.exists() {
        bail!("run {run_id} already exists at {}", run_path.display());
    }

    let orchestrator = InftyOrchestrator::with_runs_root(auth, runs_root).with_progress(Arc::new(
        TerminalProgressReporter::with_color(color_enabled),
    ));
    let verifiers: Vec<RoleConfig> = DEFAULT_VERIFIER_ROLES
        .iter()
        .map(|role| RoleConfig::new(role.to_string(), config.clone()))
        .collect();
    let mut director_config = config.clone();
    if let Some(model) = args.director_model.as_deref() {
        director_config.model = model.to_string();
    }
    if let Some(effort) = args.director_effort {
        director_config.model_reasoning_effort = Some(effort);
    }
    let run_params = RunParams {
        run_id: run_id.clone(),
        run_root: Some(run_path.clone()),
        solver: RoleConfig::new("solver", config.clone()),
        director: RoleConfig::new("director", director_config),
        verifiers,
    };

    if let Some(objective) = args.objective {
        let timeout = Duration::from_secs(args.timeout_secs);
        let options = RunExecutionOptions {
            objective: Some(objective),
            director_timeout: timeout,
            verifier_timeout: timeout,
        };

        let start = Instant::now();
        let start_header = format!("Starting run {run_id}");
        if color_enabled {
            println!("{}", start_header.blue().bold());
        } else {
            println!("{start_header}");
        }
        let location_line = format!("  run directory: {}", run_path.display());
        if color_enabled {
            println!("{}", location_line.dimmed());
        } else {
            println!("{location_line}");
        }
        if let Some(objective_text) = options.objective.as_deref()
            && !objective_text.trim().is_empty()
        {
            let objective_line = format!("  objective: {objective_text}");
            if color_enabled {
                println!("{}", objective_line.dimmed());
            } else {
                println!("{objective_line}");
            }
        }
        println!();

        let objective_snapshot = options.objective.clone();
        let outcome = orchestrator
            .execute_new_run(run_params, options)
            .await
            .with_context(|| format!("failed to execute run {run_id}"))?;
        let duration = start.elapsed();
        print_run_summary_box(
            color_enabled,
            &run_id,
            &run_path,
            &outcome.deliverable_path,
            outcome.summary.as_deref(),
            objective_snapshot.as_deref(),
            duration,
        );
    } else {
        let sessions = orchestrator
            .spawn_run(run_params)
            .await
            .with_context(|| format!("failed to create run {run_id}"))?;

        println!(
            "Created run {run_id} at {}",
            sessions.store.path().display()
        );
    }

    Ok(())
}
pub(crate) fn run_list(runs_root_override: Option<PathBuf>, args: ListArgs) -> Result<()> {
    // Initialize logging using default Codex home discovery.
    let _ = init_infty_logging_from_home();
    let runs_root = resolve_runs_root(runs_root_override)?;
    let listings = collect_run_summaries(&runs_root)?;

    if args.json {
        println!("{}", serde_json::to_string_pretty(&listings)?);
        return Ok(());
    }

    if listings.is_empty() {
        println!("No runs found under {}", runs_root.display());
        return Ok(());
    }

    println!("Runs in {}", runs_root.display());
    for summary in listings {
        println!(
            "{}\t{}\t{}",
            summary.run_id, summary.updated_at, summary.path
        );
    }

    Ok(())
}
pub(crate) fn run_show(runs_root_override: Option<PathBuf>, args: ShowArgs) -> Result<()> {
    validate_run_id(&args.run_id)?;
    let _ = init_infty_logging_from_home();
    let runs_root = resolve_runs_root(runs_root_override)?;
    let run_path = runs_root.join(&args.run_id);
    let store =
        RunStore::load(&run_path).with_context(|| format!("failed to load run {}", args.run_id))?;
    let metadata = store.metadata();

    let summary = RunSummary {
        run_id: metadata.run_id.clone(),
        path: run_path.display().to_string(),
        created_at: metadata
            .created_at
            .to_rfc3339_opts(SecondsFormat::Secs, true),
        updated_at: metadata
            .updated_at
            .to_rfc3339_opts(SecondsFormat::Secs, true),
        roles: metadata
            .roles
            .iter()
            .map(|role| role.role.clone())
            .collect(),
    };

    if args.json {
        println!("{}", serde_json::to_string_pretty(&summary)?);
        return Ok(());
    }

    println!("Run: {}", summary.run_id);
    println!("Path: {}", summary.path);
    println!("Created: {}", summary.created_at);
    println!("Updated: {}", summary.updated_at);
    println!("Roles: {}", summary.roles.join(", "));

    Ok(())
}

// resumable runs are disabled; run_drive removed
fn generate_run_id() -> String {
    let timestamp = Utc::now().format("run-%Y%m%d-%H%M%S");
    format!("{timestamp}")
}
pub(crate) fn validate_run_id(run_id: &str) -> Result<()> {
    if run_id.is_empty() {
        bail!("run id must not be empty");
    }
    if run_id.starts_with('.') || run_id.ends_with('.') {
        bail!("run id must not begin or end with '.'");
    }
    if run_id
        .chars()
        .any(|c| !(c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '.')))
    {
        bail!("run id may only contain ASCII alphanumerics, '-', '_', or '.'");
    }
    Ok(())
}
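Run ids double as directory names under the runs root, so `validate_run_id` rejects anything that could escape or hide the directory: empty ids, ids starting or ending with `.` (which would cover `.` and `..`), and path separators. A dependency-free restatement of the same rules, using `String` errors in place of `anyhow` so it stands alone:

```rust
// Same checks as the diff's validate_run_id, expressed without anyhow:
// non-empty, no leading/trailing '.', only ASCII alphanumerics or '-' '_' '.'.
fn validate_run_id(run_id: &str) -> Result<(), String> {
    if run_id.is_empty() {
        return Err("run id must not be empty".to_string());
    }
    if run_id.starts_with('.') || run_id.ends_with('.') {
        return Err("run id must not begin or end with '.'".to_string());
    }
    if run_id
        .chars()
        .any(|c| !(c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '.')))
    {
        return Err("run id may only contain ASCII alphanumerics, '-', '_', or '.'".to_string());
    }
    Ok(())
}

fn main() {
    for id in ["run-20241030-123000", "run.alpha", "..bad", "bad/value"] {
        println!("{id}: {:?}", validate_run_id(id));
    }
}
```

Note that `bad/value` fails the character check (no `/` allowed), which is what keeps `runs_root.join(&run_id)` from resolving outside the runs root.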
async fn load_config(cli_overrides: CliConfigOverrides) -> Result<Config> {
    let overrides = cli_overrides
        .parse_overrides()
        .map_err(|err| anyhow!("failed to parse -c overrides: {err}"))?;
    Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
        .await
        .context("failed to load Codex configuration")
}
fn load_auth(config: &Config) -> Result<CodexAuth> {
    if let Some(auth) =
        CodexAuth::from_codex_home(&config.codex_home).context("failed to read auth.json")?
    {
        return Ok(auth);
    }
    if let Some(api_key) = read_codex_api_key_from_env() {
        return Ok(CodexAuth::from_api_key(&api_key));
    }
    if let Some(api_key) = read_openai_api_key_from_env() {
        return Ok(CodexAuth::from_api_key(&api_key));
    }
    bail!("no Codex authentication found. Run `codex login` or set OPENAI_API_KEY.");
}
fn resolve_runs_root(override_path: Option<PathBuf>) -> Result<PathBuf> {
    if let Some(path) = override_path {
        return Ok(path);
    }
    codex_infty::default_runs_root()
}
fn collect_run_summaries(root: &Path) -> Result<Vec<RunSummary>> {
    let mut summaries = Vec::new();
    let iter = match fs::read_dir(root) {
        Ok(read_dir) => read_dir,
        Err(err) if err.kind() == io::ErrorKind::NotFound => return Ok(summaries),
        Err(err) => {
            return Err(
                anyhow!(err).context(format!("failed to read runs root {}", root.display()))
            );
        }
    };

    for entry in iter {
        let entry = entry?;
        if !entry.file_type()?.is_dir() {
            continue;
        }
        let run_path = entry.path();
        let store = match RunStore::load(&run_path) {
            Ok(store) => store,
            Err(err) => {
                eprintln!(
                    "Skipping {}: failed to load run metadata: {err}",
                    run_path.display()
                );
                continue;
            }
        };
        let metadata = store.metadata();
        summaries.push(RunSummary {
            run_id: metadata.run_id.clone(),
            path: run_path.display().to_string(),
            created_at: metadata
                .created_at
                .to_rfc3339_opts(SecondsFormat::Secs, true),
            updated_at: metadata
                .updated_at
                .to_rfc3339_opts(SecondsFormat::Secs, true),
            roles: metadata
                .roles
                .iter()
                .map(|role| role.role.clone())
                .collect(),
        });
    }

    summaries.sort_by(|a, b| b.updated_at.cmp(&a.updated_at));
    Ok(summaries)
}
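`collect_run_summaries` can sort on `updated_at` as a plain string because the timestamps are serialized with `to_rfc3339_opts(SecondsFormat::Secs, true)`: fixed-width RFC 3339 with a trailing `Z`, which sorts lexicographically in chronological order. A small sketch of the newest-first ordering over `(run_id, updated_at)` pairs (the tuple shape is illustrative, not the crate's `RunSummary` type):

```rust
// RFC 3339 timestamps with a uniform "Z" offset compare lexicographically in
// time order, so reversing the string comparison yields newest-first.
fn sort_newest_first(summaries: &mut Vec<(String, String)>) {
    summaries.sort_by(|a, b| b.1.cmp(&a.1));
}

fn main() {
    let mut runs = vec![
        ("alpha".to_string(), "2024-10-30T12:00:00Z".to_string()),
        ("beta".to_string(), "2024-11-02T08:30:00Z".to_string()),
        ("gamma".to_string(), "2024-10-29T23:59:59Z".to_string()),
    ];
    sort_newest_first(&mut runs);
    let order: Vec<&str> = runs.iter().map(|r| r.0.as_str()).collect();
    println!("{order:?}");
}
```

This property would break if the timestamps mixed offsets (e.g. `+02:00` next to `Z`), which is why the serialization pins UTC.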
fn init_infty_logging(config: &codex_core::config::Config) -> std::io::Result<()> {
    let log_dir = codex_core::config::log_dir(config)?;
    std::fs::create_dir_all(&log_dir)?;

    let mut log_file_opts = OpenOptions::new();
    log_file_opts.create(true).append(true);

    #[cfg(unix)]
    {
        use std::os::unix::fs::OpenOptionsExt;
        log_file_opts.mode(0o600);
    }

    let log_file = log_file_opts.open(log_dir.join("codex-infty.log"))?;
    let (non_blocking, guard) = non_blocking(log_file);
    static INFTY_LOG_GUARD: OnceLock<tracing_appender::non_blocking::WorkerGuard> = OnceLock::new();
    let _ = INFTY_LOG_GUARD.set(guard);

    // Use RUST_LOG if set, otherwise default to info for common codex crates.
    let env_filter = || {
        EnvFilter::try_from_default_env()
            .unwrap_or_else(|_| EnvFilter::new("codex_core=info,codex_infty=info,codex_cli=info"))
    };

    let file_layer = tracing_subscriber::fmt::layer()
        .with_writer(non_blocking)
        .with_target(false)
        .with_span_events(tracing_subscriber::fmt::format::FmtSpan::CLOSE)
        .with_filter(env_filter());

    // Initialize once; subsequent calls are no-ops.
    let _ = tracing_subscriber::registry().with(file_layer).try_init();
    Ok(())
}

fn init_infty_logging_from_home() -> std::io::Result<()> {
    let mut log_dir = codex_core::config::find_codex_home()?;
    log_dir.push("log");
    std::fs::create_dir_all(&log_dir)?;

    let mut log_file_opts = OpenOptions::new();
    log_file_opts.create(true).append(true);

    #[cfg(unix)]
    {
        use std::os::unix::fs::OpenOptionsExt;
        log_file_opts.mode(0o600);
    }

    let log_file = log_file_opts.open(log_dir.join("codex-infty.log"))?;
    let (non_blocking, guard) = non_blocking(log_file);
    static INFTY_LOG_GUARD: OnceLock<tracing_appender::non_blocking::WorkerGuard> = OnceLock::new();
    let _ = INFTY_LOG_GUARD.set(guard);

    let env_filter = || {
        EnvFilter::try_from_default_env()
            .unwrap_or_else(|_| EnvFilter::new("codex_core=info,codex_infty=info,codex_cli=info"))
    };

    let file_layer = tracing_subscriber::fmt::layer()
        .with_writer(non_blocking)
        .with_target(false)
        .with_span_events(tracing_subscriber::fmt::format::FmtSpan::CLOSE)
        .with_filter(env_filter());

    let _ = tracing_subscriber::registry().with(file_layer).try_init();
    Ok(())
}
#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    #[test]
    fn default_verifier_roles_are_stable() {
        assert_eq!(
            DEFAULT_VERIFIER_ROLES,
            ["verifier-alpha", "verifier-beta", "verifier-gamma"]
        );
    }

    #[test]
    fn validates_run_ids() {
        assert!(validate_run_id("run-20241030-123000").is_ok());
        assert!(validate_run_id("run.alpha").is_ok());
        assert!(validate_run_id("").is_err());
        assert!(validate_run_id("..bad").is_err());
        assert!(validate_run_id("bad/value").is_err());
    }

    #[test]
    fn generates_timestamped_run_id() {
        let run_id = generate_run_id();
        assert!(run_id.starts_with("run-"));
        assert_eq!(run_id.len(), "run-YYYYMMDD-HHMMSS".len());
    }

    #[test]
    fn collect_summaries_returns_empty_for_missing_root() {
        let temp = TempDir::new().expect("temp dir");
        let missing = temp.path().join("not-present");
        let summaries = collect_run_summaries(&missing).expect("collect");
        assert!(summaries.is_empty());
    }
}
codex-rs/cli/src/infty/mod.rs (Normal file, 6 lines)
@@ -0,0 +1,6 @@
mod args;
mod commands;
mod progress;
mod summary;

pub use args::InftyCli;
codex-rs/cli/src/infty/progress.rs (Normal file, 194 lines)
@@ -0,0 +1,194 @@
use chrono::Local;
use codex_core::protocol::AgentMessageEvent;
use codex_core::protocol::EventMsg;
use codex_infty::AggregatedVerifierVerdict;
use codex_infty::DirectiveResponse;
use codex_infty::ProgressReporter;
use codex_infty::VerifierDecision;
use codex_infty::VerifierVerdict;
use crossterm::style::Stylize;
use std::path::Path;
use supports_color::Stream;

#[derive(Debug, Default, Clone)]
pub(crate) struct TerminalProgressReporter;

impl TerminalProgressReporter {
    pub(crate) fn with_color(_color_enabled: bool) -> Self {
        Self
    }

    fn format_role_label(&self, role: &str) -> String {
        let lower = role.to_ascii_lowercase();
        if lower == "solver" {
            return "[solver]".magenta().bold().to_string();
        }
        if lower == "director" {
            return "[director]".blue().bold().to_string();
        }
        if lower == "user" {
            return "[user]".cyan().bold().to_string();
        }
        if lower.contains("verifier") {
            return format!("[{role}]").green().bold().to_string();
        }
        format!("[{role}]").magenta().bold().to_string()
    }

    fn timestamp(&self) -> String {
        let timestamp = Local::now().format("%H:%M:%S");
        let display = format!("[{timestamp}]");
        if supports_color::on(Stream::Stdout).is_some() {
            format!("{}", display.dim())
        } else {
            display
        }
    }

    fn print_exchange(
        &self,
        from_role: &str,
        to_role: &str,
        lines: Vec<String>,
        trailing_blank_line: bool,
    ) {
        let header = format!(
            "{} ----> {}",
            self.format_role_label(from_role),
            self.format_role_label(to_role)
        );
        println!("{} {header}", self.timestamp());
        for line in lines {
            println!("{line}");
        }
        if trailing_blank_line {
            println!();
        }
    }

    fn format_decision(&self, decision: VerifierDecision) -> String {
        match decision {
            VerifierDecision::Pass => "pass".green().bold().to_string(),
            VerifierDecision::Fail => "fail".red().bold().to_string(),
        }
    }
}

impl ProgressReporter for TerminalProgressReporter {
    fn objective_posted(&self, objective: &str) {
        let objective_line = format!("{}", format!("→ objective: {objective}").dim());
        self.print_exchange("user", "solver", vec![objective_line], true);
    }

    fn solver_event(&self, event: &EventMsg) {
        match serde_json::to_string_pretty(event) {
            Ok(json) => {
                tracing::debug!("[solver:event]\n{json}");
            }
            Err(err) => {
                tracing::warn!("[solver:event] (failed to serialize: {err}) {event:?}");
            }
        }
    }

    fn role_event(&self, role: &str, event: &EventMsg) {
        match serde_json::to_string_pretty(event) {
            Ok(json) => {
                tracing::debug!("[{role}:event]\n{json}");
            }
            Err(err) => {
                tracing::warn!("[{role}:event] (failed to serialize: {err}) {event:?}");
            }
        }
    }

    fn solver_agent_message(&self, agent_msg: &AgentMessageEvent) {
        tracing::info!("Agent Message: {agent_msg:?}");
    }

    fn invalid_solver_signal(&self, raw_message: &str) {
        let heading = "Warning".yellow().bold();
        let body = format!(
            "solver reply did not match expected JSON signal; got: {}",
            raw_message
        );
        println!("{} {} {}", self.timestamp(), heading, body);
    }

    fn direction_request(&self, prompt: &str) {
        let prompt_line = format!("{}", prompt.yellow());
        self.print_exchange("solver", "director", vec![prompt_line], true);
    }

    fn director_response(&self, directive: &DirectiveResponse) {
        let suffix = directive
            .rationale
            .as_deref()
            .filter(|rationale| !rationale.is_empty())
            .map(|rationale| format!(" (rationale: {rationale})"))
            .unwrap_or_default();
        let directive_line = format!("{}{}", directive.directive, suffix);
        self.print_exchange("director", "solver", vec![directive_line], true);
    }

    fn verification_request(&self, claim_path: &str, notes: Option<&str>) {
        let mut lines = Vec::new();
        let path_line = format!("→ path: {claim_path}");
        lines.push(format!("{}", path_line.dim()));
        if let Some(notes) = notes.filter(|notes| !notes.is_empty()) {
            let note_line = format!("→ note: {notes}");
            lines.push(format!("{}", note_line.dim()));
        }
        self.print_exchange("solver", "verifier", lines, true);
    }

    fn verifier_verdict(&self, role: &str, verdict: &VerifierVerdict) {
        let decision = self.format_decision(verdict.verdict);
        let mut lines = Vec::new();
        lines.push(format!("verdict: {decision}"));
        if !verdict.reasons.is_empty() {
            let reasons = verdict.reasons.join("; ");
            let reason_line = format!("→ reasons: {reasons}");
            lines.push(format!("{}", reason_line.dim()));
        }
        if !verdict.suggestions.is_empty() {
            let suggestions = verdict.suggestions.join("; ");
            let suggestion_line = format!("→ suggestions: {suggestions}");
            lines.push(format!("{}", suggestion_line.dim()));
        }
        self.print_exchange(role, "solver", lines, false);
    }

    fn verification_summary(&self, summary: &AggregatedVerifierVerdict) {
        let decision = self.format_decision(summary.overall);
        let heading = "Verification summary".bold();
        let summary_line = format!("{heading}: {decision}");
        self.print_exchange("verifier", "solver", vec![summary_line], true);
    }

    fn final_delivery(&self, deliverable_path: &Path, summary: Option<&str>) {
        let delivery_line = format!(
            "{}",
            format!("→ path: {}", deliverable_path.display()).dim()
        );
        let summary_line = format!(
            "{}",
            format!("→ summary: {}", summary.unwrap_or("<none>")).dim()
        );
        self.print_exchange(
            "solver",
            "verifier",
            vec![delivery_line, summary_line],
            true,
        );
    }

    fn run_interrupted(&self) {
        println!(
            "{}",
            "Run interrupted by Ctrl+C. Shutting down sessions…"
                .red()
                .bold(),
        );
    }
}
codex-rs/cli/src/infty/summary.rs (Normal file, 123 lines)
@@ -0,0 +1,123 @@
use std::path::Path;
use std::time::Duration;

use codex_common::elapsed::format_duration;
use crossterm::terminal;
use owo_colors::OwoColorize;
use textwrap::Options as WrapOptions;
use textwrap::wrap;

pub(crate) fn print_run_summary_box(
    color_enabled: bool,
    run_id: &str,
    run_path: &Path,
    deliverable_path: &Path,
    summary: Option<&str>,
    objective: Option<&str>,
    duration: Duration,
) {
    let mut items = Vec::new();
    items.push(("Run ID".to_string(), run_id.to_string()));
    items.push(("Run Directory".to_string(), run_path.display().to_string()));
    if let Some(objective) = objective
        && !objective.trim().is_empty()
    {
        items.push(("Objective".to_string(), objective.trim().to_string()));
    }
    items.push((
        "Deliverable".to_string(),
        deliverable_path.display().to_string(),
    ));
    items.push(("Total Time".to_string(), format_duration(duration)));
    if let Some(summary) = summary {
        let trimmed = summary.trim();
        if !trimmed.is_empty() {
            items.push(("Summary".to_string(), trimmed.to_string()));
        }
    }

    let label_width = items
        .iter()
        .map(|(label, _)| label.len())
        .max()
        .unwrap_or(0)
        .max(12);

    const DEFAULT_MAX_WIDTH: usize = 84;
    const MIN_VALUE_WIDTH: usize = 20;
    let label_padding = label_width + 7;
    let min_total_width = label_padding + MIN_VALUE_WIDTH;
    let available_width = terminal::size()
        .ok()
        .map(|(cols, _)| usize::from(cols).saturating_sub(2))
        .unwrap_or(DEFAULT_MAX_WIDTH);
    let max_width = available_width.min(DEFAULT_MAX_WIDTH);
    let lower_bound = min_total_width.min(available_width);
    let mut total_width = max_width.max(lower_bound).max(label_padding + 1);
    let mut value_width = total_width.saturating_sub(label_padding);
    if value_width < MIN_VALUE_WIDTH {
        value_width = MIN_VALUE_WIDTH;
        total_width = label_padding + value_width;
    }

    let inner_width = total_width.saturating_sub(4);
    let top_border = format!("+{}+", "=".repeat(total_width.saturating_sub(2)));
    let separator = format!("+{}+", "-".repeat(total_width.saturating_sub(2)));
    let title_line = format!(
        "| {:^inner_width$} |",
        "Run Summary",
        inner_width = inner_width
    );

    println!();
    println!("{top_border}");
    if color_enabled {
        println!("{}", title_line.bold());
    } else {
        println!("{title_line}");
    }
    println!("{separator}");

    for (index, (label, value)) in items.iter().enumerate() {
        let mut rows = Vec::new();
        for (idx, paragraph) in value.split('\n').enumerate() {
            let trimmed = paragraph.trim();
            if trimmed.is_empty() {
                if idx > 0 {
                    rows.push(String::new());
                }
                continue;
            }
            let wrapped = wrap(trimmed, WrapOptions::new(value_width).break_words(false));
            if wrapped.is_empty() {
                rows.push(String::new());
            } else {
                rows.extend(wrapped.into_iter().map(std::borrow::Cow::into_owned));
            }
        }
        if rows.is_empty() {
            rows.push(String::new());
        }

        for (line_idx, line) in rows.iter().enumerate() {
            let label_cell = if line_idx == 0 { label.as_str() } else { "" };
            let row_line = format!("| {label_cell:<label_width$} | {line:<value_width$} |");
            if color_enabled {
                match label.as_str() {
                    "Deliverable" => println!("{}", row_line.green()),
                    "Summary" => println!("{}", row_line.bold()),
                    _ => println!("{row_line}"),
                }
            } else {
                println!("{row_line}");
            }
        }

        if index + 1 < items.len() {
            println!("{separator}");
        }
    }

    println!("{top_border}");
    println!();
}
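The width arithmetic in `print_run_summary_box` caps the box at 84 columns, shrinks it when the terminal is narrower, but always reserves at least 20 columns for values (growing the box past the terminal if the labels alone would squeeze values below that). The pure computation can be factored out and checked in isolation; `available` stands in for the `terminal::size()` result minus the 2-column margin:

```rust
// Extracted sketch of the box-width computation from print_run_summary_box.
const DEFAULT_MAX_WIDTH: usize = 84;
const MIN_VALUE_WIDTH: usize = 20;

// Returns (total_width, value_width) for a given widest-label width and
// available terminal columns.
fn box_widths(label_width: usize, available: usize) -> (usize, usize) {
    // "| label | value |" framing costs 7 columns beyond the label text.
    let label_padding = label_width + 7;
    let min_total_width = label_padding + MIN_VALUE_WIDTH;
    let max_width = available.min(DEFAULT_MAX_WIDTH);
    let lower_bound = min_total_width.min(available);
    let mut total_width = max_width.max(lower_bound).max(label_padding + 1);
    let mut value_width = total_width.saturating_sub(label_padding);
    if value_width < MIN_VALUE_WIDTH {
        // Values would be unreadably narrow: widen the box instead.
        value_width = MIN_VALUE_WIDTH;
        total_width = label_padding + value_width;
    }
    (total_width, value_width)
}

fn main() {
    // Wide terminal: capped at 84 total columns.
    println!("{:?}", box_widths(12, 120));
    // Narrow terminal: value floor forces the box wider than the terminal.
    println!("{:?}", box_widths(12, 30));
}
```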
@@ -1,7 +1,6 @@
pub mod debug_sandbox;
mod exit_status;
pub mod login;
pub mod proto;

use clap::Parser;
use codex_common::CliConfigOverrides;
@@ -1,3 +1,4 @@
|
||||
use codex_app_server_protocol::AuthMode;
|
||||
use codex_common::CliConfigOverrides;
|
||||
use codex_core::CodexAuth;
|
||||
use codex_core::auth::CLIENT_ID;
|
||||
@@ -8,7 +9,8 @@ use codex_core::config::ConfigOverrides;
|
||||
use codex_login::ServerOptions;
|
||||
use codex_login::run_device_code_login;
|
||||
use codex_login::run_login_server;
|
||||
use codex_protocol::mcp_protocol::AuthMode;
|
||||
use std::io::IsTerminal;
|
||||
use std::io::Read;
|
||||
use std::path::PathBuf;
|
||||
|
||||
pub async fn login_with_chatgpt(codex_home: PathBuf) -> std::io::Result<()> {
|
||||
@@ -24,7 +26,7 @@ pub async fn login_with_chatgpt(codex_home: PathBuf) -> std::io::Result<()> {
|
||||
}
|
||||
|
||||
pub async fn run_login_with_chatgpt(cli_config_overrides: CliConfigOverrides) -> ! {
|
||||
let config = load_config_or_exit(cli_config_overrides);
|
||||
let config = load_config_or_exit(cli_config_overrides).await;
|
||||
|
||||
match login_with_chatgpt(config.codex_home).await {
|
||||
Ok(_) => {
|
||||
@@ -42,7 +44,7 @@ pub async fn run_login_with_api_key(
|
||||
cli_config_overrides: CliConfigOverrides,
|
||||
api_key: String,
|
||||
) -> ! {
|
||||
let config = load_config_or_exit(cli_config_overrides);
|
||||
let config = load_config_or_exit(cli_config_overrides).await;
|
||||
|
||||
match login_with_api_key(&config.codex_home, &api_key) {
|
||||
Ok(_) => {
|
||||
@@ -56,13 +58,40 @@ pub async fn run_login_with_api_key(
|
||||
}
|
||||
}
|
||||
|
||||
pub fn read_api_key_from_stdin() -> String {
|
||||
let mut stdin = std::io::stdin();
|
||||
|
||||
if stdin.is_terminal() {
|
||||
eprintln!(
|
||||
"--with-api-key expects the API key on stdin. Try piping it, e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`."
|
||||
);
|
||||
std::process::exit(1);
|
||||
}
|
||||
|
||||
eprintln!("Reading API key from stdin...");
|
||||
|
||||
let mut buffer = String::new();
|
||||
if let Err(err) = stdin.read_to_string(&mut buffer) {
|
||||
eprintln!("Failed to read API key from stdin: {err}");
|
||||
std::process::exit(1);
|
||||
}
|
||||
|
||||
let api_key = buffer.trim().to_string();
|
||||
if api_key.is_empty() {
|
||||
eprintln!("No API key provided via stdin.");
|
||||
std::process::exit(1);
|
||||
}
|
||||
|
||||
api_key
|
||||
}
|
||||
|
/// Login using the OAuth device code flow.
pub async fn run_login_with_device_code(
cli_config_overrides: CliConfigOverrides,
issuer_base_url: Option<String>,
client_id: Option<String>,
) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;
let mut opts = ServerOptions::new(
config.codex_home,
client_id.unwrap_or(CLIENT_ID.to_string()),
@@ -83,7 +112,7 @@ pub async fn run_login_with_device_code(
}

pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;

match CodexAuth::from_codex_home(&config.codex_home) {
Ok(Some(auth)) => match auth.mode {
@@ -114,7 +143,7 @@ pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
}

pub async fn run_logout(cli_config_overrides: CliConfigOverrides) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;

match logout(&config.codex_home) {
Ok(true) => {
@@ -132,7 +161,7 @@ pub async fn run_logout(cli_config_overrides: CliConfigOverrides) -> ! {
}
}

fn load_config_or_exit(cli_config_overrides: CliConfigOverrides) -> Config {
async fn load_config_or_exit(cli_config_overrides: CliConfigOverrides) -> Config {
let cli_overrides = match cli_config_overrides.parse_overrides() {
Ok(v) => v,
Err(e) => {
@@ -142,7 +171,7 @@ fn load_config_or_exit(cli_config_overrides: CliConfigOverrides) -> Config {
};

let config_overrides = ConfigOverrides::default();
match Config::load_with_cli_overrides(cli_overrides, config_overrides) {
match Config::load_with_cli_overrides(cli_overrides, config_overrides).await {
Ok(config) => config,
Err(e) => {
eprintln!("Error loading configuration: {e}");

@@ -7,26 +7,30 @@ use codex_chatgpt::apply_command::ApplyCommand;
use codex_chatgpt::apply_command::run_apply_command;
use codex_cli::LandlockCommand;
use codex_cli::SeatbeltCommand;
use codex_cli::login::read_api_key_from_stdin;
use codex_cli::login::run_login_status;
use codex_cli::login::run_login_with_api_key;
use codex_cli::login::run_login_with_chatgpt;
use codex_cli::login::run_login_with_device_code;
use codex_cli::login::run_logout;
use codex_cli::proto;
use codex_cloud_tasks::Cli as CloudTasksCli;
use codex_common::CliConfigOverrides;
use codex_exec::Cli as ExecCli;
use codex_responses_api_proxy::Args as ResponsesApiProxyArgs;
use codex_tui::AppExitInfo;
use codex_tui::Cli as TuiCli;
use codex_tui::UpdateAction;
use owo_colors::OwoColorize;
use std::path::PathBuf;
use supports_color::Stream;

mod infty;
mod mcp_cmd;

use crate::infty::InftyCli;
use crate::mcp_cmd::McpCli;
use crate::proto::ProtoCli;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;

/// Codex CLI
///
@@ -46,6 +50,9 @@ struct MultitoolCli {
#[clap(flatten)]
pub config_overrides: CliConfigOverrides,

#[clap(flatten)]
pub feature_toggles: FeatureToggles,

#[clap(flatten)]
interactive: TuiCli,

@@ -74,15 +81,12 @@ enum Subcommand {
/// [experimental] Run the app server.
AppServer,

/// Run the Protocol stream via stdin/stdout
#[clap(visible_alias = "p")]
Proto(ProtoCli),

/// Generate shell completion scripts.
Completion(CompletionCommand),

/// Internal debugging commands.
Debug(DebugArgs),
/// Run commands within a Codex-provided sandbox.
#[clap(visible_alias = "debug")]
Sandbox(SandboxArgs),

/// Apply the latest diff produced by Codex agent as a `git apply` to your local working tree.
#[clap(visible_alias = "a")]
@@ -101,6 +105,13 @@ enum Subcommand {
/// Internal: run the responses API proxy.
#[clap(hide = true)]
ResponsesApiProxy(ResponsesApiProxyArgs),

/// Inspect feature flags.
Features(FeaturesCli),

/// [experimental] Manage Codex Infty long-running task runs.
#[clap(name = "infty")]
Infty(InftyCli),
}

#[derive(Debug, Parser)]
@@ -126,18 +137,20 @@ struct ResumeCommand {
}

#[derive(Debug, Parser)]
struct DebugArgs {
struct SandboxArgs {
#[command(subcommand)]
cmd: DebugCommand,
cmd: SandboxCommand,
}

#[derive(Debug, clap::Subcommand)]
enum DebugCommand {
enum SandboxCommand {
/// Run a command under Seatbelt (macOS only).
Seatbelt(SeatbeltCommand),
#[clap(visible_alias = "seatbelt")]
Macos(SeatbeltCommand),

/// Run a command under Landlock+seccomp (Linux only).
Landlock(LandlockCommand),
#[clap(visible_alias = "landlock")]
Linux(LandlockCommand),
}

#[derive(Debug, Parser)]
@@ -145,12 +158,21 @@ struct LoginCommand {
#[clap(skip)]
config_overrides: CliConfigOverrides,

#[arg(long = "api-key", value_name = "API_KEY")]
#[arg(
long = "with-api-key",
help = "Read the API key from stdin (e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`)"
)]
with_api_key: bool,

#[arg(
long = "api-key",
value_name = "API_KEY",
help = "(deprecated) Previously accepted the API key directly; now exits with guidance to use --with-api-key",
hide = true
)]
api_key: Option<String>,

/// EXPERIMENTAL: Use device code flow (not yet supported)
/// This feature is experimental and may changed in future releases.
#[arg(long = "experimental_use-device-code", hide = true)]
#[arg(long = "device-auth")]
use_device_code: bool,

/// EXPERIMENTAL: Use custom OAuth issuer base URL (advanced)
@@ -193,6 +215,7 @@ fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<Stri
let AppExitInfo {
token_usage,
conversation_id,
..
} = exit_info;

if token_usage.is_zero() {
@@ -217,32 +240,87 @@ fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<Stri
lines
}

fn print_exit_messages(exit_info: AppExitInfo) {
/// Handle the app exit and print the results. Optionally run the update action.
fn handle_app_exit(exit_info: AppExitInfo) -> anyhow::Result<()> {
let update_action = exit_info.update_action;
let color_enabled = supports_color::on(Stream::Stdout).is_some();
for line in format_exit_messages(exit_info, color_enabled) {
println!("{line}");
}
if let Some(action) = update_action {
run_update_action(action)?;
}
Ok(())
}

pub(crate) const CODEX_SECURE_MODE_ENV_VAR: &str = "CODEX_SECURE_MODE";
/// Run the update action and print the result.
fn run_update_action(action: UpdateAction) -> anyhow::Result<()> {
println!();
let (cmd, args) = action.command_args();
let cmd_str = action.command_str();
println!("Updating Codex via `{cmd_str}`...");
let status = std::process::Command::new(cmd).args(args).status()?;
if !status.success() {
anyhow::bail!("`{cmd_str}` failed with status {status}");
}
println!();
println!("🎉 Update ran successfully! Please restart Codex.");
Ok(())
}

/// As early as possible in the process lifecycle, apply hardening measures
/// if the CODEX_SECURE_MODE environment variable is set to "1".
#[derive(Debug, Default, Parser, Clone)]
struct FeatureToggles {
/// Enable a feature (repeatable). Equivalent to `-c features.<name>=true`.
#[arg(long = "enable", value_name = "FEATURE", action = clap::ArgAction::Append, global = true)]
enable: Vec<String>,

/// Disable a feature (repeatable). Equivalent to `-c features.<name>=false`.
#[arg(long = "disable", value_name = "FEATURE", action = clap::ArgAction::Append, global = true)]
disable: Vec<String>,
}

impl FeatureToggles {
fn to_overrides(&self) -> Vec<String> {
let mut v = Vec::new();
for k in &self.enable {
v.push(format!("features.{k}=true"));
}
for k in &self.disable {
v.push(format!("features.{k}=false"));
}
v
}
}

#[derive(Debug, Parser)]
struct FeaturesCli {
#[command(subcommand)]
sub: FeaturesSubcommand,
}

#[derive(Debug, Parser)]
enum FeaturesSubcommand {
/// List known features with their stage and effective state.
List,
}

fn stage_str(stage: codex_core::features::Stage) -> &'static str {
use codex_core::features::Stage;
match stage {
Stage::Experimental => "experimental",
Stage::Beta => "beta",
Stage::Stable => "stable",
Stage::Deprecated => "deprecated",
Stage::Removed => "removed",
}
}

/// As early as possible in the process lifecycle, apply hardening measures. We
/// skip this in debug builds to avoid interfering with debugging.
#[ctor::ctor]
#[cfg(not(debug_assertions))]
fn pre_main_hardening() {
let secure_mode = match std::env::var(CODEX_SECURE_MODE_ENV_VAR) {
Ok(value) => value,
Err(_) => return,
};

if secure_mode == "1" {
codex_process_hardening::pre_main_hardening();
}

// Always clear this env var so child processes don't inherit it.
unsafe {
std::env::remove_var(CODEX_SECURE_MODE_ENV_VAR);
}
codex_process_hardening::pre_main_hardening();
}

fn main() -> anyhow::Result<()> {
@@ -254,11 +332,17 @@ fn main() -> anyhow::Result<()> {

async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let MultitoolCli {
config_overrides: root_config_overrides,
config_overrides: mut root_config_overrides,
feature_toggles,
mut interactive,
subcommand,
} = MultitoolCli::parse();

// Fold --enable/--disable into config overrides so they flow to all subcommands.
root_config_overrides
.raw_overrides
.extend(feature_toggles.to_overrides());

match subcommand {
None => {
prepend_config_flags(
@@ -266,7 +350,7 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
root_config_overrides.clone(),
);
let exit_info = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
print_exit_messages(exit_info);
handle_app_exit(exit_info)?;
}
Some(Subcommand::Exec(mut exec_cli)) => {
prepend_config_flags(
@@ -298,7 +382,8 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
last,
config_overrides,
);
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
let exit_info = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
handle_app_exit(exit_info)?;
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(
@@ -317,7 +402,13 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
login_cli.client_id,
)
.await;
} else if let Some(api_key) = login_cli.api_key {
} else if login_cli.api_key.is_some() {
eprintln!(
"The --api-key flag is no longer supported. Pipe the key instead, e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`."
);
std::process::exit(1);
} else if login_cli.with_api_key {
let api_key = read_api_key_from_stdin();
run_login_with_api_key(login_cli.config_overrides, api_key).await;
} else {
run_login_with_chatgpt(login_cli.config_overrides).await;
@@ -332,13 +423,6 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
);
run_logout(logout_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(
&mut proto_cli.config_overrides,
root_config_overrides.clone(),
);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Completion(completion_cli)) => {
print_completion(completion_cli);
}
@@ -349,8 +433,15 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
);
codex_cloud_tasks::run_main(cloud_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Debug(debug_args)) => match debug_args.cmd {
DebugCommand::Seatbelt(mut seatbelt_cli) => {
Some(Subcommand::Infty(mut infty_cli)) => {
prepend_config_flags(
&mut infty_cli.config_overrides,
root_config_overrides.clone(),
);
infty_cli.run().await?;
}
Some(Subcommand::Sandbox(sandbox_args)) => match sandbox_args.cmd {
SandboxCommand::Macos(mut seatbelt_cli) => {
prepend_config_flags(
&mut seatbelt_cli.config_overrides,
root_config_overrides.clone(),
@@ -361,7 +452,7 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
)
.await?;
}
DebugCommand::Landlock(mut landlock_cli) => {
SandboxCommand::Linux(mut landlock_cli) => {
prepend_config_flags(
&mut landlock_cli.config_overrides,
root_config_overrides.clone(),
@@ -387,6 +478,30 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
Some(Subcommand::GenerateTs(gen_cli)) => {
codex_protocol_ts::generate_ts(&gen_cli.out_dir, gen_cli.prettier.as_deref())?;
}
Some(Subcommand::Features(FeaturesCli { sub })) => match sub {
FeaturesSubcommand::List => {
// Respect root-level `-c` overrides plus top-level flags like `--profile`.
let cli_kv_overrides = root_config_overrides
.parse_overrides()
.map_err(|e| anyhow::anyhow!(e))?;

// Thread through relevant top-level flags (at minimum, `--profile`).
// Also honor `--search` since it maps to a feature toggle.
let overrides = ConfigOverrides {
config_profile: interactive.config_profile.clone(),
tools_web_search_request: interactive.web_search.then_some(true),
..Default::default()
};

let config = Config::load_with_cli_overrides(cli_kv_overrides, overrides).await?;
for def in codex_core::features::FEATURES.iter() {
let name = def.key;
let stage = stage_str(def.stage);
let enabled = config.features.enabled(def.id);
println!("{name}\t{stage}\t{enabled}");
}
}
},
}

Ok(())
@@ -480,8 +595,9 @@ fn print_completion(cmd: CompletionCommand) {
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
use codex_core::protocol::TokenUsage;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;

fn finalize_from_args(args: &[&str]) -> TuiCli {
let cli = MultitoolCli::try_parse_from(args).expect("parse");
@@ -489,6 +605,7 @@ mod tests {
interactive,
config_overrides: root_overrides,
subcommand,
feature_toggles: _,
} = cli;

let Subcommand::Resume(ResumeCommand {
@@ -514,6 +631,7 @@ mod tests {
conversation_id: conversation
.map(ConversationId::from_string)
.map(Result::unwrap),
update_action: None,
}
}

@@ -522,6 +640,7 @@ mod tests {
let exit_info = AppExitInfo {
token_usage: TokenUsage::default(),
conversation_id: None,
update_action: None,
};
let lines = format_exit_messages(exit_info, false);
assert!(lines.is_empty());
@@ -612,14 +731,14 @@ mod tests {
assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
assert!(interactive.oss);
assert_eq!(interactive.config_profile.as_deref(), Some("my-profile"));
assert!(matches!(
assert_matches!(
interactive.sandbox_mode,
Some(codex_common::SandboxModeCliArg::WorkspaceWrite)
));
assert!(matches!(
);
assert_matches!(
interactive.approval_policy,
Some(codex_common::ApprovalModeCliArg::OnRequest)
));
);
assert!(interactive.full_auto);
assert_eq!(
interactive.cwd.as_deref(),

@@ -4,7 +4,9 @@ use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use clap::ArgGroup;
use codex_common::CliConfigOverrides;
use codex_common::format_env_display::format_env_display;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::find_codex_home;
@@ -12,6 +14,12 @@ use codex_core::config::load_global_mcp_servers;
use codex_core::config::write_global_mcp_servers;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::features::Feature;
use codex_core::mcp::auth::compute_auth_statuses;
use codex_core::protocol::McpAuthStatus;
use codex_rmcp_client::delete_oauth_tokens;
use codex_rmcp_client::perform_oauth_login;
use codex_rmcp_client::supports_oauth_login;

/// [experimental] Launch Codex as an MCP server or manage configured MCP servers.
///
@@ -43,6 +51,14 @@ pub enum McpSubcommand {

/// [experimental] Remove a global MCP server entry.
Remove(RemoveArgs),

/// [experimental] Authenticate with a configured MCP server via OAuth.
/// Requires experimental_use_rmcp_client = true in config.toml.
Login(LoginArgs),

/// [experimental] Remove stored OAuth credentials for a server.
/// Requires experimental_use_rmcp_client = true in config.toml.
Logout(LogoutArgs),
}

#[derive(Debug, clap::Parser)]
@@ -67,13 +83,61 @@ pub struct AddArgs {
/// Name for the MCP server configuration.
pub name: String,

/// Environment variables to set when launching the server.
#[arg(long, value_parser = parse_env_pair, value_name = "KEY=VALUE")]
pub env: Vec<(String, String)>,
#[command(flatten)]
pub transport_args: AddMcpTransportArgs,
}

#[derive(Debug, clap::Args)]
#[command(
group(
ArgGroup::new("transport")
.args(["command", "url"])
.required(true)
.multiple(false)
)
)]
pub struct AddMcpTransportArgs {
#[command(flatten)]
pub stdio: Option<AddMcpStdioArgs>,

#[command(flatten)]
pub streamable_http: Option<AddMcpStreamableHttpArgs>,
}

#[derive(Debug, clap::Args)]
pub struct AddMcpStdioArgs {
/// Command to launch the MCP server.
#[arg(trailing_var_arg = true, num_args = 1..)]
/// Use --url for a streamable HTTP server.
#[arg(
trailing_var_arg = true,
num_args = 0..,
)]
pub command: Vec<String>,

/// Environment variables to set when launching the server.
/// Only valid with stdio servers.
#[arg(
long,
value_parser = parse_env_pair,
value_name = "KEY=VALUE",
)]
pub env: Vec<(String, String)>,
}

#[derive(Debug, clap::Args)]
pub struct AddMcpStreamableHttpArgs {
/// URL for a streamable HTTP MCP server.
#[arg(long)]
pub url: String,

/// Optional environment variable to read for a bearer token.
/// Only valid with streamable HTTP servers.
#[arg(
long = "bearer-token-env-var",
value_name = "ENV_VAR",
requires = "url"
)]
pub bearer_token_env_var: Option<String>,
}

#[derive(Debug, clap::Parser)]
@@ -82,6 +146,18 @@ pub struct RemoveArgs {
pub name: String,
}

#[derive(Debug, clap::Parser)]
pub struct LoginArgs {
/// Name of the MCP server to authenticate with oauth.
pub name: String,
}

#[derive(Debug, clap::Parser)]
pub struct LogoutArgs {
/// Name of the MCP server to deauthenticate.
pub name: String,
}

impl McpCli {
pub async fn run(self) -> Result<()> {
let McpCli {
@@ -91,16 +167,22 @@ impl McpCli {

match subcommand {
McpSubcommand::List(args) => {
run_list(&config_overrides, args)?;
run_list(&config_overrides, args).await?;
}
McpSubcommand::Get(args) => {
run_get(&config_overrides, args)?;
run_get(&config_overrides, args).await?;
}
McpSubcommand::Add(args) => {
run_add(&config_overrides, args)?;
run_add(&config_overrides, args).await?;
}
McpSubcommand::Remove(args) => {
run_remove(&config_overrides, args)?;
run_remove(&config_overrides, args).await?;
}
McpSubcommand::Login(args) => {
run_login(&config_overrides, args).await?;
}
McpSubcommand::Logout(args) => {
run_logout(&config_overrides, args).await?;
}
}

@@ -108,40 +190,67 @@
}
}

fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<()> {
async fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<()> {
// Validate any provided overrides even though they are not currently applied.
config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;

let AddArgs { name, env, command } = add_args;
let AddArgs {
name,
transport_args,
} = add_args;

validate_server_name(&name)?;

let mut command_parts = command.into_iter();
let command_bin = command_parts
.next()
.ok_or_else(|| anyhow!("command is required"))?;
let command_args: Vec<String> = command_parts.collect();

let env_map = if env.is_empty() {
None
} else {
let mut map = HashMap::new();
for (key, value) in env {
map.insert(key, value);
}
Some(map)
};

let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let mut servers = load_global_mcp_servers(&codex_home)
.await
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;

let new_entry = McpServerConfig {
transport: McpServerTransportConfig::Stdio {
command: command_bin,
args: command_args,
env: env_map,
let transport = match transport_args {
AddMcpTransportArgs {
stdio: Some(stdio), ..
} => {
let mut command_parts = stdio.command.into_iter();
let command_bin = command_parts
.next()
.ok_or_else(|| anyhow!("command is required"))?;
let command_args: Vec<String> = command_parts.collect();

let env_map = if stdio.env.is_empty() {
None
} else {
Some(stdio.env.into_iter().collect::<HashMap<_, _>>())
};
McpServerTransportConfig::Stdio {
command: command_bin,
args: command_args,
env: env_map,
env_vars: Vec::new(),
cwd: None,
}
}
AddMcpTransportArgs {
streamable_http:
Some(AddMcpStreamableHttpArgs {
url,
bearer_token_env_var,
}),
..
} => McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
http_headers: None,
env_http_headers: None,
},
AddMcpTransportArgs { .. } => bail!("exactly one of --command or --url must be provided"),
};

let new_entry = McpServerConfig {
transport: transport.clone(),
enabled: true,
startup_timeout_sec: None,
tool_timeout_sec: None,
};
@@ -153,10 +262,30 @@ fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<(

println!("Added global MCP server '{name}'.");

if let McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var: None,
http_headers,
env_http_headers,
} = transport
&& matches!(supports_oauth_login(&url).await, Ok(true))
{
println!("Detected OAuth support. Starting OAuth flow…");
perform_oauth_login(
&name,
&url,
config.mcp_oauth_credentials_store_mode,
http_headers.clone(),
env_http_headers.clone(),
)
.await?;
println!("Successfully logged in.");
}

Ok(())
}

fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) -> Result<()> {
async fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) -> Result<()> {
config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;

let RemoveArgs { name } = remove_args;
@@ -165,6 +294,7 @@ fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) ->

let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let mut servers = load_global_mcp_servers(&codex_home)
.await
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;

let removed = servers.remove(&name).is_some();
@@ -183,36 +313,129 @@ fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) ->
Ok(())
}

fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Result<()> {
async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;

if !config.features.enabled(Feature::RmcpClient) {
bail!(
"OAuth login is only supported when experimental_use_rmcp_client is true in config.toml."
);
}

let LoginArgs { name } = login_args;

let Some(server) = config.mcp_servers.get(&name) else {
bail!("No MCP server named '{name}' found.");
};

let (url, http_headers, env_http_headers) = match &server.transport {
McpServerTransportConfig::StreamableHttp {
url,
http_headers,
env_http_headers,
..
} => (url.clone(), http_headers.clone(), env_http_headers.clone()),
_ => bail!("OAuth login is only supported for streamable HTTP servers."),
};

perform_oauth_login(
&name,
&url,
config.mcp_oauth_credentials_store_mode,
http_headers,
env_http_headers,
)
.await?;
println!("Successfully logged in to MCP server '{name}'.");
Ok(())
}

async fn run_logout(config_overrides: &CliConfigOverrides, logout_args: LogoutArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;

let LogoutArgs { name } = logout_args;

let server = config
.mcp_servers
.get(&name)
.ok_or_else(|| anyhow!("No MCP server named '{name}' found in configuration."))?;

let url = match &server.transport {
McpServerTransportConfig::StreamableHttp { url, .. } => url.clone(),
_ => bail!("OAuth logout is only supported for streamable_http transports."),
};

match delete_oauth_tokens(&name, &url, config.mcp_oauth_credentials_store_mode) {
Ok(true) => println!("Removed OAuth credentials for '{name}'."),
Ok(false) => println!("No OAuth credentials stored for '{name}'."),
Err(err) => return Err(anyhow!("failed to delete OAuth credentials: {err}")),
}

Ok(())
}

async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Result<()> {
    let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
    let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
        .await
        .context("failed to load configuration")?;

    let mut entries: Vec<_> = config.mcp_servers.iter().collect();
    entries.sort_by(|(a, _), (b, _)| a.cmp(b));
    let auth_statuses = compute_auth_statuses(
        config.mcp_servers.iter(),
        config.mcp_oauth_credentials_store_mode,
    )
    .await;

    if list_args.json {
        let json_entries: Vec<_> = entries
            .into_iter()
            .map(|(name, cfg)| {
                let auth_status = auth_statuses
                    .get(name.as_str())
                    .copied()
                    .unwrap_or(McpAuthStatus::Unsupported);
                let transport = match &cfg.transport {
                    McpServerTransportConfig::Stdio { command, args, env } => serde_json::json!({
                    McpServerTransportConfig::Stdio {
                        command,
                        args,
                        env,
                        env_vars,
                        cwd,
                    } => serde_json::json!({
                        "type": "stdio",
                        "command": command,
                        "args": args,
                        "env": env,
                        "env_vars": env_vars,
                        "cwd": cwd,
                    }),
                    McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
                    McpServerTransportConfig::StreamableHttp {
                        url,
                        bearer_token_env_var,
                        http_headers,
                        env_http_headers,
                    } => {
                        serde_json::json!({
                            "type": "streamable_http",
                            "url": url,
                            "bearer_token": bearer_token,
                            "bearer_token_env_var": bearer_token_env_var,
                            "http_headers": http_headers,
                            "env_http_headers": env_http_headers,
                        })
                    }
                };

                serde_json::json!({
                    "name": name,
                    "enabled": cfg.enabled,
                    "transport": transport,
                    "startup_timeout_sec": cfg
                        .startup_timeout_sec
@@ -220,6 +443,7 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
                    "tool_timeout_sec": cfg
                        .tool_timeout_sec
                        .map(|timeout| timeout.as_secs_f64()),
                    "auth_status": auth_status,
                })
            })
            .collect();
@@ -233,45 +457,85 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
        return Ok(());
    }

    let mut stdio_rows: Vec<[String; 4]> = Vec::new();
    let mut http_rows: Vec<[String; 3]> = Vec::new();
    let mut stdio_rows: Vec<[String; 7]> = Vec::new();
    let mut http_rows: Vec<[String; 5]> = Vec::new();

    for (name, cfg) in entries {
        match &cfg.transport {
            McpServerTransportConfig::Stdio { command, args, env } => {
            McpServerTransportConfig::Stdio {
                command,
                args,
                env,
                env_vars,
                cwd,
            } => {
                let args_display = if args.is_empty() {
                    "-".to_string()
                } else {
                    args.join(" ")
                };
                let env_display = match env.as_ref() {
                    None => "-".to_string(),
                    Some(map) if map.is_empty() => "-".to_string(),
                    Some(map) => {
                        let mut pairs: Vec<_> = map.iter().collect();
                        pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
                        pairs
                            .into_iter()
                            .map(|(k, v)| format!("{k}={v}"))
                            .collect::<Vec<_>>()
                            .join(", ")
                    }
                };
                stdio_rows.push([name.clone(), command.clone(), args_display, env_display]);
            }
            McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
                let has_bearer = if bearer_token.is_some() {
                    "True"
                let env_display = format_env_display(env.as_ref(), env_vars);
                let cwd_display = cwd
                    .as_ref()
                    .map(|path| path.display().to_string())
                    .filter(|value| !value.is_empty())
                    .unwrap_or_else(|| "-".to_string());
                let status = if cfg.enabled {
                    "enabled".to_string()
                } else {
                    "False"
                    "disabled".to_string()
                };
                http_rows.push([name.clone(), url.clone(), has_bearer.into()]);
                let auth_status = auth_statuses
                    .get(name.as_str())
                    .copied()
                    .unwrap_or(McpAuthStatus::Unsupported)
                    .to_string();
                stdio_rows.push([
                    name.clone(),
                    command.clone(),
                    args_display,
                    env_display,
                    cwd_display,
                    status,
                    auth_status,
                ]);
            }
            McpServerTransportConfig::StreamableHttp {
                url,
                bearer_token_env_var,
                ..
            } => {
                let status = if cfg.enabled {
                    "enabled".to_string()
                } else {
                    "disabled".to_string()
                };
                let auth_status = auth_statuses
                    .get(name.as_str())
                    .copied()
                    .unwrap_or(McpAuthStatus::Unsupported)
                    .to_string();
                http_rows.push([
                    name.clone(),
                    url.clone(),
                    bearer_token_env_var.clone().unwrap_or("-".to_string()),
                    status,
                    auth_status,
                ]);
            }
        }
    }

    if !stdio_rows.is_empty() {
        let mut widths = ["Name".len(), "Command".len(), "Args".len(), "Env".len()];
        let mut widths = [
            "Name".len(),
            "Command".len(),
            "Args".len(),
            "Env".len(),
            "Cwd".len(),
            "Status".len(),
            "Auth".len(),
        ];
        for row in &stdio_rows {
            for (i, cell) in row.iter().enumerate() {
                widths[i] = widths[i].max(cell.len());
@@ -279,28 +543,40 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
        }

        println!(
            "{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
            "Name",
            "Command",
            "Args",
            "Env",
            "{name:<name_w$} {command:<cmd_w$} {args:<args_w$} {env:<env_w$} {cwd:<cwd_w$} {status:<status_w$} {auth:<auth_w$}",
            name = "Name",
            command = "Command",
            args = "Args",
            env = "Env",
            cwd = "Cwd",
            status = "Status",
            auth = "Auth",
            name_w = widths[0],
            cmd_w = widths[1],
            args_w = widths[2],
            env_w = widths[3],
            cwd_w = widths[4],
            status_w = widths[5],
            auth_w = widths[6],
        );

        for row in &stdio_rows {
            println!(
                "{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
                row[0],
                row[1],
                row[2],
                row[3],
                "{name:<name_w$} {command:<cmd_w$} {args:<args_w$} {env:<env_w$} {cwd:<cwd_w$} {status:<status_w$} {auth:<auth_w$}",
                name = row[0].as_str(),
                command = row[1].as_str(),
                args = row[2].as_str(),
                env = row[3].as_str(),
                cwd = row[4].as_str(),
                status = row[5].as_str(),
                auth = row[6].as_str(),
                name_w = widths[0],
                cmd_w = widths[1],
                args_w = widths[2],
                env_w = widths[3],
                cwd_w = widths[4],
                status_w = widths[5],
                auth_w = widths[6],
            );
        }
    }
@@ -310,7 +586,13 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
    }

    if !http_rows.is_empty() {
        let mut widths = ["Name".len(), "Url".len(), "Has Bearer Token".len()];
        let mut widths = [
            "Name".len(),
            "Url".len(),
            "Bearer Token Env Var".len(),
            "Status".len(),
            "Auth".len(),
        ];
        for row in &http_rows {
            for (i, cell) in row.iter().enumerate() {
                widths[i] = widths[i].max(cell.len());
@@ -318,24 +600,32 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
        }

        println!(
            "{:<name_w$} {:<url_w$} {:<token_w$}",
            "Name",
            "Url",
            "Has Bearer Token",
            "{name:<name_w$} {url:<url_w$} {token:<token_w$} {status:<status_w$} {auth:<auth_w$}",
            name = "Name",
            url = "Url",
            token = "Bearer Token Env Var",
            status = "Status",
            auth = "Auth",
            name_w = widths[0],
            url_w = widths[1],
            token_w = widths[2],
            status_w = widths[3],
            auth_w = widths[4],
        );

        for row in &http_rows {
            println!(
                "{:<name_w$} {:<url_w$} {:<token_w$}",
                row[0],
                row[1],
                row[2],
                "{name:<name_w$} {url:<url_w$} {token:<token_w$} {status:<status_w$} {auth:<auth_w$}",
                name = row[0].as_str(),
                url = row[1].as_str(),
                token = row[2].as_str(),
                status = row[3].as_str(),
                auth = row[4].as_str(),
                name_w = widths[0],
                url_w = widths[1],
                token_w = widths[2],
                status_w = widths[3],
                auth_w = widths[4],
            );
        }
    }
@@ -343,9 +633,10 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
    Ok(())
}

fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<()> {
async fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<()> {
    let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
    let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
        .await
        .context("failed to load configuration")?;

    let Some(server) = config.mcp_servers.get(&get_args.name) else {
@@ -354,20 +645,36 @@ fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<(

    if get_args.json {
        let transport = match &server.transport {
            McpServerTransportConfig::Stdio { command, args, env } => serde_json::json!({
            McpServerTransportConfig::Stdio {
                command,
                args,
                env,
                env_vars,
                cwd,
            } => serde_json::json!({
                "type": "stdio",
                "command": command,
                "args": args,
                "env": env,
                "env_vars": env_vars,
                "cwd": cwd,
            }),
            McpServerTransportConfig::StreamableHttp { url, bearer_token } => serde_json::json!({
            McpServerTransportConfig::StreamableHttp {
                url,
                bearer_token_env_var,
                http_headers,
                env_http_headers,
            } => serde_json::json!({
                "type": "streamable_http",
                "url": url,
                "bearer_token": bearer_token,
                "bearer_token_env_var": bearer_token_env_var,
                "http_headers": http_headers,
                "env_http_headers": env_http_headers,
            }),
        };
        let output = serde_json::to_string_pretty(&serde_json::json!({
            "name": get_args.name,
            "enabled": server.enabled,
            "transport": transport,
            "startup_timeout_sec": server
                .startup_timeout_sec
@@ -381,8 +688,15 @@ fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<(
    }

    println!("{}", get_args.name);
    println!(" enabled: {}", server.enabled);
    match &server.transport {
        McpServerTransportConfig::Stdio { command, args, env } => {
        McpServerTransportConfig::Stdio {
            command,
            args,
            env,
            env_vars,
            cwd,
        } => {
            println!(" transport: stdio");
            println!(" command: {command}");
            let args_display = if args.is_empty() {
@@ -391,10 +705,27 @@ fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<(
                args.join(" ")
            };
            println!(" args: {args_display}");
            let env_display = match env.as_ref() {
                None => "-".to_string(),
                Some(map) if map.is_empty() => "-".to_string(),
                Some(map) => {
            let cwd_display = cwd
                .as_ref()
                .map(|path| path.display().to_string())
                .filter(|value| !value.is_empty())
                .unwrap_or_else(|| "-".to_string());
            println!(" cwd: {cwd_display}");
            let env_display = format_env_display(env.as_ref(), env_vars);
            println!(" env: {env_display}");
        }
        McpServerTransportConfig::StreamableHttp {
            url,
            bearer_token_env_var,
            http_headers,
            env_http_headers,
        } => {
            println!(" transport: streamable_http");
            println!(" url: {url}");
            let env_var = bearer_token_env_var.as_deref().unwrap_or("-");
            println!(" bearer_token_env_var: {env_var}");
            let headers_display = match http_headers {
                Some(map) if !map.is_empty() => {
                    let mut pairs: Vec<_> = map.iter().collect();
                    pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
                    pairs
@@ -403,14 +734,22 @@ fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<(
                        .collect::<Vec<_>>()
                        .join(", ")
                }
                _ => "-".to_string(),
            };
            println!(" env: {env_display}");
        }
        McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
            println!(" transport: streamable_http");
            println!(" url: {url}");
            let bearer = bearer_token.as_deref().unwrap_or("-");
            println!(" bearer_token: {bearer}");
            println!(" http_headers: {headers_display}");
            let env_headers_display = match env_http_headers {
                Some(map) if !map.is_empty() => {
                    let mut pairs: Vec<_> = map.iter().collect();
                    pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
                    pairs
                        .into_iter()
                        .map(|(k, v)| format!("{k}={v}"))
                        .collect::<Vec<_>>()
                        .join(", ")
                }
                _ => "-".to_string(),
            };
            println!(" env_http_headers: {env_headers_display}");
        }
    }
    if let Some(timeout) = server.startup_timeout_sec {

@@ -1,133 +0,0 @@
use std::io::IsTerminal;

use clap::Parser;
use codex_common::CliConfigOverrides;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Submission;
use tokio::io::AsyncBufReadExt;
use tokio::io::BufReader;
use tracing::error;
use tracing::info;

#[derive(Debug, Parser)]
pub struct ProtoCli {
    #[clap(skip)]
    pub config_overrides: CliConfigOverrides,
}

pub async fn run_main(opts: ProtoCli) -> anyhow::Result<()> {
    if std::io::stdin().is_terminal() {
        anyhow::bail!("Protocol mode expects stdin to be a pipe, not a terminal");
    }

    tracing_subscriber::fmt()
        .with_writer(std::io::stderr)
        .init();

    let ProtoCli { config_overrides } = opts;
    let overrides_vec = config_overrides
        .parse_overrides()
        .map_err(anyhow::Error::msg)?;

    let config = Config::load_with_cli_overrides(overrides_vec, ConfigOverrides::default())?;
    // Use conversation_manager API to start a conversation
    let conversation_manager =
        ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
    let NewConversation {
        conversation_id: _,
        conversation,
        session_configured,
    } = conversation_manager.new_conversation(config).await?;

    // Simulate streaming the session_configured event.
    let synthetic_event = Event {
        // Fake id value.
        id: "".to_string(),
        msg: EventMsg::SessionConfigured(session_configured),
    };
    let session_configured_event = match serde_json::to_string(&synthetic_event) {
        Ok(s) => s,
        Err(e) => {
            error!("Failed to serialize session_configured: {e}");
            return Err(anyhow::Error::from(e));
        }
    };
    println!("{session_configured_event}");

    // Task that reads JSON lines from stdin and forwards to Submission Queue
    let sq_fut = {
        let conversation = conversation.clone();
        async move {
            let stdin = BufReader::new(tokio::io::stdin());
            let mut lines = stdin.lines();
            loop {
                let result = tokio::select! {
                    _ = tokio::signal::ctrl_c() => {
                        break
                    },
                    res = lines.next_line() => res,
                };

                match result {
                    Ok(Some(line)) => {
                        let line = line.trim();
                        if line.is_empty() {
                            continue;
                        }
                        match serde_json::from_str::<Submission>(line) {
                            Ok(sub) => {
                                if let Err(e) = conversation.submit_with_id(sub).await {
                                    error!("{e:#}");
                                    break;
                                }
                            }
                            Err(e) => {
                                error!("invalid submission: {e}");
                            }
                        }
                    }
                    _ => {
                        info!("Submission queue closed");
                        break;
                    }
                }
            }
        }
    };

    // Task that reads events from the agent and prints them as JSON lines to stdout
    let eq_fut = async move {
        loop {
            let event = tokio::select! {
                _ = tokio::signal::ctrl_c() => break,
                event = conversation.next_event() => event,
            };
            match event {
                Ok(event) => {
                    let event_str = match serde_json::to_string(&event) {
                        Ok(s) => s,
                        Err(e) => {
                            error!("Failed to serialize event: {e}");
                            continue;
                        }
                    };
                    println!("{event_str}");
                }
                Err(e) => {
                    error!("{e:#}");
                    break;
                }
            }
        }
        info!("Event queue closed");
    };

    tokio::join!(sq_fut, eq_fut);
    Ok(())
}
@@ -13,8 +13,8 @@ fn codex_command(codex_home: &Path) -> Result<assert_cmd::Command> {
    Ok(cmd)
}

#[test]
fn add_and_remove_server_updates_global_config() -> Result<()> {
#[tokio::test]
async fn add_and_remove_server_updates_global_config() -> Result<()> {
    let codex_home = TempDir::new()?;

    let mut add_cmd = codex_command(codex_home.path())?;
@@ -24,17 +24,26 @@ fn add_and_remove_server_updates_global_config() -> Result<()> {
        .success()
        .stdout(contains("Added global MCP server 'docs'."));

    let servers = load_global_mcp_servers(codex_home.path())?;
    let servers = load_global_mcp_servers(codex_home.path()).await?;
    assert_eq!(servers.len(), 1);
    let docs = servers.get("docs").expect("server should exist");
    match &docs.transport {
        McpServerTransportConfig::Stdio { command, args, env } => {
        McpServerTransportConfig::Stdio {
            command,
            args,
            env,
            env_vars,
            cwd,
        } => {
            assert_eq!(command, "echo");
            assert_eq!(args, &vec!["hello".to_string()]);
            assert!(env.is_none());
            assert!(env_vars.is_empty());
            assert!(cwd.is_none());
        }
        other => panic!("unexpected transport: {other:?}"),
    }
    assert!(docs.enabled);

    let mut remove_cmd = codex_command(codex_home.path())?;
    remove_cmd
@@ -43,7 +52,7 @@ fn add_and_remove_server_updates_global_config() -> Result<()> {
        .success()
        .stdout(contains("Removed global MCP server 'docs'."));

    let servers = load_global_mcp_servers(codex_home.path())?;
    let servers = load_global_mcp_servers(codex_home.path()).await?;
    assert!(servers.is_empty());

    let mut remove_again_cmd = codex_command(codex_home.path())?;
@@ -53,14 +62,14 @@ fn add_and_remove_server_updates_global_config() -> Result<()> {
        .success()
        .stdout(contains("No MCP server named 'docs' found."));

    let servers = load_global_mcp_servers(codex_home.path())?;
    let servers = load_global_mcp_servers(codex_home.path()).await?;
    assert!(servers.is_empty());

    Ok(())
}

#[test]
fn add_with_env_preserves_key_order_and_values() -> Result<()> {
#[tokio::test]
async fn add_with_env_preserves_key_order_and_values() -> Result<()> {
    let codex_home = TempDir::new()?;

    let mut add_cmd = codex_command(codex_home.path())?;
@@ -80,7 +89,7 @@ fn add_with_env_preserves_key_order_and_values() -> Result<()> {
        .assert()
        .success();

    let servers = load_global_mcp_servers(codex_home.path())?;
    let servers = load_global_mcp_servers(codex_home.path()).await?;
    let envy = servers.get("envy").expect("server should exist");
    let env = match &envy.transport {
        McpServerTransportConfig::Stdio { env: Some(env), .. } => env,
@@ -90,6 +99,130 @@ fn add_with_env_preserves_key_order_and_values() -> Result<()> {
    assert_eq!(env.len(), 2);
    assert_eq!(env.get("FOO"), Some(&"bar".to_string()));
    assert_eq!(env.get("ALPHA"), Some(&"beta".to_string()));
    assert!(envy.enabled);

    Ok(())
}

#[tokio::test]
async fn add_streamable_http_without_manual_token() -> Result<()> {
    let codex_home = TempDir::new()?;

    let mut add_cmd = codex_command(codex_home.path())?;
    add_cmd
        .args(["mcp", "add", "github", "--url", "https://example.com/mcp"])
        .assert()
        .success();

    let servers = load_global_mcp_servers(codex_home.path()).await?;
    let github = servers.get("github").expect("github server should exist");
    match &github.transport {
        McpServerTransportConfig::StreamableHttp {
            url,
            bearer_token_env_var,
            http_headers,
            env_http_headers,
        } => {
            assert_eq!(url, "https://example.com/mcp");
            assert!(bearer_token_env_var.is_none());
            assert!(http_headers.is_none());
            assert!(env_http_headers.is_none());
        }
        other => panic!("unexpected transport: {other:?}"),
    }
    assert!(github.enabled);

    assert!(!codex_home.path().join(".credentials.json").exists());
    assert!(!codex_home.path().join(".env").exists());

    Ok(())
}

#[tokio::test]
async fn add_streamable_http_with_custom_env_var() -> Result<()> {
    let codex_home = TempDir::new()?;

    let mut add_cmd = codex_command(codex_home.path())?;
    add_cmd
        .args([
            "mcp",
            "add",
            "issues",
            "--url",
            "https://example.com/issues",
            "--bearer-token-env-var",
            "GITHUB_TOKEN",
        ])
        .assert()
        .success();

    let servers = load_global_mcp_servers(codex_home.path()).await?;
    let issues = servers.get("issues").expect("issues server should exist");
    match &issues.transport {
        McpServerTransportConfig::StreamableHttp {
            url,
            bearer_token_env_var,
            http_headers,
            env_http_headers,
        } => {
            assert_eq!(url, "https://example.com/issues");
            assert_eq!(bearer_token_env_var.as_deref(), Some("GITHUB_TOKEN"));
            assert!(http_headers.is_none());
            assert!(env_http_headers.is_none());
        }
        other => panic!("unexpected transport: {other:?}"),
    }
    assert!(issues.enabled);
    Ok(())
}

#[tokio::test]
async fn add_streamable_http_rejects_removed_flag() -> Result<()> {
    let codex_home = TempDir::new()?;

    let mut add_cmd = codex_command(codex_home.path())?;
    add_cmd
        .args([
            "mcp",
            "add",
            "github",
            "--url",
            "https://example.com/mcp",
            "--with-bearer-token",
        ])
        .assert()
        .failure()
        .stderr(contains("--with-bearer-token"));

    let servers = load_global_mcp_servers(codex_home.path()).await?;
    assert!(servers.is_empty());

    Ok(())
}

#[tokio::test]
async fn add_cant_add_command_and_url() -> Result<()> {
    let codex_home = TempDir::new()?;

    let mut add_cmd = codex_command(codex_home.path())?;
    add_cmd
        .args([
            "mcp",
            "add",
            "github",
            "--url",
            "https://example.com/mcp",
            "--command",
            "--",
            "echo",
            "hello",
        ])
        .assert()
        .failure()
        .stderr(contains("unexpected argument '--command' found"));

    let servers = load_global_mcp_servers(codex_home.path()).await?;
    assert!(servers.is_empty());

    Ok(())
}

@@ -1,6 +1,10 @@
use std::path::Path;

use anyhow::Result;
use codex_core::config::load_global_mcp_servers;
use codex_core::config::write_global_mcp_servers;
use codex_core::config_types::McpServerTransportConfig;
use predicates::prelude::PredicateBooleanExt;
use predicates::str::contains;
use pretty_assertions::assert_eq;
use serde_json::Value as JsonValue;
@@ -26,8 +30,8 @@ fn list_shows_empty_state() -> Result<()> {
    Ok(())
}

#[test]
fn list_and_get_render_expected_output() -> Result<()> {
#[tokio::test]
async fn list_and_get_render_expected_output() -> Result<()> {
    let codex_home = TempDir::new()?;

    let mut add = codex_command(codex_home.path())?;
@@ -45,6 +49,18 @@ fn list_and_get_render_expected_output() -> Result<()> {
        .assert()
        .success();

    let mut servers = load_global_mcp_servers(codex_home.path()).await?;
    let docs_entry = servers
        .get_mut("docs")
        .expect("docs server should exist after add");
    match &mut docs_entry.transport {
        McpServerTransportConfig::Stdio { env_vars, .. } => {
            *env_vars = vec!["APP_TOKEN".to_string(), "WORKSPACE_ID".to_string()];
        }
        other => panic!("unexpected transport: {other:?}"),
    }
    write_global_mcp_servers(codex_home.path(), &servers)?;

    let mut list_cmd = codex_command(codex_home.path())?;
    let list_output = list_cmd.args(["mcp", "list"]).output()?;
    assert!(list_output.status.success());
@@ -53,6 +69,12 @@ fn list_and_get_render_expected_output() -> Result<()> {
    assert!(stdout.contains("docs"));
    assert!(stdout.contains("docs-server"));
    assert!(stdout.contains("TOKEN=secret"));
    assert!(stdout.contains("APP_TOKEN=$APP_TOKEN"));
    assert!(stdout.contains("WORKSPACE_ID=$WORKSPACE_ID"));
    assert!(stdout.contains("Status"));
    assert!(stdout.contains("Auth"));
    assert!(stdout.contains("enabled"));
    assert!(stdout.contains("Unsupported"));

    let mut list_json_cmd = codex_command(codex_home.path())?;
    let json_output = list_json_cmd.args(["mcp", "list", "--json"]).output()?;
@@ -64,6 +86,7 @@ fn list_and_get_render_expected_output() -> Result<()> {
        json!([
            {
                "name": "docs",
                "enabled": true,
                "transport": {
                    "type": "stdio",
                    "command": "docs-server",
@@ -73,10 +96,16 @@ fn list_and_get_render_expected_output() -> Result<()> {
                    ],
                    "env": {
                        "TOKEN": "secret"
                    }
                },
                    "env_vars": [
                        "APP_TOKEN",
                        "WORKSPACE_ID"
                    ],
                    "cwd": null
                },
                "startup_timeout_sec": null,
                "tool_timeout_sec": null
                "tool_timeout_sec": null,
                "auth_status": "unsupported"
            }
        ]
    )
@@ -91,6 +120,9 @@ fn list_and_get_render_expected_output() -> Result<()> {
    assert!(stdout.contains("command: docs-server"));
    assert!(stdout.contains("args: --port 4000"));
    assert!(stdout.contains("env: TOKEN=secret"));
    assert!(stdout.contains("APP_TOKEN=$APP_TOKEN"));
    assert!(stdout.contains("WORKSPACE_ID=$WORKSPACE_ID"));
    assert!(stdout.contains("enabled: true"));
    assert!(stdout.contains("remove: codex mcp remove docs"));

    let mut get_json_cmd = codex_command(codex_home.path())?;
@@ -98,7 +130,7 @@ fn list_and_get_render_expected_output() -> Result<()> {
        .args(["mcp", "get", "docs", "--json"])
        .assert()
        .success()
        .stdout(contains("\"name\": \"docs\""));
        .stdout(contains("\"name\": \"docs\"").and(contains("\"enabled\": true")));

    Ok(())
}

|
||||
[package]
|
||||
edition = "2024"
|
||||
name = "codex-cloud-tasks"
|
||||
version = { workspace = true }
|
||||
edition = "2024"
|
||||
|
||||
[lib]
|
||||
name = "codex_cloud_tasks"
|
||||
@@ -11,26 +11,28 @@ path = "src/lib.rs"
|
||||
workspace = true
|
||||
|
||||
[dependencies]
|
||||
anyhow = "1"
|
||||
clap = { version = "4", features = ["derive"] }
|
||||
anyhow = { workspace = true }
|
||||
base64 = { workspace = true }
|
||||
chrono = { workspace = true, features = ["serde"] }
|
||||
clap = { workspace = true, features = ["derive"] }
|
||||
codex-cloud-tasks-client = { path = "../cloud-tasks-client", features = [
|
||||
"mock",
|
||||
"online",
|
||||
] }
|
||||
codex-common = { path = "../common", features = ["cli"] }
|
||||
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
|
||||
tracing = { version = "0.1.41", features = ["log"] }
|
||||
tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
|
||||
codex-cloud-tasks-client = { path = "../cloud-tasks-client", features = ["mock", "online"] }
|
||||
ratatui = { version = "0.29.0" }
|
||||
crossterm = { version = "0.28.1", features = ["event-stream"] }
|
||||
tokio-stream = "0.1.17"
|
||||
chrono = { version = "0.4", features = ["serde"] }
|
||||
codex-login = { path = "../login" }
|
||||
codex-core = { path = "../core" }
|
||||
throbber-widgets-tui = "0.8.0"
|
||||
base64 = "0.22"
|
||||
serde_json = "1"
|
||||
reqwest = { version = "0.12", features = ["json"] }
|
||||
serde = { version = "1", features = ["derive"] }
|
||||
unicode-width = "0.1"
|
||||
codex-login = { path = "../login" }
|
||||
codex-tui = { path = "../tui" }
|
||||
crossterm = { workspace = true, features = ["event-stream"] }
|
||||
ratatui = { workspace = true }
|
||||
reqwest = { workspace = true, features = ["json"] }
|
||||
serde = { workspace = true, features = ["derive"] }
|
||||
serde_json = { workspace = true }
|
||||
tokio = { workspace = true, features = ["macros", "rt-multi-thread"] }
|
||||
tokio-stream = { workspace = true }
|
||||
tracing = { workspace = true, features = ["log"] }
|
||||
tracing-subscriber = { workspace = true, features = ["env-filter"] }
|
||||
unicode-width = { workspace = true }
|
||||
|
||||
[dev-dependencies]
|
||||
async-trait = "0.1"
|
||||
async-trait = { workspace = true }
|
||||
|
||||
@@ -1,4 +1,5 @@
|
||||
use std::time::Duration;
|
||||
use std::time::Instant;
|
||||
|
||||
// Environment filter data models for the TUI
|
||||
#[derive(Clone, Debug, Default)]
|
||||
@@ -42,15 +43,13 @@ use crate::scrollable_diff::ScrollableDiff;
|
||||
use codex_cloud_tasks_client::CloudBackend;
|
||||
use codex_cloud_tasks_client::TaskId;
|
||||
use codex_cloud_tasks_client::TaskSummary;
|
||||
use throbber_widgets_tui::ThrobberState;
|
||||
|
||||
#[derive(Default)]
|
||||
pub struct App {
|
||||
pub tasks: Vec<TaskSummary>,
|
||||
pub selected: usize,
|
||||
pub status: String,
|
||||
pub diff_overlay: Option<DiffOverlay>,
|
||||
pub throbber: ThrobberState,
|
||||
pub spinner_start: Option<Instant>,
|
||||
pub refresh_inflight: bool,
|
||||
pub details_inflight: bool,
|
||||
// Environment filter state
|
||||
@@ -82,7 +81,7 @@ impl App {
|
||||
selected: 0,
|
||||
status: "Press r to refresh".to_string(),
|
||||
diff_overlay: None,
|
||||
throbber: ThrobberState::default(),
|
||||
spinner_start: None,
|
||||
refresh_inflight: false,
|
||||
details_inflight: false,
|
||||
env_filter: None,
|
||||
|
||||
@@ -1,3 +1,4 @@
use clap::Args;
use clap::Parser;
use codex_common::CliConfigOverrides;

@@ -6,4 +7,43 @@ use codex_common::CliConfigOverrides;
pub struct Cli {
#[clap(skip)]
pub config_overrides: CliConfigOverrides,

#[command(subcommand)]
pub command: Option<Command>,
}

#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Submit a new Codex Cloud task without launching the TUI.
Exec(ExecCommand),
}

#[derive(Debug, Args)]
pub struct ExecCommand {
/// Task prompt to run in Codex Cloud.
#[arg(value_name = "QUERY")]
pub query: Option<String>,

/// Target environment identifier (see `codex cloud` to browse).
#[arg(long = "env", value_name = "ENV_ID")]
pub environment: String,

/// Number of assistant attempts (best-of-N).
#[arg(
long = "attempts",
default_value_t = 1usize,
value_parser = parse_attempts
)]
pub attempts: usize,
}

fn parse_attempts(input: &str) -> Result<usize, String> {
let value: usize = input
.parse()
.map_err(|_| "attempts must be an integer between 1 and 4".to_string())?;
if (1..=4).contains(&value) {
Ok(value)
} else {
Err("attempts must be between 1 and 4".to_string())
}
}
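The `--attempts` value parser above accepts only integers in the 1–4 range. A standalone, runnable sketch of the same validation (the function body is copied from the diff; only the `main` harness is added for illustration):

```rust
// Mirrors `parse_attempts` from cli.rs: accept only integers 1..=4.
fn parse_attempts(input: &str) -> Result<usize, String> {
    let value: usize = input
        .parse()
        .map_err(|_| "attempts must be an integer between 1 and 4".to_string())?;
    if (1..=4).contains(&value) {
        Ok(value)
    } else {
        Err("attempts must be between 1 and 4".to_string())
    }
}

fn main() {
    assert_eq!(parse_attempts("3"), Ok(3));
    assert!(parse_attempts("0").is_err()); // below range
    assert!(parse_attempts("5").is_err()); // above range
    assert!(parse_attempts("two").is_err()); // not an integer
}
```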

@@ -7,7 +7,9 @@ mod ui;
pub mod util;
pub use cli::Cli;

use anyhow::anyhow;
use std::io::IsTerminal;
use std::io::Read;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
@@ -23,6 +25,175 @@ struct ApplyJob {
diff_override: Option<String>,
}

struct BackendContext {
backend: Arc<dyn codex_cloud_tasks_client::CloudBackend>,
base_url: String,
}

async fn init_backend(user_agent_suffix: &str) -> anyhow::Result<BackendContext> {
let use_mock = matches!(
std::env::var("CODEX_CLOUD_TASKS_MODE").ok().as_deref(),
Some("mock") | Some("MOCK")
);
let base_url = std::env::var("CODEX_CLOUD_TASKS_BASE_URL")
.unwrap_or_else(|_| "https://chatgpt.com/backend-api".to_string());

set_user_agent_suffix(user_agent_suffix);

if use_mock {
return Ok(BackendContext {
backend: Arc::new(codex_cloud_tasks_client::MockClient),
base_url,
});
}

let ua = codex_core::default_client::get_codex_user_agent();
let mut http = codex_cloud_tasks_client::HttpClient::new(base_url.clone())?.with_user_agent(ua);
let style = if base_url.contains("/backend-api") {
"wham"
} else {
"codex-api"
};
append_error_log(format!("startup: base_url={base_url} path_style={style}"));

let auth = match codex_core::config::find_codex_home()
.ok()
.map(|home| codex_login::AuthManager::new(home, false))
.and_then(|am| am.auth())
{
Some(auth) => auth,
None => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
};

if let Some(acc) = auth.get_account_id() {
append_error_log(format!("auth: mode=ChatGPT account_id={acc}"));
}

let token = match auth.get_token().await {
Ok(t) if !t.is_empty() => t,
_ => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
};

http = http.with_bearer_token(token.clone());
if let Some(acc) = auth
.get_account_id()
.or_else(|| util::extract_chatgpt_account_id(&token))
{
append_error_log(format!("auth: set ChatGPT-Account-Id header: {acc}"));
http = http.with_chatgpt_account_id(acc);
}

Ok(BackendContext {
backend: Arc::new(http),
base_url,
})
}

async fn run_exec_command(args: crate::cli::ExecCommand) -> anyhow::Result<()> {
let crate::cli::ExecCommand {
query,
environment,
attempts,
} = args;
let ctx = init_backend("codex_cloud_tasks_exec").await?;
let prompt = resolve_query_input(query)?;
let env_id = resolve_environment_id(&ctx, &environment).await?;
let created = codex_cloud_tasks_client::CloudBackend::create_task(
&*ctx.backend,
&env_id,
&prompt,
"main",
false,
attempts,
)
.await?;
let url = util::task_url(&ctx.base_url, &created.id.0);
println!("{url}");
Ok(())
}

async fn resolve_environment_id(ctx: &BackendContext, requested: &str) -> anyhow::Result<String> {
let trimmed = requested.trim();
if trimmed.is_empty() {
return Err(anyhow!("environment id must not be empty"));
}
let normalized = util::normalize_base_url(&ctx.base_url);
let headers = util::build_chatgpt_headers().await;
let environments = crate::env_detect::list_environments(&normalized, &headers).await?;
if environments.is_empty() {
return Err(anyhow!(
"no cloud environments are available for this workspace"
));
}

if let Some(row) = environments.iter().find(|row| row.id == trimmed) {
return Ok(row.id.clone());
}

let label_matches = environments
.iter()
.filter(|row| {
row.label
.as_deref()
.map(|label| label.eq_ignore_ascii_case(trimmed))
.unwrap_or(false)
})
.collect::<Vec<_>>();
match label_matches.as_slice() {
[] => Err(anyhow!(
"environment '{trimmed}' not found; run `codex cloud` to list available environments"
)),
[single] => Ok(single.id.clone()),
[first, rest @ ..] => {
let first_id = &first.id;
if rest.iter().all(|row| row.id == *first_id) {
Ok(first_id.clone())
} else {
Err(anyhow!(
"environment label '{trimmed}' is ambiguous; run `codex cloud` to pick the desired environment id"
))
}
}
}
}

fn resolve_query_input(query_arg: Option<String>) -> anyhow::Result<String> {
match query_arg {
Some(q) if q != "-" => Ok(q),
maybe_dash => {
let force_stdin = matches!(maybe_dash.as_deref(), Some("-"));
if std::io::stdin().is_terminal() && !force_stdin {
return Err(anyhow!(
"no query provided. Pass one as an argument or pipe it via stdin."
));
}
if !force_stdin {
eprintln!("Reading query from stdin...");
}
let mut buffer = String::new();
std::io::stdin()
.read_to_string(&mut buffer)
.map_err(|e| anyhow!("failed to read query from stdin: {e}"))?;
if buffer.trim().is_empty() {
return Err(anyhow!(
"no query provided via stdin (received empty input)."
));
}
Ok(buffer)
}
}
}

fn level_from_status(status: codex_cloud_tasks_client::ApplyStatus) -> app::ApplyResultLevel {
match status {
codex_cloud_tasks_client::ApplyStatus::Success => app::ApplyResultLevel::Success,
@@ -148,7 +319,14 @@ fn spawn_apply(
// (no standalone patch summarizer needed – UI displays raw diffs)

/// Entry point for the `codex cloud` subcommand.
pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
if let Some(command) = cli.command {
return match command {
crate::cli::Command::Exec(args) => run_exec_command(args).await,
};
}
let Cli { .. } = cli;

// Very minimal logging setup; mirrors other crates' pattern.
let default_level = "error";
let _ = tracing_subscriber::fmt()
@@ -162,72 +340,8 @@ pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> a
.try_init();

info!("Launching Cloud Tasks list UI");
set_user_agent_suffix("codex_cloud_tasks_tui");

// Default to online unless explicitly configured to use mock.
let use_mock = matches!(
std::env::var("CODEX_CLOUD_TASKS_MODE").ok().as_deref(),
Some("mock") | Some("MOCK")
);

let backend: Arc<dyn codex_cloud_tasks_client::CloudBackend> = if use_mock {
Arc::new(codex_cloud_tasks_client::MockClient)
} else {
// Build an HTTP client against the configured (or default) base URL.
let base_url = std::env::var("CODEX_CLOUD_TASKS_BASE_URL")
.unwrap_or_else(|_| "https://chatgpt.com/backend-api".to_string());
let ua = codex_core::default_client::get_codex_user_agent();
let mut http =
codex_cloud_tasks_client::HttpClient::new(base_url.clone())?.with_user_agent(ua);
// Log which base URL and path style we're going to use.
let style = if base_url.contains("/backend-api") {
"wham"
} else {
"codex-api"
};
append_error_log(format!("startup: base_url={base_url} path_style={style}"));

// Require ChatGPT login (SWIC). Exit with a clear message if missing.
let _token = match codex_core::config::find_codex_home()
.ok()
.map(codex_login::AuthManager::new)
.and_then(|am| am.auth())
{
Some(auth) => {
// Log account context for debugging workspace selection.
if let Some(acc) = auth.get_account_id() {
append_error_log(format!("auth: mode=ChatGPT account_id={acc}"));
}
match auth.get_token().await {
Ok(t) if !t.is_empty() => {
// Attach token and ChatGPT-Account-Id header if available
http = http.with_bearer_token(t.clone());
if let Some(acc) = auth
.get_account_id()
.or_else(|| util::extract_chatgpt_account_id(&t))
{
append_error_log(format!("auth: set ChatGPT-Account-Id header: {acc}"));
http = http.with_chatgpt_account_id(acc);
}
t
}
_ => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
}
}
None => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
};
Arc::new(http)
};
let BackendContext { backend, .. } = init_backend("codex_cloud_tasks_tui").await?;
let backend = backend;

// Terminal setup
use crossterm::ExecutableCommand;
@@ -400,16 +514,20 @@ pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> a
let _ = frame_tx.send(Instant::now() + codex_tui::ComposerInput::recommended_flush_delay());
}
}
// Advance throbber only while loading.
// Keep spinner pulsing only while loading.
if app.refresh_inflight
|| app.details_inflight
|| app.env_loading
|| app.apply_preflight_inflight
|| app.apply_inflight
{
app.throbber.calc_next();
if app.spinner_start.is_none() {
app.spinner_start = Some(Instant::now());
}
needs_redraw = true;
let _ = frame_tx.send(Instant::now() + Duration::from_millis(100));
let _ = frame_tx.send(Instant::now() + Duration::from_millis(600));
} else {
app.spinner_start = None;
}
render_if_needed(&mut terminal, &mut app, &mut needs_redraw)?;
}
@@ -839,6 +957,9 @@ pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> a
&& matches!(key.code, KeyCode::Char('n') | KeyCode::Char('N'))
|| matches!(key.code, KeyCode::Char('\u{000E}'));
if is_ctrl_n {
if app.new_task.is_none() {
continue;
}
if app.best_of_modal.is_some() {
app.best_of_modal = None;
needs_redraw = true;

@@ -16,6 +16,7 @@ use ratatui::widgets::ListState;
use ratatui::widgets::Padding;
use ratatui::widgets::Paragraph;
use std::sync::OnceLock;
use std::time::Instant;

use crate::app::App;
use crate::app::AttemptView;
@@ -229,7 +230,7 @@ fn draw_list(frame: &mut Frame, area: Rect, app: &mut App) {

// In-box spinner during initial/refresh loads
if app.refresh_inflight {
draw_centered_spinner(frame, inner, &mut app.throbber, "Loading tasks…");
draw_centered_spinner(frame, inner, &mut app.spinner_start, "Loading tasks…");
}
}

@@ -262,9 +263,9 @@ fn draw_footer(frame: &mut Frame, area: Rect, app: &mut App) {
help.push(": Apply ".dim());
}
help.push("o : Set Env ".dim());
help.push("Ctrl+N".dim());
help.push(format!(": Attempts {}x ", app.best_of_n).dim());
if app.new_task.is_some() {
help.push("Ctrl+N".dim());
help.push(format!(": Attempts {}x ", app.best_of_n).dim());
help.push("(editing new task) ".dim());
} else {
help.push("n : New Task ".dim());
@@ -291,7 +292,7 @@ fn draw_footer(frame: &mut Frame, area: Rect, app: &mut App) {
|| app.apply_preflight_inflight
|| app.apply_inflight
{
draw_inline_spinner(frame, top[1], &mut app.throbber, "Loading…");
draw_inline_spinner(frame, top[1], &mut app.spinner_start, "Loading…");
} else {
frame.render_widget(Clear, top[1]);
}
@@ -449,7 +450,12 @@ fn draw_diff_overlay(frame: &mut Frame, area: Rect, app: &mut App) {
.map(|o| o.sd.wrapped_lines().is_empty())
.unwrap_or(true);
if app.details_inflight && raw_empty {
draw_centered_spinner(frame, content_area, &mut app.throbber, "Loading details…");
draw_centered_spinner(
frame,
content_area,
&mut app.spinner_start,
"Loading details…",
);
} else {
let scroll = app
.diff_overlay
@@ -494,11 +500,11 @@ pub fn draw_apply_modal(frame: &mut Frame, area: Rect, app: &mut App) {
frame.render_widget(header, rows[0]);
// Body: spinner while preflight/apply runs; otherwise show result message and path lists
if app.apply_preflight_inflight {
draw_centered_spinner(frame, rows[1], &mut app.throbber, "Checking…");
draw_centered_spinner(frame, rows[1], &mut app.spinner_start, "Checking…");
} else if app.apply_inflight {
draw_centered_spinner(frame, rows[1], &mut app.throbber, "Applying…");
draw_centered_spinner(frame, rows[1], &mut app.spinner_start, "Applying…");
} else if m.result_message.is_none() {
draw_centered_spinner(frame, rows[1], &mut app.throbber, "Loading…");
draw_centered_spinner(frame, rows[1], &mut app.spinner_start, "Loading…");
} else if let Some(msg) = &m.result_message {
let mut body_lines: Vec<Line> = Vec::new();
let first = match m.result_level {
@@ -859,29 +865,29 @@ fn format_relative_time(ts: chrono::DateTime<Utc>) -> String {
fn draw_inline_spinner(
frame: &mut Frame,
area: Rect,
state: &mut throbber_widgets_tui::ThrobberState,
spinner_start: &mut Option<Instant>,
label: &str,
) {
use ratatui::style::Style;
use throbber_widgets_tui::BRAILLE_EIGHT;
use throbber_widgets_tui::Throbber;
use throbber_widgets_tui::WhichUse;
let w = Throbber::default()
.label(label)
.style(Style::default().cyan())
.throbber_style(Style::default().magenta().bold())
.throbber_set(BRAILLE_EIGHT)
.use_type(WhichUse::Spin);
frame.render_stateful_widget(w, area, state);
use ratatui::widgets::Paragraph;
let start = spinner_start.get_or_insert_with(Instant::now);
let blink_on = (start.elapsed().as_millis() / 600).is_multiple_of(2);
let dot = if blink_on {
"• ".into()
} else {
"◦ ".dim()
};
let label = label.cyan();
let line = Line::from(vec![dot, label]);
frame.render_widget(Paragraph::new(line), area);
}

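The replacement spinner is a dot that alternates between solid and dimmed every 600 ms, driven by the parity of elapsed 600 ms windows. That timing check can be sketched independently (written with `% 2` rather than `is_multiple_of` so it runs on older toolchains):

```rust
// Mirrors `(start.elapsed().as_millis() / 600).is_multiple_of(2)`:
// the dot is solid during even 600 ms windows and dimmed during odd ones.
fn blink_on(elapsed_ms: u128) -> bool {
    (elapsed_ms / 600) % 2 == 0
}

fn main() {
    assert!(blink_on(0)); // first window: solid
    assert!(blink_on(599));
    assert!(!blink_on(600)); // second window: dimmed
    assert!(!blink_on(1199));
    assert!(blink_on(1200)); // back to solid
}
```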
fn draw_centered_spinner(
frame: &mut Frame,
area: Rect,
state: &mut throbber_widgets_tui::ThrobberState,
spinner_start: &mut Option<Instant>,
label: &str,
) {
// Center a 1xN throbber within the given rect
// Center a 1xN spinner within the given rect
let rows = Layout::default()
.direction(Direction::Vertical)
.constraints([
@@ -898,7 +904,7 @@ fn draw_centered_spinner(
Constraint::Percentage(50),
])
.split(rows[1]);
draw_inline_spinner(frame, cols[1], state, label);
draw_inline_spinner(frame, cols[1], spinner_start, label);
}

// Styling helpers for diff rendering live inline where used.
@@ -918,7 +924,12 @@ pub fn draw_env_modal(frame: &mut Frame, area: Rect, app: &mut App) {
let content = overlay_content(inner);

if app.env_loading {
draw_centered_spinner(frame, content, &mut app.throbber, "Loading environments…");
draw_centered_spinner(
frame,
content,
&mut app.spinner_start,
"Loading environments…",
);
return;
}

@@ -1004,32 +1015,40 @@ pub fn draw_best_of_modal(frame: &mut Frame, area: Rect, app: &mut App) {
use ratatui::widgets::Wrap;

let inner = overlay_outer(area);
const MAX_WIDTH: u16 = 40;
const MIN_WIDTH: u16 = 20;
const MAX_HEIGHT: u16 = 12;
const MIN_HEIGHT: u16 = 6;
let modal_width = inner.width.min(MAX_WIDTH).max(inner.width.min(MIN_WIDTH));
let modal_height = inner
.height
.min(MAX_HEIGHT)
.max(inner.height.min(MIN_HEIGHT));
let modal_x = inner.x + (inner.width.saturating_sub(modal_width)) / 2;
let modal_y = inner.y + (inner.height.saturating_sub(modal_height)) / 2;
let modal_area = Rect::new(modal_x, modal_y, modal_width, modal_height);
let title = Line::from(vec!["Parallel Attempts".magenta().bold()]);
let block = overlay_block().title(title);

frame.render_widget(Clear, inner);
frame.render_widget(block.clone(), inner);
let content = overlay_content(inner);
frame.render_widget(Clear, modal_area);
frame.render_widget(block.clone(), modal_area);
let content = overlay_content(modal_area);

let rows = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Length(2), Constraint::Min(1)])
.split(content);

let hint = Paragraph::new(Line::from(
"Use ↑/↓ to choose, 1-4 jump; Enter confirm, Esc cancel"
.cyan()
.dim(),
))
.wrap(Wrap { trim: true });
let hint = Paragraph::new(Line::from("Use ↑/↓ to choose, 1-4 jump".cyan().dim()))
.wrap(Wrap { trim: true });
frame.render_widget(hint, rows[0]);

let selected = app.best_of_modal.as_ref().map(|m| m.selected).unwrap_or(0);
let options = [1usize, 2, 3, 4];
let mut items: Vec<ListItem> = Vec::new();
for &attempts in &options {
let mut spans: Vec<ratatui::text::Span> =
vec![format!("{attempts} attempt{}", if attempts == 1 { "" } else { "s" }).into()];
let noun = if attempts == 1 { "attempt" } else { "attempts" };
let mut spans: Vec<ratatui::text::Span> = vec![format!("{attempts} {noun:<8}").into()];
spans.push(" ".into());
spans.push(format!("{attempts}x parallel").dim());
if attempts == app.best_of_n {

@@ -70,7 +70,7 @@ pub async fn build_chatgpt_headers() -> HeaderMap {
HeaderValue::from_str(&ua).unwrap_or(HeaderValue::from_static("codex-cli")),
);
if let Ok(home) = codex_core::config::find_codex_home() {
let am = codex_login::AuthManager::new(home);
let am = codex_login::AuthManager::new(home, false);
if let Some(auth) = am.auth()
&& let Ok(tok) = auth.get_token().await
&& !tok.is_empty()
@@ -91,3 +91,18 @@ pub async fn build_chatgpt_headers() -> HeaderMap {
}
headers
}

/// Construct a browser-friendly task URL for the given backend base URL.
pub fn task_url(base_url: &str, task_id: &str) -> String {
let normalized = normalize_base_url(base_url);
if let Some(root) = normalized.strip_suffix("/backend-api") {
return format!("{root}/codex/tasks/{task_id}");
}
if let Some(root) = normalized.strip_suffix("/api/codex") {
return format!("{root}/codex/tasks/{task_id}");
}
if normalized.ends_with("/codex") {
return format!("{normalized}/tasks/{task_id}");
}
format!("{normalized}/codex/tasks/{task_id}")
}

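`task_url` maps each recognized backend path style to a browser URL. A self-contained sketch of that mapping, assuming `util::normalize_base_url` (whose definition is not shown in this diff) merely trims trailing slashes:

```rust
// Simplified stand-in for util::normalize_base_url (assumption: trims trailing '/').
fn normalize_base_url(base: &str) -> String {
    base.trim_end_matches('/').to_string()
}

// Copied from the diff above: maps backend base URLs to browser task URLs.
fn task_url(base_url: &str, task_id: &str) -> String {
    let normalized = normalize_base_url(base_url);
    if let Some(root) = normalized.strip_suffix("/backend-api") {
        return format!("{root}/codex/tasks/{task_id}");
    }
    if let Some(root) = normalized.strip_suffix("/api/codex") {
        return format!("{root}/codex/tasks/{task_id}");
    }
    if normalized.ends_with("/codex") {
        return format!("{normalized}/tasks/{task_id}");
    }
    format!("{normalized}/codex/tasks/{task_id}")
}

fn main() {
    assert_eq!(
        task_url("https://chatgpt.com/backend-api", "t1"),
        "https://chatgpt.com/codex/tasks/t1"
    );
    assert_eq!(
        task_url("https://example.com/api/codex", "t1"),
        "https://example.com/codex/tasks/t1"
    );
    assert_eq!(
        task_url("https://example.com/codex", "t1"),
        "https://example.com/codex/tasks/t1"
    );
}
```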
codex-rs/codex-infty/Cargo.toml (new file, 24 lines)
@@ -0,0 +1,24 @@
[package]
name = "codex-infty"
version = { workspace = true }
edition = "2024"

[dependencies]
anyhow = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
codex-core = { path = "../core" }
codex-protocol = { path = "../protocol" }
dirs = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tempfile = { workspace = true }
tokio = { workspace = true, features = ["macros", "rt", "rt-multi-thread", "signal"] }
tokio-stream = { workspace = true }
tokio-util = { workspace = true }
tracing = { workspace = true, features = ["log"] }
futures = "0.3"

[dev-dependencies]
core_test_support = { path = "../core/tests/common" }
tempfile = { workspace = true }
wiremock = { workspace = true }
codex-rs/codex-infty/README.md (new file, 196 lines)
@@ -0,0 +1,196 @@
# Codex Infty

Codex Infty is a small orchestration layer that coordinates multiple Codex roles (Solver, Director, Verifier(s)) to drive longer, multi‑step objectives with minimal human intervention. It provides:

- A run orchestrator that routes messages between roles and advances the workflow.
- A durable run store on disk with metadata and standard subfolders.
- Default role prompts for Solver/Director/Verifier.
- A lightweight progress reporting hook for UIs/CLIs.

The crate is designed to be embedded (via the library API) and also powers the `codex infty` CLI commands.

## High‑Level Flow

```
objective → Solver
Solver → direction_request → Director → directive → Solver
… (iterate) …
Solver → final_delivery → Orchestrator returns RunOutcome
```

- The Solver always speaks structured JSON. The orchestrator parses those messages and decides the next hop.
- The Director provides crisp guidance (also JSON) that is forwarded back to the Solver.
- One or more Verifiers may assess the final deliverable; the orchestrator aggregates results and reports a summary to the Solver.
- On final_delivery, the orchestrator resolves and validates the deliverable path and returns the `RunOutcome`.

## Directory Layout (Run Store)

When a run is created, a directory is initialized with this structure:

```
<runs_root>/<run_id>/
  artifacts/    # long‑lived artifacts produced by the Solver
  memory/       # durable notes, claims, context
  index/        # indexes and caches
  deliverable/  # final output(s) assembled by the Solver
  run.json      # run metadata (id, timestamps, roles)
```

See: `codex-infty/src/run_store.rs`.

- The orchestrator persists rollout paths and optional config paths for each role into `run.json`.
- Metadata timestamps are updated on significant events (role spawns, handoffs, final delivery).
- Final deliverables must remain within the run directory. Paths are canonicalized and validated.

## Roles and Prompts

Default base instructions are injected per role if the provided `Config` has none:

- Solver: `codex-infty/src/prompts/solver.md`
- Director: `codex-infty/src/prompts/director.md`
- Verifier: `codex-infty/src/prompts/verifier.md`

You can provide your own instructions by pre‑populating `Config.base_instructions`.

## Solver Signal Contract

The Solver communicates intent using JSON messages (possibly wrapped in a fenced block). The orchestrator accepts two shapes:

- Direction request (sent to Director):

```json
{"type":"direction_request","prompt":"<question or decision>"}
```

- Final delivery (completes the run):

```json
{"type":"final_delivery","deliverable_path":"deliverable/summary.txt","summary":"<short text>"}
```

JSON may be fenced as ```json … ```; the orchestrator will strip the fence.
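A minimal sketch of the fence stripping described above. The crate's actual parsing lives in `roles::solver::parse_solver_signal`; this helper is illustrative only:

```rust
// Strip an optional ```json … ``` (or bare ```) fence around a payload.
// Returns the input unchanged when it is not fenced.
fn strip_json_fence(raw: &str) -> &str {
    let trimmed = raw.trim();
    let Some(rest) = trimmed.strip_prefix("```") else {
        return trimmed;
    };
    let rest = rest.strip_prefix("json").unwrap_or(rest);
    rest.trim_start_matches(|c| c == '\r' || c == '\n')
        .strip_suffix("```")
        .map(str::trim_end)
        .unwrap_or(trimmed)
}

fn main() {
    let fenced = "```json\n{\"type\":\"direction_request\",\"prompt\":\"next?\"}\n```";
    assert_eq!(
        strip_json_fence(fenced),
        "{\"type\":\"direction_request\",\"prompt\":\"next?\"}"
    );
    // Unfenced payloads pass through untouched.
    assert_eq!(strip_json_fence("{\"type\":\"x\"}"), "{\"type\":\"x\"}");
}
```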

## Key Types and Modules

- Orchestrator: `codex-infty/src/orchestrator.rs`
  - `InftyOrchestrator`: spawns/resumes role sessions, drives the event loop, and routes signals.
  - `execute_new_run`: one‑shot helper that spawns and then drives.
  - `spawn_run`: set up sessions and the run store.
  - `call_role`, `relay_assistant_to_role`, `post_to_role`, `await_first_assistant`, `stream_events`: utilities when integrating custom flows.

- Run store: `codex-infty/src/run_store.rs`
  - `RunStore`, `RunMetadata`, `RoleMetadata`: metadata and persistence helpers.

- Types: `codex-infty/src/types.rs`
  - `RoleConfig`: wraps a `Config` and sets sensible defaults for autonomous flows (no approvals, full sandbox access). Also used to persist optional config paths.
  - `RunParams`: input to spawn runs.
  - `RunExecutionOptions`: per‑run options (objective, timeouts).
  - `RunOutcome`: returned on successful final delivery.

- Signals: `codex-infty/src/signals.rs`
  - DTOs for director responses and verifier verdicts, and the aggregated summary type.

- Progress: `codex-infty/src/progress.rs`
  - `ProgressReporter` trait: hook for UIs/CLIs to observe solver/director/verifier activity.

## Orchestrator Workflow (Details)

1. Spawn or resume role sessions (Solver, Director, and zero or more Verifiers). Default prompts are applied if the role’s `Config` has no base instructions.
2. Optionally post an `objective` to the Solver. The progress reporter is notified and the orchestrator waits for the first Solver signal.
3. On `direction_request`:
   - Post a structured request to the Director and await the first assistant message.
   - Parse it into a `DirectiveResponse` and forward the normalized JSON to the Solver.
4. On `final_delivery`:
   - Canonicalize and validate that `deliverable_path` stays within the run directory.
   - Optionally run a verification pass using configured Verifier(s), aggregate results, and post a summary back to the Solver.
   - Notify the progress reporter, touch the run store, and return `RunOutcome`.
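The routing in steps 3–4 boils down to a match over the parsed Solver signal. A schematic sketch with hypothetical, simplified types (the real loop in `orchestrator.rs` additionally handles timeouts, progress callbacks, and verification):

```rust
// Hypothetical, simplified mirror of the orchestrator's routing decision.
enum SolverSignal {
    DirectionRequest { prompt: String },
    FinalDelivery { deliverable_path: String, summary: String },
}

enum NextHop {
    AskDirector(String),
    Complete { deliverable_path: String, summary: String },
}

fn route(signal: SolverSignal) -> NextHop {
    match signal {
        // Step 3: forward the question to the Director, then relay the directive back.
        SolverSignal::DirectionRequest { prompt } => NextHop::AskDirector(prompt),
        // Step 4: validate the deliverable path, optionally verify, then finish the run.
        SolverSignal::FinalDelivery { deliverable_path, summary } => {
            NextHop::Complete { deliverable_path, summary }
        }
    }
}

fn main() {
    let hop = route(SolverSignal::DirectionRequest { prompt: "which DB?".into() });
    assert!(matches!(hop, NextHop::AskDirector(_)));
    let done = route(SolverSignal::FinalDelivery {
        deliverable_path: "deliverable/summary.txt".into(),
        summary: "done".into(),
    });
    assert!(matches!(done, NextHop::Complete { .. }));
}
```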

## Library Usage

```rust
use std::sync::Arc;
use codex_core::{CodexAuth, config::Config};
use codex_infty::{InftyOrchestrator, RoleConfig, RunParams, RunExecutionOptions};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // 1) Load or build a Config for each role
    let solver_cfg: Config = load_config();
    let mut director_cfg = solver_cfg.clone();
    director_cfg.model = "o4-mini".into();

    // 2) Build role configs
    let solver = RoleConfig::new("solver", solver_cfg.clone());
    let director = RoleConfig::new("director", director_cfg);
    let verifiers = vec![RoleConfig::new("verifier-alpha", solver_cfg.clone())];

    // 3) Create an orchestrator (using default runs root)
    let auth = CodexAuth::from_api_key("sk-…");
    let orchestrator = InftyOrchestrator::new(auth)?;

    // 4) Execute a new run with an objective
    let params = RunParams {
        run_id: "my-run".into(),
        run_root: None, // use default ~/.codex/infty/<run_id>
        solver,
        director,
        verifiers,
    };
    let mut opts = RunExecutionOptions::default();
    opts.objective = Some("Implement feature X".into());

    let outcome = orchestrator.execute_new_run(params, opts).await?;
    println!("deliverable: {}", outcome.deliverable_path.display());
    Ok(())
}
# fn load_config() -> codex_core::config::Config { codex_core::config::Config::default() }
```

Note: Resuming runs is currently disabled.

## CLI Quickstart

The CLI (`codex`) exposes Infty helpers under the `infty` subcommand. Examples:

```bash
# Create a run and immediately drive toward completion
codex infty create --run-id demo --objective "Build and test feature"

# Inspect runs
codex infty list
codex infty show demo

# Sending one-off messages to stored runs is currently disabled
```

Flags allow customizing the Director’s model and reasoning effort; see `codex infty create --help`.

## Progress Reporting

Integrate your UI by implementing `ProgressReporter` and attaching it with `InftyOrchestrator::with_progress(...)`. You’ll receive callbacks on key milestones (objective posted, solver messages, director response, verification summaries, final delivery, etc.).

## Safety and Guardrails

- `RoleConfig::new` sets `SandboxPolicy::DangerFullAccess` and `AskForApproval::Never` to support autonomous flows. Adjust if your environment requires stricter policies.
- Deliverable paths are validated to stay inside the run directory and are fully canonicalized.
- JSON payloads are schema‑checked where applicable (e.g., solver signals and final delivery shape).

## Tests

Run the crate’s tests:

```bash
cargo test -p codex-infty
```

Many tests rely on mocked SSE streams and will auto‑skip in sandboxes where network is disabled.

## When to Use This Crate

Use `codex-infty` when you want a minimal, pragmatic multi‑role loop with:

- Clear role separation and routing.
- Durable, restart‑resilient state on disk.
- Simple integration points (progress hooks and helper APIs).

It’s intentionally small and focused so it can be embedded into larger tools or extended to meet your workflows.
codex-rs/codex-infty/src/lib.rs (new file, 38 lines)
@@ -0,0 +1,38 @@
#![deny(clippy::print_stdout, clippy::print_stderr)]

mod orchestrator;
mod progress;
mod prompts;
mod roles;
mod run_store;
mod session;
mod signals;
mod types;
pub(crate) mod utils;

pub use orchestrator::InftyOrchestrator;
pub use progress::ProgressReporter;
pub use run_store::RoleMetadata;
pub use run_store::RunMetadata;
pub use run_store::RunStore;
pub use signals::AggregatedVerifierVerdict;
pub use signals::DirectiveResponse;
pub use signals::VerifierDecision;
pub use signals::VerifierReport;
pub use signals::VerifierVerdict;
pub use types::RoleConfig;
pub use types::RoleSession;
pub use types::RunExecutionOptions;
pub use types::RunOutcome;
pub use types::RunParams;
pub use types::RunSessions;

use anyhow::Result;
use anyhow::anyhow;
use dirs::home_dir;
use std::path::PathBuf;

pub fn default_runs_root() -> Result<PathBuf> {
    let home = home_dir().ok_or_else(|| anyhow!("failed to determine home directory"))?;
    Ok(home.join(".codex").join("infty"))
}
552
codex-rs/codex-infty/src/orchestrator.rs
Normal file
552
codex-rs/codex-infty/src/orchestrator.rs
Normal file
@@ -0,0 +1,552 @@
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;

use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::cross_session::CrossSessionHub;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_protocol::ConversationId;
use tokio::signal;
use tokio_stream::StreamExt;
use tokio_util::sync::CancellationToken;
use tracing::warn;

use crate::progress::ProgressReporter;
use crate::prompts;
use crate::roles::Role;
use crate::roles::director::DirectionRequestPayload;
use crate::roles::director::DirectorRole;
use crate::roles::solver::SolverRequest;
use crate::roles::solver::SolverRole;
use crate::roles::solver::SolverSignal;
use crate::roles::solver::parse_solver_signal;
use crate::roles::verifier::VerificationRequestPayload;
use crate::roles::verifier_pool::VerifierPool;
use crate::run_store::RoleMetadata;
use crate::run_store::RunStore;
use crate::session;
use crate::signals::AggregatedVerifierVerdict;
use crate::types::RoleConfig;
use crate::types::RoleSession;
use crate::types::RunExecutionOptions;
use crate::types::RunOutcome;
use crate::types::RunParams;
use crate::types::RunSessions;

#[derive(Default)]
struct LoopState {
    waiting_for_signal: bool,
    pending_solver_turn_completion: bool,
}

struct SessionCleanup {
    conversation_id: ConversationId,
    conversation: Arc<CodexConversation>,
}

impl SessionCleanup {
    fn new(session: &RoleSession) -> Self {
        Self {
            conversation_id: session.conversation_id,
            conversation: Arc::clone(&session.conversation),
        }
    }
}

pub struct InftyOrchestrator {
    hub: Arc<CrossSessionHub>,
    conversation_manager: ConversationManager,
    runs_root: PathBuf,
    progress: Option<Arc<dyn ProgressReporter>>,
}

impl InftyOrchestrator {
    fn progress_ref(&self) -> Option<&dyn ProgressReporter> {
        self.progress.as_deref()
    }

    pub fn new(auth: CodexAuth) -> Result<Self> {
        let runs_root = crate::default_runs_root()?;
        Ok(Self::with_runs_root(auth, runs_root))
    }

    pub fn with_runs_root(auth: CodexAuth, runs_root: impl Into<PathBuf>) -> Self {
        Self {
            hub: Arc::new(CrossSessionHub::new()),
            conversation_manager: ConversationManager::with_auth(auth),
            runs_root: runs_root.into(),
            progress: None,
        }
    }

    pub fn runs_root(&self) -> &PathBuf {
        &self.runs_root
    }

    pub fn hub(&self) -> Arc<CrossSessionHub> {
        Arc::clone(&self.hub)
    }

    pub fn with_progress(mut self, reporter: Arc<dyn ProgressReporter>) -> Self {
        self.progress = Some(reporter);
        self
    }

    pub async fn execute_new_run(
        &self,
        params: RunParams,
        options: RunExecutionOptions,
    ) -> Result<RunOutcome> {
        let sessions = self.spawn_run(params).await?;
        self.drive_run(sessions, options).await
    }

    // resumable runs are disabled; execute_existing_run removed

    pub async fn spawn_run(&self, params: RunParams) -> Result<RunSessions> {
        let RunParams {
            run_id,
            run_root,
            solver,
            director,
            verifiers,
        } = params;

        let run_path = run_root.unwrap_or_else(|| self.runs_root.join(&run_id));
        let role_metadata = collect_role_metadata(&solver, &director, &verifiers);
        let mut store = RunStore::initialize(&run_path, &run_id, &role_metadata)?;
        let mut cleanup = Vec::new();

        let solver_session = match self
            .spawn_and_register_role(&run_id, &run_path, &solver, &mut store, &mut cleanup)
            .await
        {
            Ok(session) => session,
            Err(err) => {
                self.cleanup_failed_spawn(cleanup, &run_path).await;
                return Err(err);
            }
        };

        let director_session = match self
            .spawn_and_register_role(&run_id, &run_path, &director, &mut store, &mut cleanup)
            .await
        {
            Ok(session) => session,
            Err(err) => {
                self.cleanup_failed_spawn(cleanup, &run_path).await;
                return Err(err);
            }
        };

        let mut verifier_sessions = Vec::with_capacity(verifiers.len());
        for verifier in verifiers {
            let session = match self
                .spawn_and_register_role(&run_id, &run_path, &verifier, &mut store, &mut cleanup)
                .await
            {
                Ok(session) => session,
                Err(err) => {
                    self.cleanup_failed_spawn(cleanup, &run_path).await;
                    return Err(err);
                }
            };
            verifier_sessions.push(session);
        }

        Ok(RunSessions {
            run_id,
            solver: solver_session,
            director: director_session,
            verifiers: verifier_sessions,
            store,
        })
    }

    // resumable runs are disabled; resume_run removed

    async fn drive_run(
        &self,
        mut sessions: RunSessions,
        options: RunExecutionOptions,
    ) -> Result<RunOutcome> {
        let result = self.inner_drive_run(&mut sessions, &options).await;
        let cleanup = collect_session_cleanup(&sessions);
        self.shutdown_sessions(cleanup).await;
        result
    }

    async fn inner_drive_run(
        &self,
        sessions: &mut RunSessions,
        options: &RunExecutionOptions,
    ) -> Result<RunOutcome> {
        let solver_role = SolverRole::new(
            Arc::clone(&self.hub),
            sessions.run_id.clone(),
            sessions.solver.role.clone(),
            sessions.solver.conversation_id,
            self.progress.clone(),
        );
        let director_role = DirectorRole::new(
            Arc::clone(&self.hub),
            sessions.run_id.clone(),
            sessions.director.role.clone(),
            options.director_timeout,
            self.progress.clone(),
        );
        let mut verifier_pool = VerifierPool::from_sessions(
            Arc::clone(&self.hub),
            sessions,
            options.verifier_timeout,
            self.progress.clone(),
        );

        let mut solver_events = solver_role.stream_events()?;
        let mut state = LoopState::default();
        self.maybe_post_objective(&solver_role, sessions, &mut state, options)
            .await?;

        // Cancellation token that propagates Ctrl+C to nested awaits
        let cancel = CancellationToken::new();
        let cancel_on_ctrl_c = cancel.clone();
        tokio::spawn(async move {
            let _ = signal::ctrl_c().await;
            cancel_on_ctrl_c.cancel();
        });

        'event_loop: loop {
            tokio::select! {
                maybe_event = solver_events.next() => {
                    let Some(event) = maybe_event else {
                        break 'event_loop;
                    };
                    if let Some(p) = self.progress_ref() { p.solver_event(&event.event.msg); }
                    match &event.event.msg {
                        EventMsg::AgentMessage(agent_msg) => {
                            if let Some(p) = self.progress_ref() { p.solver_agent_message(agent_msg); }
                            if let Some(signal) = parse_solver_signal(&agent_msg.message) {
                                state.waiting_for_signal = false;
                                match signal {
                                    SolverSignal::DirectionRequest { prompt } => {
                                        let prompt = crate::utils::required_trimmed(
                                            prompt,
                                            "solver direction_request missing prompt text",
                                        )?;
                                        if let Some(p) = self.progress_ref() { p.direction_request(&prompt); }
                                        self
                                            .handle_direction_request(
                                                &prompt,
                                                options,
                                                &director_role,
                                                &solver_role,
                                                cancel.clone(),
                                            )
                                            .await?;
                                        sessions.store.touch()?;
                                        state.pending_solver_turn_completion = true;
                                    }
                                    SolverSignal::FinalDelivery {
                                        deliverable_path,
                                        summary,
                                    } => {
                                        let deliverable_path = crate::utils::required_trimmed(
                                            deliverable_path,
                                            "solver final_delivery missing deliverable_path",
                                        )?;
                                        if deliverable_path.is_empty() { bail!("solver final_delivery provided empty path"); }

                                        // Minimal behavior: if the provided path cannot be resolved,
                                        // send a placeholder claim so verifiers can fail it.
                                        let resolved = crate::utils::resolve_deliverable_path(
                                            sessions.store.path(),
                                            &deliverable_path,
                                        )
                                        .unwrap_or_else(|_| std::path::PathBuf::from("file not existing"));

                                        let summary_clean = crate::utils::trim_to_non_empty(summary);
                                        let summary_ref = summary_clean.as_deref();
                                        if let Some(p) = self.progress_ref() { p.final_delivery(&resolved, summary_ref); }
                                        let verified = self
                                            .run_final_verification(
                                                sessions,
                                                &mut verifier_pool,
                                                &resolved,
                                                summary_ref,
                                                options,
                                                &solver_role,
                                                cancel.clone(),
                                            )
                                            .await?;
                                        if !verified { state.pending_solver_turn_completion = true; continue; }
                                        sessions.store.touch()?;
                                        return Ok(RunOutcome {
                                            run_id: sessions.run_id.clone(),
                                            deliverable_path: resolved,
                                            summary: summary_clean,
                                            raw_message: agent_msg.message.clone(),
                                        });
                                    }
                                }
                            }
                        }
                        EventMsg::TaskComplete(..) => {
                            if state.waiting_for_signal {
                                // The solver completed its turn without issuing a signal; ask for one now.
                                solver_role.request_finalization_signal().await?;
                            } else if state.pending_solver_turn_completion {
                                // We handled a signal earlier in the loop; this completion corresponds to it.
                                state.pending_solver_turn_completion = false;
                            }
                        }
                        EventMsg::Error(error) => {
                            tracing::error!("Error: {:?}", error);
                        }
                        EventMsg::StreamError(error) => {
                            tracing::error!("Stream error: {:?}", error);
                        }
                        e => {
                            tracing::info!("Unhandled event: {:?}", e); // todo move to trace
                        }
                    }
                }
                _ = cancel.cancelled() => {
                    if let Some(progress) = self.progress.as_ref() { progress.run_interrupted(); }
                    // Proactively interrupt any in-flight role turns for fast shutdown.
                    let _ = sessions.solver.conversation.submit(Op::Interrupt).await;
                    let _ = sessions.director.conversation.submit(Op::Interrupt).await;
                    for v in &sessions.verifiers { let _ = v.conversation.submit(Op::Interrupt).await; }
                    // Cleanup is handled by the caller (drive_run) to avoid double-shutdown
                    bail!("run interrupted by Ctrl+C");
                }
            }
        }

        Err(anyhow!(
            "run {} ended before emitting final_delivery message",
            sessions.run_id
        ))
    }

    async fn maybe_post_objective(
        &self,
        solver: &crate::roles::solver::SolverRole,
        sessions: &mut RunSessions,
        state: &mut LoopState,
        options: &RunExecutionOptions,
    ) -> Result<()> {
        if let Some(objective) = options.objective.as_deref()
            && !objective.trim().is_empty()
        {
            solver
                .post(objective, Some(SolverRole::solver_signal_schema()))
                .await?;
            sessions.store.touch()?;
            state.waiting_for_signal = true;
            if let Some(p) = self.progress_ref() {
                p.objective_posted(objective);
            }
        }
        Ok(())
    }

    async fn handle_direction_request(
        &self,
        prompt: &str,
        options: &RunExecutionOptions,
        director_role: &DirectorRole,
        solver_role: &SolverRole,
        cancel: CancellationToken,
    ) -> Result<()> {
        let request = DirectionRequestPayload::new(prompt, options.objective.as_deref());
        let directive_payload = tokio::select! {
            r = director_role.call(&request) => {
                r.context("director response was not valid directive JSON")?
            }
            _ = cancel.cancelled() => {
                bail!("interrupted")
            }
        };
        if let Some(progress) = self.progress.as_ref() {
            progress.director_response(&directive_payload);
        }
        let req = SolverRequest::from(directive_payload);
        tokio::select! {
            r = solver_role.call(&req) => { r?; }
            _ = cancel.cancelled() => { bail!("interrupted"); }
        }
        Ok(())
    }

    #[allow(clippy::too_many_arguments)]
    async fn run_final_verification(
        &self,
        sessions: &mut RunSessions,
        verifier_pool: &mut VerifierPool,
        deliverable_path: &Path,
        summary: Option<&str>,
        options: &RunExecutionOptions,
        solver_role: &SolverRole,
        cancel: CancellationToken,
    ) -> Result<bool> {
        let relative = deliverable_path
            .strip_prefix(sessions.store.path())
            .ok()
            .and_then(|p| p.to_str().map(|s| s.to_string()));
        let claim_path = relative.unwrap_or_else(|| deliverable_path.display().to_string());

        let objective = crate::utils::objective_as_str(options);

        let request = VerificationRequestPayload::new(claim_path.as_str(), summary, objective);
        if verifier_pool.is_empty() {
            return Ok(true);
        }
        let round = tokio::select! {
            r = verifier_pool.collect_round(&request) => { r? }
            _ = cancel.cancelled() => { bail!("interrupted"); }
        };
        verifier_pool
            .rotate_passing(sessions, &self.conversation_manager, &round.passing_roles)
            .await?;
        let summary_result = round.summary;
        self.emit_verification_summary(&summary_result);
        let req = SolverRequest::from(&summary_result);
        tokio::select! {
            r = solver_role.call(&req) => { r?; }
            _ = cancel.cancelled() => { bail!("interrupted"); }
        }
        Ok(summary_result.overall.is_pass())
    }

    fn emit_verification_summary(&self, summary: &AggregatedVerifierVerdict) {
        if let Some(progress) = self.progress.as_ref() {
            progress.verification_summary(summary);
        }
    }

    async fn cleanup_failed_spawn(&self, sessions: Vec<SessionCleanup>, run_path: &Path) {
        self.shutdown_sessions(sessions).await;
        if run_path.exists()
            && let Err(err) = fs::remove_dir_all(run_path)
        {
            warn!(
                path = %run_path.display(),
                ?err,
                "failed to remove run directory after spawn failure"
            );
        }
    }

    // resumable runs are disabled; cleanup_failed_resume removed

    async fn shutdown_sessions(&self, sessions: Vec<SessionCleanup>) {
        for session in sessions {
            if let Err(err) = session.conversation.submit(Op::Shutdown).await {
                warn!(
                    %session.conversation_id,
                    ?err,
                    "failed to shutdown session during cleanup"
                );
            }
            let _ = self
                .conversation_manager
                .remove_conversation(&session.conversation_id)
                .await;
        }
    }

    async fn spawn_and_register_role(
        &self,
        run_id: &str,
        run_path: &Path,
        role_config: &RoleConfig,
        store: &mut RunStore,
        cleanup: &mut Vec<SessionCleanup>,
    ) -> Result<RoleSession> {
        let session = session::spawn_role(
            Arc::clone(&self.hub),
            &self.conversation_manager,
            run_id,
            run_path,
            role_config.clone(),
            prompts::ensure_instructions,
        )
        .await?;
        cleanup.push(SessionCleanup::new(&session));
        store.update_rollout_path(&session.role, session.rollout_path.clone())?;
        if let Some(path) = role_config.config_path.clone() {
            store.set_role_config_path(&session.role, path)?;
        }
        Ok(session)
    }

    // resumable runs are disabled; resume_and_register_role removed
}

impl InftyOrchestrator {
    /// Test-only helper to run a single verification round against all verifiers,
    /// applying the replacement policy (replace passes, keep failures).
    pub async fn verify_round_for_test(
        &self,
        sessions: &mut RunSessions,
        claim_path: &str,
        options: &RunExecutionOptions,
    ) -> Result<AggregatedVerifierVerdict> {
        let mut pool = VerifierPool::from_sessions(
            Arc::clone(&self.hub),
            sessions,
            options.verifier_timeout,
            self.progress.clone(),
        );
        let req = VerificationRequestPayload::new(claim_path, None, None);
        let round = pool.collect_round(&req).await?;
        pool.rotate_passing(sessions, &self.conversation_manager, &round.passing_roles)
            .await?;
        Ok(round.summary)
    }
}

fn collect_role_metadata(
    solver: &RoleConfig,
    director: &RoleConfig,
    verifiers: &[RoleConfig],
) -> Vec<RoleMetadata> {
    solver_and_director_metadata(solver, director)
        .into_iter()
        .chain(verifiers.iter().map(|verifier| RoleMetadata {
            role: verifier.role.clone(),
            rollout_path: None,
            config_path: verifier.config_path.clone(),
        }))
        .collect()
}

fn solver_and_director_metadata(solver: &RoleConfig, director: &RoleConfig) -> Vec<RoleMetadata> {
    vec![
        RoleMetadata {
            role: solver.role.clone(),
            rollout_path: None,
            config_path: solver.config_path.clone(),
        },
        RoleMetadata {
            role: director.role.clone(),
            rollout_path: None,
            config_path: director.config_path.clone(),
        },
    ]
}

fn collect_session_cleanup(sessions: &RunSessions) -> Vec<SessionCleanup> {
    let mut cleanup = Vec::with_capacity(2 + sessions.verifiers.len());
    cleanup.push(SessionCleanup::new(&sessions.solver));
    cleanup.push(SessionCleanup::new(&sessions.director));
    cleanup.extend(sessions.verifiers.iter().map(SessionCleanup::new));
    cleanup
}
25
codex-rs/codex-infty/src/progress.rs
Normal file
@@ -0,0 +1,25 @@
use std::path::Path;

use codex_core::protocol::AgentMessageEvent;
use codex_core::protocol::EventMsg;

use crate::signals::AggregatedVerifierVerdict;
use crate::signals::DirectiveResponse;
use crate::signals::VerifierVerdict;

pub trait ProgressReporter: Send + Sync {
    fn objective_posted(&self, _objective: &str) {}
    fn solver_event(&self, _event: &EventMsg) {}
    fn role_event(&self, _role: &str, _event: &EventMsg) {}
    fn solver_agent_message(&self, _message: &AgentMessageEvent) {}
    /// Called when the solver emits a message that failed to parse as a valid
    /// JSON signal according to the expected `solver_signal_schema`.
    fn invalid_solver_signal(&self, _raw_message: &str) {}
    fn direction_request(&self, _prompt: &str) {}
    fn director_response(&self, _directive: &DirectiveResponse) {}
    fn verification_request(&self, _claim_path: &str, _notes: Option<&str>) {}
    fn verifier_verdict(&self, _role: &str, _verdict: &VerifierVerdict) {}
    fn verification_summary(&self, _summary: &AggregatedVerifierVerdict) {}
    fn final_delivery(&self, _deliverable_path: &Path, _summary: Option<&str>) {}
    fn run_interrupted(&self) {}
}
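Every method on `ProgressReporter` has an empty default body, so an embedder only overrides the callbacks it cares about. A stdlib-only sketch of that pattern (the `CountingReporter` type is hypothetical, not part of the crate):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Same shape as ProgressReporter: all methods default to no-ops.
trait Reporter: Send + Sync {
    fn objective_posted(&self, _objective: &str) {}
    fn run_interrupted(&self) {}
}

// Hypothetical reporter that only counts posted objectives.
struct CountingReporter {
    posted: AtomicUsize,
}

impl Reporter for CountingReporter {
    fn objective_posted(&self, _objective: &str) {
        self.posted.fetch_add(1, Ordering::Relaxed);
    }
    // run_interrupted keeps the empty default body.
}

fn main() {
    let r = CountingReporter { posted: AtomicUsize::new(0) };
    r.objective_posted("prove the lemma");
    r.run_interrupted(); // no-op via the default
    assert_eq!(r.posted.load(Ordering::Relaxed), 1);
}
```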
20
codex-rs/codex-infty/src/prompts/director.md
Normal file
@@ -0,0 +1,20 @@
You are the **Director**. Your role is to pilot and manage an agent so that it resolves a given objective in its totality.

## Guidelines:
- The objective needs to be solved in its original form. If the agent proposes a simplification or a partial resolution, that is not sufficient. You must tell the agent to solve the full objective.
- The agent will often just report some results before moving to the next step. In that case, simply encourage it to move on with a short "Go ahead", "Keep going", or similar message; no rationale is needed.
- If the agent proposes multiple approaches, choose the approach that is most likely to solve the objective.
- If the agent is stuck or believes it cannot resolve the objective, encourage it and try to find a solution together. Your role is to support the agent in its quest; it is sometimes necessary to cheer it up a little.
- No infinite loops! If you detect that the agent sends the exact same message or question multiple times, you are probably in an infinite loop. Try to break it by re-focusing on the objective and how to approach it.
- You must always be crisp and inflexible. Keep the objective in mind.
- Remember that the agent should do the following; if you feel this is not the case, remind it to:
  * Document its work.
  * Take a very rigorous and clean approach.
  * Focus on the total resolution of the objective.
- Challenge the Solver whenever they drift toward summarising existing work instead of advancing the concrete proof or solution.

Respond **only** with JSON in this exact shape:
```json
{"directive":"<directive or next step>","rationale":"<why this is the right move>"}
```
Keep `directive` actionable and concise. Use `rationale` for supporting detail. Leave `rationale` empty if it adds no value.
80
codex-rs/codex-infty/src/prompts/mod.rs
Normal file
@@ -0,0 +1,80 @@
use codex_core::config::Config;
pub(crate) const DIRECTOR_PROMPT: &str = include_str!("director.md");
pub(crate) const SOLVER_PROMPT: &str = include_str!("solver.md");
pub(crate) const VERIFIER_PROMPT: &str = include_str!("verifier.md");

pub fn ensure_instructions(role: &str, config: &mut Config) {
    if config.base_instructions.is_none()
        && let Some(text) = default_instructions_for_role(role)
    {
        config.base_instructions = Some(text.to_string());
    }
}

fn default_instructions_for_role(role: &str) -> Option<&'static str> {
    let normalized = role.to_ascii_lowercase();
    if normalized == "solver" {
        Some(SOLVER_PROMPT)
    } else if normalized == "director" {
        Some(DIRECTOR_PROMPT)
    } else if normalized.starts_with("verifier") {
        Some(VERIFIER_PROMPT)
    } else {
        None
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use core_test_support::load_default_config_for_test;
    use tempfile::TempDir;

    #[test]
    fn provides_prompts_for_known_roles() {
        let home = TempDir::new().unwrap();
        let mut config = load_default_config_for_test(&home);
        config.base_instructions = None;
        ensure_instructions("solver", &mut config);
        assert!(
            config
                .base_instructions
                .as_ref()
                .unwrap()
                .contains("You are a brilliant mathematician")
        );

        let home = TempDir::new().unwrap();
        let mut config = load_default_config_for_test(&home);
        config.base_instructions = None;
        ensure_instructions("director", &mut config);
        assert!(
            config
                .base_instructions
                .as_ref()
                .unwrap()
                .contains("You are the **Director**")
        );

        let home = TempDir::new().unwrap();
        let mut config = load_default_config_for_test(&home);
        config.base_instructions = None;
        ensure_instructions("verifier-alpha", &mut config);
        assert!(
            config
                .base_instructions
                .as_ref()
                .unwrap()
                .contains("You are the **Verifier**")
        );
    }

    #[test]
    fn does_not_override_existing_instructions() {
        let home = TempDir::new().unwrap();
        let mut config = load_default_config_for_test(&home);
        config.base_instructions = Some("custom".to_string());
        ensure_instructions("solver", &mut config);
        assert_eq!(config.base_instructions.as_deref(), Some("custom"));
    }
}
40
codex-rs/codex-infty/src/prompts/solver.md
Normal file
@@ -0,0 +1,40 @@
You are a brilliant mathematician tasked with producing a **new** piece of reasoning, proof, construction, or counterexample that resolves the stated objective. Your goal is to make actual progress in science while being rigorous and innovative.

You MUST solve the provided objective in its totality. If no known solution exists, it is your job to find a new one or to propose an intelligent approach.
A result stating that this is not possible is not acceptable. If the solution does not exist, make it happen.

## Responsibilities
- Understand the objective and break it into a living execution plan.
- Produce artifacts under `artifacts/`, durable notes under `memory/`, and supporting indexes under `index/`. Prefer `apply_patch` for text edits and use `shell` for other filesystem work.
- When you exit a task or take a dependency on external evidence, write JSON notes in `memory/claims/` that link to the supporting artifacts.
- Run verification steps (tests, linters, proofs) under the sandbox before claiming completion.
- Every deliverable must include the actual solution or proof (not just a literature review) and enough detail for the Verifier to reproduce or scrutinise it.
- Your goal is to find new solutions to problems for which humans do not yet have one, so do not focus on searching the internet or the literature; try building your own proofs.
- You are very rigorous in your approach.
- You do not fear new challenges. If a problem seems impossible to solve, try!

Available Codex tools mirror standard Codex sessions (e.g. `shell`, `apply_patch`). Assume all filesystem paths are relative to the current run store directory unless stated otherwise.

## Communication contract
The orchestrator routes your structured messages to the Director. Respond with **JSON only**: no leading prose or trailing commentary. Wrap JSON in a fenced block only if the agent policy forces it.

- Every reply must populate the full schema, even when a field does not apply. Set unused string fields to `null`.
- Direction request (sent to the Director):
```json
{"type":"direction_request","prompt":"<concise question or decision>","claim_path":null,"notes":null,"deliverable_path":null,"summary":null}
```
- Final delivery (after receiving the finalization instruction):
```json
{"type":"final_delivery","prompt":null,"claim_path":null,"notes":null,"deliverable_path":"deliverable/summary.txt","summary":"<answer plus supporting context>"}
```

## Operating rhythm
- You MUST always address the comments received from the verifiers.
- Create `deliverable/summary.txt` before every final delivery. Capture the final answer, how you reached it, and any follow-up instructions. Do not forget it.
- When uncertainty remains, prioritise experiments or reasoning steps that move you closer to a finished proof rather than cataloguing known results.
- Do not try to version your work or use git! EVER!
- If you receive the same answer multiple times, you are probably in an infinite loop. Try a new approach or something else.
- Keep the run resilient to restarts: document intent, intermediate results, and follow-up tasks in `memory/`.
- Prefer concrete evidence. Link every claim to artifacts or durable notes so the verifier can reproduce your reasoning.
- On failure feedback from a verifier, address the feedback and update or fix your work.
- Only a final solution to the objective is an acceptable result to send to the verifier. If you do not find any solution, try to create a new one on your own.
21
codex-rs/codex-infty/src/prompts/verifier.md
Normal file
@@ -0,0 +1,21 @@
You are the **Verifier**. As a brilliant mathematician, your role is to verify a provided response against a given objective.

## Guidelines
- You must always be perfectly rigorous when verifying a solution.
- The solution MUST solve the objective in its totality. A partial resolution, or a summary of why resolution is not possible, is NOT ACCEPTABLE.
- Evaluate correctness and completeness.
- The solution might try to convince you that a partial resolution is good enough or that a total resolution is not possible. This is NOT ACCEPTABLE and should automatically trigger a `fail`.

## How to answer
When you give the result of your verification:
- Be explicit in your conclusion (does the artifact contain everything? is it 100% correct?).
- If you are not sure, prefer a `fail`.
- If it is a `fail`, give a crisp analysis of what is wrong or what is missing.

Respond **only** with JSON in this form:
```json
{"verdict":"pass","reasons":[],"suggestions":[]}
```
Use `"fail"` when the claim is not ready. Populate `reasons` with concrete blocking issues. Provide actionable `suggestions` for remediation. Omit entries when not needed.

Do not include extra commentary outside the JSON payload.
98
codex-rs/codex-infty/src/roles/director.rs
Normal file
@@ -0,0 +1,98 @@
use std::sync::Arc;
use std::time::Duration;

use anyhow::Result;
use codex_core::cross_session::AssistantMessage;
use codex_core::cross_session::CrossSessionHub;
use serde::Serialize;
use serde_json::Value;

use crate::progress::ProgressReporter;
use crate::roles::Role;
use crate::roles::parse_json_struct;
use crate::session;
use crate::signals::DirectiveResponse;

#[derive(Serialize)]
pub struct DirectionRequestPayload<'a> {
    #[serde(rename = "type")]
    kind: &'static str,
    pub prompt: &'a str,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub objective: Option<&'a str>,
}

impl<'a> DirectionRequestPayload<'a> {
    pub fn new(prompt: &'a str, objective: Option<&'a str>) -> Self {
        Self {
            kind: "direction_request",
            prompt,
            objective,
        }
    }
}

pub struct DirectorRole {
    hub: Arc<CrossSessionHub>,
    run_id: String,
    role: String,
    timeout: Duration,
    progress: Option<Arc<dyn ProgressReporter>>,
}

impl DirectorRole {
    pub fn new(
        hub: Arc<CrossSessionHub>,
        run_id: impl Into<String>,
        role: impl Into<String>,
        timeout: Duration,
        progress: Option<Arc<dyn ProgressReporter>>,
    ) -> Self {
        Self {
            hub,
            run_id: run_id.into(),
            role: role.into(),
            timeout,
            progress,
        }
    }

    pub fn response_schema() -> Value {
        serde_json::json!({
            "type": "object",
            "required": ["directive", "rationale"],
            "properties": {
                "directive": { "type": "string" },
                "rationale": { "type": ["string", "null"] }
            },
            "additionalProperties": false
        })
    }
}

impl Role<DirectionRequestPayload<'_>, DirectiveResponse> for DirectorRole {
    fn call<'a>(
        &'a self,
        req: &'a DirectionRequestPayload<'a>,
    ) -> futures::future::BoxFuture<'a, Result<DirectiveResponse>> {
        Box::pin(async move {
            let request_text = serde_json::to_string_pretty(req)?;
            let handle = session::post_turn(
                self.hub.as_ref(),
                &self.run_id,
                &self.role,
                request_text,
                Some(Self::response_schema()),
            )
            .await?;
            let progress = self
                .progress
                .as_deref()
                .map(|reporter| (reporter, self.role.as_str()));
            let response: AssistantMessage =
                session::await_first_idle(self.hub.as_ref(), &handle, self.timeout, progress)
                    .await?;
            parse_json_struct(&response.message.message)
        })
    }
}
codex-rs/codex-infty/src/roles/mod.rs (new file, 49 lines)
@@ -0,0 +1,49 @@
use anyhow::Result;
use futures::future::BoxFuture;

pub mod director;
pub mod solver;
pub mod verifier;
pub mod verifier_pool;

pub trait Role<Req, Resp> {
    fn call<'a>(&'a self, req: &'a Req) -> BoxFuture<'a, Result<Resp>>;
}

// Shared helpers used by role implementations
use anyhow::Context as _;
use anyhow::anyhow;
use std::any::type_name;

pub(crate) fn strip_json_code_fence(text: &str) -> Option<&str> {
    let trimmed = text.trim();
    if let Some(rest) = trimmed.strip_prefix("```json") {
        return rest.strip_suffix("```").map(str::trim);
    }
    if let Some(rest) = trimmed.strip_prefix("```JSON") {
        return rest.strip_suffix("```").map(str::trim);
    }
    if let Some(rest) = trimmed.strip_prefix("```") {
        return rest.strip_suffix("```").map(str::trim);
    }
    None
}

pub(crate) fn parse_json_struct<T>(message: &str) -> Result<T>
where
    T: serde::de::DeserializeOwned,
{
    let trimmed = message.trim();
    if trimmed.is_empty() {
        return Err(anyhow!("message was empty"));
    }

    serde_json::from_str(trimmed)
        .or_else(|err| {
            strip_json_code_fence(trimmed)
                .map(|inner| serde_json::from_str(inner))
                .unwrap_or_else(|| Err(err))
        })
        .map_err(|err| anyhow!(err))
        .with_context(|| format!("failed to parse message as {}", type_name::<T>()))
}
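parse_json_struct first tries the message as raw JSON and only falls back to fence stripping when that fails, so both fenced and unfenced model replies parse. The fence handling can be exercised in isolation; a minimal std-only sketch (reimplementing strip_json_code_fence outside the crate) behaves like this:

```rust
// Std-only sketch of the code-fence stripping used by parse_json_struct.
fn strip_json_code_fence(text: &str) -> Option<&str> {
    let trimmed = text.trim();
    // Check the most specific prefixes first so "```json" is not
    // swallowed by the bare "```" case.
    for prefix in ["```json", "```JSON", "```"] {
        if let Some(rest) = trimmed.strip_prefix(prefix) {
            return rest.strip_suffix("```").map(str::trim);
        }
    }
    None
}

fn main() {
    let fenced = "```json\n{\"directive\": \"continue\"}\n```";
    assert_eq!(
        strip_json_code_fence(fenced),
        Some("{\"directive\": \"continue\"}")
    );
    // Plain JSON carries no fence, so the helper reports None and the
    // caller keeps the result of the direct parse attempt.
    assert_eq!(strip_json_code_fence("{\"a\": 1}"), None);
    println!("ok");
}
```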
codex-rs/codex-infty/src/roles/solver.rs (new file, 202 lines)
@@ -0,0 +1,202 @@
use std::sync::Arc;
use std::time::Duration;

use anyhow::Result;
use codex_core::cross_session::AssistantMessage;
use codex_core::cross_session::CrossSessionHub;
use codex_core::cross_session::SessionEventStream;
use codex_protocol::ConversationId;
use serde::de::Error as _;
use serde_json::Value;

use crate::progress::ProgressReporter;
use crate::roles::Role;
use crate::session;
use crate::signals::AggregatedVerifierVerdict;
use crate::signals::DirectiveResponse;

pub struct SolverRole {
    hub: Arc<CrossSessionHub>,
    run_id: String,
    role: String,
    conversation_id: ConversationId,
    progress: Option<Arc<dyn ProgressReporter>>,
}

impl SolverRole {
    pub fn new(
        hub: Arc<CrossSessionHub>,
        run_id: impl Into<String>,
        role: impl Into<String>,
        conversation_id: ConversationId,
        progress: Option<Arc<dyn ProgressReporter>>,
    ) -> Self {
        Self {
            hub,
            run_id: run_id.into(),
            role: role.into(),
            conversation_id,
            progress,
        }
    }

    pub fn solver_signal_schema() -> Value {
        // Only allow asking the director or sending the final result.
        serde_json::json!({
            "type": "object",
            "properties": {
                "type": { "type": "string", "enum": ["direction_request", "final_delivery"] },
                "prompt": { "type": ["string", "null"] },
                "deliverable_path": { "type": ["string", "null"] },
                "summary": { "type": ["string", "null"] }
            },
            "required": ["type", "prompt", "deliverable_path", "summary"],
            "additionalProperties": false
        })
    }

    pub fn final_delivery_schema() -> Value {
        serde_json::json!({
            "type": "object",
            "required": ["type", "deliverable_path", "summary"],
            "properties": {
                "type": { "const": "final_delivery" },
                "deliverable_path": { "type": "string" },
                "summary": { "type": ["string", "null"] }
            },
            "additionalProperties": false
        })
    }

    pub async fn post(
        &self,
        text: impl Into<String>,
        final_output_json_schema: Option<Value>,
    ) -> Result<()> {
        let _ = session::post_turn(
            self.hub.as_ref(),
            &self.run_id,
            &self.role,
            text,
            final_output_json_schema,
        )
        .await?;
        Ok(())
    }

    pub fn stream_events(
        &self,
    ) -> Result<SessionEventStream, codex_core::cross_session::CrossSessionError> {
        self.hub.stream_events(self.conversation_id)
    }

    pub async fn request_finalization_signal(&self) -> Result<()> {
        let handle = session::post_turn(
            self.hub.as_ref(),
            &self.run_id,
            &self.role,
            crate::types::FINALIZATION_PROMPT,
            Some(Self::final_delivery_schema()),
        )
        .await?;
        // Allow more time for the solver to start emitting the
        // finalization signal before timing out as "idle".
        let _ =
            session::await_first_idle(self.hub.as_ref(), &handle, Duration::from_secs(120), None)
                .await?;
        Ok(())
    }
}

pub struct SolverPost {
    pub text: String,
    pub final_output_json_schema: Option<Value>,
    pub timeout: Duration,
}

pub enum SolverRequest {
    Directive(DirectiveResponse),
    VerificationSummary(AggregatedVerifierVerdict),
}

impl From<DirectiveResponse> for SolverRequest {
    fn from(d: DirectiveResponse) -> Self {
        SolverRequest::Directive(d)
    }
}

impl From<&AggregatedVerifierVerdict> for SolverRequest {
    fn from(v: &AggregatedVerifierVerdict) -> Self {
        SolverRequest::VerificationSummary(v.clone())
    }
}

impl SolverRequest {
    fn to_text(&self) -> Result<String> {
        match self {
            SolverRequest::Directive(d) => Ok(serde_json::to_string_pretty(d)?),
            SolverRequest::VerificationSummary(s) => Ok(serde_json::to_string_pretty(s)?),
        }
    }
}

impl Role<SolverPost, AssistantMessage> for SolverRole {
    fn call<'a>(
        &'a self,
        req: &'a SolverPost,
    ) -> futures::future::BoxFuture<'a, Result<AssistantMessage>> {
        Box::pin(async move {
            let handle = session::post_turn(
                self.hub.as_ref(),
                &self.run_id,
                &self.role,
                req.text.clone(),
                req.final_output_json_schema.clone(),
            )
            .await?;
            let progress = self
                .progress
                .as_deref()
                .map(|reporter| (reporter, self.role.as_str()));
            session::await_first_idle(self.hub.as_ref(), &handle, req.timeout, progress).await
        })
    }
}

impl Role<SolverRequest, ()> for SolverRole {
    fn call<'a>(&'a self, req: &'a SolverRequest) -> futures::future::BoxFuture<'a, Result<()>> {
        Box::pin(async move {
            let text = req.to_text()?;
            self.post(text, Some(Self::solver_signal_schema())).await
        })
    }
}

#[derive(Debug, serde::Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum SolverSignal {
    DirectionRequest {
        #[serde(default)]
        prompt: Option<String>,
    },
    FinalDelivery {
        #[serde(default)]
        deliverable_path: Option<String>,
        #[serde(default)]
        summary: Option<String>,
    },
}

pub fn parse_solver_signal(message: &str) -> Option<SolverSignal> {
    let trimmed = message.trim();
    if trimmed.is_empty() {
        return None;
    }
    serde_json::from_str(trimmed)
        .or_else(|_| {
            crate::roles::strip_json_code_fence(trimmed)
                .map(|inner| serde_json::from_str(inner.trim()))
                .unwrap_or_else(|| Err(serde_json::Error::custom("invalid payload")))
        })
        .ok()
}
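SolverSignal relies on serde's internally tagged representation: the "type" field selects the variant, and absent optional fields default to None. A std-only sketch of the same dispatch, with a hand-rolled tag extractor standing in for serde (all names here are illustrative, not crate API):

```rust
#[derive(Debug, PartialEq)]
enum Signal {
    DirectionRequest,
    FinalDelivery,
    Unknown,
}

// Pull the string value of the "type" key out of a flat JSON object.
// Deliberately naive: good enough to demonstrate the dispatch, not a parser.
fn extract_type(json: &str) -> Option<&str> {
    let idx = json.find("\"type\"")?;
    let rest = &json[idx + "\"type\"".len()..];
    let colon = rest.find(':')?;
    let rest = rest[colon + 1..].trim_start().strip_prefix('"')?;
    let end = rest.find('"')?;
    Some(&rest[..end])
}

// Map the tag onto a variant, exactly as #[serde(tag = "type")] would.
fn classify(json: &str) -> Signal {
    match extract_type(json) {
        Some("direction_request") => Signal::DirectionRequest,
        Some("final_delivery") => Signal::FinalDelivery,
        _ => Signal::Unknown,
    }
}

fn main() {
    assert_eq!(
        classify("{\"type\": \"final_delivery\", \"deliverable_path\": null}"),
        Signal::FinalDelivery
    );
    assert_eq!(
        classify("{\"type\": \"direction_request\"}"),
        Signal::DirectionRequest
    );
    assert_eq!(classify("{}"), Signal::Unknown);
    println!("ok");
}
```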
codex-rs/codex-infty/src/roles/verifier.rs (new file, 132 lines)
@@ -0,0 +1,132 @@
use std::sync::Arc;
use std::time::Duration;

use anyhow::Result;
use codex_core::cross_session::AssistantMessage;
use codex_core::cross_session::CrossSessionHub;
use serde::Serialize;
use serde_json::Value;

use crate::progress::ProgressReporter;
use crate::roles::Role;
use crate::roles::parse_json_struct;
use crate::session;
use crate::signals::AggregatedVerifierVerdict;
use crate::signals::VerifierDecision;
use crate::signals::VerifierReport;
use crate::signals::VerifierVerdict;

#[derive(Serialize)]
pub struct VerificationRequestPayload<'a> {
    #[serde(rename = "type")]
    kind: &'static str,
    pub claim_path: &'a str,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub notes: Option<&'a str>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub objective: Option<&'a str>,
}

impl<'a> VerificationRequestPayload<'a> {
    pub fn new(claim_path: &'a str, notes: Option<&'a str>, objective: Option<&'a str>) -> Self {
        Self {
            kind: "verification_request",
            claim_path,
            notes,
            objective,
        }
    }
}

pub struct VerifierRole {
    hub: Arc<CrossSessionHub>,
    run_id: String,
    role: String,
    timeout: Duration,
    progress: Option<Arc<dyn ProgressReporter>>,
}

impl VerifierRole {
    pub fn new(
        hub: Arc<CrossSessionHub>,
        run_id: impl Into<String>,
        role: impl Into<String>,
        timeout: Duration,
        progress: Option<Arc<dyn ProgressReporter>>,
    ) -> Self {
        Self {
            hub,
            run_id: run_id.into(),
            role: role.into(),
            timeout,
            progress,
        }
    }

    pub fn role(&self) -> &str {
        &self.role
    }

    pub fn response_schema() -> Value {
        serde_json::json!({
            "type": "object",
            "required": ["verdict", "reasons", "suggestions"],
            "properties": {
                "verdict": { "type": "string", "enum": ["pass", "fail"] },
                "reasons": { "type": "array", "items": { "type": "string" } },
                "suggestions": { "type": "array", "items": { "type": "string" } }
            },
            "additionalProperties": false
        })
    }
}

impl Role<VerificationRequestPayload<'_>, VerifierVerdict> for VerifierRole {
    fn call<'a>(
        &'a self,
        req: &'a VerificationRequestPayload<'a>,
    ) -> futures::future::BoxFuture<'a, Result<VerifierVerdict>> {
        Box::pin(async move {
            let request_text = serde_json::to_string_pretty(req)?;
            let handle = session::post_turn(
                self.hub.as_ref(),
                &self.run_id,
                &self.role,
                request_text,
                Some(Self::response_schema()),
            )
            .await?;
            let progress = self
                .progress
                .as_deref()
                .map(|reporter| (reporter, self.role.as_str()));
            let response: AssistantMessage =
                session::await_first_idle(self.hub.as_ref(), &handle, self.timeout, progress)
                    .await?;
            parse_json_struct(&response.message.message)
        })
    }
}

pub fn aggregate_verdicts(items: Vec<(String, VerifierVerdict)>) -> AggregatedVerifierVerdict {
    let mut overall = VerifierDecision::Pass;
    let mut verdicts = Vec::with_capacity(items.len());

    for (role, verdict) in items {
        if !verdict.verdict.is_pass() {
            overall = VerifierDecision::Fail;
        }
        verdicts.push(VerifierReport {
            role,
            verdict: verdict.verdict,
            reasons: verdict.reasons,
            suggestions: verdict.suggestions,
        });
    }

    AggregatedVerifierVerdict {
        kind: "verification_feedback",
        overall,
        verdicts,
    }
}
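aggregate_verdicts is a strict all-must-pass fold: the overall decision starts at Pass and a single failing verifier flips it to Fail for the whole round. The core of that rule, stripped of the crate types (names here are illustrative):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Decision {
    Pass,
    Fail,
}

// Overall verdict is Pass only when every verifier passed.
fn aggregate(verdicts: &[(&str, Decision)]) -> Decision {
    if verdicts.iter().all(|(_, d)| *d == Decision::Pass) {
        Decision::Pass
    } else {
        Decision::Fail
    }
}

fn main() {
    assert_eq!(
        aggregate(&[("verifier-1", Decision::Pass), ("verifier-2", Decision::Pass)]),
        Decision::Pass
    );
    // One failure is enough to fail the round.
    assert_eq!(
        aggregate(&[("verifier-1", Decision::Pass), ("verifier-2", Decision::Fail)]),
        Decision::Fail
    );
    // An empty pool vacuously passes, matching the fold's Pass initial value.
    assert_eq!(aggregate(&[]), Decision::Pass);
    println!("ok");
}
```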
codex-rs/codex-infty/src/roles/verifier_pool.rs (new file, 153 lines)
@@ -0,0 +1,153 @@
use std::sync::Arc;
use std::time::Duration;

use anyhow::Context as _;
use anyhow::Result;
use codex_core::ConversationManager;
use codex_core::cross_session::CrossSessionHub;
use codex_core::protocol::Op;

use crate::progress::ProgressReporter;
use crate::prompts;
use crate::roles::Role;
use crate::roles::verifier::VerificationRequestPayload;
use crate::roles::verifier::VerifierRole;
use crate::roles::verifier::aggregate_verdicts;
use crate::session;
use crate::signals::AggregatedVerifierVerdict;
use crate::signals::VerifierVerdict;
use crate::types::RoleConfig;
use crate::types::RunSessions;

pub struct VerificationRound {
    pub summary: AggregatedVerifierVerdict,
    pub passing_roles: Vec<String>,
}

pub struct VerifierPool {
    hub: Arc<CrossSessionHub>,
    run_id: String,
    timeout: Duration,
    progress: Option<Arc<dyn ProgressReporter>>,
    roles: Vec<VerifierRole>,
}

impl VerifierPool {
    pub fn from_sessions(
        hub: Arc<CrossSessionHub>,
        sessions: &RunSessions,
        timeout: Duration,
        progress: Option<Arc<dyn ProgressReporter>>,
    ) -> Self {
        let roles = sessions
            .verifiers
            .iter()
            .map(|v| {
                VerifierRole::new(
                    Arc::clone(&hub),
                    sessions.run_id.clone(),
                    v.role.clone(),
                    timeout,
                    progress.clone(),
                )
            })
            .collect();
        Self {
            hub,
            run_id: sessions.run_id.clone(),
            timeout,
            progress,
            roles,
        }
    }

    pub fn is_empty(&self) -> bool {
        self.roles.is_empty()
    }

    pub async fn collect_round(
        &self,
        request: &VerificationRequestPayload<'_>,
    ) -> Result<VerificationRound> {
        let futures = self
            .roles
            .iter()
            .map(|role| async {
                let name = role.role().to_string();
                let verdict = role.call(request).await;
                (name, verdict)
            })
            .collect::<Vec<_>>();
        let joined = futures::future::join_all(futures).await;

        let mut results: Vec<(String, VerifierVerdict)> = Vec::with_capacity(joined.len());
        let mut passing_roles: Vec<String> = Vec::new();
        for (name, verdict_res) in joined.into_iter() {
            let verdict = verdict_res
                .with_context(|| format!("verifier {} returned invalid verdict JSON", name))?;
            if let Some(progress) = self.progress.as_ref() {
                progress.verifier_verdict(&name, &verdict);
            }
            if verdict.verdict.is_pass() {
                passing_roles.push(name.clone());
            }
            results.push((name, verdict));
        }
        let summary = aggregate_verdicts(results);
        Ok(VerificationRound {
            summary,
            passing_roles,
        })
    }

    pub fn replace_role(&mut self, role_name: &str) {
        if let Some(idx) = self.roles.iter().position(|v| v.role() == role_name) {
            self.roles[idx] = VerifierRole::new(
                Arc::clone(&self.hub),
                self.run_id.clone(),
                role_name.to_string(),
                self.timeout,
                self.progress.clone(),
            );
        }
    }

    pub async fn rotate_passing(
        &mut self,
        sessions: &mut RunSessions,
        manager: &ConversationManager,
        passing_roles: &[String],
    ) -> Result<()> {
        for role in passing_roles {
            // Find the existing session index for this role.
            let Some(idx) = sessions.verifiers.iter().position(|s| &s.role == role) else {
                continue;
            };
            let old = &sessions.verifiers[idx];
            // Best-effort shutdown and unregister.
            let _ = old.conversation.submit(Op::Shutdown).await;
            let _ = manager.remove_conversation(&old.conversation_id).await;

            // Reuse the existing verifier's config so overrides (e.g., base_url in tests)
            // are preserved when respawning a passing verifier.
            let config = old.config.clone();
            let role_config = RoleConfig::new(role.to_string(), config);
            let run_path = sessions.store.path();
            let session = session::spawn_role(
                Arc::clone(&self.hub),
                manager,
                &self.run_id,
                run_path,
                role_config,
                prompts::ensure_instructions,
            )
            .await?;
            sessions
                .store
                .update_rollout_path(&session.role, session.rollout_path.clone())?;
            sessions.verifiers[idx] = session;
            self.replace_role(role);
        }
        Ok(())
    }
}
codex-rs/codex-infty/src/run_store.rs (new file, 211 lines)
@@ -0,0 +1,211 @@
use std::fs;
use std::io::Write;
use std::path::Path;
use std::path::PathBuf;

use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use chrono::DateTime;
use chrono::Utc;
use serde::Deserialize;
use serde::Serialize;
use tempfile::NamedTempFile;

const ARTIFACTS_DIR: &str = "artifacts";
const MEMORY_DIR: &str = "memory";
const INDEX_DIR: &str = "index";
const DELIVERABLE_DIR: &str = "deliverable";
const METADATA_FILE: &str = "run.json";

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RoleMetadata {
    pub role: String,
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub rollout_path: Option<PathBuf>,
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub config_path: Option<PathBuf>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RunMetadata {
    pub run_id: String,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
    pub roles: Vec<RoleMetadata>,
}

#[derive(Debug, Clone)]
pub struct RunStore {
    path: PathBuf,
    metadata: RunMetadata,
}

impl RunStore {
    pub fn initialize(
        run_path: impl AsRef<Path>,
        run_id: &str,
        roles: &[RoleMetadata],
    ) -> Result<Self> {
        let run_path = run_path.as_ref().to_path_buf();
        fs::create_dir_all(&run_path)
            .with_context(|| format!("failed to create run directory {}", run_path.display()))?;

        for child in [ARTIFACTS_DIR, MEMORY_DIR, INDEX_DIR, DELIVERABLE_DIR] {
            fs::create_dir_all(run_path.join(child))
                .with_context(|| format!("failed to create subdirectory {child}"))?;
        }

        let metadata_path = run_path.join(METADATA_FILE);
        if metadata_path.exists() {
            return Err(anyhow!(
                "run metadata already exists at {}",
                metadata_path.display()
            ));
        }

        let now = Utc::now();
        let metadata = RunMetadata {
            run_id: run_id.to_string(),
            created_at: now,
            updated_at: now,
            roles: roles.to_vec(),
        };
        write_metadata(&metadata_path, &metadata)?;

        Ok(Self {
            path: run_path,
            metadata,
        })
    }

    pub fn load(run_path: impl AsRef<Path>) -> Result<Self> {
        let run_path = run_path.as_ref().to_path_buf();
        let metadata_path = run_path.join(METADATA_FILE);
        let metadata: RunMetadata = serde_json::from_slice(
            &fs::read(&metadata_path)
                .with_context(|| format!("failed to read {}", metadata_path.display()))?,
        )
        .with_context(|| format!("failed to parse {}", metadata_path.display()))?;

        Ok(Self {
            path: run_path,
            metadata,
        })
    }

    pub fn path(&self) -> &Path {
        &self.path
    }

    pub fn metadata(&self) -> &RunMetadata {
        &self.metadata
    }

    pub fn role_metadata(&self, role: &str) -> Option<&RoleMetadata> {
        self.metadata.roles.iter().find(|meta| meta.role == role)
    }

    pub fn update_rollout_path(&mut self, role: &str, rollout_path: PathBuf) -> Result<()> {
        if let Some(meta) = self
            .metadata
            .roles
            .iter_mut()
            .find(|meta| meta.role == role)
        {
            meta.rollout_path = Some(rollout_path);
            self.commit_metadata()
        } else {
            Err(anyhow!("role {role} not found in run store"))
        }
    }

    pub fn set_role_config_path(&mut self, role: &str, path: PathBuf) -> Result<()> {
        if let Some(meta) = self
            .metadata
            .roles
            .iter_mut()
            .find(|meta| meta.role == role)
        {
            meta.config_path = Some(path);
            self.commit_metadata()
        } else {
            Err(anyhow!("role {role} not found in run store"))
        }
    }

    pub fn touch(&mut self) -> Result<()> {
        self.metadata.updated_at = Utc::now();
        self.commit_metadata()
    }

    fn commit_metadata(&mut self) -> Result<()> {
        self.metadata.updated_at = Utc::now();
        let metadata_path = self.path.join(METADATA_FILE);
        write_metadata(&metadata_path, &self.metadata)
    }
}

fn write_metadata(path: &Path, metadata: &RunMetadata) -> Result<()> {
    let parent = path
        .parent()
        .ok_or_else(|| anyhow!("metadata path must have parent"))?;
    let mut temp = NamedTempFile::new_in(parent)
        .with_context(|| format!("failed to create temp file in {}", parent.display()))?;
    serde_json::to_writer_pretty(&mut temp, metadata)?;
    temp.flush()?;
    temp.persist(path)
        .with_context(|| format!("failed to persist metadata to {}", path.display()))?;
    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    #[test]
    fn initialize_creates_directories_and_metadata() {
        let temp = TempDir::new().unwrap();
        let run_path = temp.path().join("run_1");
        let roles = vec![
            RoleMetadata {
                role: "solver".into(),
                rollout_path: None,
                config_path: None,
            },
            RoleMetadata {
                role: "director".into(),
                rollout_path: None,
                config_path: None,
            },
        ];

        let store = RunStore::initialize(&run_path, "run_1", &roles).unwrap();
        assert!(store.path().join(ARTIFACTS_DIR).is_dir());
        assert!(store.path().join(MEMORY_DIR).is_dir());
        assert!(store.path().join(INDEX_DIR).is_dir());
        assert!(store.path().join(DELIVERABLE_DIR).is_dir());
        assert_eq!(store.metadata().roles.len(), 2);
    }

    #[test]
    fn update_rollout_persists_metadata() {
        let temp = TempDir::new().unwrap();
        let run_path = temp.path().join("run_2");
        let roles = vec![RoleMetadata {
            role: "solver".into(),
            rollout_path: None,
            config_path: None,
        }];
        let mut store = RunStore::initialize(&run_path, "run_2", &roles).unwrap();
        let rollout = PathBuf::from("/tmp/rollout.jsonl");
        store
            .update_rollout_path("solver", rollout.clone())
            .unwrap();

        let loaded = RunStore::load(&run_path).unwrap();
        let solver = loaded.role_metadata("solver").unwrap();
        assert_eq!(solver.rollout_path.as_ref().unwrap(), &rollout);
    }
}
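write_metadata follows the write-to-temp-then-persist pattern: the new run.json is written next to the target and renamed over it in one step, so readers never observe a half-written file. A std-only sketch of the same idea, with fs::rename standing in for tempfile's NamedTempFile::persist:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Atomically replace `path` with `contents`: write a sibling temp file,
// flush it, then rename over the target.
fn write_atomic(path: &Path, contents: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut file = fs::File::create(&tmp)?;
    file.write_all(contents.as_bytes())?;
    file.flush()?;
    // Rename within the same directory is atomic on POSIX filesystems.
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("run_store_demo");
    fs::create_dir_all(&dir)?;
    let target = dir.join("run.json");
    write_atomic(&target, "{\"run_id\":\"run_1\"}")?;
    write_atomic(&target, "{\"run_id\":\"run_2\"}")?;
    // Readers only ever see a complete document.
    assert_eq!(fs::read_to_string(&target)?, "{\"run_id\":\"run_2\"}");
    println!("ok");
    Ok(())
}
```

Writing the temp file into the same directory as the target (as NamedTempFile::new_in does above) matters: rename is only atomic within a filesystem.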
codex-rs/codex-infty/src/session.rs (new file, 112 lines)
@@ -0,0 +1,112 @@
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;

use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use codex_core::ConversationManager;
use codex_core::CrossSessionSpawnParams;
use codex_core::config::Config;
use codex_core::cross_session::AssistantMessage;
use codex_core::cross_session::CrossSessionError;
use codex_core::cross_session::CrossSessionHub;
use codex_core::cross_session::PostUserTurnRequest;
use codex_core::cross_session::RoleOrId;
use codex_core::cross_session::TurnHandle;
use serde_json::Value;
use tokio::time::Instant;
use tokio_stream::StreamExt as _;

use crate::progress::ProgressReporter;
use crate::types::RoleConfig;
use crate::types::RoleSession;

pub async fn spawn_role(
    hub: Arc<CrossSessionHub>,
    manager: &ConversationManager,
    run_id: &str,
    run_path: &Path,
    role_config: RoleConfig,
    ensure_instructions: impl FnOnce(&str, &mut Config),
) -> Result<RoleSession> {
    let RoleConfig {
        role, mut config, ..
    } = role_config;
    config.cwd = run_path.to_path_buf();
    ensure_instructions(&role, &mut config);
    let cfg_for_session = config.clone();
    let session = manager
        .new_conversation_with_cross_session(
            cfg_for_session,
            CrossSessionSpawnParams {
                hub: Arc::clone(&hub),
                run_id: Some(run_id.to_string()),
                role: Some(role.clone()),
            },
        )
        .await?;
    // Note: include the final config used to spawn the session.
    Ok(RoleSession::from_new(role, session, config))
}

// Resumable runs are disabled for now; resume_role was removed.

pub async fn post_turn(
    hub: &CrossSessionHub,
    run_id: &str,
    role: &str,
    text: impl Into<String>,
    final_output_json_schema: Option<Value>,
) -> Result<TurnHandle, CrossSessionError> {
    hub.post_user_turn(PostUserTurnRequest {
        target: RoleOrId::RunRole {
            run_id: run_id.to_string(),
            role: role.to_string(),
        },
        text: text.into(),
        final_output_json_schema,
    })
    .await
}

pub async fn await_first_idle(
    hub: &CrossSessionHub,
    handle: &TurnHandle,
    idle_timeout: Duration,
    progress: Option<(&dyn ProgressReporter, &str)>,
) -> Result<AssistantMessage> {
    let mut events = hub.stream_events(handle.conversation_id())?;
    let wait_first = hub.await_first_assistant(handle, idle_timeout);
    tokio::pin!(wait_first);

    let idle = tokio::time::sleep(idle_timeout);
    tokio::pin!(idle);

    let submission_id = handle.submission_id().to_string();

    loop {
        tokio::select! {
            result = &mut wait_first => {
                return result.map_err(|err| anyhow!(err));
            }
            maybe_event = events.next() => {
                let Some(event) = maybe_event else {
                    bail!(CrossSessionError::SessionClosed);
                };
                if event.event.id == submission_id {
                    if let Some((reporter, role)) = progress {
                        reporter.role_event(role, &event.event.msg);
                    }
                    if let codex_core::protocol::EventMsg::Error(err) = &event.event.msg {
                        bail!(anyhow!(err.message.clone()));
                    }
                    idle.as_mut().reset(Instant::now() + idle_timeout);
                }
            }
            _ = &mut idle => {
                bail!(CrossSessionError::AwaitTimeout(idle_timeout));
            }
        }
    }
}
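await_first_idle treats idle_timeout as a sliding deadline: every event for the submission resets the window, so a turn only times out after a full quiet interval with no activity. The same sliding-window idea in a std-only sketch, with threads and recv_timeout standing in for the tokio select! loop (names are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Wait for a "done" message, restarting the idle window on every event.
// Returns Err when a full `idle` interval passes with no traffic
// (or when the sender goes away).
fn await_done(rx: mpsc::Receiver<&'static str>, idle: Duration) -> Result<(), &'static str> {
    loop {
        match rx.recv_timeout(idle) {
            Ok("done") => return Ok(()),
            Ok(_progress_event) => continue, // activity: the window restarts
            Err(_) => return Err("idle timeout"),
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Events spaced 30ms apart, each landing inside the 200ms window,
        // so the sliding deadline never fires even though the total run
        // exceeds a single window would suggest.
        for _ in 0..3 {
            thread::sleep(Duration::from_millis(30));
            tx.send("progress").unwrap();
        }
        thread::sleep(Duration::from_millis(30));
        tx.send("done").unwrap();
    });
    assert_eq!(await_done(rx, Duration::from_millis(200)), Ok(()));
    println!("ok");
}
```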
codex-rs/codex-infty/src/signals.rs (new file, 55 lines)
@@ -0,0 +1,55 @@
use serde::Deserialize;
use serde::Serialize;

#[derive(Debug, Deserialize, Serialize)]
pub struct DirectiveResponse {
    pub directive: String,
    #[serde(default)]
    pub rationale: Option<String>,
}

#[derive(Debug, Clone, Copy, Deserialize, Serialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum VerifierDecision {
    Pass,
    Fail,
}

impl VerifierDecision {
    pub fn is_pass(self) -> bool {
        matches!(self, VerifierDecision::Pass)
    }
}

#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct VerifierVerdict {
    pub verdict: VerifierDecision,
    #[serde(default)]
    pub reasons: Vec<String>,
    #[serde(default)]
    pub suggestions: Vec<String>,
}

#[derive(Debug, Serialize, Clone)]
pub struct VerifierReport {
    pub role: String,
    pub verdict: VerifierDecision,
    pub reasons: Vec<String>,
    pub suggestions: Vec<String>,
}

#[derive(Debug, Serialize, Clone)]
pub struct AggregatedVerifierVerdict {
    #[serde(rename = "type")]
    pub kind: &'static str,
    pub overall: VerifierDecision,
    pub verdicts: Vec<VerifierReport>,
}

impl From<&AggregatedVerifierVerdict> for String {
    fn from(value: &AggregatedVerifierVerdict) -> Self {
        serde_json::to_string_pretty(value).unwrap_or_else(|_| "{}".to_string())
    }
}
codex-rs/codex-infty/src/types.rs (new file, 103 lines)
@@ -0,0 +1,103 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;

use codex_core::CodexConversation;
use codex_core::NewConversation;
use codex_core::config::Config;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_protocol::ConversationId;

pub(crate) const DEFAULT_DIRECTOR_TIMEOUT: Duration = Duration::from_secs(1200);
pub(crate) const DEFAULT_VERIFIER_TIMEOUT: Duration = Duration::from_secs(1800);
pub(crate) const FINALIZATION_PROMPT: &str = "Create deliverable/: include compiled artifacts or scripts, usage docs, and tests. Write deliverable/summary.txt capturing the final answer, evidence, and follow-up steps. Also provide deliverable/README.md with overview, manifest (paths and sizes), verification steps, and limitations. Remove scratch files. Reply with JSON: {\"type\":\"final_delivery\",\"deliverable_path\":\"deliverable/summary.txt\",\"summary\":\"<answer plus supporting context>\"}.";

#[derive(Clone)]
pub struct RoleConfig {
    pub role: String,
    pub config: Config,
    pub config_path: Option<PathBuf>,
}

impl RoleConfig {
    pub fn new(role: impl Into<String>, mut config: Config) -> Self {
        config.sandbox_policy = SandboxPolicy::DangerFullAccess;
        config.approval_policy = AskForApproval::Never;
        Self {
            role: role.into(),
            config,
            config_path: None,
        }
    }

    pub fn with_path(role: impl Into<String>, config: Config, config_path: PathBuf) -> Self {
        Self {
            role: role.into(),
            config,
            config_path: Some(config_path),
        }
    }
}

pub struct RunParams {
    pub run_id: String,
    pub run_root: Option<PathBuf>,
    pub solver: RoleConfig,
    pub director: RoleConfig,
    pub verifiers: Vec<RoleConfig>,
}

#[derive(Clone)]
pub struct RunExecutionOptions {
    pub objective: Option<String>,
    pub director_timeout: Duration,
    pub verifier_timeout: Duration,
}

impl Default for RunExecutionOptions {
    fn default() -> Self {
        Self {
            objective: None,
            director_timeout: DEFAULT_DIRECTOR_TIMEOUT,
            verifier_timeout: DEFAULT_VERIFIER_TIMEOUT,
        }
    }
}

pub struct RunOutcome {
    pub run_id: String,
    pub deliverable_path: PathBuf,
    pub summary: Option<String>,
    pub raw_message: String,
}

pub struct RoleSession {
    pub role: String,
    pub conversation_id: ConversationId,
    pub conversation: Arc<CodexConversation>,
    pub session_configured: codex_core::protocol::SessionConfiguredEvent,
    pub rollout_path: PathBuf,
    pub config: Config,
}

impl RoleSession {
    pub(crate) fn from_new(role: String, session: NewConversation, config: Config) -> Self {
        Self {
            role,
            conversation_id: session.conversation_id,
            conversation: session.conversation,
            session_configured: session.session_configured.clone(),
            rollout_path: session.session_configured.rollout_path.clone(),
            config,
        }
    }
}

pub struct RunSessions {
    pub run_id: String,
    pub solver: RoleSession,
    pub director: RoleSession,
    pub verifiers: Vec<RoleSession>,
    pub store: crate::RunStore,
}
91  codex-rs/codex-infty/src/utils.rs  Normal file
@@ -0,0 +1,91 @@
use std::path::Path;
use std::path::PathBuf;

use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;

pub fn trim_to_non_empty(opt: Option<String>) -> Option<String> {
    opt.and_then(|s| {
        let trimmed = s.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}

pub fn required_trimmed(opt: Option<String>, err_msg: &str) -> Result<String> {
    trim_to_non_empty(opt).ok_or_else(|| anyhow!(err_msg.to_string()))
}
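`trim_to_non_empty` collapses both `None` and whitespace-only inputs to `None`, and trims everything else. A std-only copy of the helper showing that behavior:

```rust
// Same logic as the crate helper: treat blank strings as absent.
fn trim_to_non_empty(opt: Option<String>) -> Option<String> {
    opt.and_then(|s| {
        let trimmed = s.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}

fn main() {
    assert_eq!(trim_to_non_empty(None), None);
    // Whitespace-only input is normalized to None.
    assert_eq!(trim_to_non_empty(Some("   ".to_string())), None);
    // Surrounding whitespace is stripped from real content.
    assert_eq!(
        trim_to_non_empty(Some("  done  ".to_string())),
        Some("done".to_string())
    );
}
```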

pub fn resolve_deliverable_path(base: &Path, candidate: &str) -> Result<PathBuf> {
    let base_abs = base
        .canonicalize()
        .with_context(|| format!("failed to canonicalize run store {}", base.display()))?;

    let candidate_path = Path::new(candidate);
    let joined = if candidate_path.is_absolute() {
        candidate_path.to_path_buf()
    } else {
        base_abs.join(candidate_path)
    };

    let resolved = joined.canonicalize().with_context(|| {
        format!(
            "failed to canonicalize deliverable path {}",
            joined.display()
        )
    })?;

    if !resolved.starts_with(&base_abs) {
        bail!(
            "deliverable path {} escapes run store {}",
            resolved.display(),
            base_abs.display()
        );
    }

    Ok(resolved)
}
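`resolve_deliverable_path` canonicalizes both the run store and the joined candidate, then rejects anything that resolves outside the store. A simplified, purely lexical sketch of that containment check (`resolve_under` is a hypothetical name; unlike the real helper it does not canonicalize, so symlinks and `..` components are not defended against here):

```rust
use std::path::{Path, PathBuf};

// Lexical version of the containment check: join the candidate onto the
// base, then require the result to stay under the base. The real helper
// additionally canonicalizes both sides, which resolves symlinks and `..`.
fn resolve_under(base: &Path, candidate: &str) -> Option<PathBuf> {
    let candidate_path = Path::new(candidate);
    let joined = if candidate_path.is_absolute() {
        candidate_path.to_path_buf()
    } else {
        base.join(candidate_path)
    };
    if joined.starts_with(base) {
        Some(joined)
    } else {
        None
    }
}

fn main() {
    let base = Path::new("/runs/run-1");
    // A relative path stays under the run store.
    assert!(resolve_under(base, "deliverable/summary.txt").is_some());
    // An absolute path outside the run store is rejected.
    assert!(resolve_under(base, "/etc/passwd").is_none());
}
```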

pub fn objective_as_str(options: &crate::types::RunExecutionOptions) -> Option<&str> {
    options
        .objective
        .as_deref()
        .map(str::trim)
        .filter(|s| !s.is_empty())
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    #[test]
    fn resolve_deliverable_within_base() {
        let tmp = TempDir::new().unwrap();
        let base = tmp.path();
        std::fs::create_dir_all(base.join("deliverable")).unwrap();
        std::fs::write(base.join("deliverable").join("a.txt"), "ok").unwrap();
        let resolved = resolve_deliverable_path(base, "deliverable/a.txt").unwrap();
        let base_abs = base.canonicalize().unwrap();
        assert!(resolved.starts_with(&base_abs));
    }

    #[test]
    fn resolve_deliverable_rejects_escape() {
        let tmp = TempDir::new().unwrap();
        let base = tmp.path();
        // Create a real file outside of base so canonicalization succeeds
        let outside = TempDir::new().unwrap();
        let outside_file = outside.path().join("outside.txt");
        std::fs::write(&outside_file, "nope").unwrap();

        let err = resolve_deliverable_path(base, outside_file.to_str().unwrap()).unwrap_err();
        let msg = format!("{err}");
        assert!(msg.contains("escapes run store"));
    }
}
327  codex-rs/codex-infty/tests/orchestrator.rs  Normal file
@@ -0,0 +1,327 @@
#![cfg(not(target_os = "windows"))]

use std::time::Duration;

use codex_core::CodexAuth;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::cross_session::AssistantMessage;
use codex_core::cross_session::PostUserTurnRequest;
use codex_core::cross_session::RoleOrId;
use codex_core::protocol::Op;
use codex_infty::InftyOrchestrator;
use codex_infty::RoleConfig;
use codex_infty::RunExecutionOptions;
use codex_infty::RunParams;
use core_test_support::load_default_config_for_test;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use tempfile::TempDir;
use wiremock::MockServer;

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn orchestrator_routes_between_roles_and_records_store() -> anyhow::Result<()> {
    skip_if_no_network!(Ok(()));

    let server = responses::start_mock_server().await;
    let bodies = vec![
        responses::sse(vec![
            responses::ev_response_created("solver-resp-1"),
            responses::ev_assistant_message("solver-msg-1", "Need direction"),
            responses::ev_completed("solver-resp-1"),
        ]),
        responses::sse(vec![
            responses::ev_response_created("director-resp-1"),
            responses::ev_assistant_message("director-msg-1", "Proceed iteratively"),
            responses::ev_completed("director-resp-1"),
        ]),
        responses::sse(vec![
            responses::ev_response_created("solver-resp-2"),
            responses::ev_assistant_message("solver-msg-2", "Acknowledged"),
            responses::ev_completed("solver-resp-2"),
        ]),
    ];
    let response_mock = responses::mount_sse_sequence(&server, bodies).await;

    let runs_root = TempDir::new()?;
    let orchestrator =
        InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
    let run_id = "run-orchestrator".to_string();

    let solver_config = build_config(&server).await?;
    let director_config = build_config(&server).await?;

    let sessions = orchestrator
        .spawn_run(RunParams {
            run_id: run_id.clone(),
            run_root: Some(runs_root.path().join("runs").join(&run_id)),
            solver: RoleConfig::new("solver", solver_config.clone()),
            director: RoleConfig::new("director", director_config.clone()),
            verifiers: Vec::new(),
        })
        .await?;

    let solver_message = call_role(
        &orchestrator,
        &sessions.run_id,
        "solver",
        "kick off plan",
        Duration::from_secs(1),
    )
    .await?;
    assert_eq!(solver_message.message.message, "Need direction");

    let director_message = relay_assistant_to_role(
        &orchestrator,
        &sessions.run_id,
        "director",
        &solver_message,
        Duration::from_secs(1),
    )
    .await?;
    assert_eq!(director_message.message.message, "Proceed iteratively");

    let solver_reply = relay_assistant_to_role(
        &orchestrator,
        &sessions.run_id,
        "solver",
        &director_message,
        Duration::from_secs(1),
    )
    .await?;
    assert_eq!(solver_reply.message.message, "Acknowledged");

    assert_eq!(response_mock.requests().len(), 3);
    let first_request = response_mock.requests().first().unwrap().body_json();
    let instructions = first_request["instructions"]
        .as_str()
        .expect("request should set instructions");
    assert!(
        instructions.contains("brilliant mathematician"),
        "missing solver prompt: {instructions}"
    );
    assert!(sessions.store.path().is_dir());
    let solver_meta = sessions.store.role_metadata("solver").unwrap();
    assert!(solver_meta.rollout_path.is_some());

    Ok(())
}

// resumable runs are disabled; resume test removed

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn execute_new_run_drives_to_completion() -> anyhow::Result<()> {
    skip_if_no_network!(Ok(()));

    let server = responses::start_mock_server().await;
    let bodies = vec![
        responses::sse(vec![
            responses::ev_response_created("solver-resp-1"),
            responses::ev_assistant_message(
                "solver-msg-1",
                r#"{"type":"direction_request","prompt":"Need directive","claim_path":null,"notes":null,"deliverable_path":null,"summary":null}"#,
            ),
            responses::ev_completed("solver-resp-1"),
        ]),
        responses::sse(vec![
            responses::ev_response_created("director-resp-1"),
            responses::ev_assistant_message(
                "director-msg-1",
                r#"{"directive":"Proceed","rationale":"Follow the plan"}"#,
            ),
            responses::ev_completed("director-resp-1"),
        ]),
        responses::sse(vec![
            responses::ev_response_created("solver-resp-2"),
            responses::ev_assistant_message("solver-msg-2", "Acknowledged"),
            responses::ev_assistant_message(
                "solver-msg-4",
                r#"{"type":"final_delivery","prompt":null,"claim_path":null,"notes":null,"deliverable_path":"deliverable","summary":"done"}"#,
            ),
            responses::ev_completed("solver-resp-2"),
        ]),
        // Final verification of the deliverable
        responses::sse(vec![
            responses::ev_response_created("verifier-resp-3"),
            responses::ev_assistant_message(
                "verifier-msg-3",
                r#"{"verdict":"pass","reasons":[],"suggestions":[]}"#,
            ),
            responses::ev_completed("verifier-resp-3"),
        ]),
        // Feedback turn summarizing the verification outcome back to the solver
        responses::sse(vec![
            responses::ev_response_created("solver-resp-5"),
            responses::ev_completed("solver-resp-5"),
        ]),
    ];
    for body in bodies {
        responses::mount_sse_once(&server, body).await;
    }

    let runs_root = TempDir::new()?;
    let orchestrator =
        InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
    let run_id = "run-auto".to_string();
    let run_root = runs_root.path().join("runs").join(&run_id);

    let solver_config = build_config(&server).await?;
    let director_config = build_config(&server).await?;
    let verifier_config = build_config(&server).await?;

    let options = RunExecutionOptions {
        objective: Some("Implement feature".to_string()),
        ..RunExecutionOptions::default()
    };

    let outcome = orchestrator
        .execute_new_run(
            RunParams {
                run_id: run_id.clone(),
                run_root: Some(run_root.clone()),
                solver: RoleConfig::new("solver", solver_config),
                director: RoleConfig::new("director", director_config),
                verifiers: vec![RoleConfig::new("verifier", verifier_config)],
            },
            options,
        )
        .await?;

    assert_eq!(outcome.run_id, run_id);
    assert_eq!(outcome.summary.as_deref(), Some("done"));
    assert!(outcome.raw_message.contains("final_delivery"));
    let canonical_run_root = std::fs::canonicalize(&run_root)?;
    let canonical_deliverable = std::fs::canonicalize(&outcome.deliverable_path)?;
    assert!(canonical_deliverable.starts_with(&canonical_run_root));

    Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn spawn_run_cleans_up_on_failure() -> anyhow::Result<()> {
    skip_if_no_network!(Ok(()));

    let server = responses::start_mock_server().await;
    let bodies = vec![
        responses::sse(vec![
            responses::ev_response_created("solver-resp-1"),
            responses::ev_completed("solver-resp-1"),
        ]),
        responses::sse(vec![
            responses::ev_response_created("director-resp-1"),
            responses::ev_completed("director-resp-1"),
        ]),
        responses::sse(vec![
            responses::ev_response_created("dup-resp"),
            responses::ev_completed("dup-resp"),
        ]),
    ];
    for body in bodies {
        responses::mount_sse_once(&server, body).await;
    }

    let runs_root = TempDir::new()?;
    let orchestrator =
        InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
    let run_id = "run-cleanup".to_string();
    let run_path = runs_root.path().join("runs").join(&run_id);

    let solver_config = build_config(&server).await?;
    let director_config = build_config(&server).await?;

    let result = orchestrator
        .spawn_run(RunParams {
            run_id: run_id.clone(),
            run_root: Some(run_path.clone()),
            solver: RoleConfig::new("solver", solver_config.clone()),
            director: RoleConfig::new("director", director_config.clone()),
            verifiers: vec![RoleConfig::new("solver", solver_config.clone())],
        })
        .await;
    assert!(result.is_err());
    assert!(!run_path.exists(), "failed run should remove run directory");

    let bodies = vec![
        responses::sse(vec![
            responses::ev_response_created("solver-resp-2"),
            responses::ev_completed("solver-resp-2"),
        ]),
        responses::sse(vec![
            responses::ev_response_created("director-resp-2"),
            responses::ev_completed("director-resp-2"),
        ]),
    ];
    for body in bodies {
        responses::mount_sse_once(&server, body).await;
    }

    let sessions = orchestrator
        .spawn_run(RunParams {
            run_id: run_id.clone(),
            run_root: Some(run_path.clone()),
            solver: RoleConfig::new("solver", solver_config),
            director: RoleConfig::new("director", director_config),
            verifiers: Vec::new(),
        })
        .await?;

    sessions.solver.conversation.submit(Op::Shutdown).await.ok();
    sessions
        .director
        .conversation
        .submit(Op::Shutdown)
        .await
        .ok();

    Ok(())
}

async fn build_config(server: &MockServer) -> anyhow::Result<Config> {
    let home = TempDir::new()?;
    let cwd = TempDir::new()?;
    let mut config = load_default_config_for_test(&home);
    config.cwd = cwd.path().to_path_buf();
    let mut provider = built_in_model_providers()["openai"].clone();
    provider.base_url = Some(format!("{}/v1", server.uri()));
    config.model_provider = provider;
    Ok(config)
}

async fn call_role(
    orchestrator: &InftyOrchestrator,
    run_id: &str,
    role: &str,
    text: &str,
    timeout: Duration,
) -> anyhow::Result<AssistantMessage> {
    let hub = orchestrator.hub();
    let handle = hub
        .post_user_turn(PostUserTurnRequest {
            target: RoleOrId::RunRole {
                run_id: run_id.to_string(),
                role: role.to_string(),
            },
            text: text.to_string(),
            final_output_json_schema: None,
        })
        .await?;
    let reply = hub.await_first_assistant(&handle, timeout).await?;
    Ok(reply)
}

async fn relay_assistant_to_role(
    orchestrator: &InftyOrchestrator,
    run_id: &str,
    target_role: &str,
    assistant: &AssistantMessage,
    timeout: Duration,
) -> anyhow::Result<AssistantMessage> {
    call_role(
        orchestrator,
        run_id,
        target_role,
        &assistant.message.message,
        timeout,
    )
    .await
}
324  codex-rs/codex-infty/tests/schemas.rs  Normal file
@@ -0,0 +1,324 @@
#![cfg(not(target_os = "windows"))]

use std::time::Duration;

use codex_core::CodexAuth;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_infty::InftyOrchestrator;
use codex_infty::RoleConfig;
use codex_infty::RunExecutionOptions;
use codex_infty::RunParams;
use core_test_support::load_default_config_for_test;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use tempfile::TempDir;
use wiremock::MockServer;

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn director_request_includes_output_schema() -> anyhow::Result<()> {
    skip_if_no_network!(Ok(()));

    let server = responses::start_mock_server().await;

    // 1) Solver: emit a direction_request so the orchestrator calls Director.
    let body_solver = responses::sse(vec![
        responses::ev_response_created("solver-resp-1"),
        responses::ev_assistant_message(
            "solver-msg-1",
            r#"{"type":"direction_request","prompt":"Need directive","claim_path":null,"notes":null,"deliverable_path":null,"summary":null}"#,
        ),
        responses::ev_completed("solver-resp-1"),
    ]);
    let _mock_solver = responses::mount_sse_once(&server, body_solver).await;

    // 2) Director: reply with a directive JSON.
    let body_director = responses::sse(vec![
        responses::ev_response_created("director-resp-1"),
        responses::ev_assistant_message(
            "director-msg-1",
            r#"{"directive":"Proceed","rationale":"Follow the plan"}"#,
        ),
        responses::ev_completed("director-resp-1"),
    ]);
    let mock_director = responses::mount_sse_once(&server, body_director).await;

    // 3) After relaying the directive back to Solver, we do not need to continue the run.
    // Provide a short empty solver completion body to avoid hanging HTTP calls.
    let body_solver_after = responses::sse(vec![
        responses::ev_response_created("solver-resp-2"),
        responses::ev_completed("solver-resp-2"),
    ]);
    let _mock_solver_after = responses::mount_sse_once(&server, body_solver_after).await;

    let runs_root = TempDir::new()?;
    let orchestrator =
        InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
    let run_id = "run-director-schema".to_string();

    let solver_config = build_config(&server).await?;
    let director_config = build_config(&server).await?;

    let params = RunParams {
        run_id: run_id.clone(),
        run_root: Some(runs_root.path().join("runs").join(&run_id)),
        solver: RoleConfig::new("solver", solver_config),
        director: RoleConfig::new("director", director_config),
        verifiers: Vec::new(),
    };

    let options = RunExecutionOptions {
        objective: Some("Kick off".to_string()),
        ..Default::default()
    };

    // Drive the run in the background; we'll assert the request shape then cancel.
    let fut = tokio::spawn(async move {
        let _ = orchestrator.execute_new_run(params, options).await;
    });

    // Wait until the Director request is captured.
    wait_for_requests(&mock_director, 1, Duration::from_secs(2)).await;
    let req = mock_director.single_request();
    let body = req.body_json();

    // Assert that a JSON schema was sent under text.format.
    let text = &body["text"]; // Optional; present when using schemas
    assert!(text.is_object(), "missing text controls in request body");
    let fmt = &text["format"];
    assert!(fmt.is_object(), "missing text.format in request body");
    assert_eq!(fmt["type"], "json_schema");
    let schema = &fmt["schema"];
    assert!(schema.is_object(), "missing text.format.schema");
    assert_eq!(schema["type"], "object");
    // Ensure the directive property exists and is a string.
    assert_eq!(schema["properties"]["directive"]["type"], "string");
    // Enforce strictness: required must include all properties.
    let required = schema["required"]
        .as_array()
        .expect("required must be array");
    let props = schema["properties"]
        .as_object()
        .expect("properties must be object");
    for key in props.keys() {
        assert!(
            required.iter().any(|v| v == key),
            "missing {key} in required"
        );
    }
    // Ensure the objective text appears in the serialized request body
    let raw = serde_json::to_string(&body).expect("serialize body");
    assert!(
        raw.contains("Kick off"),
        "objective missing from director request body"
    );

    // Stop the background task to end the test.
    fut.abort();
    Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn final_delivery_request_includes_output_schema() -> anyhow::Result<()> {
    skip_if_no_network!(Ok(()));

    let server = responses::start_mock_server().await;

    // 1) Solver: emit empty message so orchestrator asks for final_delivery via schema.
    let body_solver = responses::sse(vec![
        responses::ev_response_created("solver-resp-1"),
        // No signal -> orchestrator will prompt with final_output schema.
        responses::ev_completed("solver-resp-1"),
    ]);
    let _mock_solver = responses::mount_sse_once(&server, body_solver).await;

    // 2) Capture the schema-bearing request to Solver.
    let body_solver_prompt = responses::sse(vec![
        responses::ev_response_created("solver-resp-2"),
        responses::ev_assistant_message(
            "solver-msg-2",
            r#"{"type":"final_delivery","deliverable_path":"deliverable/summary.txt","summary":null}"#,
        ),
        responses::ev_completed("solver-resp-2"),
    ]);
    let mock_solver_prompt = responses::mount_sse_once(&server, body_solver_prompt).await;

    // 3) Keep any follow-up quiet.
    let body_solver_done = responses::sse(vec![
        responses::ev_response_created("solver-resp-3"),
        responses::ev_completed("solver-resp-3"),
    ]);
    let _mock_solver_done = responses::mount_sse_once(&server, body_solver_done).await;

    let runs_root = TempDir::new()?;
    let orchestrator =
        InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
    let run_id = "run-final-schema".to_string();

    let solver_config = build_config(&server).await?;
    let director_config = build_config(&server).await?;

    let params = RunParams {
        run_id: run_id.clone(),
        run_root: Some(runs_root.path().join("runs").join(&run_id)),
        solver: RoleConfig::new("solver", solver_config),
        director: RoleConfig::new("director", director_config),
        verifiers: Vec::new(),
    };

    let options = RunExecutionOptions {
        objective: Some("Kick off".to_string()),
        ..Default::default()
    };

    let fut = tokio::spawn(async move {
        let _ = orchestrator.execute_new_run(params, options).await;
    });

    wait_for_requests(&mock_solver_prompt, 1, Duration::from_secs(2)).await;
    let req = mock_solver_prompt.single_request();
    let body = req.body_json();
    let text = &body["text"];
    assert!(text.is_object(), "missing text controls in request body");
    let fmt = &text["format"];
    assert!(fmt.is_object(), "missing text.format in request body");
    assert_eq!(fmt["type"], "json_schema");
    let schema = &fmt["schema"];
    assert!(schema.is_object(), "missing text.format.schema");
    let required = schema["required"]
        .as_array()
        .expect("required must be array");
    let props = schema["properties"]
        .as_object()
        .expect("properties must be object");
    for key in props.keys() {
        assert!(
            required.iter().any(|v| v == key),
            "missing {key} in required"
        );
    }

    fut.abort();
    Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn verifier_request_includes_output_schema() -> anyhow::Result<()> {
    skip_if_no_network!(Ok(()));

    let server = responses::start_mock_server().await;

    // 1) Solver: issue a final_delivery which triggers verifier requests.
    let body_solver = responses::sse(vec![
        responses::ev_response_created("solver-resp-1"),
        responses::ev_assistant_message(
            "solver-msg-1",
            r#"{"type":"final_delivery","deliverable_path":"deliverable/summary.txt","summary":null}"#,
        ),
        responses::ev_completed("solver-resp-1"),
    ]);
    let _mock_solver = responses::mount_sse_once(&server, body_solver).await;

    // 2) Verifier: reply with a verdict JSON.
    let body_verifier = responses::sse(vec![
        responses::ev_response_created("verifier-resp-1"),
        responses::ev_assistant_message(
            "verifier-msg-1",
            r#"{"verdict":"pass","reasons":[],"suggestions":[]}"#,
        ),
        responses::ev_completed("verifier-resp-1"),
    ]);
    let mock_verifier = responses::mount_sse_once(&server, body_verifier).await;

    // 3) After posting the summary back to Solver, let the request complete.
    let body_solver_after = responses::sse(vec![
        responses::ev_response_created("solver-resp-2"),
        responses::ev_completed("solver-resp-2"),
    ]);
    let _mock_solver_after = responses::mount_sse_once(&server, body_solver_after).await;

    let runs_root = TempDir::new()?;
    let orchestrator =
        InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
    let run_id = "run-verifier-schema".to_string();

    let solver_config = build_config(&server).await?;
    let director_config = build_config(&server).await?;
    let verifier_config = build_config(&server).await?;

    let params = RunParams {
        run_id: run_id.clone(),
        run_root: Some(runs_root.path().join("runs").join(&run_id)),
        solver: RoleConfig::new("solver", solver_config),
        director: RoleConfig::new("director", director_config),
        verifiers: vec![RoleConfig::new("verifier", verifier_config)],
    };

    let options = RunExecutionOptions {
        objective: Some("Kick off".to_string()),
        ..Default::default()
    };

    let fut = tokio::spawn(async move {
        let _ = orchestrator.execute_new_run(params, options).await;
    });

    // Wait until the Verifier request is captured.
    wait_for_requests(&mock_verifier, 1, Duration::from_secs(2)).await;
    let req = mock_verifier.single_request();
    let body = req.body_json();

    // Assert that a JSON schema was sent under text.format.
    let text = &body["text"]; // Optional; present when using schemas
    assert!(text.is_object(), "missing text controls in request body");
    let fmt = &text["format"];
    assert!(fmt.is_object(), "missing text.format in request body");
    assert_eq!(fmt["type"], "json_schema");
    let schema = &fmt["schema"];
    assert!(schema.is_object(), "missing text.format.schema");
    assert_eq!(schema["type"], "object");
    // Ensure the verdict property exists and is an enum of pass/fail.
    assert!(schema["properties"]["verdict"].is_object());
    // Enforce strictness: required must include all properties.
    let required = schema["required"]
        .as_array()
        .expect("required must be array");
    let props = schema["properties"]
        .as_object()
        .expect("properties must be object");
    for key in props.keys() {
        assert!(
            required.iter().any(|v| v == key),
            "missing {key} in required"
        );
    }

    fut.abort();
    Ok(())
}

async fn build_config(server: &MockServer) -> anyhow::Result<Config> {
    let home = TempDir::new()?;
    let cwd = TempDir::new()?;
    let mut config = load_default_config_for_test(&home);
    config.cwd = cwd.path().to_path_buf();
    let mut provider = built_in_model_providers()["openai"].clone();
    provider.base_url = Some(format!("{}/v1", server.uri()));
    config.model_provider = provider;
    Ok(config)
}

async fn wait_for_requests(mock: &responses::ResponseMock, min: usize, timeout: Duration) {
    use tokio::time::Instant;
    use tokio::time::sleep;
    let start = Instant::now();
    loop {
        if mock.requests().len() >= min {
            return;
        }
        if start.elapsed() > timeout {
            return;
        }
        sleep(Duration::from_millis(25)).await;
    }
}
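`wait_for_requests` is a poll-until-deadline loop: check a condition, bail out once the timeout elapses, otherwise sleep briefly and retry. The same shape in blocking std form, generic over any predicate (`wait_until` is an illustrative helper, not part of the crate):

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

// Poll `cond` every `interval` until it returns true or `timeout` elapses.
// Returns whether the condition was observed before the deadline.
fn wait_until(mut cond: impl FnMut() -> bool, timeout: Duration, interval: Duration) -> bool {
    let start = Instant::now();
    loop {
        if cond() {
            return true;
        }
        if start.elapsed() > timeout {
            return false;
        }
        sleep(interval);
    }
}

fn main() {
    let mut calls = 0;
    // The condition becomes true on the third poll.
    let ok = wait_until(
        || {
            calls += 1;
            calls >= 3
        },
        Duration::from_secs(1),
        Duration::from_millis(5),
    );
    assert!(ok);
}
```

Like the async original, this returns silently on timeout and leaves the caller's assertions to report the failure.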
98  codex-rs/codex-infty/tests/timeouts.rs  Normal file
@@ -0,0 +1,98 @@
#![cfg(not(target_os = "windows"))]

use std::time::Duration;

use codex_core::CodexAuth;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_infty::InftyOrchestrator;
use codex_infty::RoleConfig;
use codex_infty::RunExecutionOptions;
use codex_infty::RunParams;
use core_test_support::load_default_config_for_test;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use tempfile::TempDir;
use wiremock::MockServer;

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn direction_request_times_out_when_director_is_silent() -> anyhow::Result<()> {
    skip_if_no_network!(Ok(()));

    let server = responses::start_mock_server().await;

    // Solver emits a direction_request.
    let body_solver = responses::sse(vec![
        responses::ev_response_created("solver-resp-1"),
        responses::ev_assistant_message(
            "solver-msg-1",
            r#"{"type":"direction_request","prompt":"Need directive","claim_path":null,"notes":null,"deliverable_path":null,"summary":null}"#,
        ),
        responses::ev_completed("solver-resp-1"),
    ]);
    let _mock_solver = responses::mount_sse_once(&server, body_solver).await;

    // Director remains silent (no assistant message); the model completes immediately.
    let body_director_silent = responses::sse(vec![
        responses::ev_response_created("director-resp-1"),
        // intentionally no message
        responses::ev_completed("director-resp-1"),
    ]);
    let _mock_director = responses::mount_sse_once(&server, body_director_silent).await;

    // After attempting to relay a directive back to the solver, orchestrator won't proceed
    // as we will time out waiting for the director; however, the solver will still receive
    // a follow-up post later in the flow, so we pre-mount an empty completion to satisfy it
    // if the code ever reaches that point in future changes.
    let body_solver_after = responses::sse(vec![
        responses::ev_response_created("solver-resp-2"),
        responses::ev_completed("solver-resp-2"),
    ]);
    let _mock_solver_after = responses::mount_sse_once(&server, body_solver_after).await;

    let runs_root = TempDir::new()?;
    let orchestrator =
        InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
    let run_id = "run-director-timeout".to_string();

    let solver_config = build_config(&server).await?;
    let director_config = build_config(&server).await?;

    let params = RunParams {
        run_id: run_id.clone(),
        run_root: Some(runs_root.path().join("runs").join(&run_id)),
        solver: RoleConfig::new("solver", solver_config),
        director: RoleConfig::new("director", director_config),
        verifiers: Vec::new(),
    };

    let options = RunExecutionOptions {
        objective: Some("Kick off".to_string()),
        director_timeout: Duration::from_millis(50),
        ..Default::default()
    };

    let err = orchestrator
        .execute_new_run(params, options)
        .await
        .err()
        .expect("expected timeout error");
    let msg = format!("{err:#}");
    assert!(
        msg.contains("timed out waiting") || msg.contains("AwaitTimeout"),
        "unexpected error: {msg}"
    );

    Ok(())
}

async fn build_config(server: &MockServer) -> anyhow::Result<Config> {
    let home = TempDir::new()?;
    let cwd = TempDir::new()?;
    let mut config = load_default_config_for_test(&home);
    config.cwd = cwd.path().to_path_buf();
    let mut provider = built_in_model_providers()["openai"].clone();
    provider.base_url = Some(format!("{}/v1", server.uri()));
    config.model_provider = provider;
    Ok(config)
}
codex-rs/codex-infty/tests/verifier_replacement.rs (new file, 157 lines)
@@ -0,0 +1,157 @@
|
||||
#![cfg(not(target_os = "windows"))]
|
||||
|
||||
use std::time::Duration;
|
||||
|
||||
use codex_core::CodexAuth;
|
||||
use codex_core::built_in_model_providers;
|
||||
use codex_core::config::Config;
|
||||
use codex_infty::InftyOrchestrator;
|
||||
use codex_infty::RoleConfig;
|
||||
use codex_infty::RunExecutionOptions;
|
||||
use codex_infty::RunParams;
|
||||
use core_test_support::load_default_config_for_test;
|
||||
use core_test_support::responses;
|
||||
use core_test_support::skip_if_no_network;
|
||||
use tempfile::TempDir;
|
||||
use wiremock::MockServer;
|
||||
|
||||
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
|
||||
async fn replaces_passing_verifiers_and_keeps_failing() -> anyhow::Result<()> {
|
||||
skip_if_no_network!(Ok(()));
|
||||
|
||||
let server = responses::start_mock_server().await;
|
||||
|
||||
// Round 1: alpha passes, beta fails
|
||||
let body_verifier_alpha_r1 = responses::sse(vec![
|
||||
responses::ev_response_created("verifier-alpha-r1"),
|
||||
responses::ev_assistant_message(
|
||||
"verifier-alpha-msg-r1",
|
||||
r#"{"verdict":"pass","reasons":[],"suggestions":[]}"#,
|
||||
),
|
||||
responses::ev_completed("verifier-alpha-r1"),
|
||||
]);
|
||||
let body_verifier_beta_r1 = responses::sse(vec![
|
||||
responses::ev_response_created("verifier-beta-r1"),
|
||||
responses::ev_assistant_message(
|
||||
"verifier-beta-msg-r1",
|
||||
r#"{"verdict":"fail","reasons":["missing"],"suggestions":[]}"#,
|
||||
),
|
||||
responses::ev_completed("verifier-beta-r1"),
|
||||
]);
|
||||
|
||||
// Round 2: both pass
|
||||
let body_verifier_alpha_r2 = responses::sse(vec![
|
||||
responses::ev_response_created("verifier-alpha-r2"),
|
||||
responses::ev_assistant_message(
|
||||
"verifier-alpha-msg-r2",
|
||||
r#"{"verdict":"pass","reasons":[],"suggestions":[]}"#,
|
||||
),
|
||||
responses::ev_completed("verifier-alpha-r2"),
|
||||
]);
|
||||
let body_verifier_beta_r2 = responses::sse(vec![
|
||||
responses::ev_response_created("verifier-beta-r2"),
|
||||
responses::ev_assistant_message(
|
||||
"verifier-beta-msg-r2",
|
||||
r#"{"verdict":"pass","reasons":[],"suggestions":[]}"#,
|
||||
),
|
||||
responses::ev_completed("verifier-beta-r2"),
|
||||
]);
|
||||
|
||||
// Mount verifier SSE bodies in the exact order collect_verification_summary posts to verifiers.
|
||||
// The implementation posts sequentially in the order of sessions.verifiers.
|
||||
let _m1 = responses::mount_sse_once(&server, body_verifier_alpha_r1).await;
|
||||
let _m2 = responses::mount_sse_once(&server, body_verifier_beta_r1).await;
|
||||
let _m3 = responses::mount_sse_once(&server, body_verifier_alpha_r2).await;
|
||||
let _m4 = responses::mount_sse_once(&server, body_verifier_beta_r2).await;
|
||||
|
||||
let runs_root = TempDir::new()?;
|
||||
let orchestrator =
|
||||
InftyOrchestrator::with_runs_root(CodexAuth::from_api_key("dummy-key"), runs_root.path());
|
||||
let run_id = "run-verifier-replacement".to_string();
|
||||
|
||||
let solver_config = build_config(&server).await?;
|
||||
let director_config = build_config(&server).await?;
|
||||
let verifier_config = build_config(&server).await?;
|
||||
|
||||
// Spawn run with two verifiers in known order.
|
||||
let mut sessions = orchestrator
|
||||
.spawn_run(RunParams {
|
||||
run_id: run_id.clone(),
|
||||
run_root: Some(runs_root.path().join("runs").join(&run_id)),
|
||||
solver: RoleConfig::new("solver", solver_config),
|
||||
director: RoleConfig::new("director", director_config),
|
||||
verifiers: vec![
|
||||
RoleConfig::new("verifier-alpha", verifier_config.clone()),
|
||||
RoleConfig::new("verifier-beta", verifier_config),
|
||||
],
|
||||
})
|
||||
.await?;
|
||||
|
||||
let alpha_initial = sessions
|
||||
.store
|
||||
.role_metadata("verifier-alpha")
|
||||
.and_then(|m| m.rollout_path.clone())
|
||||
.expect("alpha initial rollout path");
|
||||
let beta_initial = sessions
|
||||
.store
|
||||
.role_metadata("verifier-beta")
|
||||
.and_then(|m| m.rollout_path.clone())
|
||||
.expect("beta initial rollout path");
|
||||
|
||||
let options = RunExecutionOptions {
|
||||
verifier_timeout: Duration::from_secs(2),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
// Round 1: alpha pass (should be replaced), beta fail (should be kept)
|
||||
let _summary1 = orchestrator
|
||||
.verify_round_for_test(&mut sessions, "memory/claims/c1.json", &options)
|
||||
.await?;
|
||||
|
||||
let alpha_after_r1 = sessions
|
||||
.store
|
||||
.role_metadata("verifier-alpha")
|
||||
.and_then(|m| m.rollout_path.clone())
|
||||
.expect("alpha rollout after r1");
|
||||
let beta_after_r1 = sessions
|
||||
.store
|
||||
.role_metadata("verifier-beta")
|
||||
.and_then(|m| m.rollout_path.clone())
|
||||
.expect("beta rollout after r1");
|
||||
|
||||
assert_ne!(
|
||||
alpha_initial, alpha_after_r1,
|
||||
"alpha should be replaced after pass"
|
||||
);
|
||||
assert_eq!(
|
||||
beta_initial, beta_after_r1,
|
||||
"beta should be kept after fail"
|
||||
);
|
||||
|
||||
// Round 2: both pass; beta should be replaced now.
|
||||
let _summary2 = orchestrator
|
||||
.verify_round_for_test(&mut sessions, "memory/claims/c2.json", &options)
|
||||
.await?;
|
||||
let beta_after_r2 = sessions
|
||||
.store
|
||||
.role_metadata("verifier-beta")
|
||||
.and_then(|m| m.rollout_path.clone())
|
||||
.expect("beta rollout after r2");
|
||||
assert_ne!(
|
||||
beta_initial, beta_after_r2,
|
||||
"beta should be replaced after pass in r2"
|
||||
);
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn build_config(server: &MockServer) -> anyhow::Result<Config> {
|
||||
let home = TempDir::new()?;
|
||||
let cwd = TempDir::new()?;
|
||||
let mut config = load_default_config_for_test(&home);
|
||||
config.cwd = cwd.path().to_path_buf();
|
||||
let mut provider = built_in_model_providers()["openai"].clone();
|
||||
provider.base_url = Some(format!("{}/v1", server.uri()));
|
||||
config.model_provider = provider;
|
||||
Ok(config)
|
||||
}
|
@@ -10,6 +10,7 @@ workspace = true
clap = { workspace = true, features = ["derive", "wrap_help"], optional = true }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
serde = { workspace = true, optional = true }
toml = { workspace = true, optional = true }

codex-rs/common/src/format_env_display.rs (new file, 66 lines)
@@ -0,0 +1,66 @@
use std::collections::HashMap;

pub fn format_env_display(env: Option<&HashMap<String, String>>, env_vars: &[String]) -> String {
    let mut parts: Vec<String> = Vec::new();

    if let Some(map) = env {
        let mut pairs: Vec<_> = map.iter().collect();
        pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
        parts.extend(
            pairs
                .into_iter()
                .map(|(key, value)| format!("{key}={value}")),
        );
    }

    if !env_vars.is_empty() {
        parts.extend(env_vars.iter().map(|var| format!("{var}=${var}")));
    }

    if parts.is_empty() {
        "-".to_string()
    } else {
        parts.join(", ")
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn returns_dash_when_empty() {
        assert_eq!(format_env_display(None, &[]), "-");

        let empty_map = HashMap::new();
        assert_eq!(format_env_display(Some(&empty_map), &[]), "-");
    }

    #[test]
    fn formats_sorted_env_pairs() {
        let mut env = HashMap::new();
        env.insert("B".to_string(), "two".to_string());
        env.insert("A".to_string(), "one".to_string());

        assert_eq!(format_env_display(Some(&env), &[]), "A=one, B=two");
    }

    #[test]
    fn formats_env_vars_with_dollar_prefix() {
        let vars = vec!["TOKEN".to_string(), "PATH".to_string()];

        assert_eq!(format_env_display(None, &vars), "TOKEN=$TOKEN, PATH=$PATH");
    }

    #[test]
    fn combines_env_pairs_and_vars() {
        let mut env = HashMap::new();
        env.insert("HOME".to_string(), "/tmp".to_string());
        let vars = vec!["TOKEN".to_string()];

        assert_eq!(
            format_env_display(Some(&env), &vars),
            "HOME=/tmp, TOKEN=$TOKEN"
        );
    }
}

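Taken on its own, this helper is easy to exercise end to end. The sketch below is a self-contained copy of the diffed function plus a hypothetical driver (the driver is illustrative only, not part of the change): explicit pairs are sorted and joined, while pass-through variable names render as NAME=$NAME.

```rust
use std::collections::HashMap;

// Self-contained copy of the format_env_display helper shown in the diff above.
fn format_env_display(env: Option<&HashMap<String, String>>, env_vars: &[String]) -> String {
    let mut parts: Vec<String> = Vec::new();
    if let Some(map) = env {
        // Sort explicit pairs so the display is deterministic.
        let mut pairs: Vec<_> = map.iter().collect();
        pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
        parts.extend(pairs.into_iter().map(|(key, value)| format!("{key}={value}")));
    }
    // Pass-through vars render as NAME=$NAME to signal inheritance from the caller.
    parts.extend(env_vars.iter().map(|var| format!("{var}=${var}")));
    if parts.is_empty() {
        "-".to_string()
    } else {
        parts.join(", ")
    }
}

fn main() {
    let mut env = HashMap::new();
    env.insert("HOME".to_string(), "/tmp".to_string());
    // HOME is an explicit pair; TOKEN is forwarded from the caller's environment.
    println!("{}", format_env_display(Some(&env), &["TOKEN".to_string()]));
    // prints: HOME=/tmp, TOKEN=$TOKEN
}
```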
@@ -13,6 +13,9 @@ mod sandbox_mode_cli_arg;
#[cfg(feature = "cli")]
pub use sandbox_mode_cli_arg::SandboxModeCliArg;

#[cfg(feature = "cli")]
pub mod format_env_display;

#[cfg(any(feature = "cli", test))]
mod config_override;

@@ -1,5 +1,5 @@
use codex_app_server_protocol::AuthMode;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::AuthMode;

/// A simple preset pairing a model slug with a reasoning effort.
#[derive(Debug, Clone, Copy)]
@@ -20,49 +20,49 @@ const PRESETS: &[ModelPreset] = &[
    ModelPreset {
        id: "gpt-5-codex-low",
        label: "gpt-5-codex low",
        description: "",
        description: "Fastest responses with limited reasoning",
        model: "gpt-5-codex",
        effort: Some(ReasoningEffort::Low),
    },
    ModelPreset {
        id: "gpt-5-codex-medium",
        label: "gpt-5-codex medium",
        description: "",
        description: "Dynamically adjusts reasoning based on the task",
        model: "gpt-5-codex",
        effort: Some(ReasoningEffort::Medium),
    },
    ModelPreset {
        id: "gpt-5-codex-high",
        label: "gpt-5-codex high",
        description: "",
        description: "Maximizes reasoning depth for complex or ambiguous problems",
        model: "gpt-5-codex",
        effort: Some(ReasoningEffort::High),
    },
    ModelPreset {
        id: "gpt-5-minimal",
        label: "gpt-5 minimal",
        description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
        description: "Fastest responses with little reasoning",
        model: "gpt-5",
        effort: Some(ReasoningEffort::Minimal),
    },
    ModelPreset {
        id: "gpt-5-low",
        label: "gpt-5 low",
        description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
        description: "Balances speed with some reasoning; useful for straightforward queries and short explanations",
        model: "gpt-5",
        effort: Some(ReasoningEffort::Low),
    },
    ModelPreset {
        id: "gpt-5-medium",
        label: "gpt-5 medium",
        description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
        description: "Provides a solid balance of reasoning depth and latency for general-purpose tasks",
        model: "gpt-5",
        effort: Some(ReasoningEffort::Medium),
    },
    ModelPreset {
        id: "gpt-5-high",
        label: "gpt-5 high",
        description: "— maximizes reasoning depth for complex or ambiguous problems",
        description: "Maximizes reasoning depth for complex or ambiguous problems",
        model: "gpt-5",
        effort: Some(ReasoningEffort::High),
    },

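The preset entries above all share one shape. A minimal standalone model of it, using local stand-ins for ModelPreset and ReasoningEffort rather than the crate's actual types, can be sketched as:

```rust
// Local stand-ins mirroring the field names in the diff above; illustrative only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ReasoningEffort {
    Minimal,
    Low,
    Medium,
    High,
}

#[derive(Debug, Clone, Copy)]
struct ModelPreset {
    id: &'static str,
    label: &'static str,
    description: &'static str,
    model: &'static str,
    effort: Option<ReasoningEffort>,
}

// One representative entry copied from the preset table in the diff.
const PRESETS: &[ModelPreset] = &[ModelPreset {
    id: "gpt-5-medium",
    label: "gpt-5 medium",
    description: "Provides a solid balance of reasoning depth and latency for general-purpose tasks",
    model: "gpt-5",
    effort: Some(ReasoningEffort::Medium),
}];

fn main() {
    // Look up a preset by id, as a model selector might.
    let preset = PRESETS.iter().find(|p| p.id == "gpt-5-medium").unwrap();
    println!("{} -> {:?}", preset.model, preset.effort);
    // prints: gpt-5 -> Some(Medium)
}
```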
@@ -19,13 +19,16 @@ async-trait = { workspace = true }
base64 = { workspace = true }
bytes = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
codex-app-server-protocol = { workspace = true }
codex-apply-patch = { workspace = true }
codex-file-search = { workspace = true }
codex-mcp-client = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-protocol = { workspace = true }
codex-otel = { workspace = true, features = ["otel"] }
codex-protocol = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-utils-string = { workspace = true }
dirs = { workspace = true }
dunce = { workspace = true }
env-flags = { workspace = true }
eventsource-stream = { workspace = true }
futures = { workspace = true }
@@ -58,7 +61,8 @@ tokio = { workspace = true, features = [
    "rt-multi-thread",
    "signal",
] }
tokio-util = { workspace = true }
tokio-util = { workspace = true, features = ["rt"] }
tokio-stream = { workspace = true, features = ["sync"] }
toml = { workspace = true }
toml_edit = { workspace = true }
tracing = { workspace = true, features = ["log"] }
@@ -73,6 +77,9 @@ wildmatch = { workspace = true }
landlock = { workspace = true }
seccompiler = { workspace = true }

[target.'cfg(target_os = "macos")'.dependencies]
core-foundation = "0.9"

# Build OpenSSL from source for musl builds.
[target.x86_64-unknown-linux-musl.dependencies]
openssl-sys = { workspace = true, features = ["vendored"] }
@@ -83,16 +90,18 @@ openssl-sys = { workspace = true, features = ["vendored"] }

[dev-dependencies]
assert_cmd = { workspace = true }
assert_matches = { workspace = true }
core_test_support = { workspace = true }
escargot = { workspace = true }
maplit = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }
serial_test = { workspace = true }
tempfile = { workspace = true }
tokio-test = { workspace = true }
tracing-test = { workspace = true, features = ["no-env-filter"] }
walkdir = { workspace = true }
wiremock = { workspace = true }
tracing-test = { workspace = true, features = ["no-env-filter"] }

[package.metadata.cargo-shear]
ignored = ["openssl-sys"]

@@ -12,7 +12,7 @@ Expects `/usr/bin/sandbox-exec` to be present.

### Linux

Expects the binary containing `codex-core` to run the equivalent of `codex debug landlock` when `arg0` is `codex-linux-sandbox`. See the `codex-arg0` crate for details.
Expects the binary containing `codex-core` to run the equivalent of `codex sandbox linux` (legacy alias: `codex debug landlock`) when `arg0` is `codex-linux-sandbox`. See the `codex-arg0` crate for details.

### All Platforms

@@ -10,12 +10,14 @@ You are Codex, based on GPT-5. You are running as a coding agent in the Codex CL

- Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
- Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
- Try to use apply_patch for single file edits, but it is fine to explore other options to make the edit if it does not work well. Do not use apply_patch for changes that are auto-generated (i.e. generating package.json or running a lint or format command like gofmt) or when scripting is more efficient (such as search and replacing a string across a codebase).
- You may be in a dirty git worktree.
    * NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
    * If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
    * If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
    * If the changes are in unrelated files, just ignore them and don't revert them.
- While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
- **NEVER** use destructive commands like `git reset --hard` or `git checkout --` unless specifically requested or approved by the user.

## Plan tool

@@ -89,7 +91,7 @@ You are producing plain text that will later be styled by the CLI. Follow these

- Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
- Bullets: use - ; merge related points; keep to one line when possible; 4–6 per list ordered by importance; keep phrasing consistent.
- Monospace: backticks for commands/paths/env vars/code ids and inline examples; use for literal keyword bullets; never combine with **.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; add a language hint whenever obvious.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; include an info string as often as possible.
- Structure: group related bullets; order sections general → specific → supporting; for subsections, start with a bolded keyword bullet, then items; match complexity to the task.
- Tone: collaborative, concise, factual; present tense, active voice; self‑contained; no "above/below"; parallel wording.
- Don'ts: no nested bullets/hierarchies; no ANSI codes; don't cram unrelated keywords; keep keyword lists short—wrap/reformat if long; avoid naming formatting styles in answers.

Some files were not shown because too many files have changed in this diff.