Mirror of https://github.com/openai/codex.git, synced 2026-02-02 23:13:37 +00:00

Compare commits: fix-timeou...shell-proc
30 commits in this comparison (SHA1):

cb6f67d284
183fc8e01a
9fba811764
db408b9e62
2eecc1a2e4
c76528ca1f
bb47f2226f
c6ab92bc50
4c1a6f0ee0
361d43b969
2e81f1900d
2030b28083
e84e39940b
e8905f6d20
316352be94
f8b30af6dc
039a4b070e
c368c6aeea
0c647bc566
e30f65118d
1bd2d7a659
65d53fd4b1
b5349202e9
1b8cc8b625
8501b0b768
fe7eb18104
8c75ed39d5
fdb9fa301e
871d442b8e
9238c58460
.github/pull_request_template.md (vendored, 2 changes)
@@ -4,3 +4,5 @@ Before opening this Pull Request, please read the dedicated "Contributing" markd
 https://github.com/openai/codex/blob/main/docs/contributing.md
 
 If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes.
+
+Include a link to a bug report or enhancement request.
.github/workflows/issue-labeler.yml (vendored, 37 changes)
@@ -26,21 +26,36 @@ jobs:
   prompt: |
     You are an assistant that reviews GitHub issues for the repository.
 
-    Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
+    Your job is to choose the most appropriate labels for the issue described later in this prompt.
     Follow these rules:
     - Only pick labels out of the list below.
     - Prefer a small set of precise labels over many broad ones.
 
     Labels to apply:
+    - Add one (and only one) of the following three labels to distinguish the type of issue. Default to "bug" if unsure.
     1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
     2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
-    3. extension — VS Code (or other IDE) extension-specific issues.
-    4. windows-os — Bugs or friction specific to Windows environments (always when PowerShell is mentioned, path handling, copy/paste, OS-specific auth or tooling failures).
-    5. mcp — Topics involving Model Context Protocol servers/clients.
-    6. codex-web — Issues targeting the Codex web UI/Cloud experience.
-    8. azure — Problems or requests tied to Azure OpenAI deployments.
-    9. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
-    10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
+    3. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
+
+    - If applicable, add one of the following labels to specify which sub-product or product surface the issue relates to.
+    1. CLI — the Codex command line interface.
+    2. extension — VS Code (or other IDE) extension-specific issues.
+    3. codex-web — Issues targeting the Codex web UI/Cloud experience.
+    4. github-action — Issues with the Codex GitHub action.
+    5. iOS — Issues with the Codex iOS app.
+
+    - Additionally add zero or more of the following labels that are relevant to the issue content. Prefer a small set of precise labels over many broad ones.
+    1. windows-os — Bugs or friction specific to Windows environments (always when PowerShell is mentioned, path handling, copy/paste, OS-specific auth or tooling failures).
+    2. mcp — Topics involving Model Context Protocol servers/clients.
+    3. mcp-server — Problems related to the codex mcp-server command, where codex runs as an MCP server.
+    4. azure — Problems or requests tied to Azure OpenAI deployments.
+    5. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
+    6. code-review — Issues related to the code review feature or functionality.
+    7. auth - Problems related to authentication, login, or access tokens.
+    8. codex-exec - Problems related to the "codex exec" command or functionality.
+    9. context-management - Problems related to compaction, context windows, or available context reporting.
+    10. custom-model - Problems that involve using custom model providers, local models, or OSS models.
+    11. rate-limits - Problems related to token limits, rate limits, or token usage reporting.
+    12. sandbox - Issues related to local sandbox environments or tool call approvals to override sandbox restrictions.
+    13. tool-calls - Problems related to specific tool call invocations including unexpected errors, failures, or hangs.
+    14. TUI - Problems with the terminal user interface (TUI) including keyboard shortcuts, copy & pasting, menus, or screen update issues.
 
     Issue number: ${{ github.event.issue.number }}
@@ -84,6 +84,7 @@ If you don’t have the tool:
 - Use `ResponseMock::single_request()` when a test should only issue one POST, or `ResponseMock::requests()` to inspect every captured `ResponsesRequest`.
 - `ResponsesRequest` exposes helpers (`body_json`, `input`, `function_call_output`, `custom_tool_call_output`, `call_output`, `header`, `path`, `query_param`) so assertions can target structured payloads instead of manual JSON digging.
 - Build SSE payloads with the provided `ev_*` constructors and the `sse(...)` helper.
+- Prefer `wait_for_event` over `wait_for_event_with_timeout`.
 
 - Typical pattern:
codex-rs/Cargo.lock (generated, 14 changes)
@@ -1452,8 +1452,10 @@ dependencies = [
  "codex-login",
  "codex-ollama",
  "codex-protocol",
+ "codex-windows-sandbox",
  "color-eyre",
  "crossterm",
+ "derive_more 2.0.1",
  "diffy",
  "dirs",
  "dunce",

@@ -1657,6 +1659,15 @@ dependencies = [
  "unicode-segmentation",
 ]
 
+[[package]]
+name = "convert_case"
+version = "0.7.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bb402b8d4c85569410425650ce3eddc7d698ed96d39a73f941b08fb63082f1e7"
+dependencies = [
+ "unicode-segmentation",
+]
+
 [[package]]
 name = "core-foundation"
 version = "0.9.4"

@@ -2002,7 +2013,7 @@ version = "1.0.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "cb7330aeadfbe296029522e6c40f315320aba36fc43a5b3632f3795348f3bd22"
 dependencies = [
- "convert_case",
+ "convert_case 0.6.0",
 "proc-macro2",
 "quote",
 "syn 2.0.104",

@@ -2015,6 +2026,7 @@ version = "2.0.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "bda628edc44c4bb645fbe0f758797143e4e07926f7ebf4e9bdfbd3d2ce621df3"
 dependencies = [
+ "convert_case 0.7.1",
 "proc-macro2",
 "quote",
 "syn 2.0.104",
@@ -87,7 +87,7 @@ codex-utils-pty = { path = "utils/pty" }
 codex-utils-readiness = { path = "utils/readiness" }
 codex-utils-string = { path = "utils/string" }
 codex-utils-tokenizer = { path = "utils/tokenizer" }
-codex-windows-sandbox = { path = "windows-sandbox" }
+codex-windows-sandbox = { path = "windows-sandbox-rs" }
 core_test_support = { path = "core/tests/common" }
 mcp-types = { path = "mcp-types" }
 mcp_test_support = { path = "mcp-server/tests/common" }
@@ -191,7 +191,7 @@ client_request_definitions! {
     #[serde(rename = "account/read")]
     #[ts(rename = "account/read")]
     GetAccount {
-        params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
+        params: v2::GetAccountParams,
         response: v2::GetAccountResponse,
     },

@@ -263,6 +263,7 @@ client_request_definitions! {
         params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
         response: v1::LogoutChatGptResponse,
     },
+    /// DEPRECATED in favor of GetAccount
     GetAuthStatus {
         params: v1::GetAuthStatusParams,
         response: v1::GetAuthStatusResponse,

@@ -758,12 +759,17 @@ mod tests {
     fn serialize_get_account() -> Result<()> {
         let request = ClientRequest::GetAccount {
             request_id: RequestId::Integer(5),
-            params: None,
+            params: v2::GetAccountParams {
+                refresh_token: false,
+            },
         };
         assert_eq!(
             json!({
                 "method": "account/read",
                 "id": 5,
+                "params": {
+                    "refreshToken": false
+                }
             }),
             serde_json::to_value(&request)?,
         );

@@ -772,19 +778,16 @@ mod tests {
 
     #[test]
     fn account_serializes_fields_in_camel_case() -> Result<()> {
-        let api_key = v2::Account::ApiKey {
-            api_key: "secret".to_string(),
-        };
+        let api_key = v2::Account::ApiKey {};
         assert_eq!(
             json!({
                 "type": "apiKey",
-                "apiKey": "secret",
             }),
             serde_json::to_value(&api_key)?,
         );
 
         let chatgpt = v2::Account::Chatgpt {
-            email: Some("user@example.com".to_string()),
+            email: "user@example.com".to_string(),
             plan_type: PlanType::Plus,
         };
         assert_eq!(
@@ -11,6 +11,7 @@ use codex_protocol::models::ResponseItem;
 use codex_protocol::protocol::AskForApproval;
 use codex_protocol::protocol::EventMsg;
 use codex_protocol::protocol::SandboxPolicy;
+use codex_protocol::protocol::SessionSource;
 use codex_protocol::protocol::TurnAbortReason;
 use schemars::JsonSchema;
 use serde::Deserialize;

@@ -113,6 +114,18 @@ pub struct ConversationSummary {
     pub preview: String,
     pub timestamp: Option<String>,
     pub model_provider: String,
+    pub cwd: PathBuf,
+    pub cli_version: String,
+    pub source: SessionSource,
+    pub git_info: Option<ConversationGitInfo>,
 }
 
+#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
+#[serde(rename_all = "snake_case")]
+pub struct ConversationGitInfo {
+    pub sha: Option<String>,
+    pub branch: Option<String>,
+    pub origin_url: Option<String>,
+}
+
 #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -123,14 +123,11 @@ impl From<codex_protocol::protocol::SandboxPolicy> for SandboxPolicy {
 pub enum Account {
     #[serde(rename = "apiKey", rename_all = "camelCase")]
     #[ts(rename = "apiKey", rename_all = "camelCase")]
-    ApiKey { api_key: String },
+    ApiKey {},
 
     #[serde(rename = "chatgpt", rename_all = "camelCase")]
     #[ts(rename = "chatgpt", rename_all = "camelCase")]
-    Chatgpt {
-        email: Option<String>,
-        plan_type: PlanType,
-    },
+    Chatgpt { email: String, plan_type: PlanType },
 }
 
 #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]

@@ -193,11 +190,20 @@ pub struct GetAccountRateLimitsResponse {
     pub rate_limits: RateLimitSnapshot,
 }
 
+#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export_to = "v2/")]
+pub struct GetAccountParams {
+    #[serde(default)]
+    pub refresh_token: bool,
+}
+
 #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
 #[serde(rename_all = "camelCase")]
 #[ts(export_to = "v2/")]
 pub struct GetAccountResponse {
-    pub account: Account,
+    pub account: Option<Account>,
     pub requires_openai_auth: bool,
 }
 
 #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]

@@ -348,6 +354,11 @@ pub struct ThreadCompactResponse {}
 #[ts(export_to = "v2/")]
 pub struct Thread {
     pub id: String,
+    /// Usually the first user message in the thread, if available.
+    pub preview: String,
+    pub model_provider: String,
+    /// Unix timestamp (in seconds) when the thread was created.
+    pub created_at: i64,
 }
 
 #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1,6 +1,6 @@
 # codex-app-server
 
-`codex app-server` is the harness Codex uses to power rich interfaces such as the [Codex VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.
+`codex app-server` is the interface Codex uses to power rich interfaces such as the [Codex VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.
 
 ## Protocol

@@ -13,3 +13,241 @@ Currently, you can dump a TypeScript version of the schema using `codex generate

```
codex generate-ts --out DIR
```

## Initialization

Clients must send a single `initialize` request before invoking any other method, then acknowledge with an `initialized` notification. The server returns the user agent string it will present to upstream services; subsequent requests issued before initialization receive a `"Not initialized"` error, and repeated `initialize` calls receive an `"Already initialized"` error.

Example:

```json
{ "method": "initialize", "id": 0, "params": {
  "clientInfo": { "name": "codex-vscode", "title": "Codex VS Code Extension", "version": "0.1.0" }
} }
{ "id": 0, "result": { "userAgent": "codex-app-server/0.1.0 codex-vscode/0.1.0" } }
{ "method": "initialized" }
```
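To make the handshake concrete, here is a minimal TypeScript client sketch. It assumes the server speaks newline-delimited JSON-RPC over stdio (one JSON object per line, matching the examples above); the `send` helper and the client name are illustrative, not part of the schema.

```ts
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

// Assumption: `codex app-server` reads/writes one JSON-RPC message per line on stdio.
const server = spawn("codex", ["app-server"]);
const lines = createInterface({ input: server.stdout });

// Illustrative helper: serialize one message per line.
function send(msg: object): void {
  server.stdin.write(JSON.stringify(msg) + "\n");
}

lines.on("line", (line) => {
  const msg = JSON.parse(line);
  // Initialization must complete before any other request is issued.
  if (msg.id === 0 && msg.result) {
    console.log("server user agent:", msg.result.userAgent);
    send({ method: "initialized" });
  }
});

send({
  method: "initialize",
  id: 0,
  params: {
    clientInfo: { name: "my-experimental-ui", title: "My UI", version: "0.0.1" },
  },
});
```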
## Core primitives

We have 3 top level primitives (sketched in TypeScript after this list):
- Thread - a conversation between the Codex agent and a user. Each thread contains multiple turns.
- Turn - one turn of the conversation, typically starting with a user message and finishing with an agent message. Each turn contains multiple items.
- Item - represents user inputs and agent outputs as part of the turn, persisted and used as the context for future conversations.
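The camelCase wire shapes that appear in the JSON examples below could be written in TypeScript roughly as follows; the canonical definitions come from `codex generate-ts`, so treat this as illustrative only.

```ts
// Illustrative only — generate the real types with `codex generate-ts --out DIR`.
interface Thread {
  id: string;
  /** Usually the first user message in the thread, if available. */
  preview: string;
  modelProvider: string;
  /** Unix timestamp (in seconds) when the thread was created. */
  createdAt: number;
}

interface Turn {
  id: string;
  // "inProgress" and "interrupted" appear in this document; other statuses may exist.
  status: string;
  items: unknown[];
  error: string | null;
}
```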
## Thread & turn endpoints

The JSON-RPC API exposes dedicated methods for managing Codex conversations. Threads store long-lived conversation metadata, and turns store the per-message exchange (input → Codex output, including streamed items). Use the thread APIs to create, list, or archive sessions, then drive the conversation with turn APIs and notifications.

### Quick reference
- `thread/start` — create a new thread; emits `thread/started` and auto-subscribes you to turn/item events for that thread.
- `thread/resume` — reopen an existing thread by id so subsequent `turn/start` calls append to it.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders` filtering.
- `thread/archive` — move a thread’s rollout file into the archived directory; returns `{}` on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.

### 1) Start or resume a thread

Start a fresh thread when you need a new Codex conversation. Optional fields mirror CLI defaults: set `model`, `modelProvider`, `cwd`, `approvalPolicy`, `sandbox`, or custom `config` values. Instructions can be set via `baseInstructions` and `developerInstructions`:

```json
{ "method": "thread/start", "id": 10, "params": {
  "model": "gpt-5-codex",
  "cwd": "/Users/me/project",
  "approvalPolicy": "never",
  "sandbox": "workspace-write",
  "baseInstructions": "You're helping with refactors."
} }
{ "id": 10, "result": {
  "thread": {
    "id": "thr_123",
    "preview": "",
    "modelProvider": "openai",
    "createdAt": 1730910000
  }
} }
{ "method": "thread/started", "params": { "thread": { … } } }
```

To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted:

```json
{ "method": "thread/resume", "id": 11, "params": { "threadId": "thr_123" } }
{ "id": 11, "result": { "thread": { "id": "thr_123", … } } }
```

### 2) List threads (pagination & filters)

`thread/list` lets you render a history UI. Pass any combination of:
- `cursor` — opaque string from a prior response; omit for the first page.
- `limit` — server defaults to a reasonable page size if unset.
- `modelProviders` — restrict results to specific providers; unset, null, or an empty array will include all providers.

Example:

```json
{ "method": "thread/list", "id": 20, "params": {
  "cursor": null,
  "limit": 25,
  "modelProviders": ["openai"]
} }
{ "id": 20, "result": {
  "data": [
    { "id": "thr_a", "preview": "Create a TUI", "modelProvider": "openai", "createdAt": 1730831111 },
    { "id": "thr_b", "preview": "Fix tests", "modelProvider": "openai", "createdAt": 1730750000 }
  ],
  "nextCursor": "opaque-token-or-null"
} }
```

When `nextCursor` is `null`, you’ve reached the final page.
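Listing every thread then reduces to chasing `nextCursor` until it comes back `null`. A minimal sketch, reusing the `Thread` shape above and assuming an `rpc` helper that sends a request and resolves with its `result` (the helper is not part of the schema):

```ts
async function listAllThreads(
  rpc: (method: string, params: object) => Promise<any>,
): Promise<Thread[]> {
  const threads: Thread[] = [];
  let cursor: string | null = null;
  do {
    const page = await rpc("thread/list", { cursor, limit: 25 });
    threads.push(...page.data);
    cursor = page.nextCursor; // null on the final page
  } while (cursor !== null);
  return threads;
}
```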
### 3) Archive a thread

Use `thread/archive` to move the persisted rollout (stored as a JSONL file on disk) into the archived sessions directory.

```json
{ "method": "thread/archive", "id": 21, "params": { "threadId": "thr_b" } }
{ "id": 21, "result": {} }
```

An archived thread will not appear in future calls to `thread/list`.

### 4) Start a turn (send user input)

Turns attach user input (text or images) to a thread and trigger Codex generation. The `input` field is a list of discriminated unions:

- `{"type":"text","text":"Explain this diff"}`
- `{"type":"image","url":"https://…png"}`
- `{"type":"localImage","path":"/tmp/screenshot.png"}`

Override knobs apply to the new turn and become the defaults for subsequent turns on the same thread:

```json
{ "method": "turn/start", "id": 30, "params": {
  "threadId": "thr_123",
  "input": [ { "type": "text", "text": "Run tests" } ],
  "cwd": "/Users/me/project",
  "approvalPolicy": "untrusted",
  "sandboxPolicy": "workspace-write",
  "model": "gpt-5-codex",
  "effort": "medium",
  "summary": "focus-on-test-failures"
} }
{ "id": 30, "result": { "turn": {
  "id": "turn_456",
  "status": "inProgress",
  "items": [],
  "error": null
} } }
```
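The `input` union maps naturally onto a TypeScript tagged union, which makes exhaustive handling straightforward; again, this is a sketch of the documented shapes rather than the generated types:

```ts
type InputItem =
  | { type: "text"; text: string }
  | { type: "image"; url: string }
  | { type: "localImage"; path: string };

function describe(item: InputItem): string {
  switch (item.type) {
    case "text":
      return `text: ${item.text}`;
    case "image":
      return `remote image: ${item.url}`;
    case "localImage":
      return `local image at ${item.path}`;
  }
}
```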
### 5) Interrupt an active turn

You can cancel a running turn with `turn/interrupt`.

```json
{ "method": "turn/interrupt", "id": 31, "params": {
  "threadId": "thr_123",
  "turnId": "turn_456"
} }
{ "id": 31, "result": {} }
```

The server requests cancellation of running subprocesses, then emits a `turn/completed` event with `status: "interrupted"`. Rely on the `turn/completed` notification to know when Codex-side cleanup is done.

## Auth endpoints

The v2 JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no `id`). Use these to determine auth state, start or cancel logins, log out, and inspect ChatGPT rate limits.

### Quick reference
- `account/read` — fetch current account info; optionally refresh tokens.
- `account/login/start` — begin login (`apiKey` or `chatgpt`).
- `account/login/completed` (notify) — emitted when a login attempt finishes (success or error).
- `account/login/cancel` — cancel a pending ChatGPT login by `loginId`.
- `account/logout` — sign out; triggers `account/updated`.
- `account/updated` (notify) — emitted whenever the auth mode changes (`authMode`: `apikey`, `chatgpt`, or `null`).
- `account/rateLimits/read` — fetch ChatGPT rate limits; updates arrive via `account/rateLimits/updated` (notify).

### 1) Check auth state

Request:
```json
{ "method": "account/read", "id": 1, "params": { "refreshToken": false } }
```

Response examples:
```json
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": false } } // No OpenAI auth needed (e.g., OSS/local models)
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": true } }  // OpenAI auth required (typical for OpenAI-hosted models)
{ "id": 1, "result": { "account": { "type": "apiKey" }, "requiresOpenaiAuth": true } }
{ "id": 1, "result": { "account": { "type": "chatgpt", "email": "user@example.com", "planType": "pro" }, "requiresOpenaiAuth": true } }
```

Field notes:
- `refreshToken` (bool): set `true` to force a token refresh.
- `requiresOpenaiAuth` reflects the active provider; when `false`, Codex can run without OpenAI credentials.
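In client code, the usual question is "do I need to show a login screen?", which combines both fields. A sketch, reusing the hypothetical `rpc` helper from the pagination example:

```ts
async function needsLogin(
  rpc: (method: string, params: object) => Promise<any>,
): Promise<boolean> {
  const { account, requiresOpenaiAuth } = await rpc("account/read", {
    refreshToken: false,
  });
  // Login is only needed when the active provider requires OpenAI
  // credentials and no account is currently configured.
  return requiresOpenaiAuth && account === null;
}
```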
### 2) Log in with an API key

1. Send:
```json
{ "method": "account/login/start", "id": 2, "params": { "type": "apiKey", "apiKey": "sk-…" } }
```
2. Expect:
```json
{ "id": 2, "result": { "type": "apiKey" } }
```
3. Notifications:
```json
{ "method": "account/login/completed", "params": { "loginId": null, "success": true, "error": null } }
{ "method": "account/updated", "params": { "authMode": "apikey" } }
```

### 3) Log in with ChatGPT (browser flow)

1. Start:
```json
{ "method": "account/login/start", "id": 3, "params": { "type": "chatgpt" } }
{ "id": 3, "result": { "type": "chatgpt", "loginId": "<uuid>", "authUrl": "https://chatgpt.com/…&redirect_uri=http%3A%2F%2Flocalhost%3A<port>%2Fauth%2Fcallback" } }
```
2. Open `authUrl` in a browser; the app-server hosts the local callback.
3. Wait for notifications:
```json
{ "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": true, "error": null } }
{ "method": "account/updated", "params": { "authMode": "chatgpt" } }
```
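Putting the browser flow together: start the login, surface `authUrl` to the user, then wait for the matching `account/login/completed` notification. The `onNotification` subscription helper below is hypothetical; only the method names and payloads come from this document.

```ts
async function loginWithChatGpt(
  rpc: (method: string, params: object) => Promise<any>,
  onNotification: (method: string, handler: (params: any) => void) => void,
): Promise<void> {
  const { loginId, authUrl } = await rpc("account/login/start", { type: "chatgpt" });
  console.log(`Open this URL to finish signing in: ${authUrl}`);

  await new Promise<void>((resolve, reject) => {
    onNotification("account/login/completed", (params) => {
      if (params.loginId !== loginId) return; // some other login attempt
      if (params.success) {
        resolve();
      } else {
        reject(new Error(params.error ?? "login failed"));
      }
    });
  });
}
```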
### 4) Cancel a ChatGPT login

```json
{ "method": "account/login/cancel", "id": 4, "params": { "loginId": "<uuid>" } }
{ "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": false, "error": "…" } }
```

### 5) Logout

```json
{ "method": "account/logout", "id": 5 }
{ "id": 5, "result": {} }
{ "method": "account/updated", "params": { "authMode": null } }
```

### 6) Rate limits (ChatGPT)

```json
{ "method": "account/rateLimits/read", "id": 6 }
{ "id": 6, "result": { "rateLimits": { "primary": { "usedPercent": 25, "windowDurationMins": 15, "resetsAt": 1730947200 }, "secondary": null } } }
{ "method": "account/rateLimits/updated", "params": { "rateLimits": { … } } }
```

Field notes:
- `usedPercent` is current usage within the OpenAI quota window.
- `windowDurationMins` is the quota window length.
- `resetsAt` is a Unix timestamp (seconds) for the next reset.
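Because `resetsAt` is in seconds while JavaScript dates use milliseconds, a display layer typically converts like this (a sketch over the documented fields only):

```ts
interface RateLimitWindow {
  usedPercent: number;        // usage within the current quota window
  windowDurationMins: number; // quota window length
  resetsAt: number;           // Unix timestamp in seconds
}

function formatWindow(w: RateLimitWindow): string {
  const resetDate = new Date(w.resetsAt * 1000); // seconds → milliseconds
  const remaining = 100 - w.usedPercent;
  return `${remaining}% left in a ${w.windowDurationMins}-minute window; resets at ${resetDate.toISOString()}`;
}
```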
### Dev notes

- `codex generate-ts --out <dir>` emits v2 types under `v2/`.
- See [“Authentication and authorization” in the config docs](../../docs/config.md#authentication-and-authorization) for configuration knobs.
@@ -4,6 +4,9 @@ use crate::fuzzy_file_search::run_fuzzy_file_search;
 use crate::models::supported_models;
 use crate::outgoing_message::OutgoingMessageSender;
 use crate::outgoing_message::OutgoingNotification;
+use chrono::DateTime;
+use chrono::Utc;
+use codex_app_server_protocol::Account;
 use codex_app_server_protocol::AccountLoginCompletedNotification;
 use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
 use codex_app_server_protocol::AccountUpdatedNotification;

@@ -20,6 +23,7 @@ use codex_app_server_protocol::CancelLoginAccountParams;
 use codex_app_server_protocol::CancelLoginAccountResponse;
 use codex_app_server_protocol::CancelLoginChatGptResponse;
 use codex_app_server_protocol::ClientRequest;
+use codex_app_server_protocol::ConversationGitInfo;
 use codex_app_server_protocol::ConversationSummary;
 use codex_app_server_protocol::ExecCommandApprovalParams;
 use codex_app_server_protocol::ExecCommandApprovalResponse;

@@ -29,7 +33,9 @@ use codex_app_server_protocol::FeedbackUploadParams;
 use codex_app_server_protocol::FeedbackUploadResponse;
 use codex_app_server_protocol::FuzzyFileSearchParams;
 use codex_app_server_protocol::FuzzyFileSearchResponse;
+use codex_app_server_protocol::GetAccountParams;
 use codex_app_server_protocol::GetAccountRateLimitsResponse;
+use codex_app_server_protocol::GetAccountResponse;
 use codex_app_server_protocol::GetAuthStatusParams;
 use codex_app_server_protocol::GetAuthStatusResponse;
 use codex_app_server_protocol::GetConversationSummaryParams;

@@ -130,8 +136,10 @@ use codex_protocol::ConversationId;
 use codex_protocol::config_types::ForcedLoginMethod;
 use codex_protocol::items::TurnItem;
 use codex_protocol::models::ResponseItem;
+use codex_protocol::protocol::GitInfo;
 use codex_protocol::protocol::RateLimitSnapshot as CoreRateLimitSnapshot;
 use codex_protocol::protocol::RolloutItem;
+use codex_protocol::protocol::SessionMetaLine;
 use codex_protocol::protocol::USER_MESSAGE_BEGIN;
 use codex_protocol::user_input::UserInput as CoreInputItem;
 use codex_utils_json_to_toml::json_to_toml;
@@ -270,12 +278,8 @@ impl CodexMessageProcessor {
             ClientRequest::CancelLoginAccount { request_id, params } => {
                 self.cancel_login_v2(request_id, params).await;
             }
-            ClientRequest::GetAccount {
-                request_id,
-                params: _,
-            } => {
-                self.send_unimplemented_error(request_id, "account/read")
-                    .await;
+            ClientRequest::GetAccount { request_id, params } => {
+                self.get_account(request_id, params).await;
             }
             ClientRequest::ResumeConversation { request_id, params } => {
                 self.handle_resume_conversation(request_id, params).await;

@@ -798,13 +802,17 @@ impl CodexMessageProcessor {
         }
     }
 
+    async fn refresh_token_if_requested(&self, do_refresh: bool) {
+        if do_refresh && let Err(err) = self.auth_manager.refresh_token().await {
+            tracing::warn!("failed to refresh token whilte getting account: {err}");
+        }
+    }
+
     async fn get_auth_status(&self, request_id: RequestId, params: GetAuthStatusParams) {
         let include_token = params.include_token.unwrap_or(false);
         let do_refresh = params.refresh_token.unwrap_or(false);
 
-        if do_refresh && let Err(err) = self.auth_manager.refresh_token().await {
-            tracing::warn!("failed to refresh token while getting auth status: {err}");
-        }
+        self.refresh_token_if_requested(do_refresh).await;
 
         // Determine whether auth is required based on the active model provider.
         // If a custom provider is configured with `requires_openai_auth == false`,

@@ -849,6 +857,56 @@ impl CodexMessageProcessor {
         self.outgoing.send_response(request_id, response).await;
     }
 
+    async fn get_account(&self, request_id: RequestId, params: GetAccountParams) {
+        let do_refresh = params.refresh_token;
+
+        self.refresh_token_if_requested(do_refresh).await;
+
+        // Whether auth is required for the active model provider.
+        let requires_openai_auth = self.config.model_provider.requires_openai_auth;
+
+        if !requires_openai_auth {
+            let response = GetAccountResponse {
+                account: None,
+                requires_openai_auth,
+            };
+            self.outgoing.send_response(request_id, response).await;
+            return;
+        }
+
+        let account = match self.auth_manager.auth() {
+            Some(auth) => Some(match auth.mode {
+                AuthMode::ApiKey => Account::ApiKey {},
+                AuthMode::ChatGPT => {
+                    let email = auth.get_account_email();
+                    let plan_type = auth.account_plan_type();
+
+                    match (email, plan_type) {
+                        (Some(email), Some(plan_type)) => Account::Chatgpt { email, plan_type },
+                        _ => {
+                            let error = JSONRPCErrorError {
+                                code: INVALID_REQUEST_ERROR_CODE,
+                                message:
+                                    "email and plan type are required for chatgpt authentication"
+                                        .to_string(),
+                                data: None,
+                            };
+                            self.outgoing.send_error(request_id, error).await;
+                            return;
+                        }
+                    }
+                }
+            }),
+            None => None,
+        };
+
+        let response = GetAccountResponse {
+            account,
+            requires_openai_auth,
+        };
+        self.outgoing.send_response(request_id, response).await;
+    }
+
     async fn get_user_agent(&self, request_id: RequestId) {
         let user_agent = get_codex_user_agent();
         let response = GetUserAgentResponse { user_agent };
@@ -1146,8 +1204,31 @@ impl CodexMessageProcessor {
         match self.conversation_manager.new_conversation(config).await {
             Ok(new_conv) => {
-                let thread = Thread {
-                    id: new_conv.conversation_id.to_string(),
-                };
+                let conversation_id = new_conv.conversation_id;
+                let rollout_path = new_conv.session_configured.rollout_path.clone();
+                let fallback_provider = self.config.model_provider_id.as_str();
+
+                // A bit hacky, but the summary contains a lot of useful information for the thread
+                // that unfortunately does not get returned from conversation_manager.new_conversation().
+                let thread = match read_summary_from_rollout(
+                    rollout_path.as_path(),
+                    fallback_provider,
+                )
+                .await
+                {
+                    Ok(summary) => summary_to_thread(summary),
+                    Err(err) => {
+                        warn!(
+                            "failed to load summary for new thread {}: {}",
+                            conversation_id, err
+                        );
+                        Thread {
+                            id: conversation_id.to_string(),
+                            preview: String::new(),
+                            model_provider: self.config.model_provider_id.clone(),
+                            created_at: chrono::Utc::now().timestamp(),
+                        }
+                    }
+                };
 
                 let response = ThreadStartResponse {

@@ -1157,12 +1238,12 @@ impl CodexMessageProcessor {
                 // Auto-attach a conversation listener when starting a thread.
                 // Use the same behavior as the v1 API with experimental_raw_events=false.
                 if let Err(err) = self
-                    .attach_conversation_listener(new_conv.conversation_id, false)
+                    .attach_conversation_listener(conversation_id, false)
                     .await
                 {
                     tracing::warn!(
                         "failed to attach listener for conversation {}: {}",
-                        new_conv.conversation_id,
+                        conversation_id,
                         err.message
                     );
                 }

@@ -1260,12 +1341,7 @@ impl CodexMessageProcessor {
             }
         };
 
-        let data = summaries
-            .into_iter()
-            .map(|s| Thread {
-                id: s.conversation_id.to_string(),
-            })
-            .collect();
+        let data = summaries.into_iter().map(summary_to_thread).collect();
 
         let response = ThreadListResponse { data, next_cursor };
         self.outgoing.send_response(request_id, response).await;

@@ -1352,6 +1428,8 @@ impl CodexMessageProcessor {
             .await
         {
             Ok(_) => {
+                let thread = summary_to_thread(summary);
+
                 // Auto-attach a conversation listener when resuming a thread.
                 if let Err(err) = self
                     .attach_conversation_listener(conversation_id, false)

@@ -1364,11 +1442,7 @@ impl CodexMessageProcessor {
                     );
                 }
 
-                let response = ThreadResumeResponse {
-                    thread: Thread {
-                        id: conversation_id.to_string(),
-                    },
-                };
+                let response = ThreadResumeResponse { thread };
                 self.outgoing.send_response(request_id, response).await;
             }
             Err(err) => {

@@ -1510,7 +1584,18 @@ impl CodexMessageProcessor {
         let items = page
             .items
             .into_iter()
-            .filter_map(|it| extract_conversation_summary(it.path, &it.head, &fallback_provider))
+            .filter_map(|it| {
+                let session_meta_line = it.head.first().and_then(|first| {
+                    serde_json::from_value::<SessionMetaLine>(first.clone()).ok()
+                })?;
+                extract_conversation_summary(
+                    it.path,
+                    &it.head,
+                    &session_meta_line.meta,
+                    session_meta_line.git.as_ref(),
+                    fallback_provider.as_str(),
+                )
+            })
             .collect::<Vec<_>>();
 
         // Encode next_cursor as a plain string

@@ -2671,16 +2756,25 @@ async fn read_summary_from_rollout(
         )));
     };
 
-    let session_meta = serde_json::from_value::<SessionMeta>(first.clone()).map_err(|_| {
-        IoError::other(format!(
-            "rollout at {} does not start with session metadata",
-            path.display()
-        ))
-    })?;
+    let session_meta_line =
+        serde_json::from_value::<SessionMetaLine>(first.clone()).map_err(|_| {
+            IoError::other(format!(
+                "rollout at {} does not start with session metadata",
+                path.display()
+            ))
+        })?;
+    let SessionMetaLine {
+        meta: session_meta,
+        git,
+    } = session_meta_line;
 
-    if let Some(summary) =
-        extract_conversation_summary(path.to_path_buf(), &head, fallback_provider)
-    {
+    if let Some(summary) = extract_conversation_summary(
+        path.to_path_buf(),
+        &head,
+        &session_meta,
+        git.as_ref(),
+        fallback_provider,
+    ) {
         return Ok(summary);
     }

@@ -2691,7 +2785,9 @@ async fn read_summary_from_rollout(
     };
     let model_provider = session_meta
         .model_provider
         .clone()
         .unwrap_or_else(|| fallback_provider.to_string());
+    let git_info = git.as_ref().map(map_git_info);
 
     Ok(ConversationSummary {
         conversation_id: session_meta.id,

@@ -2699,19 +2795,20 @@ async fn read_summary_from_rollout(
         path: path.to_path_buf(),
         preview: String::new(),
         model_provider,
+        cwd: session_meta.cwd,
+        cli_version: session_meta.cli_version,
+        source: session_meta.source,
+        git_info,
     })
 }
 
 fn extract_conversation_summary(
     path: PathBuf,
     head: &[serde_json::Value],
+    session_meta: &SessionMeta,
+    git: Option<&GitInfo>,
     fallback_provider: &str,
 ) -> Option<ConversationSummary> {
-    let session_meta = match head.first() {
-        Some(first_line) => serde_json::from_value::<SessionMeta>(first_line.clone()).ok()?,
-        None => return None,
-    };
-
     let preview = head
         .iter()
         .filter_map(|value| serde_json::from_value::<ResponseItem>(value.clone()).ok())

@@ -2733,7 +2830,9 @@ fn extract_conversation_summary(
     let conversation_id = session_meta.id;
     let model_provider = session_meta
         .model_provider
         .clone()
         .unwrap_or_else(|| fallback_provider.to_string());
+    let git_info = git.map(map_git_info);
 
     Some(ConversationSummary {
         conversation_id,

@@ -2741,13 +2840,53 @@ fn extract_conversation_summary(
         path,
         preview: preview.to_string(),
         model_provider,
+        cwd: session_meta.cwd.clone(),
+        cli_version: session_meta.cli_version.clone(),
+        source: session_meta.source.clone(),
+        git_info,
     })
 }
 
+fn map_git_info(git_info: &GitInfo) -> ConversationGitInfo {
+    ConversationGitInfo {
+        sha: git_info.commit_hash.clone(),
+        branch: git_info.branch.clone(),
+        origin_url: git_info.repository_url.clone(),
+    }
+}
+
+fn parse_datetime(timestamp: Option<&str>) -> Option<DateTime<Utc>> {
+    timestamp.and_then(|ts| {
+        chrono::DateTime::parse_from_rfc3339(ts)
+            .ok()
+            .map(|dt| dt.with_timezone(&chrono::Utc))
+    })
+}
+
+fn summary_to_thread(summary: ConversationSummary) -> Thread {
+    let ConversationSummary {
+        conversation_id,
+        preview,
+        timestamp,
+        model_provider,
+        ..
+    } = summary;
+
+    let created_at = parse_datetime(timestamp.as_deref());
+
+    Thread {
+        id: conversation_id.to_string(),
+        preview,
+        model_provider,
+        created_at: created_at.map(|dt| dt.timestamp()).unwrap_or(0),
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
     use anyhow::Result;
+    use codex_protocol::protocol::SessionSource;
     use pretty_assertions::assert_eq;
     use serde_json::json;
     use tempfile::TempDir;

@@ -2786,8 +2925,11 @@ mod tests {
         }),
     ];
 
+    let session_meta = serde_json::from_value::<SessionMeta>(head[0].clone())?;
+
     let summary =
-        extract_conversation_summary(path.clone(), &head, "test-provider").expect("summary");
+        extract_conversation_summary(path.clone(), &head, &session_meta, None, "test-provider")
+            .expect("summary");
 
     let expected = ConversationSummary {
         conversation_id,

@@ -2795,6 +2937,10 @@ mod tests {
         path,
         preview: "Count to 5".to_string(),
         model_provider: "test-provider".to_string(),
+        cwd: PathBuf::from("/"),
+        cli_version: "0.0.0".to_string(),
+        source: SessionSource::VSCode,
+        git_info: None,
     };
 
     assert_eq!(summary, expected);

@@ -2839,6 +2985,10 @@ mod tests {
         path: path.clone(),
         preview: String::new(),
         model_provider: "fallback".to_string(),
+        cwd: PathBuf::new(),
+        cli_version: String::new(),
+        source: SessionSource::VSCode,
+        git_info: None,
     };
 
     assert_eq!(summary, expected);
@@ -19,6 +19,7 @@ use codex_app_server_protocol::CancelLoginChatGptParams;
 use codex_app_server_protocol::ClientInfo;
 use codex_app_server_protocol::ClientNotification;
 use codex_app_server_protocol::FeedbackUploadParams;
+use codex_app_server_protocol::GetAccountParams;
 use codex_app_server_protocol::GetAuthStatusParams;
 use codex_app_server_protocol::InitializeParams;
 use codex_app_server_protocol::InterruptConversationParams;

@@ -249,6 +250,15 @@ impl McpProcess {
         self.send_request("account/rateLimits/read", None).await
     }
 
+    /// Send an `account/read` JSON-RPC request.
+    pub async fn send_get_account_request(
+        &mut self,
+        params: GetAccountParams,
+    ) -> anyhow::Result<i64> {
+        let params = Some(serde_json::to_value(params)?);
+        self.send_request("account/read", params).await
+    }
+
     /// Send a `feedback/upload` JSON-RPC request.
     pub async fn send_feedback_upload_request(
         &mut self,
@@ -7,8 +7,6 @@ mod fuzzy_file_search;
 mod interrupt;
 mod list_resume;
 mod login;
-mod model_list;
-mod rate_limits;
 mod send_message;
 mod set_default_model;
 mod user_agent;
@@ -2,11 +2,15 @@ use anyhow::Result;
 use anyhow::bail;
 use app_test_support::McpProcess;
 use app_test_support::to_response;
 
+use app_test_support::ChatGptAuthFixture;
+use app_test_support::write_chatgpt_auth;
+use codex_app_server_protocol::Account;
 use codex_app_server_protocol::AuthMode;
 use codex_app_server_protocol::CancelLoginAccountParams;
 use codex_app_server_protocol::CancelLoginAccountResponse;
 use codex_app_server_protocol::GetAuthStatusParams;
 use codex_app_server_protocol::GetAuthStatusResponse;
+use codex_app_server_protocol::GetAccountParams;
+use codex_app_server_protocol::GetAccountResponse;
 use codex_app_server_protocol::JSONRPCError;
 use codex_app_server_protocol::JSONRPCResponse;
 use codex_app_server_protocol::LoginAccountResponse;

@@ -15,6 +19,7 @@ use codex_app_server_protocol::RequestId;
 use codex_app_server_protocol::ServerNotification;
 use codex_core::auth::AuthCredentialsStoreMode;
 use codex_login::login_with_api_key;
+use codex_protocol::account::PlanType as AccountPlanType;
 use pretty_assertions::assert_eq;
 use serial_test::serial;
 use std::path::Path;
@@ -25,22 +30,30 @@ use tokio::time::timeout;
 const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
 
 // Helper to create a minimal config.toml for the app server
-fn create_config_toml(
-    codex_home: &Path,
-    forced_method: Option<&str>,
-    forced_workspace_id: Option<&str>,
-) -> std::io::Result<()> {
+#[derive(Default)]
+struct CreateConfigTomlParams {
+    forced_method: Option<String>,
+    forced_workspace_id: Option<String>,
+    requires_openai_auth: Option<bool>,
+}
+
+fn create_config_toml(codex_home: &Path, params: CreateConfigTomlParams) -> std::io::Result<()> {
     let config_toml = codex_home.join("config.toml");
-    let forced_line = if let Some(method) = forced_method {
+    let forced_line = if let Some(method) = params.forced_method {
         format!("forced_login_method = \"{method}\"\n")
     } else {
         String::new()
     };
-    let forced_workspace_line = if let Some(ws) = forced_workspace_id {
+    let forced_workspace_line = if let Some(ws) = params.forced_workspace_id {
         format!("forced_chatgpt_workspace_id = \"{ws}\"\n")
     } else {
         String::new()
     };
+    let requires_line = match params.requires_openai_auth {
+        Some(true) => "requires_openai_auth = true\n".to_string(),
+        Some(false) => String::new(),
+        None => String::new(),
+    };
     let contents = format!(
         r#"
 model = "mock-model"

@@ -57,6 +70,7 @@ base_url = "http://127.0.0.1:0/v1"
 wire_api = "chat"
 request_max_retries = 0
 stream_max_retries = 0
+{requires_line}
 "#
     );
     std::fs::write(config_toml, contents)

@@ -65,7 +79,7 @@ stream_max_retries = 0
 #[tokio::test]
 async fn logout_account_removes_auth_and_notifies() -> Result<()> {
     let codex_home = TempDir::new()?;
-    create_config_toml(codex_home.path(), None, None)?;
+    create_config_toml(codex_home.path(), CreateConfigTomlParams::default())?;
 
     login_with_api_key(
         codex_home.path(),

@@ -104,27 +118,25 @@ async fn logout_account_removes_auth_and_notifies() -> Result<()> {
         "auth.json should be deleted"
     );
 
-    let status_id = mcp
-        .send_get_auth_status_request(GetAuthStatusParams {
-            include_token: Some(true),
-            refresh_token: Some(false),
+    let get_id = mcp
+        .send_get_account_request(GetAccountParams {
+            refresh_token: false,
         })
         .await?;
-    let status_resp: JSONRPCResponse = timeout(
+    let get_resp: JSONRPCResponse = timeout(
         DEFAULT_READ_TIMEOUT,
-        mcp.read_stream_until_response_message(RequestId::Integer(status_id)),
+        mcp.read_stream_until_response_message(RequestId::Integer(get_id)),
     )
     .await??;
-    let status: GetAuthStatusResponse = to_response(status_resp)?;
-    assert_eq!(status.auth_method, None);
-    assert_eq!(status.auth_token, None);
+    let account: GetAccountResponse = to_response(get_resp)?;
+    assert_eq!(account.account, None);
     Ok(())
 }
 
 #[tokio::test]
 async fn login_account_api_key_succeeds_and_notifies() -> Result<()> {
     let codex_home = TempDir::new()?;
-    create_config_toml(codex_home.path(), None, None)?;
+    create_config_toml(codex_home.path(), CreateConfigTomlParams::default())?;
 
     let mut mcp = McpProcess::new(codex_home.path()).await?;
     timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

@@ -171,7 +183,13 @@ async fn login_account_api_key_succeeds_and_notifies() -> Result<()> {
 #[tokio::test]
 async fn login_account_api_key_rejected_when_forced_chatgpt() -> Result<()> {
     let codex_home = TempDir::new()?;
-    create_config_toml(codex_home.path(), Some("chatgpt"), None)?;
+    create_config_toml(
+        codex_home.path(),
+        CreateConfigTomlParams {
+            forced_method: Some("chatgpt".to_string()),
+            ..Default::default()
+        },
+    )?;
 
     let mut mcp = McpProcess::new(codex_home.path()).await?;
     timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

@@ -195,7 +213,13 @@ async fn login_account_api_key_rejected_when_forced_chatgpt() -> Result<()> {
 #[tokio::test]
 async fn login_account_chatgpt_rejected_when_forced_api() -> Result<()> {
     let codex_home = TempDir::new()?;
-    create_config_toml(codex_home.path(), Some("api"), None)?;
+    create_config_toml(
+        codex_home.path(),
+        CreateConfigTomlParams {
+            forced_method: Some("api".to_string()),
+            ..Default::default()
+        },
+    )?;
 
     let mut mcp = McpProcess::new(codex_home.path()).await?;
     timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

@@ -219,7 +243,7 @@ async fn login_account_chatgpt_rejected_when_forced_api() -> Result<()> {
 #[serial(login_port)]
 async fn login_account_chatgpt_start() -> Result<()> {
     let codex_home = TempDir::new()?;
-    create_config_toml(codex_home.path(), None, None)?;
+    create_config_toml(codex_home.path(), CreateConfigTomlParams::default())?;
 
     let mut mcp = McpProcess::new(codex_home.path()).await?;
     timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

@@ -285,7 +309,13 @@ async fn login_account_chatgpt_start() -> Result<()> {
 #[serial(login_port)]
 async fn login_account_chatgpt_includes_forced_workspace_query_param() -> Result<()> {
     let codex_home = TempDir::new()?;
-    create_config_toml(codex_home.path(), None, Some("ws-forced"))?;
+    create_config_toml(
+        codex_home.path(),
+        CreateConfigTomlParams {
+            forced_workspace_id: Some("ws-forced".to_string()),
+            ..Default::default()
+        },
+    )?;
 
     let mut mcp = McpProcess::new(codex_home.path()).await?;
     timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -307,3 +337,156 @@ async fn login_account_chatgpt_includes_forced_workspace_query_param() -> Result
     );
     Ok(())
 }
+
+#[tokio::test]
+async fn get_account_no_auth() -> Result<()> {
+    let codex_home = TempDir::new()?;
+    create_config_toml(
+        codex_home.path(),
+        CreateConfigTomlParams {
+            requires_openai_auth: Some(true),
+            ..Default::default()
+        },
+    )?;
+
+    let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
+    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
+
+    let params = GetAccountParams {
+        refresh_token: false,
+    };
+    let request_id = mcp.send_get_account_request(params).await?;
+
+    let resp: JSONRPCResponse = timeout(
+        DEFAULT_READ_TIMEOUT,
+        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
+    )
+    .await??;
+    let account: GetAccountResponse = to_response(resp)?;
+
+    assert_eq!(account.account, None, "expected no account");
+    assert_eq!(account.requires_openai_auth, true);
+    Ok(())
+}
+
+#[tokio::test]
+async fn get_account_with_api_key() -> Result<()> {
+    let codex_home = TempDir::new()?;
+    create_config_toml(
+        codex_home.path(),
+        CreateConfigTomlParams {
+            requires_openai_auth: Some(true),
+            ..Default::default()
+        },
+    )?;
+
+    let mut mcp = McpProcess::new(codex_home.path()).await?;
+    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
+
+    let req_id = mcp
+        .send_login_account_api_key_request("sk-test-key")
+        .await?;
+    let resp: JSONRPCResponse = timeout(
+        DEFAULT_READ_TIMEOUT,
+        mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
+    )
+    .await??;
+    let _login_ok = to_response::<LoginAccountResponse>(resp)?;
+
+    let params = GetAccountParams {
+        refresh_token: false,
+    };
+    let request_id = mcp.send_get_account_request(params).await?;
+
+    let resp: JSONRPCResponse = timeout(
+        DEFAULT_READ_TIMEOUT,
+        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
+    )
+    .await??;
+    let received: GetAccountResponse = to_response(resp)?;
+
+    let expected = GetAccountResponse {
+        account: Some(Account::ApiKey {}),
+        requires_openai_auth: true,
+    };
+    assert_eq!(received, expected);
+    Ok(())
+}
+
+#[tokio::test]
+async fn get_account_when_auth_not_required() -> Result<()> {
+    let codex_home = TempDir::new()?;
+    create_config_toml(
+        codex_home.path(),
+        CreateConfigTomlParams {
+            requires_openai_auth: Some(false),
+            ..Default::default()
+        },
+    )?;
+
+    let mut mcp = McpProcess::new(codex_home.path()).await?;
+    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
+
+    let params = GetAccountParams {
+        refresh_token: false,
+    };
+    let request_id = mcp.send_get_account_request(params).await?;
+
+    let resp: JSONRPCResponse = timeout(
+        DEFAULT_READ_TIMEOUT,
+        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
+    )
+    .await??;
+    let received: GetAccountResponse = to_response(resp)?;
+
+    let expected = GetAccountResponse {
+        account: None,
+        requires_openai_auth: false,
+    };
+    assert_eq!(received, expected);
+    Ok(())
+}
+
+#[tokio::test]
+async fn get_account_with_chatgpt() -> Result<()> {
+    let codex_home = TempDir::new()?;
+    create_config_toml(
+        codex_home.path(),
+        CreateConfigTomlParams {
+            requires_openai_auth: Some(true),
+            ..Default::default()
+        },
+    )?;
+    write_chatgpt_auth(
+        codex_home.path(),
+        ChatGptAuthFixture::new("access-chatgpt")
+            .email("user@example.com")
+            .plan_type("pro"),
+        AuthCredentialsStoreMode::File,
+    )?;
+
+    let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
+    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
+
+    let params = GetAccountParams {
+        refresh_token: false,
+    };
+    let request_id = mcp.send_get_account_request(params).await?;
+
+    let resp: JSONRPCResponse = timeout(
+        DEFAULT_READ_TIMEOUT,
+        mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
+    )
+    .await??;
+    let received: GetAccountResponse = to_response(resp)?;
+
+    let expected = GetAccountResponse {
+        account: Some(Account::Chatgpt {
+            email: "user@example.com".to_string(),
+            plan_type: AccountPlanType::Pro,
+        }),
+        requires_openai_auth: true,
+    };
+    assert_eq!(received, expected);
+    Ok(())
+}
@@ -1,4 +1,6 @@
 mod account;
+mod model_list;
+mod rate_limits;
 mod thread_archive;
 mod thread_list;
 mod thread_resume;
@@ -19,7 +19,7 @@ use tokio::time::timeout;
 const DEFAULT_TIMEOUT: Duration = Duration::from_secs(10);
 const INVALID_REQUEST_ERROR_CODE: i64 = -32600;
 
-#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+#[tokio::test]
 async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
     let codex_home = TempDir::new()?;
     let mut mcp = McpProcess::new(codex_home.path()).await?;

@@ -106,7 +106,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
     Ok(())
 }
 
-#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+#[tokio::test]
 async fn list_models_pagination_works() -> Result<()> {
     let codex_home = TempDir::new()?;
     let mut mcp = McpProcess::new(codex_home.path()).await?;

@@ -159,7 +159,7 @@ async fn list_models_pagination_works() -> Result<()> {
     Ok(())
 }
 
-#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+#[tokio::test]
 async fn list_models_rejects_invalid_cursor() -> Result<()> {
     let codex_home = TempDir::new()?;
     let mut mcp = McpProcess::new(codex_home.path()).await?;

@@ -26,7 +26,7 @@ use wiremock::matchers::path
 const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
 const INVALID_REQUEST_ERROR_CODE: i64 = -32600;
 
-#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+#[tokio::test]
 async fn get_account_rate_limits_requires_auth() -> Result<()> {
     let codex_home = TempDir::new()?;

@@ -51,7 +51,7 @@ async fn get_account_rate_limits_requires_auth() -> Result<()> {
     Ok(())
 }
 
-#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+#[tokio::test]
 async fn get_account_rate_limits_requires_chatgpt_auth() -> Result<()> {
     let codex_home = TempDir::new()?;

@@ -78,7 +78,7 @@ async fn get_account_rate_limits_requires_chatgpt_auth() -> Result<()> {
     Ok(())
 }
 
-#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+#[tokio::test]
 async fn get_account_rate_limits_returns_snapshot() -> Result<()> {
     let codex_home = TempDir::new()?;
     write_chatgpt_auth(
@@ -102,6 +102,11 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
         next_cursor: cursor1,
     } = to_response::<ThreadListResponse>(page1_resp)?;
     assert_eq!(data1.len(), 2);
+    for thread in &data1 {
+        assert_eq!(thread.preview, "Hello");
+        assert_eq!(thread.model_provider, "mock_provider");
+        assert!(thread.created_at > 0);
+    }
     let cursor1 = cursor1.expect("expected nextCursor on first page");
 
     // Page 2: with cursor → expect next_cursor None when no more results.

@@ -122,6 +127,11 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
         next_cursor: cursor2,
     } = to_response::<ThreadListResponse>(page2_resp)?;
     assert!(data2.len() <= 2);
+    for thread in &data2 {
+        assert_eq!(thread.preview, "Hello");
+        assert_eq!(thread.model_provider, "mock_provider");
+        assert!(thread.created_at > 0);
+    }
     assert_eq!(cursor2, None, "expected nextCursor to be null on last page");
 
     Ok(())

@@ -200,6 +210,11 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
     let ThreadListResponse { data, next_cursor } = to_response::<ThreadListResponse>(resp)?;
     assert_eq!(data.len(), 1);
     assert_eq!(next_cursor, None);
+    let thread = &data[0];
+    assert_eq!(thread.preview, "X");
+    assert_eq!(thread.model_provider, "other_provider");
+    let expected_ts = chrono::DateTime::parse_from_rfc3339("2025-01-02T11:00:00Z")?.timestamp();
+    assert_eq!(thread.created_at, expected_ts);
 
     Ok(())
 }

@@ -49,7 +49,7 @@ async fn thread_resume_returns_existing_thread() -> Result<()> {
     .await??;
     let ThreadResumeResponse { thread: resumed } =
         to_response::<ThreadResumeResponse>(resume_resp)?;
-    assert_eq!(resumed.id, thread.id);
+    assert_eq!(resumed, thread);
 
     Ok(())
 }

@@ -42,6 +42,15 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
     .await??;
     let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(resp)?;
     assert!(!thread.id.is_empty(), "thread id should not be empty");
+    assert!(
+        thread.preview.is_empty(),
+        "new threads should start with an empty preview"
+    );
+    assert_eq!(thread.model_provider, "mock_provider");
+    assert!(
+        thread.created_at > 0,
+        "created_at should be a positive UNIX timestamp"
+    );
 
     // A corresponding thread/started notification should arrive.
     let notif: JSONRPCNotification = timeout(

@@ -51,7 +60,7 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
     .await??;
     let started: ThreadStartedNotification =
         serde_json::from_value(notif.params.expect("params must be present"))?;
-    assert_eq!(started.thread.id, thread.id);
+    assert_eq!(started.thread, thread);
 
     Ok(())
 }
@@ -288,7 +288,7 @@ pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApp
                path,
                ApplyPatchFileChange::Update {
                    unified_diff,
                    move_path: move_path.map(|p| cwd.join(p)),
                    move_path: move_path.map(|p| effective_cwd.join(p)),
                    new_content: contents,
                },
            );
@@ -1603,6 +1603,53 @@ g
        );
    }

    #[test]
    fn test_apply_patch_resolves_move_path_with_effective_cwd() {
        let session_dir = tempdir().unwrap();
        let worktree_rel = "alt";
        let worktree_dir = session_dir.path().join(worktree_rel);
        fs::create_dir_all(&worktree_dir).unwrap();

        let source_name = "old.txt";
        let dest_name = "renamed.txt";
        let source_path = worktree_dir.join(source_name);
        fs::write(&source_path, "before\n").unwrap();

        let patch = wrap_patch(&format!(
            r#"*** Update File: {source_name}
*** Move to: {dest_name}
@@
-before
+after"#
        ));

        let shell_script = format!("cd {worktree_rel} && apply_patch <<'PATCH'\n{patch}\nPATCH");
        let argv = vec!["bash".into(), "-lc".into(), shell_script];

        let result = maybe_parse_apply_patch_verified(&argv, session_dir.path());
        let action = match result {
            MaybeApplyPatchVerified::Body(action) => action,
            other => panic!("expected verified body, got {other:?}"),
        };

        assert_eq!(action.cwd, worktree_dir);

        let change = action
            .changes()
            .get(&worktree_dir.join(source_name))
            .expect("source file change present");

        match change {
            ApplyPatchFileChange::Update { move_path, .. } => {
                assert_eq!(
                    move_path.as_deref(),
                    Some(worktree_dir.join(dest_name).as_path())
                );
            }
            other => panic!("expected update change, got {other:?}"),
        }
    }

    #[test]
    fn test_apply_patch_fails_on_write_error() {
        let dir = tempdir().unwrap();
@@ -5,6 +5,7 @@ use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::exec_env::create_env;
use codex_core::landlock::spawn_command_under_linux_sandbox;
#[cfg(target_os = "macos")]
use codex_core::seatbelt::spawn_command_under_seatbelt;
use codex_core::spawn::StdioPolicy;
use codex_protocol::config_types::SandboxMode;
@@ -14,6 +15,7 @@ use crate::SeatbeltCommand;
use crate::WindowsCommand;
use crate::exit_status::handle_exit_status;

#[cfg(target_os = "macos")]
pub async fn run_command_under_seatbelt(
    command: SeatbeltCommand,
    codex_linux_sandbox_exe: Option<PathBuf>,
@@ -33,6 +35,14 @@ pub async fn run_command_under_seatbelt(
    .await
}

#[cfg(not(target_os = "macos"))]
pub async fn run_command_under_seatbelt(
    _command: SeatbeltCommand,
    _codex_linux_sandbox_exe: Option<PathBuf>,
) -> anyhow::Result<()> {
    anyhow::bail!("Seatbelt sandbox is only available on macOS");
}

pub async fn run_command_under_landlock(
    command: LandlockCommand,
    codex_linux_sandbox_exe: Option<PathBuf>,
@@ -72,6 +82,7 @@ pub async fn run_command_under_windows(
}

enum SandboxType {
    #[cfg(target_os = "macos")]
    Seatbelt,
    Landlock,
    Windows,
@@ -168,6 +179,7 @@ async fn run_command_under_sandbox(
}

    let mut child = match sandbox_type {
        #[cfg(target_os = "macos")]
        SandboxType::Seatbelt => {
            spawn_command_under_seatbelt(
                command,
@@ -26,8 +26,10 @@ use std::path::PathBuf;
use supports_color::Stream;

mod mcp_cmd;
mod wsl_paths;

use crate::mcp_cmd::McpCli;
use crate::wsl_paths::normalize_for_wsl;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::features::is_known_feature_key;
@@ -270,7 +272,11 @@ fn run_update_action(action: UpdateAction) -> anyhow::Result<()> {
    let (cmd, args) = action.command_args();
    let cmd_str = action.command_str();
    println!("Updating Codex via `{cmd_str}`...");
    let status = std::process::Command::new(cmd).args(args).status()?;
    let command_path = normalize_for_wsl(cmd);
    let normalized_args: Vec<String> = args.iter().map(normalize_for_wsl).collect();
    let status = std::process::Command::new(&command_path)
        .args(&normalized_args)
        .status()?;
    if !status.success() {
        anyhow::bail!("`{cmd_str}` failed with status {status}");
    }

codex-rs/cli/src/wsl_paths.rs (new file, 76 lines)
@@ -0,0 +1,76 @@
use std::ffi::OsStr;

/// WSL-specific path helpers used by the updater logic.
///
/// See https://github.com/openai/codex/issues/6086.
pub fn is_wsl() -> bool {
    #[cfg(target_os = "linux")]
    {
        if std::env::var_os("WSL_DISTRO_NAME").is_some() {
            return true;
        }
        match std::fs::read_to_string("/proc/version") {
            Ok(version) => version.to_lowercase().contains("microsoft"),
            Err(_) => false,
        }
    }
    #[cfg(not(target_os = "linux"))]
    {
        false
    }
}

/// Convert a Windows absolute path (`C:\foo\bar` or `C:/foo/bar`) to a WSL mount path (`/mnt/c/foo/bar`).
/// Returns `None` if the input does not look like a Windows drive path.
pub fn win_path_to_wsl(path: &str) -> Option<String> {
    let bytes = path.as_bytes();
    if bytes.len() < 3
        || bytes[1] != b':'
        || !(bytes[2] == b'\\' || bytes[2] == b'/')
        || !bytes[0].is_ascii_alphabetic()
    {
        return None;
    }
    let drive = (bytes[0] as char).to_ascii_lowercase();
    let tail = path[3..].replace('\\', "/");
    if tail.is_empty() {
        return Some(format!("/mnt/{drive}"));
    }
    Some(format!("/mnt/{drive}/{tail}"))
}

/// If under WSL and given a Windows-style path, return the equivalent `/mnt/<drive>/…` path.
/// Otherwise returns the input unchanged.
pub fn normalize_for_wsl<P: AsRef<OsStr>>(path: P) -> String {
    let value = path.as_ref().to_string_lossy().to_string();
    if !is_wsl() {
        return value;
    }
    if let Some(mapped) = win_path_to_wsl(&value) {
        return mapped;
    }
    value
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn win_to_wsl_basic() {
        assert_eq!(
            win_path_to_wsl(r"C:\Temp\codex.zip").as_deref(),
            Some("/mnt/c/Temp/codex.zip")
        );
        assert_eq!(
            win_path_to_wsl("D:/Work/codex.tgz").as_deref(),
            Some("/mnt/d/Work/codex.tgz")
        );
        assert!(win_path_to_wsl("/home/user/codex").is_none());
    }

    #[test]
    fn normalize_is_noop_on_unix_paths() {
        assert_eq!(normalize_for_wsl("/home/u/x"), "/home/u/x");
    }
}
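
Taken together with the `run_update_action` hunk above, the effect is that an updater command recorded as a Windows path still launches when the CLI runs inside WSL. A minimal sketch of the behavior, assuming code living inside the cli crate next to `wsl_paths`; the command path and argument below are hypothetical, not taken from the updater:

// Sketch only: hypothetical values, real ones come from UpdateAction.
use crate::wsl_paths::normalize_for_wsl;

fn launch_update_command() -> anyhow::Result<()> {
    // Under WSL this becomes "/mnt/c/Program Files/nodejs/npm.cmd";
    // on every other platform normalize_for_wsl returns it unchanged.
    let cmd = r"C:\Program Files\nodejs\npm.cmd";
    let command_path = normalize_for_wsl(cmd);
    let status = std::process::Command::new(&command_path)
        .arg("--version") // hypothetical argument
        .status()?;
    anyhow::ensure!(status.success(), "command failed with status {status}");
    Ok(())
}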

@@ -16,6 +16,7 @@ You are Codex, based on GPT-5. You are running as a coding agent in the Codex CL
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- Do not amend a commit unless explicitly requested to do so.
- While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
- **NEVER** use destructive commands like `git reset --hard` or `git checkout --` unless specifically requested or approved by the user.
@@ -26,10 +26,12 @@ use crate::config::Config;
use crate::default_client::CodexHttpClient;
use crate::error::RefreshTokenFailedError;
use crate::error::RefreshTokenFailedReason;
use crate::token_data::PlanType;
use crate::token_data::KnownPlan as InternalKnownPlan;
use crate::token_data::PlanType as InternalPlanType;
use crate::token_data::TokenData;
use crate::token_data::parse_id_token;
use crate::util::try_parse_error_message;
use codex_protocol::account::PlanType as AccountPlanType;
use serde_json::Value;
use thiserror::Error;

@@ -202,7 +204,34 @@ impl CodexAuth {
        self.get_current_token_data().and_then(|t| t.id_token.email)
    }

    pub(crate) fn get_plan_type(&self) -> Option<PlanType> {
    /// Account-facing plan classification derived from the current token.
    /// Returns a high-level `AccountPlanType` (e.g., Free/Plus/Pro/Team/…)
    /// mapped from the ID token's internal plan value. Prefer this when you
    /// need to make UI or product decisions based on the user's subscription.
    pub fn account_plan_type(&self) -> Option<AccountPlanType> {
        let map_known = |kp: &InternalKnownPlan| match kp {
            InternalKnownPlan::Free => AccountPlanType::Free,
            InternalKnownPlan::Plus => AccountPlanType::Plus,
            InternalKnownPlan::Pro => AccountPlanType::Pro,
            InternalKnownPlan::Team => AccountPlanType::Team,
            InternalKnownPlan::Business => AccountPlanType::Business,
            InternalKnownPlan::Enterprise => AccountPlanType::Enterprise,
            InternalKnownPlan::Edu => AccountPlanType::Edu,
        };

        self.get_current_token_data()
            .and_then(|t| t.id_token.chatgpt_plan_type)
            .map(|pt| match pt {
                InternalPlanType::Known(k) => map_known(&k),
                InternalPlanType::Unknown(_) => AccountPlanType::Unknown,
            })
    }

    /// Raw internal plan value from the ID token.
    /// Exposes the underlying `token_data::PlanType` without mapping it to the
    /// public `AccountPlanType`. Use this when downstream code needs to inspect
    /// internal/unknown plan strings exactly as issued in the token.
    pub(crate) fn get_plan_type(&self) -> Option<InternalPlanType> {
        self.get_current_token_data()
            .and_then(|t| t.id_token.chatgpt_plan_type)
    }
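
One way a caller might consume the mapped value; the gating rule below is invented purely for illustration and assumes code in the same crate as `CodexAuth`:

use codex_protocol::account::PlanType as AccountPlanType;

// Hypothetical product check built on the new public accessor.
fn plan_allows_feature(auth: &CodexAuth) -> bool {
    match auth.account_plan_type() {
        // Unrecognized plan strings surface as Unknown rather than an error.
        None | Some(AccountPlanType::Free) | Some(AccountPlanType::Unknown) => false,
        Some(_) => true,
    }
}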

@@ -609,8 +638,9 @@ mod tests {
    use crate::config::ConfigOverrides;
    use crate::config::ConfigToml;
    use crate::token_data::IdTokenInfo;
    use crate::token_data::KnownPlan;
    use crate::token_data::PlanType;
    use crate::token_data::KnownPlan as InternalKnownPlan;
    use crate::token_data::PlanType as InternalPlanType;
    use codex_protocol::account::PlanType as AccountPlanType;

    use base64::Engine;
    use codex_protocol::config_types::ForcedLoginMethod;
@@ -727,7 +757,7 @@ mod tests {
        tokens: Some(TokenData {
            id_token: IdTokenInfo {
                email: Some("user@example.com".to_string()),
                chatgpt_plan_type: Some(PlanType::Known(KnownPlan::Pro)),
                chatgpt_plan_type: Some(InternalPlanType::Known(InternalKnownPlan::Pro)),
                chatgpt_account_id: None,
                raw_jwt: fake_jwt,
            },
@@ -981,6 +1011,54 @@ mod tests {
            .contains("ChatGPT login is required, but an API key is currently being used.")
        );
    }

    #[test]
    fn plan_type_maps_known_plan() {
        let codex_home = tempdir().unwrap();
        let _jwt = write_auth_file(
            AuthFileParams {
                openai_api_key: None,
                chatgpt_plan_type: "pro".to_string(),
                chatgpt_account_id: None,
            },
            codex_home.path(),
        )
        .expect("failed to write auth file");

        let auth = super::load_auth(codex_home.path(), false, AuthCredentialsStoreMode::File)
            .expect("load auth")
            .expect("auth available");

        pretty_assertions::assert_eq!(auth.account_plan_type(), Some(AccountPlanType::Pro));
        pretty_assertions::assert_eq!(
            auth.get_plan_type(),
            Some(InternalPlanType::Known(InternalKnownPlan::Pro))
        );
    }

    #[test]
    fn plan_type_maps_unknown_to_unknown() {
        let codex_home = tempdir().unwrap();
        let _jwt = write_auth_file(
            AuthFileParams {
                openai_api_key: None,
                chatgpt_plan_type: "mystery-tier".to_string(),
                chatgpt_account_id: None,
            },
            codex_home.path(),
        )
        .expect("failed to write auth file");

        let auth = super::load_auth(codex_home.path(), false, AuthCredentialsStoreMode::File)
            .expect("load auth")
            .expect("auth available");

        pretty_assertions::assert_eq!(auth.account_plan_type(), Some(AccountPlanType::Unknown));
        pretty_assertions::assert_eq!(
            auth.get_plan_type(),
            Some(InternalPlanType::Unknown("mystery-tier".to_string()))
        );
    }
}

/// Central manager providing a single source of truth for auth.json derived
@@ -447,6 +447,8 @@ impl ModelClient {
                return Err(StreamAttemptError::Fatal(codex_err));
            } else if error.r#type.as_deref() == Some("usage_not_included") {
                return Err(StreamAttemptError::Fatal(CodexErr::UsageNotIncluded));
            } else if is_quota_exceeded_error(&error) {
                return Err(StreamAttemptError::Fatal(CodexErr::QuotaExceeded));
            }
        }
    }
@@ -844,6 +846,8 @@ async fn process_sse<S>(
            Ok(error) => {
                if is_context_window_error(&error) {
                    response_error = Some(CodexErr::ContextWindowExceeded);
                } else if is_quota_exceeded_error(&error) {
                    response_error = Some(CodexErr::QuotaExceeded);
                } else {
                    let delay = try_parse_retry_after(&error);
                    let message = error.message.clone().unwrap_or_default();
@@ -975,6 +979,10 @@ fn is_context_window_error(error: &Error) -> bool {
    error.code.as_deref() == Some("context_length_exceeded")
}

fn is_quota_exceeded_error(error: &Error) -> bool {
    error.code.as_deref() == Some("insufficient_quota")
}

#[cfg(test)]
mod tests {
    use super::*;
@@ -1307,6 +1315,41 @@ mod tests {
        }
    }

    #[tokio::test]
    async fn quota_exceeded_error_is_fatal() {
        let raw_error = r#"{"type":"response.failed","sequence_number":3,"response":{"id":"resp_fatal_quota","object":"response","created_at":1759771626,"status":"failed","background":false,"error":{"code":"insufficient_quota","message":"You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors."},"incomplete_details":null}}"#;

        let sse1 = format!("event: response.failed\ndata: {raw_error}\n\n");
        let provider = ModelProviderInfo {
            name: "test".to_string(),
            base_url: Some("https://test.com".to_string()),
            env_key: Some("TEST_API_KEY".to_string()),
            env_key_instructions: None,
            experimental_bearer_token: None,
            wire_api: WireApi::Responses,
            query_params: None,
            http_headers: None,
            env_http_headers: None,
            request_max_retries: Some(0),
            stream_max_retries: Some(0),
            stream_idle_timeout_ms: Some(1000),
            requires_openai_auth: false,
        };

        let otel_event_manager = otel_event_manager();

        let events = collect_events(&[sse1.as_bytes()], provider, otel_event_manager).await;

        assert_eq!(events.len(), 1);

        match &events[0] {
            Err(err @ CodexErr::QuotaExceeded) => {
                assert_eq!(err.to_string(), CodexErr::QuotaExceeded.to_string());
            }
            other => panic!("unexpected quota exceeded event: {other:?}"),
        }
    }

    // ────────────────────────────
    // Table-driven test from `main`
    // ────────────────────────────

@@ -1643,8 +1643,7 @@ async fn spawn_review_thread(
    let mut review_features = config.features.clone();
    review_features
        .disable(crate::features::Feature::WebSearchRequest)
        .disable(crate::features::Feature::ViewImageTool)
        .disable(crate::features::Feature::StreamableShell);
        .disable(crate::features::Feature::ViewImageTool);
    let tools_config = ToolsConfig::new(&ToolsConfigParams {
        model_family: &review_model_family,
        features: &review_features,
@@ -1928,6 +1927,7 @@ async fn run_turn(
                return Err(CodexErr::UsageLimitReached(e));
            }
            Err(CodexErr::UsageNotIncluded) => return Err(CodexErr::UsageNotIncluded),
            Err(e @ CodexErr::QuotaExceeded) => return Err(e),
            Err(e @ CodexErr::RefreshTokenFailed(_)) => return Err(e),
            Err(e) => {
                // Use the configured provider-specific stream retry budget.

@@ -23,6 +23,8 @@ pub enum ConfigEdit {
    },
    /// Toggle the acknowledgement flag under `[notice]`.
    SetNoticeHideFullAccessWarning(bool),
    /// Toggle the Windows world-writable directories warning acknowledgement flag.
    SetNoticeHideWorldWritableWarning(bool),
    /// Toggle the Windows onboarding acknowledgement flag.
    SetWindowsWslSetupAcknowledged(bool),
    /// Replace the entire `[mcp_servers]` table.
@@ -239,6 +241,11 @@ impl ConfigDocument {
                &[Notice::TABLE_KEY, "hide_full_access_warning"],
                value(*acknowledged),
            )),
            ConfigEdit::SetNoticeHideWorldWritableWarning(acknowledged) => Ok(self.write_value(
                Scope::Global,
                &[Notice::TABLE_KEY, "hide_world_writable_warning"],
                value(*acknowledged),
            )),
            ConfigEdit::SetWindowsWslSetupAcknowledged(acknowledged) => Ok(self.write_value(
                Scope::Global,
                &["windows_wsl_setup_acknowledged"],
@@ -473,6 +480,12 @@ impl ConfigEditsBuilder {
        self
    }

    pub fn set_hide_world_writable_warning(mut self, acknowledged: bool) -> Self {
        self.edits
            .push(ConfigEdit::SetNoticeHideWorldWritableWarning(acknowledged));
        self
    }

    pub fn set_windows_wsl_setup_acknowledged(mut self, acknowledged: bool) -> Self {
        self.edits
            .push(ConfigEdit::SetWindowsWslSetupAcknowledged(acknowledged));

@@ -241,8 +241,6 @@ pub struct Config {
    /// When `true`, run a model-based assessment for commands denied by the sandbox.
    pub experimental_sandbox_command_assessment: bool,

    pub use_experimental_streamable_shell_tool: bool,

    /// If set to `true`, use only the experimental unified exec tool.
    pub use_experimental_unified_exec_tool: bool,

@@ -655,7 +653,6 @@ pub struct ConfigToml {
    /// Legacy, now use features
    pub experimental_instructions_file: Option<PathBuf>,
    pub experimental_compact_prompt_file: Option<PathBuf>,
    pub experimental_use_exec_command_tool: Option<bool>,
    pub experimental_use_unified_exec_tool: Option<bool>,
    pub experimental_use_rmcp_client: Option<bool>,
    pub experimental_use_freeform_apply_patch: Option<bool>,
@@ -999,7 +996,6 @@ impl Config {

        let include_apply_patch_tool_flag = features.enabled(Feature::ApplyPatchFreeform);
        let tools_web_search_request = features.enabled(Feature::WebSearchRequest);
        let use_experimental_streamable_shell_tool = features.enabled(Feature::StreamableShell);
        let use_experimental_unified_exec_tool = features.enabled(Feature::UnifiedExec);
        let use_experimental_use_rmcp_client = features.enabled(Feature::RmcpClient);
        let experimental_sandbox_command_assessment =
@@ -1156,7 +1152,6 @@ impl Config {
            include_apply_patch_tool: include_apply_patch_tool_flag,
            tools_web_search_request,
            experimental_sandbox_command_assessment,
            use_experimental_streamable_shell_tool,
            use_experimental_unified_exec_tool,
            use_experimental_use_rmcp_client,
            features,
@@ -1715,7 +1710,6 @@ trust_level = "trusted"
    fn legacy_toggles_map_to_features() -> std::io::Result<()> {
        let codex_home = TempDir::new()?;
        let cfg = ConfigToml {
            experimental_use_exec_command_tool: Some(true),
            experimental_use_unified_exec_tool: Some(true),
            experimental_use_rmcp_client: Some(true),
            experimental_use_freeform_apply_patch: Some(true),
@@ -1729,12 +1723,11 @@ trust_level = "trusted"
        )?;

        assert!(config.features.enabled(Feature::ApplyPatchFreeform));
        assert!(config.features.enabled(Feature::StreamableShell));
        assert!(config.features.enabled(Feature::UnifiedExec));
        assert!(config.features.enabled(Feature::RmcpClient));

        assert!(config.include_apply_patch_tool);
        assert!(config.use_experimental_streamable_shell_tool);
        assert!(config.use_experimental_unified_exec_tool);
        assert!(config.use_experimental_use_rmcp_client);

@@ -2902,7 +2895,6 @@ model_verbosity = "high"
            include_apply_patch_tool: false,
            tools_web_search_request: false,
            experimental_sandbox_command_assessment: false,
            use_experimental_streamable_shell_tool: false,
            use_experimental_unified_exec_tool: false,
            use_experimental_use_rmcp_client: false,
            features: Features::with_defaults(),
@@ -2974,7 +2966,6 @@ model_verbosity = "high"
            include_apply_patch_tool: false,
            tools_web_search_request: false,
            experimental_sandbox_command_assessment: false,
            use_experimental_streamable_shell_tool: false,
            use_experimental_unified_exec_tool: false,
            use_experimental_use_rmcp_client: false,
            features: Features::with_defaults(),
@@ -3061,7 +3052,6 @@ model_verbosity = "high"
            include_apply_patch_tool: false,
            tools_web_search_request: false,
            experimental_sandbox_command_assessment: false,
            use_experimental_streamable_shell_tool: false,
            use_experimental_unified_exec_tool: false,
            use_experimental_use_rmcp_client: false,
            features: Features::with_defaults(),
@@ -3134,7 +3124,6 @@ model_verbosity = "high"
            include_apply_patch_tool: false,
            tools_web_search_request: false,
            experimental_sandbox_command_assessment: false,
            use_experimental_streamable_shell_tool: false,
            use_experimental_unified_exec_tool: false,
            use_experimental_use_rmcp_client: false,
            features: Features::with_defaults(),

@@ -25,7 +25,6 @@ pub struct ConfigProfile {
    pub experimental_compact_prompt_file: Option<PathBuf>,
    pub include_apply_patch_tool: Option<bool>,
    pub experimental_use_unified_exec_tool: Option<bool>,
    pub experimental_use_exec_command_tool: Option<bool>,
    pub experimental_use_rmcp_client: Option<bool>,
    pub experimental_use_freeform_apply_patch: Option<bool>,
    pub experimental_sandbox_command_assessment: Option<bool>,
@@ -358,6 +358,8 @@ pub struct Tui {
pub struct Notice {
    /// Tracks whether the user has acknowledged the full access warning prompt.
    pub hide_full_access_warning: Option<bool>,
    /// Tracks whether the user has acknowledged the Windows world-writable directories warning.
    pub hide_world_writable_warning: Option<bool>,
}

impl Notice {
@@ -109,6 +109,9 @@ pub enum CodexErr {
    #[error("{0}")]
    ConnectionFailed(ConnectionFailedError),

    #[error("Quota exceeded. Check your plan and billing details.")]
    QuotaExceeded,

    #[error(
        "To use Codex with your ChatGPT plan, upgrade to Plus: https://openai.com/chatgpt/pricing."
    )]
@@ -235,18 +238,44 @@ pub struct UnexpectedResponseError {
    pub request_id: Option<String>,
}

const CLOUDFLARE_BLOCKED_MESSAGE: &str =
    "Access blocked by Cloudflare. This usually happens when connecting from a restricted region";

impl UnexpectedResponseError {
    fn friendly_message(&self) -> Option<String> {
        if self.status != StatusCode::FORBIDDEN {
            return None;
        }

        if !self.body.contains("Cloudflare") || !self.body.contains("blocked") {
            return None;
        }

        let mut message = format!("{CLOUDFLARE_BLOCKED_MESSAGE} (status {})", self.status);
        if let Some(id) = &self.request_id {
            message.push_str(&format!(", request id: {id}"));
        }

        Some(message)
    }
}

impl std::fmt::Display for UnexpectedResponseError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "unexpected status {}: {}{}",
            self.status,
            self.body,
            self.request_id
                .as_ref()
                .map(|id| format!(", request id: {id}"))
                .unwrap_or_default()
        )
        if let Some(friendly) = self.friendly_message() {
            write!(f, "{friendly}")
        } else {
            write!(
                f,
                "unexpected status {}: {}{}",
                self.status,
                self.body,
                self.request_id
                    .as_ref()
                    .map(|id| format!(", request id: {id}"))
                    .unwrap_or_default()
            )
        }
    }
}

@@ -662,6 +691,35 @@ mod tests {
        });
    }

    #[test]
    fn unexpected_status_cloudflare_html_is_simplified() {
        let err = UnexpectedResponseError {
            status: StatusCode::FORBIDDEN,
            body: "<html><body>Cloudflare error: Sorry, you have been blocked</body></html>"
                .to_string(),
            request_id: Some("ray-id".to_string()),
        };
        let status = StatusCode::FORBIDDEN.to_string();
        assert_eq!(
            err.to_string(),
            format!("{CLOUDFLARE_BLOCKED_MESSAGE} (status {status}), request id: ray-id")
        );
    }

    #[test]
    fn unexpected_status_non_html_is_unchanged() {
        let err = UnexpectedResponseError {
            status: StatusCode::FORBIDDEN,
            body: "plain text error".to_string(),
            request_id: None,
        };
        let status = StatusCode::FORBIDDEN.to_string();
        assert_eq!(
            err.to_string(),
            format!("unexpected status {status}: plain text error")
        );
    }

    #[test]
    fn usage_limit_reached_includes_hours_and_minutes() {
        let base = Utc.with_ymd_and_hms(2024, 1, 1, 0, 0, 0).unwrap();
@@ -313,6 +313,10 @@ pub(crate) mod errors {
            SandboxTransformError::MissingLinuxSandboxExecutable => {
                CodexErr::LandlockSandboxExecutableNotProvided
            }
            #[cfg(not(target_os = "macos"))]
            SandboxTransformError::SeatbeltUnavailable => CodexErr::UnsupportedOperation(
                "seatbelt sandbox is only available on macOS".to_string(),
            ),
        }
    }
}
@@ -514,6 +518,7 @@ async fn consume_truncated_output(
            }
            Err(_) => {
                // timeout
                kill_child_process_group(&mut child)?;
                child.start_kill()?;
                // Debatable whether `child.wait().await` should be called here.
                (synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE), true)
@@ -521,6 +526,7 @@ async fn consume_truncated_output(
            }
        }
        _ = tokio::signal::ctrl_c() => {
            kill_child_process_group(&mut child)?;
            child.start_kill()?;
            (synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE), false)
        }
@@ -617,6 +623,38 @@ fn synthetic_exit_status(code: i32) -> ExitStatus {
    std::process::ExitStatus::from_raw(code as u32)
}

#[cfg(unix)]
fn kill_child_process_group(child: &mut Child) -> io::Result<()> {
    use std::io::ErrorKind;

    if let Some(pid) = child.id() {
        let pid = pid as libc::pid_t;
        let pgid = unsafe { libc::getpgid(pid) };
        if pgid == -1 {
            let err = std::io::Error::last_os_error();
            if err.kind() != ErrorKind::NotFound {
                return Err(err);
            }
            return Ok(());
        }

        let result = unsafe { libc::killpg(pgid, libc::SIGKILL) };
        if result == -1 {
            let err = std::io::Error::last_os_error();
            if err.kind() != ErrorKind::NotFound {
                return Err(err);
            }
        }
    }

    Ok(())
}

#[cfg(not(unix))]
fn kill_child_process_group(_: &mut Child) -> io::Result<()> {
    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

@@ -29,8 +29,6 @@ pub enum Stage {
pub enum Feature {
    /// Use the single unified PTY-backed exec tool.
    UnifiedExec,
    /// Use the streamable exec-command/write-stdin tool pair.
    StreamableShell,
    /// Enable experimental RMCP features such as OAuth login.
    RmcpClient,
    /// Include the freeform apply_patch tool.
@@ -118,8 +116,9 @@ impl Features {
        self.enabled.contains(&f)
    }

    pub fn enable(&mut self, f: Feature) {
    pub fn enable(&mut self, f: Feature) -> &mut Self {
        self.enabled.insert(f);
        self
    }

    pub fn disable(&mut self, f: Feature) -> &mut Self {
@@ -178,7 +177,6 @@ impl Features {
        let base_legacy = LegacyFeatureToggles {
            experimental_sandbox_command_assessment: cfg.experimental_sandbox_command_assessment,
            experimental_use_freeform_apply_patch: cfg.experimental_use_freeform_apply_patch,
            experimental_use_exec_command_tool: cfg.experimental_use_exec_command_tool,
            experimental_use_unified_exec_tool: cfg.experimental_use_unified_exec_tool,
            experimental_use_rmcp_client: cfg.experimental_use_rmcp_client,
            tools_web_search: cfg.tools.as_ref().and_then(|t| t.web_search),
@@ -197,7 +195,7 @@ impl Features {
                .experimental_sandbox_command_assessment,
            experimental_use_freeform_apply_patch: config_profile
                .experimental_use_freeform_apply_patch,
            experimental_use_exec_command_tool: config_profile.experimental_use_exec_command_tool,
            experimental_use_unified_exec_tool: config_profile.experimental_use_unified_exec_tool,
            experimental_use_rmcp_client: config_profile.experimental_use_rmcp_client,
            tools_web_search: config_profile.tools_web_search,
@@ -252,12 +250,6 @@ pub const FEATURES: &[FeatureSpec] = &[
        stage: Stage::Experimental,
        default_enabled: false,
    },
    FeatureSpec {
        id: Feature::StreamableShell,
        key: "streamable_shell",
        stage: Stage::Experimental,
        default_enabled: false,
    },
    FeatureSpec {
        id: Feature::RmcpClient,
        key: "rmcp_client",

@@ -17,10 +17,6 @@ const ALIASES: &[Alias] = &[
        legacy_key: "experimental_use_unified_exec_tool",
        feature: Feature::UnifiedExec,
    },
    Alias {
        legacy_key: "experimental_use_exec_command_tool",
        feature: Feature::StreamableShell,
    },
    Alias {
        legacy_key: "experimental_use_rmcp_client",
        feature: Feature::RmcpClient,
@@ -54,7 +50,6 @@ pub struct LegacyFeatureToggles {
    pub include_apply_patch_tool: Option<bool>,
    pub experimental_sandbox_command_assessment: Option<bool>,
    pub experimental_use_freeform_apply_patch: Option<bool>,
    pub experimental_use_exec_command_tool: Option<bool>,
    pub experimental_use_unified_exec_tool: Option<bool>,
    pub experimental_use_rmcp_client: Option<bool>,
    pub tools_web_search: Option<bool>,
@@ -81,12 +76,6 @@ impl LegacyFeatureToggles {
            self.experimental_use_freeform_apply_patch,
            "experimental_use_freeform_apply_patch",
        );
        set_if_some(
            features,
            Feature::StreamableShell,
            self.experimental_use_exec_command_tool,
            "experimental_use_exec_command_tool",
        );
        set_if_some(
            features,
            Feature::UnifiedExec,

@@ -1,5 +1,6 @@
use crate::config::types::ReasoningSummaryFormat;
use crate::tools::handlers::apply_patch::ApplyPatchToolType;
use crate::tools::spec::ConfigShellToolType;

/// The `instructions` field in the payload sent to a model should always start
/// with this content.
@@ -29,12 +30,6 @@ pub struct ModelFamily {
    // Define if we need a special handling of reasoning summary
    pub reasoning_summary_format: ReasoningSummaryFormat,

    // This should be set to true when the model expects a tool named
    // "local_shell" to be provided. Its contract must be understood natively by
    // the model such that its description can be omitted.
    // See https://platform.openai.com/docs/guides/tools-local-shell
    pub uses_local_shell_tool: bool,

    /// Whether this model supports parallel tool calls when using the
    /// Responses API.
    pub supports_parallel_tool_calls: bool,
@@ -57,6 +52,9 @@ pub struct ModelFamily {

    /// If the model family supports setting the verbosity level when using Responses API.
    pub support_verbosity: bool,

    /// Preferred shell tool type for this model family when features do not override it.
    pub shell_type: ConfigShellToolType,
}

macro_rules! model_family {
@@ -64,19 +62,20 @@ macro_rules! model_family {
        $slug:expr, $family:expr $(, $key:ident : $value:expr )* $(,)?
    ) => {{
        // defaults
        #[allow(unused_mut)]
        let mut mf = ModelFamily {
            slug: $slug.to_string(),
            family: $family.to_string(),
            needs_special_apply_patch_instructions: false,
            supports_reasoning_summaries: false,
            reasoning_summary_format: ReasoningSummaryFormat::None,
            uses_local_shell_tool: false,
            supports_parallel_tool_calls: false,
            apply_patch_tool_type: None,
            base_instructions: BASE_INSTRUCTIONS.to_string(),
            experimental_supported_tools: Vec::new(),
            effective_context_window_percent: 95,
            support_verbosity: false,
            shell_type: ConfigShellToolType::Default,
        };
        // apply overrides
        $(
@@ -105,8 +104,8 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
        model_family!(
            slug, "codex-mini-latest",
            supports_reasoning_summaries: true,
            uses_local_shell_tool: true,
            needs_special_apply_patch_instructions: true,
            shell_type: ConfigShellToolType::Local,
        )
    } else if slug.starts_with("gpt-4.1") {
        model_family!(
@@ -119,6 +118,8 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
        model_family!(slug, "gpt-4o", needs_special_apply_patch_instructions: true)
    } else if slug.starts_with("gpt-3.5") {
        model_family!(slug, "gpt-3.5", needs_special_apply_patch_instructions: true)
    } else if slug.starts_with("porcupine") {
        model_family!(slug, "porcupine", shell_type: ConfigShellToolType::UnifiedExec)
    } else if slug.starts_with("test-gpt-5-codex") {
        model_family!(
            slug, slug,
@@ -181,12 +182,12 @@ pub fn derive_default_model_family(model: &str) -> ModelFamily {
        needs_special_apply_patch_instructions: false,
        supports_reasoning_summaries: false,
        reasoning_summary_format: ReasoningSummaryFormat::None,
        uses_local_shell_tool: false,
        supports_parallel_tool_calls: false,
        apply_patch_tool_type: None,
        base_instructions: BASE_INSTRUCTIONS.to_string(),
        experimental_supported_tools: Vec::new(),
        effective_context_window_percent: 95,
        support_verbosity: false,
        shell_type: ConfigShellToolType::Default,
    }
}

@@ -14,8 +14,11 @@ use crate::exec::StdoutStream;
use crate::exec::execute_exec_env;
use crate::landlock::create_linux_sandbox_command_args;
use crate::protocol::SandboxPolicy;
#[cfg(target_os = "macos")]
use crate::seatbelt::MACOS_PATH_TO_SEATBELT_EXECUTABLE;
#[cfg(target_os = "macos")]
use crate::seatbelt::create_seatbelt_command_args;
#[cfg(target_os = "macos")]
use crate::spawn::CODEX_SANDBOX_ENV_VAR;
use crate::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use crate::tools::sandboxing::SandboxablePreference;
@@ -56,6 +59,9 @@ pub enum SandboxPreference {
pub(crate) enum SandboxTransformError {
    #[error("missing codex-linux-sandbox executable path")]
    MissingLinuxSandboxExecutable,
    #[cfg(not(target_os = "macos"))]
    #[error("seatbelt sandbox is only available on macOS")]
    SeatbeltUnavailable,
}

#[derive(Default)]
@@ -107,6 +113,7 @@ impl SandboxManager {

        let (command, sandbox_env, arg0_override) = match sandbox {
            SandboxType::None => (command, HashMap::new(), None),
            #[cfg(target_os = "macos")]
            SandboxType::MacosSeatbelt => {
                let mut seatbelt_env = HashMap::new();
                seatbelt_env.insert(CODEX_SANDBOX_ENV_VAR.to_string(), "seatbelt".to_string());
@@ -117,6 +124,8 @@ impl SandboxManager {
                full_command.append(&mut args);
                (full_command, seatbelt_env, None)
            }
            #[cfg(not(target_os = "macos"))]
            SandboxType::MacosSeatbelt => return Err(SandboxTransformError::SeatbeltUnavailable),
            SandboxType::LinuxSeccomp => {
                let exe = codex_linux_sandbox_exe
                    .ok_or(SandboxTransformError::MissingLinuxSandboxExecutable)?;

@@ -1,4 +1,7 @@
#![cfg(target_os = "macos")]

use std::collections::HashMap;
use std::ffi::CStr;
use std::path::Path;
use std::path::PathBuf;
use tokio::process::Child;
@@ -9,6 +12,7 @@ use crate::spawn::StdioPolicy;
use crate::spawn::spawn_child_async;

const MACOS_SEATBELT_BASE_POLICY: &str = include_str!("seatbelt_base_policy.sbpl");
const MACOS_SEATBELT_NETWORK_POLICY: &str = include_str!("seatbelt_network_policy.sbpl");

/// When working with `sandbox-exec`, only consider `sandbox-exec` in `/usr/bin`
/// to defend against an attacker trying to inject a malicious version on the
@@ -44,27 +48,24 @@ pub(crate) fn create_seatbelt_command_args(
    sandbox_policy: &SandboxPolicy,
    sandbox_policy_cwd: &Path,
) -> Vec<String> {
    let (file_write_policy, extra_cli_args) = {
    let (file_write_policy, file_write_dir_params) = {
        if sandbox_policy.has_full_disk_write_access() {
            // Allegedly, this is more permissive than `(allow file-write*)`.
            (
                r#"(allow file-write* (regex #"^/"))"#.to_string(),
                Vec::<String>::new(),
                Vec::new(),
            )
        } else {
            let writable_roots = sandbox_policy.get_writable_roots_with_cwd(sandbox_policy_cwd);

            let mut writable_folder_policies: Vec<String> = Vec::new();
            let mut cli_args: Vec<String> = Vec::new();
            let mut file_write_params = Vec::new();

            for (index, wr) in writable_roots.iter().enumerate() {
                // Canonicalize to avoid mismatches like /var vs /private/var on macOS.
                let canonical_root = wr.root.canonicalize().unwrap_or_else(|_| wr.root.clone());
                let root_param = format!("WRITABLE_ROOT_{index}");
                cli_args.push(format!(
                    "-D{root_param}={}",
                    canonical_root.to_string_lossy()
                ));
                file_write_params.push((root_param.clone(), canonical_root));

                if wr.read_only_subpaths.is_empty() {
                    writable_folder_policies.push(format!("(subpath (param \"{root_param}\"))"));
@@ -76,9 +77,9 @@ pub(crate) fn create_seatbelt_command_args(
                    for (subpath_index, ro) in wr.read_only_subpaths.iter().enumerate() {
                        let canonical_ro = ro.canonicalize().unwrap_or_else(|_| ro.clone());
                        let ro_param = format!("WRITABLE_ROOT_{index}_RO_{subpath_index}");
                        cli_args.push(format!("-D{ro_param}={}", canonical_ro.to_string_lossy()));
                        require_parts
                            .push(format!("(require-not (subpath (param \"{ro_param}\")))"));
                        file_write_params.push((ro_param, canonical_ro));
                    }
                    let policy_component = format!("(require-all {} )", require_parts.join(" "));
                    writable_folder_policies.push(policy_component);
@@ -86,13 +87,13 @@ pub(crate) fn create_seatbelt_command_args(
            }

            if writable_folder_policies.is_empty() {
                ("".to_string(), Vec::<String>::new())
                ("".to_string(), Vec::new())
            } else {
                let file_write_policy = format!(
                    "(allow file-write*\n{}\n)",
                    writable_folder_policies.join(" ")
                );
                (file_write_policy, cli_args)
                (file_write_policy, file_write_params)
            }
        }
    };
@@ -105,7 +106,7 @@ pub(crate) fn create_seatbelt_command_args(

    // TODO(mbolin): apply_patch calls must also honor the SandboxPolicy.
    let network_policy = if sandbox_policy.has_full_network_access() {
        "(allow network-outbound)\n(allow network-inbound)\n(allow system-socket)"
        MACOS_SEATBELT_NETWORK_POLICY
    } else {
        ""
    };
@@ -114,17 +115,49 @@ pub(crate) fn create_seatbelt_command_args(
        "{MACOS_SEATBELT_BASE_POLICY}\n{file_read_policy}\n{file_write_policy}\n{network_policy}"
    );

    let dir_params = [file_write_dir_params, macos_dir_params()].concat();

    let mut seatbelt_args: Vec<String> = vec!["-p".to_string(), full_policy];
    seatbelt_args.extend(extra_cli_args);
    let definition_args = dir_params
        .into_iter()
        .map(|(key, value)| format!("-D{key}={value}", value = value.to_string_lossy()));
    seatbelt_args.extend(definition_args);
    seatbelt_args.push("--".to_string());
    seatbelt_args.extend(command);
    seatbelt_args
}
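
The net effect is that every dynamic path now travels to `sandbox-exec` as a `-D` parameter definition rather than being spliced into the policy text. Roughly, for one writable root, the assembled argv takes this shape (a sketch only; all paths and the echo command are illustrative, real values come from the SandboxPolicy and confstr):

// Illustrative shape of the returned Vec<String>, not real output.
let args: Vec<String> = vec![
    "-p".to_string(),
    "<base policy + file-read + file-write + network SBPL>".to_string(),
    "-DWRITABLE_ROOT_0=/private/tmp/project".to_string(),
    "-DDARWIN_USER_CACHE_DIR=/private/var/folders/zz/example/C".to_string(),
    "--".to_string(),
    "/bin/echo".to_string(),
    "hello".to_string(),
];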

/// Wraps libc::confstr to return a String.
fn confstr(name: libc::c_int) -> Option<String> {
    let mut buf = vec![0_i8; (libc::PATH_MAX as usize) + 1];
    let len = unsafe { libc::confstr(name, buf.as_mut_ptr(), buf.len()) };
    if len == 0 {
        return None;
    }
    // confstr guarantees NUL-termination when len > 0.
    let cstr = unsafe { CStr::from_ptr(buf.as_ptr()) };
    cstr.to_str().ok().map(ToString::to_string)
}

/// Wraps confstr to return a canonicalized PathBuf.
fn confstr_path(name: libc::c_int) -> Option<PathBuf> {
    let s = confstr(name)?;
    let path = PathBuf::from(s);
    path.canonicalize().ok().or(Some(path))
}

fn macos_dir_params() -> Vec<(String, PathBuf)> {
    if let Some(p) = confstr_path(libc::_CS_DARWIN_USER_CACHE_DIR) {
        return vec![("DARWIN_USER_CACHE_DIR".to_string(), p)];
    }
    vec![]
}

#[cfg(test)]
mod tests {
    use super::MACOS_SEATBELT_BASE_POLICY;
    use super::create_seatbelt_command_args;
    use super::macos_dir_params;
    use crate::protocol::SandboxPolicy;
    use pretty_assertions::assert_eq;
    use std::fs;
@@ -134,11 +167,6 @@ mod tests {

    #[test]
    fn create_seatbelt_args_with_read_only_git_subpath() {
        if cfg!(target_os = "windows") {
            // /tmp does not exist on Windows, so skip this test.
            return;
        }

        // Create a temporary workspace with two writable roots: one containing
        // a top-level .git directory and one without it.
        let tmp = TempDir::new().expect("tempdir");
@@ -199,6 +227,12 @@ mod tests {
            format!("-DWRITABLE_ROOT_2={}", cwd.to_string_lossy()),
        ];

        expected_args.extend(
            macos_dir_params()
                .into_iter()
                .map(|(key, value)| format!("-D{key}={value}", value = value.to_string_lossy())),
        );

        expected_args.extend(vec![
            "--".to_string(),
            "/bin/echo".to_string(),
@@ -210,11 +244,6 @@ mod tests {

    #[test]
    fn create_seatbelt_args_for_cwd_as_git_repo() {
        if cfg!(target_os = "windows") {
            // /tmp does not exist on Windows, so skip this test.
            return;
        }

        // Create a temporary workspace with two writable roots: one containing
        // a top-level .git directory and one without it.
        let tmp = TempDir::new().expect("tempdir");
@@ -292,6 +321,12 @@ mod tests {
            expected_args.push(format!("-DWRITABLE_ROOT_2={p}"));
        }

        expected_args.extend(
            macos_dir_params()
                .into_iter()
                .map(|(key, value)| format!("-D{key}={value}", value = value.to_string_lossy())),
        );

        expected_args.extend(vec![
            "--".to_string(),
            "/bin/echo".to_string(),
codex-rs/core/src/seatbelt_network_policy.sbpl (new file, 30 lines)
@@ -0,0 +1,30 @@
; when network access is enabled, these policies are added after those in seatbelt_base_policy.sbpl
; Ref https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/network.sb;drc=f8f264d5e4e7509c913f4c60c2639d15905a07e4

(allow network-outbound)
(allow network-inbound)
(allow system-socket)

(allow mach-lookup
  ; Used to look up the _CS_DARWIN_USER_CACHE_DIR in the sandbox.
  (global-name "com.apple.bsd.dirhelper")
  (global-name "com.apple.system.opendirectoryd.membership")

  ; Communicate with the security server for TLS certificate information.
  (global-name "com.apple.SecurityServer")
  (global-name "com.apple.networkd")
  (global-name "com.apple.ocspd")
  (global-name "com.apple.trustd.agent")

  ; Read network configuration.
  (global-name "com.apple.SystemConfiguration.DNSConfiguration")
  (global-name "com.apple.SystemConfiguration.configd")
)

(allow sysctl-read
  (sysctl-name-regex #"^net.routetable")
)

(allow file-write*
  (subpath (param "DARWIN_USER_CACHE_DIR"))
)
@@ -64,24 +64,31 @@ pub(crate) async fn spawn_child_async(
    // any child processes that were spawned as part of a `"shell"` tool call
    // to also be terminated.

    // This relies on prctl(2), so it only works on Linux.
    #[cfg(target_os = "linux")]
    #[cfg(unix)]
    unsafe {
        let parent_pid = libc::getpid();
        cmd.pre_exec(move || {
            // This prctl call effectively requests, "deliver SIGTERM when my
            // current parent dies."
            if libc::prctl(libc::PR_SET_PDEATHSIG, libc::SIGTERM) == -1 {
        cmd.pre_exec(|| {
            if libc::setpgid(0, 0) == -1 {
                return Err(std::io::Error::last_os_error());
            }

            // Though if there was a race condition and this pre_exec() block is
            // run _after_ the parent (i.e., the Codex process) has already
            // exited, then parent will be the closest configured "subreaper"
            // ancestor process, or PID 1 (init). If the Codex process has exited
            // already, so should the child process.
            if libc::getppid() != parent_pid {
                libc::raise(libc::SIGTERM);
            // This relies on prctl(2), so it only works on Linux.
            #[cfg(target_os = "linux")]
            {
                // This prctl call effectively requests, "deliver SIGTERM when my
                // current parent dies."
                if libc::prctl(libc::PR_SET_PDEATHSIG, libc::SIGTERM) == -1 {
                    return Err(std::io::Error::last_os_error());
                }

                // Though if there was a race condition and this pre_exec() block is
                // run _after_ the parent (i.e., the Codex process) has already
                // exited, then parent will be the closest configured "subreaper"
                // ancestor process, or PID 1 (init). If the Codex process has exited
                // already, so should the child process.
                if libc::getppid() != parent_pid {
                    libc::raise(libc::SIGTERM);
                }
            }
            Ok(())
        });
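
The `setpgid(0, 0)` call above is what makes `kill_child_process_group` in exec.rs effective: the spawned command becomes the leader of a fresh process group, so a single `killpg` also reaches any grandchildren it forked. A minimal standalone sketch of the pattern, assuming a Unix target, the `libc` crate, and `bash` on PATH (error handling is deliberately thin):

use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() -> std::io::Result<()> {
    let mut cmd = Command::new("bash");
    // The backgrounded sleep would survive a plain kill of `bash`,
    // but not a kill of the whole process group.
    cmd.args(["-c", "sleep 1000 & wait"]);
    unsafe {
        cmd.pre_exec(|| {
            // Make the child the leader of its own process group.
            if libc::setpgid(0, 0) == -1 {
                return Err(std::io::Error::last_os_error());
            }
            Ok(())
        });
    }
    let mut child = cmd.spawn()?;
    let pgid = unsafe { libc::getpgid(child.id() as libc::pid_t) };
    if pgid != -1 {
        // SIGKILL the group: bash and its backgrounded sleep both die.
        unsafe { libc::killpg(pgid, libc::SIGKILL) };
    }
    child.wait()?; // reap the killed child
    Ok(())
}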

@@ -75,12 +75,12 @@ async fn start_review_conversation(
    // Avoid loading project docs; reviewer only needs findings
    sub_agent_config.project_doc_max_bytes = 0;
    // Carry over review-only feature restrictions so the delegate cannot
    // re-enable blocked tools (web search, view image, streamable shell).
    // re-enable blocked tools (web search, view image).
    sub_agent_config
        .features
        .disable(crate::features::Feature::WebSearchRequest)
        .disable(crate::features::Feature::ViewImageTool)
        .disable(crate::features::Feature::StreamableShell);
        .disable(crate::features::Feature::ViewImageTool);

    // Set explicit review rubric for the sub-agent
    sub_agent_config.base_instructions = Some(crate::REVIEW_PROMPT.to_string());
    (run_codex_conversation_one_shot(
@@ -1,8 +1,5 @@
use std::time::Duration;

use async_trait::async_trait;
use serde::Deserialize;
use serde::Serialize;

use crate::function_tool::FunctionCallError;
use crate::protocol::EventMsg;
@@ -163,11 +160,7 @@ impl ToolHandler for UnifiedExecHandler {
            .await;
        }

        let content = serialize_response(&response).map_err(|err| {
            FunctionCallError::RespondToModel(format!(
                "failed to serialize unified exec output: {err:?}"
            ))
        })?;
        let content = format_response(&response);

        Ok(ToolOutput::Function {
            content,
@@ -177,32 +170,30 @@ impl ToolHandler for UnifiedExecHandler {
    }
}

#[derive(Serialize)]
struct SerializedUnifiedExecResponse<'a> {
    chunk_id: &'a str,
    wall_time_seconds: f64,
    output: &'a str,
    #[serde(skip_serializing_if = "Option::is_none")]
    session_id: Option<i32>,
    #[serde(skip_serializing_if = "Option::is_none")]
    exit_code: Option<i32>,
    #[serde(skip_serializing_if = "Option::is_none")]
    original_token_count: Option<usize>,
}
fn format_response(response: &UnifiedExecResponse) -> String {
    let mut sections = Vec::new();

fn serialize_response(response: &UnifiedExecResponse) -> Result<String, serde_json::Error> {
    let payload = SerializedUnifiedExecResponse {
        chunk_id: &response.chunk_id,
        wall_time_seconds: duration_to_seconds(response.wall_time),
        output: &response.output,
        session_id: response.session_id,
        exit_code: response.exit_code,
        original_token_count: response.original_token_count,
    };
    if !response.chunk_id.is_empty() {
        sections.push(format!("Chunk ID: {}", response.chunk_id));
    }

    serde_json::to_string(&payload)
}
    let wall_time_seconds = response.wall_time.as_secs_f64();
    sections.push(format!("Wall time: {wall_time_seconds:.4} seconds"));

fn duration_to_seconds(duration: Duration) -> f64 {
    duration.as_secs_f64()
    if let Some(exit_code) = response.exit_code {
        sections.push(format!("Process exited with code {exit_code}"));
    }

    if let Some(session_id) = response.session_id {
        sections.push(format!("Process running with session ID {session_id}"));
    }

    if let Some(original_token_count) = response.original_token_count {
        sections.push(format!("Original token count: {original_token_count}"));
    }

    sections.push("Output:".to_string());
    sections.push(response.output.clone());

    sections.join("\n")
}
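
So instead of a JSON object, the model now receives a labeled plain-text block. Working directly from `format_response`, a short command that has already exited would render like this (values illustrative):

Chunk ID: chunk-0
Wall time: 0.0421 seconds
Process exited with code 0
Output:
hello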

@@ -15,11 +15,11 @@ use serde_json::json;
use std::collections::BTreeMap;
use std::collections::HashMap;

#[derive(Debug, Clone)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum ConfigShellToolType {
    Default,
    Local,
    Streamable,
    UnifiedExec,
}

#[derive(Debug, Clone)]
@@ -28,7 +28,6 @@ pub(crate) struct ToolsConfig {
    pub apply_patch_tool_type: Option<ApplyPatchToolType>,
    pub web_search_request: bool,
    pub include_view_image_tool: bool,
    pub experimental_unified_exec_tool: bool,
    pub experimental_supported_tools: Vec<String>,
}

@@ -43,18 +42,14 @@ impl ToolsConfig {
            model_family,
            features,
        } = params;
        let use_streamable_shell_tool = features.enabled(Feature::StreamableShell);
        let experimental_unified_exec_tool = features.enabled(Feature::UnifiedExec);
        let include_apply_patch_tool = features.enabled(Feature::ApplyPatchFreeform);
        let include_web_search_request = features.enabled(Feature::WebSearchRequest);
        let include_view_image_tool = features.enabled(Feature::ViewImageTool);

        let shell_type = if use_streamable_shell_tool {
            ConfigShellToolType::Streamable
        } else if model_family.uses_local_shell_tool {
            ConfigShellToolType::Local
        let shell_type = if features.enabled(Feature::UnifiedExec) {
            ConfigShellToolType::UnifiedExec
        } else {
            ConfigShellToolType::Default
            model_family.shell_type.clone()
        };

        let apply_patch_tool_type = match model_family.apply_patch_tool_type {
@@ -74,7 +69,6 @@ impl ToolsConfig {
            apply_patch_tool_type,
            web_search_request: include_web_search_request,
            include_view_image_tool,
            experimental_unified_exec_tool,
            experimental_supported_tools: model_family.experimental_supported_tools.clone(),
        }
    }
@@ -162,7 +156,8 @@ fn create_exec_command_tool() -> ToolSpec {
        "yield_time_ms".to_string(),
        JsonSchema::Number {
            description: Some(
                "How long to wait (in milliseconds) for output before yielding.".to_string(),
                "Maximum time in milliseconds to wait for output after writing the input (default: 1000)."
                    .to_string(),
            ),
        },
    );
@@ -252,7 +247,9 @@ fn create_shell_tool() -> ToolSpec {
    properties.insert(
        "timeout_ms".to_string(),
        JsonSchema::Number {
            description: Some("The timeout for the command in milliseconds".to_string()),
            description: Some(
                "The timeout for the command in milliseconds (default: 1000).".to_string(),
            ),
        },
    );

@@ -886,15 +883,6 @@ pub(crate) fn build_specs(
    let mcp_handler = Arc::new(McpHandler);
    let mcp_resource_handler = Arc::new(McpResourceHandler);

    let use_unified_exec = config.experimental_unified_exec_tool
        || matches!(config.shell_type, ConfigShellToolType::Streamable);

    if use_unified_exec {
        builder.push_spec(create_exec_command_tool());
        builder.push_spec(create_write_stdin_tool());
        builder.register_handler("exec_command", unified_exec_handler.clone());
        builder.register_handler("write_stdin", unified_exec_handler);
    }
    match &config.shell_type {
        ConfigShellToolType::Default => {
            builder.push_spec(create_shell_tool());
@@ -902,8 +890,11 @@ pub(crate) fn build_specs(
        ConfigShellToolType::Local => {
            builder.push_spec(ToolSpec::LocalShell {});
        }
        ConfigShellToolType::Streamable => {
            // Already handled by use_unified_exec.
        ConfigShellToolType::UnifiedExec => {
            builder.push_spec(create_exec_command_tool());
            builder.push_spec(create_write_stdin_tool());
            builder.register_handler("exec_command", unified_exec_handler.clone());
            builder.register_handler("write_stdin", unified_exec_handler);
        }
    }

@@ -1045,7 +1036,7 @@ mod tests {
        match config.shell_type {
            ConfigShellToolType::Default => Some("shell"),
            ConfigShellToolType::Local => Some("local_shell"),
            ConfigShellToolType::Streamable => None,
            ConfigShellToolType::UnifiedExec => None,
        }
    }

@@ -1095,7 +1086,7 @@ mod tests {
    }

    #[test]
    fn test_full_toolset_specs_for_gpt5_codex() {
    fn test_full_toolset_specs_for_gpt5_codex_unified_exec_web_search() {
        let model_family = find_family_for_model("gpt-5-codex")
            .expect("gpt-5-codex should be a valid model family");
        let mut features = Features::with_defaults();
@@ -1129,7 +1120,6 @@ mod tests {
        for spec in [
            create_exec_command_tool(),
            create_write_stdin_tool(),
            create_shell_tool(),
            create_list_mcp_resources_tool(),
            create_list_mcp_resource_templates_tool(),
            create_read_mcp_resource_tool(),
@@ -1156,32 +1146,106 @@ mod tests {
        }
    }

    #[test]
    fn test_build_specs_contains_expected_basics() {
        let model_family = find_family_for_model("codex-mini-latest")
            .expect("codex-mini-latest should be a valid model family");
        let mut features = Features::with_defaults();
        features.enable(Feature::WebSearchRequest);
        features.enable(Feature::UnifiedExec);
    fn assert_model_tools(model_family: &str, features: &Features, expected_tools: &[&str]) {
        let model_family = find_family_for_model(model_family)
            .unwrap_or_else(|| panic!("{model_family} should be a valid model family"));
        let config = ToolsConfig::new(&ToolsConfigParams {
            model_family: &model_family,
            features: &features,
            features,
        });
        let (tools, _) = build_specs(&config, Some(HashMap::new())).build();
        let tool_names = tools.iter().map(|t| t.spec.name()).collect::<Vec<_>>();
        assert_eq!(
            &tool_names,
        assert_eq!(&tool_names, &expected_tools,);
    }

    #[test]
    fn test_build_specs_gpt5_codex_default() {
        assert_model_tools(
            "gpt-5-codex",
            &Features::with_defaults(),
            &[
                "shell",
                "list_mcp_resources",
                "list_mcp_resource_templates",
|
||||
"read_mcp_resource",
|
||||
"update_plan",
|
||||
"apply_patch",
|
||||
"view_image",
|
||||
],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_build_specs_gpt5_codex_unified_exec_web_search() {
|
||||
assert_model_tools(
|
||||
"gpt-5-codex",
|
||||
Features::with_defaults()
|
||||
.enable(Feature::UnifiedExec)
|
||||
.enable(Feature::WebSearchRequest),
|
||||
&[
|
||||
"exec_command",
|
||||
"write_stdin",
|
||||
"list_mcp_resources",
|
||||
"list_mcp_resource_templates",
|
||||
"read_mcp_resource",
|
||||
"update_plan",
|
||||
"apply_patch",
|
||||
"web_search",
|
||||
"view_image",
|
||||
],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_codex_mini_defaults() {
|
||||
assert_model_tools(
|
||||
"codex-mini-latest",
|
||||
&Features::with_defaults(),
|
||||
&[
|
||||
"local_shell",
|
||||
"list_mcp_resources",
|
||||
"list_mcp_resource_templates",
|
||||
"read_mcp_resource",
|
||||
"update_plan",
|
||||
"view_image",
|
||||
],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_porcupine_defaults() {
|
||||
assert_model_tools(
|
||||
"porcupine",
|
||||
&Features::with_defaults(),
|
||||
&[
|
||||
"exec_command",
|
||||
"write_stdin",
|
||||
"list_mcp_resources",
|
||||
"list_mcp_resource_templates",
|
||||
"read_mcp_resource",
|
||||
"update_plan",
|
||||
"view_image",
|
||||
],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_codex_mini_unified_exec_web_search() {
|
||||
assert_model_tools(
|
||||
"codex-mini-latest",
|
||||
Features::with_defaults()
|
||||
.enable(Feature::UnifiedExec)
|
||||
.enable(Feature::WebSearchRequest),
|
||||
&[
|
||||
"exec_command",
|
||||
"write_stdin",
|
||||
"list_mcp_resources",
|
||||
"list_mcp_resource_templates",
|
||||
"read_mcp_resource",
|
||||
"update_plan",
|
||||
"web_search",
|
||||
"view_image",
|
||||
]
|
||||
],
|
||||
);
|
||||
}

@@ -13,7 +13,7 @@ use core_test_support::responses::mount_sse_sequence;
use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event_with_timeout;
use core_test_support::wait_for_event;
use regex_lite::Regex;
use serde_json::json;

@@ -42,8 +42,6 @@ async fn interrupt_long_running_tool_emits_turn_aborted() {

let codex = test_codex().build(&server).await.unwrap().codex;

let wait_timeout = Duration::from_secs(5);

// Kick off a turn that triggers the function call.
codex
.submit(Op::UserInput {

@@ -55,22 +53,12 @@ async fn interrupt_long_running_tool_emits_turn_aborted() {
.unwrap();

// Wait until the exec begins to avoid a race, then interrupt.
wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecCommandBegin(_)),
wait_timeout,
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandBegin(_))).await;

codex.submit(Op::Interrupt).await.unwrap();

// Expect TurnAborted soon after.
wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TurnAborted(_)),
wait_timeout,
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TurnAborted(_))).await;
}

/// After an interrupt we expect the next request to the model to include both

@@ -107,8 +95,6 @@ async fn interrupt_tool_records_history_entries() {
let fixture = test_codex().build(&server).await.unwrap();
let codex = Arc::clone(&fixture.codex);

let wait_timeout = Duration::from_millis(100);

codex
.submit(Op::UserInput {
items: vec![UserInput::Text {

@@ -118,22 +104,12 @@
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecCommandBegin(_)),
wait_timeout,
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandBegin(_))).await;

tokio::time::sleep(Duration::from_secs_f32(0.1)).await;
codex.submit(Op::Interrupt).await.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TurnAborted(_)),
wait_timeout,
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TurnAborted(_))).await;

codex
.submit(Op::UserInput {

@@ -144,12 +120,7 @@ async fn interrupt_tool_records_history_entries() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
wait_timeout,
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

let requests = response_mock.requests();
assert!(

@@ -23,14 +23,12 @@ use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use core_test_support::wait_for_event_with_timeout;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use std::env;
use std::fs;
use std::path::PathBuf;
use std::time::Duration;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;

@@ -423,16 +421,12 @@ async fn expect_exec_approval(
test: &TestCodex,
expected_command: &[String],
) -> ExecApprovalRequestEvent {
let event = wait_for_event_with_timeout(
&test.codex,
|event| {
matches!(
event,
EventMsg::ExecApprovalRequest(_) | EventMsg::TaskComplete(_)
)
},
Duration::from_secs(5),
)
let event = wait_for_event(&test.codex, |event| {
matches!(
event,
EventMsg::ExecApprovalRequest(_) | EventMsg::TaskComplete(_)
)
})
.await;

match event {

@@ -449,16 +443,12 @@ async fn expect_patch_approval(
test: &TestCodex,
expected_call_id: &str,
) -> ApplyPatchApprovalRequestEvent {
let event = wait_for_event_with_timeout(
&test.codex,
|event| {
matches!(
event,
EventMsg::ApplyPatchApprovalRequest(_) | EventMsg::TaskComplete(_)
)
},
Duration::from_secs(5),
)
let event = wait_for_event(&test.codex, |event| {
matches!(
event,
EventMsg::ApplyPatchApprovalRequest(_) | EventMsg::TaskComplete(_)
)
})
.await;

match event {

@@ -472,16 +462,12 @@ async fn expect_patch_approval(
}

async fn wait_for_completion_without_approval(test: &TestCodex) {
let event = wait_for_event_with_timeout(
&test.codex,
|event| {
matches!(
event,
EventMsg::ExecApprovalRequest(_) | EventMsg::TaskComplete(_)
)
},
Duration::from_secs(5),
)
let event = wait_for_event(&test.codex, |event| {
matches!(
event,
EventMsg::ExecApprovalRequest(_) | EventMsg::TaskComplete(_)
)
})
.await;

match event {

@@ -32,7 +32,6 @@ use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use core_test_support::wait_for_event_with_timeout;
use futures::StreamExt;
use serde_json::json;
use std::io::Write;

@@ -1117,26 +1116,20 @@ async fn context_window_error_sets_total_tokens_to_model_window() -> anyhow::Res
})
.await?;

use std::time::Duration;

let token_event = wait_for_event_with_timeout(
&codex,
|event| {
matches!(
event,
EventMsg::TokenCount(payload)
if payload.info.as_ref().is_some_and(|info| {
info.model_context_window == Some(info.total_token_usage.total_tokens)
&& info.total_token_usage.total_tokens > 0
})
)
},
Duration::from_secs(5),
)
let token_event = wait_for_event(&codex, |event| {
matches!(
event,
EventMsg::TokenCount(payload)
if payload.info.as_ref().is_some_and(|info| {
info.model_context_window == Some(info.total_token_usage.total_tokens)
&& info.total_token_usage.total_tokens > 0
})
)
})
.await;

let EventMsg::TokenCount(token_payload) = token_event else {
unreachable!("wait_for_event_with_timeout returned unexpected event");
unreachable!("wait_for_event returned unexpected event");
};

let info = token_payload

@@ -18,12 +18,11 @@ async fn emits_deprecation_notice_for_legacy_feature_flag() -> anyhow::Result<()
let server = start_mock_server().await;

let mut builder = test_codex().with_config(|config| {
config.features.enable(Feature::StreamableShell);
config.features.record_legacy_usage_force(
"experimental_use_exec_command_tool",
Feature::StreamableShell,
);
config.use_experimental_streamable_shell_tool = true;
config.features.enable(Feature::UnifiedExec);
config
.features
.record_legacy_usage_force("use_experimental_unified_exec_tool", Feature::UnifiedExec);
config.use_experimental_unified_exec_tool = true;
});

let TestCodex { codex, .. } = builder.build(&server).await?;

@@ -37,13 +36,13 @@ async fn emits_deprecation_notice_for_legacy_feature_flag() -> anyhow::Result<()
let DeprecationNoticeEvent { summary, details } = notice;
assert_eq!(
summary,
"`experimental_use_exec_command_tool` is deprecated. Use `streamable_shell` instead."
"`use_experimental_unified_exec_tool` is deprecated. Use `unified_exec` instead."
.to_string(),
);
assert_eq!(
details.as_deref(),
Some(
"Enable it with `--enable streamable_shell` or `[features].streamable_shell` in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
"Enable it with `--enable unified_exec` or `[features].unified_exec` in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
),
);

@@ -26,6 +26,7 @@ mod model_overrides;
mod model_tools;
mod otel;
mod prompt_caching;
mod quota_exceeded;
mod read_file;
mod resume;
mod review;

@@ -60,7 +60,6 @@ async fn collect_tool_identifiers_for_model(model: &str) -> Vec<String> {
config.features.disable(Feature::ApplyPatchFreeform);
config.features.disable(Feature::ViewImageTool);
config.features.disable(Feature::WebSearchRequest);
config.features.disable(Feature::StreamableShell);
config.features.disable(Feature::UnifiedExec);

let conversation_manager =

@@ -14,8 +14,7 @@ use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event_with_timeout;
use std::time::Duration;
use core_test_support::wait_for_event;
use tracing_test::traced_test;

use core_test_support::responses::ev_local_shell_call;

@@ -38,12 +37,7 @@ async fn responses_api_emits_api_request_event() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -84,12 +78,7 @@ async fn process_sse_emits_tracing_for_output_item() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -128,12 +117,7 @@ async fn process_sse_emits_failed_event_on_parse_error() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -173,12 +157,7 @@ async fn process_sse_records_failed_event_when_stream_closes_without_completed()
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -230,12 +209,7 @@ async fn process_sse_failed_event_records_response_error_message() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -285,12 +259,7 @@ async fn process_sse_failed_event_logs_parse_error() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -335,12 +304,7 @@ async fn process_sse_failed_event_logs_missing_error() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -385,12 +349,7 @@ async fn process_sse_failed_event_logs_response_completed_parse_error() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -440,12 +399,7 @@ async fn process_sse_emits_completed_telemetry() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

logs_assert(|lines: &[&str]| {
lines

@@ -500,12 +454,7 @@ async fn handle_response_item_records_tool_result_for_custom_tool_call() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(|lines: &[&str]| {
let line = lines

@@ -564,12 +513,7 @@ async fn handle_response_item_records_tool_result_for_function_call() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(|lines: &[&str]| {
let line = lines

@@ -638,12 +582,7 @@ async fn handle_response_item_records_tool_result_for_local_shell_missing_ids()
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(|lines: &[&str]| {
let line = lines

@@ -696,12 +635,7 @@ async fn handle_response_item_records_tool_result_for_local_shell_call() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(|lines: &[&str]| {
let line = lines

@@ -794,12 +728,7 @@ async fn handle_container_exec_autoapprove_from_config_records_tool_decision() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(tool_decision_assertion(
"auto_config_call",

@@ -840,12 +769,7 @@ async fn handle_container_exec_user_approved_records_tool_decision() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecApprovalRequest(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;

codex
.submit(Op::ExecApproval {

@@ -855,12 +779,7 @@ async fn handle_container_exec_user_approved_records_tool_decision() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(tool_decision_assertion(
"user_approved_call",

@@ -902,12 +821,7 @@ async fn handle_container_exec_user_approved_for_session_records_tool_decision()
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecApprovalRequest(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;

codex
.submit(Op::ExecApproval {

@@ -917,12 +831,7 @@
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(tool_decision_assertion(
"user_approved_session_call",

@@ -964,12 +873,7 @@ async fn handle_sandbox_error_user_approves_retry_records_tool_decision() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecApprovalRequest(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;

codex
.submit(Op::ExecApproval {

@@ -979,12 +883,7 @@
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(tool_decision_assertion(
"sandbox_retry_call",

@@ -1026,12 +925,7 @@ async fn handle_container_exec_user_denies_records_tool_decision() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecApprovalRequest(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;

codex
.submit(Op::ExecApproval {

@@ -1041,12 +935,7 @@
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(tool_decision_assertion(
"user_denied_call",

@@ -1088,12 +977,7 @@ async fn handle_sandbox_error_user_approves_for_session_records_tool_decision()
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecApprovalRequest(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;

codex
.submit(Op::ExecApproval {

@@ -1103,12 +987,7 @@
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(tool_decision_assertion(
"sandbox_session_call",

@@ -1150,12 +1029,7 @@ async fn handle_sandbox_error_user_denies_records_tool_decision() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::ExecApprovalRequest(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecApprovalRequest(_))).await;

codex
.submit(Op::ExecApproval {

@@ -1165,12 +1039,7 @@
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TokenCount(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TokenCount(_))).await;

logs_assert(tool_decision_assertion(
"sandbox_deny_call",

72
codex-rs/core/tests/suite/quota_exceeded.rs
Normal file
@@ -0,0 +1,72 @@
use anyhow::Result;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_protocol::user_input::UserInput;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::mount_sse_once;
use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use serde_json::json;

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn quota_exceeded_emits_single_error_event() -> Result<()> {
skip_if_no_network!(Ok(()));

let server = start_mock_server().await;
let mut builder = test_codex();

mount_sse_once(
&server,
sse(vec![
ev_response_created("resp-1"),
json!({
"type": "response.failed",
"response": {
"id": "resp-1",
"error": {
"code": "insufficient_quota",
"message": "You exceeded your current quota, please check your plan and billing details."
}
}
}),
]),
)
.await;

let test = builder.build(&server).await?;

test.codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "quota?".into(),
}],
})
.await
.unwrap();

let mut error_events = 0;

loop {
let event = wait_for_event(&test.codex, |_| true).await;

match event {
EventMsg::Error(err) => {
error_events += 1;
assert_eq!(
err.message,
"Quota exceeded. Check your plan and billing details."
);
}
EventMsg::TaskComplete(_) => break,
_ => {}
}
}

assert_eq!(error_events, 1, "expected exactly one Codex:Error event");

Ok(())
}

@@ -23,7 +23,6 @@ use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id_from_str;
use core_test_support::skip_if_no_network;
use core_test_support::wait_for_event;
use core_test_support::wait_for_event_with_timeout;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use std::sync::Arc;

@@ -246,37 +245,31 @@ async fn review_filters_agent_message_related_events() {
let mut saw_exited = false;

// Drain until TaskComplete; assert filtered events never surface.
wait_for_event_with_timeout(
&codex,
|event| match event {
EventMsg::TaskComplete(_) => true,
EventMsg::EnteredReviewMode(_) => {
saw_entered = true;
false
wait_for_event(&codex, |event| match event {
EventMsg::TaskComplete(_) => true,
EventMsg::EnteredReviewMode(_) => {
saw_entered = true;
false
}
EventMsg::ExitedReviewMode(_) => {
saw_exited = true;
false
}
// The following must be filtered by review flow
EventMsg::AgentMessageContentDelta(_) => {
panic!("unexpected AgentMessageContentDelta surfaced during review")
}
EventMsg::AgentMessageDelta(_) => {
panic!("unexpected AgentMessageDelta surfaced during review")
}
EventMsg::ItemCompleted(ev) => match &ev.item {
codex_protocol::items::TurnItem::AgentMessage(_) => {
panic!("unexpected ItemCompleted for TurnItem::AgentMessage surfaced during review")
}
EventMsg::ExitedReviewMode(_) => {
saw_exited = true;
false
}
// The following must be filtered by review flow
EventMsg::AgentMessageContentDelta(_) => {
panic!("unexpected AgentMessageContentDelta surfaced during review")
}
EventMsg::AgentMessageDelta(_) => {
panic!("unexpected AgentMessageDelta surfaced during review")
}
EventMsg::ItemCompleted(ev) => match &ev.item {
codex_protocol::items::TurnItem::AgentMessage(_) => {
panic!(
"unexpected ItemCompleted for TurnItem::AgentMessage surfaced during review"
)
}
_ => false,
},
_ => false,
},
tokio::time::Duration::from_secs(5),
)
_ => false,
})
.await;
assert!(saw_entered && saw_exited, "missing review lifecycle events");

@@ -335,25 +328,21 @@ async fn review_does_not_emit_agent_message_on_structured_output() {
// Drain events until TaskComplete; ensure none are AgentMessage.
let mut saw_entered = false;
let mut saw_exited = false;
wait_for_event_with_timeout(
&codex,
|event| match event {
EventMsg::TaskComplete(_) => true,
EventMsg::AgentMessage(_) => {
panic!("unexpected AgentMessage during review with structured output")
}
EventMsg::EnteredReviewMode(_) => {
saw_entered = true;
false
}
EventMsg::ExitedReviewMode(_) => {
saw_exited = true;
false
}
_ => false,
},
tokio::time::Duration::from_secs(5),
)
wait_for_event(&codex, |event| match event {
EventMsg::TaskComplete(_) => true,
EventMsg::AgentMessage(_) => {
panic!("unexpected AgentMessage during review with structured output")
}
EventMsg::EnteredReviewMode(_) => {
saw_entered = true;
false
}
EventMsg::ExitedReviewMode(_) => {
saw_exited = true;
false
}
_ => false,
})
.await;
assert!(saw_entered && saw_exited, "missing review lifecycle events");

@@ -25,7 +25,6 @@ use core_test_support::responses::mount_sse_once_match;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use core_test_support::wait_for_event_with_timeout;
use escargot::CargoBuild;
use mcp_types::ContentBlock;
use serde_json::Value;

@@ -125,11 +124,9 @@ async fn stdio_server_round_trip() -> anyhow::Result<()> {
})
.await?;

let begin_event = wait_for_event_with_timeout(
&fixture.codex,
|ev| matches!(ev, EventMsg::McpToolCallBegin(_)),
Duration::from_secs(10),
)
let begin_event = wait_for_event(&fixture.codex, |ev| {
matches!(ev, EventMsg::McpToolCallBegin(_))
})
.await;

let EventMsg::McpToolCallBegin(begin) = begin_event else {

@@ -268,11 +265,9 @@ async fn stdio_image_responses_round_trip() -> anyhow::Result<()> {
.await?;

// Wait for tool begin/end and final completion.
let begin_event = wait_for_event_with_timeout(
&fixture.codex,
|ev| matches!(ev, EventMsg::McpToolCallBegin(_)),
Duration::from_secs(10),
)
let begin_event = wait_for_event(&fixture.codex, |ev| {
matches!(ev, EventMsg::McpToolCallBegin(_))
})
.await;
let EventMsg::McpToolCallBegin(begin) = begin_event else {
unreachable!("begin");

@@ -465,11 +460,9 @@ async fn stdio_image_completions_round_trip() -> anyhow::Result<()> {
})
.await?;

let begin_event = wait_for_event_with_timeout(
&fixture.codex,
|ev| matches!(ev, EventMsg::McpToolCallBegin(_)),
Duration::from_secs(10),
)
let begin_event = wait_for_event(&fixture.codex, |ev| {
matches!(ev, EventMsg::McpToolCallBegin(_))
})
.await;
let EventMsg::McpToolCallBegin(begin) = begin_event else {
unreachable!("begin");

@@ -609,11 +602,9 @@ async fn stdio_server_propagates_whitelisted_env_vars() -> anyhow::Result<()> {
})
.await?;

let begin_event = wait_for_event_with_timeout(
&fixture.codex,
|ev| matches!(ev, EventMsg::McpToolCallBegin(_)),
Duration::from_secs(10),
)
let begin_event = wait_for_event(&fixture.codex, |ev| {
matches!(ev, EventMsg::McpToolCallBegin(_))
})
.await;

let EventMsg::McpToolCallBegin(begin) = begin_event else {

@@ -762,11 +753,9 @@ async fn streamable_http_tool_call_round_trip() -> anyhow::Result<()> {
})
.await?;

let begin_event = wait_for_event_with_timeout(
&fixture.codex,
|ev| matches!(ev, EventMsg::McpToolCallBegin(_)),
Duration::from_secs(10),
)
let begin_event = wait_for_event(&fixture.codex, |ev| {
matches!(ev, EventMsg::McpToolCallBegin(_))
})
.await;

let EventMsg::McpToolCallBegin(begin) = begin_event else {

@@ -947,11 +936,9 @@ async fn streamable_http_with_oauth_round_trip() -> anyhow::Result<()> {
})
.await?;

let begin_event = wait_for_event_with_timeout(
&fixture.codex,
|ev| matches!(ev, EventMsg::McpToolCallBegin(_)),
Duration::from_secs(10),
)
let begin_event = wait_for_event(&fixture.codex, |ev| {
matches!(ev, EventMsg::McpToolCallBegin(_))
})
.await;

let EventMsg::McpToolCallBegin(begin) = begin_event else {

@@ -1,5 +1,3 @@
use std::time::Duration;

use codex_core::ModelProviderInfo;
use codex_core::WireApi;
use codex_core::protocol::EventMsg;

@@ -9,7 +7,7 @@ use core_test_support::load_sse_fixture_with_id;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event_with_timeout;
use core_test_support::wait_for_event;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;

@@ -96,19 +94,9 @@ async fn continue_after_stream_error() {
.unwrap();

// Expect an Error followed by TaskComplete so the session is released.
wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::Error(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::Error(_))).await;

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;

// 2) Second turn: now send another prompt that should succeed using the
// mock server SSE stream. If the agent failed to clear the running task on

@@ -122,10 +110,5 @@ async fn continue_after_stream_error() {
.await
.unwrap();

wait_for_event_with_timeout(
&codex,
|ev| matches!(ev, EventMsg::TaskComplete(_)),
Duration::from_secs(5),
)
.await;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}

@@ -1,8 +1,6 @@
//! Verifies that the agent retries when the SSE stream terminates before
//! delivering a `response.completed` event.

use std::time::Duration;

use codex_core::ModelProviderInfo;
use codex_core::WireApi;
use codex_core::protocol::EventMsg;

@@ -13,7 +11,7 @@ use core_test_support::load_sse_fixture_with_id;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event_with_timeout;
use core_test_support::wait_for_event;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::Request;

@@ -103,10 +101,5 @@ async fn retries_on_early_close() {
.unwrap();

// Wait until TaskComplete (should succeed after retry).
wait_for_event_with_timeout(
&codex,
|event| matches!(event, EventMsg::TaskComplete(_)),
Duration::from_secs(10),
)
.await;
wait_for_event(&codex, |event| matches!(event, EventMsg::TaskComplete(_))).await;
}

@@ -1,7 +1,8 @@
#![cfg(not(target_os = "windows"))]

use std::collections::HashMap;
use std::sync::OnceLock;

use anyhow::Context;
use anyhow::Result;
use codex_core::features::Feature;
use codex_core::protocol::AskForApproval;

@@ -10,6 +11,7 @@ use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::user_input::UserInput;
use core_test_support::assert_regex_match;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_function_call;

@@ -23,7 +25,7 @@ use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use core_test_support::wait_for_event_match;
use core_test_support::wait_for_event_with_timeout;
use regex_lite::Regex;
use serde_json::Value;
use serde_json::json;

@@ -35,7 +37,95 @@ fn extract_output_text(item: &Value) -> Option<&str> {
})
}

fn collect_tool_outputs(bodies: &[Value]) -> Result<HashMap<String, Value>> {
#[derive(Debug)]
struct ParsedUnifiedExecOutput {
chunk_id: Option<String>,
wall_time_seconds: f64,
session_id: Option<i32>,
exit_code: Option<i32>,
original_token_count: Option<usize>,
output: String,
}

#[allow(clippy::expect_used)]
fn parse_unified_exec_output(raw: &str) -> Result<ParsedUnifiedExecOutput> {
static OUTPUT_REGEX: OnceLock<Regex> = OnceLock::new();
let regex = OUTPUT_REGEX.get_or_init(|| {
Regex::new(concat!(
r#"(?s)^(?:Total output lines: \d+\n\n)?"#,
r#"(?:Chunk ID: (?P<chunk_id>[^\n]+)\n)?"#,
r#"Wall time: (?P<wall_time>-?\d+(?:\.\d+)?) seconds\n"#,
r#"(?:Process exited with code (?P<exit_code>-?\d+)\n)?"#,
r#"(?:Process running with session ID (?P<session_id>-?\d+)\n)?"#,
r#"(?:Original token count: (?P<original_token_count>\d+)\n)?"#,
r#"Output:\n?(?P<output>.*)$"#,
))
.expect("valid unified exec output regex")
});

let cleaned = raw.trim_matches('\r');
let captures = regex
.captures(cleaned)
.ok_or_else(|| anyhow::anyhow!("missing Output section in unified exec output"))?;

let chunk_id = captures
.name("chunk_id")
.map(|value| value.as_str().to_string());

let wall_time_seconds = captures
.name("wall_time")
.expect("wall_time group present")
.as_str()
.parse::<f64>()
.context("failed to parse wall time seconds")?;

let exit_code = captures
.name("exit_code")
.map(|value| {
value
.as_str()
.parse::<i32>()
.context("failed to parse exit code from unified exec output")
})
.transpose()?;

let session_id = captures
.name("session_id")
.map(|value| {
value
.as_str()
.parse::<i32>()
.context("failed to parse session id from unified exec output")
})
.transpose()?;

let original_token_count = captures
.name("original_token_count")
.map(|value| {
value
.as_str()
.parse::<usize>()
.context("failed to parse original token count from unified exec output")
})
.transpose()?;

let output = captures
.name("output")
.expect("output group present")
.as_str()
.to_string();

Ok(ParsedUnifiedExecOutput {
chunk_id,
wall_time_seconds,
session_id,
exit_code,
original_token_count,
output,
})
}
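A minimal usage sketch for the parser above, assuming an input in the format shown at the top of this diff (not part of the PR itself):

```rust
// Happy path: a completed process reports an exit code and no session ID.
let parsed = parse_unified_exec_output(
    "Wall time: 0.0123 seconds\nProcess exited with code 0\nOutput:\nhello\n",
)?;
assert_eq!(parsed.exit_code, Some(0));
assert!(parsed.session_id.is_none());
assert!(parsed.output.contains("hello"));
```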

fn collect_tool_outputs(bodies: &[Value]) -> Result<HashMap<String, ParsedUnifiedExecOutput>> {
let mut outputs = HashMap::new();
for body in bodies {
if let Some(items) = body.get("input").and_then(Value::as_array) {

@@ -50,8 +140,8 @@ fn collect_tool_outputs(bodies: &[Value]) -> Result<HashMap<String, Value>> {
if trimmed.is_empty() {
continue;
}
let parsed: Value = serde_json::from_str(trimmed).map_err(|err| {
anyhow::anyhow!("failed to parse tool output content {trimmed:?}: {err}")
let parsed = parse_unified_exec_output(content).with_context(|| {
format!("failed to parse unified exec output for {call_id}")
})?;
outputs.insert(call_id.to_string(), parsed);
}

@@ -391,8 +481,6 @@ async fn unified_exec_emits_output_delta_for_write_stdin() -> Result<()> {

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_skips_begin_event_for_empty_input() -> Result<()> {
use tokio::time::Duration;

skip_if_no_network!(Ok(()));
skip_if_sandbox!(Ok(()));

@@ -468,7 +556,7 @@ async fn unified_exec_skips_begin_event_for_empty_input() -> Result<()> {

let mut begin_events = Vec::new();
loop {
let event_msg = wait_for_event_with_timeout(&codex, |_| true, Duration::from_secs(2)).await;
let event_msg = wait_for_event(&codex, |_| true).await;
match event_msg {
EventMsg::ExecCommandBegin(event) => begin_events.push(event),
EventMsg::TaskComplete(_) => break,

@@ -556,51 +644,38 @@ async fn exec_command_reports_chunk_and_exit_metadata() -> Result<()> {
.get(call_id)
.expect("missing exec_command metadata output");

let chunk_id = metadata
.get("chunk_id")
.and_then(Value::as_str)
.expect("missing chunk_id");
let chunk_id = metadata.chunk_id.as_ref().expect("missing chunk_id");
assert_eq!(chunk_id.len(), 6, "chunk id should be 6 hex characters");
assert!(
chunk_id.chars().all(|c| c.is_ascii_hexdigit()),
"chunk id should be hexadecimal: {chunk_id}"
);

let wall_time = metadata
.get("wall_time_seconds")
.and_then(Value::as_f64)
.unwrap_or_default();
let wall_time = metadata.wall_time_seconds;
assert!(
wall_time >= 0.0,
"wall_time_seconds should be non-negative, got {wall_time}"
);

assert!(
metadata.get("session_id").is_none(),
metadata.session_id.is_none(),
"exec_command for a completed process should not include session_id"
);

let exit_code = metadata
.get("exit_code")
.and_then(Value::as_i64)
.expect("expected exit_code");
let exit_code = metadata.exit_code.expect("expected exit_code");
assert_eq!(exit_code, 0, "expected successful exit");

let output_text = metadata
.get("output")
.and_then(Value::as_str)
.expect("missing output text");
let output_text = &metadata.output;
assert!(
output_text.contains("tokens truncated"),
"expected truncation notice in output: {output_text:?}"
);

let original_tokens = metadata
.get("original_token_count")
.and_then(Value::as_u64)
.expect("missing original_token_count");
.original_token_count
.expect("missing original_token_count") as usize;
assert!(
original_tokens as usize > 6,
original_tokens > 6,
"original token count should exceed max_output_tokens"
);

@@ -711,39 +786,34 @@ async fn write_stdin_returns_exit_metadata_and_clears_session() -> Result<()> {
.get(start_call_id)
.expect("missing start output for exec_command");
let session_id = start_output
.get("session_id")
.and_then(Value::as_i64)
.session_id
.expect("expected session id from exec_command");
assert!(
session_id >= 0,
"session_id should be non-negative, got {session_id}"
);
assert!(
start_output.get("exit_code").is_none(),
start_output.exit_code.is_none(),
"initial exec_command should not include exit_code while session is running"
);

let send_output = outputs
.get(send_call_id)
.expect("missing write_stdin echo output");
let echoed = send_output
.get("output")
.and_then(Value::as_str)
.unwrap_or_default();
let echoed = send_output.output.as_str();
assert!(
echoed.contains("hello unified exec"),
"expected echoed output from cat, got {echoed:?}"
);
let echoed_session = send_output
.get("session_id")
.and_then(Value::as_i64)
.session_id
.expect("write_stdin should return session id while process is running");
assert_eq!(
echoed_session, session_id,
"write_stdin should reuse existing session id"
);
assert!(
send_output.get("exit_code").is_none(),
send_output.exit_code.is_none(),
"write_stdin should not include exit_code while process is running"
);

@@ -751,18 +821,17 @@ async fn write_stdin_returns_exit_metadata_and_clears_session() -> Result<()> {
.get(exit_call_id)
.expect("missing exit metadata output");
assert!(
exit_output.get("session_id").is_none(),
exit_output.session_id.is_none(),
"session_id should be omitted once the process exits"
);
let exit_code = exit_output
.get("exit_code")
.and_then(Value::as_i64)
.exit_code
.expect("expected exit_code after sending EOF");
assert_eq!(exit_code, 0, "cat should exit cleanly after EOF");

let exit_chunk = exit_output
.get("chunk_id")
.and_then(Value::as_str)
.chunk_id
.as_ref()
.expect("missing chunk id for exit output");
assert!(
exit_chunk.chars().all(|c| c.is_ascii_hexdigit()),

@@ -964,26 +1033,18 @@ async fn unified_exec_reuses_session_via_stdin() -> Result<()> {
let start_output = outputs
.get(first_call_id)
.expect("missing first unified_exec output");
let session_id = start_output["session_id"].as_i64().unwrap_or_default();
let session_id = start_output.session_id.unwrap_or_default();
assert!(
session_id >= 0,
"expected session id in first unified_exec response"
);
assert!(
start_output["output"]
.as_str()
.unwrap_or_default()
.is_empty()
);
assert!(start_output.output.is_empty());

let reuse_output = outputs
.get(second_call_id)
.expect("missing reused unified_exec output");
assert_eq!(
reuse_output["session_id"].as_i64().unwrap_or_default(),
session_id
);
let echoed = reuse_output["output"].as_str().unwrap_or_default();
assert_eq!(reuse_output.session_id.unwrap_or_default(), session_id);
let echoed = reuse_output.output.as_str();
assert!(
echoed.contains("hello unified exec"),
"expected echoed output, got {echoed:?}"

@@ -1100,7 +1161,7 @@ PY
let start_output = outputs
.get(first_call_id)
.expect("missing initial unified_exec output");
let session_id = start_output["session_id"].as_i64().unwrap_or_default();
let session_id = start_output.session_id.unwrap_or_default();
assert!(
session_id >= 0,
"expected session id from initial unified_exec response"

@@ -1109,7 +1170,7 @@ PY
let poll_output = outputs
.get(second_call_id)
.expect("missing poll unified_exec output");
let poll_text = poll_output["output"].as_str().unwrap_or_default();
let poll_text = poll_output.output.as_str();
assert!(
poll_text.contains("TAIL-MARKER"),
"expected poll output to contain tail marker, got {poll_text:?}"

@@ -1209,16 +1270,11 @@ async fn unified_exec_timeout_and_followup_poll() -> Result<()> {
let outputs = collect_tool_outputs(&bodies)?;

let first_output = outputs.get(first_call_id).expect("missing timeout output");
assert_eq!(first_output["session_id"], 0);
assert!(
first_output["output"]
.as_str()
.unwrap_or_default()
.is_empty()
);
assert_eq!(first_output.session_id, Some(0));
assert!(first_output.output.is_empty());

let poll_output = outputs.get(second_call_id).expect("missing poll output");
let output_text = poll_output["output"].as_str().unwrap_or_default();
let output_text = poll_output.output.as_str();
assert!(
output_text.contains("ready"),
"expected ready output, got {output_text:?}"

@@ -1226,3 +1282,88 @@ async fn unified_exec_timeout_and_followup_poll() -> Result<()> {

Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_formats_large_output_summary() -> Result<()> {
skip_if_no_network!(Ok(()));
skip_if_sandbox!(Ok(()));

let server = start_mock_server().await;

let mut builder = test_codex().with_config(|config| {
config.features.enable(Feature::UnifiedExec);
});
let TestCodex {
codex,
cwd,
session_configured,
..
} = builder.build(&server).await?;

let script = r#"python3 - <<'PY'
for i in range(300):
print(f"line-{i}")
PY
"#;

let call_id = "uexec-large-output";
let args = serde_json::json!({
"cmd": script,
"yield_time_ms": 500,
});

let responses = vec![
sse(vec![
ev_response_created("resp-1"),
ev_function_call(call_id, "exec_command", &serde_json::to_string(&args)?),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;

let session_model = session_configured.model.clone();

codex
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "summarize large output".into(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: session_model,
effort: None,
summary: ReasoningSummary::Auto,
})
.await?;

wait_for_event(&codex, |event| matches!(event, EventMsg::TaskComplete(_))).await;

let requests = server.received_requests().await.expect("recorded requests");
assert!(!requests.is_empty(), "expected at least one POST request");

let bodies = requests
.iter()
.map(|req| req.body_json::<Value>().expect("request json"))
.collect::<Vec<_>>();

let outputs = collect_tool_outputs(&bodies)?;
let large_output = outputs.get(call_id).expect("missing large output summary");

assert_regex_match(
concat!(
r"(?s)",
r"line-0.*?",
r"\[\.{3} omitted \d+ of \d+ lines \.{3}\].*?",
r"line-299",
),
&large_output.output,
);

Ok(())
}

@@ -21,7 +21,8 @@ At a glance:
- `getUserSavedConfig`, `setDefaultModel`, `getUserAgent`, `userInfo`
- `model/list` → enumerate available models and reasoning options
- Auth
- `loginApiKey`, `loginChatGpt`, `cancelLoginChatGpt`, `logoutChatGpt`, `getAuthStatus`
- `account/read`, `account/login/start`, `account/login/cancel`, `account/logout`, `account/rateLimits/read`
- notifications: `account/login/completed`, `account/updated`, `account/rateLimits/updated`
- Utilities
- `gitDiffToRemote`, `execOneOffCommand`
- Approvals (server → client requests)

@@ -113,11 +114,7 @@ The client must reply with `{ decision: "allow" | "deny" }` for each request.

## Auth helpers

For ChatGPT or API‑key based auth flows, the server exposes helpers:

- `loginApiKey { apiKey }`
- `loginChatGpt` → returns `{ loginId, authUrl }`; browser completes flow; then `loginChatGptComplete` notification follows
- `cancelLoginChatGpt { loginId }`, `logoutChatGpt`, `getAuthStatus { includeToken?, refreshToken? }`
For the complete request/response shapes and flow examples, see the [“Auth endpoints (v2)” section in the app‑server README](../app-server/README.md#auth-endpoints-v2).
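As a rough illustration only (the JSON-RPC framing and field values here are a sketch; the linked README is authoritative for the exact shapes), a ChatGPT login exchange proceeds along these lines:

```jsonc
// client → server
{ "id": 7, "method": "loginChatGpt" }
// server → client: direct the user to authUrl in a browser
{ "id": 7, "result": { "loginId": "login-123", "authUrl": "https://example.com/oauth" } }
// later, a server → client notification once the browser flow finishes
{ "method": "loginChatGptComplete", "params": { "loginId": "login-123" } }
```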
## Example: start and send a message

@@ -46,6 +46,8 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
{
let status = Command::new(prettier_bin)
.arg("--write")
.arg("--log-level")
.arg("warn")
.args(ts_files.iter().map(|p| p.as_os_str()))
.status()
.with_context(|| format!("Failed to invoke Prettier at {}", prettier_bin.display()))?;
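The added flags make this roughly equivalent to running `prettier --write --log-level warn <generated .ts files>` by hand, so Prettier reports only warnings and errors instead of listing every file it rewrites.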

@@ -298,7 +298,7 @@ pub struct ShellToolCallParams {
pub command: Vec<String>,
pub workdir: Option<String>,

/// This is the maximum time in milliseconds that the command is allowed to run.
/// Maximum time in milliseconds that the command is allowed to run (defaults to 1_000 ms when omitted).
#[serde(alias = "timeout")]
pub timeout_ms: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
|
||||
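// The `#[serde(alias = "timeout")]` above is what keeps older callers working:
// both spellings of the key deserialize into `timeout_ms`. A self-contained
// sketch (the `Params` struct here is a stand-in for `ShellToolCallParams`,
// assuming only serde/serde_json as dependencies):

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Params {
    command: Vec<String>,
    /// Accepts the current `timeout_ms` key as well as the legacy `timeout` key.
    #[serde(alias = "timeout")]
    timeout_ms: Option<u64>,
}

fn main() -> serde_json::Result<()> {
    // Legacy spelling still lands in `timeout_ms` thanks to the alias.
    let legacy: Params = serde_json::from_str(r#"{"command": ["ls"], "timeout": 500}"#)?;
    assert_eq!(legacy.timeout_ms, Some(500));

    // The current spelling works unchanged; omitting both leaves `None`,
    // and the executor then falls back to its documented default.
    let current: Params = serde_json::from_str(r#"{"command": ["ls"], "timeout_ms": 500}"#)?;
    assert_eq!(current.timeout_ms, Some(500));
    Ok(())
}
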
@@ -21,6 +21,11 @@ def parse_args(argv: list[str]) -> argparse.Namespace:
        action="store_true",
        help="Print the version that would be used and exit before making changes.",
    )
    parser.add_argument(
        "--promote-alpha",
        metavar="VERSION",
        help="Promote an existing alpha tag (e.g., 0.56.0-alpha.5) by using its merge-base with main as the base commit.",
    )

    group = parser.add_mutually_exclusive_group()
    group.add_argument(
@@ -43,26 +48,43 @@ def parse_args(argv: list[str]) -> argparse.Namespace:
        args.publish_alpha
        or args.publish_release
        or args.emergency_version_override
        or args.promote_alpha
    ):
        parser.error(
            "Must specify --publish-alpha, --publish-release, --promote-alpha, or --emergency-version-override."
        )
    return args


def main(argv: list[str]) -> int:
    args = parse_args(argv)

    # Strip the leading "v" if present.
    promote_alpha = args.promote_alpha
    if promote_alpha and promote_alpha.startswith("v"):
        promote_alpha = promote_alpha[1:]

    try:
        if promote_alpha:
            version = derive_release_version_from_alpha(promote_alpha)
        elif args.emergency_version_override:
            version = args.emergency_version_override
        else:
            version = determine_version(args)
        print(f"Publishing version {version}")
        if promote_alpha:
            base_commit = get_promote_alpha_base_commit(promote_alpha)
            if args.dry_run:
                print(
                    f"Would publish version {version} using base commit {base_commit} derived from rust-v{promote_alpha}."
                )
                return 0
        elif args.dry_run:
            return 0

        if not promote_alpha:
            print("Fetching branch head...")
            base_commit = get_branch_head()
        print(f"Base commit: {base_commit}")
        print("Fetching commit tree...")
        base_tree = get_commit_tree(base_commit)
@@ -130,6 +152,39 @@ def get_branch_head() -> str:
    raise ReleaseError("Unable to determine branch head.") from error


def get_promote_alpha_base_commit(alpha_version: str) -> str:
    tag_name = f"rust-v{alpha_version}"
    tag_commit_sha = get_tag_commit_sha(tag_name)
    return get_merge_base_with_main(tag_commit_sha)


def get_tag_commit_sha(tag_name: str) -> str:
    response = run_gh_api(f"/repos/{REPO}/git/refs/tags/{tag_name}")
    try:
        sha = response["object"]["sha"]
        obj_type = response["object"]["type"]
    except KeyError as error:
        raise ReleaseError(f"Unable to resolve tag {tag_name}.") from error
    while obj_type == "tag":
        tag_response = run_gh_api(f"/repos/{REPO}/git/tags/{sha}")
        try:
            sha = tag_response["object"]["sha"]
            obj_type = tag_response["object"]["type"]
        except KeyError as error:
            raise ReleaseError(f"Unable to resolve annotated tag {tag_name}.") from error
    if obj_type != "commit":
        raise ReleaseError(f"Tag {tag_name} does not reference a commit.")
    return sha


def get_merge_base_with_main(commit_sha: str) -> str:
    response = run_gh_api(f"/repos/{REPO}/compare/main...{commit_sha}")
    try:
        return response["merge_base_commit"]["sha"]
    except KeyError as error:
        raise ReleaseError("Unable to determine merge base with main.") from error


def get_commit_tree(commit_sha: str) -> str:
    response = run_gh_api(f"/repos/{REPO}/git/commits/{commit_sha}")
    try:
@@ -309,5 +364,12 @@ def format_version(major: int, minor: int, patch: int) -> str:
    return f"{major}.{minor}.{patch}"


def derive_release_version_from_alpha(alpha_version: str) -> str:
    match = re.match(r"^(\d+)\.(\d+)\.(\d+)-alpha\.(\d+)$", alpha_version)
    if match is None:
        raise ReleaseError(f"Unexpected alpha version format: {alpha_version}")
    return f"{match.group(1)}.{match.group(2)}.{match.group(3)}"


if __name__ == "__main__":
    sys.exit(main(sys.argv))

@@ -27,6 +27,7 @@ base64 = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
clap = { workspace = true, features = ["derive"] }
codex-ansi-escape = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-arg0 = { workspace = true }
codex-common = { workspace = true, features = [
    "cli",
@@ -34,17 +35,14 @@ codex-common = { workspace = true, features = [
    "sandbox_summary",
] }
codex-core = { workspace = true }
codex-feedback = { workspace = true }
codex-file-search = { workspace = true }
codex-login = { workspace = true }
codex-ollama = { workspace = true }
codex-protocol = { workspace = true }
color-eyre = { workspace = true }
crossterm = { workspace = true, features = ["bracketed-paste", "event-stream"] }
derive_more = { workspace = true, features = ["is_variant"] }
diffy = { workspace = true }
dirs = { workspace = true }
dunce = { workspace = true }
@@ -52,6 +50,7 @@ image = { workspace = true, features = ["jpeg", "png"] }
itertools = { workspace = true }
lazy_static = { workspace = true }
mcp-types = { workspace = true }
opentelemetry-appender-tracing = { workspace = true }
pathdiff = { workspace = true }
pulldown-cmark = { workspace = true }
rand = { workspace = true }
@@ -71,8 +70,6 @@ strum_macros = { workspace = true }
supports-color = { workspace = true }
tempfile = { workspace = true }
textwrap = { workspace = true }
tokio = { workspace = true, features = [
    "io-std",
    "macros",
@@ -85,11 +82,14 @@ toml = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tracing-appender = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
tree-sitter-bash = { workspace = true }
tree-sitter-highlight = { workspace = true }
unicode-segmentation = { workspace = true }
unicode-width = { workspace = true }
url = { workspace = true }

codex-windows-sandbox = { workspace = true }

[target.'cfg(unix)'.dependencies]
libc = { workspace = true }

@@ -105,5 +105,5 @@ chrono = { workspace = true, features = ["serde"] }
insta = { workspace = true }
pretty_assertions = { workspace = true }
rand = { workspace = true }
serial_test = { workspace = true }
vt100 = { workspace = true }

@@ -79,6 +79,9 @@ pub(crate) struct App {
    pub(crate) feedback: codex_feedback::CodexFeedback,
    /// Set when the user confirms an update; propagated on exit.
    pub(crate) pending_update_action: Option<UpdateAction>,

    // One-shot suppression of the next world-writable scan after user confirmation.
    skip_world_writable_scan_once: bool,
}

impl App {
@@ -168,8 +171,30 @@ impl App {
            backtrack: BacktrackState::default(),
            feedback: feedback.clone(),
            pending_update_action: None,
            skip_world_writable_scan_once: false,
        };

        // On startup, if Auto mode (workspace-write) is active, warn about world-writable dirs on Windows.
        #[cfg(target_os = "windows")]
        {
            let should_check = codex_core::get_platform_sandbox().is_some()
                && matches!(
                    app.config.sandbox_policy,
                    codex_core::protocol::SandboxPolicy::WorkspaceWrite { .. }
                )
                && !app
                    .config
                    .notices
                    .hide_world_writable_warning
                    .unwrap_or(false);
            if should_check {
                let cwd = app.config.cwd.clone();
                let env_map: std::collections::HashMap<String, String> = std::env::vars().collect();
                let tx = app.app_event_tx.clone();
                Self::spawn_world_writable_scan(cwd, env_map, tx, false);
            }
        }

        #[cfg(not(debug_assertions))]
        if let Some(latest_version) = upgrade_version {
            app.handle_event(
@@ -360,6 +385,10 @@ impl App {
            AppEvent::OpenFullAccessConfirmation { preset } => {
                self.chat_widget.open_full_access_confirmation(preset);
            }
            AppEvent::OpenWorldWritableWarningConfirmation { preset } => {
                self.chat_widget
                    .open_world_writable_warning_confirmation(preset);
            }
            AppEvent::OpenFeedbackNote {
                category,
                include_logs,
@@ -418,11 +447,45 @@ impl App {
                self.chat_widget.set_approval_policy(policy);
            }
            AppEvent::UpdateSandboxPolicy(policy) => {
                #[cfg(target_os = "windows")]
                let policy_is_workspace_write = matches!(
                    policy,
                    codex_core::protocol::SandboxPolicy::WorkspaceWrite { .. }
                );

                self.chat_widget.set_sandbox_policy(policy);

                // If sandbox policy becomes workspace-write, run the Windows world-writable scan.
                #[cfg(target_os = "windows")]
                {
                    // One-shot suppression if the user just confirmed continue.
                    if self.skip_world_writable_scan_once {
                        self.skip_world_writable_scan_once = false;
                        return Ok(true);
                    }

                    let should_check = codex_core::get_platform_sandbox().is_some()
                        && policy_is_workspace_write
                        && !self.chat_widget.world_writable_warning_hidden();
                    if should_check {
                        let cwd = self.config.cwd.clone();
                        let env_map: std::collections::HashMap<String, String> =
                            std::env::vars().collect();
                        let tx = self.app_event_tx.clone();
                        Self::spawn_world_writable_scan(cwd, env_map, tx, false);
                    }
                }
            }
            AppEvent::SkipNextWorldWritableScan => {
                self.skip_world_writable_scan_once = true;
            }
            AppEvent::UpdateFullAccessWarningAcknowledged(ack) => {
                self.chat_widget.set_full_access_warning_acknowledged(ack);
            }
            AppEvent::UpdateWorldWritableWarningAcknowledged(ack) => {
                self.chat_widget
                    .set_world_writable_warning_acknowledged(ack);
            }
            AppEvent::PersistFullAccessWarningAcknowledged => {
                if let Err(err) = ConfigEditsBuilder::new(&self.config.codex_home)
                    .set_hide_full_access_warning(true)
@@ -438,6 +501,21 @@ impl App {
                    ));
                }
            }
            AppEvent::PersistWorldWritableWarningAcknowledged => {
                if let Err(err) = ConfigEditsBuilder::new(&self.config.codex_home)
                    .set_hide_world_writable_warning(true)
                    .apply()
                    .await
                {
                    tracing::error!(
                        error = %err,
                        "failed to persist world-writable warning acknowledgement"
                    );
                    self.chat_widget.add_error_message(format!(
                        "Failed to save Auto mode warning preference: {err}"
                    ));
                }
            }
            AppEvent::OpenApprovalsPopup => {
                self.chat_widget.open_approvals_popup();
            }
@@ -541,6 +619,31 @@ impl App {
            }
        };
    }

    #[cfg(target_os = "windows")]
    fn spawn_world_writable_scan(
        cwd: PathBuf,
        env_map: std::collections::HashMap<String, String>,
        tx: AppEventSender,
        apply_preset_on_continue: bool,
    ) {
        tokio::task::spawn_blocking(move || {
            if codex_windows_sandbox::preflight_audit_everyone_writable(&cwd, &env_map).is_err() {
                if apply_preset_on_continue {
                    if let Some(preset) = codex_common::approval_presets::builtin_approval_presets()
                        .into_iter()
                        .find(|p| p.id == "auto")
                    {
                        tx.send(AppEvent::OpenWorldWritableWarningConfirmation {
                            preset: Some(preset),
                        });
                    }
                } else {
                    tx.send(AppEvent::OpenWorldWritableWarningConfirmation { preset: None });
                }
            }
        });
    }
}

#[cfg(test)]
@@ -592,6 +695,7 @@ mod tests {
            backtrack: BacktrackState::default(),
            feedback: codex_feedback::CodexFeedback::new(),
            pending_update_action: None,
            skip_world_writable_scan_once: false,
        }
    }

@@ -3,7 +3,7 @@ use std::path::PathBuf;
use std::sync::Arc;

use crate::app::App;
use crate::history_cell::SessionInfoCell;
use crate::history_cell::UserHistoryCell;
use crate::pager_overlay::Overlay;
use crate::tui;
@@ -394,13 +394,13 @@ fn nth_user_position(
fn user_positions_iter(
    cells: &[Arc<dyn crate::history_cell::HistoryCell>],
) -> impl Iterator<Item = usize> + '_ {
    let session_start_type = TypeId::of::<SessionInfoCell>();
    let user_type = TypeId::of::<UserHistoryCell>();
    let type_of = |cell: &Arc<dyn crate::history_cell::HistoryCell>| cell.as_any().type_id();

    let start = cells
        .iter()
        .rposition(|cell| type_of(cell) == session_start_type)
        .map_or(0, |idx| idx + 1);

    cells

@@ -72,7 +72,17 @@ pub(crate) enum AppEvent {
        preset: ApprovalPreset,
    },

    /// Open the Windows world-writable directories warning.
    /// If `preset` is `Some`, the confirmation will apply the provided
    /// approval/sandbox configuration on Continue; if `None`, it performs no
    /// policy change and only acknowledges/dismisses the warning.
    #[cfg_attr(not(target_os = "windows"), allow(dead_code))]
    OpenWorldWritableWarningConfirmation {
        preset: Option<ApprovalPreset>,
    },

    /// Show Windows Subsystem for Linux setup instructions for auto mode.
    #[cfg_attr(not(target_os = "windows"), allow(dead_code))]
    ShowWindowsAutoModeInstructions,

    /// Update the current approval policy in the running app and widget.
@@ -84,9 +94,21 @@ pub(crate) enum AppEvent {
    /// Update whether the full access warning prompt has been acknowledged.
    UpdateFullAccessWarningAcknowledged(bool),

    /// Update whether the world-writable directories warning has been acknowledged.
    #[cfg_attr(not(target_os = "windows"), allow(dead_code))]
    UpdateWorldWritableWarningAcknowledged(bool),

    /// Persist the acknowledgement flag for the full access warning prompt.
    PersistFullAccessWarningAcknowledged,

    /// Persist the acknowledgement flag for the world-writable directories warning.
    #[cfg_attr(not(target_os = "windows"), allow(dead_code))]
    PersistWorldWritableWarningAcknowledged,

    /// Skip the next world-writable scan (one-shot) after a user-confirmed continue.
    #[cfg_attr(not(target_os = "windows"), allow(dead_code))]
    SkipNextWorldWritableScan,

    /// Re-open the approval presets popup.
    OpenApprovalsPopup,

@@ -410,6 +410,11 @@ impl ChatComposer {
        match key_event {
            KeyEvent {
                code: KeyCode::Up, ..
            }
            | KeyEvent {
                code: KeyCode::Char('p'),
                modifiers: KeyModifiers::CONTROL,
                ..
            } => {
                popup.move_up();
                (InputResult::None, true)
@@ -417,6 +422,11 @@ impl ChatComposer {
            KeyEvent {
                code: KeyCode::Down,
                ..
            }
            | KeyEvent {
                code: KeyCode::Char('n'),
                modifiers: KeyModifiers::CONTROL,
                ..
            } => {
                popup.move_down();
                (InputResult::None, true)
@@ -584,6 +594,11 @@ impl ChatComposer {
        match key_event {
            KeyEvent {
                code: KeyCode::Up, ..
            }
            | KeyEvent {
                code: KeyCode::Char('p'),
                modifiers: KeyModifiers::CONTROL,
                ..
            } => {
                popup.move_up();
                (InputResult::None, true)
@@ -591,6 +606,11 @@ impl ChatComposer {
            KeyEvent {
                code: KeyCode::Down,
                ..
            }
            | KeyEvent {
                code: KeyCode::Char('n'),
                modifiers: KeyModifiers::CONTROL,
                ..
            } => {
                popup.move_down();
                (InputResult::None, true)
@@ -870,6 +890,11 @@ impl ChatComposer {
            KeyEvent {
                code: KeyCode::Up | KeyCode::Down,
                ..
            }
            | KeyEvent {
                code: KeyCode::Char('p') | KeyCode::Char('n'),
                modifiers: KeyModifiers::CONTROL,
                ..
            } => {
                if self
                    .history
@@ -878,6 +903,8 @@ impl ChatComposer {
                let replace_text = match key_event.code {
                    KeyCode::Up => self.history.navigate_up(&self.app_event_tx),
                    KeyCode::Down => self.history.navigate_down(&self.app_event_tx),
                    KeyCode::Char('p') => self.history.navigate_up(&self.app_event_tx),
                    KeyCode::Char('n') => self.history.navigate_down(&self.app_event_tx),
                    _ => unreachable!(),
                };
                if let Some(text) = replace_text {

@@ -2030,13 +2030,34 @@ impl ChatWidget {
                    preset: preset_clone.clone(),
                });
            })]
        } else if preset.id == "auto" {
            #[cfg(target_os = "windows")]
            {
                if codex_core::get_platform_sandbox().is_none() {
                    vec![Box::new(|tx| {
                        tx.send(AppEvent::ShowWindowsAutoModeInstructions);
                    })]
                } else if !self
                    .config
                    .notices
                    .hide_world_writable_warning
                    .unwrap_or(false)
                    && self.windows_world_writable_flagged()
                {
                    let preset_clone = preset.clone();
                    vec![Box::new(move |tx| {
                        tx.send(AppEvent::OpenWorldWritableWarningConfirmation {
                            preset: Some(preset_clone.clone()),
                        });
                    })]
                } else {
                    Self::approval_preset_actions(preset.approval, preset.sandbox.clone())
                }
            }
            #[cfg(not(target_os = "windows"))]
            {
                Self::approval_preset_actions(preset.approval, preset.sandbox.clone())
            }
        } else {
            Self::approval_preset_actions(preset.approval, preset.sandbox.clone())
        };
@@ -2078,6 +2099,19 @@ impl ChatWidget {
            })]
        }

    #[cfg(target_os = "windows")]
    fn windows_world_writable_flagged(&self) -> bool {
        use std::collections::HashMap;
        let mut env_map: HashMap<String, String> = HashMap::new();
        for (k, v) in std::env::vars() {
            env_map.insert(k, v);
        }
        match codex_windows_sandbox::preflight_audit_everyone_writable(&self.config.cwd, &env_map) {
            Ok(()) => false,
            Err(_) => true,
        }
    }

    pub(crate) fn open_full_access_confirmation(&mut self, preset: ApprovalPreset) {
        let approval = preset.approval;
        let sandbox = preset.sandbox;
@@ -2142,6 +2176,95 @@ impl ChatWidget {
        });
    }

    #[cfg(target_os = "windows")]
    pub(crate) fn open_world_writable_warning_confirmation(
        &mut self,
        preset: Option<ApprovalPreset>,
    ) {
        let (approval, sandbox) = match &preset {
            Some(p) => (Some(p.approval), Some(p.sandbox.clone())),
            None => (None, None),
        };
        let mut header_children: Vec<Box<dyn Renderable>> = Vec::new();
        let title_line = Line::from("Auto mode has unprotected directories").bold();
        let info_line = Line::from(vec![
            "Some important directories on this system are world-writable. ".into(),
            "The Windows sandbox cannot protect writes to these locations in Auto mode."
                .fg(Color::Red),
        ]);
        header_children.push(Box::new(title_line));
        header_children.push(Box::new(
            Paragraph::new(vec![info_line]).wrap(Wrap { trim: false }),
        ));
        let header = ColumnRenderable::with(header_children);

        // Build actions ensuring acknowledgement happens before applying the new sandbox policy,
        // so downstream policy-change hooks don't re-trigger the warning.
        let mut accept_actions: Vec<SelectionAction> = Vec::new();
        // Suppress the immediate re-scan once after user confirms continue.
        accept_actions.push(Box::new(|tx| {
            tx.send(AppEvent::SkipNextWorldWritableScan);
        }));
        if let (Some(approval), Some(sandbox)) = (approval, sandbox.clone()) {
            accept_actions.extend(Self::approval_preset_actions(approval, sandbox));
        }

        let mut accept_and_remember_actions: Vec<SelectionAction> = Vec::new();
        accept_and_remember_actions.push(Box::new(|tx| {
            tx.send(AppEvent::UpdateWorldWritableWarningAcknowledged(true));
            tx.send(AppEvent::PersistWorldWritableWarningAcknowledged);
        }));
        if let (Some(approval), Some(sandbox)) = (approval, sandbox) {
            accept_and_remember_actions.extend(Self::approval_preset_actions(approval, sandbox));
        }

        let deny_actions: Vec<SelectionAction> = if preset.is_some() {
            vec![Box::new(|tx| {
                tx.send(AppEvent::OpenApprovalsPopup);
            })]
        } else {
            Vec::new()
        };

        let items = vec![
            SelectionItem {
                name: "Continue".to_string(),
                description: Some("Apply Auto mode for this session".to_string()),
                actions: accept_actions,
                dismiss_on_select: true,
                ..Default::default()
            },
            SelectionItem {
                name: "Continue and don't warn again".to_string(),
                description: Some("Enable Auto mode and remember this choice".to_string()),
                actions: accept_and_remember_actions,
                dismiss_on_select: true,
                ..Default::default()
            },
            SelectionItem {
                name: "Cancel".to_string(),
                description: Some("Go back without enabling Auto mode".to_string()),
                actions: deny_actions,
                dismiss_on_select: true,
                ..Default::default()
            },
        ];

        self.bottom_pane.show_selection_view(SelectionViewParams {
            footer_hint: Some(standard_popup_hint_line()),
            items,
            header: Box::new(header),
            ..Default::default()
        });
    }

    #[cfg(not(target_os = "windows"))]
    pub(crate) fn open_world_writable_warning_confirmation(
        &mut self,
        _preset: Option<ApprovalPreset>,
    ) {
    }

    #[cfg(target_os = "windows")]
    pub(crate) fn open_windows_auto_mode_instructions(&mut self) {
        use ratatui_macros::line;
@@ -2193,6 +2316,18 @@ impl ChatWidget {
        self.config.notices.hide_full_access_warning = Some(acknowledged);
    }

    pub(crate) fn set_world_writable_warning_acknowledged(&mut self, acknowledged: bool) {
        self.config.notices.hide_world_writable_warning = Some(acknowledged);
    }

    #[cfg_attr(not(target_os = "windows"), allow(dead_code))]
    pub(crate) fn world_writable_warning_hidden(&self) -> bool {
        self.config
            .notices
            .hide_world_writable_warning
            .unwrap_or(false)
    }

    /// Set the reasoning effort in the widget's config copy.
    pub(crate) fn set_reasoning_effort(&mut self, effort: Option<ReasoningEffortConfig>) {
        self.config.model_reasoning_effort = effort;

@@ -33,6 +33,7 @@ use crossterm::style::SetBackgroundColor;
use crossterm::style::SetColors;
use crossterm::style::SetForegroundColor;
use crossterm::terminal::Clear;
use derive_more::IsVariant;
use ratatui::backend::Backend;
use ratatui::backend::ClearType;
use ratatui::buffer::Buffer;
@@ -120,8 +121,6 @@ where
    /// Last known position of the cursor. Used to find the new area when the viewport is inlined
    /// and the terminal resized.
    pub last_known_cursor_pos: Position,
}

impl<B> Drop for Terminal<B>
@@ -151,16 +150,12 @@ where
        let cursor_pos = backend.get_cursor_position()?;
        Ok(Self {
            backend,
            buffers: [Buffer::empty(Rect::ZERO), Buffer::empty(Rect::ZERO)],
            current: 0,
            hidden_cursor: false,
            viewport_area: Rect::new(0, cursor_pos.y, 0, 0),
            last_known_screen_size: screen_size,
            last_known_cursor_pos: cursor_pos,
        })
    }

@@ -173,11 +168,26 @@ where
        }
    }

    /// Gets the current buffer as a reference.
    fn current_buffer(&self) -> &Buffer {
        &self.buffers[self.current]
    }

    /// Gets the current buffer as a mutable reference.
    fn current_buffer_mut(&mut self) -> &mut Buffer {
        &mut self.buffers[self.current]
    }

    /// Gets the previous buffer as a reference.
    fn previous_buffer(&self) -> &Buffer {
        &self.buffers[1 - self.current]
    }

    /// Gets the previous buffer as a mutable reference.
    fn previous_buffer_mut(&mut self) -> &mut Buffer {
        &mut self.buffers[1 - self.current]
    }

    /// Gets the backend
    pub const fn backend(&self) -> &B {
        &self.backend
@@ -191,26 +201,12 @@ where
    /// Obtains a difference between the previous and the current buffer and passes it to the
    /// current backend for drawing.
    pub fn flush(&mut self) -> io::Result<()> {
        let updates = diff_buffers(self.previous_buffer(), self.current_buffer());
        let last_put_command = updates.iter().rfind(|command| command.is_put());
        if let Some(&DrawCommand::Put { x, y, .. }) = last_put_command {
            self.last_known_cursor_pos = Position { x, y };
        }
        draw(&mut self.backend, updates.into_iter())
    }

    /// Updates the Terminal so that internal buffers match the requested area.
@@ -224,8 +220,8 @@ where

    /// Sets the viewport area.
    pub fn set_viewport_area(&mut self, area: Rect) {
        self.current_buffer_mut().resize(area);
        self.previous_buffer_mut().resize(area);
        self.viewport_area = area;
    }

@@ -337,7 +333,7 @@ where

        self.swap_buffers();

        Backend::flush(&mut self.backend)?;

        Ok(())
    }
@@ -381,13 +377,13 @@ where
            .set_cursor_position(self.viewport_area.as_position())?;
        self.backend.clear_region(ClearType::AfterCursor)?;
        // Reset the back buffer to make sure the next update will redraw everything.
        self.previous_buffer_mut().reset();
        Ok(())
    }

    /// Clears the inactive buffer and swaps it with the current buffer
    pub fn swap_buffers(&mut self) {
        self.previous_buffer_mut().reset();
        self.current = 1 - self.current;
    }

@@ -400,13 +396,13 @@ where
use ratatui::buffer::Cell;
use unicode_width::UnicodeWidthStr;

#[derive(Debug, IsVariant)]
enum DrawCommand {
    Put { x: u16, y: u16, cell: Cell },
    ClearToEnd { x: u16, y: u16, bg: Color },
}

fn diff_buffers(a: &Buffer, b: &Buffer) -> Vec<DrawCommand> {
    let previous_buffer = &a.content;
    let next_buffer = &b.content;

@@ -455,7 +451,7 @@ fn diff_buffers<'a>(a: &'a Buffer, b: &'a Buffer) -> Vec<DrawCommand<'a>> {
            updates.push(DrawCommand::Put {
                x,
                y,
                cell: next_buffer[i].clone(),
            });
        }
    }
@@ -468,9 +464,9 @@ fn diff_buffers<'a>(a: &'a Buffer, b: &'a Buffer) -> Vec<DrawCommand<'a>> {
    updates
}

fn draw<I>(writer: &mut impl Write, commands: I) -> io::Result<()>
where
    I: Iterator<Item = DrawCommand>,
{
    let mut fg = Color::Reset;
    let mut bg = Color::Reset;

@@ -559,11 +559,28 @@ pub(crate) fn padded_emoji(emoji: &str) -> String {
    format!("{emoji}\u{200A}")
}

#[derive(Debug)]
pub struct SessionInfoCell(CompositeHistoryCell);

impl HistoryCell for SessionInfoCell {
    fn display_lines(&self, width: u16) -> Vec<Line<'static>> {
        self.0.display_lines(width)
    }

    fn desired_height(&self, width: u16) -> u16 {
        self.0.desired_height(width)
    }

    fn transcript_lines(&self, width: u16) -> Vec<Line<'static>> {
        self.0.transcript_lines(width)
    }
}

pub(crate) fn new_session_info(
    config: &Config,
    event: SessionConfiguredEvent,
    is_first_event: bool,
) -> SessionInfoCell {
    let SessionConfiguredEvent {
        model,
        reasoning_effort,
@@ -573,7 +590,7 @@ pub(crate) fn new_session_info(
        initial_messages: _,
        rollout_path: _,
    } = event;
    SessionInfoCell(if is_first_event {
        // Header box rendered as history (so it appears at the very top)
        let header = SessionHeaderHistoryCell::new(
            model,
@@ -632,7 +649,7 @@ pub(crate) fn new_session_info(
        CompositeHistoryCell {
            parts: vec![Box::new(PlainHistoryCell { lines })],
        }
    })
}

pub(crate) fn new_user_prompt(message: String) -> UserHistoryCell {

@@ -22,6 +22,8 @@ use crossterm::event::DisableFocusChange;
use crossterm::event::EnableBracketedPaste;
use crossterm::event::EnableFocusChange;
use crossterm::event::Event;
#[cfg(unix)]
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyboardEnhancementFlags;
use crossterm::event::PopKeyboardEnhancementFlags;
@@ -39,12 +41,17 @@ use ratatui::text::Line;

use crate::custom_terminal;
use crate::custom_terminal::Terminal as CustomTerminal;
#[cfg(unix)]
use crate::key_hint;
use tokio::select;
use tokio_stream::Stream;

/// A type alias for the terminal type used in this application
pub type Terminal = CustomTerminal<CrosstermBackend<Stdout>>;

#[cfg(unix)]
const SUSPEND_KEY: key_hint::KeyBinding = key_hint::ctrl(KeyCode::Char('z'));

pub fn set_modes() -> Result<()> {
    execute!(stdout(), EnableBracketedPaste)?;

@@ -217,60 +224,11 @@ impl FrameRequester {
}

impl Tui {
    pub fn new(terminal: Terminal) -> Self {
        let (frame_schedule_tx, frame_schedule_rx) = tokio::sync::mpsc::unbounded_channel();
        let (draw_tx, _) = tokio::sync::broadcast::channel(1);

        spawn_frame_scheduler(frame_schedule_rx, draw_tx.clone());

        // Detect keyboard enhancement support before any EventStream is created so the
        // crossterm poller can acquire its lock without contention.
@@ -305,16 +263,46 @@
        self.enhanced_keys_supported
    }

    /// Emit a desktop notification now if the terminal is unfocused.
    /// Returns true if a notification was posted.
    pub fn notify(&mut self, message: impl AsRef<str>) -> bool {
        if !self.terminal_focused.load(Ordering::Relaxed) {
            let _ = execute!(stdout(), PostNotification(message.as_ref().to_string()));
            true
        } else {
            false
        }
    }

    pub fn event_stream(&self) -> Pin<Box<dyn Stream<Item = TuiEvent> + Send + 'static>> {
        use tokio_stream::StreamExt;

        let mut crossterm_events = crossterm::event::EventStream::new();
        let mut draw_rx = self.draw_tx.subscribe();

        // State for tracking how we should resume from ^Z suspend.
        #[cfg(unix)]
        let resume_pending = self.resume_pending.clone();
        #[cfg(unix)]
        let alt_screen_active = self.alt_screen_active.clone();
        #[cfg(unix)]
        let suspend_cursor_y = self.suspend_cursor_y.clone();

        #[cfg(unix)]
        let suspend = move || {
            if alt_screen_active.load(Ordering::Relaxed) {
                // Disable alternate scroll when suspending from alt-screen
                let _ = execute!(stdout(), DisableAlternateScroll);
                let _ = execute!(stdout(), LeaveAlternateScreen);
                resume_pending.store(ResumeAction::RestoreAlt as u8, Ordering::Relaxed);
            } else {
                resume_pending.store(ResumeAction::RealignInline as u8, Ordering::Relaxed);
            }
            let y = suspend_cursor_y.load(Ordering::Relaxed);
            let _ = execute!(stdout(), MoveTo(0, y), crossterm::cursor::Show);
            let _ = Tui::suspend();
        };

        let terminal_focused = self.terminal_focused.clone();
        let event_stream = async_stream::stream! {
            loop {
@@ -323,31 +311,9 @@ impl Tui {
                    match event {
                        crossterm::event::Event::Key(key_event) => {
                            #[cfg(unix)]
                            if SUSPEND_KEY.is_press(key_event) {
                                suspend();
                                // We continue here after resume.
                                yield TuiEvent::Draw;
                                continue;
                            }
@@ -389,6 +355,7 @@ impl Tui {
        };
        Box::pin(event_stream)
    }

    #[cfg(unix)]
    fn suspend() -> Result<()> {
        restore()?;
@@ -397,6 +364,8 @@ impl Tui {
        Ok(())
    }

    /// When resuming from ^Z suspend, we want to put things back the way they were before suspend.
    /// We capture the action in an object so we can pass it into the event stream, since the relevant
    /// state is not accessible from inside the stream once it is running.
    #[cfg(unix)]
    fn prepare_resume_action(
        &mut self,
@@ -490,12 +459,15 @@ impl Tui {
        height: u16,
        draw_fn: impl FnOnce(&mut custom_terminal::Frame),
    ) -> Result<()> {
        // If we are resuming from ^Z, we need to prepare the resume action now so we can apply it
        // in the synchronized update.
        #[cfg(unix)]
        let mut prepared_resume =
            self.prepare_resume_action(take_resume_action(&self.resume_pending))?;

        // Precompute any viewport updates that need a cursor-position query before entering
        // the synchronized update, to avoid racing with the event reader.
        let mut pending_viewport_area: Option<ratatui::layout::Rect> = None;
        {
            let terminal = &mut self.terminal;
            let screen_size = terminal.size()?;
@@ -504,6 +476,9 @@ impl Tui {
                && let Ok(cursor_pos) = terminal.get_cursor_position()
            {
                let last_known_cursor_pos = terminal.last_known_cursor_pos;
                // If we resized AND the cursor moved, we adjust the viewport area to keep the
                // cursor in the same position. This is a heuristic that seems to work well
                // at least in iTerm2.
                if cursor_pos.y != last_known_cursor_pos.y {
                    let cursor_delta = cursor_pos.y as i32 - last_known_cursor_pos.y as i32;
                    let new_viewport_area = terminal.viewport_area.offset(Offset {
@@ -515,7 +490,6 @@ impl Tui {
            }
        }

        std::io::stdout().sync_update(|_| {
            #[cfg(unix)]
            {
@@ -534,6 +508,7 @@ impl Tui {
            let mut area = terminal.viewport_area;
            area.height = height.min(size.height);
            area.width = size.width;
            // If the viewport has expanded, scroll everything else up to make room.
            if area.bottom() > size.height {
                terminal
                    .backend_mut()
@@ -541,9 +516,11 @@ impl Tui {
                area.y = size.height - area.height;
            }
            if area != terminal.viewport_area {
                // TODO(nornagon): probably this could be collapsed with the clear + set_viewport_area above.
                terminal.clear()?;
                terminal.set_viewport_area(area);
            }

            if !self.pending_history_lines.is_empty() {
                crate::insert_history::insert_history_lines(
                    terminal,
@@ -551,6 +528,7 @@ impl Tui {
                )?;
                self.pending_history_lines.clear();
            }

            // Update the y position for suspending so Ctrl-Z can place the cursor correctly.
            #[cfg(unix)]
            {
@@ -564,6 +542,7 @@ impl Tui {
                self.suspend_cursor_y
                    .store(inline_area_bottom, Ordering::Relaxed);
            }

            terminal.draw(|frame| {
                draw_fn(frame);
            })
@@ -571,6 +550,51 @@ impl Tui {
    }
}

/// Spawn background scheduler to coalesce frame requests and emit draws at deadlines.
fn spawn_frame_scheduler(
    frame_schedule_rx: tokio::sync::mpsc::UnboundedReceiver<Instant>,
    draw_tx: tokio::sync::broadcast::Sender<()>,
) {
    tokio::spawn(async move {
        use tokio::select;
        use tokio::time::Instant as TokioInstant;
        use tokio::time::sleep_until;

        let mut rx = frame_schedule_rx;
        let mut next_deadline: Option<Instant> = None;

        loop {
            let target = next_deadline
                .unwrap_or_else(|| Instant::now() + Duration::from_secs(60 * 60 * 24 * 365));
            let sleep_fut = sleep_until(TokioInstant::from_std(target));
            tokio::pin!(sleep_fut);

            select! {
                recv = rx.recv() => {
                    match recv {
                        Some(at) => {
                            if next_deadline.is_none_or(|cur| at < cur) {
                                next_deadline = Some(at);
                            }
                            // Do not send a draw immediately here. By continuing the loop,
                            // we recompute the sleep target so the draw fires once via the
                            // sleep branch, coalescing multiple requests into a single draw.
                            continue;
                        }
                        None => break,
                    }
                }
                _ = &mut sleep_fut => {
                    if next_deadline.is_some() {
                        next_deadline = None;
                        let _ = draw_tx.send(());
                    }
                }
            }
        }
    });
}

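// A minimal, self-contained sketch of the coalescing behaviour above, assuming
// `spawn_frame_scheduler` is in scope and a tokio runtime is available: two
// frame requests with nearby deadlines yield a single draw signal, because the
// earlier deadline wins and the sleep branch fires only once.
use std::time::{Duration, Instant};

#[tokio::main]
async fn main() {
    let (schedule_tx, schedule_rx) = tokio::sync::mpsc::unbounded_channel();
    let (draw_tx, mut draw_rx) = tokio::sync::broadcast::channel(1);
    spawn_frame_scheduler(schedule_rx, draw_tx);

    // Two requests close together coalesce into one draw.
    let now = Instant::now();
    schedule_tx.send(now + Duration::from_millis(10)).unwrap();
    schedule_tx.send(now + Duration::from_millis(12)).unwrap();

    // Fires once, at roughly the earlier (10 ms) deadline.
    draw_rx.recv().await.unwrap();
}
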
/// Command that emits an OSC 9 desktop notification with a message.
#[derive(Debug, Clone)]
pub struct PostNotification(pub String);

@@ -1,4 +1,3 @@
use crate::token::world_sid;
use crate::winutil::to_wide;
use anyhow::anyhow;
@@ -13,8 +12,31 @@ use windows_sys::Win32::Foundation::LocalFree;
use windows_sys::Win32::Foundation::ERROR_SUCCESS;
use windows_sys::Win32::Foundation::HLOCAL;
use windows_sys::Win32::Security::Authorization::GetNamedSecurityInfoW;
use windows_sys::Win32::Security::Authorization::GetSecurityInfo;
use windows_sys::Win32::Foundation::INVALID_HANDLE_VALUE;
use windows_sys::Win32::Foundation::CloseHandle;
use windows_sys::Win32::Storage::FileSystem::CreateFileW;
use windows_sys::Win32::Storage::FileSystem::FILE_FLAG_BACKUP_SEMANTICS;
use windows_sys::Win32::Storage::FileSystem::FILE_SHARE_DELETE;
use windows_sys::Win32::Storage::FileSystem::FILE_SHARE_READ;
use windows_sys::Win32::Storage::FileSystem::FILE_SHARE_WRITE;
use windows_sys::Win32::Storage::FileSystem::OPEN_EXISTING;
use windows_sys::Win32::Storage::FileSystem::FILE_GENERIC_WRITE;
use windows_sys::Win32::Storage::FileSystem::FILE_WRITE_DATA;
use windows_sys::Win32::Storage::FileSystem::FILE_APPEND_DATA;
use windows_sys::Win32::Storage::FileSystem::FILE_WRITE_EA;
use windows_sys::Win32::Storage::FileSystem::FILE_WRITE_ATTRIBUTES;
const GENERIC_ALL_MASK: u32 = 0x1000_0000;
const GENERIC_WRITE_MASK: u32 = 0x4000_0000;
use windows_sys::Win32::Security::ACL;
use windows_sys::Win32::Security::DACL_SECURITY_INFORMATION;
use windows_sys::Win32::Security::ACL_SIZE_INFORMATION;
use windows_sys::Win32::Security::AclSizeInformation;
use windows_sys::Win32::Security::GetAclInformation;
use windows_sys::Win32::Security::GetAce;
use windows_sys::Win32::Security::ACCESS_ALLOWED_ACE;
use windows_sys::Win32::Security::ACE_HEADER;
use windows_sys::Win32::Security::EqualSid;

fn unique_push(set: &mut HashSet<PathBuf>, out: &mut Vec<PathBuf>, p: PathBuf) {
    if let Ok(abs) = p.canonicalize() {
@@ -27,30 +49,22 @@ fn unique_push(set: &mut HashSet<PathBuf>, out: &mut Vec<PathBuf>, p: PathBuf) {
fn gather_candidates(cwd: &Path, env: &std::collections::HashMap<String, String>) -> Vec<PathBuf> {
    let mut set: HashSet<PathBuf> = HashSet::new();
    let mut out: Vec<PathBuf> = Vec::new();
    // 1) CWD first (so immediate children get scanned early)
    unique_push(&mut set, &mut out, cwd.to_path_buf());
    // 2) TEMP/TMP next (often small, quick to scan)
    for k in ["TEMP", "TMP"] {
        if let Some(v) = env.get(k).cloned().or_else(|| std::env::var(k).ok()) {
            unique_push(&mut set, &mut out, PathBuf::from(v));
        }
    }
    // 3) User roots
    if let Some(up) = std::env::var_os("USERPROFILE") {
        unique_push(&mut set, &mut out, PathBuf::from(up));
    }
    if let Some(pubp) = std::env::var_os("PUBLIC") {
        unique_push(&mut set, &mut out, PathBuf::from(pubp));
    }
    // 4) PATH entries (best-effort)
    if let Some(path) = env
        .get("PATH")
        .cloned()
@@ -62,31 +76,85 @@ fn gather_candidates(cwd: &Path, env: &std::collections::HashMap<String, String>
        }
    }
    // 5) Core system roots last
    for p in [
        PathBuf::from("C:/"),
        PathBuf::from("C:/Windows"),
        PathBuf::from("C:/ProgramData"),
    ] {
        unique_push(&mut set, &mut out, p);
    }
    out
}

unsafe fn path_has_world_write_allow(path: &Path) -> Result<bool> {
    // Prefer handle-based query (often faster than name-based), fallback to name-based on error
    let mut p_sd: *mut c_void = std::ptr::null_mut();
    let mut p_dacl: *mut ACL = std::ptr::null_mut();

    let mut try_named = false;
    let wpath = to_wide(path);
    let h = CreateFileW(
        wpath.as_ptr(),
        0x00020000, // READ_CONTROL
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        std::ptr::null_mut(),
        OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,
        0,
    );
    if h == INVALID_HANDLE_VALUE {
        try_named = true;
    } else {
        let code = GetSecurityInfo(
            h,
            1, // SE_FILE_OBJECT
            DACL_SECURITY_INFORMATION,
            std::ptr::null_mut(),
            std::ptr::null_mut(),
            &mut p_dacl,
            std::ptr::null_mut(),
            &mut p_sd,
        );
        CloseHandle(h);
        if code != ERROR_SUCCESS {
            try_named = true;
            if !p_sd.is_null() {
                LocalFree(p_sd as HLOCAL);
                p_sd = std::ptr::null_mut();
                p_dacl = std::ptr::null_mut();
            }
        }
    }

    if try_named {
        let code = GetNamedSecurityInfoW(
            wpath.as_ptr(),
            1,
            DACL_SECURITY_INFORMATION,
            std::ptr::null_mut(),
            std::ptr::null_mut(),
            &mut p_dacl,
            std::ptr::null_mut(),
            &mut p_sd,
        );
        if code != ERROR_SUCCESS {
            if !p_sd.is_null() {
                LocalFree(p_sd as HLOCAL);
            }
            return Ok(false);
        }
    }

    let mut world = world_sid()?;
    let psid_world = world.as_mut_ptr() as *mut c_void;
    // Very fast mask-based check for world-writable grants (includes GENERIC_*).
    if !dacl_quick_world_write_mask_allows(p_dacl, psid_world) {
        if !p_sd.is_null() {
            LocalFree(p_sd as HLOCAL);
        }
        return Ok(false);
    }
    // Quick detector flagged a write grant for Everyone: treat as writable.
    let has = true;
    if !p_sd.is_null() {
        LocalFree(p_sd as HLOCAL);
    }
@@ -100,18 +168,41 @@ pub fn audit_everyone_writable(
    let start = Instant::now();
    let mut flagged: Vec<PathBuf> = Vec::new();
    let mut checked = 0usize;
    // Fast path: check CWD immediate children first so workspace issues are caught early.
    if let Ok(read) = std::fs::read_dir(cwd) {
        for ent in read.flatten().take(250) {
            if start.elapsed() > Duration::from_secs(5) || checked > 5000 {
                break;
            }
            let ft = match ent.file_type() {
                Ok(ft) => ft,
                Err(_) => continue,
            };
            if ft.is_symlink() || !ft.is_dir() {
                continue;
            }
            let p = ent.path();
            checked += 1;
            let has = unsafe { path_has_world_write_allow(&p)? };
            if has {
                flagged.push(p);
            }
        }
    }
    // Continue with broader candidate sweep
    let candidates = gather_candidates(cwd, env);
    for root in candidates {
        if start.elapsed() > Duration::from_secs(5) || checked > 5000 {
            break;
        }
        checked += 1;
        let has_root = unsafe { path_has_world_write_allow(&root)? };
        if has_root {
            flagged.push(root.clone());
        }
        // One level down, best-effort.
        if let Ok(read) = std::fs::read_dir(&root) {
            for ent in read.flatten().take(250) {
                let p = ent.path();
                if start.elapsed() > Duration::from_secs(5) || checked > 5000 {
                    break;
@@ -126,22 +217,93 @@ pub fn audit_everyone_writable(
                }
                if ft.is_dir() {
                    checked += 1;
                    let has_child = unsafe { path_has_world_write_allow(&p)? };
                    if has_child {
                        flagged.push(p);
                    }
                }
            }
        }
    }
    let elapsed_ms = start.elapsed().as_millis();
    if !flagged.is_empty() {
        let mut list = String::new();
        for p in &flagged {
            list.push_str(&format!("\n - {}", p.display()));
        }
        crate::logging::log_note(
            &format!(
                "AUDIT: world-writable scan FAILED; checked={checked}; duration_ms={elapsed_ms}; flagged:{}",
                list
            ),
            Some(cwd),
        );
        let mut list_err = String::new();
        for p in flagged {
            list_err.push_str(&format!("\n - {}", p.display()));
        }
        return Err(anyhow!(
            "Refusing to run: found directories writable by Everyone: {}",
            list_err
        ));
    }
    // Log success once if nothing flagged
    crate::logging::log_note(
        &format!(
            "AUDIT: world-writable scan OK; checked={checked}; duration_ms={elapsed_ms}"
        ),
        Some(cwd),
    );
    Ok(())
}

// Fast mask-based check: does the DACL contain any ACCESS_ALLOWED ACE for
// Everyone that includes generic or specific write bits? Skips inherit-only
// ACEs (do not apply to the current object).
unsafe fn dacl_quick_world_write_mask_allows(p_dacl: *mut ACL, psid_world: *mut c_void) -> bool {
    if p_dacl.is_null() {
        return false;
    }
    const INHERIT_ONLY_ACE: u8 = 0x08;
    let mut info: ACL_SIZE_INFORMATION = std::mem::zeroed();
    let ok = GetAclInformation(
        p_dacl as *const ACL,
        &mut info as *mut _ as *mut c_void,
        std::mem::size_of::<ACL_SIZE_INFORMATION>() as u32,
        AclSizeInformation,
    );
    if ok == 0 {
        return false;
    }
    for i in 0..(info.AceCount as usize) {
        let mut p_ace: *mut c_void = std::ptr::null_mut();
        if GetAce(p_dacl as *const ACL, i as u32, &mut p_ace) == 0 {
            continue;
        }
        let hdr = &*(p_ace as *const ACE_HEADER);
        if hdr.AceType != 0 { // ACCESS_ALLOWED_ACE_TYPE
            continue;
        }
        if (hdr.AceFlags & INHERIT_ONLY_ACE) != 0 {
            continue;
        }
        let base = p_ace as usize;
        let sid_ptr = (base
            + std::mem::size_of::<ACE_HEADER>()
            + std::mem::size_of::<u32>()) as *mut c_void; // skip header + mask
        if EqualSid(sid_ptr, psid_world) != 0 {
            let ace = &*(p_ace as *const ACCESS_ALLOWED_ACE);
            let mask = ace.Mask;
            let writey = FILE_GENERIC_WRITE
                | FILE_WRITE_DATA
                | FILE_APPEND_DATA
                | FILE_WRITE_EA
                | FILE_WRITE_ATTRIBUTES
                | GENERIC_WRITE_MASK
                | GENERIC_ALL_MASK;
            if (mask & writey) != 0 {
                return true;
            }
        }
    }
    false
}

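// The mask test above is plain bit arithmetic. A standalone sketch of the same
// predicate, assuming the standard Windows access-mask constants (the values
// below mirror windows-sys); handy for unit-testing the detector without
// touching real DACLs:

#[cfg(test)]
mod quick_mask_tests {
    const FILE_WRITE_DATA: u32 = 0x0002;
    const FILE_APPEND_DATA: u32 = 0x0004;
    const FILE_WRITE_EA: u32 = 0x0010;
    const FILE_WRITE_ATTRIBUTES: u32 = 0x0100;
    const FILE_GENERIC_WRITE: u32 = 0x0012_0116;
    const GENERIC_WRITE_MASK: u32 = 0x4000_0000;
    const GENERIC_ALL_MASK: u32 = 0x1000_0000;

    // Same "any write-ish bit set" predicate used by dacl_quick_world_write_mask_allows.
    fn mask_grants_write(mask: u32) -> bool {
        let writey = FILE_GENERIC_WRITE
            | FILE_WRITE_DATA
            | FILE_APPEND_DATA
            | FILE_WRITE_EA
            | FILE_WRITE_ATTRIBUTES
            | GENERIC_WRITE_MASK
            | GENERIC_ALL_MASK;
        (mask & writey) != 0
    }

    #[test]
    fn read_only_masks_do_not_flag() {
        const FILE_GENERIC_READ: u32 = 0x0012_0089;
        assert!(!mask_grants_write(FILE_GENERIC_READ));
    }

    #[test]
    fn generic_and_specific_write_bits_flag() {
        assert!(mask_grants_write(GENERIC_ALL_MASK));
        assert!(mask_grants_write(FILE_APPEND_DATA));
    }
}
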
@@ -55,3 +55,8 @@ pub fn debug_log(msg: &str, base_dir: Option<&Path>) {
        eprintln!("{msg}");
    }
}

// Unconditional note logging to sandbox_commands.rust.log
pub fn log_note(msg: &str, base_dir: Option<&Path>) {
    append_line(msg, base_dir);
}

@@ -24,6 +24,7 @@ If you want to add a new feature or change the behavior of an existing one, plea

### Opening a pull request

- Fill in the PR template (or include similar information) - **What? Why? How?**
- Include a link to a bug report or enhancement request in the issue tracker
- Run **all** checks locally (`cargo test && cargo clippy --tests && cargo fmt -- --config imports_granularity=Item`). CI failures that could have been caught locally slow down the process.
- Make sure your branch is up-to-date with `main` and that you have resolved merge conflicts.
- Mark the PR as **Ready for review** only when you believe it is in a merge-able state.

@@ -221,9 +221,6 @@ enable_experimental_windows_sandbox = false
# Experimental toggles (legacy; prefer [features])
################################################################################

# Use experimental unified exec tool. Default: false
experimental_use_unified_exec_tool = false

@@ -328,7 +325,6 @@ mcp_oauth_credentials_store = "auto"
# experimental_compact_prompt_file = "compact_prompt.txt"
# include_apply_patch_tool = false
# experimental_use_unified_exec_tool = false
# experimental_use_rmcp_client = false
# experimental_use_freeform_apply_patch = false
# experimental_sandbox_command_assessment = false

@@ -3,7 +3,7 @@ import path from "node:path";
import readline from "node:readline";
import { fileURLToPath } from "node:url";

import { SandboxMode, ModelReasoningEffort } from "./threadOptions";
import { SandboxMode, ModelReasoningEffort, ApprovalMode } from "./threadOptions";

export type CodexExecArgs = {
  input: string;
@@ -24,6 +24,12 @@ export type CodexExecArgs = {
  outputSchemaFile?: string;
  // --config model_reasoning_effort
  modelReasoningEffort?: ModelReasoningEffort;
  // --config sandbox_workspace_write.network_access
  networkAccessEnabled?: boolean;
  // --config features.web_search_request
  webSearchEnabled?: boolean;
  // --config approval_policy
  approvalPolicy?: ApprovalMode;
};

const INTERNAL_ORIGINATOR_ENV = "CODEX_INTERNAL_ORIGINATOR_OVERRIDE";
@@ -62,6 +68,18 @@ export class CodexExec {
      commandArgs.push("--config", `model_reasoning_effort="${args.modelReasoningEffort}"`);
    }

    if (args.networkAccessEnabled !== undefined) {
      commandArgs.push("--config", `sandbox_workspace_write.network_access=${args.networkAccessEnabled}`);
    }

    if (args.webSearchEnabled !== undefined) {
      commandArgs.push("--config", `features.web_search_request=${args.webSearchEnabled}`);
    }

    if (args.approvalPolicy) {
      commandArgs.push("--config", `approval_policy="${args.approvalPolicy}"`);
    }

    if (args.images?.length) {
      for (const image of args.images) {
        commandArgs.push("--image", image);

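Read together with the type additions above, each new option maps one-to-one onto a `--config` override. A sketch of the flags this mapping would append for a concrete set of args (values are illustrative); note the boolean options are forwarded whenever they are defined, so `false` is still passed through, while `approvalPolicy` is only forwarded when truthy:

// Illustrative, not part of the diff: argv fragments produced by the mapping above.
const args = {
  modelReasoningEffort: "high", // --config model_reasoning_effort="high"
  networkAccessEnabled: true,   // --config sandbox_workspace_write.network_access=true
  webSearchEnabled: false,      // --config features.web_search_request=false (defined, so forwarded)
  approvalPolicy: "on-request", // --config approval_policy="on-request"
};
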
@@ -86,6 +86,9 @@ export class Thread {
      skipGitRepoCheck: options?.skipGitRepoCheck,
      outputSchemaFile: schemaPath,
      modelReasoningEffort: options?.modelReasoningEffort,
      networkAccessEnabled: options?.networkAccessEnabled,
      webSearchEnabled: options?.webSearchEnabled,
      approvalPolicy: options?.approvalPolicy,
    });
    try {
      for await (const item of generator) {

@@ -10,4 +10,7 @@ export type ThreadOptions = {
  workingDirectory?: string;
  skipGitRepoCheck?: boolean;
  modelReasoningEffort?: ModelReasoningEffort;
  networkAccessEnabled?: boolean;
  webSearchEnabled?: boolean;
  approvalPolicy?: ApprovalMode;
};

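A usage sketch of the extended ThreadOptions, mirroring the tests below; client construction details and the option values shown are illustrative:

// Illustrative usage, not part of the diff.
const client = new Codex();
const thread = client.startThread({
  modelReasoningEffort: "high",
  networkAccessEnabled: true,
  webSearchEnabled: true,
  approvalPolicy: "on-request",
});
await thread.run("describe this repository");
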
@@ -254,6 +254,99 @@ describe("Codex", () => {
    }
  });

  it("passes networkAccessEnabled to exec", async () => {
    const { url, close } = await startResponsesTestProxy({
      statusCode: 200,
      responseBodies: [
        sse(
          responseStarted("response_1"),
          assistantMessage("Network access enabled", "item_1"),
          responseCompleted("response_1"),
        ),
      ],
    });

    const { args: spawnArgs, restore } = codexExecSpy();

    try {
      const client = new Codex({ codexPathOverride: codexExecPath, baseUrl: url, apiKey: "test" });

      const thread = client.startThread({
        networkAccessEnabled: true,
      });
      await thread.run("test network access");

      const commandArgs = spawnArgs[0];
      expect(commandArgs).toBeDefined();
      expectPair(commandArgs, ["--config", "sandbox_workspace_write.network_access=true"]);
    } finally {
      restore();
      await close();
    }
  });

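`expectPair` is a shared helper defined elsewhere in this suite; a plausible shape is sketched here only so the assertions read clearly (the real implementation may differ):

// Hypothetical reconstruction of the suite's expectPair helper: asserts that
// `pair` appears as two consecutive elements of `args`.
function expectPair(args: string[], pair: [string, string]): void {
  const index = args.findIndex(
    (arg, i) => arg === pair[0] && args[i + 1] === pair[1],
  );
  expect(index).toBeGreaterThanOrEqual(0);
}
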
it("passes webSearchEnabled to exec", async () => {
|
||||
const { url, close } = await startResponsesTestProxy({
|
||||
statusCode: 200,
|
||||
responseBodies: [
|
||||
sse(
|
||||
responseStarted("response_1"),
|
||||
assistantMessage("Web search enabled", "item_1"),
|
||||
responseCompleted("response_1"),
|
||||
),
|
||||
],
|
||||
});
|
||||
|
||||
const { args: spawnArgs, restore } = codexExecSpy();
|
||||
|
||||
try {
|
||||
const client = new Codex({ codexPathOverride: codexExecPath, baseUrl: url, apiKey: "test" });
|
||||
|
||||
const thread = client.startThread({
|
||||
webSearchEnabled: true,
|
||||
});
|
||||
await thread.run("test web search");
|
||||
|
||||
const commandArgs = spawnArgs[0];
|
||||
expect(commandArgs).toBeDefined();
|
||||
expectPair(commandArgs, ["--config", "features.web_search_request=true"]);
|
||||
} finally {
|
||||
restore();
|
||||
await close();
|
||||
}
|
||||
});
|
||||
|
||||
it("passes approvalPolicy to exec", async () => {
|
||||
const { url, close } = await startResponsesTestProxy({
|
||||
statusCode: 200,
|
||||
responseBodies: [
|
||||
sse(
|
||||
responseStarted("response_1"),
|
||||
assistantMessage("Approval policy set", "item_1"),
|
||||
responseCompleted("response_1"),
|
||||
),
|
||||
],
|
||||
});
|
||||
|
||||
const { args: spawnArgs, restore } = codexExecSpy();
|
||||
|
||||
try {
|
||||
const client = new Codex({ codexPathOverride: codexExecPath, baseUrl: url, apiKey: "test" });
|
||||
|
||||
const thread = client.startThread({
|
||||
approvalPolicy: "on-request",
|
||||
});
|
||||
await thread.run("test approval policy");
|
||||
|
||||
const commandArgs = spawnArgs[0];
|
||||
expect(commandArgs).toBeDefined();
|
||||
expectPair(commandArgs, ["--config", 'approval_policy="on-request"']);
|
||||
} finally {
|
||||
restore();
|
||||
await close();
|
||||
}
|
||||
});
|
||||
|
||||
it("writes output schema to a temporary file and forwards it", async () => {
|
||||
const { url, close, requests } = await startResponsesTestProxy({
|
||||
statusCode: 200,
|
||||
|
||||