Compare commits

...

5 Commits

Author SHA1 Message Date
Eric Traut
3fe0e022be Add project-local codex bug triage skill (#17064)
Add a `codex-bug` skill to help diagnose and fix bugs in codex.
2026-04-07 19:20:04 -07:00
pakrym-oai
2c3be34bae Add remote exec start script (#17059)
Just pass an SSH host:
```
./scripts/start-codex-exec.sh codex-remote
```
2026-04-07 19:16:19 -07:00
Vivian Fang
fa5119a8a6 Add regression tests for JsonSchema (#17052)
Tests added for existing JsonSchema in
`codex-rs/tools/src/json_schema_tests.rs`:

- `parse_tool_input_schema_coerces_boolean_schemas`
- `parse_tool_input_schema_infers_object_shape_and_defaults_properties`
- `parse_tool_input_schema_normalizes_integer_and_missing_array_items`
- `parse_tool_input_schema_sanitizes_additional_properties_schema`
- `parse_tool_input_schema_infers_object_shape_from_boolean_additional_properties_only`
- `parse_tool_input_schema_infers_number_from_numeric_keywords`
- `parse_tool_input_schema_infers_number_from_multiple_of`
- `parse_tool_input_schema_infers_string_from_enum_const_and_format_keywords`
- `parse_tool_input_schema_defaults_empty_schema_to_string`
- `parse_tool_input_schema_infers_array_from_prefix_items`
- `parse_tool_input_schema_preserves_boolean_additional_properties_on_inferred_object`
- `parse_tool_input_schema_infers_object_shape_from_schema_additional_properties_only`

Tests that are expected to fail on the baseline normalizer but pass with
the new JsonSchema:

- `parse_tool_input_schema_preserves_nested_nullable_type_union`
- `parse_tool_input_schema_preserves_nested_any_of_property`
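The type-inference order these tests exercise can be sketched with a small std-only helper. This is a hypothetical simplification (`infer_type` is not a real function; the actual normalizer walks a JSON value), keyed only on which schema keywords are present:

```rust
// Sketch of the baseline inference order described by the tests above.
// Hypothetical helper: we only look at which keywords appear, whereas the
// real `parse_tool_input_schema` consumes a full serde_json value.
fn infer_type(keys: &[&str]) -> &'static str {
    let has = |k: &str| keys.contains(&k);
    if has("type") {
        "explicit" // an explicit `type` wins; arrays still get default `items`
    } else if has("properties") || has("additionalProperties") {
        "object" // object keywords imply an object schema
    } else if has("prefixItems") {
        "array" // `prefixItems` implies an array schema
    } else if has("minimum") || has("maximum") || has("multipleOf") {
        "number" // numeric constraint keywords imply a number
    } else if has("enum") || has("const") || has("format") {
        "string" // string-flavored keywords imply a string
    } else {
        "string" // an empty schema falls back to a permissive string
    }
}

fn main() {
    assert_eq!(infer_type(&["properties"]), "object");
    assert_eq!(infer_type(&["additionalProperties"]), "object");
    assert_eq!(infer_type(&["multipleOf"]), "number");
    assert_eq!(infer_type(&["format"]), "string");
    assert_eq!(infer_type(&["prefixItems"]), "array");
    assert_eq!(infer_type(&[]), "string");
}
```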
2026-04-07 18:18:54 -07:00
Felipe Coury
359e17a852 fix(tui): reduce startup and new-session latency (#17039)
## TL;DR

- Fetches account/rateLimits/read asynchronously so the TUI can continue
starting without waiting for the rate-limit response.
- Fixes the /status card so it no longer leaves a stale “refreshing
cached limits...” notice in terminal history.

## Problem

The TUI bootstrap path fetched account rate limits synchronously
(`account/rateLimits/read`) before the event loop started for
ChatGPT/OpenAI-authenticated startups. This added ~670 ms of blocking
latency in the measured hot-start case, even though rate-limit data is
not needed to render the initial UI or accept user input. The delay was
especially noticeable on hot starts where every other RPC
(`account/read`, `model/list`, `thread/start`) completed in under 70 ms
total.

Moving that fetch to the background also exposed a `/status` UI bug: the
status card is flattened into terminal scrollback when it is inserted. A
transient "refreshing limits in background..." line could not be cleared
later, because the async completion updated the retained `HistoryCell`,
not the already-written terminal history.

## Mental model

Before this change, `AppServerSession::bootstrap()` performed three
sequential RPCs: `account/read` → `model/list` →
`account/rateLimits/read`. The result of the third call was baked into
`AppServerBootstrap` and applied to the chat widget before the event
loop began.

After this change, `bootstrap()` only performs two RPCs (`account/read`
+ `model/list`), and rate-limit fetching is kicked off as an async
background task immediately after the first frame is scheduled. A new
enum, `RateLimitRefreshOrigin`, tags each fetch so the event handler
knows whether the result came from the startup prefetch or from a
user-initiated `/status` command; they have different completion
side-effects.

The `get_login_status()` helper (used outside the main app flow) was
also decoupled: it previously called the full `bootstrap()` just to
check auth mode, wasting model-list and rate-limit work. It now calls
the narrower `read_account()` directly.

For `/status`, this PR keeps the background refresh request but stops
printing transient refresh notices into status history when cached
limits are already available. If a refresh updates the cache, the next
`/status` command will render the new values.

## Non-goals

- This change does not alter the rate-limit data itself.
- This change does not introduce caching, retries, or staleness
management for rate limits.
- This change does not affect the `model/list` or `thread/start` RPCs;
they remain on the critical startup path.

## Tradeoffs

- **Stale-on-first-render**: The status bar will briefly show no
rate-limit info until the background fetch completes; observed
background fetches landed roughly in the 400-900 ms range after the UI
appeared. This is acceptable because the user cannot meaningfully act on
rate-limit data in the first fraction of a second.
- **Error silence on startup prefetch**: If the startup prefetch fails,
the error is logged but the UI is not notified (unlike `/status` refresh
failures, which go through the status-command completion path). This
avoids surfacing transient network errors as a startup blocker.
- **Static `/status` history**: `/status` output is terminal history,
not a live widget. The card now avoids progress-style language that
would appear stuck in scrollback; users can run `/status` again to see
newly cached values.
- **`account_auth_mode` field removed from `AppServerBootstrap`**: The
only consumer was `get_login_status()`, which no longer goes through
`bootstrap()`. The field was dead weight.

## Architecture

### New types

- `RateLimitRefreshOrigin` (in `app_event.rs`): A `Copy` enum
distinguishing `StartupPrefetch` from `StatusCommand { request_id }`.
Carried through `RefreshRateLimits` and `RateLimitsLoaded` events so the
handler applies the right completion behavior.

### Modified types

- `AppServerBootstrap`: Lost `account_auth_mode` and
`rate_limit_snapshots`; gained `requires_openai_auth: bool` (passed
through from the account response so the caller can decide whether to
fire the prefetch).

### Control flow

1. `bootstrap()` returns with `requires_openai_auth` and
`has_chatgpt_account`.
2. After scheduling the first frame, `App::run_inner` fires
`refresh_rate_limits(StartupPrefetch)` if both flags are true.
3. When `RateLimitsLoaded { StartupPrefetch, Ok(..) }` arrives,
snapshots are applied and a frame is scheduled to repaint the status
bar.
4. When `RateLimitsLoaded { StartupPrefetch, Err(..) }` arrives, the
error is logged and no UI update occurs.
5. `/status`-initiated refreshes continue to use `StatusCommand {
request_id }` and call `finish_status_rate_limit_refresh` on completion
(success or failure).
6. `/status` history cells with cached rate-limit rows no longer render
an additional "refreshing limits" notice; the async refresh updates the
cache for future status output.
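The origin-dependent completion behavior in the steps above can be sketched as follows. The enum variants match the PR description; the handler and its return strings are hypothetical stand-ins for the real TUI event types and side effects:

```rust
// Simplified sketch of origin-tagged rate-limit completion handling.
// `RateLimitRefreshOrigin` mirrors the PR's enum; everything else is a
// hypothetical stand-in for the real app-server types.
#[derive(Clone, Copy, Debug, PartialEq)]
enum RateLimitRefreshOrigin {
    StartupPrefetch,
    StatusCommand { request_id: u64 },
}

fn on_rate_limits_loaded(
    origin: RateLimitRefreshOrigin,
    result: Result<&'static str, &'static str>,
) -> &'static str {
    match (origin, result) {
        // Startup prefetch: apply snapshots and repaint the status bar...
        (RateLimitRefreshOrigin::StartupPrefetch, Ok(_)) => "apply snapshots + schedule frame",
        // ...but fail silently (log only), so startup is never blocked.
        (RateLimitRefreshOrigin::StartupPrefetch, Err(_)) => "log warning only",
        // /status refreshes always finalize the pending status card,
        // on success or failure.
        (RateLimitRefreshOrigin::StatusCommand { .. }, _) => "finish status refresh",
    }
}

fn main() {
    assert_eq!(
        on_rate_limits_loaded(RateLimitRefreshOrigin::StartupPrefetch, Err("net")),
        "log warning only"
    );
    assert_eq!(
        on_rate_limits_loaded(RateLimitRefreshOrigin::StatusCommand { request_id: 7 }, Ok("ok")),
        "finish status refresh"
    );
}
```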

### Extracted method

- `AppServerSession::read_account()`: Factored out of `bootstrap()` so
that `get_login_status()` can call it independently without triggering
model-list or rate-limit work.

## Observability

- The existing `tracing::warn!` for rate-limit fetch failures is
preserved for the startup path.
- No new metrics or spans are introduced. The startup-time improvement
is observable via the existing `ready` timestamp in TUI startup logs.

## Tests

- Existing tests in `status_command_tests.rs` are updated to match on
`RateLimitRefreshOrigin::StatusCommand { request_id }` instead of a bare
`request_id`.
- Focused `/status` tests now assert that status history avoids
transient refresh text, continues to request an async refresh, and uses
refreshed cached limits in future status output.
- No new tests are added for the startup prefetch path because it is a
fire-and-forget spawn with no observable side-effect other than the
widget state update, which is already covered by the
snapshot-application tests.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 22:16:09 -03:00
pash-openai
80ebc80be5 Use model metadata for Fast Mode status (#16949)
Fast Mode status was still tied to one model name in the TUI and
model-list plumbing. This changes the model metadata shape so a model
can advertise additional speed tiers, carries that field through the
app-server model list, and uses it to decide when to show Fast Mode
status.

For people using Codex, the behavior is intended to stay the same for
existing models. Fast Mode still requires the existing signed-in /
feature-gated path; the difference is that the UI can now recognize any
model the model list marks as Fast-capable, instead of requiring a new
client-side slug check.
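The metadata-driven check boils down to a tier lookup, per the `supports_fast_mode` method added in the diff below (struct trimmed here to the one field used):

```rust
// Minimal sketch of the Fast Mode capability check from this commit.
// `ModelPreset` is reduced to the single field the method reads.
const SPEED_TIER_FAST: &str = "fast";

struct ModelPreset {
    additional_speed_tiers: Vec<String>,
}

impl ModelPreset {
    // A model supports Fast Mode iff the model list advertises the
    // "fast" speed tier — no client-side slug check required.
    fn supports_fast_mode(&self) -> bool {
        self.additional_speed_tiers
            .iter()
            .any(|tier| tier == SPEED_TIER_FAST)
    }
}

fn main() {
    let fast = ModelPreset {
        additional_speed_tiers: vec![SPEED_TIER_FAST.to_string()],
    };
    let plain = ModelPreset {
        additional_speed_tiers: Vec::new(),
    };
    assert!(fast.supports_fast_mode());
    assert!(!plain.supports_fast_mode());
}
```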
2026-04-07 17:55:40 -07:00
41 changed files with 1002 additions and 142 deletions

View File

@@ -0,0 +1,48 @@
---
name: codex-bug
description: Diagnose GitHub bug reports in openai/codex. Use when given a GitHub issue URL from openai/codex and asked to decide next steps such as verifying against the repo, requesting more info, or explaining why it is not a bug; follow any additional user-provided instructions.
---
# Codex Bug
## Overview
Diagnose a Codex GitHub bug report and decide the next action: verify against sources, request more info, or explain why it is not a bug.
## Workflow
1. Confirm the input
- Require a GitHub issue URL that points to `github.com/openai/codex/issues/…`.
- If the URL is missing or not in the right repo, ask the user for the correct link.
2. Network access
- Always access the issue over the network immediately, even if you think access is blocked or unavailable.
- Prefer the GitHub API over HTML pages because the HTML is noisy:
- Issue: `https://api.github.com/repos/openai/codex/issues/<number>`
- Comments: `https://api.github.com/repos/openai/codex/issues/<number>/comments`
- If the environment requires explicit approval, request it on demand via the tool and continue without additional user prompting.
- Only if the network attempt fails after requesting approval, explain what you can do offline (e.g., draft a response template) and ask how to proceed.
3. Read the issue
- Use the GitHub API responses (issue + comments) as the source of truth rather than scraping the HTML issue page.
- Extract: title, body, repro steps, expected vs actual, environment, logs, and any attachments.
- Note whether the report already includes logs or session details.
- If the report includes a thread ID, mention it in the summary and use it to look up the logs and session details if you have access to them.
4. Summarize the bug before investigating
- Before inspecting code, docs, or logs in depth, write a short summary of the report in your own words.
- Include the reported behavior, expected behavior, repro steps, environment, and what evidence is already attached or missing.
5. Decide the course of action
- **Verify with sources** when the report is specific and likely reproducible. Inspect relevant Codex files (or mention the files to inspect if access is unavailable).
- **Request more information** when the report is vague, missing repro steps, or lacks logs/environment.
- **Explain not a bug** when the report contradicts current behavior or documented constraints (cite the evidence from the issue and any local sources you checked).
6. Respond
- Provide a concise report of your findings and next steps.
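The endpoint preference in step 2 can be sketched as a small helper that maps an issue URL to the two API URLs the skill lists. This is a hypothetical illustration (the skill itself only names the endpoints), using std string handling only:

```rust
// Hypothetical helper: derive the GitHub API endpoints preferred in
// step 2 from an openai/codex issue URL.
fn api_endpoints(issue_url: &str) -> Option<(String, String)> {
    // Expect ...github.com/openai/codex/issues/<number>, possibly with
    // a trailing slash or fragment after the number.
    let rest = issue_url.split("github.com/openai/codex/issues/").nth(1)?;
    let number: u64 = rest
        .split(|c: char| !c.is_ascii_digit())
        .next()?
        .parse()
        .ok()?;
    let issue = format!("https://api.github.com/repos/openai/codex/issues/{number}");
    let comments = format!("{issue}/comments");
    Some((issue, comments))
}

fn main() {
    let (issue, comments) =
        api_endpoints("https://github.com/openai/codex/issues/17064").expect("valid issue URL");
    assert_eq!(issue, "https://api.github.com/repos/openai/codex/issues/17064");
    assert!(comments.ends_with("/17064/comments"));
    // URLs outside openai/codex are rejected, matching step 1.
    assert!(api_endpoints("https://github.com/other/repo/issues/1").is_none());
}
```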

View File

@@ -9308,6 +9308,13 @@
},
"Model": {
"properties": {
"additionalSpeedTiers": {
"default": [],
"items": {
"type": "string"
},
"type": "array"
},
"availabilityNux": {
"anyOf": [
{

View File

@@ -6111,6 +6111,13 @@
},
"Model": {
"properties": {
"additionalSpeedTiers": {
"default": [],
"items": {
"type": "string"
},
"type": "array"
},
"availabilityNux": {
"anyOf": [
{

View File

@@ -22,6 +22,13 @@
},
"Model": {
"properties": {
"additionalSpeedTiers": {
"default": [],
"items": {
"type": "string"
},
"type": "array"
},
"availabilityNux": {
"anyOf": [
{

View File

@@ -7,4 +7,4 @@ import type { ModelAvailabilityNux } from "./ModelAvailabilityNux";
import type { ModelUpgradeInfo } from "./ModelUpgradeInfo";
import type { ReasoningEffortOption } from "./ReasoningEffortOption";
export type Model = { id: string, model: string, upgrade: string | null, upgradeInfo: ModelUpgradeInfo | null, availabilityNux: ModelAvailabilityNux | null, displayName: string, description: string, hidden: boolean, supportedReasoningEfforts: Array<ReasoningEffortOption>, defaultReasoningEffort: ReasoningEffort, inputModalities: Array<InputModality>, supportsPersonality: boolean, isDefault: boolean, };
export type Model = { id: string, model: string, upgrade: string | null, upgradeInfo: ModelUpgradeInfo | null, availabilityNux: ModelAvailabilityNux | null, displayName: string, description: string, hidden: boolean, supportedReasoningEfforts: Array<ReasoningEffortOption>, defaultReasoningEffort: ReasoningEffort, inputModalities: Array<InputModality>, supportsPersonality: boolean, additionalSpeedTiers: Array<string>, isDefault: boolean, };

View File

@@ -1791,6 +1791,8 @@ pub struct Model {
pub input_modalities: Vec<InputModality>,
#[serde(default)]
pub supports_personality: bool,
#[serde(default)]
pub additional_speed_tiers: Vec<String>,
// Only one model should be marked as default.
pub is_default: bool,
}

View File

@@ -172,7 +172,7 @@ Example with notification opt-out:
- `fs/watch` — subscribe this connection to filesystem change notifications for an absolute file or directory path and caller-provided `watchId`; returns the canonicalized `path`.
- `fs/unwatch` — stop sending notifications for a prior `fs/watch`; returns `{}`.
- `fs/changed` — notification emitted when watched paths change, including the `watchId` and `changedPaths`.
- `model/list` — list available models (set `includeHidden: true` to include entries with `hidden: true`), with reasoning effort options, optional legacy `upgrade` model ids, optional `upgradeInfo` metadata (`model`, `upgradeCopy`, `modelLink`, `migrationMarkdown`), and optional `availabilityNux` metadata.
- `model/list` — list available models (set `includeHidden: true` to include entries with `hidden: true`), with reasoning effort options, `additionalSpeedTiers`, optional legacy `upgrade` model ids, optional `upgradeInfo` metadata (`model`, `upgradeCopy`, `modelLink`, `migrationMarkdown`), and optional `availabilityNux` metadata.
- `experimentalFeature/list` — list feature flags with stage metadata (`beta`, `underDevelopment`, `stable`, etc.), enabled/default-enabled state, and cursor pagination. For non-beta flags, `displayName`/`description`/`announcement` are `null`.
- `experimentalFeature/enablement/set` — patch the in-memory process-wide runtime feature enablement for the currently supported feature keys (`apps`, `plugins`). For each feature, precedence is: cloud requirements > --enable <feature_name> > config.toml > experimentalFeature/enablement/set (new) > code default.
- `collaborationMode/list` — list available collaboration mode presets (experimental, no pagination). This response omits built-in developer instructions; clients should either pass `settings.developer_instructions: null` when setting a mode to use Codex's built-in instructions, or provide their own instructions explicitly.

View File

@@ -42,6 +42,7 @@ fn model_from_preset(preset: ModelPreset) -> Model {
default_reasoning_effort: preset.default_reasoning_effort,
input_modalities: preset.input_modalities,
supports_personality: preset.supports_personality,
additional_speed_tiers: preset.additional_speed_tiers,
is_default: preset.is_default,
}
}

View File

@@ -28,6 +28,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
},
supported_in_api: preset.supported_in_api,
priority,
additional_speed_tiers: preset.additional_speed_tiers.clone(),
upgrade: preset.upgrade.as_ref().map(Into::into),
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -50,6 +50,7 @@ fn model_from_preset(preset: &ModelPreset) -> Model {
// cache report `supports_personality = false`.
// todo(sayan): fix, maybe make roundtrip use ModelInfo only
supports_personality: false,
additional_speed_tiers: preset.additional_speed_tiers.clone(),
is_default: preset.is_default,
}
}

View File

@@ -75,6 +75,7 @@ async fn models_client_hits_models_endpoint() {
visibility: ModelVisibility::List,
supported_in_api: true,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -82,6 +82,7 @@ fn test_model_info(
used_fallback_model_metadata: false,
supports_search_tool: false,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,
@@ -898,6 +899,7 @@ async fn model_switch_to_smaller_model_updates_token_context_window() -> Result<
used_fallback_model_metadata: false,
supports_search_tool: false,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -335,6 +335,7 @@ fn test_remote_model(slug: &str, priority: i32) -> ModelInfo {
visibility: ModelVisibility::List,
supported_in_api: true,
priority,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -644,6 +644,7 @@ async fn remote_model_friendly_personality_instructions_with_feature() -> anyhow
visibility: ModelVisibility::List,
supported_in_api: true,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: Some(ModelMessages {
@@ -760,6 +761,7 @@ async fn user_turn_personality_remote_model_template_includes_update_message() -
visibility: ModelVisibility::List,
supported_in_api: true,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: Some(ModelMessages {

View File

@@ -297,6 +297,7 @@ async fn remote_models_remote_model_uses_unified_exec() -> Result<()> {
used_fallback_model_metadata: false,
supports_search_tool: false,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,
@@ -545,6 +546,7 @@ async fn remote_models_apply_remote_base_instructions() -> Result<()> {
used_fallback_model_metadata: false,
supports_search_tool: false,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: remote_base.to_string(),
model_messages: None,
@@ -1027,6 +1029,7 @@ fn test_remote_model_with_policy(
used_fallback_model_metadata: false,
supports_search_tool: false,
priority,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -405,6 +405,7 @@ async fn stdio_image_responses_are_sanitized_for_text_only_model() -> anyhow::Re
visibility: ModelVisibility::List,
supported_in_api: true,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -67,6 +67,7 @@ fn test_model_info(
used_fallback_model_metadata: false,
supports_search_tool: false,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -1349,6 +1349,7 @@ async fn view_image_tool_returns_unsupported_message_for_text_only_model() -> an
used_fallback_model_metadata: false,
supports_search_tool: false,
priority: 1,
additional_speed_tiers: Vec::new(),
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,

View File

@@ -116,6 +116,9 @@
"visibility": "list",
"minimal_client_version": "0.98.0",
"supported_in_api": true,
"additional_speed_tiers": [
"fast"
],
"availability_nux": null,
"upgrade": null,
"priority": 0,

View File

@@ -69,6 +69,7 @@ pub fn model_info_from_slug(slug: &str) -> ModelInfo {
visibility: ModelVisibility::None,
supported_in_api: true,
priority: 99,
additional_speed_tiers: Vec::new(),
availability_nux: None,
upgrade: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),

View File

@@ -20,6 +20,7 @@ use crate::config_types::ReasoningSummary;
use crate::config_types::Verbosity;
const PERSONALITY_PLACEHOLDER: &str = "{{ personality }}";
pub const SPEED_TIER_FAST: &str = "fast";
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning
#[derive(
@@ -132,6 +133,9 @@ pub struct ModelPreset {
/// Whether this model supports personality-specific instructions.
#[serde(default)]
pub supports_personality: bool,
/// Additional speed tiers this model can run with beyond the standard path.
#[serde(default)]
pub additional_speed_tiers: Vec<String>,
/// Whether this is the default model for new users.
pub is_default: bool,
/// recommended upgrade model
@@ -252,6 +256,8 @@ pub struct ModelInfo {
pub visibility: ModelVisibility,
pub supported_in_api: bool,
pub priority: i32,
#[serde(default)]
pub additional_speed_tiers: Vec<String>,
pub availability_nux: Option<ModelAvailabilityNux>,
pub upgrade: Option<ModelInfoUpgrade>,
pub base_instructions: String,
@@ -428,6 +434,7 @@ impl From<ModelInfo> for ModelPreset {
.unwrap_or(ReasoningEffort::None),
supported_reasoning_efforts: info.supported_reasoning_levels.clone(),
supports_personality,
additional_speed_tiers: info.additional_speed_tiers,
is_default: false, // default is the highest priority available model
upgrade: info.upgrade.as_ref().map(|upgrade| ModelUpgrade {
id: upgrade.model.clone(),
@@ -449,6 +456,12 @@ impl From<ModelInfo> for ModelPreset {
}
impl ModelPreset {
pub fn supports_fast_mode(&self) -> bool {
self.additional_speed_tiers
.iter()
.any(|tier| tier == SPEED_TIER_FAST)
}
/// Filter models based on authentication mode.
///
/// In ChatGPT mode, all models are visible. Otherwise, only API-supported models are shown.
@@ -527,6 +540,7 @@ mod tests {
visibility: ModelVisibility::List,
supported_in_api: true,
priority: 1,
additional_speed_tiers: Vec::new(),
availability_nux: None,
upgrade: None,
base_instructions: "base".to_string(),
@@ -772,6 +786,7 @@ mod tests {
availability_nux: Some(ModelAvailabilityNux {
message: "Try Spark.".to_string(),
}),
additional_speed_tiers: vec![SPEED_TIER_FAST.to_string()],
..test_model(/*spec*/ None)
});
@@ -781,5 +796,6 @@ mod tests {
message: "Try Spark.".to_string(),
})
);
assert!(preset.supports_fast_mode());
}
}

View File

@@ -17,6 +17,7 @@ fn model_preset(id: &str, show_in_picker: bool) -> ModelPreset {
description: "Balanced".to_string(),
}],
supports_personality: false,
additional_speed_tiers: Vec::new(),
is_default: false,
upgrade: None,
show_in_picker,

View File

@@ -4,8 +4,18 @@ use super::parse_tool_input_schema;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
// Tests in this section exercise normalization transforms that mutate badly
// formed JSON for consumption by the Responses API.
#[test]
fn parse_tool_input_schema_coerces_boolean_schemas() {
// Example schema shape:
// true
//
// Expected normalization behavior:
// - JSON Schema boolean forms are coerced to `{ "type": "string" }`
// because the baseline enum model cannot represent boolean-schema
// semantics directly.
let schema = parse_tool_input_schema(&serde_json::json!(true)).expect("parse schema");
assert_eq!(schema, JsonSchema::String { description: None });
@@ -13,6 +23,16 @@ fn parse_tool_input_schema_coerces_boolean_schemas() {
#[test]
fn parse_tool_input_schema_infers_object_shape_and_defaults_properties() {
// Example schema shape:
// {
// "properties": {
// "query": { "description": "search query" }
// }
// }
//
// Expected normalization behavior:
// - `properties` implies an object schema when `type` is omitted.
// - The child property keeps its description and defaults to a string type.
let schema = parse_tool_input_schema(&serde_json::json!({
"properties": {
"query": {"description": "search query"}
@@ -37,6 +57,19 @@ fn parse_tool_input_schema_infers_object_shape_and_defaults_properties() {
#[test]
fn parse_tool_input_schema_normalizes_integer_and_missing_array_items() {
// Example schema shape:
// {
// "type": "object",
// "properties": {
// "page": { "type": "integer" },
// "tags": { "type": "array" }
// }
// }
//
// Expected normalization behavior:
// - `"integer"` is accepted by the baseline model through the legacy
// number/integer alias.
// - Arrays missing `items` receive a permissive string `items` schema.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"properties": {
@@ -50,7 +83,7 @@ fn parse_tool_input_schema_normalizes_integer_and_missing_array_items() {
schema,
JsonSchema::Object {
properties: BTreeMap::from([
("page".to_string(), JsonSchema::Number { description: None },),
("page".to_string(), JsonSchema::Number { description: None }),
(
"tags".to_string(),
JsonSchema::Array {
@@ -67,6 +100,27 @@ fn parse_tool_input_schema_normalizes_integer_and_missing_array_items() {
#[test]
fn parse_tool_input_schema_sanitizes_additional_properties_schema() {
// Example schema shape:
// {
// "type": "object",
// "additionalProperties": {
// "required": ["value"],
// "properties": {
// "value": {
// "anyOf": [
// { "type": "string" },
// { "type": "number" }
// ]
// }
// }
// }
// }
//
// Expected normalization behavior:
// - `additionalProperties` schema objects are recursively sanitized.
// - The nested schema is normalized into the baseline object form.
// - In the baseline model, the nested `anyOf` degrades to a plain string
// field because richer combiners are not preserved.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"additionalProperties": {
@@ -96,3 +150,313 @@ fn parse_tool_input_schema_sanitizes_additional_properties_schema() {
}
);
}
#[test]
fn parse_tool_input_schema_infers_object_shape_from_boolean_additional_properties_only() {
// Example schema shape:
// {
// "additionalProperties": false
// }
//
// Expected normalization behavior:
// - `additionalProperties` implies an object schema when `type` is omitted.
// - The boolean `additionalProperties` setting is preserved.
let schema = parse_tool_input_schema(&serde_json::json!({
"additionalProperties": false
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(false.into()),
}
);
}
#[test]
fn parse_tool_input_schema_infers_number_from_numeric_keywords() {
// Example schema shape:
// {
// "minimum": 1
// }
//
// Expected normalization behavior:
// - Numeric constraint keywords imply a number schema when `type` is
// omitted.
let schema = parse_tool_input_schema(&serde_json::json!({
"minimum": 1
}))
.expect("parse schema");
assert_eq!(schema, JsonSchema::Number { description: None });
}
#[test]
fn parse_tool_input_schema_infers_number_from_multiple_of() {
// Example schema shape:
// {
// "multipleOf": 5
// }
//
// Expected normalization behavior:
// - `multipleOf` follows the same numeric-keyword inference path as
// `minimum` / `maximum`.
let schema = parse_tool_input_schema(&serde_json::json!({
"multipleOf": 5
}))
.expect("parse schema");
assert_eq!(schema, JsonSchema::Number { description: None });
}
#[test]
fn parse_tool_input_schema_infers_string_from_enum_const_and_format_keywords() {
// Example schema shapes:
// { "enum": ["fast", "safe"] }
// { "const": "file" }
// { "format": "date-time" }
//
// Expected normalization behavior:
// - Each of these keywords implies a string schema when `type` is omitted.
let enum_schema = parse_tool_input_schema(&serde_json::json!({
"enum": ["fast", "safe"]
}))
.expect("parse enum schema");
let const_schema = parse_tool_input_schema(&serde_json::json!({
"const": "file"
}))
.expect("parse const schema");
let format_schema = parse_tool_input_schema(&serde_json::json!({
"format": "date-time"
}))
.expect("parse format schema");
assert_eq!(enum_schema, JsonSchema::String { description: None });
assert_eq!(const_schema, JsonSchema::String { description: None });
assert_eq!(format_schema, JsonSchema::String { description: None });
}
#[test]
fn parse_tool_input_schema_defaults_empty_schema_to_string() {
// Example schema shape:
// {}
//
// Expected normalization behavior:
// - With no structural hints at all, the baseline normalizer falls back to
// a permissive string schema.
let schema = parse_tool_input_schema(&serde_json::json!({})).expect("parse schema");
assert_eq!(schema, JsonSchema::String { description: None });
}
#[test]
fn parse_tool_input_schema_infers_array_from_prefix_items() {
// Example schema shape:
// {
// "prefixItems": [
// { "type": "string" }
// ]
// }
//
// Expected normalization behavior:
// - `prefixItems` implies an array schema when `type` is omitted.
// - The baseline model still stores the normalized result as a regular
// array schema with string items.
let schema = parse_tool_input_schema(&serde_json::json!({
"prefixItems": [
{"type": "string"}
]
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: None,
}
);
}
#[test]
fn parse_tool_input_schema_preserves_boolean_additional_properties_on_inferred_object() {
// Example schema shape:
// {
// "type": "object",
// "properties": {
// "metadata": {
// "additionalProperties": true
// }
// }
// }
//
// Expected normalization behavior:
// - The nested `metadata` schema is inferred to be an object because it has
// `additionalProperties`.
// - `additionalProperties: true` is preserved rather than rewritten.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"properties": {
"metadata": {
"additionalProperties": true
}
}
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([(
"metadata".to_string(),
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(AdditionalProperties::Boolean(true)),
},
)]),
required: None,
additional_properties: None,
}
);
}
#[test]
fn parse_tool_input_schema_infers_object_shape_from_schema_additional_properties_only() {
// Example schema shape:
// {
// "additionalProperties": {
// "type": "string"
// }
// }
//
// Expected normalization behavior:
// - A schema-valued `additionalProperties` also implies an object schema
// when `type` is omitted.
// - The nested schema is preserved as the object's
// `additionalProperties` definition.
let schema = parse_tool_input_schema(&serde_json::json!({
"additionalProperties": {
"type": "string"
}
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::new(),
required: None,
additional_properties: Some(AdditionalProperties::Schema(Box::new(
JsonSchema::String { description: None },
))),
}
);
}
// Schemas that should be preserved for Responses API compatibility rather than
// being rewritten into a different shape. These currently fail on the baseline
// normalizer and are the intended signal for the new JsonSchema work.
#[test]
#[ignore = "Expected to pass after the new JsonSchema preserves nullable type unions"]
fn parse_tool_input_schema_preserves_nested_nullable_type_union() {
// Example schema shape:
// {
// "type": "object",
// "properties": {
// "nickname": {
// "type": ["string", "null"],
// "description": "Optional nickname"
// }
// },
// "required": ["nickname"],
// "additionalProperties": false
// }
//
// Expected normalization behavior:
// - The nested property keeps the explicit `["string", "null"]` union.
// - The object-level `required` and `additionalProperties: false` stay intact.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"properties": {
"nickname": {
"type": ["string", "null"],
"description": "Optional nickname"
}
},
"required": ["nickname"],
"additionalProperties": false
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([(
"nickname".to_string(),
serde_json::from_value(serde_json::json!({
"type": ["string", "null"],
"description": "Optional nickname"
}))
.expect("nested nullable schema"),
)]),
required: Some(vec!["nickname".to_string()]),
additional_properties: Some(false.into()),
}
);
}
#[test]
#[ignore = "Expected to pass after the new JsonSchema preserves nested anyOf schemas"]
fn parse_tool_input_schema_preserves_nested_any_of_property() {
// Example schema shape:
// {
// "type": "object",
// "properties": {
// "query": {
// "anyOf": [
// { "type": "string" },
// { "type": "number" }
// ]
// }
// }
// }
//
// Expected normalization behavior:
// - The nested `anyOf` is preserved rather than flattened into a single
// fallback type.
let schema = parse_tool_input_schema(&serde_json::json!({
"type": "object",
"properties": {
"query": {
"anyOf": [
{ "type": "string" },
{ "type": "number" }
]
}
}
}))
.expect("parse schema");
assert_eq!(
schema,
JsonSchema::Object {
properties: BTreeMap::from([(
"query".to_string(),
serde_json::from_value(serde_json::json!({
"anyOf": [
{ "type": "string" },
{ "type": "number" }
]
}))
.expect("nested anyOf schema"),
)]),
required: None,
additional_properties: None,
}
);
}

View File

@@ -4,6 +4,7 @@ use crate::app_command::AppCommandView;
use crate::app_event::AppEvent;
use crate::app_event::ExitMode;
use crate::app_event::FeedbackCategory;
use crate::app_event::RateLimitRefreshOrigin;
use crate::app_event::RealtimeAudioDeviceKind;
#[cfg(target_os = "windows")]
use crate::app_event::WindowsSandboxEnableMode;
@@ -1894,14 +1895,25 @@ impl App {
});
}
fn refresh_rate_limits(&mut self, app_server: &AppServerSession, request_id: u64) {
/// Spawns a background task to fetch account rate limits and deliver the
/// result as a `RateLimitsLoaded` event.
///
/// The `origin` is forwarded to the completion handler so it can distinguish
/// a startup prefetch (which only updates cached snapshots and schedules a
/// frame) from a `/status`-triggered refresh (which must finalize the
/// corresponding status card).
fn refresh_rate_limits(
&mut self,
app_server: &AppServerSession,
origin: RateLimitRefreshOrigin,
) {
let request_handle = app_server.request_handle();
let app_event_tx = self.app_event_tx.clone();
tokio::spawn(async move {
let result = fetch_account_rate_limits(request_handle)
.await
.map_err(|err| err.to_string());
app_event_tx.send(AppEvent::RateLimitsLoaded { request_id, result });
app_event_tx.send(AppEvent::RateLimitsLoaded { origin, result });
});
}
@@ -3613,9 +3625,9 @@ impl App {
let feedback_audience = bootstrap.feedback_audience;
let auth_mode = bootstrap.auth_mode;
let has_chatgpt_account = bootstrap.has_chatgpt_account;
let requires_openai_auth = bootstrap.requires_openai_auth;
let status_account_display = bootstrap.status_account_display.clone();
let initial_plan_type = bootstrap.plan_type;
let startup_rate_limit_snapshots = bootstrap.rate_limit_snapshots;
let session_telemetry = SessionTelemetry::new(
ThreadId::new(),
model.as_str(),
@@ -3749,9 +3761,6 @@ impl App {
}
};
for snapshot in startup_rate_limit_snapshots {
chat_widget.on_rate_limit_snapshot(Some(snapshot));
}
chat_widget
.maybe_prompt_windows_sandbox_enable(should_prompt_windows_sandbox_nux_at_startup);
@@ -3839,6 +3848,11 @@ impl App {
tokio::pin!(tui_events);
tui.frame_requester().schedule_frame();
// Kick off a non-blocking rate-limit prefetch so the first `/status`
// already has data, without delaying the initial frame render.
if requires_openai_auth && has_chatgpt_account {
app.refresh_rate_limits(&app_server, RateLimitRefreshOrigin::StartupPrefetch);
}
let mut listen_for_app_server_events = true;
let mut waiting_for_initial_session_configured = wait_for_initial_session_configured;
@@ -4440,21 +4454,30 @@ impl App {
AppEvent::FileSearchResult { query, matches } => {
self.chat_widget.apply_file_search_result(query, matches);
}
AppEvent::RefreshRateLimits { request_id } => {
self.refresh_rate_limits(app_server, request_id);
AppEvent::RefreshRateLimits { origin } => {
self.refresh_rate_limits(app_server, origin);
}
AppEvent::RateLimitsLoaded { request_id, result } => match result {
AppEvent::RateLimitsLoaded { origin, result } => match result {
Ok(snapshots) => {
for snapshot in snapshots {
self.chat_widget.on_rate_limit_snapshot(Some(snapshot));
}
self.chat_widget
.finish_status_rate_limit_refresh(request_id);
match origin {
RateLimitRefreshOrigin::StartupPrefetch => {
tui.frame_requester().schedule_frame();
}
RateLimitRefreshOrigin::StatusCommand { request_id } => {
self.chat_widget
.finish_status_rate_limit_refresh(request_id);
}
}
}
Err(err) => {
tracing::warn!("account/rateLimits/read failed during TUI refresh: {err}");
self.chat_widget
.finish_status_rate_limit_refresh(request_id);
if let RateLimitRefreshOrigin::StatusCommand { request_id } = origin {
self.chat_widget
.finish_status_rate_limit_refresh(request_id);
}
}
},
AppEvent::ConnectorsLoaded { result, is_final } => {
@@ -6220,6 +6243,7 @@ mod tests {
use crate::chatwidget::create_initial_user_message;
use crate::chatwidget::tests::make_chatwidget_manual_with_sender;
use crate::chatwidget::tests::set_chatgpt_auth;
use crate::chatwidget::tests::set_fast_mode_test_catalog;
use crate::file_search::FileSearchManager;
use crate::history_cell::AgentMessageCell;
use crate::history_cell::HistoryCell;
@@ -9042,15 +9066,17 @@ guardian_approval = true
target_os = "windows",
ignore = "snapshot path rendering differs on Windows"
)]
async fn clear_ui_header_shows_fast_status_only_for_gpt54() {
async fn clear_ui_header_shows_fast_status_for_fast_capable_models() {
let mut app = make_test_app().await;
app.config.cwd = PathBuf::from("/tmp/project").abs();
app.chat_widget.set_model("gpt-5.4");
set_fast_mode_test_catalog(&mut app.chat_widget);
app.chat_widget
.set_reasoning_effort(Some(ReasoningEffortConfig::XHigh));
app.chat_widget
.set_service_tier(Some(codex_protocol::config_types::ServiceTier::Fast));
set_chatgpt_auth(&mut app.chat_widget);
set_fast_mode_test_catalog(&mut app.chat_widget);
let rendered = app
.clear_ui_header_lines_with_version(/*width*/ 80, "<VERSION>")
@@ -9064,7 +9090,7 @@ guardian_approval = true
.collect::<Vec<_>>()
.join("\n");
assert_snapshot!("clear_ui_header_fast_status_gpt54_only", rendered);
assert_snapshot!("clear_ui_header_fast_status_fast_capable_models", rendered);
}
async fn make_test_app() -> App {

View File

@@ -75,6 +75,23 @@ pub(crate) struct ConnectorsSnapshot {
pub(crate) connectors: Vec<AppInfo>,
}
/// Distinguishes why a rate-limit refresh was requested so the completion
/// handler can route the result correctly.
///
/// A `StartupPrefetch` fires once, concurrently with the rest of TUI init, and
/// only updates the cached snapshots (no status card to finalize). A
/// `StatusCommand` is tied to a specific `/status` invocation and must call
/// `finish_status_rate_limit_refresh` when done so the card stops showing a
/// "refreshing" state.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum RateLimitRefreshOrigin {
/// Eagerly fetched after bootstrap so the first `/status` already has data.
StartupPrefetch,
/// User-initiated via `/status`; the `request_id` correlates with the
/// status card that should be updated when the fetch completes.
StatusCommand { request_id: u64 },
}
#[allow(clippy::large_enum_variant)]
#[derive(Debug)]
pub(crate) enum AppEvent {
@@ -139,12 +156,12 @@ pub(crate) enum AppEvent {
/// Refresh account rate limits in the background.
RefreshRateLimits {
request_id: u64,
origin: RateLimitRefreshOrigin,
},
/// Result of refreshing rate limits.
RateLimitsLoaded {
request_id: u64,
origin: RateLimitRefreshOrigin,
result: Result<Vec<RateLimitSnapshot>, String>,
},
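The origin-based routing in the hunks above can be sketched in isolation. This is a minimal standalone sketch: the enum mirrors the diff, while `route` is a hypothetical stand-in for the `App` handler that either schedules a frame (startup prefetch) or finalizes the matching `/status` card.

```rust
// Hypothetical sketch of routing a completed rate-limit fetch by origin;
// the real handler lives in `App` and calls into `ChatWidget`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RateLimitRefreshOrigin {
    StartupPrefetch,
    StatusCommand { request_id: u64 },
}

fn route(origin: RateLimitRefreshOrigin) -> String {
    match origin {
        // Startup prefetch only updates cached snapshots, so a redraw suffices.
        RateLimitRefreshOrigin::StartupPrefetch => "schedule_frame".to_string(),
        // A /status-triggered refresh must finalize the card it belongs to.
        RateLimitRefreshOrigin::StatusCommand { request_id } => {
            format!("finish_status_rate_limit_refresh({request_id})")
        }
    }
}

fn main() {
    println!("{}", route(RateLimitRefreshOrigin::StartupPrefetch));
    println!("{}", route(RateLimitRefreshOrigin::StatusCommand { request_id: 7 }));
}
```

Carrying the `request_id` inside the `StatusCommand` variant (rather than on every event) is what lets the error path skip card finalization entirely for prefetches.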

View File

@@ -92,17 +92,25 @@ use color_eyre::eyre::WrapErr;
use std::collections::HashMap;
use std::path::PathBuf;
/// Data collected during the TUI bootstrap phase that the main event loop
/// needs to configure the UI, telemetry, and initial rate-limit prefetch.
///
/// Rate-limit snapshots are intentionally **not** included here; they are
/// fetched asynchronously after bootstrap returns so that the TUI can render
/// its first frame without waiting for the rate-limit round-trip.
pub(crate) struct AppServerBootstrap {
pub(crate) account_auth_mode: Option<AuthMode>,
pub(crate) account_email: Option<String>,
pub(crate) auth_mode: Option<TelemetryAuthMode>,
pub(crate) status_account_display: Option<StatusAccountDisplay>,
pub(crate) plan_type: Option<codex_protocol::account::PlanType>,
/// Whether the configured model provider needs OpenAI-style auth. Combined
/// with `has_chatgpt_account` to decide if a startup rate-limit prefetch
/// should be fired.
pub(crate) requires_openai_auth: bool,
pub(crate) default_model: String,
pub(crate) feedback_audience: FeedbackAudience,
pub(crate) has_chatgpt_account: bool,
pub(crate) available_models: Vec<ModelPreset>,
pub(crate) rate_limit_snapshots: Vec<RateLimitSnapshot>,
}
pub(crate) struct AppServerSession {
@@ -173,17 +181,7 @@ impl AppServerSession {
}
pub(crate) async fn bootstrap(&mut self, config: &Config) -> Result<AppServerBootstrap> {
let account_request_id = self.next_request_id();
let account: GetAccountResponse = self
.client
.request_typed(ClientRequest::GetAccount {
request_id: account_request_id,
params: GetAccountParams {
refresh_token: false,
},
})
.await
.wrap_err("account/read failed during TUI bootstrap")?;
let account = self.read_account().await?;
let model_request_id = self.next_request_id();
let models: ModelListResponse = self
.client
@@ -215,7 +213,6 @@ impl AppServerSession {
.wrap_err("model/list returned no models for TUI bootstrap")?;
let (
account_auth_mode,
account_email,
auth_mode,
status_account_display,
@@ -224,7 +221,6 @@ impl AppServerSession {
has_chatgpt_account,
) = match account.account {
Some(Account::ApiKey {}) => (
Some(AuthMode::ApiKey),
None,
Some(TelemetryAuthMode::ApiKey),
Some(StatusAccountDisplay::ApiKey),
@@ -239,7 +235,6 @@ impl AppServerSession {
FeedbackAudience::External
};
(
Some(AuthMode::Chatgpt),
Some(email.clone()),
Some(TelemetryAuthMode::Chatgpt),
Some(StatusAccountDisplay::ChatGpt {
@@ -251,50 +246,38 @@ impl AppServerSession {
true,
)
}
None => (
None,
None,
None,
None,
None,
FeedbackAudience::External,
false,
),
None => (None, None, None, None, FeedbackAudience::External, false),
};
let rate_limit_snapshots = if account.requires_openai_auth && has_chatgpt_account {
let rate_limit_request_id = self.next_request_id();
match self
.client
.request_typed(ClientRequest::GetAccountRateLimits {
request_id: rate_limit_request_id,
params: None,
})
.await
{
Ok(rate_limits) => app_server_rate_limit_snapshots_to_core(rate_limits),
Err(err) => {
tracing::warn!("account/rateLimits/read failed during TUI bootstrap: {err}");
Vec::new()
}
}
} else {
Vec::new()
};
Ok(AppServerBootstrap {
account_auth_mode,
account_email,
auth_mode,
status_account_display,
plan_type,
requires_openai_auth: account.requires_openai_auth,
default_model,
feedback_audience,
has_chatgpt_account,
available_models,
rate_limit_snapshots,
})
}
/// Fetches the current account info without refreshing the auth token.
///
/// Used by both `bootstrap` (to populate the initial UI) and `get_login_status`
/// (to check auth mode without the overhead of a full bootstrap).
pub(crate) async fn read_account(&mut self) -> Result<GetAccountResponse> {
let account_request_id = self.next_request_id();
self.client
.request_typed(ClientRequest::GetAccount {
request_id: account_request_id,
params: GetAccountParams {
refresh_token: false,
},
})
.await
.wrap_err("account/read failed during TUI bootstrap")
}
pub(crate) async fn next_event(&mut self) -> Option<AppServerEvent> {
self.client.next_event().await
}
@@ -821,6 +804,7 @@ fn model_preset_from_api_model(model: ApiModel) -> ModelPreset {
})
.collect(),
supports_personality: model.supports_personality,
additional_speed_tiers: model.additional_speed_tiers,
is_default: model.is_default,
upgrade,
show_in_picker: !model.hidden,

View File

@@ -293,6 +293,7 @@ fn queued_message_edit_binding_for_terminal(terminal_info: TerminalInfo) -> KeyB
use crate::app_event::AppEvent;
use crate::app_event::ConnectorsSnapshot;
use crate::app_event::ExitMode;
use crate::app_event::RateLimitRefreshOrigin;
#[cfg(target_os = "windows")]
use crate::app_event::WindowsSandboxEnableMode;
use crate::app_event_sender::AppEventSender;
@@ -387,7 +388,6 @@ use unicode_segmentation::UnicodeSegmentation;
const USER_SHELL_COMMAND_HELP_TITLE: &str = "Prefix a command with ! to run it locally";
const USER_SHELL_COMMAND_HELP_HINT: &str = "Example: !ls";
const DEFAULT_OPENAI_BASE_URL: &str = "https://api.openai.com/v1";
const FAST_STATUS_MODEL: &str = "gpt-5.4";
const DEFAULT_STATUS_LINE_ITEMS: [&str; 3] =
["model-with-reasoning", "context-remaining", "current-dir"];
// Track information about an in-flight exec command.
@@ -5250,8 +5250,9 @@ impl ChatWidget {
self.next_status_refresh_request_id =
self.next_status_refresh_request_id.wrapping_add(1);
self.add_status_output(/*refreshing_rate_limits*/ true, Some(request_id));
self.app_event_tx
.send(AppEvent::RefreshRateLimits { request_id });
self.app_event_tx.send(AppEvent::RefreshRateLimits {
origin: RateLimitRefreshOrigin::StatusCommand { request_id },
});
} else {
self.add_status_output(
/*refreshing_rate_limits*/ false, /*request_id*/ None,
@@ -9491,7 +9492,7 @@ impl ChatWidget {
model: &str,
service_tier: Option<ServiceTier>,
) -> bool {
model == FAST_STATUS_MODEL
self.model_supports_fast_mode(model)
&& matches!(service_tier, Some(ServiceTier::Fast))
&& self.has_chatgpt_account
}
@@ -9608,6 +9609,19 @@ impl ChatWidget {
.unwrap_or(false)
}
fn model_supports_fast_mode(&self, model: &str) -> bool {
self.model_catalog
.try_list_models()
.ok()
.and_then(|models| {
models
.into_iter()
.find(|preset| preset.model == model)
.map(|preset| preset.supports_fast_mode())
})
.unwrap_or(false)
}
/// Return whether the effective model currently advertises image-input support.
///
/// We intentionally default to `true` when model metadata cannot be read so transient catalog

View File

@@ -117,7 +117,9 @@ pub(super) use codex_protocol::models::FileSystemPermissions;
pub(super) use codex_protocol::models::MessagePhase;
pub(super) use codex_protocol::models::NetworkPermissions;
pub(super) use codex_protocol::models::PermissionProfile;
pub(super) use codex_protocol::openai_models::ModelInfo;
pub(super) use codex_protocol::openai_models::ModelPreset;
pub(super) use codex_protocol::openai_models::ModelsResponse;
pub(super) use codex_protocol::openai_models::ReasoningEffortPreset;
pub(super) use codex_protocol::openai_models::default_input_modalities;
pub(super) use codex_protocol::parse_command::ParsedCommand;
@@ -197,6 +199,7 @@ pub(super) use crossterm::event::KeyCode;
pub(super) use crossterm::event::KeyEvent;
pub(super) use crossterm::event::KeyModifiers;
pub(super) use insta::assert_snapshot;
pub(super) use serde_json::json;
#[cfg(target_os = "windows")]
pub(super) use serial_test::serial;
pub(super) use std::collections::BTreeMap;
@@ -262,4 +265,5 @@ mod status_command_tests;
pub(crate) use helpers::make_chatwidget_manual_with_sender;
pub(crate) use helpers::set_chatgpt_auth;
pub(crate) use helpers::set_fast_mode_test_catalog;
pub(super) use helpers::*;

View File

@@ -345,6 +345,69 @@ pub(crate) fn set_chatgpt_auth(chat: &mut ChatWidget) {
chat.model_catalog = test_model_catalog(&chat.config);
}
fn test_model_info(slug: &str, priority: i32, supports_fast_mode: bool) -> ModelInfo {
let additional_speed_tiers = if supports_fast_mode {
vec![codex_protocol::openai_models::SPEED_TIER_FAST]
} else {
Vec::new()
};
serde_json::from_value(json!({
"slug": slug,
"display_name": slug,
"description": format!("{slug} description"),
"default_reasoning_level": "medium",
"supported_reasoning_levels": [{"effort": "medium", "description": "medium"}],
"shell_type": "shell_command",
"visibility": "list",
"supported_in_api": true,
"priority": priority,
"additional_speed_tiers": additional_speed_tiers,
"availability_nux": null,
"upgrade": null,
"base_instructions": "base instructions",
"supports_reasoning_summaries": false,
"default_reasoning_summary": "none",
"support_verbosity": false,
"default_verbosity": null,
"apply_patch_tool_type": null,
"truncation_policy": {"mode": "bytes", "limit": 10_000},
"supports_parallel_tool_calls": false,
"supports_image_detail_original": false,
"context_window": 272_000,
"experimental_supported_tools": [],
}))
.expect("valid model info")
}
pub(crate) fn set_fast_mode_test_catalog(chat: &mut ChatWidget) {
let models: Vec<ModelPreset> = ModelsResponse {
models: vec![
test_model_info(
"gpt-5.4", /*priority*/ 0, /*supports_fast_mode*/ true,
),
test_model_info(
"gpt-5.3-codex",
/*priority*/ 1,
/*supports_fast_mode*/ false,
),
],
}
.models
.into_iter()
.map(Into::into)
.collect();
chat.model_catalog = Arc::new(ModelCatalog::new(
models,
CollaborationModesConfig {
default_mode_request_user_input: chat
.config
.features
.enabled(Feature::DefaultModeRequestUserInput),
},
));
}
pub(crate) async fn make_chatwidget_manual_with_sender() -> (
ChatWidget,
AppEventSender,

View File

@@ -1592,6 +1592,7 @@ async fn model_picker_hides_show_in_picker_false_models_from_cache() {
description: "medium".to_string(),
}],
supports_personality: false,
additional_speed_tiers: Vec::new(),
is_default: false,
upgrade: None,
show_in_picker,
@@ -1714,6 +1715,7 @@ async fn single_reasoning_option_skips_selection() {
default_reasoning_effort: ReasoningEffortConfig::High,
supported_reasoning_efforts: single_effort,
supports_personality: false,
additional_speed_tiers: Vec::new(),
is_default: false,
upgrade: None,
show_in_picker: true,

View File

@@ -574,20 +574,28 @@ async fn commentary_completion_restores_status_indicator_before_exec_begin() {
#[tokio::test]
async fn fast_status_indicator_requires_chatgpt_auth() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual(Some("gpt-5.4")).await;
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
chat.set_service_tier(Some(ServiceTier::Fast));
assert!(!chat.should_show_fast_status(chat.current_model(), chat.current_service_tier(),));
set_chatgpt_auth(&mut chat);
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
assert!(chat.should_show_fast_status(chat.current_model(), chat.current_service_tier(),));
}
#[tokio::test]
async fn fast_status_indicator_is_hidden_for_non_gpt54_model() {
async fn fast_status_indicator_is_hidden_for_models_without_fast_support() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual(Some("gpt-5.3-codex")).await;
set_fast_mode_test_catalog(&mut chat);
assert!(!get_available_model(&chat, "gpt-5.3-codex").supports_fast_mode());
chat.set_service_tier(Some(ServiceTier::Fast));
set_chatgpt_auth(&mut chat);
set_fast_mode_test_catalog(&mut chat);
assert!(!get_available_model(&chat, "gpt-5.3-codex").supports_fast_mode());
assert!(!chat.should_show_fast_status(chat.current_model(), chat.current_service_tier(),));
}
@@ -595,7 +603,11 @@ async fn fast_status_indicator_is_hidden_for_non_gpt54_model() {
#[tokio::test]
async fn fast_status_indicator_is_hidden_when_fast_mode_is_off() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual(Some("gpt-5.4")).await;
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
set_chatgpt_auth(&mut chat);
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
assert!(!chat.should_show_fast_status(chat.current_model(), chat.current_service_tier(),));
}
@@ -946,8 +958,10 @@ async fn status_line_fast_mode_footer_snapshot() {
}
#[tokio::test]
async fn status_line_model_with_reasoning_includes_fast_for_gpt54_only() {
async fn status_line_model_with_reasoning_includes_fast_for_fast_capable_models() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual(Some("gpt-5.4")).await;
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
chat.config.cwd = test_project_path().abs();
chat.config.tui_status_line = Some(vec![
"model-with-reasoning".to_string(),
@@ -957,6 +971,8 @@ async fn status_line_model_with_reasoning_includes_fast_for_gpt54_only() {
chat.set_reasoning_effort(Some(ReasoningEffortConfig::XHigh));
chat.set_service_tier(Some(ServiceTier::Fast));
set_chatgpt_auth(&mut chat);
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
chat.refresh_status_line();
let test_cwd = test_path_display("/tmp/project");
@@ -1051,6 +1067,8 @@ async fn status_line_model_with_reasoning_fast_footer_snapshot() {
use ratatui::backend::TestBackend;
let (mut chat, _rx, _op_rx) = make_chatwidget_manual(Some("gpt-5.4")).await;
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
chat.show_welcome_banner = false;
chat.config.cwd = test_project_path().abs();
chat.config.tui_status_line = Some(vec![
@@ -1061,6 +1079,8 @@ async fn status_line_model_with_reasoning_fast_footer_snapshot() {
chat.set_reasoning_effort(Some(ReasoningEffortConfig::XHigh));
chat.set_service_tier(Some(ServiceTier::Fast));
set_chatgpt_auth(&mut chat);
set_fast_mode_test_catalog(&mut chat);
assert!(get_available_model(&chat, "gpt-5.4").supports_fast_mode());
chat.refresh_status_line();
let width = 80;

View File

@@ -15,49 +15,50 @@ async fn status_command_renders_immediately_and_refreshes_rate_limits_for_chatgp
other => panic!("expected status output before refresh request, got {other:?}"),
};
assert!(
rendered.contains("refreshing limits"),
"expected /status to explain the background refresh, got: {rendered}"
!rendered.contains("refreshing limits"),
"expected /status to avoid transient refresh text in terminal history, got: {rendered}"
);
let request_id = match rx.try_recv() {
Ok(AppEvent::RefreshRateLimits { request_id }) => request_id,
Ok(AppEvent::RefreshRateLimits {
origin: RateLimitRefreshOrigin::StatusCommand { request_id },
}) => request_id,
other => panic!("expected rate-limit refresh request, got {other:?}"),
};
pretty_assertions::assert_eq!(request_id, 0);
}
#[tokio::test]
async fn status_command_updates_rendered_cell_after_rate_limit_refresh() {
async fn status_command_refresh_updates_cached_limits_for_future_status_outputs() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual(/*model_override*/ None).await;
set_chatgpt_auth(&mut chat);
chat.dispatch_command(SlashCommand::Status);
let cell = match rx.try_recv() {
Ok(AppEvent::InsertHistoryCell(cell)) => cell,
match rx.try_recv() {
Ok(AppEvent::InsertHistoryCell(_)) => {}
other => panic!("expected status output before refresh request, got {other:?}"),
};
}
let first_request_id = match rx.try_recv() {
Ok(AppEvent::RefreshRateLimits { request_id }) => request_id,
Ok(AppEvent::RefreshRateLimits {
origin: RateLimitRefreshOrigin::StatusCommand { request_id },
}) => request_id,
other => panic!("expected rate-limit refresh request, got {other:?}"),
};
let initial = lines_to_single_string(&cell.display_lines(/*width*/ 80));
assert!(
initial.contains("refreshing limits"),
"expected initial /status output to show refresh notice, got: {initial}"
);
chat.on_rate_limit_snapshot(Some(snapshot(/*percent*/ 92.0)));
chat.finish_status_rate_limit_refresh(first_request_id);
drain_insert_history(&mut rx);
let updated = lines_to_single_string(&cell.display_lines(/*width*/ 80));
assert_ne!(
initial, updated,
"expected refreshed /status output to change"
);
chat.dispatch_command(SlashCommand::Status);
let refreshed = match rx.try_recv() {
Ok(AppEvent::InsertHistoryCell(cell)) => {
lines_to_single_string(&cell.display_lines(/*width*/ 80))
}
other => panic!("expected refreshed status output, got {other:?}"),
};
assert!(
!updated.contains("refreshing limits"),
"expected refresh notice to clear after background update, got: {updated}"
refreshed.contains("8% left"),
"expected a future /status output to use refreshed cached limits, got: {refreshed}"
);
}
@@ -81,46 +82,41 @@ async fn status_command_overlapping_refreshes_update_matching_cells_only() {
set_chatgpt_auth(&mut chat);
chat.dispatch_command(SlashCommand::Status);
let first_cell = match rx.try_recv() {
Ok(AppEvent::InsertHistoryCell(cell)) => cell,
match rx.try_recv() {
Ok(AppEvent::InsertHistoryCell(_)) => {}
other => panic!("expected first status output, got {other:?}"),
};
}
let first_request_id = match rx.try_recv() {
Ok(AppEvent::RefreshRateLimits { request_id }) => request_id,
Ok(AppEvent::RefreshRateLimits {
origin: RateLimitRefreshOrigin::StatusCommand { request_id },
}) => request_id,
other => panic!("expected first refresh request, got {other:?}"),
};
chat.dispatch_command(SlashCommand::Status);
let second_cell = match rx.try_recv() {
Ok(AppEvent::InsertHistoryCell(cell)) => cell,
let second_rendered = match rx.try_recv() {
Ok(AppEvent::InsertHistoryCell(cell)) => {
lines_to_single_string(&cell.display_lines(/*width*/ 80))
}
other => panic!("expected second status output, got {other:?}"),
};
let second_request_id = match rx.try_recv() {
Ok(AppEvent::RefreshRateLimits { request_id }) => request_id,
Ok(AppEvent::RefreshRateLimits {
origin: RateLimitRefreshOrigin::StatusCommand { request_id },
}) => request_id,
other => panic!("expected second refresh request, got {other:?}"),
};
assert_ne!(first_request_id, second_request_id);
assert!(
!second_rendered.contains("refreshing limits"),
"expected /status to avoid transient refresh text in terminal history, got: {second_rendered}"
);
chat.finish_status_rate_limit_refresh(first_request_id);
let first_after_failure = lines_to_single_string(&first_cell.display_lines(/*width*/ 80));
let second_still_refreshing = lines_to_single_string(&second_cell.display_lines(/*width*/ 80));
assert!(
!first_after_failure.contains("refreshing limits"),
"expected first status cell to stop refreshing after its request completed, got: {first_after_failure}"
);
assert!(
second_still_refreshing.contains("refreshing limits"),
"expected later status cell to keep refreshing until its own request completes, got: {second_still_refreshing}"
);
pretty_assertions::assert_eq!(chat.refreshing_status_outputs.len(), 1);
chat.on_rate_limit_snapshot(Some(snapshot(/*percent*/ 92.0)));
chat.finish_status_rate_limit_refresh(second_request_id);
let second_after_success = lines_to_single_string(&second_cell.display_lines(/*width*/ 80));
assert!(
!second_after_success.contains("refreshing limits"),
"expected second status cell to refresh once its own request completed, got: {second_after_success}"
);
assert!(chat.refreshing_status_outputs.is_empty());
}

View File

@@ -1615,6 +1615,9 @@ pub enum LoginStatus {
NotAuthenticated,
}
/// Determines the user's authentication mode using a lightweight account read
/// rather than a full `bootstrap`, avoiding the model-list fetch that
/// `bootstrap` would trigger.
async fn get_login_status(
app_server: &mut AppServerSession,
config: &Config,
@@ -1623,9 +1626,14 @@ async fn get_login_status(
return Ok(LoginStatus::NotAuthenticated);
}
let bootstrap = app_server.bootstrap(config).await?;
Ok(match bootstrap.account_auth_mode {
Some(auth_mode) => LoginStatus::AuthMode(auth_mode),
let account = app_server.read_account().await?;
Ok(match account.account {
Some(codex_app_server_protocol::Account::ApiKey {}) => {
LoginStatus::AuthMode(AppServerAuthMode::ApiKey)
}
Some(codex_app_server_protocol::Account::Chatgpt { .. }) => {
LoginStatus::AuthMode(AppServerAuthMode::Chatgpt)
}
None => LoginStatus::NotAuthenticated,
})
}

View File

@@ -415,23 +415,11 @@ impl StatusHistoryCell {
if rows_data.is_empty() {
return vec![formatter.line(
"Limits",
vec![if state.refreshing_rate_limits {
Span::from("refreshing cached limits...").dim()
} else {
Span::from("data not available yet").dim()
}],
vec![Span::from("not available for this account").dim()],
)];
}
let mut lines =
self.rate_limit_row_lines(rows_data, available_inner_width, formatter);
if state.refreshing_rate_limits {
lines.push(formatter.line(
"Notice",
vec![Span::from("refreshing limits in background...").dim()],
));
}
lines
self.rate_limit_row_lines(rows_data, available_inner_width, formatter)
}
StatusRateLimitData::Stale(rows_data) => {
let mut lines =
@@ -439,7 +427,7 @@ impl StatusHistoryCell {
lines.push(formatter.line(
"Warning",
vec![Span::from(if state.refreshing_rate_limits {
"limits may be stale - refreshing in background..."
"limits may be stale - run /status again shortly."
} else {
"limits may be stale - start new turn to refresh."
})
@@ -447,11 +435,17 @@ impl StatusHistoryCell {
));
lines
}
StatusRateLimitData::Unavailable => {
vec![formatter.line(
"Limits",
vec![Span::from("not available for this account").dim()],
)]
}
StatusRateLimitData::Missing => {
vec![formatter.line(
"Limits",
vec![Span::from(if state.refreshing_rate_limits {
"refreshing limits..."
"refresh requested; run /status again shortly."
} else {
"data not available yet"
})
@@ -536,6 +530,7 @@ impl StatusHistoryCell {
}
push_label(labels, seen, "Warning");
}
StatusRateLimitData::Unavailable => push_label(labels, seen, "Limits"),
StatusRateLimitData::Missing => push_label(labels, seen, "Limits"),
}
}

View File

@@ -50,6 +50,8 @@ pub(crate) enum StatusRateLimitData {
Available(Vec<StatusRateLimitRow>),
/// Snapshot data exists but is older than the staleness threshold.
Stale(Vec<StatusRateLimitRow>),
/// The refresh completed, but the response did not include displayable usage data.
Unavailable,
/// No snapshot data is currently available.
Missing,
}
@@ -269,7 +271,7 @@ pub(crate) fn compose_rate_limit_data_many(
}
if rows.is_empty() {
StatusRateLimitData::Available(vec![])
StatusRateLimitData::Unavailable
} else if stale {
StatusRateLimitData::Stale(rows)
} else {

View File

@@ -1,6 +1,5 @@
---
source: tui/src/status/tests.rs
assertion_line: 765
expression: sanitized
---
/status
@@ -20,5 +19,4 @@ expression: sanitized
│ Context window: 100% left (750 used / 272K) │
│ 5h limit: [███████████░░░░░░░░░] 55% left (resets 08:24) │
│ Weekly limit: [██████████████░░░░░░] 70% left (resets 08:54) │
│ Notice: refreshing limits in background... │
╰───────────────────────────────────────────────────────────────────────╯

View File

@@ -17,5 +17,5 @@ expression: sanitized
│ │
│ Token usage: 750 total (500 input + 250 output) │
│ Context window: 100% left (750 used / 272K) │
│ Limits: data not available yet                                        │
│ Limits: not available for this account
╰───────────────────────────────────────────────────────────────────────╯

View File

@@ -0,0 +1,21 @@
---
source: tui/src/status/tests.rs
expression: sanitized
---
/status
╭───────────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0) │
│ │
│ Visit https://chatgpt.com/codex/settings/usage for up-to-date │
│ information on rate limits and credits │
│ │
│ Model: gpt-5.1-codex-max (reasoning none, summaries auto) │
│ Directory: [[workspace]] │
│ Permissions: Custom (read-only, on-request) │
│ Agents.md: <none> │
│ │
│ Token usage: 750 total (500 input + 250 output) │
│ Context window: 100% left (750 used / 272K) │
│ Limits: not available for this account │
╰───────────────────────────────────────────────────────────────────────╯

View File

@@ -835,7 +835,7 @@ async fn status_snapshot_includes_credits_and_limits() {
}
#[tokio::test]
async fn status_snapshot_shows_empty_limits_message() {
async fn status_snapshot_shows_unavailable_limits_message() {
let temp_home = TempDir::new().expect("temp home");
let mut config = test_config(&temp_home).await;
config.model = Some("gpt-5.1-codex-max".to_string());
@@ -891,6 +891,63 @@ async fn status_snapshot_shows_empty_limits_message() {
assert_snapshot!(sanitized);
}
#[tokio::test]
async fn status_snapshot_treats_refreshing_empty_limits_as_unavailable() {
let temp_home = TempDir::new().expect("temp home");
let mut config = test_config(&temp_home).await;
config.model = Some("gpt-5.1-codex-max".to_string());
config.cwd = PathBuf::from("/workspace/tests").abs();
let usage = TokenUsage {
input_tokens: 500,
cached_input_tokens: 0,
output_tokens: 250,
reasoning_output_tokens: 0,
total_tokens: 750,
};
let snapshot = RateLimitSnapshot {
limit_id: None,
limit_name: None,
primary: None,
secondary: None,
credits: None,
plan_type: None,
};
let captured_at = chrono::Local
.with_ymd_and_hms(2024, 6, 7, 8, 9, 10)
.single()
.expect("timestamp");
let rate_display = rate_limit_snapshot_display(&snapshot, captured_at);
let model_slug = codex_core::test_support::get_model_offline(config.model.as_deref());
let token_info = token_info_for(&model_slug, &config, &usage);
let composite = new_status_output_with_rate_limits(
&config,
/*account_display*/ None,
Some(&token_info),
&usage,
&None,
/*thread_name*/ None,
/*forked_from*/ None,
std::slice::from_ref(&rate_display),
None,
captured_at,
&model_slug,
/*collaboration_mode*/ None,
/*reasoning_effort_override*/ None,
/*refreshing_rate_limits*/ true,
);
let mut rendered_lines = render_lines(&composite.display_lines(/*width*/ 80));
if cfg!(windows) {
for line in &mut rendered_lines {
*line = line.replace('\\', "/");
}
}
let sanitized = sanitize_directory(rendered_lines).join("\n");
assert_snapshot!(sanitized);
}
#[tokio::test]
async fn status_snapshot_shows_stale_limits_message() {
let temp_home = TempDir::new().expect("temp home");

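The snapshot tests above normalize Windows `\` path separators and mask the workspace directory before asserting. A hypothetical shell analogue of that sanitization step (the real tests do this in Rust with `line.replace` and `sanitize_directory`):

```shell
#!/usr/bin/env bash
# Normalize Windows '\' separators to '/', then mask the volatile workspace
# directory with a stable placeholder so snapshots compare across machines.
# The rendered line below is a made-up example value.
rendered='Directory: \workspace\tests'

# Replace every backslash with a forward slash.
normalized="${rendered//\\//}"

# Swap the concrete directory for the placeholder used in the snapshots.
sanitized="${normalized/\/workspace\/tests/[[workspace]]}"

echo "${sanitized}"   # prints: Directory: [[workspace]]
```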
scripts/start-codex-exec.sh (new executable file, 182 lines)

@@ -0,0 +1,182 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
echo "Usage: $0 HOST [RSYNC_OPTION]..." >&2
}
if [[ $# -lt 1 ]]; then
usage
exit 2
fi
case "$1" in
-h|--help)
usage
exit 0
;;
esac
remote_host="$1"
shift
remote_path='~/code/codex-sync'
local_exec_server_port="${CODEX_REMOTE_EXEC_SERVER_LOCAL_PORT:-8765}"
remote_exec_server_start_timeout_seconds="${CODEX_REMOTE_EXEC_SERVER_START_TIMEOUT_SECONDS:-15}"
remote_exec_server_pid=''
remote_exec_server_log_path=''
remote_exec_server_pid_path=''
cleanup() {
local exit_code=$?
trap - EXIT INT TERM
if [[ -n "${remote_exec_server_pid_path}" ]]; then
ssh "${remote_host}" \
"if [[ -f '${remote_exec_server_pid_path}' ]]; then kill \$(cat '${remote_exec_server_pid_path}') >/dev/null 2>&1 || true; fi; rm -f '${remote_exec_server_pid_path}' '${remote_exec_server_log_path}'" \
>/dev/null 2>&1 || true
fi
exit "${exit_code}"
}
trap cleanup EXIT INT TERM
if ! command -v git >/dev/null 2>&1; then
echo "git is required" >&2
exit 1
fi
if ! command -v ssh >/dev/null 2>&1; then
echo "ssh is required" >&2
exit 1
fi
if ! command -v rsync >/dev/null 2>&1; then
echo "local rsync is required" >&2
exit 1
fi
repo_root="$(git rev-parse --show-toplevel 2>/dev/null)" || {
echo "run this script from inside a git repository" >&2
exit 1
}
ssh "${remote_host}" "mkdir -p ${remote_path}"
if ! ssh "${remote_host}" 'command -v rsync >/dev/null 2>&1'; then
echo "remote rsync is required on ${remote_host}" >&2
exit 1
fi
sync_instance_id="$(date +%s)-$$"
rsync \
--archive \
--compress \
--human-readable \
--itemize-changes \
--exclude '.git/' \
--exclude 'codex-rs/target/' \
--filter=':- .gitignore' \
"$@" \
"${repo_root}/" \
"${remote_host}:${remote_path}/" \
>&2
remote_exec_server_log_path="/tmp/codex-exec-server-${sync_instance_id}.log"
remote_exec_server_pid_path="/tmp/codex-exec-server-${sync_instance_id}.pid"
remote_start_output="$(
ssh "${remote_host}" bash -s -- \
"${remote_exec_server_log_path}" \
"${remote_exec_server_pid_path}" \
"${remote_exec_server_start_timeout_seconds}" <<'EOF'
set -euo pipefail
remote_exec_server_log_path="$1"
remote_exec_server_pid_path="$2"
remote_exec_server_start_timeout_seconds="$3"
remote_repo_root="$HOME/code/codex-sync"
remote_codex_rs="$remote_repo_root/codex-rs"
cd "${remote_codex_rs}"
cargo build -p codex-exec-server --bin codex-exec-server
rm -f "${remote_exec_server_log_path}" "${remote_exec_server_pid_path}"
nohup ./target/debug/codex-exec-server --listen ws://127.0.0.1:0 \
>"${remote_exec_server_log_path}" 2>&1 &
remote_exec_server_pid="$!"
echo "${remote_exec_server_pid}" >"${remote_exec_server_pid_path}"
deadline=$((SECONDS + remote_exec_server_start_timeout_seconds))
while (( SECONDS < deadline )); do
if [[ -s "${remote_exec_server_log_path}" ]]; then
listen_url="$(head -n 1 "${remote_exec_server_log_path}" || true)"
if [[ "${listen_url}" == ws://* ]]; then
printf 'remote_exec_server_pid=%s\n' "${remote_exec_server_pid}"
printf 'remote_exec_server_log_path=%s\n' "${remote_exec_server_log_path}"
printf 'listen_url=%s\n' "${listen_url}"
exit 0
fi
fi
if ! kill -0 "${remote_exec_server_pid}" >/dev/null 2>&1; then
cat "${remote_exec_server_log_path}" >&2 || true
echo "remote exec server exited before reporting a listen URL" >&2
exit 1
fi
sleep 0.1
done
cat "${remote_exec_server_log_path}" >&2 || true
echo "timed out waiting for remote exec server listen URL" >&2
exit 1
EOF
)"
listen_url=''
while IFS='=' read -r key value; do
case "${key}" in
remote_exec_server_pid)
remote_exec_server_pid="${value}"
;;
remote_exec_server_log_path)
remote_exec_server_log_path="${value}"
;;
listen_url)
listen_url="${value}"
;;
esac
done <<< "${remote_start_output}"
if [[ -z "${remote_exec_server_pid}" || -z "${listen_url}" ]]; then
echo "failed to parse remote exec server startup output" >&2
exit 1
fi
remote_exec_server_port="${listen_url##*:}"
if [[ -z "${remote_exec_server_port}" || "${remote_exec_server_port}" == "${listen_url}" ]]; then
echo "failed to parse remote exec server port from ${listen_url}" >&2
exit 1
fi
echo "Remote exec server: ${listen_url}"
echo "Remote exec server log: ${remote_exec_server_log_path}"
echo "Press Ctrl-C to stop the SSH tunnel and remote exec server."
echo "Start codex via:"
echo "  CODEX_EXEC_SERVER_URL=ws://127.0.0.1:${local_exec_server_port} codex -C /tmp"
ssh \
-nNT \
-o ControlMaster=no \
-o ControlPath=none \
-o ExitOnForwardFailure=yes \
-o ServerAliveInterval=30 \
-o ServerAliveCountMax=3 \
-L "${local_exec_server_port}:127.0.0.1:${remote_exec_server_port}" \
"${remote_host}"
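The handshake parsing in the script can be exercised locally without an SSH session. A minimal sketch, using a canned `remote_start_output` (the pid, log path, and port below are made-up values):

```shell
#!/usr/bin/env bash
# Simulate the key=value lines the remote bootstrap prints on success.
remote_start_output='remote_exec_server_pid=4242
remote_exec_server_log_path=/tmp/codex-exec-server-demo.log
listen_url=ws://127.0.0.1:39111'

# Same parse loop as the script: split each line on the first '='.
listen_url=''
while IFS='=' read -r key value; do
  case "${key}" in
    listen_url) listen_url="${value}" ;;
  esac
done <<< "${remote_start_output}"

# "${listen_url##*:}" strips the longest prefix ending in ':', leaving the port.
remote_exec_server_port="${listen_url##*:}"
echo "${remote_exec_server_port}"   # prints: 39111
```

Note that `read -r key value` leaves everything after the first `=` in `value`, so the `ws://127.0.0.1:39111` URL survives intact despite containing no further `=` characters.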