Compare commits

...

18 Commits

Author SHA1 Message Date
Javier Soto
557731d7d6 Fix CI errors 2025-09-02 16:34:59 -07:00
Javier Soto
8ac87edd64 Use proper logging method 2025-09-02 15:28:54 -07:00
Javier Soto
bba09a89e2 Simplify 2025-09-02 14:33:47 -07:00
Javier Soto
8958283edd fix: work-around "The cursor position could not be read within a normal duration" crash
Fixes #2805
2025-09-02 14:28:39 -07:00
dedrisian-oai
3f8184034f Fix CI release build (#2864) 2025-08-29 03:06:10 +00:00
unship
f7cb2f87a0 Bug fix: clone of incoming_tx can lead to deadlock (#2747)
POC code

```rust
use tokio::sync::mpsc;
use std::time::Duration;

#[tokio::main]
async fn main() {
    println!("=== Test 1: Simulating original MCP server pattern ===");
    test_original_pattern().await;
}

async fn test_original_pattern() {
    println!("Testing the original pattern from MCP server...");
    
    // Create channel - this simulates the original incoming_tx/incoming_rx
    let (tx, mut rx) = mpsc::channel::<String>(10);
    
    // Task 1: Simulates stdin reader that will naturally terminate
    let stdin_task = tokio::spawn({
        let tx_clone = tx.clone();
        async move {
            println!("  stdin_task: Started, will send 3 messages then exit");
            for i in 0..3 {
                let msg = format!("Message {}", i);
                if tx_clone.send(msg.clone()).await.is_err() {
                    println!("  stdin_task: Receiver dropped, exiting");
                    break;
                }
                println!("  stdin_task: Sent {}", msg);
                tokio::time::sleep(Duration::from_millis(300)).await;
            }
            println!("  stdin_task: Finished (simulating EOF)");
            // tx_clone is dropped here
        }
    });
    
    // Task 2: Simulates message processor
    let processor_task = tokio::spawn(async move {
        println!("  processor_task: Started, waiting for messages");
        while let Some(msg) = rx.recv().await {
            println!("  processor_task: Processing {}", msg);
            tokio::time::sleep(Duration::from_millis(100)).await;
        }
        println!("  processor_task: Finished (channel closed)");
    });
    
    // Task 3: Simulates stdout writer or other background task
    let background_task = tokio::spawn(async move {
        for i in 0..2 {
            tokio::time::sleep(Duration::from_millis(500)).await;
            println!("  background_task: Tick {}", i);
        }
        println!("  background_task: Finished");
    });
    
    println!("  main: Original tx is still alive here");
    println!("  main: About to call tokio::join! - will this deadlock?");
    
    // This is the pattern from the original code
    let _ = tokio::join!(stdin_task, processor_task, background_task);
}

```
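
The deadlock happens because `main` still owns the original `tx` while `tokio::join!` waits: `rx.recv()` only returns `None` once every sender has been dropped, so `processor_task` never finishes and the join never completes. A minimal sketch of the fix under that assumption: move the original sender into the stdin task instead of cloning it, so the channel closes at EOF (this mirrors the change to `run_main` in this PR, where the `incoming_tx.clone()` is removed).

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(8);

    // Fixed pattern (names reused from the POC above): the original sender is
    // moved into the stdin task rather than cloned. When the task ends at EOF
    // the last sender is dropped, rx.recv() returns None, and the join completes.
    let stdin_task = tokio::spawn(async move {
        for i in 0..3 {
            if tx.send(format!("Message {i}")).await.is_err() {
                break; // receiver dropped
            }
        }
        // `tx` is dropped here; no senders remain alive in main.
    });

    let processor_task = tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            println!("processing {msg}");
        }
        println!("channel closed, processor done");
    });

    let _ = tokio::join!(stdin_task, processor_task);
}
```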

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-08-28 19:28:17 -07:00
Ahmed Ibrahim
9dbe7284d2 Following up on #2371 post commit feedback (#2852)
- Introduce a websearch end event to complement the begin event (see the
sketch after the screenshot below)
- Move the logic of adding the websearch tool to
`create_tools_json_for_responses_api`
- Make it the client's responsibility to toggle the tool on or off
- Other misc items from #2371 post-commit feedback
- Show the query:

<img width="1392" height="151" alt="image"
src="https://github.com/user-attachments/assets/8457f1a6-f851-44cf-bcca-0d4fe460ce89"
/>
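
A minimal sketch of the resulting event pair as a client sees it (event and struct names are taken from the protocol changes in this PR; the match arms are illustrative):

```rust
use codex_core::protocol::{EventMsg, WebSearchBeginEvent, WebSearchEndEvent};

// Begin only carries the call id; the query is known once the search call
// completes, so it is delivered on the end event.
fn handle_web_search_event(msg: &EventMsg) {
    match msg {
        EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id }) => {
            println!("web search started: {call_id}");
        }
        EventMsg::WebSearchEnd(WebSearchEndEvent { call_id, query }) => {
            println!("web search {call_id} searched: {query}");
        }
        _ => {}
    }
}
```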
2025-08-28 19:24:38 -07:00
dedrisian-oai
b8e8454b3f Custom /prompts (#2696)
Adds custom `/prompts` to `~/.codex/prompts/<command>.md`.

<img width="239" height="107" alt="Screenshot 2025-08-25 at 6 22 42 PM"
src="https://github.com/user-attachments/assets/fe6ebbaa-1bf6-49d3-95f9-fdc53b752679"
/>

---

Details:

1. Adds `Op::ListCustomPrompts` to core.
2. Returns `ListCustomPromptsResponse` with a list of `CustomPrompt`
entries (name, content).
3. TUI calls the operation on load and populates the custom prompts
(excluding prompts that collide with builtins), as sketched below.
4. Selecting the custom prompt automatically sends the prompt to the
agent.
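
A minimal sketch of the discovery step, assuming the helpers added in this PR (`default_prompts_dir`, `discover_prompts_in_excluding`); the builtin list below is illustrative:

```rust
use std::collections::HashSet;

use codex_core::custom_prompts::{default_prompts_dir, discover_prompts_in_excluding};
use codex_protocol::custom_prompts::CustomPrompt;

// Discover prompt files under $CODEX_HOME/prompts, skipping names that would
// collide with builtin slash commands.
async fn load_custom_prompts() -> Vec<CustomPrompt> {
    let builtins: HashSet<String> = ["init", "status", "approvals", "model"]
        .into_iter()
        .map(str::to_string)
        .collect();
    match default_prompts_dir() {
        Some(dir) => discover_prompts_in_excluding(&dir, &builtins).await,
        None => Vec::new(),
    }
}
```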
2025-08-29 02:16:39 +00:00
HaxagonusD
bbcfd63aba UI: Make slash commands bold in welcome message (#2762)
## What
Make slash commands (/init, /status, /approvals, /model) bold and white
in the welcome message for better visibility.
<img width="990" height="286" alt="image"
src="https://github.com/user-attachments/assets/13f90e96-b84a-4659-aab4-576d84a31af7"
/>


## Why
The current welcome message displays all text in a dimmed style, making
the slash commands less prominent. Users need to quickly identify
available commands when starting Codex.

## How
Modified `tui/src/history_cell.rs` in the `new_session_info` function
to:
- Split each command line into separate spans
- Apply bold white styling to command text (`/init`, `/status`, etc.)
- Keep descriptions dimmed for visual contrast
- Maintain existing layout and spacing
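
A minimal sketch of the span construction described above (ratatui types; the helper and exact styles are illustrative, not the actual `history_cell.rs` code):

```rust
use ratatui::style::{Color, Modifier, Style, Stylize};
use ratatui::text::{Line, Span};

// The command itself is bold white while the description stays dimmed,
// preserving the visual contrast described above.
fn command_line(command: &str, description: &str) -> Line<'static> {
    Line::from(vec![
        Span::styled(
            command.to_string(),
            Style::default().fg(Color::White).add_modifier(Modifier::BOLD),
        ),
        Span::from(format!("  {description}")).dim(),
    ])
}
```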

## Test plan
- [ ] Run the TUI and verify commands appear bold in the welcome message
- [ ] Ensure descriptions remain dimmed for readability
- [ ] Confirm all existing tests pass
2025-08-28 18:12:41 -07:00
Eric Traut
6209d49520 Changed OAuth success screen to use the string "Codex" rather than "Codex CLI" (#2737) 2025-08-28 21:21:10 +00:00
Gabriel Peal
c3a8b96a60 Add a VS Code Extension issue template (#2853)
Template mostly copied from the bug template
2025-08-28 16:56:52 -04:00
Ahmed Ibrahim
c9ca63dc1e burst paste edge cases (#2683)
This PR fixes two edge cases in managing burst paste (mainly on
PowerShell).
Bugs:
- A key event was needed after paste before the pasted items rendered.

> ChatComposer::flush_paste_burst_if_due() flushes on timeout. Called:
> - Pre-render in App on TuiEvent::Draw.
> - Via a delayed frame:
>   BottomPane::request_redraw_in(ChatComposer::recommended_paste_flush_delay()).

- The first two key events of a burst were parsed separately before
burst-paste buffering kicked in (see the sketch below).

> When the threshold is crossed, the preceding burst chars are pulled out of the
> textarea and prepended to paste_burst_buffer, then buffering continues.

- Integrates with #2567 to bring image pasting to Windows.
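
A minimal sketch of the timing heuristic involved (the thresholds mirror the constants in the tui diff below; the struct is illustrative, not the actual `PasteBurst` implementation, and it omits the retro-capture of chars already inserted into the textarea):

```rust
use std::time::{Duration, Instant};

const PASTE_BURST_MIN_CHARS: u16 = 3;
const PASTE_BURST_CHAR_INTERVAL: Duration = Duration::from_millis(8);

/// Characters arriving faster than the interval count toward a burst; once the
/// threshold is crossed they are buffered instead of inserted, and the buffer
/// is flushed as a single paste after input goes quiet.
#[derive(Default)]
struct BurstSketch {
    last_char: Option<Instant>,
    run_len: u16,
    buffer: String,
}

impl BurstSketch {
    /// Returns true when the char was buffered (caller should not insert it).
    fn on_char(&mut self, ch: char, now: Instant) -> bool {
        let fast = self
            .last_char
            .is_some_and(|t| now.duration_since(t) <= PASTE_BURST_CHAR_INTERVAL);
        self.last_char = Some(now);
        self.run_len = if fast { self.run_len + 1 } else { 1 };
        if self.run_len >= PASTE_BURST_MIN_CHARS {
            self.buffer.push(ch);
            return true;
        }
        false
    }

    /// Flush the buffered burst once no new char has arrived within the window.
    fn flush_if_due(&mut self, now: Instant) -> Option<String> {
        let quiet = self
            .last_char
            .is_some_and(|t| now.duration_since(t) > PASTE_BURST_CHAR_INTERVAL);
        if quiet && !self.buffer.is_empty() {
            self.run_len = 0;
            return Some(std::mem::take(&mut self.buffer));
        }
        None
    }
}
```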
2025-08-28 12:54:12 -07:00
Ahmed Ibrahim
ed06f90fb3 Race condition in compact (#2746)
This fixes the flakiness in
`summarize_context_three_requests_and_instructions`: history must be
trimmed before the task-complete event is sent.
2025-08-28 12:53:00 -07:00
Michael Bolin
f09170b574 chore: print stderr from MCP server to test output using eprintln! (#2849)
Related to https://github.com/openai/codex/pull/2848, I don't see the
stderr from `codex mcp` colocated with the other stderr from
`test_shell_command_approval_triggers_elicitation()` when it fails even
though we have `RUST_LOG=debug` set when we spawn `codex mcp`:


1e9e703b96/codex-rs/mcp-server/tests/common/mcp_process.rs (L65)

Let's try this new logic which should be more explicit.
2025-08-28 12:43:13 -07:00
Michael Bolin
1e9e703b96 chore: try to make it easier to debug the flakiness of test_shell_command_approval_triggers_elicitation (#2848)
`test_shell_command_approval_triggers_elicitation()` is one of a number
of integration tests that we have observed to be flaky on GitHub CI, so
this PR tries to reduce the flakiness _and_ to provide us with more
information when it flakes. Specifically:

- Changed the command that we use to trigger the elicitation from `git
init` to `python3 -c 'import pathlib; pathlib.Path(r"{}").touch()'`
because running `git` seems more likely to invite variance.
- Increased the timeout to wait for the task response from 10s to 20s.
- Added more logging.
2025-08-28 12:33:33 -07:00
Michael Bolin
74d2741729 chore: require uninlined_format_args from clippy (#2845)
- added `uninlined_format_args` to `[workspace.lints.clippy]` in the
`Cargo.toml` for the workspace
- ran `cargo clippy --tests --fix`
- ran `just fmt`
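
A minimal before/after example of what the lint enforces, using a format string from this PR's diff:

```rust
fn describe(est_tokens: u64) -> String {
    // Before (now denied by clippy::uninlined_format_args):
    //     format!("{} tokens truncated…", est_tokens)
    // After, with the argument inlined into the format string:
    format!("{est_tokens} tokens truncated…")
}
```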
2025-08-28 11:25:23 -07:00
Jeremy Rose
e5611aab07 disallow some slash commands while a task is running (#2792)
/new, /init, /models, /approvals, etc. don't work correctly during a
turn, so disable them.
2025-08-28 10:15:59 -07:00
dedrisian-oai
4e9ad23864 Add "View Image" tool (#2723)
Adds a "View Image" tool so Codex can find and see images by itself:

<img width="1772" height="420" alt="Screenshot 2025-08-26 at 10 40
04 AM"
src="https://github.com/user-attachments/assets/7a459c7b-0b86-4125-82d9-05fbb35ade03"
/>
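
A minimal sketch of what an invocation of the tool looks like, using the `path` parameter from the tool schema added in this PR (the file name is illustrative):

```rust
use serde_json::json;

// Function-call arguments the model sends for view_image; the handler
// resolves the path against the turn's cwd and attaches the image to the
// conversation for that turn.
fn example_view_image_args() -> serde_json::Value {
    json!({ "path": "docs/screenshot.png" })
}
```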
2025-08-27 17:41:23 -07:00
45 changed files with 1528 additions and 339 deletions

View File

@@ -0,0 +1,50 @@
name: 🧑‍💻 VS Code Extension
description: Report an issue with the VS Code extension
labels:
  - extension
  - needs triage
body:
  - type: markdown
    attributes:
      value: |
        Before submitting a new issue, please search for existing issues to see if your issue has already been reported.
        If it has, please add a 👍 reaction (no need to leave a comment) to the existing issue instead of creating a new one.
  - type: input
    id: version
    attributes:
      label: What version of the VS Code extension are you using?
  - type: input
    id: ide
    attributes:
      label: Which IDE are you using?
      description: Like `VS Code`, `Cursor`, `Windsurf`, etc.
  - type: input
    id: platform
    attributes:
      label: What platform is your computer?
      description: |
        For MacOS and Linux: copy the output of `uname -mprs`
        For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
  - type: textarea
    id: steps
    attributes:
      label: What steps can reproduce the bug?
      description: Explain the bug and provide a code snippet that can reproduce it.
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: What is the expected behavior?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: actual
    attributes:
      label: What do you see instead?
      description: If possible, please provide text instead of a screenshot.
  - type: textarea
    id: notes
    attributes:
      label: Additional information
      description: Is there anything else you think we should know?

View File

@@ -34,6 +34,7 @@ rust = {}
[workspace.lints.clippy]
expect_used = "deny"
uninlined_format_args = "deny"
unwrap_used = "deny"
[profile.release]

View File

@@ -129,7 +129,9 @@ pub(crate) async fn stream_chat_completions(
"content": output,
}));
}
ResponseItem::Reasoning { .. } | ResponseItem::Other => {
ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::Other => {
// Omit these items from the conversation history.
continue;
}
@@ -623,11 +625,8 @@ where
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { .. }))) => {
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin {
call_id: String::new(),
query: None,
})));
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id }))) => {
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id })));
}
}
}

View File

@@ -160,21 +160,7 @@ impl ModelClient {
let store = prompt.store && auth_mode != Some(AuthMode::ChatGPT);
let full_instructions = prompt.get_full_instructions(&self.config.model_family);
let mut tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
// ChatGPT backend expects the preview name for web search.
if auth_mode == Some(AuthMode::ChatGPT) {
for tool in &mut tools_json {
if let Some(map) = tool.as_object_mut()
&& map.get("type").and_then(|v| v.as_str()) == Some("web_search")
{
map.insert(
"type".to_string(),
serde_json::Value::String("web_search_preview".to_string()),
);
}
}
}
let tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
let reasoning = create_reasoning_param_for_request(
&self.config.model_family,
self.effort,
@@ -607,11 +593,9 @@ async fn process_sse<S>(
| "response.custom_tool_call_input.delta"
| "response.custom_tool_call_input.done" // also emitted as response.output_item.done
| "response.in_progress"
| "response.output_item.added"
| "response.output_text.done" => {
if event.kind == "response.output_item.added"
&& let Some(item) = event.item.as_ref()
{
| "response.output_text.done" => {}
"response.output_item.added" => {
if let Some(item) = event.item.as_ref() {
// Detect web_search_call begin and forward a synthetic event upstream.
if let Some(ty) = item.get("type").and_then(|v| v.as_str())
&& ty == "web_search_call"
@@ -621,7 +605,7 @@ async fn process_sse<S>(
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let ev = ResponseEvent::WebSearchCallBegin { call_id, query: None };
let ev = ResponseEvent::WebSearchCallBegin { call_id };
if tx_event.send(Ok(ev)).await.is_err() {
return;
}

View File

@@ -95,7 +95,6 @@ pub enum ResponseEvent {
ReasoningSummaryPartAdded,
WebSearchCallBegin {
call_id: String,
query: Option<String>,
},
}

View File

@@ -89,6 +89,7 @@ use crate::protocol::ExecCommandBeginEvent;
use crate::protocol::ExecCommandEndEvent;
use crate::protocol::FileChange;
use crate::protocol::InputItem;
use crate::protocol::ListCustomPromptsResponseEvent;
use crate::protocol::Op;
use crate::protocol::PatchApplyBeginEvent;
use crate::protocol::PatchApplyEndEvent;
@@ -100,6 +101,7 @@ use crate::protocol::Submission;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TurnDiffEvent;
use crate::protocol::WebSearchBeginEvent;
use crate::protocol::WebSearchEndEvent;
use crate::rollout::RolloutRecorder;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
@@ -110,6 +112,7 @@ use crate::user_notification::UserNotification;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::custom_prompts::CustomPrompt;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::LocalShellAction;
@@ -118,6 +121,7 @@ use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::models::ShellToolCallParams;
use codex_protocol::models::WebSearchAction;
// A convenience extension trait for acquiring mutex locks where poisoning is
// unrecoverable and should abort the program. This avoids scattered `.unwrap()`
@@ -518,6 +522,7 @@ impl Session {
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
}),
user_instructions,
base_instructions,
@@ -1108,6 +1113,7 @@ async fn submission_loop(
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
});
let new_turn_context = TurnContext {
@@ -1193,6 +1199,7 @@ async fn submission_loop(
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config
.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
}),
user_instructions: turn_context.user_instructions.clone(),
base_instructions: turn_context.base_instructions.clone(),
@@ -1283,6 +1290,27 @@ async fn submission_loop(
warn!("failed to send McpListToolsResponse event: {e}");
}
}
Op::ListCustomPrompts => {
let tx_event = sess.tx_event.clone();
let sub_id = sub.id.clone();
let custom_prompts: Vec<CustomPrompt> =
if let Some(dir) = crate::custom_prompts::default_prompts_dir() {
crate::custom_prompts::discover_prompts_in(&dir).await
} else {
Vec::new()
};
let event = Event {
id: sub_id,
msg: EventMsg::ListCustomPromptsResponse(ListCustomPromptsResponseEvent {
custom_prompts,
}),
};
if let Err(e) = tx_event.send(event).await {
warn!("failed to send ListCustomPromptsResponse event: {e}");
}
}
Op::Compact => {
// Create a summarization request as user input
const SUMMARIZATION_PROMPT: &str = include_str!("prompt_for_compact_command.md");
@@ -1743,13 +1771,12 @@ async fn try_run_turn(
.await?;
output.push(ProcessedResponseItem { item, response });
}
ResponseEvent::WebSearchCallBegin { call_id, query } => {
let q = query.unwrap_or_else(|| "Searching Web...".to_string());
ResponseEvent::WebSearchCallBegin { call_id } => {
let _ = sess
.tx_event
.send(Event {
id: sub_id.to_string(),
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id, query: q }),
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id }),
})
.await;
}
@@ -1881,6 +1908,12 @@ async fn run_compact_task(
}
sess.remove_task(&sub_id);
{
let mut state = sess.state.lock_unchecked();
state.history.keep_last_messages(1);
}
let event = Event {
id: sub_id.clone(),
msg: EventMsg::AgentMessage(AgentMessageEvent {
@@ -1895,9 +1928,6 @@ async fn run_compact_task(
}),
};
sess.send_event(event).await;
let mut state = sess.state.lock_unchecked();
state.history.keep_last_messages(1);
}
async fn handle_response_item(
@@ -2045,6 +2075,17 @@ async fn handle_response_item(
debug!("unexpected CustomToolCallOutput from stream");
None
}
ResponseItem::WebSearchCall { id, action, .. } => {
if let WebSearchAction::Search { query } = action {
let call_id = id.unwrap_or_else(|| "".to_string());
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::WebSearchEnd(WebSearchEndEvent { call_id, query }),
};
sess.tx_event.send(event).await.ok();
}
None
}
ResponseItem::Other => None,
};
Ok(output)
@@ -2077,6 +2118,36 @@ async fn handle_function_call(
)
.await
}
"view_image" => {
#[derive(serde::Deserialize)]
struct SeeImageArgs {
path: String,
}
let args = match serde_json::from_str::<SeeImageArgs>(&arguments) {
Ok(a) => a,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: Some(false),
},
};
}
};
let abs = turn_context.resolve_path(Some(args.path));
let output = match sess.inject_input(vec![InputItem::LocalImage { path: abs }]) {
Ok(()) => FunctionCallOutputPayload {
content: "attached local image path".to_string(),
success: Some(true),
},
Err(_) => FunctionCallOutputPayload {
content: "unable to attach image (no active task)".to_string(),
success: Some(false),
},
};
ResponseInputItem::FunctionCallOutput { call_id, output }
}
"apply_patch" => {
let args = match serde_json::from_str::<ApplyPatchToolArgs>(&arguments) {
Ok(a) => a,

View File

@@ -178,6 +178,13 @@ pub struct Config {
pub preferred_auth_method: AuthMode,
pub use_experimental_streamable_shell_tool: bool,
/// Include the `view_image` tool that lets the agent attach a local image path to context.
pub include_view_image_tool: bool,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
pub disable_paste_burst: bool,
}
impl Config {
@@ -485,6 +492,11 @@ pub struct ConfigToml {
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
pub disable_paste_burst: Option<bool>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
@@ -494,9 +506,12 @@ pub struct ProjectConfig {
#[derive(Deserialize, Debug, Clone, Default)]
pub struct ToolsToml {
// Renamed from `web_search_request`; keep alias for backwards compatibility.
#[serde(default, alias = "web_search_request")]
pub web_search: Option<bool>,
/// Enable the `view_image` tool that lets the agent attach local images.
#[serde(default)]
pub view_image: Option<bool>,
}
impl ConfigToml {
@@ -586,6 +601,7 @@ pub struct ConfigOverrides {
pub base_instructions: Option<String>,
pub include_plan_tool: Option<bool>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub disable_response_storage: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
@@ -613,6 +629,7 @@ impl Config {
base_instructions,
include_plan_tool,
include_apply_patch_tool,
include_view_image_tool,
disable_response_storage,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
@@ -654,7 +671,7 @@ impl Config {
})?
.clone();
let shell_environment_policy = cfg.shell_environment_policy.clone().into();
let shell_environment_policy = cfg.shell_environment_policy.into();
let resolved_cwd = {
use std::env;
@@ -675,12 +692,16 @@ impl Config {
}
};
let history = cfg.history.clone().unwrap_or_default();
let history = cfg.history.unwrap_or_default();
let tools_web_search_request = override_tools_web_search_request
.or(cfg.tools.as_ref().and_then(|t| t.web_search))
.unwrap_or(false);
let include_view_image_tool = include_view_image_tool
.or(cfg.tools.as_ref().and_then(|t| t.view_image))
.unwrap_or(true);
let model = model
.or(config_profile.model)
.or(cfg.model)
@@ -753,7 +774,7 @@ impl Config {
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
tui: cfg.tui.clone().unwrap_or_default(),
tui: cfg.tui.unwrap_or_default(),
codex_linux_sandbox_exe,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
@@ -772,7 +793,7 @@ impl Config {
model_verbosity: config_profile.model_verbosity.or(cfg.model_verbosity),
chatgpt_base_url: config_profile
.chatgpt_base_url
.or(cfg.chatgpt_base_url.clone())
.or(cfg.chatgpt_base_url)
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
experimental_resume,
@@ -784,6 +805,8 @@ impl Config {
use_experimental_streamable_shell_tool: cfg
.experimental_use_exec_command_tool
.unwrap_or(false),
include_view_image_tool,
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
};
Ok(config)
}
@@ -1152,6 +1175,8 @@ disable_response_storage = true
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
},
o3_profile_config
);
@@ -1208,6 +1233,8 @@ disable_response_storage = true
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
};
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
@@ -1279,6 +1306,8 @@ disable_response_storage = true
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
};
assert_eq!(expected_zdr_profile_config, zdr_profile_config);
@@ -1300,9 +1329,9 @@ disable_response_storage = true
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{}'", raw_path)
format!("'{raw_path}'")
} else {
format!("\"{}\"", raw_path)
format!("\"{raw_path}\"")
};
let expected = format!(
r#"[projects.{path_str}]
@@ -1323,9 +1352,9 @@ trust_level = "trusted"
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{}'", raw_path)
format!("'{raw_path}'")
} else {
format!("\"{}\"", raw_path)
format!("\"{raw_path}\"")
};
// Use a quoted key so backslashes don't require escaping on Windows
let initial = format!(

View File

@@ -72,7 +72,7 @@ fn is_api_message(message: &ResponseItem) -> bool {
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. } => true,
ResponseItem::Other => false,
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => false,
}
}

View File

@@ -0,0 +1,127 @@
use codex_protocol::custom_prompts::CustomPrompt;
use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use tokio::fs;
/// Return the default prompts directory: `$CODEX_HOME/prompts`.
/// If `CODEX_HOME` cannot be resolved, returns `None`.
pub fn default_prompts_dir() -> Option<PathBuf> {
crate::config::find_codex_home()
.ok()
.map(|home| home.join("prompts"))
}
/// Discover prompt files in the given directory, returning entries sorted by name.
/// Non-files are ignored. If the directory does not exist or cannot be read, returns empty.
pub async fn discover_prompts_in(dir: &Path) -> Vec<CustomPrompt> {
discover_prompts_in_excluding(dir, &HashSet::new()).await
}
/// Discover prompt files in the given directory, excluding any with names in `exclude`.
/// Returns entries sorted by name. Non-files are ignored. Missing/unreadable dir yields empty.
pub async fn discover_prompts_in_excluding(
dir: &Path,
exclude: &HashSet<String>,
) -> Vec<CustomPrompt> {
let mut out: Vec<CustomPrompt> = Vec::new();
let mut entries = match fs::read_dir(dir).await {
Ok(entries) => entries,
Err(_) => return out,
};
while let Ok(Some(entry)) = entries.next_entry().await {
let path = entry.path();
let is_file = entry
.file_type()
.await
.map(|ft| ft.is_file())
.unwrap_or(false);
if !is_file {
continue;
}
// Only include Markdown files with a .md extension.
let is_md = path
.extension()
.and_then(|s| s.to_str())
.map(|ext| ext.eq_ignore_ascii_case("md"))
.unwrap_or(false);
if !is_md {
continue;
}
let Some(name) = path
.file_stem()
.and_then(|s| s.to_str())
.map(|s| s.to_string())
else {
continue;
};
if exclude.contains(&name) {
continue;
}
let content = match fs::read_to_string(&path).await {
Ok(s) => s,
Err(_) => continue,
};
out.push(CustomPrompt {
name,
path,
content,
});
}
out.sort_by(|a, b| a.name.cmp(&b.name));
out
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
#[tokio::test]
async fn empty_when_dir_missing() {
let tmp = tempdir().expect("create TempDir");
let missing = tmp.path().join("nope");
let found = discover_prompts_in(&missing).await;
assert!(found.is_empty());
}
#[tokio::test]
async fn discovers_and_sorts_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
fs::write(dir.join("b.md"), b"b").unwrap();
fs::write(dir.join("a.md"), b"a").unwrap();
fs::create_dir(dir.join("subdir")).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["a", "b"]);
}
#[tokio::test]
async fn excludes_builtins() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
fs::write(dir.join("init.md"), b"ignored").unwrap();
fs::write(dir.join("foo.md"), b"ok").unwrap();
let mut exclude = HashSet::new();
exclude.insert("init".to_string());
let found = discover_prompts_in_excluding(dir, &exclude).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["foo"]);
}
#[tokio::test]
async fn skips_non_utf8_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
// Valid UTF-8 file
fs::write(dir.join("good.md"), b"hello").unwrap();
// Invalid UTF-8 content in .md file (e.g., lone 0xFF byte)
fs::write(dir.join("bad.md"), vec![0xFF, 0xFE, b'\n']).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["good"]);
}
}

View File

@@ -85,23 +85,21 @@ impl EnvironmentContext {
}
if let Some(approval_policy) = self.approval_policy {
lines.push(format!(
" <approval_policy>{}</approval_policy>",
approval_policy
" <approval_policy>{approval_policy}</approval_policy>"
));
}
if let Some(sandbox_mode) = self.sandbox_mode {
lines.push(format!(" <sandbox_mode>{}</sandbox_mode>", sandbox_mode));
lines.push(format!(" <sandbox_mode>{sandbox_mode}</sandbox_mode>"));
}
if let Some(network_access) = self.network_access {
lines.push(format!(
" <network_access>{}</network_access>",
network_access
" <network_access>{network_access}</network_access>"
));
}
if let Some(shell) = self.shell
&& let Some(shell_name) = shell.name()
{
lines.push(format!(" <shell>{}</shell>", shell_name));
lines.push(format!(" <shell>{shell_name}</shell>"));
}
lines.push(ENVIRONMENT_CONTEXT_END.to_string());
lines.join("\n")

View File

@@ -170,15 +170,15 @@ fn format_reset_duration(total_secs: u64) -> String {
let mut parts: Vec<String> = Vec::new();
if days > 0 {
let unit = if days == 1 { "day" } else { "days" };
parts.push(format!("{} {}", days, unit));
parts.push(format!("{days} {unit}"));
}
if hours > 0 {
let unit = if hours == 1 { "hour" } else { "hours" };
parts.push(format!("{} {}", hours, unit));
parts.push(format!("{hours} {unit}"));
}
if minutes > 0 {
let unit = if minutes == 1 { "minute" } else { "minutes" };
parts.push(format!("{} {}", minutes, unit));
parts.push(format!("{minutes} {unit}"));
}
if parts.is_empty() {

View File

@@ -359,10 +359,7 @@ fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
// Cannot keep any content; still return a full marker (never truncated).
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
// Helper to truncate a string to a given byte length on a char boundary.
@@ -406,16 +403,13 @@ fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
// Refine marker length and budgets until stable. Marker is never truncated.
let mut guess_tokens = est_tokens; // worst-case: everything truncated
for _ in 0..4 {
let marker = format!("{} tokens truncated…", guess_tokens);
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
// No room for any content within the cap; return a full, untruncated marker
// that reflects the entire truncated content.
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
@@ -441,14 +435,11 @@ fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
}
// Fallback: use last guess to build output.
let marker = format!("{} tokens truncated…", guess_tokens);
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;

View File

@@ -17,6 +17,7 @@ pub mod config;
pub mod config_profile;
pub mod config_types;
mod conversation_history;
pub mod custom_prompts;
mod environment_context;
pub mod error;
pub mod exec;

View File

@@ -47,7 +47,9 @@ pub(crate) enum OpenAiTool {
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
#[serde(rename = "web_search")]
// TODO: Understand why we get an error on web_search although the API docs say it's supported.
// https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#:~:text=%7B%20type%3A%20%22web_search%22%20%7D%2C
#[serde(rename = "web_search_preview")]
WebSearch {},
#[serde(rename = "custom")]
Freeform(FreeformTool),
@@ -67,6 +69,7 @@ pub(crate) struct ToolsConfig {
pub plan_tool: bool,
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub web_search_request: bool,
pub include_view_image_tool: bool,
}
pub(crate) struct ToolsConfigParams<'a> {
@@ -77,6 +80,7 @@ pub(crate) struct ToolsConfigParams<'a> {
pub(crate) include_apply_patch_tool: bool,
pub(crate) include_web_search_request: bool,
pub(crate) use_streamable_shell_tool: bool,
pub(crate) include_view_image_tool: bool,
}
impl ToolsConfig {
@@ -89,6 +93,7 @@ impl ToolsConfig {
include_apply_patch_tool,
include_web_search_request,
use_streamable_shell_tool,
include_view_image_tool,
} = params;
let mut shell_type = if *use_streamable_shell_tool {
ConfigShellToolType::StreamableShell
@@ -120,6 +125,7 @@ impl ToolsConfig {
plan_tool: *include_plan_tool,
apply_patch_tool_type,
web_search_request: *include_web_search_request,
include_view_image_tool: *include_view_image_tool,
}
}
}
@@ -292,6 +298,30 @@ The shell tool is used to execute shell commands.
},
})
}
fn create_view_image_tool() -> OpenAiTool {
// Support only local filesystem path.
let mut properties = BTreeMap::new();
properties.insert(
"path".to_string(),
JsonSchema::String {
description: Some("Local filesystem path to an image file".to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "view_image".to_string(),
description:
"Attach a local image (by filesystem path) to the conversation context for this turn."
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["path".to_string()]),
additional_properties: Some(false),
},
})
}
/// TODO(dylan): deprecate once we get rid of json tool
#[derive(Serialize, Deserialize)]
pub(crate) struct ApplyPatchToolArgs {
@@ -307,12 +337,12 @@ pub fn create_tools_json_for_responses_api(
let mut tools_json = Vec::new();
for tool in tools {
tools_json.push(serde_json::to_value(tool)?);
let json = serde_json::to_value(tool)?;
tools_json.push(json);
}
Ok(tools_json)
}
/// Returns JSON values that are compatible with Function Calling in the
/// Chat Completions API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=chat
@@ -541,6 +571,11 @@ pub(crate) fn get_openai_tools(
tools.push(OpenAiTool::WebSearch {});
}
// Include the view_image tool so the agent can attach images to context.
if config.include_view_image_tool {
tools.push(create_view_image_tool());
}
if let Some(mcp_tools) = mcp_tools {
// Ensure deterministic ordering to maximize prompt cache hits.
// HashMap iteration order is non-deterministic, so sort by fully-qualified tool name.
@@ -604,10 +639,14 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["local_shell", "update_plan", "web_search"]);
assert_eq_tool_names(
&tools,
&["local_shell", "update_plan", "web_search", "view_image"],
);
}
#[test]
@@ -621,10 +660,14 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["shell", "update_plan", "web_search"]);
assert_eq_tool_names(
&tools,
&["shell", "update_plan", "web_search", "view_image"],
);
}
#[test]
@@ -638,6 +681,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
&config,
@@ -660,8 +704,8 @@ mod tests {
"number_property": { "type": "number" },
},
"required": [
"string_property".to_string(),
"number_property".to_string()
"string_property",
"number_property",
],
"additionalProperties": Some(false),
},
@@ -679,11 +723,16 @@ mod tests {
assert_eq_tool_names(
&tools,
&["shell", "web_search", "test_server/do_something_cool"],
&[
"shell",
"web_search",
"view_image",
"test_server/do_something_cool",
],
);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "test_server/do_something_cool".to_string(),
parameters: JsonSchema::Object {
@@ -737,6 +786,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
// Intentionally construct a map with keys that would sort alphabetically.
@@ -794,6 +844,7 @@ mod tests {
&tools,
&[
"shell",
"view_image",
"test_server/cool",
"test_server/do",
"test_server/something",
@@ -812,6 +863,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -837,10 +889,13 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/search"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/search"],
);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/search".to_string(),
parameters: JsonSchema::Object {
@@ -870,6 +925,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -893,9 +949,12 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/paginate"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/paginate"],
);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/paginate".to_string(),
parameters: JsonSchema::Object {
@@ -923,6 +982,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -946,9 +1006,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/tags"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/tags"]);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/tags".to_string(),
parameters: JsonSchema::Object {
@@ -979,6 +1039,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -1002,9 +1063,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/value"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/value"]);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/value".to_string(),
parameters: JsonSchema::Object {

View File

@@ -135,7 +135,7 @@ impl RolloutRecorder {
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => filtered.push(item.clone()),
ResponseItem::Other => {
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {
// These should never be serialized.
continue;
}
@@ -199,7 +199,7 @@ impl RolloutRecorder {
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => items.push(item),
ResponseItem::Other => {}
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
},
Err(e) => {
warn!("failed to parse item: {v:?}, error: {e}");
@@ -326,7 +326,7 @@ async fn rollout_writer(
| ResponseItem::Reasoning { .. } => {
writer.write_line(&item).await?;
}
ResponseItem::Other => {}
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
}
}
}

View File

@@ -418,7 +418,7 @@ async fn prefers_chatgpt_token_when_config_prefers_chatgpt() {
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_login::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {}", e),
Err(e) => panic!("Failed to load CodexAuth: {e}"),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {
@@ -499,7 +499,7 @@ async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_login::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {}", e),
Err(e) => panic!("Failed to load CodexAuth: {e}"),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {

View File

@@ -191,7 +191,7 @@ async fn prompt_tools_are_consistent_across_requests() {
let expected_instructions: &str = include_str!("../../prompt.md");
// our internal implementation is responsible for keeping tools in sync
// with the OpenAI schema, so we just verify the tool presence here
let expected_tools_names: &[&str] = &["shell", "update_plan", "apply_patch"];
let expected_tools_names: &[&str] = &["shell", "update_plan", "apply_patch", "view_image"];
let body0 = requests[0].body_json::<serde_json::Value>().unwrap();
assert_eq!(
body0["instructions"],
@@ -280,7 +280,7 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
{}</environment_context>"#,
cwd.path().to_string_lossy(),
match shell.name() {
Some(name) => format!(" <shell>{}</shell>\n", name),
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
}
);

View File

@@ -25,6 +25,7 @@ use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_core::protocol::WebSearchEndEvent;
use owo_colors::OwoColorize;
use owo_colors::Style;
use shlex::try_join;
@@ -362,8 +363,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
}
}
}
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _, query }) => {
ts_println!(self, "🌐 {query}");
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _ }) => {}
EventMsg::WebSearchEnd(WebSearchEndEvent { call_id: _, query }) => {
ts_println!(self, "🌐 Searched: {query}");
}
EventMsg::PatchApplyBegin(PatchApplyBeginEvent {
call_id,
@@ -533,6 +535,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
EventMsg::McpListToolsResponse(_) => {
// Currently ignored in exec output.
}
EventMsg::ListCustomPromptsResponse(_) => {
// Currently ignored in exec output.
}
EventMsg::TurnAborted(abort_reason) => match abort_reason.reason {
TurnAbortReason::Interrupted => {
ts_println!(self, "task interrupted");

View File

@@ -148,6 +148,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
base_instructions: None,
include_plan_tool: None,
include_apply_patch_tool: None,
include_view_image_tool: None,
disable_response_storage: oss.then_some(true),
show_raw_agent_reasoning: oss.then_some(true),
tools_web_search_request: None,

View File

@@ -28,7 +28,7 @@ impl Respond for SeqResponder {
Some(body) => wiremock::ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(body, &format!("request_{}", call_num)),
load_sse_fixture_with_id_from_str(body, &format!("request_{call_num}")),
"text/event-stream",
),
None => panic!("no response for {call_num}"),
@@ -63,7 +63,7 @@ pub(crate) async fn run_e2e_exec_test(cwd: &Path, response_streams: Vec<String>)
.current_dir(cwd.clone())
.env("CODEX_HOME", cwd.clone())
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{}/v1", uri))
.env("OPENAI_BASE_URL", format!("{uri}/v1"))
.arg("--skip-git-repo-check")
.arg("-s")
.arg("danger-full-access")

View File

@@ -2,7 +2,7 @@
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Sign into Codex CLI</title>
<title>Sign into Codex</title>
<link rel="icon" href='data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" width="32" height="32" fill="none" viewBox="0 0 32 32"%3E%3Cpath stroke="%23000" stroke-linecap="round" stroke-width="2.484" d="M22.356 19.797H17.17M9.662 12.29l1.979 3.576a.511.511 0 0 1-.005.504l-1.974 3.409M30.758 16c0 8.15-6.607 14.758-14.758 14.758-8.15 0-14.758-6.607-14.758-14.758C1.242 7.85 7.85 1.242 16 1.242c8.15 0 14.758 6.608 14.758 14.758Z"/%3E%3C/svg%3E' type="image/svg+xml">
<style>
.container {
@@ -135,7 +135,7 @@
<div class="logo">
<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32" fill="none" viewBox="0 0 32 32"><path stroke="#000" stroke-linecap="round" stroke-width="2.484" d="M22.356 19.797H17.17M9.662 12.29l1.979 3.576a.511.511 0 0 1-.005.504l-1.974 3.409M30.758 16c0 8.15-6.607 14.758-14.758 14.758-8.15 0-14.758-6.607-14.758-14.758C1.242 7.85 7.85 1.242 16 1.242c8.15 0 14.758 6.608 14.758 14.758Z"></path></svg>
</div>
<div class="title">Signed in to Codex CLI</div>
<div class="title">Signed in to Codex</div>
</div>
<div class="close-box" style="display: none;">
<div class="setup-description">You may now close this page</div>

View File

@@ -798,6 +798,7 @@ fn derive_config_from_params(
base_instructions,
include_plan_tool,
include_apply_patch_tool,
include_view_image_tool: None,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,

View File

@@ -161,6 +161,7 @@ impl CodexToolCallParam {
base_instructions,
include_plan_tool,
include_apply_patch_tool: None,
include_view_image_tool: None,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,

View File

@@ -264,6 +264,7 @@ async fn run_codex_tool_session_inner(
| EventMsg::McpToolCallBegin(_)
| EventMsg::McpToolCallEnd(_)
| EventMsg::McpListToolsResponse(_)
| EventMsg::ListCustomPromptsResponse(_)
| EventMsg::ExecCommandBegin(_)
| EventMsg::ExecCommandOutputDelta(_)
| EventMsg::ExecCommandEnd(_)
@@ -273,6 +274,7 @@ async fn run_codex_tool_session_inner(
| EventMsg::PatchApplyEnd(_)
| EventMsg::TurnDiff(_)
| EventMsg::WebSearchBegin(_)
| EventMsg::WebSearchEnd(_)
| EventMsg::GetHistoryEntryResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::TurnAborted(_)

View File

@@ -63,7 +63,6 @@ pub async fn run_main(
// Task: read from stdin, push to `incoming_tx`.
let stdin_reader_handle = tokio::spawn({
let incoming_tx = incoming_tx.clone();
async move {
let stdin = io::stdin();
let reader = BufReader::new(stdin);

View File

@@ -123,7 +123,7 @@ impl OutgoingMessageSender {
}
pub(crate) async fn send_server_notification(&self, notification: ServerNotification) {
let method = format!("codex/event/{}", notification);
let method = format!("codex/event/{notification}");
let params = match serde_json::to_value(&notification) {
Ok(serde_json::Value::Object(mut map)) => map.remove("data"),
_ => None,

View File

@@ -61,6 +61,7 @@ impl McpProcess {
cmd.stdin(Stdio::piped());
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
cmd.env("CODEX_HOME", codex_home);
cmd.env("RUST_LOG", "debug");
@@ -77,6 +78,17 @@ impl McpProcess {
.take()
.ok_or_else(|| anyhow::format_err!("mcp should have stdout fd"))?;
let stdout = BufReader::new(stdout);
// Forward child's stderr to our stderr so failures are visible even
// when stdout/stderr are captured by the test harness.
if let Some(stderr) = process.stderr.take() {
let mut stderr_reader = BufReader::new(stderr).lines();
tokio::spawn(async move {
while let Ok(Some(line)) = stderr_reader.next_line().await {
eprintln!("[mcp stderr] {line}");
}
});
}
Ok(Self {
next_request_id: AtomicI64::new(0),
process,
@@ -283,6 +295,7 @@ impl McpProcess {
}
async fn send_jsonrpc_message(&mut self, message: JSONRPCMessage) -> anyhow::Result<()> {
eprintln!("writing message to stdin: {message:?}");
let payload = serde_json::to_string(&message)?;
self.stdin.write_all(payload.as_bytes()).await?;
self.stdin.write_all(b"\n").await?;
@@ -294,13 +307,15 @@ impl McpProcess {
let mut line = String::new();
self.stdout.read_line(&mut line).await?;
let message = serde_json::from_str::<JSONRPCMessage>(&line)?;
eprintln!("read message from stdout: {message:?}");
Ok(message)
}
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<JSONRPCRequest> {
eprintln!("in read_stream_until_request_message()");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(_) => {
@@ -323,10 +338,10 @@ impl McpProcess {
&mut self,
request_id: RequestId,
) -> anyhow::Result<JSONRPCResponse> {
eprintln!("in read_stream_until_response_message({request_id:?})");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
@@ -352,8 +367,6 @@ impl McpProcess {
) -> anyhow::Result<mcp_types::JSONRPCError> {
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
@@ -377,10 +390,10 @@ impl McpProcess {
&mut self,
method: &str,
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_notification_message({method})");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(notification) => {
if notification.method == method {
@@ -405,10 +418,10 @@ impl McpProcess {
pub async fn read_stream_until_legacy_task_complete_notification(
&mut self,
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_legacy_task_complete_notification()");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(notification) => {
let is_match = if notification.method == "codex/event" {
@@ -427,6 +440,8 @@ impl McpProcess {
if is_match {
return Ok(notification);
} else {
eprintln!("ignoring notification: {notification:?}");
}
}
JSONRPCMessage::Request(_) => {

View File

@@ -30,7 +30,8 @@ use mcp_test_support::create_final_assistant_message_sse_response;
use mcp_test_support::create_mock_chat_completions_server;
use mcp_test_support::create_shell_sse_response;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Allow ample time on slower CI or under load to avoid flakes.
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(20);
/// Test that a shell command that is not on the "trusted" list triggers an
/// elicitation request to the MCP and that sending the approval runs the
@@ -52,9 +53,22 @@ async fn test_shell_command_approval_triggers_elicitation() {
}
async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
// We use `git init` because it will not be on the "trusted" list.
let shell_command = vec!["git".to_string(), "init".to_string()];
// Use a simple, untrusted command that creates a file so we can
// observe a side-effect.
//
// Crossplatform approach: run a tiny Python snippet to touch the file
// using `python3 -c ...` on all platforms.
let workdir_for_shell_function_call = TempDir::new()?;
let created_filename = "created_by_shell_tool.txt";
let created_file = workdir_for_shell_function_call
.path()
.join(created_filename);
let shell_command = vec![
"python3".to_string(),
"-c".to_string(),
format!("import pathlib; pathlib.Path('{created_filename}').touch()"),
];
let McpHandle {
process: mut mcp_process,
@@ -67,7 +81,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
Some(5_000),
"call1234",
)?,
create_final_assistant_message_sse_response("Enjoy your new git repo!")?,
create_final_assistant_message_sse_response("File created!")?,
])
.await?;
@@ -122,8 +136,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
.expect("task_complete_notification timeout")
.expect("task_complete_notification resp");
// Verify the original `codex` tool call completes and that `git init` ran
// successfully.
// Verify the original `codex` tool call completes and that the file was created.
let codex_response = timeout(
DEFAULT_READ_TIMEOUT,
mcp_process.read_stream_until_response_message(RequestId::Integer(codex_request_id)),
@@ -136,7 +149,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
result: json!({
"content": [
{
"text": "Enjoy your new git repo!",
"text": "File created!",
"type": "text"
}
]
@@ -145,10 +158,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
codex_response
);
assert!(
workdir_for_shell_function_call.path().join(".git").is_dir(),
".git folder should have been created"
);
assert!(created_file.is_file(), "created file should exist");
Ok(())
}

View File

@@ -0,0 +1,10 @@
use serde::Deserialize;
use serde::Serialize;
use std::path::PathBuf;
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct CustomPrompt {
pub name: String,
pub path: PathBuf,
pub content: String,
}

View File

@@ -1,4 +1,5 @@
pub mod config_types;
pub mod custom_prompts;
pub mod mcp_protocol;
pub mod message_history;
pub mod models;

View File

@@ -95,6 +95,22 @@ pub enum ResponseItem {
call_id: String,
output: String,
},
// Emitted by the Responses API when the agent triggers a web search.
// Example payload (from SSE `response.output_item.done`):
// {
// "id":"ws_...",
// "type":"web_search_call",
// "status":"completed",
// "action": {"type":"search","query":"weather: San Francisco, CA"}
// }
WebSearchCall {
#[serde(default, skip_serializing_if = "Option::is_none")]
id: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
status: Option<String>,
action: WebSearchAction,
},
#[serde(other)]
Other,
}
@@ -162,6 +178,16 @@ pub struct LocalShellExecAction {
pub user: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum WebSearchAction {
Search {
query: String,
},
#[serde(other)]
Other,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ReasoningItemReasoningSummary {

View File

@@ -10,6 +10,7 @@ use std::path::PathBuf;
use std::str::FromStr;
use std::time::Duration;
use crate::custom_prompts::CustomPrompt;
use mcp_types::CallToolResult;
use mcp_types::Tool as McpTool;
use serde::Deserialize;
@@ -146,6 +147,9 @@ pub enum Op {
/// Reply is delivered via `EventMsg::McpListToolsResponse`.
ListMcpTools,
/// Request the list of available custom prompts.
ListCustomPrompts,
/// Request the agent to summarize the current conversation context.
/// The agent will use its existing context (either conversation history or previous response id)
/// to generate a summary which will be returned as an AgentMessage event.
@@ -439,6 +443,8 @@ pub enum EventMsg {
WebSearchBegin(WebSearchBeginEvent),
WebSearchEnd(WebSearchEndEvent),
/// Notification that the server is about to execute a command.
ExecCommandBegin(ExecCommandBeginEvent),
@@ -472,6 +478,9 @@ pub enum EventMsg {
/// List of MCP tools available to the agent.
McpListToolsResponse(McpListToolsResponseEvent),
/// List of custom prompts available to the agent.
ListCustomPromptsResponse(ListCustomPromptsResponseEvent),
PlanUpdate(UpdatePlanArgs),
TurnAborted(TurnAbortedEvent),
@@ -668,6 +677,11 @@ impl McpToolCallEndEvent {
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct WebSearchBeginEvent {
pub call_id: String,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct WebSearchEndEvent {
pub call_id: String,
pub query: String,
}
@@ -806,6 +820,12 @@ pub struct McpListToolsResponseEvent {
pub tools: std::collections::HashMap<String, McpTool>,
}
/// Response payload for `Op::ListCustomPrompts`.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ListCustomPromptsResponseEvent {
pub custom_prompts: Vec<CustomPrompt>,
}
#[derive(Debug, Default, Clone, Deserialize, Serialize)]
pub struct SessionConfiguredEvent {
/// Unique id for this session.

View File

@@ -133,6 +133,12 @@ impl App {
self.chat_widget.handle_paste(pasted);
}
TuiEvent::Draw => {
if self
.chat_widget
.handle_paste_burst_tick(tui.frame_requester())
{
return Ok(true);
}
tui.draw(
self.chat_widget.desired_height(tui.terminal.size()?.width),
|frame| {

View File

@@ -100,6 +100,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
assert_eq!(CancellationEvent::Handled, view.on_ctrl_c(&mut pane));
assert!(view.queue.is_empty());

View File

@@ -22,9 +22,13 @@ use ratatui::widgets::StatefulWidgetRef;
use ratatui::widgets::WidgetRef;
use super::chat_composer_history::ChatComposerHistory;
use super::command_popup::CommandItem;
use super::command_popup::CommandPopup;
use super::file_search_popup::FileSearchPopup;
use super::paste_burst::CharDecision;
use super::paste_burst::PasteBurst;
use crate::slash_command::SlashCommand;
use codex_protocol::custom_prompts::CustomPrompt;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
@@ -40,16 +44,12 @@ use std::path::PathBuf;
use std::time::Duration;
use std::time::Instant;
// Heuristic thresholds for detecting paste-like input bursts.
const PASTE_BURST_MIN_CHARS: u16 = 3;
const PASTE_BURST_CHAR_INTERVAL: Duration = Duration::from_millis(8);
const PASTE_ENTER_SUPPRESS_WINDOW: Duration = Duration::from_millis(120);
/// If the pasted content exceeds this number of characters, replace it with a
/// placeholder in the UI.
const LARGE_PASTE_CHAR_THRESHOLD: usize = 1000;
/// Result returned when the user interacts with the text area.
#[derive(Debug, PartialEq)]
pub enum InputResult {
Submitted(String),
Command(SlashCommand),
@@ -93,13 +93,11 @@ pub(crate) struct ChatComposer {
has_focus: bool,
attached_images: Vec<AttachedImage>,
placeholder_text: String,
// Heuristic state to detect non-bracketed paste bursts.
last_plain_char_time: Option<Instant>,
consecutive_plain_char_burst: u16,
paste_burst_until: Option<Instant>,
// Buffer to accumulate characters during a detected non-bracketed paste burst.
paste_burst_buffer: String,
in_paste_burst_mode: bool,
// Non-bracketed paste burst tracker.
paste_burst: PasteBurst,
// When true, disables paste-burst logic and inserts characters immediately.
disable_paste_burst: bool,
custom_prompts: Vec<CustomPrompt>,
}
/// Popup state at most one can be visible at any time.
@@ -115,10 +113,11 @@ impl ChatComposer {
app_event_tx: AppEventSender,
enhanced_keys_supported: bool,
placeholder_text: String,
disable_paste_burst: bool,
) -> Self {
let use_shift_enter_hint = enhanced_keys_supported;
Self {
let mut this = Self {
textarea: TextArea::new(),
textarea_state: RefCell::new(TextAreaState::default()),
active_popup: ActivePopup::None,
@@ -134,12 +133,13 @@ impl ChatComposer {
has_focus: has_input_focus,
attached_images: Vec::new(),
placeholder_text,
last_plain_char_time: None,
consecutive_plain_char_burst: 0,
paste_burst_until: None,
paste_burst_buffer: String::new(),
in_paste_burst_mode: false,
}
paste_burst: PasteBurst::default(),
disable_paste_burst: false,
custom_prompts: Vec::new(),
};
// Apply configuration via the setter to keep side-effects centralized.
this.set_disable_paste_burst(disable_paste_burst);
this
}
pub fn desired_height(&self, width: u16) -> u16 {
@@ -229,11 +229,15 @@ impl ChatComposer {
self.textarea.insert_str(&pasted);
}
// Explicit paste events should not trigger Enter suppression.
self.last_plain_char_time = None;
self.consecutive_plain_char_burst = 0;
self.paste_burst_until = None;
self.paste_burst.clear_after_explicit_paste();
// Keep popup sync consistent with key handling: prefer slash popup; only
// sync file popup when slash popup is NOT active.
self.sync_command_popup();
self.sync_file_search_popup();
if matches!(self.active_popup, ActivePopup::Command(_)) {
self.dismissed_file_popup_token = None;
} else {
self.sync_file_search_popup();
}
true
}
@@ -256,6 +260,14 @@ impl ChatComposer {
}
}
pub(crate) fn set_disable_paste_burst(&mut self, disabled: bool) {
let was_disabled = self.disable_paste_burst;
self.disable_paste_burst = disabled;
if disabled && !was_disabled {
self.paste_burst.clear_window_after_non_char();
}
}
/// Replace the entire composer content with `text` and reset cursor.
pub(crate) fn set_text_content(&mut self, text: String) {
self.textarea.set_text(&text);
@@ -270,6 +282,7 @@ impl ChatComposer {
self.textarea.text().to_string()
}
/// Attempt to start a burst by retro-capturing recent chars before the cursor.
pub fn attach_image(&mut self, path: PathBuf, width: u32, height: u32, format_label: &str) {
let placeholder = format!("[image {width}x{height} {format_label}]");
// Insert as an element to match large paste placeholder behavior:
@@ -284,6 +297,23 @@ impl ChatComposer {
images.into_iter().map(|img| img.path).collect()
}
pub(crate) fn flush_paste_burst_if_due(&mut self) -> bool {
let now = Instant::now();
if let Some(pasted) = self.paste_burst.flush_if_due(now) {
let _ = self.handle_paste(pasted);
return true;
}
false
}
pub(crate) fn is_in_paste_burst(&self) -> bool {
self.paste_burst.is_active()
}
pub(crate) fn recommended_paste_flush_delay() -> Duration {
PasteBurst::recommended_flush_delay()
}
/// Integrate results from an asynchronous file search.
pub(crate) fn on_file_search_result(&mut self, query: String, matches: Vec<FileMatch>) {
// Only apply if user is still editing a token starting with `query`.
@@ -366,16 +396,27 @@ impl ChatComposer {
KeyEvent {
code: KeyCode::Tab, ..
} => {
if let Some(cmd) = popup.selected_command() {
if let Some(sel) = popup.selected_item() {
let first_line = self.textarea.text().lines().next().unwrap_or("");
let starts_with_cmd = first_line
.trim_start()
.starts_with(&format!("/{}", cmd.command()));
if !starts_with_cmd {
self.textarea.set_text(&format!("/{} ", cmd.command()));
self.textarea.set_cursor(self.textarea.text().len());
match sel {
CommandItem::Builtin(cmd) => {
let starts_with_cmd = first_line
.trim_start()
.starts_with(&format!("/{}", cmd.command()));
if !starts_with_cmd {
self.textarea.set_text(&format!("/{} ", cmd.command()));
}
}
CommandItem::UserPrompt(idx) => {
if let Some(name) = popup.prompt_name(idx) {
let starts_with_cmd =
first_line.trim_start().starts_with(&format!("/{name}"));
if !starts_with_cmd {
self.textarea.set_text(&format!("/{name} "));
}
}
}
}
// After completing the command, move cursor to the end.
if !self.textarea.text().is_empty() {
@@ -390,16 +431,30 @@ impl ChatComposer {
modifiers: KeyModifiers::NONE,
..
} => {
if let Some(cmd) = popup.selected_command() {
if let Some(sel) = popup.selected_item() {
// Clear textarea so no residual text remains.
self.textarea.set_text("");
let result = (InputResult::Command(*cmd), true);
// Hide popup since the command has been dispatched.
// Capture any needed data from popup before clearing it.
let prompt_content = match sel {
CommandItem::UserPrompt(idx) => {
popup.prompt_content(idx).map(|s| s.to_string())
}
_ => None,
};
// Hide popup since an action has been dispatched.
self.active_popup = ActivePopup::None;
return result;
match sel {
CommandItem::Builtin(cmd) => {
return (InputResult::Command(cmd), true);
}
CommandItem::UserPrompt(_) => {
if let Some(contents) = prompt_content {
return (InputResult::Submitted(contents), true);
}
return (InputResult::None, true);
}
}
}
// Fallback to default newline handling if no command selected.
self.handle_key_event_without_popup(key_event)
@@ -423,9 +478,7 @@ impl ChatComposer {
#[inline]
fn handle_non_ascii_char(&mut self, input: KeyEvent) -> (InputResult, bool) {
if !self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
if let Some(pasted) = self.paste_burst.flush_before_modified_input() {
self.handle_paste(pasted);
}
self.textarea.input(input);
@@ -740,14 +793,11 @@ impl ChatComposer {
.next()
.unwrap_or("")
.starts_with('/');
if (self.in_paste_burst_mode || !self.paste_burst_buffer.is_empty())
&& !in_slash_context
{
self.paste_burst_buffer.push('\n');
if self.paste_burst.is_active() && !in_slash_context {
let now = Instant::now();
// Keep the window alive so subsequent lines are captured too.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
if self.paste_burst.append_newline_if_active(now) {
return (InputResult::None, true);
}
}
// If we have pending placeholder pastes, submit immediately to expand them.
if !self.pending_pastes.is_empty() {
@@ -768,19 +818,12 @@ impl ChatComposer {
// During a paste-like burst, treat Enter as a newline instead of submit.
let now = Instant::now();
let tight_after_char = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) <= PASTE_BURST_CHAR_INTERVAL);
let recent_after_char = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) <= PASTE_ENTER_SUPPRESS_WINDOW);
let burst_by_count =
recent_after_char && self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS;
let in_burst_window = self.paste_burst_until.is_some_and(|until| now <= until);
if tight_after_char || burst_by_count || in_burst_window {
if self
.paste_burst
.newline_should_insert_instead_of_submit(now)
{
self.textarea.insert_str("\n");
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
self.paste_burst.extend_window(now);
return (InputResult::None, true);
}
let mut text = self.textarea.text().to_string();
@@ -810,22 +853,16 @@ impl ChatComposer {
// If we have a buffered non-bracketed paste burst and enough time has
// elapsed since the last char, flush it before handling a new input.
let now = Instant::now();
let timed_out = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) > PASTE_BURST_CHAR_INTERVAL);
if timed_out && (!self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode) {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
if let Some(pasted) = self.paste_burst.flush_if_due(now) {
// Reuse normal paste path (handles large-paste placeholders).
self.handle_paste(pasted);
}
// If we're capturing a burst and receive Enter, accumulate it instead of inserting.
if matches!(input.code, KeyCode::Enter)
&& (self.in_paste_burst_mode || !self.paste_burst_buffer.is_empty())
&& self.paste_burst.is_active()
&& self.paste_burst.append_newline_if_active(now)
{
self.paste_burst_buffer.push('\n');
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
@@ -840,65 +877,50 @@ impl ChatComposer {
modifiers.contains(KeyModifiers::CONTROL) || modifiers.contains(KeyModifiers::ALT);
if !has_ctrl_or_alt {
// Non-ASCII characters (e.g., from IMEs) can arrive in quick bursts and be
// misclassified by our non-bracketed paste heuristic. To avoid leaving
// residual buffered content or misdetecting a paste, flush any burst buffer
// and insert non-ASCII characters directly.
// misclassified by paste heuristics. Flush any active burst buffer and insert
// non-ASCII characters directly.
if !ch.is_ascii() {
return self.handle_non_ascii_char(input);
}
// Update burst heuristics.
match self.last_plain_char_time {
Some(prev) if now.duration_since(prev) <= PASTE_BURST_CHAR_INTERVAL => {
self.consecutive_plain_char_burst =
self.consecutive_plain_char_burst.saturating_add(1);
}
_ => {
self.consecutive_plain_char_burst = 1;
}
}
self.last_plain_char_time = Some(now);
// If we're already buffering, capture the char into the buffer.
if self.in_paste_burst_mode {
self.paste_burst_buffer.push(ch);
// Keep the window alive while we receive the burst.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
} else if self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS {
// Do not start burst buffering while typing a slash command (first line starts with '/').
let first_line = self.textarea.text().lines().next().unwrap_or("");
if first_line.starts_with('/') {
// Keep heuristics but do not buffer.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
// Insert normally.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
match self.paste_burst.on_plain_char(ch, now) {
CharDecision::BufferAppend => {
self.paste_burst.append_char_to_buffer(ch, now);
return (InputResult::None, true);
}
CharDecision::BeginBuffer { retro_chars } => {
let cur = self.textarea.cursor();
let txt = self.textarea.text();
let safe_cur = Self::clamp_to_char_boundary(txt, cur);
let before = &txt[..safe_cur];
if let Some(grab) =
self.paste_burst
.decide_begin_buffer(now, before, retro_chars as usize)
{
if !grab.grabbed.is_empty() {
self.textarea.replace_range(grab.start_byte..safe_cur, "");
}
self.paste_burst.begin_with_retro_grabbed(grab.grabbed, now);
self.paste_burst.append_char_to_buffer(ch, now);
return (InputResult::None, true);
}
// If decide_begin_buffer opted not to start buffering,
// fall through to normal insertion below.
}
CharDecision::BeginBufferFromPending => {
// First char was held; now append the current one.
self.paste_burst.append_char_to_buffer(ch, now);
return (InputResult::None, true);
}
CharDecision::RetainFirstChar => {
// Keep the first fast char pending momentarily.
return (InputResult::None, true);
}
// Begin buffering from this character onward.
self.paste_burst_buffer.push(ch);
self.in_paste_burst_mode = true;
// Keep the window alive to continue capturing.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
// Not buffering: insert normally and continue.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
return (InputResult::None, true);
} else {
// Modified char ends any burst: flush buffered content before applying.
if !self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
self.handle_paste(pasted);
}
}
if let Some(pasted) = self.paste_burst.flush_before_modified_input() {
self.handle_paste(pasted);
}
}
// For non-char inputs (or after flushing), handle normally.
@@ -925,25 +947,15 @@ impl ChatComposer {
let has_ctrl_or_alt = modifiers.contains(KeyModifiers::CONTROL)
|| modifiers.contains(KeyModifiers::ALT);
if has_ctrl_or_alt {
// Modified char: clear burst window.
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.paste_burst_until = None;
self.in_paste_burst_mode = false;
self.paste_burst_buffer.clear();
self.paste_burst.clear_window_after_non_char();
}
// Plain chars handled above.
}
KeyCode::Enter => {
// Keep burst window alive (supports blank lines in paste).
}
_ => {
// Other keys: clear burst window and any buffer (after flushing earlier).
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.paste_burst_until = None;
self.in_paste_burst_mode = false;
// Do not clear paste_burst_buffer here; it should have been flushed above.
// Other keys: clear burst window (buffer should have been flushed above if needed).
self.paste_burst.clear_window_after_non_char();
}
}
@@ -1135,7 +1147,7 @@ impl ChatComposer {
}
_ => {
if input_starts_with_slash {
let mut command_popup = CommandPopup::new();
let mut command_popup = CommandPopup::new(self.custom_prompts.clone());
command_popup.on_composer_text_change(first_line.to_string());
self.active_popup = ActivePopup::Command(command_popup);
}
@@ -1143,6 +1155,13 @@ impl ChatComposer {
}
}
pub(crate) fn set_custom_prompts(&mut self, prompts: Vec<CustomPrompt>) {
self.custom_prompts = prompts.clone();
if let ActivePopup::Command(popup) = &mut self.active_popup {
popup.set_prompts(prompts);
}
}
/// Synchronize `self.file_search_popup` with the current text in the textarea.
/// Note this is only called when self.active_popup is NOT Command.
fn sync_file_search_popup(&mut self) {
@@ -1480,8 +1499,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let needs_redraw = composer.handle_paste("hello".to_string());
assert!(needs_redraw);
@@ -1504,8 +1528,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let large = "x".repeat(LARGE_PASTE_CHAR_THRESHOLD + 10);
let needs_redraw = composer.handle_paste(large.clone());
@@ -1534,8 +1563,13 @@ mod tests {
let large = "y".repeat(LARGE_PASTE_CHAR_THRESHOLD + 1);
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
composer.handle_paste(large);
assert_eq!(composer.pending_pastes.len(), 1);
@@ -1576,6 +1610,7 @@ mod tests {
sender.clone(),
false,
"Ask Codex to do anything".to_string(),
false,
);
if let Some(text) = input {
@@ -1605,6 +1640,18 @@ mod tests {
}
}
// Test helper: simulate human typing with a brief delay and flush the paste-burst buffer
fn type_chars_humanlike(composer: &mut ChatComposer, chars: &[char]) {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
for &ch in chars {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
std::thread::sleep(ChatComposer::recommended_paste_flush_delay());
let _ = composer.flush_paste_burst_if_due();
}
}
#[test]
fn slash_init_dispatches_command_and_does_not_submit_literal_text() {
use crossterm::event::KeyCode;
@@ -1613,15 +1660,16 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Type the slash command.
for ch in [
'/', 'i', 'n', 'i', 't', // "/init"
] {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
}
type_chars_humanlike(&mut composer, &['/', 'i', 'n', 'i', 't']);
// Press Enter to dispatch the selected command.
let (result, _needs_redraw) =
@@ -1649,12 +1697,15 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
for ch in ['/', 'c'] {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
}
type_chars_humanlike(&mut composer, &['/', 'c']);
let (_result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Tab, KeyModifiers::NONE));
@@ -1671,12 +1722,15 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
for ch in ['/', 'm', 'e', 'n', 't', 'i', 'o', 'n'] {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
}
type_chars_humanlike(&mut composer, &['/', 'm', 'e', 'n', 't', 'i', 'o', 'n']);
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
@@ -1703,8 +1757,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Define test cases: (paste content, is_large)
let test_cases = [
@@ -1777,8 +1836,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Define test cases: (content, is_large)
let test_cases = [
@@ -1844,8 +1908,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Define test cases: (cursor_position_from_end, expected_pending_count)
let test_cases = [
@@ -1887,8 +1956,13 @@ mod tests {
fn attach_image_and_submit_includes_image_paths() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path = PathBuf::from("/tmp/image1.png");
composer.attach_image(path.clone(), 32, 16, "PNG");
composer.handle_paste(" hi".into());
@@ -1906,8 +1980,13 @@ mod tests {
fn attach_image_without_text_submits_empty_text_and_images() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path = PathBuf::from("/tmp/image2.png");
composer.attach_image(path.clone(), 10, 5, "PNG");
let (result, _) =
@@ -1926,8 +2005,13 @@ mod tests {
fn image_placeholder_backspace_behaves_like_text_placeholder() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path = PathBuf::from("/tmp/image3.png");
composer.attach_image(path.clone(), 20, 10, "PNG");
let placeholder = composer.attached_images[0].placeholder.clone();
@@ -1962,8 +2046,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Insert an image placeholder at the start
let path = PathBuf::from("/tmp/image_multibyte.png");
@@ -1983,8 +2072,13 @@ mod tests {
fn deleting_one_of_duplicate_image_placeholders_removes_matching_entry() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path1 = PathBuf::from("/tmp/image_dup1.png");
let path2 = PathBuf::from("/tmp/image_dup2.png");
@@ -2025,8 +2119,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let needs_redraw = composer.handle_paste(tmp_path.to_string_lossy().to_string());
assert!(needs_redraw);
@@ -2035,4 +2134,136 @@ mod tests {
let imgs = composer.take_recent_submission_images();
assert_eq!(imgs, vec![tmp_path.clone()]);
}
#[test]
fn selecting_custom_prompt_submits_file_contents() {
let prompt_text = "Hello from saved prompt";
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Inject prompts as if received via event.
composer.set_custom_prompts(vec![CustomPrompt {
name: "my-prompt".to_string(),
path: "/tmp/my-prompt.md".to_string().into(),
content: prompt_text.to_string(),
}]);
type_chars_humanlike(
&mut composer,
&['/', 'm', 'y', '-', 'p', 'r', 'o', 'm', 'p', 't'],
);
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
assert_eq!(InputResult::Submitted(prompt_text.to_string()), result);
}
#[test]
fn burst_paste_fast_small_buffers_and_flushes_on_stop() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let count = 32;
for _ in 0..count {
let _ =
composer.handle_key_event(KeyEvent::new(KeyCode::Char('a'), KeyModifiers::NONE));
assert!(
composer.is_in_paste_burst(),
"expected active paste burst during fast typing"
);
assert!(
composer.textarea.text().is_empty(),
"text should not appear during burst"
);
}
assert!(
composer.textarea.text().is_empty(),
"text should remain empty until flush"
);
std::thread::sleep(ChatComposer::recommended_paste_flush_delay());
let flushed = composer.flush_paste_burst_if_due();
assert!(flushed, "expected buffered text to flush after stop");
assert_eq!(composer.textarea.text(), "a".repeat(count));
assert!(
composer.pending_pastes.is_empty(),
"no placeholder for small burst"
);
}
#[test]
fn burst_paste_fast_large_inserts_placeholder_on_flush() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let count = LARGE_PASTE_CHAR_THRESHOLD + 1; // > threshold to trigger placeholder
for _ in 0..count {
let _ =
composer.handle_key_event(KeyEvent::new(KeyCode::Char('x'), KeyModifiers::NONE));
}
// Nothing should appear until we stop and flush
assert!(composer.textarea.text().is_empty());
std::thread::sleep(ChatComposer::recommended_paste_flush_delay());
let flushed = composer.flush_paste_burst_if_due();
assert!(flushed, "expected flush after stopping fast input");
let expected_placeholder = format!("[Pasted Content {count} chars]");
assert_eq!(composer.textarea.text(), expected_placeholder);
assert_eq!(composer.pending_pastes.len(), 1);
assert_eq!(composer.pending_pastes[0].0, expected_placeholder);
assert_eq!(composer.pending_pastes[0].1.len(), count);
assert!(composer.pending_pastes[0].1.chars().all(|c| c == 'x'));
}
#[test]
fn humanlike_typing_1000_chars_appears_live_no_placeholder() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let count = LARGE_PASTE_CHAR_THRESHOLD; // 1000 in current config
let chars: Vec<char> = vec!['z'; count];
type_chars_humanlike(&mut composer, &chars);
assert_eq!(composer.textarea.text(), "z".repeat(count));
assert!(composer.pending_pastes.is_empty());
}
}

View File

@@ -9,22 +9,58 @@ use super::selection_popup_common::render_rows;
use crate::slash_command::SlashCommand;
use crate::slash_command::built_in_slash_commands;
use codex_common::fuzzy_match::fuzzy_match;
use codex_protocol::custom_prompts::CustomPrompt;
use std::collections::HashSet;
/// A selectable item in the popup: either a built-in command or a user prompt.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub(crate) enum CommandItem {
Builtin(SlashCommand),
// Index into `prompts`
UserPrompt(usize),
}
pub(crate) struct CommandPopup {
command_filter: String,
all_commands: Vec<(&'static str, SlashCommand)>,
builtins: Vec<(&'static str, SlashCommand)>,
prompts: Vec<CustomPrompt>,
state: ScrollState,
}
impl CommandPopup {
pub(crate) fn new() -> Self {
pub(crate) fn new(mut prompts: Vec<CustomPrompt>) -> Self {
let builtins = built_in_slash_commands();
// Exclude prompts that collide with builtin command names and sort by name.
let exclude: HashSet<String> = builtins.iter().map(|(n, _)| (*n).to_string()).collect();
prompts.retain(|p| !exclude.contains(&p.name));
prompts.sort_by(|a, b| a.name.cmp(&b.name));
Self {
command_filter: String::new(),
all_commands: built_in_slash_commands(),
builtins,
prompts,
state: ScrollState::new(),
}
}
pub(crate) fn set_prompts(&mut self, mut prompts: Vec<CustomPrompt>) {
let exclude: HashSet<String> = self
.builtins
.iter()
.map(|(n, _)| (*n).to_string())
.collect();
prompts.retain(|p| !exclude.contains(&p.name));
prompts.sort_by(|a, b| a.name.cmp(&b.name));
self.prompts = prompts;
}
pub(crate) fn prompt_name(&self, idx: usize) -> Option<&str> {
self.prompts.get(idx).map(|p| p.name.as_str())
}
pub(crate) fn prompt_content(&self, idx: usize) -> Option<&str> {
self.prompts.get(idx).map(|p| p.content.as_str())
}
/// Update the filter string based on the current composer text. The text
/// passed in is expected to start with a leading '/'. Everything after the
/// *first* '/' on the *first* line becomes the active filter that is used
@@ -50,7 +86,7 @@ impl CommandPopup {
}
// Reset or clamp selected index based on new filtered list.
let matches_len = self.filtered_commands().len();
let matches_len = self.filtered_items().len();
self.state.clamp_selection(matches_len);
self.state
.ensure_visible(matches_len, MAX_POPUP_ROWS.min(matches_len));
@@ -59,56 +95,76 @@ impl CommandPopup {
/// Determine the preferred height of the popup. This is the number of
/// rows required to show at most MAX_POPUP_ROWS commands.
pub(crate) fn calculate_required_height(&self) -> u16 {
self.filtered_commands().len().clamp(1, MAX_POPUP_ROWS) as u16
self.filtered_items().len().clamp(1, MAX_POPUP_ROWS) as u16
}
/// Compute fuzzy-filtered matches paired with optional highlight indices and score.
/// Sorted by ascending score, then by command name for stability.
fn filtered(&self) -> Vec<(&SlashCommand, Option<Vec<usize>>, i32)> {
/// Compute fuzzy-filtered matches over built-in commands and user prompts,
/// paired with optional highlight indices and score. Sorted by ascending
/// score, then by name for stability.
fn filtered(&self) -> Vec<(CommandItem, Option<Vec<usize>>, i32)> {
let filter = self.command_filter.trim();
let mut out: Vec<(&SlashCommand, Option<Vec<usize>>, i32)> = Vec::new();
let mut out: Vec<(CommandItem, Option<Vec<usize>>, i32)> = Vec::new();
if filter.is_empty() {
for (_, cmd) in self.all_commands.iter() {
out.push((cmd, None, 0));
// Built-ins first, in presentation order.
for (_, cmd) in self.builtins.iter() {
out.push((CommandItem::Builtin(*cmd), None, 0));
}
// Then prompts, already sorted by name.
for idx in 0..self.prompts.len() {
out.push((CommandItem::UserPrompt(idx), None, 0));
}
// Keep the original presentation order when no filter is applied.
return out;
} else {
for (_, cmd) in self.all_commands.iter() {
if let Some((indices, score)) = fuzzy_match(cmd.command(), filter) {
out.push((cmd, Some(indices), score));
}
}
for (_, cmd) in self.builtins.iter() {
if let Some((indices, score)) = fuzzy_match(cmd.command(), filter) {
out.push((CommandItem::Builtin(*cmd), Some(indices), score));
}
}
// When filtering, sort by ascending score and then by command for stability.
out.sort_by(|a, b| a.2.cmp(&b.2).then_with(|| a.0.command().cmp(b.0.command())));
for (idx, p) in self.prompts.iter().enumerate() {
if let Some((indices, score)) = fuzzy_match(&p.name, filter) {
out.push((CommandItem::UserPrompt(idx), Some(indices), score));
}
}
// When filtering, sort by ascending score and then by name for stability.
out.sort_by(|a, b| {
a.2.cmp(&b.2).then_with(|| {
let an = match a.0 {
CommandItem::Builtin(c) => c.command(),
CommandItem::UserPrompt(i) => &self.prompts[i].name,
};
let bn = match b.0 {
CommandItem::Builtin(c) => c.command(),
CommandItem::UserPrompt(i) => &self.prompts[i].name,
};
an.cmp(bn)
})
});
out
}
fn filtered_commands(&self) -> Vec<&SlashCommand> {
fn filtered_items(&self) -> Vec<CommandItem> {
self.filtered().into_iter().map(|(c, _, _)| c).collect()
}
/// Move the selection cursor one step up.
pub(crate) fn move_up(&mut self) {
let matches = self.filtered_commands();
let len = matches.len();
let len = self.filtered_items().len();
self.state.move_up_wrap(len);
self.state.ensure_visible(len, MAX_POPUP_ROWS.min(len));
}
/// Move the selection cursor one step down.
pub(crate) fn move_down(&mut self) {
let matches = self.filtered_commands();
let matches_len = matches.len();
let matches_len = self.filtered_items().len();
self.state.move_down_wrap(matches_len);
self.state
.ensure_visible(matches_len, MAX_POPUP_ROWS.min(matches_len));
}
/// Return currently selected command, if any.
pub(crate) fn selected_command(&self) -> Option<&SlashCommand> {
let matches = self.filtered_commands();
pub(crate) fn selected_item(&self) -> Option<CommandItem> {
let matches = self.filtered_items();
self.state
.selected_idx
.and_then(|idx| matches.get(idx).copied())
@@ -123,11 +179,19 @@ impl WidgetRef for CommandPopup {
} else {
matches
.into_iter()
.map(|(cmd, indices, _)| GenericDisplayRow {
name: format!("/{}", cmd.command()),
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
is_current: false,
description: Some(cmd.description().to_string()),
.map(|(item, indices, _)| match item {
CommandItem::Builtin(cmd) => GenericDisplayRow {
name: format!("/{}", cmd.command()),
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
is_current: false,
description: Some(cmd.description().to_string()),
},
CommandItem::UserPrompt(i) => GenericDisplayRow {
name: format!("/{}", self.prompts[i].name),
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
is_current: false,
description: Some("send saved prompt".to_string()),
},
})
.collect()
};
@@ -141,31 +205,82 @@ mod tests {
#[test]
fn filter_includes_init_when_typing_prefix() {
let mut popup = CommandPopup::new();
let mut popup = CommandPopup::new(Vec::new());
// Simulate the composer line starting with '/in' so the popup filters
// matching commands by prefix.
popup.on_composer_text_change("/in".to_string());
// Access the filtered list via the selected command and ensure that
// one of the matches is the new "init" command.
let matches = popup.filtered_commands();
let matches = popup.filtered_items();
let has_init = matches.iter().any(|item| match item {
CommandItem::Builtin(cmd) => cmd.command() == "init",
CommandItem::UserPrompt(_) => false,
});
assert!(
matches.iter().any(|cmd| cmd.command() == "init"),
has_init,
"expected '/init' to appear among filtered commands"
);
}
#[test]
fn selecting_init_by_exact_match() {
let mut popup = CommandPopup::new();
let mut popup = CommandPopup::new(Vec::new());
popup.on_composer_text_change("/init".to_string());
// When an exact match exists, the selected command should be that
// command by default.
let selected = popup.selected_command();
let selected = popup.selected_item();
match selected {
Some(cmd) => assert_eq!(cmd.command(), "init"),
Some(CommandItem::Builtin(cmd)) => assert_eq!(cmd.command(), "init"),
Some(CommandItem::UserPrompt(_)) => panic!("unexpected prompt selected for '/init'"),
None => panic!("expected a selected command for exact match"),
}
}
#[test]
fn prompt_discovery_lists_custom_prompts() {
let prompts = vec![
CustomPrompt {
name: "foo".to_string(),
path: "/tmp/foo.md".to_string().into(),
content: "hello from foo".to_string(),
},
CustomPrompt {
name: "bar".to_string(),
path: "/tmp/bar.md".to_string().into(),
content: "hello from bar".to_string(),
},
];
let popup = CommandPopup::new(prompts);
let items = popup.filtered_items();
let mut prompt_names: Vec<String> = items
.into_iter()
.filter_map(|it| match it {
CommandItem::UserPrompt(i) => popup.prompt_name(i).map(|s| s.to_string()),
_ => None,
})
.collect();
prompt_names.sort();
assert_eq!(prompt_names, vec!["bar".to_string(), "foo".to_string()]);
}
#[test]
fn prompt_name_collision_with_builtin_is_ignored() {
// Create a prompt named like a builtin (e.g. "init").
let popup = CommandPopup::new(vec![CustomPrompt {
name: "init".to_string(),
path: "/tmp/init.md".to_string().into(),
content: "should be ignored".to_string(),
}]);
let items = popup.filtered_items();
let has_collision_prompt = items.into_iter().any(|it| match it {
CommandItem::UserPrompt(i) => popup.prompt_name(i) == Some("init"),
_ => false,
});
assert!(
!has_collision_prompt,
"prompt with builtin name should be ignored"
);
}
}

View File

@@ -13,6 +13,7 @@ use ratatui::layout::Constraint;
use ratatui::layout::Layout;
use ratatui::layout::Rect;
use ratatui::widgets::WidgetRef;
use std::time::Duration;
mod approval_modal_view;
mod bottom_pane_view;
@@ -21,6 +22,7 @@ mod chat_composer_history;
mod command_popup;
mod file_search_popup;
mod list_selection_view;
mod paste_burst;
mod popup_consts;
mod scroll_state;
mod selection_popup_common;
@@ -34,6 +36,7 @@ pub(crate) enum CancellationEvent {
pub(crate) use chat_composer::ChatComposer;
pub(crate) use chat_composer::InputResult;
use codex_protocol::custom_prompts::CustomPrompt;
use crate::status_indicator_widget::StatusIndicatorWidget;
use approval_modal_view::ApprovalModalView;
@@ -69,6 +72,7 @@ pub(crate) struct BottomPaneParams {
pub(crate) has_input_focus: bool,
pub(crate) enhanced_keys_supported: bool,
pub(crate) placeholder_text: String,
pub(crate) disable_paste_burst: bool,
}
impl BottomPane {
@@ -81,6 +85,7 @@ impl BottomPane {
params.app_event_tx.clone(),
enhanced_keys_supported,
params.placeholder_text,
params.disable_paste_burst,
),
active_view: None,
app_event_tx: params.app_event_tx,
@@ -182,6 +187,9 @@ impl BottomPane {
if needs_redraw {
self.request_redraw();
}
if self.composer.is_in_paste_burst() {
self.request_redraw_in(ChatComposer::recommended_paste_flush_delay());
}
input_result
}
}
@@ -329,6 +337,12 @@ impl BottomPane {
self.request_redraw();
}
/// Update custom prompts available for the slash popup.
pub(crate) fn set_custom_prompts(&mut self, prompts: Vec<CustomPrompt>) {
self.composer.set_custom_prompts(prompts);
self.request_redraw();
}
pub(crate) fn composer_is_empty(&self) -> bool {
self.composer.is_empty()
}
@@ -382,12 +396,24 @@ impl BottomPane {
self.frame_requester.schedule_frame();
}
pub(crate) fn request_redraw_in(&self, dur: Duration) {
self.frame_requester.schedule_frame_in(dur);
}
// --- History helpers ---
pub(crate) fn set_history_metadata(&mut self, log_id: u64, entry_count: usize) {
self.composer.set_history_metadata(log_id, entry_count);
}
pub(crate) fn flush_paste_burst_if_due(&mut self) -> bool {
self.composer.flush_paste_burst_if_due()
}
pub(crate) fn is_in_paste_burst(&self) -> bool {
self.composer.is_in_paste_burst()
}
pub(crate) fn on_history_entry_response(
&mut self,
log_id: u64,
@@ -473,6 +499,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
pane.push_approval_request(exec_request());
assert_eq!(CancellationEvent::Handled, pane.on_ctrl_c());
@@ -492,6 +519,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Create an approval modal (active view).
@@ -522,6 +550,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Start a running task so the status indicator is active above the composer.
@@ -589,6 +618,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Begin a task: show initial status.
@@ -619,6 +649,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Activate spinner (status view replaces composer) with no live ring.
@@ -669,6 +700,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
pane.set_task_running(true);

View File

@@ -0,0 +1,246 @@
use std::time::Duration;
use std::time::Instant;
// Heuristic thresholds for detecting paste-like input bursts.
// Detect quickly to avoid showing a typed prefix before the paste is recognized.
const PASTE_BURST_MIN_CHARS: u16 = 3;
const PASTE_BURST_CHAR_INTERVAL: Duration = Duration::from_millis(8);
const PASTE_ENTER_SUPPRESS_WINDOW: Duration = Duration::from_millis(120);
#[derive(Default)]
pub(crate) struct PasteBurst {
last_plain_char_time: Option<Instant>,
consecutive_plain_char_burst: u16,
burst_window_until: Option<Instant>,
buffer: String,
active: bool,
// Hold first fast char briefly to avoid rendering flicker
pending_first_char: Option<(char, Instant)>,
}
pub(crate) enum CharDecision {
/// Start buffering and retroactively capture some already-inserted chars.
BeginBuffer { retro_chars: u16 },
/// We are currently buffering; append the current char into the buffer.
BufferAppend,
/// Do not insert/render this char yet; temporarily save the first fast
/// char while we wait to see if a paste-like burst follows.
RetainFirstChar,
/// Begin buffering using the previously saved first char (no retro grab needed).
BeginBufferFromPending,
}
pub(crate) struct RetroGrab {
pub start_byte: usize,
pub grabbed: String,
}
impl PasteBurst {
/// Recommended delay to wait between simulated keypresses (or before
/// scheduling a UI tick) so that a pending fast keystroke is flushed
/// out of the burst detector as normal typed input.
///
/// Primarily used by tests and by the TUI to reliably cross the
/// paste-burst timing threshold.
pub fn recommended_flush_delay() -> Duration {
PASTE_BURST_CHAR_INTERVAL + Duration::from_millis(1)
}
/// Entry point: decide how to treat a plain char with current timing.
pub fn on_plain_char(&mut self, ch: char, now: Instant) -> CharDecision {
match self.last_plain_char_time {
Some(prev) if now.duration_since(prev) <= PASTE_BURST_CHAR_INTERVAL => {
self.consecutive_plain_char_burst =
self.consecutive_plain_char_burst.saturating_add(1)
}
_ => self.consecutive_plain_char_burst = 1,
}
self.last_plain_char_time = Some(now);
if self.active {
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return CharDecision::BufferAppend;
}
// If we already held a first char and receive a second fast char,
// start buffering without retro-grabbing (we never rendered the first).
if let Some((held, held_at)) = self.pending_first_char
&& now.duration_since(held_at) <= PASTE_BURST_CHAR_INTERVAL
{
self.active = true;
// take() to clear pending; we already captured the held char above
let _ = self.pending_first_char.take();
self.buffer.push(held);
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return CharDecision::BeginBufferFromPending;
}
if self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS {
return CharDecision::BeginBuffer {
retro_chars: self.consecutive_plain_char_burst.saturating_sub(1),
};
}
// Save the first fast char very briefly to see if a burst follows.
self.pending_first_char = Some((ch, now));
CharDecision::RetainFirstChar
}
/// Flush the buffered burst if the inter-key timeout has elapsed.
///
/// Returns Some(String) when either:
/// - We were actively buffering paste-like input and the buffer is now
/// emitted as a single pasted string; or
/// - We had saved a single fast first-char with no subsequent burst and we
/// now emit that char as normal typed input.
///
/// Returns None if the timeout has not elapsed or there is nothing to flush.
pub fn flush_if_due(&mut self, now: Instant) -> Option<String> {
let timed_out = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) > PASTE_BURST_CHAR_INTERVAL);
if timed_out && self.is_active_internal() {
self.active = false;
let out = std::mem::take(&mut self.buffer);
Some(out)
} else if timed_out {
// If we were saving a single fast char and no burst followed,
// flush it as normal typed input.
if let Some((ch, _at)) = self.pending_first_char.take() {
Some(ch.to_string())
} else {
None
}
} else {
None
}
}
/// While bursting: accumulate a newline into the buffer instead of
/// submitting the textarea.
///
/// Returns true if a newline was appended (we are in a burst context),
/// false otherwise.
pub fn append_newline_if_active(&mut self, now: Instant) -> bool {
if self.is_active() {
self.buffer.push('\n');
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
true
} else {
false
}
}
/// Decide if Enter should insert a newline (burst context) vs submit.
pub fn newline_should_insert_instead_of_submit(&self, now: Instant) -> bool {
let in_burst_window = self.burst_window_until.is_some_and(|until| now <= until);
self.is_active() || in_burst_window
}
/// Keep the burst window alive.
pub fn extend_window(&mut self, now: Instant) {
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
}
/// Begin buffering with retroactively grabbed text.
pub fn begin_with_retro_grabbed(&mut self, grabbed: String, now: Instant) {
if !grabbed.is_empty() {
self.buffer.push_str(&grabbed);
}
self.active = true;
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
}
/// Append a char into the burst buffer.
pub fn append_char_to_buffer(&mut self, ch: char, now: Instant) {
self.buffer.push(ch);
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
}
/// Decide whether to begin buffering by retroactively capturing recent
/// chars from the slice before the cursor.
///
/// Heuristic: if the retro-grabbed slice contains any whitespace or is
/// sufficiently long (>= 16 characters), treat it as paste-like to avoid
/// rendering the typed prefix momentarily before the paste is recognized.
/// This favors responsiveness and prevents flicker for typical pastes
/// (URLs, file paths, multiline text) while not triggering on short words.
///
/// Returns Some(RetroGrab) with the start byte and grabbed text when we
/// decide to buffer retroactively; otherwise None.
pub fn decide_begin_buffer(
&mut self,
now: Instant,
before: &str,
retro_chars: usize,
) -> Option<RetroGrab> {
let start_byte = retro_start_index(before, retro_chars);
let grabbed = before[start_byte..].to_string();
let looks_pastey =
grabbed.chars().any(|c| c.is_whitespace()) || grabbed.chars().count() >= 16;
if looks_pastey {
// Note: caller is responsible for removing this slice from UI text.
self.begin_with_retro_grabbed(grabbed.clone(), now);
Some(RetroGrab {
start_byte,
grabbed,
})
} else {
None
}
}
/// Before applying modified/non-char input: flush buffered burst immediately.
pub fn flush_before_modified_input(&mut self) -> Option<String> {
if self.is_active() {
self.active = false;
Some(std::mem::take(&mut self.buffer))
} else {
None
}
}
/// Clear only the timing window and any pending first-char.
///
/// Does not emit or clear the buffered text itself; callers should have
/// already flushed (if needed) via one of the flush methods above.
pub fn clear_window_after_non_char(&mut self) {
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.burst_window_until = None;
self.active = false;
self.pending_first_char = None;
}
/// Returns true if we are in any paste-burst related transient state
/// (actively buffering, have a non-empty buffer, or have saved the first
/// fast char while waiting for a potential burst).
pub fn is_active(&self) -> bool {
self.is_active_internal() || self.pending_first_char.is_some()
}
fn is_active_internal(&self) -> bool {
self.active || !self.buffer.is_empty()
}
pub fn clear_after_explicit_paste(&mut self) {
self.last_plain_char_time = None;
self.consecutive_plain_char_burst = 0;
self.burst_window_until = None;
self.active = false;
self.buffer.clear();
self.pending_first_char = None;
}
}
pub(crate) fn retro_start_index(before: &str, retro_chars: usize) -> usize {
if retro_chars == 0 {
return before.len();
}
before
.char_indices()
.rev()
.nth(retro_chars.saturating_sub(1))
.map(|(idx, _)| idx)
.unwrap_or(0)
}
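
Illustrative sketch (not part of this change): how a caller is expected to drive the PasteBurst state machine for a fast run of plain characters, written as a test that could live inside paste_burst.rs since the types are crate-private. The exact sequence of CharDecision values here follows from the heuristics above and is an assumption about typical usage, not a quote from the diff.

#[cfg(test)]
mod paste_burst_sketch {
    use super::*;
    use std::time::Duration;
    use std::time::Instant;

    #[test]
    fn fast_chars_buffer_then_flush_as_one_paste() {
        let mut burst = PasteBurst::default();
        let mut now = Instant::now();

        // Characters arriving faster than PASTE_BURST_CHAR_INTERVAL.
        for ch in "hello world".chars() {
            match burst.on_plain_char(ch, now) {
                // The first fast char is held back so it never flickers on screen.
                CharDecision::RetainFirstChar => {}
                // The second fast char starts buffering using the held char;
                // later chars keep appending while the burst stays active.
                CharDecision::BeginBufferFromPending | CharDecision::BufferAppend => {
                    burst.append_char_to_buffer(ch, now);
                }
                // Retro-capture only applies when chars were already rendered,
                // which does not happen in this sequence.
                CharDecision::BeginBuffer { .. } => unreachable!(),
            }
            now += Duration::from_millis(1);
        }

        // Nothing is due while keystrokes keep arriving quickly.
        assert!(burst.flush_if_due(now).is_none());

        // Once the inter-key gap exceeds the threshold, the buffered text is
        // emitted as a single paste-like string.
        now += PasteBurst::recommended_flush_delay();
        assert_eq!(burst.flush_if_due(now), Some("hello world".to_string()));
    }

    #[test]
    fn retro_start_index_counts_chars_not_bytes() {
        // "é" is two bytes; grabbing the last three chars of "héllo" should
        // start at the byte index of the first 'l'.
        assert_eq!(retro_start_index("héllo", 3), 3);
        // Zero chars to grab means "start at the end of the slice".
        assert_eq!(retro_start_index("abc", 0), 3);
    }
}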

View File

@@ -19,6 +19,7 @@ use codex_core::protocol::ExecApprovalRequestEvent;
use codex_core::protocol::ExecCommandBeginEvent;
use codex_core::protocol::ExecCommandEndEvent;
use codex_core::protocol::InputItem;
use codex_core::protocol::ListCustomPromptsResponseEvent;
use codex_core::protocol::McpListToolsResponseEvent;
use codex_core::protocol::McpToolCallBeginEvent;
use codex_core::protocol::McpToolCallEndEvent;
@@ -30,6 +31,7 @@ use codex_core::protocol::TokenUsage;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_core::protocol::WebSearchEndEvent;
use codex_protocol::parse_command::ParsedCommand;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
@@ -153,6 +155,8 @@ impl ChatWidget {
event,
self.show_welcome_banner,
));
// Ask codex-core to enumerate custom prompts for this session.
self.submit_op(Op::ListCustomPrompts);
if let Some(user_message) = self.initial_user_message.take() {
self.submit_user_message(user_message);
}
@@ -355,9 +359,16 @@ impl ChatWidget {
self.defer_or_handle(|q| q.push_mcp_end(ev), |s| s.handle_mcp_end_now(ev2));
}
fn on_web_search_begin(&mut self, ev: WebSearchBeginEvent) {
fn on_web_search_begin(&mut self, _ev: WebSearchBeginEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_web_search_call(ev.query));
}
fn on_web_search_end(&mut self, ev: WebSearchEndEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_web_search_call(format!(
"Searched: {}",
ev.query
)));
}
fn on_get_history_entry_response(
@@ -604,6 +615,7 @@ impl ChatWidget {
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
disable_paste_burst: config.disable_paste_burst,
}),
active_exec_cell: None,
config: config.clone(),
@@ -652,6 +664,7 @@ impl ChatWidget {
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
disable_paste_burst: config.disable_paste_burst,
}),
active_exec_cell: None,
config: config.clone(),
@@ -751,12 +764,20 @@ impl ChatWidget {
}
fn dispatch_command(&mut self, cmd: SlashCommand) {
if !cmd.available_during_task() && self.bottom_pane.is_task_running() {
let message = format!(
"'/'{}' is disabled while a task is in progress.",
cmd.command()
);
self.add_to_history(history_cell::new_error_event(message));
self.request_redraw();
return;
}
match cmd {
SlashCommand::New => {
self.app_event_tx.send(AppEvent::NewSession);
}
SlashCommand::Init => {
// Guard: do not run if a task is active.
const INIT_PROMPT: &str = include_str!("../prompt_for_init_command.md");
self.submit_text_message(INIT_PROMPT.to_string());
}
@@ -850,6 +871,24 @@ impl ChatWidget {
self.bottom_pane.handle_paste(text);
}
// Returns true if caller should skip rendering this frame (a future frame is scheduled).
pub(crate) fn handle_paste_burst_tick(&mut self, frame_requester: FrameRequester) -> bool {
if self.bottom_pane.flush_paste_burst_if_due() {
// A paste just flushed; request an immediate redraw and skip this frame.
self.request_redraw();
true
} else if self.bottom_pane.is_in_paste_burst() {
// While capturing a burst, schedule a follow-up tick and skip this frame
// to avoid redundant renders between ticks.
frame_requester.schedule_frame_in(
crate::bottom_pane::ChatComposer::recommended_paste_flush_delay(),
);
true
} else {
false
}
}
fn flush_active_exec_cell(&mut self) {
if let Some(active) = self.active_exec_cell.take() {
self.last_history_was_exec = true;
@@ -961,8 +1000,10 @@ impl ChatWidget {
EventMsg::McpToolCallBegin(ev) => self.on_mcp_tool_call_begin(ev),
EventMsg::McpToolCallEnd(ev) => self.on_mcp_tool_call_end(ev),
EventMsg::WebSearchBegin(ev) => self.on_web_search_begin(ev),
EventMsg::WebSearchEnd(ev) => self.on_web_search_end(ev),
EventMsg::GetHistoryEntryResponse(ev) => self.on_get_history_entry_response(ev),
EventMsg::McpListToolsResponse(ev) => self.on_list_mcp_tools(ev),
EventMsg::ListCustomPromptsResponse(ev) => self.on_list_custom_prompts(ev),
EventMsg::ShutdownComplete => self.on_shutdown_complete(),
EventMsg::TurnDiff(TurnDiffEvent { unified_diff }) => self.on_turn_diff(unified_diff),
EventMsg::BackgroundEvent(BackgroundEventEvent { message }) => {
@@ -1192,6 +1233,13 @@ impl ChatWidget {
self.add_to_history(history_cell::new_mcp_tools_output(&self.config, ev.tools));
}
fn on_list_custom_prompts(&mut self, ev: ListCustomPromptsResponseEvent) {
let len = ev.custom_prompts.len();
debug!("received {len} custom prompts");
// Forward to bottom pane so the slash popup can show them now.
self.bottom_pane.set_custom_prompts(ev.custom_prompts);
}
/// Programmatically submit a user text message as if typed in the
/// composer. The text will be added to conversation history and sent to
/// the agent.

View File

@@ -164,6 +164,7 @@ fn make_chatwidget_manual() -> (
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
let widget = ChatWidget {
app_event_tx,

View File

@@ -22,6 +22,7 @@
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
use std::io;
use std::io::ErrorKind;
use ratatui::backend::Backend;
use ratatui::backend::ClearType;
@@ -200,7 +201,14 @@ where
/// Creates a new [`Terminal`] with the given [`Backend`] and [`TerminalOptions`].
pub fn with_options(mut backend: B) -> io::Result<Self> {
let screen_size = backend.size()?;
let cursor_pos = backend.get_cursor_position()?;
let cursor_pos = backend.get_cursor_position().or_else(|error| {
if is_get_cursor_position_timeout_error(&error) {
tracing::warn!("cursor position read timed out during startup: {error}");
Ok(Position { x: 0, y: 0 })
} else {
Err(error)
}
})?;
Ok(Self {
backend,
buffers: [
@@ -406,7 +414,19 @@ where
/// This is the position of the cursor after the last draw call.
#[allow(dead_code)]
pub fn get_cursor_position(&mut self) -> io::Result<Position> {
self.backend.get_cursor_position()
self.backend
.get_cursor_position()
.inspect(|position| {
self.last_known_cursor_pos = *position;
})
.or_else(|error| {
if is_get_cursor_position_timeout_error(&error) {
tracing::warn!("cursor position read timed out: {error}");
Ok(self.last_known_cursor_pos)
} else {
Err(error)
}
})
}
/// Sets the cursor position.
@@ -441,3 +461,10 @@ where
self.backend.size()
}
}
// Crossterm occasionally times out while another task holds the terminal for event polling.
// That error originates here: https://github.com/crossterm-rs/crossterm/blob/6af9116b6a8ba365d8d8e7a806cbce318498b84d/src/cursor/sys/unix.rs#L53
fn is_get_cursor_position_timeout_error(error: &io::Error) -> bool {
error.kind() == ErrorKind::Other
&& error.to_string() == "The cursor position could not be read within a normal duration"
}
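
Illustrative sketch (not part of this change): the helper above is keyed to the exact message crossterm produces for this timeout, so a quick check of the matching and non-matching cases looks like the following, written as a test in the same file.

#[cfg(test)]
mod cursor_timeout_sketch {
    use super::*;

    #[test]
    fn detects_only_the_crossterm_cursor_timeout_error() {
        // The error crossterm raises when the cursor position read times out.
        let timeout = io::Error::new(
            ErrorKind::Other,
            "The cursor position could not be read within a normal duration",
        );
        assert!(is_get_cursor_position_timeout_error(&timeout));

        // Any other I/O error is propagated rather than silently replaced.
        let other = io::Error::new(ErrorKind::BrokenPipe, "broken pipe");
        assert!(!is_get_cursor_position_timeout_error(&other));
    }
}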

View File

@@ -276,10 +276,54 @@ pub(crate) fn new_session_info(
Line::from("".dim()),
Line::from(" To get started, describe a task or try one of these commands:".dim()),
Line::from("".dim()),
Line::from(format!(" /init - {}", SlashCommand::Init.description()).dim()),
Line::from(format!(" /status - {}", SlashCommand::Status.description()).dim()),
Line::from(format!(" /approvals - {}", SlashCommand::Approvals.description()).dim()),
Line::from(format!(" /model - {}", SlashCommand::Model.description()).dim()),
Line::from(vec![
Span::styled(
" /init",
Style::default()
.add_modifier(Modifier::BOLD)
.fg(Color::White),
),
Span::styled(
format!(" - {}", SlashCommand::Init.description()),
Style::default().dim(),
),
]),
Line::from(vec![
Span::styled(
" /status",
Style::default()
.add_modifier(Modifier::BOLD)
.fg(Color::White),
),
Span::styled(
format!(" - {}", SlashCommand::Status.description()),
Style::default().dim(),
),
]),
Line::from(vec![
Span::styled(
" /approvals",
Style::default()
.add_modifier(Modifier::BOLD)
.fg(Color::White),
),
Span::styled(
format!(" - {}", SlashCommand::Approvals.description()),
Style::default().dim(),
),
]),
Line::from(vec![
Span::styled(
" /model",
Style::default()
.add_modifier(Modifier::BOLD)
.fg(Color::White),
),
Span::styled(
format!(" - {}", SlashCommand::Model.description()),
Style::default().dim(),
),
]),
];
PlainHistoryCell { lines }
} else if config.model == model {

View File

@@ -128,6 +128,7 @@ pub async fn run_main(
base_instructions: None,
include_plan_tool: Some(true),
include_apply_patch_tool: None,
include_view_image_tool: None,
disable_response_storage: cli.oss.then_some(true),
show_raw_agent_reasoning: cli.oss.then_some(true),
tools_web_search_request: cli.web_search.then_some(true),

View File

@@ -52,6 +52,26 @@ impl SlashCommand {
pub fn command(self) -> &'static str {
self.into()
}
/// Whether this command can be run while a task is in progress.
pub fn available_during_task(self) -> bool {
match self {
SlashCommand::New
| SlashCommand::Init
| SlashCommand::Compact
| SlashCommand::Model
| SlashCommand::Approvals
| SlashCommand::Logout => false,
SlashCommand::Diff
| SlashCommand::Mention
| SlashCommand::Status
| SlashCommand::Mcp
| SlashCommand::Quit => true,
#[cfg(debug_assertions)]
SlashCommand::TestApproval => true,
}
}
}
/// Return all built-in commands in a Vec paired with their command string.

docs/prompts.md Normal file
View File

@@ -0,0 +1,15 @@
## Custom Prompts
Save frequently used prompts as Markdown files and reuse them quickly from the slash menu.
- Location: Put files in `$CODEX_HOME/prompts/` (defaults to `~/.codex/prompts/`).
- File type: Only Markdown files with the `.md` extension are recognized.
- Name: The filename without the `.md` extension becomes the slash entry. For a file named `my-prompt.md`, type `/my-prompt`.
- Content: The file contents are sent as your message when you select the item in the slash popup and press Enter.
- How to use:
- Start a new session (Codex loads custom prompts on session start).
- In the composer, type `/` to open the slash popup and begin typing your prompt name.
- Use Up/Down to select it. Press Enter to submit its contents, or Tab to autocomplete the name.
- Notes:
- Files with names that collide with built-in commands (e.g. `/init`) are ignored and won't appear.
- New or changed files are discovered on session start. If you add a new prompt while Codex is running, start a new session to pick it up.
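
Illustrative sketch (not part of this change, and not the codex-core implementation): the discovery rules above restated as a minimal Rust function, assuming a hypothetical `discover_prompts` helper and a CustomPrompt-like struct with the name/path/content fields used by the TUI tests.

use std::fs;
use std::path::Path;
use std::path::PathBuf;

// Assumed shape, mirroring the fields the TUI tests rely on.
struct CustomPrompt {
    name: String,
    path: PathBuf,
    content: String,
}

fn discover_prompts(dir: &Path) -> Vec<CustomPrompt> {
    let mut prompts = Vec::new();
    // A missing prompts directory simply means there are no custom prompts.
    let Ok(entries) = fs::read_dir(dir) else {
        return prompts;
    };
    for entry in entries.flatten() {
        let path = entry.path();
        // Only Markdown files with the `.md` extension are recognized.
        if path.extension().and_then(|e| e.to_str()) != Some("md") {
            continue;
        }
        // The filename without the extension becomes the slash entry name.
        let Some(stem) = path.file_stem().and_then(|s| s.to_str()) else {
            continue;
        };
        let name = stem.to_string();
        // The file contents are what gets submitted when the entry is chosen.
        if let Ok(content) = fs::read_to_string(&path) {
            prompts.push(CustomPrompt { name, path, content });
        }
    }
    // Collisions with built-in commands (e.g. `/init`) are filtered out later
    // by CommandPopup, which also sorts the remaining prompts by name.
    prompts
}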