Mirror of https://github.com/openai/codex.git (synced 2026-02-04 16:03:46 +00:00)
Compare commits: seratch-pa...oss (27 commits)
| SHA1 |
|---|
| 4e6bc85fc4 |
| 8bb57afc84 |
| a8324c5d94 |
| 46e35a2345 |
| 7bcdc5cc7c |
| 4b426f7e1e |
| fcb62a0fa5 |
| eb40fe3451 |
| b32c79e371 |
| e442ecedab |
| 3f8d6021ac |
| 7ac6194c22 |
| 619436c58f |
| 1cc6b97227 |
| 7eee69d821 |
| 65636802f7 |
| c988ce28fe |
| cb2f952143 |
| 7d734bff65 |
| 970e466ab3 |
| 5d2d3002ef |
| bb30996f7c |
| 3f8184034f |
| f7cb2f87a0 |
| 9dbe7284d2 |
| b8e8454b3f |
| bbcfd63aba |
.github/dotslash-config.json (vendored, 4 changes)
@@ -21,6 +21,10 @@
    "windows-x86_64": {
      "regex": "^codex-x86_64-pc-windows-msvc\\.exe\\.zst$",
      "path": "codex.exe"
    },
    "windows-aarch64": {
      "regex": "^codex-aarch64-pc-windows-msvc\\.exe\\.zst$",
      "path": "codex.exe"
    }
  }
}
.github/pull_request_template.md (vendored, 4 changes)
@@ -1,6 +1,6 @@
# External (non-OpenAI) Pull Request Requirements

Before opening this Pull Request, please read the "Contributing" section of the README or your PR may be closed:
https://github.com/openai/codex#contributing
Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md

If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes.
.github/workflows/rust-ci.yml (vendored, 13 changes)
@@ -100,15 +100,26 @@ jobs:
          - runner: windows-latest
            target: x86_64-pc-windows-msvc
            profile: dev
          - runner: windows-11-arm
            target: aarch64-pc-windows-msvc
            profile: dev

          # Also run representative release builds on Mac and Linux because
          # there could be release-only build errors we want to catch.
          # Hopefully this also pre-populates the build cache to speed up
          # releases.
          - runner: macos-14
            target: aarch64-apple-darwin
            profile: release
          - runner: ubuntu-24.04
            target: x86_64-unknown-linux-musl
            profile: release
          - runner: windows-latest
            target: x86_64-pc-windows-msvc
            profile: release
          - runner: windows-11-arm
            target: aarch64-pc-windows-msvc
            profile: release

    steps:
      - uses: actions/checkout@v5
@@ -134,7 +145,7 @@ jobs:

      - name: cargo clippy
        id: clippy
        run: cargo clippy --target ${{ matrix.target }} --all-features --tests -- -D warnings
        run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} -- -D warnings

      # Running `cargo build` from the workspace root builds the workspace using
      # the union of all features from third-party crates. This can mask errors
.github/workflows/rust-release.yml (vendored, 4 changes)
@@ -72,6 +72,8 @@ jobs:
            target: aarch64-unknown-linux-gnu
          - runner: windows-latest
            target: x86_64-pc-windows-msvc
          - runner: windows-11-arm
            target: aarch64-pc-windows-msvc

    steps:
      - uses: actions/checkout@v5
@@ -87,7 +89,7 @@ jobs:
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            ${{ github.workspace }}/codex-rs/target/
          key: cargo-release-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
          key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}

      - if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
        name: Install musl build tools
@@ -8,7 +8,7 @@ In the codex-rs folder where the rust code lives:
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
- Similarly, when you spawn a process using Seatbelt (`/usr/bin/sandbox-exec`), `CODEX_SANDBOX=seatbelt` will be set on the child process. Integration tests that want to run Seatbelt themselves cannot be run under Seatbelt, so checks for `CODEX_SANDBOX=seatbelt` are also often used to early exit out of tests, as appropriate.

Before finalizing a change to `codex-rs`, run `just fmt` (in `codex-rs` directory) to format the code and `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Additionally, run the tests:
Before finalizing a change to `codex-rs`, run `just fmt` (in `codex-rs` directory) to format the code and `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace‑wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.
When running interactively, ask the user before running these commands to finalize.
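For concreteness, a minimal sketch of the finalize sequence the excerpt above describes, assuming the change landed in `codex-rs/tui` (the `codex-tui` crate name is taken from the excerpt's own example):

```shell
cd codex-rs
just fmt                   # format the workspace
just fix -p codex-tui      # scoped lint fixes; drop -p only if shared crates changed
cargo test -p codex-tui    # tests for the crate that changed
cargo test --all-features  # only needed when common, core, or protocol changed
```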
README.md (10 changes)
@@ -14,10 +14,16 @@

### Installing and running Codex CLI

Install globally with your preferred package manager:
Install globally with your preferred package manager. If you use npm:

```shell
npm install -g @openai/codex # Alternatively: `brew install codex`
npm install -g @openai/codex
```

Alternatively, if you use Homebrew:

```shell
brew install codex
```

Then simply run `codex` to get started:
codex-rs/Cargo.lock (generated, 12 changes)
@@ -973,11 +973,13 @@ dependencies = [
 "diffy",
 "image",
 "insta",
 "itertools 0.14.0",
 "lazy_static",
 "libc",
 "mcp-types",
 "once_cell",
 "path-clean",
 "pathdiff",
 "pretty_assertions",
 "rand 0.9.2",
 "ratatui",
@@ -3377,6 +3379,12 @@ dependencies = [
 "once_cell",
]

[[package]]
name = "pathdiff"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df94ce210e5bc13cb6651479fa48d14f601d9858cfe0467f43ae157023b938d3"

[[package]]
name = "percent-encoding"
version = "2.3.1"
@@ -3907,9 +3915,9 @@ dependencies = [

[[package]]
name = "regex-lite"
version = "0.1.6"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53a49587ad06b26609c52e423de037e7f57f20d53535d66e08c695f347df952a"
checksum = "943f41321c63ef1c92fd763bfe054d2668f7f225a5c29f0105903dc2fc04ba30"

[[package]]
name = "regex-syntax"
@@ -116,7 +116,9 @@ pub enum ApplyPatchFileChange {
|
||||
Add {
|
||||
content: String,
|
||||
},
|
||||
Delete,
|
||||
Delete {
|
||||
content: String,
|
||||
},
|
||||
Update {
|
||||
unified_diff: String,
|
||||
move_path: Option<PathBuf>,
|
||||
@@ -210,7 +212,18 @@ pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApp
|
||||
changes.insert(path, ApplyPatchFileChange::Add { content: contents });
|
||||
}
|
||||
Hunk::DeleteFile { .. } => {
|
||||
changes.insert(path, ApplyPatchFileChange::Delete);
|
||||
let content = match std::fs::read_to_string(&path) {
|
||||
Ok(content) => content,
|
||||
Err(e) => {
|
||||
return MaybeApplyPatchVerified::CorrectnessError(
|
||||
ApplyPatchError::IoError(IoError {
|
||||
context: format!("Failed to read {}", path.display()),
|
||||
source: e,
|
||||
}),
|
||||
);
|
||||
}
|
||||
};
|
||||
changes.insert(path, ApplyPatchFileChange::Delete { content });
|
||||
}
|
||||
Hunk::UpdateFile {
|
||||
move_path, chunks, ..
|
||||
|
||||
@@ -31,7 +31,7 @@ mime_guess = "2.0"
|
||||
os_info = "3.12.0"
|
||||
portable-pty = "0.9.0"
|
||||
rand = "0.9"
|
||||
regex-lite = "0.1.6"
|
||||
regex-lite = "0.1.7"
|
||||
reqwest = { version = "0.12", features = ["json", "stream"] }
|
||||
serde = { version = "1", features = ["derive"] }
|
||||
serde_bytes = "0.11"
|
||||
|
||||
@@ -109,7 +109,9 @@ pub(crate) fn convert_apply_patch_to_protocol(
|
||||
ApplyPatchFileChange::Add { content } => FileChange::Add {
|
||||
content: content.clone(),
|
||||
},
|
||||
ApplyPatchFileChange::Delete => FileChange::Delete,
|
||||
ApplyPatchFileChange::Delete { content } => FileChange::Delete {
|
||||
content: content.clone(),
|
||||
},
|
||||
ApplyPatchFileChange::Update {
|
||||
unified_diff,
|
||||
move_path,
|
||||
|
||||
@@ -19,6 +19,7 @@ use crate::ModelProviderInfo;
|
||||
use crate::client_common::Prompt;
|
||||
use crate::client_common::ResponseEvent;
|
||||
use crate::client_common::ResponseStream;
|
||||
use crate::config::Config;
|
||||
use crate::error::CodexErr;
|
||||
use crate::error::Result;
|
||||
use crate::model_family::ModelFamily;
|
||||
@@ -34,6 +35,7 @@ pub(crate) async fn stream_chat_completions(
|
||||
model_family: &ModelFamily,
|
||||
client: &reqwest::Client,
|
||||
provider: &ModelProviderInfo,
|
||||
config: &Config,
|
||||
) -> Result<ResponseStream> {
|
||||
// Build messages array
|
||||
let mut messages = Vec::<serde_json::Value>::new();
|
||||
@@ -129,8 +131,26 @@ pub(crate) async fn stream_chat_completions(
|
||||
"content": output,
|
||||
}));
|
||||
}
|
||||
ResponseItem::Reasoning { .. } | ResponseItem::Other => {
|
||||
// Omit these items from the conversation history.
|
||||
ResponseItem::Reasoning {
|
||||
id: _,
|
||||
summary,
|
||||
content,
|
||||
encrypted_content: _,
|
||||
} => {
|
||||
if !config.skip_reasoning_in_chat_completions {
|
||||
// There is no clear way of sending reasoning items over chat completions.
|
||||
// We are sending it as an assistant message.
|
||||
tracing::info!("reasoning item: {:?}", item);
|
||||
let reasoning =
|
||||
format!("Reasoning Summary: {summary:?}, Reasoning Content: {content:?}");
|
||||
messages.push(json!({
|
||||
"role": "assistant",
|
||||
"content": reasoning,
|
||||
}));
|
||||
}
|
||||
}
|
||||
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {
|
||||
tracing::info!("omitting item from chat completions: {:?}", item);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
@@ -348,6 +368,8 @@ async fn process_chat_sse<S>(
|
||||
}
|
||||
|
||||
if let Some(reasoning) = maybe_text {
|
||||
// Accumulate so we can emit a terminal Reasoning item at end-of-turn.
|
||||
reasoning_text.push_str(&reasoning);
|
||||
let _ = tx_event
|
||||
.send(Ok(ResponseEvent::ReasoningContentDelta(reasoning)))
|
||||
.await;
|
||||
@@ -623,11 +645,8 @@ where
|
||||
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded))) => {
|
||||
continue;
|
||||
}
|
||||
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { .. }))) => {
|
||||
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin {
|
||||
call_id: String::new(),
|
||||
query: None,
|
||||
})));
|
||||
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id }))) => {
|
||||
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id })));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -110,6 +110,7 @@ impl ModelClient {
|
||||
&self.config.model_family,
|
||||
&self.client,
|
||||
&self.provider,
|
||||
&self.config,
|
||||
)
|
||||
.await?;
|
||||
|
||||
@@ -160,21 +161,7 @@ impl ModelClient {
|
||||
let store = prompt.store && auth_mode != Some(AuthMode::ChatGPT);
|
||||
|
||||
let full_instructions = prompt.get_full_instructions(&self.config.model_family);
|
||||
let mut tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
|
||||
// ChatGPT backend expects the preview name for web search.
|
||||
if auth_mode == Some(AuthMode::ChatGPT) {
|
||||
for tool in &mut tools_json {
|
||||
if let Some(map) = tool.as_object_mut()
|
||||
&& map.get("type").and_then(|v| v.as_str()) == Some("web_search")
|
||||
{
|
||||
map.insert(
|
||||
"type".to_string(),
|
||||
serde_json::Value::String("web_search_preview".to_string()),
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
|
||||
let reasoning = create_reasoning_param_for_request(
|
||||
&self.config.model_family,
|
||||
self.effort,
|
||||
@@ -607,11 +594,9 @@ async fn process_sse<S>(
|
||||
| "response.custom_tool_call_input.delta"
|
||||
| "response.custom_tool_call_input.done" // also emitted as response.output_item.done
|
||||
| "response.in_progress"
|
||||
| "response.output_item.added"
|
||||
| "response.output_text.done" => {
|
||||
if event.kind == "response.output_item.added"
|
||||
&& let Some(item) = event.item.as_ref()
|
||||
{
|
||||
| "response.output_text.done" => {}
|
||||
"response.output_item.added" => {
|
||||
if let Some(item) = event.item.as_ref() {
|
||||
// Detect web_search_call begin and forward a synthetic event upstream.
|
||||
if let Some(ty) = item.get("type").and_then(|v| v.as_str())
|
||||
&& ty == "web_search_call"
|
||||
@@ -621,7 +606,7 @@ async fn process_sse<S>(
|
||||
.and_then(|v| v.as_str())
|
||||
.unwrap_or("")
|
||||
.to_string();
|
||||
let ev = ResponseEvent::WebSearchCallBegin { call_id, query: None };
|
||||
let ev = ResponseEvent::WebSearchCallBegin { call_id };
|
||||
if tx_event.send(Ok(ev)).await.is_err() {
|
||||
return;
|
||||
}
|
||||
|
||||
@@ -95,7 +95,6 @@ pub enum ResponseEvent {
|
||||
ReasoningSummaryPartAdded,
|
||||
WebSearchCallBegin {
|
||||
call_id: String,
|
||||
query: Option<String>,
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
@@ -89,6 +89,7 @@ use crate::protocol::ExecCommandBeginEvent;
|
||||
use crate::protocol::ExecCommandEndEvent;
|
||||
use crate::protocol::FileChange;
|
||||
use crate::protocol::InputItem;
|
||||
use crate::protocol::ListCustomPromptsResponseEvent;
|
||||
use crate::protocol::Op;
|
||||
use crate::protocol::PatchApplyBeginEvent;
|
||||
use crate::protocol::PatchApplyEndEvent;
|
||||
@@ -100,6 +101,7 @@ use crate::protocol::Submission;
|
||||
use crate::protocol::TaskCompleteEvent;
|
||||
use crate::protocol::TurnDiffEvent;
|
||||
use crate::protocol::WebSearchBeginEvent;
|
||||
use crate::protocol::WebSearchEndEvent;
|
||||
use crate::rollout::RolloutRecorder;
|
||||
use crate::safety::SafetyCheck;
|
||||
use crate::safety::assess_command_safety;
|
||||
@@ -110,6 +112,7 @@ use crate::user_notification::UserNotification;
|
||||
use crate::util::backoff;
|
||||
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
|
||||
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
|
||||
use codex_protocol::custom_prompts::CustomPrompt;
|
||||
use codex_protocol::models::ContentItem;
|
||||
use codex_protocol::models::FunctionCallOutputPayload;
|
||||
use codex_protocol::models::LocalShellAction;
|
||||
@@ -118,6 +121,7 @@ use codex_protocol::models::ReasoningItemReasoningSummary;
|
||||
use codex_protocol::models::ResponseInputItem;
|
||||
use codex_protocol::models::ResponseItem;
|
||||
use codex_protocol::models::ShellToolCallParams;
|
||||
use codex_protocol::models::WebSearchAction;
|
||||
|
||||
// A convenience extension trait for acquiring mutex locks where poisoning is
|
||||
// unrecoverable and should abort the program. This avoids scattered `.unwrap()`
|
||||
@@ -653,9 +657,17 @@ impl Session {
|
||||
}
|
||||
|
||||
pub fn notify_approval(&self, sub_id: &str, decision: ReviewDecision) {
|
||||
let mut state = self.state.lock_unchecked();
|
||||
if let Some(tx_approve) = state.pending_approvals.remove(sub_id) {
|
||||
tx_approve.send(decision).ok();
|
||||
let entry = {
|
||||
let mut state = self.state.lock_unchecked();
|
||||
state.pending_approvals.remove(sub_id)
|
||||
};
|
||||
match entry {
|
||||
Some(tx_approve) => {
|
||||
tx_approve.send(decision).ok();
|
||||
}
|
||||
None => {
|
||||
warn!("No pending approval found for sub_id: {sub_id}");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1286,6 +1298,27 @@ async fn submission_loop(
|
||||
warn!("failed to send McpListToolsResponse event: {e}");
|
||||
}
|
||||
}
|
||||
Op::ListCustomPrompts => {
|
||||
let tx_event = sess.tx_event.clone();
|
||||
let sub_id = sub.id.clone();
|
||||
|
||||
let custom_prompts: Vec<CustomPrompt> =
|
||||
if let Some(dir) = crate::custom_prompts::default_prompts_dir() {
|
||||
crate::custom_prompts::discover_prompts_in(&dir).await
|
||||
} else {
|
||||
Vec::new()
|
||||
};
|
||||
|
||||
let event = Event {
|
||||
id: sub_id,
|
||||
msg: EventMsg::ListCustomPromptsResponse(ListCustomPromptsResponseEvent {
|
||||
custom_prompts,
|
||||
}),
|
||||
};
|
||||
if let Err(e) = tx_event.send(event).await {
|
||||
warn!("failed to send ListCustomPromptsResponse event: {e}");
|
||||
}
|
||||
}
|
||||
Op::Compact => {
|
||||
// Create a summarization request as user input
|
||||
const SUMMARIZATION_PROMPT: &str = include_str!("prompt_for_compact_command.md");
|
||||
@@ -1746,13 +1779,12 @@ async fn try_run_turn(
|
||||
.await?;
|
||||
output.push(ProcessedResponseItem { item, response });
|
||||
}
|
||||
ResponseEvent::WebSearchCallBegin { call_id, query } => {
|
||||
let q = query.unwrap_or_else(|| "Searching Web...".to_string());
|
||||
ResponseEvent::WebSearchCallBegin { call_id } => {
|
||||
let _ = sess
|
||||
.tx_event
|
||||
.send(Event {
|
||||
id: sub_id.to_string(),
|
||||
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id, query: q }),
|
||||
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id }),
|
||||
})
|
||||
.await;
|
||||
}
|
||||
@@ -2051,6 +2083,17 @@ async fn handle_response_item(
|
||||
debug!("unexpected CustomToolCallOutput from stream");
|
||||
None
|
||||
}
|
||||
ResponseItem::WebSearchCall { id, action, .. } => {
|
||||
if let WebSearchAction::Search { query } = action {
|
||||
let call_id = id.unwrap_or_else(|| "".to_string());
|
||||
let event = Event {
|
||||
id: sub_id.to_string(),
|
||||
msg: EventMsg::WebSearchEnd(WebSearchEndEvent { call_id, query }),
|
||||
};
|
||||
sess.tx_event.send(event).await.ok();
|
||||
}
|
||||
None
|
||||
}
|
||||
ResponseItem::Other => None,
|
||||
};
|
||||
Ok(output)
|
||||
|
||||
@@ -185,6 +185,10 @@ pub struct Config {
|
||||
/// All characters are inserted as they are received, and no buffering
|
||||
/// or placeholder replacement will occur for fast keypress bursts.
|
||||
pub disable_paste_burst: bool,
|
||||
|
||||
/// When `true`, reasoning items in Chat Completions input will be skipped.
|
||||
/// Defaults to `false`.
|
||||
pub skip_reasoning_in_chat_completions: bool,
|
||||
}
|
||||
|
||||
impl Config {
|
||||
@@ -497,6 +501,10 @@ pub struct ConfigToml {
|
||||
/// All characters are inserted as they are received, and no buffering
|
||||
/// or placeholder replacement will occur for fast keypress bursts.
|
||||
pub disable_paste_burst: Option<bool>,
|
||||
|
||||
/// When set to `true`, reasoning items will be skipped from Chat Completions input.
|
||||
/// Defaults to `false`.
|
||||
pub skip_reasoning_in_chat_completions: Option<bool>,
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
|
||||
@@ -506,7 +514,6 @@ pub struct ProjectConfig {
|
||||
|
||||
#[derive(Deserialize, Debug, Clone, Default)]
|
||||
pub struct ToolsToml {
|
||||
// Renamed from `web_search_request`; keep alias for backwards compatibility.
|
||||
#[serde(default, alias = "web_search_request")]
|
||||
pub web_search: Option<bool>,
|
||||
|
||||
@@ -672,7 +679,7 @@ impl Config {
|
||||
})?
|
||||
.clone();
|
||||
|
||||
let shell_environment_policy = cfg.shell_environment_policy.clone().into();
|
||||
let shell_environment_policy = cfg.shell_environment_policy.into();
|
||||
|
||||
let resolved_cwd = {
|
||||
use std::env;
|
||||
@@ -693,7 +700,7 @@ impl Config {
|
||||
}
|
||||
};
|
||||
|
||||
let history = cfg.history.clone().unwrap_or_default();
|
||||
let history = cfg.history.unwrap_or_default();
|
||||
|
||||
let tools_web_search_request = override_tools_web_search_request
|
||||
.or(cfg.tools.as_ref().and_then(|t| t.web_search))
|
||||
@@ -775,7 +782,7 @@ impl Config {
|
||||
codex_home,
|
||||
history,
|
||||
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
|
||||
tui: cfg.tui.clone().unwrap_or_default(),
|
||||
tui: cfg.tui.unwrap_or_default(),
|
||||
codex_linux_sandbox_exe,
|
||||
|
||||
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
|
||||
@@ -794,7 +801,7 @@ impl Config {
|
||||
model_verbosity: config_profile.model_verbosity.or(cfg.model_verbosity),
|
||||
chatgpt_base_url: config_profile
|
||||
.chatgpt_base_url
|
||||
.or(cfg.chatgpt_base_url.clone())
|
||||
.or(cfg.chatgpt_base_url)
|
||||
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
|
||||
|
||||
experimental_resume,
|
||||
@@ -808,6 +815,9 @@ impl Config {
|
||||
.unwrap_or(false),
|
||||
include_view_image_tool,
|
||||
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
|
||||
skip_reasoning_in_chat_completions: cfg
|
||||
.skip_reasoning_in_chat_completions
|
||||
.unwrap_or(false),
|
||||
};
|
||||
Ok(config)
|
||||
}
|
||||
@@ -1178,6 +1188,7 @@ disable_response_storage = true
|
||||
use_experimental_streamable_shell_tool: false,
|
||||
include_view_image_tool: true,
|
||||
disable_paste_burst: false,
|
||||
skip_reasoning_in_chat_completions: false,
|
||||
},
|
||||
o3_profile_config
|
||||
);
|
||||
@@ -1236,6 +1247,7 @@ disable_response_storage = true
|
||||
use_experimental_streamable_shell_tool: false,
|
||||
include_view_image_tool: true,
|
||||
disable_paste_burst: false,
|
||||
skip_reasoning_in_chat_completions: false,
|
||||
};
|
||||
|
||||
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
|
||||
@@ -1309,6 +1321,7 @@ disable_response_storage = true
|
||||
use_experimental_streamable_shell_tool: false,
|
||||
include_view_image_tool: true,
|
||||
disable_paste_burst: false,
|
||||
skip_reasoning_in_chat_completions: false,
|
||||
};
|
||||
|
||||
assert_eq!(expected_zdr_profile_config, zdr_profile_config);
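Since the new `skip_reasoning_in_chat_completions` field is deserialized from the user's TOML config (`ConfigToml`) and defaults to `false`, opting in could look roughly like the sketch below; treating `~/.codex/config.toml` as the default `CODEX_HOME` location is an assumption here.

```shell
# Append the new flag to the Codex config (assumes the default CODEX_HOME of ~/.codex)
cat >> ~/.codex/config.toml <<'EOF'
# Skip reasoning items when building Chat Completions input
skip_reasoning_in_chat_completions = true
EOF
```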
|
||||
|
||||
@@ -72,7 +72,7 @@ fn is_api_message(message: &ResponseItem) -> bool {
|
||||
| ResponseItem::CustomToolCallOutput { .. }
|
||||
| ResponseItem::LocalShellCall { .. }
|
||||
| ResponseItem::Reasoning { .. } => true,
|
||||
ResponseItem::Other => false,
|
||||
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => false,
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
127
codex-rs/core/src/custom_prompts.rs
Normal file
127
codex-rs/core/src/custom_prompts.rs
Normal file
@@ -0,0 +1,127 @@
|
||||
use codex_protocol::custom_prompts::CustomPrompt;
|
||||
use std::collections::HashSet;
|
||||
use std::path::Path;
|
||||
use std::path::PathBuf;
|
||||
use tokio::fs;
|
||||
|
||||
/// Return the default prompts directory: `$CODEX_HOME/prompts`.
|
||||
/// If `CODEX_HOME` cannot be resolved, returns `None`.
|
||||
pub fn default_prompts_dir() -> Option<PathBuf> {
|
||||
crate::config::find_codex_home()
|
||||
.ok()
|
||||
.map(|home| home.join("prompts"))
|
||||
}
|
||||
|
||||
/// Discover prompt files in the given directory, returning entries sorted by name.
|
||||
/// Non-files are ignored. If the directory does not exist or cannot be read, returns empty.
|
||||
pub async fn discover_prompts_in(dir: &Path) -> Vec<CustomPrompt> {
|
||||
discover_prompts_in_excluding(dir, &HashSet::new()).await
|
||||
}
|
||||
|
||||
/// Discover prompt files in the given directory, excluding any with names in `exclude`.
|
||||
/// Returns entries sorted by name. Non-files are ignored. Missing/unreadable dir yields empty.
|
||||
pub async fn discover_prompts_in_excluding(
|
||||
dir: &Path,
|
||||
exclude: &HashSet<String>,
|
||||
) -> Vec<CustomPrompt> {
|
||||
let mut out: Vec<CustomPrompt> = Vec::new();
|
||||
let mut entries = match fs::read_dir(dir).await {
|
||||
Ok(entries) => entries,
|
||||
Err(_) => return out,
|
||||
};
|
||||
|
||||
while let Ok(Some(entry)) = entries.next_entry().await {
|
||||
let path = entry.path();
|
||||
let is_file = entry
|
||||
.file_type()
|
||||
.await
|
||||
.map(|ft| ft.is_file())
|
||||
.unwrap_or(false);
|
||||
if !is_file {
|
||||
continue;
|
||||
}
|
||||
// Only include Markdown files with a .md extension.
|
||||
let is_md = path
|
||||
.extension()
|
||||
.and_then(|s| s.to_str())
|
||||
.map(|ext| ext.eq_ignore_ascii_case("md"))
|
||||
.unwrap_or(false);
|
||||
if !is_md {
|
||||
continue;
|
||||
}
|
||||
let Some(name) = path
|
||||
.file_stem()
|
||||
.and_then(|s| s.to_str())
|
||||
.map(|s| s.to_string())
|
||||
else {
|
||||
continue;
|
||||
};
|
||||
if exclude.contains(&name) {
|
||||
continue;
|
||||
}
|
||||
let content = match fs::read_to_string(&path).await {
|
||||
Ok(s) => s,
|
||||
Err(_) => continue,
|
||||
};
|
||||
out.push(CustomPrompt {
|
||||
name,
|
||||
path,
|
||||
content,
|
||||
});
|
||||
}
|
||||
out.sort_by(|a, b| a.name.cmp(&b.name));
|
||||
out
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use std::fs;
|
||||
use tempfile::tempdir;
|
||||
|
||||
#[tokio::test]
|
||||
async fn empty_when_dir_missing() {
|
||||
let tmp = tempdir().expect("create TempDir");
|
||||
let missing = tmp.path().join("nope");
|
||||
let found = discover_prompts_in(&missing).await;
|
||||
assert!(found.is_empty());
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn discovers_and_sorts_files() {
|
||||
let tmp = tempdir().expect("create TempDir");
|
||||
let dir = tmp.path();
|
||||
fs::write(dir.join("b.md"), b"b").unwrap();
|
||||
fs::write(dir.join("a.md"), b"a").unwrap();
|
||||
fs::create_dir(dir.join("subdir")).unwrap();
|
||||
let found = discover_prompts_in(dir).await;
|
||||
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
|
||||
assert_eq!(names, vec!["a", "b"]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn excludes_builtins() {
|
||||
let tmp = tempdir().expect("create TempDir");
|
||||
let dir = tmp.path();
|
||||
fs::write(dir.join("init.md"), b"ignored").unwrap();
|
||||
fs::write(dir.join("foo.md"), b"ok").unwrap();
|
||||
let mut exclude = HashSet::new();
|
||||
exclude.insert("init".to_string());
|
||||
let found = discover_prompts_in_excluding(dir, &exclude).await;
|
||||
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
|
||||
assert_eq!(names, vec!["foo"]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn skips_non_utf8_files() {
|
||||
let tmp = tempdir().expect("create TempDir");
|
||||
let dir = tmp.path();
|
||||
// Valid UTF-8 file
|
||||
fs::write(dir.join("good.md"), b"hello").unwrap();
|
||||
// Invalid UTF-8 content in .md file (e.g., lone 0xFF byte)
|
||||
fs::write(dir.join("bad.md"), vec![0xFF, 0xFE, b'\n']).unwrap();
|
||||
let found = discover_prompts_in(dir).await;
|
||||
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
|
||||
assert_eq!(names, vec!["good"]);
|
||||
}
|
||||
}
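As a rough illustration of what `discover_prompts_in` would pick up, a user could drop a UTF-8 Markdown file into `$CODEX_HOME/prompts`; the `~/.codex` fallback and the `review` prompt name below are assumptions for the example, with the prompt name coming from the file stem.

```shell
# Hypothetical custom prompt; only UTF-8 .md files in $CODEX_HOME/prompts are discovered
mkdir -p "${CODEX_HOME:-$HOME/.codex}/prompts"
cat > "${CODEX_HOME:-$HOME/.codex}/prompts/review.md" <<'EOF'
Review the staged changes and point out bugs, missing tests, and risky edge cases.
EOF
```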
|
||||
@@ -10,7 +10,35 @@ use tokio::process::Command;
|
||||
use tokio::time::Duration as TokioDuration;
|
||||
use tokio::time::timeout;
|
||||
|
||||
use crate::util::is_inside_git_repo;
|
||||
/// Return `true` if the project folder specified by the `Config` is inside a
|
||||
/// Git repository.
|
||||
///
|
||||
/// The check walks up the directory hierarchy looking for a `.git` file or
|
||||
/// directory (note `.git` can be a file that contains a `gitdir` entry). This
|
||||
/// approach does **not** require the `git` binary or the `git2` crate and is
|
||||
/// therefore fairly lightweight.
|
||||
///
|
||||
/// Note that this does **not** detect *work‑trees* created with
|
||||
/// `git worktree add` where the checkout lives outside the main repository
|
||||
/// directory. If you need Codex to work from such a checkout simply pass the
|
||||
/// `--allow-no-git-exec` CLI flag that disables the repo requirement.
|
||||
pub fn get_git_repo_root(base_dir: &Path) -> Option<PathBuf> {
|
||||
let mut dir = base_dir.to_path_buf();
|
||||
|
||||
loop {
|
||||
if dir.join(".git").exists() {
|
||||
return Some(dir);
|
||||
}
|
||||
|
||||
// Pop one component (go up one directory). `pop` returns false when
|
||||
// we have reached the filesystem root.
|
||||
if !dir.pop() {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
None
|
||||
}
|
||||
|
||||
/// Timeout for git commands to prevent freezing on large repositories
|
||||
const GIT_COMMAND_TIMEOUT: TokioDuration = TokioDuration::from_secs(5);
|
||||
@@ -94,9 +122,7 @@ pub async fn collect_git_info(cwd: &Path) -> Option<GitInfo> {
|
||||
|
||||
/// Returns the closest git sha to HEAD that is on a remote as well as the diff to that sha.
|
||||
pub async fn git_diff_to_remote(cwd: &Path) -> Option<GitDiffToRemote> {
|
||||
if !is_inside_git_repo(cwd) {
|
||||
return None;
|
||||
}
|
||||
get_git_repo_root(cwd)?;
|
||||
|
||||
let remotes = get_git_remotes(cwd).await?;
|
||||
let branches = branch_ancestry(cwd).await?;
|
||||
@@ -440,7 +466,7 @@ async fn diff_against_sha(cwd: &Path, sha: &GitSha) -> Option<String> {
|
||||
}
|
||||
|
||||
/// Resolve the path that should be used for trust checks. Similar to
|
||||
/// `[utils::is_inside_git_repo]`, but resolves to the root of the main
|
||||
/// `[get_git_repo_root]`, but resolves to the root of the main
|
||||
/// repository. Handles worktrees.
|
||||
pub fn resolve_root_git_project_for_trust(cwd: &Path) -> Option<PathBuf> {
|
||||
let base = if cwd.is_dir() { cwd } else { cwd.parent()? };
|
||||
|
||||
@@ -17,6 +17,7 @@ pub mod config;
|
||||
pub mod config_profile;
|
||||
pub mod config_types;
|
||||
mod conversation_history;
|
||||
pub mod custom_prompts;
|
||||
mod environment_context;
|
||||
pub mod error;
|
||||
pub mod exec;
|
||||
|
||||
@@ -47,7 +47,9 @@ pub(crate) enum OpenAiTool {
|
||||
Function(ResponsesApiTool),
|
||||
#[serde(rename = "local_shell")]
|
||||
LocalShell {},
|
||||
#[serde(rename = "web_search")]
|
||||
// TODO: Understand why we get an error on web_search although the API docs say it's supported.
|
||||
// https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#:~:text=%7B%20type%3A%20%22web_search%22%20%7D%2C
|
||||
#[serde(rename = "web_search_preview")]
|
||||
WebSearch {},
|
||||
#[serde(rename = "custom")]
|
||||
Freeform(FreeformTool),
|
||||
@@ -335,12 +337,12 @@ pub fn create_tools_json_for_responses_api(
|
||||
let mut tools_json = Vec::new();
|
||||
|
||||
for tool in tools {
|
||||
tools_json.push(serde_json::to_value(tool)?);
|
||||
let json = serde_json::to_value(tool)?;
|
||||
tools_json.push(json);
|
||||
}
|
||||
|
||||
Ok(tools_json)
|
||||
}
|
||||
|
||||
/// Returns JSON values that are compatible with Function Calling in the
|
||||
/// Chat Completions API:
|
||||
/// https://platform.openai.com/docs/guides/function-calling?api-mode=chat
|
||||
@@ -702,8 +704,8 @@ mod tests {
|
||||
"number_property": { "type": "number" },
|
||||
},
|
||||
"required": [
|
||||
"string_property".to_string(),
|
||||
"number_property".to_string()
|
||||
"string_property",
|
||||
"number_property",
|
||||
],
|
||||
"additionalProperties": Some(false),
|
||||
},
|
||||
|
||||
@@ -20,22 +20,6 @@ pub enum ParsedCommand {
|
||||
query: Option<String>,
|
||||
path: Option<String>,
|
||||
},
|
||||
Format {
|
||||
cmd: String,
|
||||
tool: Option<String>,
|
||||
targets: Option<Vec<String>>,
|
||||
},
|
||||
Test {
|
||||
cmd: String,
|
||||
},
|
||||
Lint {
|
||||
cmd: String,
|
||||
tool: Option<String>,
|
||||
targets: Option<Vec<String>>,
|
||||
},
|
||||
Noop {
|
||||
cmd: String,
|
||||
},
|
||||
Unknown {
|
||||
cmd: String,
|
||||
},
|
||||
@@ -50,10 +34,6 @@ impl From<ParsedCommand> for codex_protocol::parse_command::ParsedCommand {
|
||||
ParsedCommand::Read { cmd, name } => P::Read { cmd, name },
|
||||
ParsedCommand::ListFiles { cmd, path } => P::ListFiles { cmd, path },
|
||||
ParsedCommand::Search { cmd, query, path } => P::Search { cmd, query, path },
|
||||
ParsedCommand::Format { cmd, tool, targets } => P::Format { cmd, tool, targets },
|
||||
ParsedCommand::Test { cmd } => P::Test { cmd },
|
||||
ParsedCommand::Lint { cmd, tool, targets } => P::Lint { cmd, tool, targets },
|
||||
ParsedCommand::Noop { cmd } => P::Noop { cmd },
|
||||
ParsedCommand::Unknown { cmd } => P::Unknown { cmd },
|
||||
}
|
||||
}
|
||||
@@ -122,7 +102,7 @@ mod tests {
|
||||
assert_parsed(
|
||||
&vec_str(&["bash", "-lc", inner]),
|
||||
vec![ParsedCommand::Unknown {
|
||||
cmd: "git status | wc -l".to_string(),
|
||||
cmd: "git status".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
@@ -244,6 +224,17 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn cd_then_cat_is_single_read() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("cd foo && cat foo.txt"),
|
||||
vec![ParsedCommand::Read {
|
||||
cmd: "cat foo.txt".to_string(),
|
||||
name: "foo.txt".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn supports_ls_with_pipe() {
|
||||
let inner = "ls -la | sed -n '1,120p'";
|
||||
@@ -315,27 +306,6 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn supports_npm_run_with_forwarded_args() {
|
||||
assert_parsed(
|
||||
&vec_str(&[
|
||||
"npm",
|
||||
"run",
|
||||
"lint",
|
||||
"--",
|
||||
"--max-warnings",
|
||||
"0",
|
||||
"--format",
|
||||
"json",
|
||||
]),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "npm run lint -- --max-warnings 0 --format json".to_string(),
|
||||
tool: Some("npm-script:lint".to_string()),
|
||||
targets: None,
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn supports_grep_recursive_current_dir() {
|
||||
assert_parsed(
|
||||
@@ -396,173 +366,10 @@ mod tests {
|
||||
fn supports_cd_and_rg_files() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("cd codex-rs && rg --files"),
|
||||
vec![
|
||||
ParsedCommand::Unknown {
|
||||
cmd: "cd codex-rs".to_string(),
|
||||
},
|
||||
ParsedCommand::Search {
|
||||
cmd: "rg --files".to_string(),
|
||||
query: None,
|
||||
path: None,
|
||||
},
|
||||
],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn echo_then_cargo_test_sequence() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("echo Running tests... && cargo test --all-features --quiet"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "cargo test --all-features --quiet".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn supports_cargo_fmt_and_test_with_config() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe(
|
||||
"cargo fmt -- --config imports_granularity=Item && cargo test -p core --all-features",
|
||||
),
|
||||
vec![
|
||||
ParsedCommand::Format {
|
||||
cmd: "cargo fmt -- --config 'imports_granularity=Item'".to_string(),
|
||||
tool: Some("cargo fmt".to_string()),
|
||||
targets: None,
|
||||
},
|
||||
ParsedCommand::Test {
|
||||
cmd: "cargo test -p core --all-features".to_string(),
|
||||
},
|
||||
],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn recognizes_rustfmt_and_clippy() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("rustfmt src/main.rs"),
|
||||
vec![ParsedCommand::Format {
|
||||
cmd: "rustfmt src/main.rs".to_string(),
|
||||
tool: Some("rustfmt".to_string()),
|
||||
targets: Some(vec!["src/main.rs".to_string()]),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("cargo clippy -p core --all-features -- -D warnings"),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "cargo clippy -p core --all-features -- -D warnings".to_string(),
|
||||
tool: Some("cargo clippy".to_string()),
|
||||
targets: None,
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn recognizes_pytest_go_and_tools() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe(
|
||||
"pytest -k 'Login and not slow' tests/test_login.py::TestLogin::test_ok",
|
||||
),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "pytest -k 'Login and not slow' tests/test_login.py::TestLogin::test_ok"
|
||||
.to_string(),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("go fmt ./..."),
|
||||
vec![ParsedCommand::Format {
|
||||
cmd: "go fmt ./...".to_string(),
|
||||
tool: Some("go fmt".to_string()),
|
||||
targets: Some(vec!["./...".to_string()]),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("go test ./pkg -run TestThing"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "go test ./pkg -run TestThing".to_string(),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("eslint . --max-warnings 0"),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "eslint . --max-warnings 0".to_string(),
|
||||
tool: Some("eslint".to_string()),
|
||||
targets: Some(vec![".".to_string()]),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("prettier -w ."),
|
||||
vec![ParsedCommand::Format {
|
||||
cmd: "prettier -w .".to_string(),
|
||||
tool: Some("prettier".to_string()),
|
||||
targets: Some(vec![".".to_string()]),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn recognizes_jest_and_vitest_filters() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("jest -t 'should work' src/foo.test.ts"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "jest -t 'should work' src/foo.test.ts".to_string(),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("vitest -t 'runs' src/foo.test.tsx"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "vitest -t runs src/foo.test.tsx".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn recognizes_npx_and_scripts() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("npx eslint src"),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "npx eslint src".to_string(),
|
||||
tool: Some("eslint".to_string()),
|
||||
targets: Some(vec!["src".to_string()]),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("npx prettier -c ."),
|
||||
vec![ParsedCommand::Format {
|
||||
cmd: "npx prettier -c .".to_string(),
|
||||
tool: Some("prettier".to_string()),
|
||||
targets: Some(vec![".".to_string()]),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("pnpm run lint -- --max-warnings 0"),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "pnpm run lint -- --max-warnings 0".to_string(),
|
||||
tool: Some("pnpm-script:lint".to_string()),
|
||||
targets: None,
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("npm test"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "npm test".to_string(),
|
||||
}],
|
||||
);
|
||||
|
||||
assert_parsed(
|
||||
&shlex_split_safe("yarn test"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "yarn test".to_string(),
|
||||
vec![ParsedCommand::Search {
|
||||
cmd: "rg --files".to_string(),
|
||||
query: None,
|
||||
path: None,
|
||||
}],
|
||||
);
|
||||
}
|
||||
@@ -770,6 +577,51 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn parses_mixed_sequence_with_pipes_semicolons_and_or() {
|
||||
// Provided long command sequence combining sequencing, pipelines, and ORs.
|
||||
let inner = "pwd; ls -la; rg --files -g '!target' | wc -l; rg -n '^\\[workspace\\]' -n Cargo.toml || true; rg -n '^\\[package\\]' -n */Cargo.toml || true; cargo --version; rustc --version; cargo clippy --workspace --all-targets --all-features -q";
|
||||
let args = vec_str(&["bash", "-lc", inner]);
|
||||
|
||||
let expected = vec![
|
||||
ParsedCommand::Unknown {
|
||||
cmd: "pwd".to_string(),
|
||||
},
|
||||
ParsedCommand::ListFiles {
|
||||
cmd: shlex_join(&shlex_split_safe("ls -la")),
|
||||
path: None,
|
||||
},
|
||||
ParsedCommand::Search {
|
||||
cmd: shlex_join(&shlex_split_safe("rg --files -g '!target'")),
|
||||
query: None,
|
||||
path: Some("!target".to_string()),
|
||||
},
|
||||
ParsedCommand::Search {
|
||||
cmd: shlex_join(&shlex_split_safe("rg -n '^\\[workspace\\]' -n Cargo.toml")),
|
||||
query: Some("^\\[workspace\\]".to_string()),
|
||||
path: Some("Cargo.toml".to_string()),
|
||||
},
|
||||
ParsedCommand::Search {
|
||||
cmd: shlex_join(&shlex_split_safe("rg -n '^\\[package\\]' -n */Cargo.toml")),
|
||||
query: Some("^\\[package\\]".to_string()),
|
||||
path: Some("Cargo.toml".to_string()),
|
||||
},
|
||||
ParsedCommand::Unknown {
|
||||
cmd: shlex_join(&shlex_split_safe("cargo --version")),
|
||||
},
|
||||
ParsedCommand::Unknown {
|
||||
cmd: shlex_join(&shlex_split_safe("rustc --version")),
|
||||
},
|
||||
ParsedCommand::Unknown {
|
||||
cmd: shlex_join(&shlex_split_safe(
|
||||
"cargo clippy --workspace --all-targets --all-features -q",
|
||||
)),
|
||||
},
|
||||
];
|
||||
|
||||
assert_parsed(&args, expected);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn strips_true_in_sequence() {
|
||||
// `true` should be dropped from parsed sequences
|
||||
@@ -867,159 +719,6 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn pnpm_test_is_parsed_as_test() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("pnpm test"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "pnpm test".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn pnpm_exec_vitest_is_unknown() {
|
||||
// From commands_combined: cd codex-cli && pnpm exec vitest run tests/... --threads=false --passWithNoTests
|
||||
let inner = "cd codex-cli && pnpm exec vitest run tests/file-tag-utils.test.ts --threads=false --passWithNoTests";
|
||||
assert_parsed(
|
||||
&shlex_split_safe(inner),
|
||||
vec![
|
||||
ParsedCommand::Unknown {
|
||||
cmd: "cd codex-cli".to_string(),
|
||||
},
|
||||
ParsedCommand::Unknown {
|
||||
cmd: "pnpm exec vitest run tests/file-tag-utils.test.ts '--threads=false' --passWithNoTests".to_string(),
|
||||
},
|
||||
],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn cargo_test_with_crate() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("cargo test -p codex-core parse_command::"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "cargo test -p codex-core parse_command::".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn cargo_test_with_crate_2() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe(
|
||||
"cd core && cargo test -q parse_command::tests::bash_dash_c_pipeline_parsing parse_command::tests::fd_file_finder_variants",
|
||||
),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "cargo test -q parse_command::tests::bash_dash_c_pipeline_parsing parse_command::tests::fd_file_finder_variants".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn cargo_test_with_crate_3() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("cd core && cargo test -q parse_command::tests"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "cargo test -q parse_command::tests".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn cargo_test_with_crate_4() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("cd core && cargo test --all-features parse_command -- --nocapture"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "cargo test --all-features parse_command -- --nocapture".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
// Additional coverage for other common tools/frameworks
|
||||
#[test]
|
||||
fn recognizes_black_and_ruff() {
|
||||
// black formats Python code
|
||||
assert_parsed(
|
||||
&shlex_split_safe("black src"),
|
||||
vec![ParsedCommand::Format {
|
||||
cmd: "black src".to_string(),
|
||||
tool: Some("black".to_string()),
|
||||
targets: Some(vec!["src".to_string()]),
|
||||
}],
|
||||
);
|
||||
|
||||
// ruff check is a linter; ensure we collect targets
|
||||
assert_parsed(
|
||||
&shlex_split_safe("ruff check ."),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "ruff check .".to_string(),
|
||||
tool: Some("ruff".to_string()),
|
||||
targets: Some(vec![".".to_string()]),
|
||||
}],
|
||||
);
|
||||
|
||||
// ruff format is a formatter
|
||||
assert_parsed(
|
||||
&shlex_split_safe("ruff format pkg/"),
|
||||
vec![ParsedCommand::Format {
|
||||
cmd: "ruff format pkg/".to_string(),
|
||||
tool: Some("ruff".to_string()),
|
||||
targets: Some(vec!["pkg/".to_string()]),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn recognizes_pnpm_monorepo_test_and_npm_format_script() {
|
||||
// pnpm -r test in a monorepo should still parse as a test action
|
||||
assert_parsed(
|
||||
&shlex_split_safe("pnpm -r test"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "pnpm -r test".to_string(),
|
||||
}],
|
||||
);
|
||||
|
||||
// npm run format should be recognized as a format action
|
||||
assert_parsed(
|
||||
&shlex_split_safe("npm run format -- -w ."),
|
||||
vec![ParsedCommand::Format {
|
||||
cmd: "npm run format -- -w .".to_string(),
|
||||
tool: Some("npm-script:format".to_string()),
|
||||
targets: None,
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn yarn_test_is_parsed_as_test() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("yarn test"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "yarn test".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn pytest_file_only_and_go_run_regex() {
|
||||
// pytest invoked with a file path should be captured as a filter
|
||||
assert_parsed(
|
||||
&shlex_split_safe("pytest tests/test_example.py"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "pytest tests/test_example.py".to_string(),
|
||||
}],
|
||||
);
|
||||
|
||||
// go test with -run regex should capture the filter
|
||||
assert_parsed(
|
||||
&shlex_split_safe("go test ./... -run '^TestFoo$'"),
|
||||
vec![ParsedCommand::Test {
|
||||
cmd: "go test ./... -run '^TestFoo$'".to_string(),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn grep_with_query_and_path() {
|
||||
assert_parsed(
|
||||
@@ -1090,30 +789,6 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn eslint_with_config_path_and_target() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("eslint -c .eslintrc.json src"),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "eslint -c .eslintrc.json src".to_string(),
|
||||
tool: Some("eslint".to_string()),
|
||||
targets: Some(vec!["src".to_string()]),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn npx_eslint_with_config_path_and_target() {
|
||||
assert_parsed(
|
||||
&shlex_split_safe("npx eslint -c .eslintrc src"),
|
||||
vec![ParsedCommand::Lint {
|
||||
cmd: "npx eslint -c .eslintrc src".to_string(),
|
||||
tool: Some("eslint".to_string()),
|
||||
targets: Some(vec!["src".to_string()]),
|
||||
}],
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn fd_file_finder_variants() {
|
||||
assert_parsed(
|
||||
@@ -1202,16 +877,13 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
|
||||
return Some(commands[1..].to_vec());
|
||||
}
|
||||
|
||||
// cd foo && [any Test command] => [any Test command]
|
||||
// cd foo && [any command] => [any command] (keep non-cd when a cd is followed by something)
|
||||
if let Some(idx) = commands.iter().position(|pc| match pc {
|
||||
ParsedCommand::Unknown { cmd } => {
|
||||
shlex_split(cmd).is_some_and(|t| t.first().map(|s| s.as_str()) == Some("cd"))
|
||||
}
|
||||
_ => false,
|
||||
}) && commands
|
||||
.iter()
|
||||
.skip(idx + 1)
|
||||
.any(|pc| matches!(pc, ParsedCommand::Test { .. }))
|
||||
}) && commands.len() > idx + 1
|
||||
{
|
||||
let mut out = Vec::with_capacity(commands.len() - 1);
|
||||
out.extend_from_slice(&commands[..idx]);
|
||||
@@ -1220,10 +892,10 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
|
||||
}
|
||||
|
||||
// cmd || true => cmd
|
||||
if let Some(idx) = commands.iter().position(|pc| match pc {
|
||||
ParsedCommand::Noop { cmd } => cmd == "true",
|
||||
_ => false,
|
||||
}) {
|
||||
if let Some(idx) = commands
|
||||
.iter()
|
||||
.position(|pc| matches!(pc, ParsedCommand::Unknown { cmd } if cmd == "true"))
|
||||
{
|
||||
let mut out = Vec::with_capacity(commands.len() - 1);
|
||||
out.extend_from_slice(&commands[..idx]);
|
||||
out.extend_from_slice(&commands[idx + 1..]);
|
||||
@@ -1377,75 +1049,6 @@ fn skip_flag_values<'a>(args: &'a [String], flags_with_vals: &[&str]) -> Vec<&'a
|
||||
out
|
||||
}
|
||||
|
||||
/// Common flags for ESLint that take a following value and should not be
|
||||
/// considered positional targets.
|
||||
const ESLINT_FLAGS_WITH_VALUES: &[&str] = &[
|
||||
"-c",
|
||||
"--config",
|
||||
"--parser",
|
||||
"--parser-options",
|
||||
"--rulesdir",
|
||||
"--plugin",
|
||||
"--max-warnings",
|
||||
"--format",
|
||||
];
|
||||
|
||||
fn collect_non_flag_targets(args: &[String]) -> Option<Vec<String>> {
|
||||
let mut targets = Vec::new();
|
||||
let mut skip_next = false;
|
||||
for (i, a) in args.iter().enumerate() {
|
||||
if a == "--" {
|
||||
break;
|
||||
}
|
||||
if skip_next {
|
||||
skip_next = false;
|
||||
continue;
|
||||
}
|
||||
if a == "-p"
|
||||
|| a == "--package"
|
||||
|| a == "--features"
|
||||
|| a == "-C"
|
||||
|| a == "--config"
|
||||
|| a == "--config-path"
|
||||
|| a == "--out-dir"
|
||||
|| a == "-o"
|
||||
|| a == "--run"
|
||||
|| a == "--max-warnings"
|
||||
|| a == "--format"
|
||||
{
|
||||
if i + 1 < args.len() {
|
||||
skip_next = true;
|
||||
}
|
||||
continue;
|
||||
}
|
||||
if a.starts_with('-') {
|
||||
continue;
|
||||
}
|
||||
targets.push(a.clone());
|
||||
}
|
||||
if targets.is_empty() {
|
||||
None
|
||||
} else {
|
||||
Some(targets)
|
||||
}
|
||||
}
|
||||
|
||||
fn collect_non_flag_targets_with_flags(
|
||||
args: &[String],
|
||||
flags_with_vals: &[&str],
|
||||
) -> Option<Vec<String>> {
|
||||
let targets: Vec<String> = skip_flag_values(args, flags_with_vals)
|
||||
.into_iter()
|
||||
.filter(|a| !a.starts_with('-'))
|
||||
.cloned()
|
||||
.collect();
|
||||
if targets.is_empty() {
|
||||
None
|
||||
} else {
|
||||
Some(targets)
|
||||
}
|
||||
}
|
||||
|
||||
fn is_pathish(s: &str) -> bool {
|
||||
s == "."
|
||||
|| s == ".."
|
||||
@@ -1514,47 +1117,6 @@ fn parse_find_query_and_path(tail: &[String]) -> (Option<String>, Option<String>
|
||||
(query, path)
|
||||
}
|
||||
|
||||
fn classify_npm_like(tool: &str, tail: &[String], full_cmd: &[String]) -> Option<ParsedCommand> {
|
||||
let mut r = tail;
|
||||
if tool == "pnpm" && r.first().map(|s| s.as_str()) == Some("-r") {
|
||||
r = &r[1..];
|
||||
}
|
||||
let mut script_name: Option<String> = None;
|
||||
if r.first().map(|s| s.as_str()) == Some("run") {
|
||||
script_name = r.get(1).cloned();
|
||||
} else {
|
||||
let is_test_cmd = (tool == "npm" && r.first().map(|s| s.as_str()) == Some("t"))
|
||||
|| ((tool == "npm" || tool == "pnpm" || tool == "yarn")
|
||||
&& r.first().map(|s| s.as_str()) == Some("test"));
|
||||
if is_test_cmd {
|
||||
script_name = Some("test".to_string());
|
||||
}
|
||||
}
|
||||
if let Some(name) = script_name {
|
||||
let lname = name.to_lowercase();
|
||||
if lname == "test" || lname == "unit" || lname == "jest" || lname == "vitest" {
|
||||
return Some(ParsedCommand::Test {
|
||||
cmd: shlex_join(full_cmd),
|
||||
});
|
||||
}
|
||||
if lname == "lint" || lname == "eslint" {
|
||||
return Some(ParsedCommand::Lint {
|
||||
cmd: shlex_join(full_cmd),
|
||||
tool: Some(format!("{tool}-script:{name}")),
|
||||
targets: None,
|
||||
});
|
||||
}
|
||||
if lname == "format" || lname == "fmt" || lname == "prettier" {
|
||||
return Some(ParsedCommand::Format {
|
||||
cmd: shlex_join(full_cmd),
|
||||
tool: Some(format!("{tool}-script:{name}")),
|
||||
targets: None,
|
||||
});
|
||||
}
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
|
||||
let [bash, flag, script] = original else {
|
||||
return None;
|
||||
@@ -1586,7 +1148,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
|
||||
.map(|tokens| summarize_main_tokens(&tokens))
|
||||
.collect();
|
||||
if commands.len() > 1 {
|
||||
commands.retain(|pc| !matches!(pc, ParsedCommand::Noop { .. }));
|
||||
commands.retain(|pc| !matches!(pc, ParsedCommand::Unknown { cmd } if cmd == "true"));
|
||||
}
|
||||
if commands.len() == 1 {
|
||||
// If we reduced to a single command, attribute the full original script
|
||||
@@ -1655,27 +1217,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
|
||||
}
|
||||
}
|
||||
}
|
||||
ParsedCommand::Format {
|
||||
tool, targets, cmd, ..
|
||||
} => ParsedCommand::Format {
|
||||
cmd: cmd.clone(),
|
||||
tool,
|
||||
targets,
|
||||
},
|
||||
ParsedCommand::Test { cmd, .. } => ParsedCommand::Test { cmd: cmd.clone() },
|
||||
ParsedCommand::Lint {
|
||||
tool, targets, cmd, ..
|
||||
} => ParsedCommand::Lint {
|
||||
cmd: cmd.clone(),
|
||||
tool,
|
||||
targets,
|
||||
},
|
||||
ParsedCommand::Unknown { .. } => ParsedCommand::Unknown {
|
||||
cmd: script.clone(),
|
||||
},
|
||||
ParsedCommand::Noop { .. } => ParsedCommand::Noop {
|
||||
cmd: script.clone(),
|
||||
},
|
||||
other => other,
|
||||
})
|
||||
.collect();
|
||||
}
|
||||
@@ -1728,124 +1270,6 @@ fn drop_small_formatting_commands(mut commands: Vec<Vec<String>>) -> Vec<Vec<Str
|
||||
|
||||
fn summarize_main_tokens(main_cmd: &[String]) -> ParsedCommand {
|
||||
match main_cmd.split_first() {
|
||||
Some((head, tail)) if head == "true" && tail.is_empty() => ParsedCommand::Noop {
|
||||
cmd: shlex_join(main_cmd),
|
||||
},
|
||||
// (sed-specific logic handled below in dedicated arm returning Read)
|
||||
Some((head, tail))
|
||||
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("fmt") =>
|
||||
{
|
||||
ParsedCommand::Format {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("cargo fmt".to_string()),
|
||||
targets: collect_non_flag_targets(&tail[1..]),
|
||||
}
|
||||
}
|
||||
Some((head, tail))
|
||||
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("clippy") =>
|
||||
{
|
||||
ParsedCommand::Lint {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("cargo clippy".to_string()),
|
||||
targets: collect_non_flag_targets(&tail[1..]),
|
||||
}
|
||||
}
|
||||
Some((head, tail))
|
||||
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("test") =>
|
||||
{
|
||||
ParsedCommand::Test {
|
||||
cmd: shlex_join(main_cmd),
|
||||
}
|
||||
}
|
||||
Some((head, tail)) if head == "rustfmt" => ParsedCommand::Format {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("rustfmt".to_string()),
|
||||
targets: collect_non_flag_targets(tail),
|
||||
},
|
||||
Some((head, tail)) if head == "go" && tail.first().map(|s| s.as_str()) == Some("fmt") => {
|
||||
ParsedCommand::Format {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("go fmt".to_string()),
|
||||
targets: collect_non_flag_targets(&tail[1..]),
|
||||
}
|
||||
}
|
||||
Some((head, tail)) if head == "go" && tail.first().map(|s| s.as_str()) == Some("test") => {
|
||||
ParsedCommand::Test {
|
||||
cmd: shlex_join(main_cmd),
|
||||
}
|
||||
}
|
||||
Some((head, _)) if head == "pytest" => ParsedCommand::Test {
|
||||
cmd: shlex_join(main_cmd),
|
||||
},
|
||||
Some((head, tail)) if head == "eslint" => {
|
||||
// Treat configuration flags with values (e.g. `-c .eslintrc`) as non-targets.
|
||||
let targets = collect_non_flag_targets_with_flags(tail, ESLINT_FLAGS_WITH_VALUES);
|
||||
ParsedCommand::Lint {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("eslint".to_string()),
|
||||
targets,
|
||||
}
|
||||
}
|
||||
Some((head, tail)) if head == "prettier" => ParsedCommand::Format {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("prettier".to_string()),
|
||||
targets: collect_non_flag_targets(tail),
|
||||
},
|
||||
Some((head, tail)) if head == "black" => ParsedCommand::Format {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("black".to_string()),
|
||||
targets: collect_non_flag_targets(tail),
|
||||
},
|
||||
Some((head, tail))
|
||||
if head == "ruff" && tail.first().map(|s| s.as_str()) == Some("check") =>
|
||||
{
|
||||
ParsedCommand::Lint {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("ruff".to_string()),
|
||||
targets: collect_non_flag_targets(&tail[1..]),
|
||||
}
|
||||
}
|
||||
Some((head, tail))
|
||||
if head == "ruff" && tail.first().map(|s| s.as_str()) == Some("format") =>
|
||||
{
|
||||
ParsedCommand::Format {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("ruff".to_string()),
|
||||
targets: collect_non_flag_targets(&tail[1..]),
|
||||
}
|
||||
}
|
||||
Some((head, _)) if (head == "jest" || head == "vitest") => ParsedCommand::Test {
|
||||
cmd: shlex_join(main_cmd),
|
||||
},
|
||||
Some((head, tail))
|
||||
if head == "npx" && tail.first().map(|s| s.as_str()) == Some("eslint") =>
|
||||
{
|
||||
let targets = collect_non_flag_targets_with_flags(&tail[1..], ESLINT_FLAGS_WITH_VALUES);
|
||||
ParsedCommand::Lint {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("eslint".to_string()),
|
||||
targets,
|
||||
}
|
||||
}
|
||||
Some((head, tail))
|
||||
if head == "npx" && tail.first().map(|s| s.as_str()) == Some("prettier") =>
|
||||
{
|
||||
ParsedCommand::Format {
|
||||
cmd: shlex_join(main_cmd),
|
||||
tool: Some("prettier".to_string()),
|
||||
targets: collect_non_flag_targets(&tail[1..]),
|
||||
}
|
||||
}
|
||||
// NPM-like scripts including yarn
|
||||
Some((tool, tail)) if (tool == "pnpm" || tool == "npm" || tool == "yarn") => {
|
||||
if let Some(cmd) = classify_npm_like(tool, tail, main_cmd) {
|
||||
cmd
|
||||
} else {
|
||||
ParsedCommand::Unknown {
|
||||
cmd: shlex_join(main_cmd),
|
||||
}
|
||||
}
|
||||
}
|
||||
Some((head, tail)) if head == "ls" => {
|
||||
// Avoid treating option values as paths (e.g., ls -I "*.test.js").
|
||||
let candidates = skip_flag_values(
|
||||
|
||||
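The hunk above classifies a tokenized command by matching on its head token and the first tail token. Below is a minimal standalone sketch of that `split_first` pattern; `Kind` and `classify` are simplified stand-ins for illustration, not the crate's actual `ParsedCommand` / `summarize_main_tokens`.

```rust
// Minimal sketch of the head/tail matching pattern above. `Kind` is a
// simplified stand-in for the real ParsedCommand enum.
#[derive(Debug, PartialEq)]
enum Kind {
    Format { tool: String },
    Lint { tool: String },
    Test,
    Unknown,
}

fn classify(tokens: &[String]) -> Kind {
    match tokens.split_first() {
        Some((head, tail))
            if head == "cargo" && tail.first().map(String::as_str) == Some("fmt") =>
        {
            Kind::Format { tool: "cargo fmt".to_string() }
        }
        Some((head, tail))
            if head == "cargo" && tail.first().map(String::as_str) == Some("clippy") =>
        {
            Kind::Lint { tool: "cargo clippy".to_string() }
        }
        Some((head, _)) if head == "pytest" => Kind::Test,
        _ => Kind::Unknown,
    }
}

fn main() {
    let cmd: Vec<String> = ["cargo", "clippy", "--all-targets"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    assert_eq!(classify(&cmd), Kind::Lint { tool: "cargo clippy".to_string() });
    println!("{:?}", classify(&cmd));
}
```

Keeping every tool-specific rule as its own guarded match arm makes it cheap to add new tools without touching the fall-through `Unknown` case.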
@@ -135,7 +135,7 @@ impl RolloutRecorder {
|
||||
| ResponseItem::CustomToolCall { .. }
|
||||
| ResponseItem::CustomToolCallOutput { .. }
|
||||
| ResponseItem::Reasoning { .. } => filtered.push(item.clone()),
|
||||
ResponseItem::Other => {
|
||||
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {
|
||||
// These should never be serialized.
|
||||
continue;
|
||||
}
|
||||
@@ -199,7 +199,7 @@ impl RolloutRecorder {
|
||||
| ResponseItem::CustomToolCall { .. }
|
||||
| ResponseItem::CustomToolCallOutput { .. }
|
||||
| ResponseItem::Reasoning { .. } => items.push(item),
|
||||
ResponseItem::Other => {}
|
||||
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
|
||||
},
|
||||
Err(e) => {
|
||||
warn!("failed to parse item: {v:?}, error: {e}");
|
||||
@@ -326,7 +326,7 @@ async fn rollout_writer(
|
||||
| ResponseItem::Reasoning { .. } => {
|
||||
writer.write_line(&item).await?;
|
||||
}
|
||||
ResponseItem::Other => {}
|
||||
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
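The three `RolloutRecorder` hunks above all add `ResponseItem::WebSearchCall { .. }` to the arm of items that are never persisted. A small sketch of that filtering idiom, with a simplified enum standing in for `ResponseItem`:

```rust
// Simplified stand-in for ResponseItem: only the distinction that matters for
// persistence is kept.
#[derive(Debug, Clone)]
enum Item {
    Message(String),
    WebSearchCall,
    Other,
}

/// Keep only the variants that should be written to the rollout file.
fn persistable(items: &[Item]) -> Vec<Item> {
    items
        .iter()
        .filter(|item| !matches!(item, Item::WebSearchCall | Item::Other))
        .cloned()
        .collect()
}

fn main() {
    let items = vec![
        Item::Message("hello".to_string()),
        Item::WebSearchCall,
        Item::Other,
    ];
    let kept = persistable(&items);
    assert_eq!(kept.len(), 1);
    println!("{kept:?}");
}
```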
@@ -222,7 +222,7 @@ fn is_write_patch_constrained_to_writable_paths(
|
||||
|
||||
for (path, change) in action.changes() {
|
||||
match change {
|
||||
ApplyPatchFileChange::Add { .. } | ApplyPatchFileChange::Delete => {
|
||||
ApplyPatchFileChange::Add { .. } | ApplyPatchFileChange::Delete { .. } => {
|
||||
if !is_path_writable(path) {
|
||||
return false;
|
||||
}
|
||||
|
||||
@@ -578,7 +578,12 @@ index {ZERO_OID}..{right_oid}
|
||||
fs::write(&file, "x\n").unwrap();
|
||||
|
||||
let mut acc = TurnDiffTracker::new();
|
||||
let del_changes = HashMap::from([(file.clone(), FileChange::Delete)]);
|
||||
let del_changes = HashMap::from([(
|
||||
file.clone(),
|
||||
FileChange::Delete {
|
||||
content: "x\n".to_string(),
|
||||
},
|
||||
)]);
|
||||
acc.on_patch_begin(&del_changes);
|
||||
|
||||
// Simulate apply: delete the file from disk.
|
||||
@@ -741,7 +746,12 @@ index {left_oid}..{right_oid}
|
||||
assert_eq!(first, expected_first);
|
||||
|
||||
// Next: introduce a brand-new path b.txt into baseline snapshots via a delete change.
|
||||
let del_b = HashMap::from([(b.clone(), FileChange::Delete)]);
|
||||
let del_b = HashMap::from([(
|
||||
b.clone(),
|
||||
FileChange::Delete {
|
||||
content: "z\n".to_string(),
|
||||
},
|
||||
)]);
|
||||
acc.on_patch_begin(&del_b);
|
||||
// Simulate apply: delete b.txt.
|
||||
let baseline_mode = file_mode_for_path(&b).unwrap_or(FileMode::Regular);
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
use std::path::Path;
|
||||
use std::time::Duration;
|
||||
|
||||
use rand::Rng;
|
||||
@@ -12,33 +11,3 @@ pub(crate) fn backoff(attempt: u64) -> Duration {
|
||||
let jitter = rand::rng().random_range(0.9..1.1);
|
||||
Duration::from_millis((base as f64 * jitter) as u64)
|
||||
}
|
||||
|
||||
/// Return `true` if the project folder specified by the `Config` is inside a
|
||||
/// Git repository.
|
||||
///
|
||||
/// The check walks up the directory hierarchy looking for a `.git` file or
|
||||
/// directory (note `.git` can be a file that contains a `gitdir` entry). This
|
||||
/// approach does **not** require the `git` binary or the `git2` crate and is
|
||||
/// therefore fairly lightweight.
|
||||
///
|
||||
/// Note that this does **not** detect *work‑trees* created with
|
||||
/// `git worktree add` where the checkout lives outside the main repository
|
||||
/// directory. If you need Codex to work from such a checkout simply pass the
|
||||
/// `--allow-no-git-exec` CLI flag that disables the repo requirement.
|
||||
pub fn is_inside_git_repo(base_dir: &Path) -> bool {
|
||||
let mut dir = base_dir.to_path_buf();
|
||||
|
||||
loop {
|
||||
if dir.join(".git").exists() {
|
||||
return true;
|
||||
}
|
||||
|
||||
// Pop one component (go up one directory). `pop` returns false when
|
||||
// we have reached the filesystem root.
|
||||
if !dir.pop() {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
false
|
||||
}
|
||||
|
||||
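The removed `is_inside_git_repo` above walks up the directory tree looking for a `.git` entry; call sites now use `get_git_repo_root` from `codex_core::git_info` instead. Below is a hypothetical root-returning variant of the removed walk, for illustration only; the real helper may be implemented differently (for example around worktrees).

```rust
use std::path::{Path, PathBuf};

/// Hypothetical root-returning variant of the removed check: walk up from
/// `base_dir` and return the first ancestor containing a `.git` entry (which
/// may be a directory or a `gitdir` file). The real
/// `codex_core::git_info::get_git_repo_root` may behave differently.
fn find_git_root(base_dir: &Path) -> Option<PathBuf> {
    let mut dir = base_dir.to_path_buf();
    loop {
        if dir.join(".git").exists() {
            return Some(dir);
        }
        // `pop` returns false once we reach the filesystem root.
        if !dir.pop() {
            return None;
        }
    }
}

fn main() {
    match find_git_root(Path::new(".")) {
        Some(root) => println!("inside a git repo rooted at {}", root.display()),
        None => println!("not inside a git repo"),
    }
}
```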
@@ -25,6 +25,7 @@ use codex_core::protocol::TaskCompleteEvent;
|
||||
use codex_core::protocol::TurnAbortReason;
|
||||
use codex_core::protocol::TurnDiffEvent;
|
||||
use codex_core::protocol::WebSearchBeginEvent;
|
||||
use codex_core::protocol::WebSearchEndEvent;
|
||||
use owo_colors::OwoColorize;
|
||||
use owo_colors::Style;
|
||||
use shlex::try_join;
|
||||
@@ -362,8 +363,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
|
||||
}
|
||||
}
|
||||
}
|
||||
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _, query }) => {
|
||||
ts_println!(self, "🌐 {query}");
|
||||
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _ }) => {}
|
||||
EventMsg::WebSearchEnd(WebSearchEndEvent { call_id: _, query }) => {
|
||||
ts_println!(self, "🌐 Searched: {query}");
|
||||
}
|
||||
EventMsg::PatchApplyBegin(PatchApplyBeginEvent {
|
||||
call_id,
|
||||
@@ -402,13 +404,16 @@ impl EventProcessor for EventProcessorWithHumanOutput {
|
||||
println!("{}", line.style(self.green));
|
||||
}
|
||||
}
|
||||
FileChange::Delete => {
|
||||
FileChange::Delete { content } => {
|
||||
let header = format!(
|
||||
"{} {}",
|
||||
format_file_change(change),
|
||||
path.to_string_lossy()
|
||||
);
|
||||
println!("{}", header.style(self.magenta));
|
||||
for line in content.lines() {
|
||||
println!("{}", line.style(self.red));
|
||||
}
|
||||
}
|
||||
FileChange::Update {
|
||||
unified_diff,
|
||||
@@ -533,6 +538,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
|
||||
EventMsg::McpListToolsResponse(_) => {
|
||||
// Currently ignored in exec output.
|
||||
}
|
||||
EventMsg::ListCustomPromptsResponse(_) => {
|
||||
// Currently ignored in exec output.
|
||||
}
|
||||
EventMsg::TurnAborted(abort_reason) => match abort_reason.reason {
|
||||
TurnAbortReason::Interrupted => {
|
||||
ts_println!(self, "task interrupted");
|
||||
@@ -555,7 +563,7 @@ fn escape_command(command: &[String]) -> String {
|
||||
fn format_file_change(change: &FileChange) -> &'static str {
|
||||
match change {
|
||||
FileChange::Add { .. } => "A",
|
||||
FileChange::Delete => "D",
|
||||
FileChange::Delete { .. } => "D",
|
||||
FileChange::Update {
|
||||
move_path: Some(_), ..
|
||||
} => "R",
|
||||
|
||||
@@ -13,13 +13,13 @@ use codex_core::ConversationManager;
|
||||
use codex_core::NewConversation;
|
||||
use codex_core::config::Config;
|
||||
use codex_core::config::ConfigOverrides;
|
||||
use codex_core::git_info::get_git_repo_root;
|
||||
use codex_core::protocol::AskForApproval;
|
||||
use codex_core::protocol::Event;
|
||||
use codex_core::protocol::EventMsg;
|
||||
use codex_core::protocol::InputItem;
|
||||
use codex_core::protocol::Op;
|
||||
use codex_core::protocol::TaskCompleteEvent;
|
||||
use codex_core::util::is_inside_git_repo;
|
||||
use codex_login::AuthManager;
|
||||
use codex_ollama::DEFAULT_OSS_MODEL;
|
||||
use codex_protocol::config_types::SandboxMode;
|
||||
@@ -183,7 +183,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
|
||||
// is using.
|
||||
event_processor.print_config_summary(&config, &prompt);
|
||||
|
||||
if !skip_git_repo_check && !is_inside_git_repo(&config.cwd.to_path_buf()) {
|
||||
if !skip_git_repo_check && get_git_repo_root(&config.cwd.to_path_buf()).is_none() {
|
||||
eprintln!("Not inside a trusted directory and --skip-git-repo-check was not specified.");
|
||||
std::process::exit(1);
|
||||
}
|
||||
|
||||
@@ -129,10 +129,7 @@ impl McpClient {
|
||||
error!("failed to write newline to child stdin");
|
||||
break;
|
||||
}
|
||||
if stdin.flush().await.is_err() {
|
||||
error!("failed to flush child stdin");
|
||||
break;
|
||||
}
|
||||
// No explicit flush needed on a pipe; write_all is sufficient.
|
||||
}
|
||||
Err(e) => error!("failed to serialize JSONRPCMessage: {e}"),
|
||||
}
|
||||
@@ -365,7 +362,11 @@ impl McpClient {
|
||||
}
|
||||
};
|
||||
|
||||
if let Some(tx) = pending.lock().await.remove(&id) {
|
||||
let tx_opt = {
|
||||
let mut guard = pending.lock().await;
|
||||
guard.remove(&id)
|
||||
};
|
||||
if let Some(tx) = tx_opt {
|
||||
// Ignore send errors – the receiver might have been dropped.
|
||||
let _ = tx.send(JSONRPCMessage::Response(resp));
|
||||
} else {
|
||||
@@ -383,7 +384,11 @@ impl McpClient {
|
||||
RequestId::String(_) => return, // see comment above
|
||||
};
|
||||
|
||||
if let Some(tx) = pending.lock().await.remove(&id) {
|
||||
let tx_opt = {
|
||||
let mut guard = pending.lock().await;
|
||||
guard.remove(&id)
|
||||
};
|
||||
if let Some(tx) = tx_opt {
|
||||
let _ = tx.send(JSONRPCMessage::Error(err));
|
||||
}
|
||||
}
|
||||
|
||||
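The two `McpClient` hunks above stop holding the `pending` lock while the oneshot send runs: the temporary `MutexGuard` in an `if let` scrutinee lives for the whole block, so the sender is now taken out in its own scope first. A minimal sketch of that pattern, with simplified types (`i64` ids and `String` payloads rather than JSON-RPC messages):

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{Mutex, oneshot};

/// Sketch of the "take the sender out under the lock, then send after the
/// guard is dropped" pattern from the hunks above.
async fn complete_request(
    pending: Arc<Mutex<HashMap<i64, oneshot::Sender<String>>>>,
    id: i64,
    payload: String,
) {
    // Scope the guard so the map is unlocked before we touch the sender.
    let tx_opt = {
        let mut guard = pending.lock().await;
        guard.remove(&id)
    };
    if let Some(tx) = tx_opt {
        // Ignore send errors – the receiver might have been dropped.
        let _ = tx.send(payload);
    }
}

#[tokio::main]
async fn main() {
    let pending = Arc::new(Mutex::new(HashMap::new()));
    let (tx, rx) = oneshot::channel();
    pending.lock().await.insert(1, tx);
    complete_request(pending.clone(), 1, "done".to_string()).await;
    assert_eq!(rx.await.unwrap(), "done");
}
```

Dropping the guard before the send keeps other tasks from blocking on the pending-requests map any longer than necessary.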
@@ -264,6 +264,7 @@ async fn run_codex_tool_session_inner(
|
||||
| EventMsg::McpToolCallBegin(_)
|
||||
| EventMsg::McpToolCallEnd(_)
|
||||
| EventMsg::McpListToolsResponse(_)
|
||||
| EventMsg::ListCustomPromptsResponse(_)
|
||||
| EventMsg::ExecCommandBegin(_)
|
||||
| EventMsg::ExecCommandOutputDelta(_)
|
||||
| EventMsg::ExecCommandEnd(_)
|
||||
@@ -273,6 +274,7 @@ async fn run_codex_tool_session_inner(
|
||||
| EventMsg::PatchApplyEnd(_)
|
||||
| EventMsg::TurnDiff(_)
|
||||
| EventMsg::WebSearchBegin(_)
|
||||
| EventMsg::WebSearchEnd(_)
|
||||
| EventMsg::GetHistoryEntryResponse(_)
|
||||
| EventMsg::PlanUpdate(_)
|
||||
| EventMsg::TurnAborted(_)
|
||||
|
||||
@@ -59,11 +59,10 @@ pub async fn run_main(
|
||||
|
||||
// Set up channels.
|
||||
let (incoming_tx, mut incoming_rx) = mpsc::channel::<JSONRPCMessage>(CHANNEL_CAPACITY);
|
||||
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(CHANNEL_CAPACITY);
|
||||
let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();
|
||||
|
||||
// Task: read from stdin, push to `incoming_tx`.
|
||||
let stdin_reader_handle = tokio::spawn({
|
||||
let incoming_tx = incoming_tx.clone();
|
||||
async move {
|
||||
let stdin = io::stdin();
|
||||
let reader = BufReader::new(stdin);
|
||||
@@ -135,10 +134,6 @@ pub async fn run_main(
|
||||
error!("Failed to write newline to stdout: {e}");
|
||||
break;
|
||||
}
|
||||
if let Err(e) = stdout.flush().await {
|
||||
error!("Failed to flush stdout: {e}");
|
||||
break;
|
||||
}
|
||||
}
|
||||
Err(e) => error!("Failed to serialize JSONRPCMessage: {e}"),
|
||||
}
|
||||
|
||||
@@ -24,12 +24,12 @@ use crate::error_code::INTERNAL_ERROR_CODE;
|
||||
/// Sends messages to the client and manages request callbacks.
|
||||
pub(crate) struct OutgoingMessageSender {
|
||||
next_request_id: AtomicI64,
|
||||
sender: mpsc::Sender<OutgoingMessage>,
|
||||
sender: mpsc::UnboundedSender<OutgoingMessage>,
|
||||
request_id_to_callback: Mutex<HashMap<RequestId, oneshot::Sender<Result>>>,
|
||||
}
|
||||
|
||||
impl OutgoingMessageSender {
|
||||
pub(crate) fn new(sender: mpsc::Sender<OutgoingMessage>) -> Self {
|
||||
pub(crate) fn new(sender: mpsc::UnboundedSender<OutgoingMessage>) -> Self {
|
||||
Self {
|
||||
next_request_id: AtomicI64::new(0),
|
||||
sender,
|
||||
@@ -55,7 +55,7 @@ impl OutgoingMessageSender {
|
||||
method: method.to_string(),
|
||||
params,
|
||||
});
|
||||
let _ = self.sender.send(outgoing_message).await;
|
||||
let _ = self.sender.send(outgoing_message);
|
||||
rx_approve
|
||||
}
|
||||
|
||||
@@ -81,7 +81,7 @@ impl OutgoingMessageSender {
|
||||
match serde_json::to_value(response) {
|
||||
Ok(result) => {
|
||||
let outgoing_message = OutgoingMessage::Response(OutgoingResponse { id, result });
|
||||
let _ = self.sender.send(outgoing_message).await;
|
||||
let _ = self.sender.send(outgoing_message);
|
||||
}
|
||||
Err(err) => {
|
||||
self.send_error(
|
||||
@@ -130,17 +130,17 @@ impl OutgoingMessageSender {
|
||||
};
|
||||
let outgoing_message =
|
||||
OutgoingMessage::Notification(OutgoingNotification { method, params });
|
||||
let _ = self.sender.send(outgoing_message).await;
|
||||
let _ = self.sender.send(outgoing_message);
|
||||
}
|
||||
|
||||
pub(crate) async fn send_notification(&self, notification: OutgoingNotification) {
|
||||
let outgoing_message = OutgoingMessage::Notification(notification);
|
||||
let _ = self.sender.send(outgoing_message).await;
|
||||
let _ = self.sender.send(outgoing_message);
|
||||
}
|
||||
|
||||
pub(crate) async fn send_error(&self, id: RequestId, error: JSONRPCErrorError) {
|
||||
let outgoing_message = OutgoingMessage::Error(OutgoingError { id, error });
|
||||
let _ = self.sender.send(outgoing_message).await;
|
||||
let _ = self.sender.send(outgoing_message);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -250,7 +250,7 @@ mod tests {
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_send_event_as_notification() {
|
||||
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(2);
|
||||
let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();
|
||||
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
|
||||
|
||||
let event = Event {
|
||||
@@ -281,7 +281,7 @@ mod tests {
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_send_event_as_notification_with_meta() {
|
||||
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(2);
|
||||
let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();
|
||||
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
|
||||
|
||||
let session_configured_event = SessionConfiguredEvent {
|
||||
|
||||
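The message-processor hunks above swap the bounded outgoing `mpsc` channel for `mpsc::unbounded_channel`, which is why every `send(...).await` becomes a plain `send(...)`: `UnboundedSender::send` is synchronous and never waits for capacity. A tiny sketch of the difference:

```rust
use tokio::sync::mpsc;

struct OutgoingMessage(String);

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<OutgoingMessage>();

    // No `.await`: UnboundedSender::send never waits for capacity and only
    // fails if the receiving half has been dropped.
    let _ = tx.send(OutgoingMessage("notification".to_string()));

    if let Some(msg) = rx.recv().await {
        println!("received: {}", msg.0);
    }
}
```

The trade-off is that nothing applies backpressure on the writer anymore; the queue is bounded only by memory.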
10
codex-rs/protocol/src/custom_prompts.rs
Normal file
@@ -0,0 +1,10 @@
|
||||
use serde::Deserialize;
|
||||
use serde::Serialize;
|
||||
use std::path::PathBuf;
|
||||
|
||||
#[derive(Serialize, Deserialize, Debug, Clone)]
|
||||
pub struct CustomPrompt {
|
||||
pub name: String,
|
||||
pub path: PathBuf,
|
||||
pub content: String,
|
||||
}
|
||||
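The new `CustomPrompt` struct above derives `Serialize`/`Deserialize`, so it can round-trip through JSON across the protocol boundary. A self-contained sketch using a local copy of the struct (field values borrowed from the tests later in this diff):

```rust
use serde::{Deserialize, Serialize};
use std::path::PathBuf;

// Local copy of the new struct for illustration; the real definition lives in
// codex-rs/protocol/src/custom_prompts.rs as shown above.
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct CustomPrompt {
    pub name: String,
    pub path: PathBuf,
    pub content: String,
}

fn main() -> serde_json::Result<()> {
    let prompt = CustomPrompt {
        name: "my-prompt".to_string(),
        path: PathBuf::from("/tmp/my-prompt.md"),
        content: "Hello from saved prompt".to_string(),
    };
    // Round-trip through JSON, the same way it would cross a process boundary.
    let json = serde_json::to_string_pretty(&prompt)?;
    println!("{json}");
    let back: CustomPrompt = serde_json::from_str(&json)?;
    assert_eq!(back.name, "my-prompt");
    Ok(())
}
```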
@@ -1,4 +1,5 @@
|
||||
pub mod config_types;
|
||||
pub mod custom_prompts;
|
||||
pub mod mcp_protocol;
|
||||
pub mod message_history;
|
||||
pub mod models;
|
||||
|
||||
@@ -95,6 +95,22 @@ pub enum ResponseItem {
|
||||
call_id: String,
|
||||
output: String,
|
||||
},
|
||||
// Emitted by the Responses API when the agent triggers a web search.
|
||||
// Example payload (from SSE `response.output_item.done`):
|
||||
// {
|
||||
// "id":"ws_...",
|
||||
// "type":"web_search_call",
|
||||
// "status":"completed",
|
||||
// "action": {"type":"search","query":"weather: San Francisco, CA"}
|
||||
// }
|
||||
WebSearchCall {
|
||||
#[serde(default, skip_serializing_if = "Option::is_none")]
|
||||
id: Option<String>,
|
||||
#[serde(default, skip_serializing_if = "Option::is_none")]
|
||||
status: Option<String>,
|
||||
action: WebSearchAction,
|
||||
},
|
||||
|
||||
#[serde(other)]
|
||||
Other,
|
||||
}
|
||||
@@ -162,6 +178,16 @@ pub struct LocalShellExecAction {
|
||||
pub user: Option<String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
|
||||
#[serde(tag = "type", rename_all = "snake_case")]
|
||||
pub enum WebSearchAction {
|
||||
Search {
|
||||
query: String,
|
||||
},
|
||||
#[serde(other)]
|
||||
Other,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
|
||||
#[serde(tag = "type", rename_all = "snake_case")]
|
||||
pub enum ReasoningItemReasoningSummary {
|
||||
|
||||
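`WebSearchAction` above is an internally tagged enum (`#[serde(tag = "type")]`) with a `#[serde(other)]` fallback, matching the `"action": {"type":"search", ...}` payload quoted in the comment. A standalone sketch of how such a payload deserializes, using a local copy of the enum:

```rust
use serde::{Deserialize, Serialize};

// Standalone copy of the new enum for illustration; the real type is defined
// in the protocol hunk above with the same serde attributes.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
enum WebSearchAction {
    Search { query: String },
    #[serde(other)]
    Other,
}

fn main() -> serde_json::Result<()> {
    // Payload shape taken from the SSE example quoted in the comment above.
    let action: WebSearchAction =
        serde_json::from_str(r#"{"type":"search","query":"weather: San Francisco, CA"}"#)?;
    assert_eq!(
        action,
        WebSearchAction::Search { query: "weather: San Francisco, CA".to_string() }
    );

    // Unrecognized action types fall back to `Other` thanks to `#[serde(other)]`.
    let unknown: WebSearchAction = serde_json::from_str(r#"{"type":"open_page"}"#)?;
    assert_eq!(unknown, WebSearchAction::Other);
    Ok(())
}
```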
@@ -17,22 +17,6 @@ pub enum ParsedCommand {
|
||||
query: Option<String>,
|
||||
path: Option<String>,
|
||||
},
|
||||
Format {
|
||||
cmd: String,
|
||||
tool: Option<String>,
|
||||
targets: Option<Vec<String>>,
|
||||
},
|
||||
Test {
|
||||
cmd: String,
|
||||
},
|
||||
Lint {
|
||||
cmd: String,
|
||||
tool: Option<String>,
|
||||
targets: Option<Vec<String>>,
|
||||
},
|
||||
Noop {
|
||||
cmd: String,
|
||||
},
|
||||
Unknown {
|
||||
cmd: String,
|
||||
},
|
||||
|
||||
@@ -10,6 +10,7 @@ use std::path::PathBuf;
|
||||
use std::str::FromStr;
|
||||
use std::time::Duration;
|
||||
|
||||
use crate::custom_prompts::CustomPrompt;
|
||||
use mcp_types::CallToolResult;
|
||||
use mcp_types::Tool as McpTool;
|
||||
use serde::Deserialize;
|
||||
@@ -146,6 +147,9 @@ pub enum Op {
|
||||
/// Reply is delivered via `EventMsg::McpListToolsResponse`.
|
||||
ListMcpTools,
|
||||
|
||||
/// Request the list of available custom prompts.
|
||||
ListCustomPrompts,
|
||||
|
||||
/// Request the agent to summarize the current conversation context.
|
||||
/// The agent will use its existing context (either conversation history or previous response id)
|
||||
/// to generate a summary which will be returned as an AgentMessage event.
|
||||
@@ -439,6 +443,8 @@ pub enum EventMsg {
|
||||
|
||||
WebSearchBegin(WebSearchBeginEvent),
|
||||
|
||||
WebSearchEnd(WebSearchEndEvent),
|
||||
|
||||
/// Notification that the server is about to execute a command.
|
||||
ExecCommandBegin(ExecCommandBeginEvent),
|
||||
|
||||
@@ -472,6 +478,9 @@ pub enum EventMsg {
|
||||
/// List of MCP tools available to the agent.
|
||||
McpListToolsResponse(McpListToolsResponseEvent),
|
||||
|
||||
/// List of custom prompts available to the agent.
|
||||
ListCustomPromptsResponse(ListCustomPromptsResponseEvent),
|
||||
|
||||
PlanUpdate(UpdatePlanArgs),
|
||||
|
||||
TurnAborted(TurnAbortedEvent),
|
||||
@@ -668,6 +677,11 @@ impl McpToolCallEndEvent {
|
||||
#[derive(Debug, Clone, Deserialize, Serialize)]
|
||||
pub struct WebSearchBeginEvent {
|
||||
pub call_id: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Deserialize, Serialize)]
|
||||
pub struct WebSearchEndEvent {
|
||||
pub call_id: String,
|
||||
pub query: String,
|
||||
}
|
||||
|
||||
@@ -806,6 +820,12 @@ pub struct McpListToolsResponseEvent {
|
||||
pub tools: std::collections::HashMap<String, McpTool>,
|
||||
}
|
||||
|
||||
/// Response payload for `Op::ListCustomPrompts`.
|
||||
#[derive(Debug, Clone, Deserialize, Serialize)]
|
||||
pub struct ListCustomPromptsResponseEvent {
|
||||
pub custom_prompts: Vec<CustomPrompt>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Default, Clone, Deserialize, Serialize)]
|
||||
pub struct SessionConfiguredEvent {
|
||||
/// Unique id for this session.
|
||||
@@ -849,7 +869,9 @@ pub enum FileChange {
|
||||
Add {
|
||||
content: String,
|
||||
},
|
||||
Delete,
|
||||
Delete {
|
||||
content: String,
|
||||
},
|
||||
Update {
|
||||
unified_diff: String,
|
||||
move_path: Option<PathBuf>,
|
||||
|
||||
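With `FileChange::Delete` now carrying the deleted file's `content`, consumers such as the exec event processor earlier in this diff can render what was removed. A simplified stand-in for the enum and one way a caller might summarize each change (illustration only; the real enum and rendering live in the protocol and TUI crates):

```rust
use std::path::PathBuf;

// Simplified stand-in for the protocol's FileChange after this change.
enum FileChange {
    Add { content: String },
    Delete { content: String },
    Update { unified_diff: String, move_path: Option<PathBuf> },
}

fn summarize(path: &str, change: &FileChange) -> String {
    match change {
        FileChange::Add { content } => format!("A {path} (+{} lines)", content.lines().count()),
        // The new `content` field lets callers show exactly what was removed.
        FileChange::Delete { content } => {
            format!("D {path} (-{} lines)", content.lines().count())
        }
        FileChange::Update { move_path: Some(to), .. } => {
            format!("R {path} -> {}", to.display())
        }
        FileChange::Update { unified_diff, .. } => {
            format!("M {path} ({} diff bytes)", unified_diff.len())
        }
    }
}

fn main() {
    let changes = vec![
        ("a.txt", FileChange::Add { content: "x\n".to_string() }),
        ("b.txt", FileChange::Delete { content: "x\ny\n".to_string() }),
        (
            "c.txt",
            FileChange::Update {
                unified_diff: "@@ -1 +1 @@\n-x\n+y\n".to_string(),
                move_path: Some(PathBuf::from("d.txt")),
            },
        ),
    ];
    for (path, change) in &changes {
        println!("{}", summarize(path, change));
    }
}
```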
@@ -49,6 +49,7 @@ image = { version = "^0.25.6", default-features = false, features = [
|
||||
"jpeg",
|
||||
"png",
|
||||
] }
|
||||
itertools = "0.14.0"
|
||||
lazy_static = "1"
|
||||
mcp-types = { path = "../mcp-types" }
|
||||
once_cell = "1"
|
||||
@@ -87,6 +88,7 @@ unicode-segmentation = "1.12.0"
|
||||
unicode-width = "0.1"
|
||||
url = "2"
|
||||
uuid = "1"
|
||||
pathdiff = "0.2"
|
||||
|
||||
[target.'cfg(unix)'.dependencies]
|
||||
libc = "0.2"
|
||||
|
||||
@@ -43,6 +43,7 @@ pub(crate) struct App {
|
||||
// Pager overlay state (Transcript or Static like Diff)
|
||||
pub(crate) overlay: Option<Overlay>,
|
||||
pub(crate) deferred_history_lines: Vec<Line<'static>>,
|
||||
has_emitted_history_lines: bool,
|
||||
|
||||
pub(crate) enhanced_keys_supported: bool,
|
||||
|
||||
@@ -91,6 +92,7 @@ impl App {
|
||||
transcript_lines: Vec::new(),
|
||||
overlay: None,
|
||||
deferred_history_lines: Vec::new(),
|
||||
has_emitted_history_lines: false,
|
||||
commit_anim_running: Arc::new(AtomicBool::new(false)),
|
||||
backtrack: BacktrackState::default(),
|
||||
};
|
||||
@@ -177,27 +179,28 @@ impl App {
|
||||
);
|
||||
tui.frame_requester().schedule_frame();
|
||||
}
|
||||
AppEvent::InsertHistoryLines(lines) => {
|
||||
if let Some(Overlay::Transcript(t)) = &mut self.overlay {
|
||||
t.insert_lines(lines.clone());
|
||||
tui.frame_requester().schedule_frame();
|
||||
}
|
||||
self.transcript_lines.extend(lines.clone());
|
||||
if self.overlay.is_some() {
|
||||
self.deferred_history_lines.extend(lines);
|
||||
} else {
|
||||
tui.insert_history_lines(lines);
|
||||
}
|
||||
}
|
||||
AppEvent::InsertHistoryCell(cell) => {
|
||||
let cell_transcript = cell.transcript_lines();
|
||||
let mut cell_transcript = cell.transcript_lines();
|
||||
if !cell.is_stream_continuation() && !self.transcript_lines.is_empty() {
|
||||
cell_transcript.insert(0, Line::from(""));
|
||||
}
|
||||
if let Some(Overlay::Transcript(t)) = &mut self.overlay {
|
||||
t.insert_lines(cell_transcript.clone());
|
||||
tui.frame_requester().schedule_frame();
|
||||
}
|
||||
self.transcript_lines.extend(cell_transcript.clone());
|
||||
let display = cell.display_lines();
|
||||
let mut display = cell.display_lines(tui.terminal.last_known_screen_size.width);
|
||||
if !display.is_empty() {
|
||||
// Only insert a separating blank line for new cells that are not
|
||||
// part of an ongoing stream. Streaming continuations should not
|
||||
// accrue extra blank lines between chunks.
|
||||
if !cell.is_stream_continuation() {
|
||||
if self.has_emitted_history_lines {
|
||||
display.insert(0, Line::from(""));
|
||||
} else {
|
||||
self.has_emitted_history_lines = true;
|
||||
}
|
||||
}
|
||||
if self.overlay.is_some() {
|
||||
self.deferred_history_lines.extend(display);
|
||||
} else {
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
use codex_core::protocol::ConversationHistoryResponseEvent;
|
||||
use codex_core::protocol::Event;
|
||||
use codex_file_search::FileMatch;
|
||||
use ratatui::text::Line;
|
||||
|
||||
use crate::history_cell::HistoryCell;
|
||||
|
||||
@@ -40,7 +39,6 @@ pub(crate) enum AppEvent {
|
||||
/// Result of computing a `/diff` command.
|
||||
DiffResult(String),
|
||||
|
||||
InsertHistoryLines(Vec<Line<'static>>),
|
||||
InsertHistoryCell(Box<dyn HistoryCell>),
|
||||
|
||||
StartCommitAnimation,
|
||||
|
||||
@@ -22,11 +22,14 @@ use ratatui::widgets::StatefulWidgetRef;
|
||||
use ratatui::widgets::WidgetRef;
|
||||
|
||||
use super::chat_composer_history::ChatComposerHistory;
|
||||
use super::command_popup::CommandItem;
|
||||
use super::command_popup::CommandPopup;
|
||||
use super::file_search_popup::FileSearchPopup;
|
||||
use super::paste_burst::CharDecision;
|
||||
use super::paste_burst::PasteBurst;
|
||||
use crate::bottom_pane::paste_burst::FlushResult;
|
||||
use crate::slash_command::SlashCommand;
|
||||
use codex_protocol::custom_prompts::CustomPrompt;
|
||||
|
||||
use crate::app_event::AppEvent;
|
||||
use crate::app_event_sender::AppEventSender;
|
||||
@@ -47,6 +50,7 @@ use std::time::Instant;
|
||||
const LARGE_PASTE_CHAR_THRESHOLD: usize = 1000;
|
||||
|
||||
/// Result returned when the user interacts with the text area.
|
||||
#[derive(Debug, PartialEq)]
|
||||
pub enum InputResult {
|
||||
Submitted(String),
|
||||
Command(SlashCommand),
|
||||
@@ -94,6 +98,7 @@ pub(crate) struct ChatComposer {
|
||||
paste_burst: PasteBurst,
|
||||
// When true, disables paste-burst logic and inserts characters immediately.
|
||||
disable_paste_burst: bool,
|
||||
custom_prompts: Vec<CustomPrompt>,
|
||||
}
|
||||
|
||||
/// Popup state – at most one can be visible at any time.
|
||||
@@ -131,6 +136,7 @@ impl ChatComposer {
|
||||
placeholder_text,
|
||||
paste_burst: PasteBurst::default(),
|
||||
disable_paste_burst: false,
|
||||
custom_prompts: Vec::new(),
|
||||
};
|
||||
// Apply configuration via the setter to keep side-effects centralized.
|
||||
this.set_disable_paste_burst(disable_paste_burst);
|
||||
@@ -218,7 +224,7 @@ impl ChatComposer {
|
||||
let placeholder = format!("[Pasted Content {char_count} chars]");
|
||||
self.textarea.insert_element(&placeholder);
|
||||
self.pending_pastes.push((placeholder, pasted));
|
||||
} else if self.handle_paste_image_path(pasted.clone()) {
|
||||
} else if char_count > 1 && self.handle_paste_image_path(pasted.clone()) {
|
||||
self.textarea.insert_str(" ");
|
||||
} else {
|
||||
self.textarea.insert_str(&pasted);
|
||||
@@ -293,12 +299,7 @@ impl ChatComposer {
|
||||
}
|
||||
|
||||
pub(crate) fn flush_paste_burst_if_due(&mut self) -> bool {
|
||||
let now = Instant::now();
|
||||
if let Some(pasted) = self.paste_burst.flush_if_due(now) {
|
||||
let _ = self.handle_paste(pasted);
|
||||
return true;
|
||||
}
|
||||
false
|
||||
self.handle_paste_burst_flush(Instant::now())
|
||||
}
|
||||
|
||||
pub(crate) fn is_in_paste_burst(&self) -> bool {
|
||||
@@ -391,16 +392,29 @@ impl ChatComposer {
|
||||
KeyEvent {
|
||||
code: KeyCode::Tab, ..
|
||||
} => {
|
||||
if let Some(cmd) = popup.selected_command() {
|
||||
let first_line = self.textarea.text().lines().next().unwrap_or("");
|
||||
|
||||
let starts_with_cmd = first_line
|
||||
.trim_start()
|
||||
.starts_with(&format!("/{}", cmd.command()));
|
||||
|
||||
if !starts_with_cmd {
|
||||
self.textarea.set_text(&format!("/{} ", cmd.command()));
|
||||
self.textarea.set_cursor(self.textarea.text().len());
|
||||
// Ensure popup filtering/selection reflects the latest composer text
|
||||
// before applying completion.
|
||||
let first_line = self.textarea.text().lines().next().unwrap_or("");
|
||||
popup.on_composer_text_change(first_line.to_string());
|
||||
if let Some(sel) = popup.selected_item() {
|
||||
match sel {
|
||||
CommandItem::Builtin(cmd) => {
|
||||
let starts_with_cmd = first_line
|
||||
.trim_start()
|
||||
.starts_with(&format!("/{}", cmd.command()));
|
||||
if !starts_with_cmd {
|
||||
self.textarea.set_text(&format!("/{} ", cmd.command()));
|
||||
}
|
||||
}
|
||||
CommandItem::UserPrompt(idx) => {
|
||||
if let Some(name) = popup.prompt_name(idx) {
|
||||
let starts_with_cmd =
|
||||
first_line.trim_start().starts_with(&format!("/{name}"));
|
||||
if !starts_with_cmd {
|
||||
self.textarea.set_text(&format!("/{name} "));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
// After completing the command, move cursor to the end.
|
||||
if !self.textarea.text().is_empty() {
|
||||
@@ -415,16 +429,30 @@ impl ChatComposer {
|
||||
modifiers: KeyModifiers::NONE,
|
||||
..
|
||||
} => {
|
||||
if let Some(cmd) = popup.selected_command() {
|
||||
if let Some(sel) = popup.selected_item() {
|
||||
// Clear textarea so no residual text remains.
|
||||
self.textarea.set_text("");
|
||||
|
||||
let result = (InputResult::Command(*cmd), true);
|
||||
|
||||
// Hide popup since the command has been dispatched.
|
||||
// Capture any needed data from popup before clearing it.
|
||||
let prompt_content = match sel {
|
||||
CommandItem::UserPrompt(idx) => {
|
||||
popup.prompt_content(idx).map(|s| s.to_string())
|
||||
}
|
||||
_ => None,
|
||||
};
|
||||
// Hide popup since an action has been dispatched.
|
||||
self.active_popup = ActivePopup::None;
|
||||
|
||||
return result;
|
||||
match sel {
|
||||
CommandItem::Builtin(cmd) => {
|
||||
return (InputResult::Command(cmd), true);
|
||||
}
|
||||
CommandItem::UserPrompt(_) => {
|
||||
if let Some(contents) = prompt_content {
|
||||
return (InputResult::Submitted(contents), true);
|
||||
}
|
||||
return (InputResult::None, true);
|
||||
}
|
||||
}
|
||||
}
|
||||
// Fallback to default newline handling if no command selected.
|
||||
self.handle_key_event_without_popup(key_event)
|
||||
@@ -807,7 +835,12 @@ impl ChatComposer {
|
||||
}
|
||||
self.pending_pastes.clear();
|
||||
|
||||
// If there is neither text nor attachments, suppress submission entirely.
|
||||
let has_attachments = !self.attached_images.is_empty();
|
||||
text = text.trim().to_string();
|
||||
if text.is_empty() && !has_attachments {
|
||||
return (InputResult::None, true);
|
||||
}
|
||||
if !text.is_empty() {
|
||||
self.history.record_local_submission(&text);
|
||||
}
|
||||
@@ -818,15 +851,36 @@ impl ChatComposer {
|
||||
}
|
||||
}
|
||||
|
||||
fn handle_paste_burst_flush(&mut self, now: Instant) -> bool {
|
||||
match self.paste_burst.flush_if_due(now) {
|
||||
FlushResult::Paste(pasted) => {
|
||||
self.handle_paste(pasted);
|
||||
true
|
||||
}
|
||||
FlushResult::Typed(ch) => {
|
||||
// Mirror insert_str() behavior so popups stay in sync when a
|
||||
// pending fast char flushes as normal typed input.
|
||||
self.textarea.insert_str(ch.to_string().as_str());
|
||||
// Keep popup sync consistent with key handling: prefer slash popup; only
|
||||
// sync file popup when slash popup is NOT active.
|
||||
self.sync_command_popup();
|
||||
if matches!(self.active_popup, ActivePopup::Command(_)) {
|
||||
self.dismissed_file_popup_token = None;
|
||||
} else {
|
||||
self.sync_file_search_popup();
|
||||
}
|
||||
true
|
||||
}
|
||||
FlushResult::None => false,
|
||||
}
|
||||
}
|
||||
|
||||
/// Handle generic Input events that modify the textarea content.
|
||||
fn handle_input_basic(&mut self, input: KeyEvent) -> (InputResult, bool) {
|
||||
// If we have a buffered non-bracketed paste burst and enough time has
|
||||
// elapsed since the last char, flush it before handling a new input.
|
||||
let now = Instant::now();
|
||||
if let Some(pasted) = self.paste_burst.flush_if_due(now) {
|
||||
// Reuse normal paste path (handles large-paste placeholders).
|
||||
self.handle_paste(pasted);
|
||||
}
|
||||
self.handle_paste_burst_flush(now);
|
||||
|
||||
// If we're capturing a burst and receive Enter, accumulate it instead of inserting.
|
||||
if matches!(input.code, KeyCode::Enter)
|
||||
@@ -1117,7 +1171,7 @@ impl ChatComposer {
|
||||
}
|
||||
_ => {
|
||||
if input_starts_with_slash {
|
||||
let mut command_popup = CommandPopup::new();
|
||||
let mut command_popup = CommandPopup::new(self.custom_prompts.clone());
|
||||
command_popup.on_composer_text_change(first_line.to_string());
|
||||
self.active_popup = ActivePopup::Command(command_popup);
|
||||
}
|
||||
@@ -1125,6 +1179,13 @@ impl ChatComposer {
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn set_custom_prompts(&mut self, prompts: Vec<CustomPrompt>) {
|
||||
self.custom_prompts = prompts.clone();
|
||||
if let ActivePopup::Command(popup) = &mut self.active_popup {
|
||||
popup.set_prompts(prompts);
|
||||
}
|
||||
}
|
||||
|
||||
/// Synchronize `self.file_search_popup` with the current text in the textarea.
|
||||
/// Note this is only called when self.active_popup is NOT Command.
|
||||
fn sync_file_search_popup(&mut self) {
|
||||
@@ -1483,6 +1544,33 @@ mod tests {
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn empty_enter_returns_none() {
|
||||
use crossterm::event::KeyCode;
|
||||
use crossterm::event::KeyEvent;
|
||||
use crossterm::event::KeyModifiers;
|
||||
|
||||
let (tx, _rx) = unbounded_channel::<AppEvent>();
|
||||
let sender = AppEventSender::new(tx);
|
||||
let mut composer = ChatComposer::new(
|
||||
true,
|
||||
sender,
|
||||
false,
|
||||
"Ask Codex to do anything".to_string(),
|
||||
false,
|
||||
);
|
||||
|
||||
// Ensure composer is empty and press Enter.
|
||||
assert!(composer.textarea.text().is_empty());
|
||||
let (result, _needs_redraw) =
|
||||
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
|
||||
|
||||
match result {
|
||||
InputResult::None => {}
|
||||
other => panic!("expected None for empty enter, got: {other:?}"),
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn handle_paste_large_uses_placeholder_and_replaces_on_submit() {
|
||||
use crossterm::event::KeyCode;
|
||||
@@ -1603,6 +1691,66 @@ mod tests {
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn slash_popup_model_first_for_mo_ui() {
|
||||
use insta::assert_snapshot;
|
||||
use ratatui::Terminal;
|
||||
use ratatui::backend::TestBackend;
|
||||
|
||||
let (tx, _rx) = unbounded_channel::<AppEvent>();
|
||||
let sender = AppEventSender::new(tx);
|
||||
|
||||
let mut composer = ChatComposer::new(
|
||||
true,
|
||||
sender,
|
||||
false,
|
||||
"Ask Codex to do anything".to_string(),
|
||||
false,
|
||||
);
|
||||
|
||||
// Type "/mo" humanlike so paste-burst doesn’t interfere.
|
||||
type_chars_humanlike(&mut composer, &['/', 'm', 'o']);
|
||||
|
||||
let mut terminal = match Terminal::new(TestBackend::new(60, 4)) {
|
||||
Ok(t) => t,
|
||||
Err(e) => panic!("Failed to create terminal: {e}"),
|
||||
};
|
||||
terminal
|
||||
.draw(|f| f.render_widget_ref(composer, f.area()))
|
||||
.unwrap_or_else(|e| panic!("Failed to draw composer: {e}"));
|
||||
|
||||
// Visual snapshot should show the slash popup with /model as the first entry.
|
||||
assert_snapshot!("slash_popup_mo", terminal.backend());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn slash_popup_model_first_for_mo_logic() {
|
||||
use super::super::command_popup::CommandItem;
|
||||
let (tx, _rx) = unbounded_channel::<AppEvent>();
|
||||
let sender = AppEventSender::new(tx);
|
||||
let mut composer = ChatComposer::new(
|
||||
true,
|
||||
sender,
|
||||
false,
|
||||
"Ask Codex to do anything".to_string(),
|
||||
false,
|
||||
);
|
||||
type_chars_humanlike(&mut composer, &['/', 'm', 'o']);
|
||||
|
||||
match &composer.active_popup {
|
||||
ActivePopup::Command(popup) => match popup.selected_item() {
|
||||
Some(CommandItem::Builtin(cmd)) => {
|
||||
assert_eq!(cmd.command(), "model")
|
||||
}
|
||||
Some(CommandItem::UserPrompt(_)) => {
|
||||
panic!("unexpected prompt selected for '/mo'")
|
||||
}
|
||||
None => panic!("no selected command for '/mo'"),
|
||||
},
|
||||
_ => panic!("slash popup not active after typing '/mo'"),
|
||||
}
|
||||
}
|
||||
|
||||
// Test helper: simulate human typing with a brief delay and flush the paste-burst buffer
|
||||
fn type_chars_humanlike(composer: &mut ChatComposer, chars: &[char]) {
|
||||
use crossterm::event::KeyCode;
|
||||
@@ -2098,6 +2246,38 @@ mod tests {
|
||||
assert_eq!(imgs, vec![tmp_path.clone()]);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn selecting_custom_prompt_submits_file_contents() {
|
||||
let prompt_text = "Hello from saved prompt";
|
||||
|
||||
let (tx, _rx) = unbounded_channel::<AppEvent>();
|
||||
let sender = AppEventSender::new(tx);
|
||||
let mut composer = ChatComposer::new(
|
||||
true,
|
||||
sender,
|
||||
false,
|
||||
"Ask Codex to do anything".to_string(),
|
||||
false,
|
||||
);
|
||||
|
||||
// Inject prompts as if received via event.
|
||||
composer.set_custom_prompts(vec![CustomPrompt {
|
||||
name: "my-prompt".to_string(),
|
||||
path: "/tmp/my-prompt.md".to_string().into(),
|
||||
content: prompt_text.to_string(),
|
||||
}]);
|
||||
|
||||
type_chars_humanlike(
|
||||
&mut composer,
|
||||
&['/', 'm', 'y', '-', 'p', 'r', 'o', 'm', 'p', 't'],
|
||||
);
|
||||
|
||||
let (result, _needs_redraw) =
|
||||
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
|
||||
|
||||
assert_eq!(InputResult::Submitted(prompt_text.to_string()), result);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn burst_paste_fast_small_buffers_and_flushes_on_stop() {
|
||||
use crossterm::event::KeyCode;
|
||||
|
||||
@@ -9,22 +9,58 @@ use super::selection_popup_common::render_rows;
|
||||
use crate::slash_command::SlashCommand;
|
||||
use crate::slash_command::built_in_slash_commands;
|
||||
use codex_common::fuzzy_match::fuzzy_match;
|
||||
use codex_protocol::custom_prompts::CustomPrompt;
|
||||
use std::collections::HashSet;
|
||||
|
||||
/// A selectable item in the popup: either a built-in command or a user prompt.
|
||||
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
|
||||
pub(crate) enum CommandItem {
|
||||
Builtin(SlashCommand),
|
||||
// Index into `prompts`
|
||||
UserPrompt(usize),
|
||||
}
|
||||
|
||||
pub(crate) struct CommandPopup {
|
||||
command_filter: String,
|
||||
all_commands: Vec<(&'static str, SlashCommand)>,
|
||||
builtins: Vec<(&'static str, SlashCommand)>,
|
||||
prompts: Vec<CustomPrompt>,
|
||||
state: ScrollState,
|
||||
}
|
||||
|
||||
impl CommandPopup {
|
||||
pub(crate) fn new() -> Self {
|
||||
pub(crate) fn new(mut prompts: Vec<CustomPrompt>) -> Self {
|
||||
let builtins = built_in_slash_commands();
|
||||
// Exclude prompts that collide with builtin command names and sort by name.
|
||||
let exclude: HashSet<String> = builtins.iter().map(|(n, _)| (*n).to_string()).collect();
|
||||
prompts.retain(|p| !exclude.contains(&p.name));
|
||||
prompts.sort_by(|a, b| a.name.cmp(&b.name));
|
||||
Self {
|
||||
command_filter: String::new(),
|
||||
all_commands: built_in_slash_commands(),
|
||||
builtins,
|
||||
prompts,
|
||||
state: ScrollState::new(),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn set_prompts(&mut self, mut prompts: Vec<CustomPrompt>) {
|
||||
let exclude: HashSet<String> = self
|
||||
.builtins
|
||||
.iter()
|
||||
.map(|(n, _)| (*n).to_string())
|
||||
.collect();
|
||||
prompts.retain(|p| !exclude.contains(&p.name));
|
||||
prompts.sort_by(|a, b| a.name.cmp(&b.name));
|
||||
self.prompts = prompts;
|
||||
}
|
||||
|
||||
pub(crate) fn prompt_name(&self, idx: usize) -> Option<&str> {
|
||||
self.prompts.get(idx).map(|p| p.name.as_str())
|
||||
}
|
||||
|
||||
pub(crate) fn prompt_content(&self, idx: usize) -> Option<&str> {
|
||||
self.prompts.get(idx).map(|p| p.content.as_str())
|
||||
}
|
||||
|
||||
/// Update the filter string based on the current composer text. The text
|
||||
/// passed in is expected to start with a leading '/'. Everything after the
|
||||
/// *first* '/' on the *first* line becomes the active filter that is used

|
||||
@@ -50,7 +86,7 @@ impl CommandPopup {
|
||||
}
|
||||
|
||||
// Reset or clamp selected index based on new filtered list.
|
||||
let matches_len = self.filtered_commands().len();
|
||||
let matches_len = self.filtered_items().len();
|
||||
self.state.clamp_selection(matches_len);
|
||||
self.state
|
||||
.ensure_visible(matches_len, MAX_POPUP_ROWS.min(matches_len));
|
||||
@@ -59,56 +95,76 @@ impl CommandPopup {
|
||||
/// Determine the preferred height of the popup. This is the number of
|
||||
/// rows required to show at most MAX_POPUP_ROWS commands.
|
||||
pub(crate) fn calculate_required_height(&self) -> u16 {
|
||||
self.filtered_commands().len().clamp(1, MAX_POPUP_ROWS) as u16
|
||||
self.filtered_items().len().clamp(1, MAX_POPUP_ROWS) as u16
|
||||
}
|
||||
|
||||
/// Compute fuzzy-filtered matches paired with optional highlight indices and score.
|
||||
/// Sorted by ascending score, then by command name for stability.
|
||||
fn filtered(&self) -> Vec<(&SlashCommand, Option<Vec<usize>>, i32)> {
|
||||
/// Compute fuzzy-filtered matches over built-in commands and user prompts,
|
||||
/// paired with optional highlight indices and score. Sorted by ascending
|
||||
/// score, then by name for stability.
|
||||
fn filtered(&self) -> Vec<(CommandItem, Option<Vec<usize>>, i32)> {
|
||||
let filter = self.command_filter.trim();
|
||||
let mut out: Vec<(&SlashCommand, Option<Vec<usize>>, i32)> = Vec::new();
|
||||
let mut out: Vec<(CommandItem, Option<Vec<usize>>, i32)> = Vec::new();
|
||||
if filter.is_empty() {
|
||||
for (_, cmd) in self.all_commands.iter() {
|
||||
out.push((cmd, None, 0));
|
||||
// Built-ins first, in presentation order.
|
||||
for (_, cmd) in self.builtins.iter() {
|
||||
out.push((CommandItem::Builtin(*cmd), None, 0));
|
||||
}
|
||||
// Then prompts, already sorted by name.
|
||||
for idx in 0..self.prompts.len() {
|
||||
out.push((CommandItem::UserPrompt(idx), None, 0));
|
||||
}
|
||||
// Keep the original presentation order when no filter is applied.
|
||||
return out;
|
||||
} else {
|
||||
for (_, cmd) in self.all_commands.iter() {
|
||||
if let Some((indices, score)) = fuzzy_match(cmd.command(), filter) {
|
||||
out.push((cmd, Some(indices), score));
|
||||
}
|
||||
}
|
||||
|
||||
for (_, cmd) in self.builtins.iter() {
|
||||
if let Some((indices, score)) = fuzzy_match(cmd.command(), filter) {
|
||||
out.push((CommandItem::Builtin(*cmd), Some(indices), score));
|
||||
}
|
||||
}
|
||||
// When filtering, sort by ascending score and then by command for stability.
|
||||
out.sort_by(|a, b| a.2.cmp(&b.2).then_with(|| a.0.command().cmp(b.0.command())));
|
||||
for (idx, p) in self.prompts.iter().enumerate() {
|
||||
if let Some((indices, score)) = fuzzy_match(&p.name, filter) {
|
||||
out.push((CommandItem::UserPrompt(idx), Some(indices), score));
|
||||
}
|
||||
}
|
||||
// When filtering, sort by ascending score and then by name for stability.
|
||||
out.sort_by(|a, b| {
|
||||
a.2.cmp(&b.2).then_with(|| {
|
||||
let an = match a.0 {
|
||||
CommandItem::Builtin(c) => c.command(),
|
||||
CommandItem::UserPrompt(i) => &self.prompts[i].name,
|
||||
};
|
||||
let bn = match b.0 {
|
||||
CommandItem::Builtin(c) => c.command(),
|
||||
CommandItem::UserPrompt(i) => &self.prompts[i].name,
|
||||
};
|
||||
an.cmp(bn)
|
||||
})
|
||||
});
|
||||
out
|
||||
}
|
||||
|
||||
fn filtered_commands(&self) -> Vec<&SlashCommand> {
|
||||
fn filtered_items(&self) -> Vec<CommandItem> {
|
||||
self.filtered().into_iter().map(|(c, _, _)| c).collect()
|
||||
}
|
||||
|
||||
/// Move the selection cursor one step up.
|
||||
pub(crate) fn move_up(&mut self) {
|
||||
let matches = self.filtered_commands();
|
||||
let len = matches.len();
|
||||
let len = self.filtered_items().len();
|
||||
self.state.move_up_wrap(len);
|
||||
self.state.ensure_visible(len, MAX_POPUP_ROWS.min(len));
|
||||
}
|
||||
|
||||
/// Move the selection cursor one step down.
|
||||
pub(crate) fn move_down(&mut self) {
|
||||
let matches = self.filtered_commands();
|
||||
let matches_len = matches.len();
|
||||
let matches_len = self.filtered_items().len();
|
||||
self.state.move_down_wrap(matches_len);
|
||||
self.state
|
||||
.ensure_visible(matches_len, MAX_POPUP_ROWS.min(matches_len));
|
||||
}
|
||||
|
||||
/// Return currently selected command, if any.
|
||||
pub(crate) fn selected_command(&self) -> Option<&SlashCommand> {
|
||||
let matches = self.filtered_commands();
|
||||
pub(crate) fn selected_item(&self) -> Option<CommandItem> {
|
||||
let matches = self.filtered_items();
|
||||
self.state
|
||||
.selected_idx
|
||||
.and_then(|idx| matches.get(idx).copied())
|
||||
@@ -123,11 +179,19 @@ impl WidgetRef for CommandPopup {
|
||||
} else {
|
||||
matches
|
||||
.into_iter()
|
||||
.map(|(cmd, indices, _)| GenericDisplayRow {
|
||||
name: format!("/{}", cmd.command()),
|
||||
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
|
||||
is_current: false,
|
||||
description: Some(cmd.description().to_string()),
|
||||
.map(|(item, indices, _)| match item {
|
||||
CommandItem::Builtin(cmd) => GenericDisplayRow {
|
||||
name: format!("/{}", cmd.command()),
|
||||
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
|
||||
is_current: false,
|
||||
description: Some(cmd.description().to_string()),
|
||||
},
|
||||
CommandItem::UserPrompt(i) => GenericDisplayRow {
|
||||
name: format!("/{}", self.prompts[i].name),
|
||||
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
|
||||
is_current: false,
|
||||
description: Some("send saved prompt".to_string()),
|
||||
},
|
||||
})
|
||||
.collect()
|
||||
};
|
||||
@@ -141,31 +205,96 @@ mod tests {
|
||||
|
||||
#[test]
|
||||
fn filter_includes_init_when_typing_prefix() {
|
||||
let mut popup = CommandPopup::new();
|
||||
let mut popup = CommandPopup::new(Vec::new());
|
||||
// Simulate the composer line starting with '/in' so the popup filters
|
||||
// matching commands by prefix.
|
||||
popup.on_composer_text_change("/in".to_string());
|
||||
|
||||
// Access the filtered list via the selected command and ensure that
|
||||
// one of the matches is the new "init" command.
|
||||
let matches = popup.filtered_commands();
|
||||
let matches = popup.filtered_items();
|
||||
let has_init = matches.iter().any(|item| match item {
|
||||
CommandItem::Builtin(cmd) => cmd.command() == "init",
|
||||
CommandItem::UserPrompt(_) => false,
|
||||
});
|
||||
assert!(
|
||||
matches.iter().any(|cmd| cmd.command() == "init"),
|
||||
has_init,
|
||||
"expected '/init' to appear among filtered commands"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn selecting_init_by_exact_match() {
|
||||
let mut popup = CommandPopup::new();
|
||||
let mut popup = CommandPopup::new(Vec::new());
|
||||
popup.on_composer_text_change("/init".to_string());
|
||||
|
||||
// When an exact match exists, the selected command should be that
|
||||
// command by default.
|
||||
let selected = popup.selected_command();
|
||||
let selected = popup.selected_item();
|
||||
match selected {
|
||||
Some(cmd) => assert_eq!(cmd.command(), "init"),
|
||||
Some(CommandItem::Builtin(cmd)) => assert_eq!(cmd.command(), "init"),
|
||||
Some(CommandItem::UserPrompt(_)) => panic!("unexpected prompt selected for '/init'"),
|
||||
None => panic!("expected a selected command for exact match"),
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn model_is_first_suggestion_for_mo() {
|
||||
let mut popup = CommandPopup::new(Vec::new());
|
||||
popup.on_composer_text_change("/mo".to_string());
|
||||
let matches = popup.filtered_items();
|
||||
match matches.first() {
|
||||
Some(CommandItem::Builtin(cmd)) => assert_eq!(cmd.command(), "model"),
|
||||
Some(CommandItem::UserPrompt(_)) => {
|
||||
panic!("unexpected prompt ranked before '/model' for '/mo'")
|
||||
}
|
||||
None => panic!("expected at least one match for '/mo'"),
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn prompt_discovery_lists_custom_prompts() {
|
||||
let prompts = vec![
|
||||
CustomPrompt {
|
||||
name: "foo".to_string(),
|
||||
path: "/tmp/foo.md".to_string().into(),
|
||||
content: "hello from foo".to_string(),
|
||||
},
|
||||
CustomPrompt {
|
||||
name: "bar".to_string(),
|
||||
path: "/tmp/bar.md".to_string().into(),
|
||||
content: "hello from bar".to_string(),
|
||||
},
|
||||
];
|
||||
let popup = CommandPopup::new(prompts);
|
||||
let items = popup.filtered_items();
|
||||
let mut prompt_names: Vec<String> = items
|
||||
.into_iter()
|
||||
.filter_map(|it| match it {
|
||||
CommandItem::UserPrompt(i) => popup.prompt_name(i).map(|s| s.to_string()),
|
||||
_ => None,
|
||||
})
|
||||
.collect();
|
||||
prompt_names.sort();
|
||||
assert_eq!(prompt_names, vec!["bar".to_string(), "foo".to_string()]);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn prompt_name_collision_with_builtin_is_ignored() {
|
||||
// Create a prompt named like a builtin (e.g. "init").
|
||||
let popup = CommandPopup::new(vec![CustomPrompt {
|
||||
name: "init".to_string(),
|
||||
path: "/tmp/init.md".to_string().into(),
|
||||
content: "should be ignored".to_string(),
|
||||
}]);
|
||||
let items = popup.filtered_items();
|
||||
let has_collision_prompt = items.into_iter().any(|it| match it {
|
||||
CommandItem::UserPrompt(i) => popup.prompt_name(i) == Some("init"),
|
||||
_ => false,
|
||||
});
|
||||
assert!(
|
||||
!has_collision_prompt,
|
||||
"prompt with builtin name should be ignored"
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
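`CommandPopup::filtered` above merges two item sources (built-in slash commands and user prompts) and, when a filter is active, orders them by ascending fuzzy score with name as a tie-breaker. A toy sketch of that merge-then-sort idiom, with a trivial substring scorer standing in for `fuzzy_match`:

```rust
// Toy sketch: builtins and prompts are combined, scored, and ordered by
// (score, name). `score` is a placeholder, not the real fuzzy matcher.
fn score(name: &str, filter: &str) -> Option<i32> {
    name.find(filter).map(|pos| pos as i32)
}

fn filtered<'a>(
    builtins: &'a [&'a str],
    prompts: &'a [String],
    filter: &str,
) -> Vec<(&'a str, i32)> {
    let mut out: Vec<(&str, i32)> = Vec::new();
    for &name in builtins {
        if let Some(s) = score(name, filter) {
            out.push((name, s));
        }
    }
    for name in prompts {
        if let Some(s) = score(name, filter) {
            out.push((name.as_str(), s));
        }
    }
    // Ascending score first, then name, so results are stable and predictable.
    out.sort_by(|a, b| a.1.cmp(&b.1).then_with(|| a.0.cmp(b.0)));
    out
}

fn main() {
    let builtins = ["model", "mention", "init"];
    let prompts = vec!["my-prompt".to_string()];
    for (name, s) in filtered(&builtins, &prompts, "m") {
        println!("/{name} (score {s})");
    }
}
```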
@@ -36,6 +36,7 @@ pub(crate) enum CancellationEvent {
|
||||
|
||||
pub(crate) use chat_composer::ChatComposer;
|
||||
pub(crate) use chat_composer::InputResult;
|
||||
use codex_protocol::custom_prompts::CustomPrompt;
|
||||
|
||||
use crate::status_indicator_widget::StatusIndicatorWidget;
|
||||
use approval_modal_view::ApprovalModalView;
|
||||
@@ -99,53 +100,47 @@ impl BottomPane {
|
||||
}
|
||||
|
||||
pub fn desired_height(&self, width: u16) -> u16 {
|
||||
let top_margin = if self.active_view.is_some() { 0 } else { 1 };
|
||||
// Always reserve one blank row above the pane for visual spacing.
|
||||
let top_margin = 1;
|
||||
|
||||
// Base height depends on whether a modal/overlay is active.
|
||||
let mut base = if let Some(view) = self.active_view.as_ref() {
|
||||
view.desired_height(width)
|
||||
} else {
|
||||
self.composer.desired_height(width)
|
||||
let base = match self.active_view.as_ref() {
|
||||
Some(view) => view.desired_height(width),
|
||||
None => self.composer.desired_height(width).saturating_add(
|
||||
self.status
|
||||
.as_ref()
|
||||
.map_or(0, |status| status.desired_height(width)),
|
||||
),
|
||||
};
|
||||
// If a status indicator is active and no modal is covering the composer,
|
||||
// include its height above the composer.
|
||||
if self.active_view.is_none()
|
||||
&& let Some(status) = self.status.as_ref()
|
||||
{
|
||||
base = base.saturating_add(status.desired_height(width));
|
||||
}
|
||||
// Account for bottom padding rows. Top spacing is handled in layout().
|
||||
base.saturating_add(Self::BOTTOM_PAD_LINES)
|
||||
.saturating_add(top_margin)
|
||||
}
|
||||
|
||||
fn layout(&self, area: Rect) -> [Rect; 2] {
|
||||
// Prefer showing the status header when space is extremely tight.
|
||||
// Drop the top spacer if there is only one row available.
|
||||
let mut top_margin = if self.active_view.is_some() { 0 } else { 1 };
|
||||
if area.height <= 1 {
|
||||
top_margin = 0;
|
||||
}
|
||||
|
||||
let status_height = if self.active_view.is_none() {
|
||||
if let Some(status) = self.status.as_ref() {
|
||||
status.desired_height(area.width)
|
||||
} else {
|
||||
0
|
||||
}
|
||||
// At small heights, bottom pane takes the entire height.
|
||||
let (top_margin, bottom_margin) = if area.height <= BottomPane::BOTTOM_PAD_LINES + 1 {
|
||||
(0, 0)
|
||||
} else {
|
||||
0
|
||||
(1, BottomPane::BOTTOM_PAD_LINES)
|
||||
};
|
||||
|
||||
let [_, status, content, _] = Layout::vertical([
|
||||
Constraint::Max(top_margin),
|
||||
Constraint::Max(status_height),
|
||||
Constraint::Min(1),
|
||||
Constraint::Max(BottomPane::BOTTOM_PAD_LINES),
|
||||
])
|
||||
.areas(area);
|
||||
|
||||
[status, content]
|
||||
let area = Rect {
|
||||
x: area.x,
|
||||
y: area.y + top_margin,
|
||||
width: area.width,
|
||||
height: area.height - top_margin - bottom_margin,
|
||||
};
|
||||
match self.active_view.as_ref() {
|
||||
Some(_) => [Rect::ZERO, area],
|
||||
None => {
|
||||
let status_height = self
|
||||
.status
|
||||
.as_ref()
|
||||
.map_or(0, |status| status.desired_height(area.width));
|
||||
Layout::vertical([Constraint::Max(status_height), Constraint::Min(1)]).areas(area)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
|
||||
@@ -336,6 +331,12 @@ impl BottomPane {
|
||||
self.request_redraw();
|
||||
}
|
||||
|
||||
/// Update custom prompts available for the slash popup.
|
||||
pub(crate) fn set_custom_prompts(&mut self, prompts: Vec<CustomPrompt>) {
|
||||
self.composer.set_custom_prompts(prompts);
|
||||
self.request_redraw();
|
||||
}
|
||||
|
||||
pub(crate) fn composer_is_empty(&self) -> bool {
|
||||
self.composer.is_empty()
|
||||
}
|
||||
@@ -698,7 +699,7 @@ mod tests {
|
||||
|
||||
pane.set_task_running(true);
|
||||
|
||||
// Height=2 → composer visible; status is hidden to preserve composer. Spacer may collapse.
|
||||
// Height=2 → status on one row, composer on the other.
|
||||
let area2 = Rect::new(0, 0, 20, 2);
|
||||
let mut buf2 = Buffer::empty(area2);
|
||||
(&pane).render_ref(area2, &mut buf2);
|
||||
@@ -714,8 +715,8 @@ mod tests {
|
||||
"expected composer to be visible on one of the rows: row0={row0:?}, row1={row1:?}"
|
||||
);
|
||||
assert!(
|
||||
!row0.contains("Working") && !row1.contains("Working"),
|
||||
"status header should be hidden when height=2"
|
||||
row0.contains("Working") || row1.contains("Working"),
|
||||
"expected status header to be visible at height=2: row0={row0:?}, row1={row1:?}"
|
||||
);
|
||||
|
||||
// Height=1 → no padding; single row is the composer (status hidden).
|
||||
|
||||
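The reworked `BottomPane::layout` above trims the top and bottom margins itself and then splits the remaining area with `Layout::vertical([...]).areas(area)`. A minimal sketch of that split, assuming a recent ratatui that provides `Layout::vertical` and `areas` as used in the hunk:

```rust
use ratatui::layout::{Constraint, Layout, Rect};

/// Split an area into a status row (at most `status_height` tall) and the
/// composer, mirroring the Layout::vertical call in the hunk above.
fn split(area: Rect, status_height: u16) -> [Rect; 2] {
    Layout::vertical([Constraint::Max(status_height), Constraint::Min(1)]).areas(area)
}

fn main() {
    let area = Rect::new(0, 0, 40, 6);
    let [status, content] = split(area, 1);
    println!("status  = {status:?}");
    println!("content = {content:?}");
}
```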
@@ -35,6 +35,12 @@ pub(crate) struct RetroGrab {
|
||||
pub grabbed: String,
|
||||
}
|
||||
|
||||
pub(crate) enum FlushResult {
|
||||
Paste(String),
|
||||
Typed(char),
|
||||
None,
|
||||
}
|
||||
|
||||
impl PasteBurst {
|
||||
/// Recommended delay to wait between simulated keypresses (or before
|
||||
/// scheduling a UI tick) so that a pending fast keystroke is flushed
|
||||
@@ -95,24 +101,24 @@ impl PasteBurst {
|
||||
/// now emit that char as normal typed input.
|
||||
///
|
||||
/// Returns `FlushResult::None` if the timeout has not elapsed or there is nothing to flush.
|
||||
pub fn flush_if_due(&mut self, now: Instant) -> Option<String> {
|
||||
pub fn flush_if_due(&mut self, now: Instant) -> FlushResult {
|
||||
let timed_out = self
|
||||
.last_plain_char_time
|
||||
.is_some_and(|t| now.duration_since(t) > PASTE_BURST_CHAR_INTERVAL);
|
||||
if timed_out && self.is_active_internal() {
|
||||
self.active = false;
|
||||
let out = std::mem::take(&mut self.buffer);
|
||||
Some(out)
|
||||
FlushResult::Paste(out)
|
||||
} else if timed_out {
|
||||
// If we were saving a single fast char and no burst followed,
|
||||
// flush it as normal typed input.
|
||||
if let Some((ch, _at)) = self.pending_first_char.take() {
|
||||
Some(ch.to_string())
|
||||
FlushResult::Typed(ch)
|
||||
} else {
|
||||
None
|
||||
FlushResult::None
|
||||
}
|
||||
} else {
|
||||
None
|
||||
FlushResult::None
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
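`flush_if_due` above now returns a three-way `FlushResult` instead of `Option<String>`, so callers can tell a buffered paste apart from a single fast keystroke that should be treated as normal typing. A reduced sketch of the shape of that API (the timing state and the 50 ms threshold below are placeholders, not the real `PASTE_BURST_CHAR_INTERVAL`):

```rust
use std::time::{Duration, Instant};

// Reduced sketch of the new three-way result; the real PasteBurst tracks the
// timestamp of the last plain char itself.
enum FlushResult {
    Paste(String),
    Typed(char),
    None,
}

struct PasteBurst {
    buffer: String,
    pending_first_char: Option<(char, Instant)>,
}

impl PasteBurst {
    fn flush_if_due(&mut self, now: Instant, last_char_at: Instant) -> FlushResult {
        let timed_out = now.duration_since(last_char_at) > Duration::from_millis(50);
        if timed_out && !self.buffer.is_empty() {
            // A burst was being buffered: hand it back as one paste.
            FlushResult::Paste(std::mem::take(&mut self.buffer))
        } else if timed_out {
            // A lone fast char with no burst behind it flushes as typed input.
            match self.pending_first_char.take() {
                Some((ch, _at)) => FlushResult::Typed(ch),
                None => FlushResult::None,
            }
        } else {
            FlushResult::None
        }
    }
}

fn main() {
    let start = Instant::now();
    let mut burst = PasteBurst {
        buffer: String::new(),
        pending_first_char: Some(('a', start)),
    };
    match burst.flush_if_due(start + Duration::from_millis(100), start) {
        FlushResult::Typed(ch) => println!("typed: {ch}"),
        FlushResult::Paste(s) => println!("paste: {s}"),
        FlushResult::None => println!("nothing to flush"),
    }
}
```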
@@ -0,0 +1,8 @@
|
||||
---
|
||||
source: tui/src/bottom_pane/chat_composer.rs
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"▌/mo "
|
||||
"▌ "
|
||||
"▌/model choose what model and reasoning effort to use "
|
||||
"▌/mention mention a file "
|
||||
@@ -245,7 +245,6 @@ impl TextArea {
|
||||
} => self.delete_backward_word(),
|
||||
KeyEvent {
|
||||
code: KeyCode::Backspace,
|
||||
modifiers: KeyModifiers::NONE,
|
||||
..
|
||||
}
|
||||
| KeyEvent {
|
||||
|
||||
@@ -19,6 +19,7 @@ use codex_core::protocol::ExecApprovalRequestEvent;
|
||||
use codex_core::protocol::ExecCommandBeginEvent;
|
||||
use codex_core::protocol::ExecCommandEndEvent;
|
||||
use codex_core::protocol::InputItem;
|
||||
use codex_core::protocol::ListCustomPromptsResponseEvent;
|
||||
use codex_core::protocol::McpListToolsResponseEvent;
|
||||
use codex_core::protocol::McpToolCallBeginEvent;
|
||||
use codex_core::protocol::McpToolCallEndEvent;
|
||||
@@ -30,6 +31,7 @@ use codex_core::protocol::TokenUsage;
|
||||
use codex_core::protocol::TurnAbortReason;
|
||||
use codex_core::protocol::TurnDiffEvent;
|
||||
use codex_core::protocol::WebSearchBeginEvent;
|
||||
use codex_core::protocol::WebSearchEndEvent;
|
||||
use codex_protocol::parse_command::ParsedCommand;
|
||||
use crossterm::event::KeyCode;
|
||||
use crossterm::event::KeyEvent;
|
||||
@@ -99,7 +101,6 @@ pub(crate) struct ChatWidget {
|
||||
// Stream lifecycle controller
|
||||
stream: StreamController,
|
||||
running_commands: HashMap<String, RunningCommand>,
|
||||
pending_exec_completions: Vec<(Vec<String>, Vec<ParsedCommand>, CommandOutput)>,
|
||||
task_complete_pending: bool,
|
||||
// Queue of interruptive UI events deferred during an active write cycle
|
||||
interrupts: InterruptManager,
|
||||
@@ -111,7 +112,6 @@ pub(crate) struct ChatWidget {
|
||||
frame_requester: FrameRequester,
|
||||
// Whether to include the initial welcome banner on session configured
|
||||
show_welcome_banner: bool,
|
||||
last_history_was_exec: bool,
|
||||
// User messages queued while a turn is in progress
|
||||
queued_user_messages: VecDeque<UserMessage>,
|
||||
}
|
||||
@@ -153,6 +153,8 @@ impl ChatWidget {
|
||||
event,
|
||||
self.show_welcome_banner,
|
||||
));
|
||||
// Ask codex-core to enumerate custom prompts for this session.
|
||||
self.submit_op(Op::ListCustomPrompts);
|
||||
if let Some(user_message) = self.initial_user_message.take() {
|
||||
self.submit_user_message(user_message);
|
||||
}
|
||||
@@ -329,6 +331,7 @@ impl ChatWidget {
|
||||
auto_approved: event.auto_approved,
|
||||
},
|
||||
event.changes,
|
||||
&self.config.cwd,
|
||||
));
|
||||
}
|
||||
|
||||
@@ -355,9 +358,16 @@ impl ChatWidget {
|
||||
self.defer_or_handle(|q| q.push_mcp_end(ev), |s| s.handle_mcp_end_now(ev2));
|
||||
}
|
||||
|
||||
fn on_web_search_begin(&mut self, ev: WebSearchBeginEvent) {
|
||||
fn on_web_search_begin(&mut self, _ev: WebSearchBeginEvent) {
|
||||
self.flush_answer_stream_with_separator();
|
||||
self.add_to_history(history_cell::new_web_search_call(ev.query));
|
||||
}
|
||||
|
||||
fn on_web_search_end(&mut self, ev: WebSearchEndEvent) {
|
||||
self.flush_answer_stream_with_separator();
|
||||
self.add_to_history(history_cell::new_web_search_call(format!(
|
||||
"Searched: {}",
|
||||
ev.query
|
||||
)));
|
||||
}
|
||||
|
||||
fn on_get_history_entry_response(
|
||||
@@ -431,14 +441,14 @@ impl ChatWidget {
|
||||
self.task_complete_pending = false;
|
||||
}
|
||||
// A completed stream indicates non-exec content was just inserted.
|
||||
// Reset the exec header grouping so the next exec shows its header.
|
||||
self.last_history_was_exec = false;
|
||||
self.flush_interrupt_queue();
|
||||
}
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn handle_streaming_delta(&mut self, delta: String) {
|
||||
// Before streaming agent content, flush any active exec cell group.
|
||||
self.flush_active_exec_cell();
|
||||
let sink = AppEventHistorySink(self.app_event_tx.clone());
|
||||
self.stream.begin(&sink);
|
||||
self.stream.push_and_maybe_commit(&delta, &sink);
|
||||
@@ -451,31 +461,29 @@ impl ChatWidget {
|
||||
Some(rc) => (rc.command, rc.parsed_cmd),
|
||||
None => (vec![ev.call_id.clone()], Vec::new()),
|
||||
};
|
||||
self.pending_exec_completions.push((
|
||||
command,
|
||||
parsed,
|
||||
CommandOutput {
|
||||
exit_code: ev.exit_code,
|
||||
stdout: ev.stdout.clone(),
|
||||
stderr: ev.stderr.clone(),
|
||||
formatted_output: ev.formatted_output.clone(),
|
||||
},
|
||||
));
|
||||
|
||||
if self.running_commands.is_empty() {
|
||||
self.active_exec_cell = None;
|
||||
let pending = std::mem::take(&mut self.pending_exec_completions);
|
||||
for (command, parsed, output) in pending {
|
||||
let include_header = !self.last_history_was_exec;
|
||||
let cell = history_cell::new_completed_exec_command(
|
||||
command,
|
||||
parsed,
|
||||
output,
|
||||
include_header,
|
||||
ev.duration,
|
||||
);
|
||||
self.add_to_history(cell);
|
||||
self.last_history_was_exec = true;
|
||||
if self.active_exec_cell.is_none() {
|
||||
// This should have been created by handle_exec_begin_now, but in case it wasn't,
|
||||
// create it now.
|
||||
self.active_exec_cell = Some(history_cell::new_active_exec_command(
|
||||
ev.call_id.clone(),
|
||||
command,
|
||||
parsed,
|
||||
));
|
||||
}
|
||||
if let Some(cell) = self.active_exec_cell.as_mut() {
|
||||
cell.complete_call(
|
||||
&ev.call_id,
|
||||
CommandOutput {
|
||||
exit_code: ev.exit_code,
|
||||
stdout: ev.stdout.clone(),
|
||||
stderr: ev.stderr.clone(),
|
||||
formatted_output: ev.formatted_output.clone(),
|
||||
},
|
||||
ev.duration,
|
||||
);
|
||||
if cell.should_flush() {
|
||||
self.flush_active_exec_cell();
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -484,9 +492,9 @@ impl ChatWidget {
|
||||
&mut self,
|
||||
event: codex_core::protocol::PatchApplyEndEvent,
|
||||
) {
|
||||
if event.success {
|
||||
self.add_to_history(history_cell::new_patch_apply_success(event.stdout));
|
||||
} else {
|
||||
// If the patch was successful, just let the "Edited" block stand.
|
||||
// Otherwise, add a failure block.
|
||||
if !event.success {
|
||||
self.add_to_history(history_cell::new_patch_apply_failure(event.stderr));
|
||||
}
|
||||
}
|
||||
@@ -512,6 +520,7 @@ impl ChatWidget {
|
||||
self.add_to_history(history_cell::new_patch_event(
|
||||
PatchEventType::ApprovalRequest,
|
||||
ev.changes.clone(),
|
||||
&self.config.cwd,
|
||||
));
|
||||
|
||||
let request = ApprovalRequest::ApplyPatch {
|
||||
@@ -532,19 +541,28 @@ impl ChatWidget {
|
||||
parsed_cmd: ev.parsed_cmd.clone(),
|
||||
},
|
||||
);
|
||||
// Accumulate parsed commands into a single active Exec cell so they stack
|
||||
match self.active_exec_cell.as_mut() {
|
||||
Some(exec) => {
|
||||
exec.parsed.extend(ev.parsed_cmd);
|
||||
}
|
||||
_ => {
|
||||
let include_header = !self.last_history_was_exec;
|
||||
if let Some(exec) = &self.active_exec_cell {
|
||||
if let Some(new_exec) = exec.with_added_call(
|
||||
ev.call_id.clone(),
|
||||
ev.command.clone(),
|
||||
ev.parsed_cmd.clone(),
|
||||
) {
|
||||
self.active_exec_cell = Some(new_exec);
|
||||
} else {
|
||||
// Make a new cell.
|
||||
self.flush_active_exec_cell();
|
||||
self.active_exec_cell = Some(history_cell::new_active_exec_command(
|
||||
ev.command,
|
||||
ev.parsed_cmd,
|
||||
include_header,
|
||||
ev.call_id.clone(),
|
||||
ev.command.clone(),
|
||||
ev.parsed_cmd.clone(),
|
||||
));
|
||||
}
|
||||
} else {
|
||||
self.active_exec_cell = Some(history_cell::new_active_exec_command(
|
||||
ev.call_id.clone(),
|
||||
ev.command.clone(),
|
||||
ev.parsed_cmd.clone(),
|
||||
));
|
||||
}
|
||||
|
||||
// Request a redraw so the working header and command list are visible immediately.
|
||||
@@ -574,7 +592,7 @@ impl ChatWidget {
|
||||
Constraint::Max(
|
||||
self.active_exec_cell
|
||||
.as_ref()
|
||||
.map_or(0, |c| c.desired_height(area.width)),
|
||||
.map_or(0, |c| c.desired_height(area.width) + 1),
|
||||
),
|
||||
Constraint::Min(self.bottom_pane.desired_height(area.width)),
|
||||
])
|
||||
@@ -616,13 +634,11 @@ impl ChatWidget {
|
||||
last_token_usage: TokenUsage::default(),
|
||||
stream: StreamController::new(config),
|
||||
running_commands: HashMap::new(),
|
||||
pending_exec_completions: Vec::new(),
|
||||
task_complete_pending: false,
|
||||
interrupts: InterruptManager::new(),
|
||||
reasoning_buffer: String::new(),
|
||||
full_reasoning_buffer: String::new(),
|
||||
session_id: None,
|
||||
last_history_was_exec: false,
|
||||
queued_user_messages: VecDeque::new(),
|
||||
show_welcome_banner: true,
|
||||
}
|
||||
@@ -662,13 +678,11 @@ impl ChatWidget {
|
||||
last_token_usage: TokenUsage::default(),
|
||||
stream: StreamController::new(config),
|
||||
running_commands: HashMap::new(),
|
||||
pending_exec_completions: Vec::new(),
|
||||
task_complete_pending: false,
|
||||
interrupts: InterruptManager::new(),
|
||||
reasoning_buffer: String::new(),
|
||||
full_reasoning_buffer: String::new(),
|
||||
session_id: None,
|
||||
last_history_was_exec: false,
|
||||
queued_user_messages: VecDeque::new(),
|
||||
show_welcome_banner: false,
|
||||
}
|
||||
@@ -679,7 +693,7 @@ impl ChatWidget {
|
||||
+ self
|
||||
.active_exec_cell
|
||||
.as_ref()
|
||||
.map_or(0, |c| c.desired_height(width))
|
||||
.map_or(0, |c| c.desired_height(width) + 1)
|
||||
}
|
||||
|
||||
pub(crate) fn handle_key_event(&mut self, key_event: KeyEvent) {
|
||||
@@ -755,7 +769,7 @@ impl ChatWidget {
|
||||
fn dispatch_command(&mut self, cmd: SlashCommand) {
|
||||
if !cmd.available_during_task() && self.bottom_pane.is_task_running() {
|
||||
let message = format!(
|
||||
"'/'{}' is disabled while a task is in progress.",
|
||||
"'/{}' is disabled while a task is in progress.",
|
||||
cmd.command()
|
||||
);
|
||||
self.add_to_history(history_cell::new_error_event(message));
|
||||
@@ -880,18 +894,15 @@ impl ChatWidget {
|
||||
|
||||
fn flush_active_exec_cell(&mut self) {
|
||||
if let Some(active) = self.active_exec_cell.take() {
|
||||
self.last_history_was_exec = true;
|
||||
self.app_event_tx
|
||||
.send(AppEvent::InsertHistoryCell(Box::new(active)));
|
||||
}
|
||||
}
|
||||
|
||||
fn add_to_history(&mut self, cell: impl HistoryCell + 'static) {
|
||||
// Only break exec grouping if the cell renders visible lines.
|
||||
let has_display_lines = !cell.display_lines().is_empty();
|
||||
self.flush_active_exec_cell();
|
||||
if has_display_lines {
|
||||
self.last_history_was_exec = false;
|
||||
if !cell.display_lines(u16::MAX).is_empty() {
|
||||
// Only break exec grouping if the cell renders visible lines.
|
||||
self.flush_active_exec_cell();
|
||||
}
|
||||
self.app_event_tx
|
||||
.send(AppEvent::InsertHistoryCell(Box::new(cell)));
|
||||
@@ -989,8 +1000,10 @@ impl ChatWidget {
|
||||
EventMsg::McpToolCallBegin(ev) => self.on_mcp_tool_call_begin(ev),
|
||||
EventMsg::McpToolCallEnd(ev) => self.on_mcp_tool_call_end(ev),
|
||||
EventMsg::WebSearchBegin(ev) => self.on_web_search_begin(ev),
|
||||
EventMsg::WebSearchEnd(ev) => self.on_web_search_end(ev),
|
||||
EventMsg::GetHistoryEntryResponse(ev) => self.on_get_history_entry_response(ev),
|
||||
EventMsg::McpListToolsResponse(ev) => self.on_list_mcp_tools(ev),
|
||||
EventMsg::ListCustomPromptsResponse(ev) => self.on_list_custom_prompts(ev),
|
||||
EventMsg::ShutdownComplete => self.on_shutdown_complete(),
|
||||
EventMsg::TurnDiff(TurnDiffEvent { unified_diff }) => self.on_turn_diff(unified_diff),
|
||||
EventMsg::BackgroundEvent(BackgroundEventEvent { message }) => {
|
||||
@@ -1015,7 +1028,6 @@ impl ChatWidget {
|
||||
let cell = cell.into_failed();
|
||||
// Insert finalized exec into history and keep grouping consistent.
|
||||
self.add_to_history(cell);
|
||||
self.last_history_was_exec = true;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1070,6 +1082,7 @@ impl ChatWidget {
|
||||
let is_current = preset.model == current_model && preset.effort == current_effort;
|
||||
let model_slug = preset.model.to_string();
|
||||
let effort = preset.effort;
|
||||
let current_model = current_model.clone();
|
||||
let actions: Vec<SelectionAction> = vec![Box::new(move |tx| {
|
||||
tx.send(AppEvent::CodexOp(Op::OverrideTurnContext {
|
||||
cwd: None,
|
||||
@@ -1081,6 +1094,13 @@ impl ChatWidget {
|
||||
}));
|
||||
tx.send(AppEvent::UpdateModel(model_slug.clone()));
|
||||
tx.send(AppEvent::UpdateReasoningEffort(effort));
|
||||
tracing::info!(
|
||||
"New model: {}, New effort: {}, Current model: {}, Current effort: {}",
|
||||
model_slug.clone(),
|
||||
effort,
|
||||
current_model,
|
||||
current_effort
|
||||
);
|
||||
})];
|
||||
items.push(SelectionItem {
|
||||
name,
|
||||
@@ -1220,6 +1240,13 @@ impl ChatWidget {
|
||||
self.add_to_history(history_cell::new_mcp_tools_output(&self.config, ev.tools));
|
||||
}
|
||||
|
||||
fn on_list_custom_prompts(&mut self, ev: ListCustomPromptsResponseEvent) {
|
||||
let len = ev.custom_prompts.len();
|
||||
debug!("received {len} custom prompts");
|
||||
// Forward to bottom pane so the slash popup can show them now.
|
||||
self.bottom_pane.set_custom_prompts(ev.custom_prompts);
|
||||
}
|
||||
|
||||
/// Programmatically submit a user text message as if typed in the
|
||||
/// composer. The text will be added to conversation history and sent to
|
||||
/// the agent.
|
||||
@@ -1264,6 +1291,9 @@ impl WidgetRef for &ChatWidget {
|
||||
let [active_cell_area, bottom_pane_area] = self.layout_areas(area);
|
||||
(&self.bottom_pane).render(bottom_pane_area, buf);
|
||||
if let Some(cell) = &self.active_exec_cell {
|
||||
let mut active_cell_area = active_cell_area;
|
||||
active_cell_area.y += 1;
|
||||
active_cell_area.height -= 1;
|
||||
cell.render_ref(active_cell_area, buf);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -0,0 +1,5 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: lines_to_single_string(&approved_lines)
|
||||
---
|
||||
• Change Approved foo.txt (+1 -0)
|
||||
@@ -0,0 +1,6 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: lines_to_single_string(&proposed_lines)
|
||||
---
|
||||
• Proposed Change foo.txt (+1 -0)
|
||||
1 +hello
|
||||
@@ -3,6 +3,7 @@ source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 728
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" "
|
||||
"? Codex wants to run echo hello world "
|
||||
" "
|
||||
"Model wants to run a command "
|
||||
|
||||
@@ -0,0 +1,11 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" "
|
||||
"? Codex wants to run echo hello world "
|
||||
" "
|
||||
"▌Allow command? "
|
||||
"▌ Yes Always No, provide feedback "
|
||||
"▌ Approve and run the command "
|
||||
" "
|
||||
@@ -1,8 +1,9 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 763
|
||||
assertion_line: 794
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" "
|
||||
"The model wants to apply changes "
|
||||
" "
|
||||
"This will grant write access to /tmp for the remainder of this session. "
|
||||
|
||||
@@ -0,0 +1,57 @@
---
source: tui/src/chatwidget/tests.rs
expression: visible_after
---
> I’m going to scan the workspace and Cargo manifests to see build profiles and
dependencies that impact binary size. Then I’ll summarize the main causes.

• Explored
└ List ls -la
Read Cargo.toml

• Ran
└ for d in ansi-escape apply-patch arg0 cli common core exec execpolicy
file-search linux-sandbox login mcp-client mcp-server mcp-types ollama
tui; do echo "--- $d/Cargo.toml"; sed -n '1,200p' $d/Cargo.toml; echo;
done

• Explored
└ Read Cargo.toml

> Here’s what’s driving size in this workspace’s binaries.

Main Causes

- Static linking style: Each bin (codex, codex-tui, codex-exec,
codex-mcp-server, etc.) statically links its full dependency graph, so common
code isn’t shared at runtime across executables.
- Heavy deps (HTTP/TLS): reqwest brings in Hyper, HTTP/2, compressors,
and a TLS stack (rustls by default; OpenSSL on musl). In core, login, tui,
and ollama you enable reqwest with json/stream, which still pulls a large
transitive set.
- Image/terminal stack: codex-tui includes image (with jpeg), ratatui,
crossterm, and ratatui-image, which together contribute significant code size
(decoders, pixel ops, terminal backends).
- Parsers/VMs: tree-sitter + tree-sitter-bash (in core and apply-patch) and
starlark (in execpolicy) include sizeable parser/VM tables and runtimes.
- Tokio runtime: Broad tokio features (rt-multi-thread, macros, process,
signal) across many crates inflate code size even if only subsets are used
per bin.
- Panic + backtraces: Default panic = unwind and backtrace support keep
unwinding tables and symbols that add weight.
- Per-target OpenSSL (musl): For *-unknown-linux-musl, core enables
openssl-sys with vendored, compiling OpenSSL into the binary—this adds
multiple megabytes per executable.

Build-Mode Notes

- Release settings: You use lto = "fat" and codegen-units = 1 (good for size),
but strip = "symbols" keeps debuginfo. Debuginfo is often the largest single
contributor; if you build in release with that setting, binaries can still
be large.
- Debug builds: cargo build (dev profile) includes full debuginfo, no LTO, and
assertions—outputs are much larger than cargo build --release.

If you want, I can outline targeted trims (e.g., strip = "debuginfo",
opt-level = "z", panic abort, tighter tokio/reqwest features) and estimate
impact per binary.

@@ -1,7 +1,6 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 779
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"▌ Ask Codex to do anything "
|
||||
" "
|
||||
" ⏎ send Ctrl+J newline Ctrl+T transc"
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 779
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" "
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 807
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" Thinking (0s • Esc to interrupt) "
|
||||
"▌ Ask Codex to do anything "
|
||||
" "
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 807
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" "
|
||||
|
||||
@@ -0,0 +1,15 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: visual
|
||||
---
|
||||
> I’m going to search the repo for where “Change Approved” is rendered to update
|
||||
that view.
|
||||
|
||||
• Explored
|
||||
└ Search Change Approved
|
||||
Read diff_render.rs
|
||||
|
||||
Investigating rendering code (0s • Esc to interrupt)
|
||||
|
||||
▌Summarize recent commits
|
||||
⏎ send Ctrl+J newline Ctrl+T transcript Ctrl+C quit
|
||||
@@ -2,5 +2,4 @@
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: combined
|
||||
---
|
||||
codex
|
||||
Here is the result.
|
||||
> Here is the result.
|
||||
|
||||
@@ -0,0 +1,5 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: blob
|
||||
---
|
||||
🖐 '/model' is disabled while a task is in progress.
|
||||
@@ -0,0 +1,6 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: blob1
|
||||
---
|
||||
⠋ Exploring
|
||||
└ List ls -la
|
||||
@@ -0,0 +1,6 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: blob2
|
||||
---
|
||||
• Explored
|
||||
└ List ls -la
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: blob3
|
||||
---
|
||||
⠋ Exploring
|
||||
└ List ls -la
|
||||
Read foo.txt
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: blob4
|
||||
---
|
||||
• Explored
|
||||
└ List ls -la
|
||||
Read foo.txt
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: blob5
|
||||
---
|
||||
• Explored
|
||||
└ List ls -la
|
||||
Read foo.txt
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: blob6
|
||||
---
|
||||
• Explored
|
||||
└ List ls -la
|
||||
Read foo.txt, bar.txt
|
||||
@@ -2,5 +2,4 @@
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: combined
|
||||
---
|
||||
codex
|
||||
Here is the result.
|
||||
> Here is the result.
|
||||
|
||||
@@ -2,5 +2,4 @@
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
expression: exec_blob
|
||||
---
|
||||
>_
|
||||
✗ ⌨️ sleep 1
|
||||
• Ran sleep 1
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 878
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" "
|
||||
|
||||
@@ -1,8 +1,9 @@
|
||||
---
|
||||
source: tui/src/chatwidget/tests.rs
|
||||
assertion_line: 851
|
||||
assertion_line: 921
|
||||
expression: terminal.backend()
|
||||
---
|
||||
" "
|
||||
"? Codex wants to run echo 'hello world' "
|
||||
" "
|
||||
"Codex wants to run a command "
|
||||
|
||||
File diff suppressed because it is too large
@@ -1 +0,0 @@
|
||||
pub(crate) const DEFAULT_WRAP_COLS: u16 = 80;
|
||||
@@ -1,16 +1,17 @@
|
||||
use crossterm::terminal;
|
||||
use ratatui::style::Color;
|
||||
use ratatui::style::Modifier;
|
||||
use ratatui::style::Style;
|
||||
use ratatui::style::Stylize;
|
||||
use ratatui::text::Line as RtLine;
|
||||
use ratatui::text::Span as RtSpan;
|
||||
use std::collections::HashMap;
|
||||
use std::path::Path;
|
||||
use std::path::PathBuf;
|
||||
|
||||
use crate::common::DEFAULT_WRAP_COLS;
|
||||
use codex_core::protocol::FileChange;
|
||||
|
||||
use crate::exec_command::relativize_to_home;
|
||||
use crate::history_cell::PatchEventType;
|
||||
use codex_core::git_info::get_git_repo_root;
|
||||
use codex_core::protocol::FileChange;
|
||||
|
||||
const SPACES_AFTER_LINE_NUMBER: usize = 6;
|
||||
|
||||
@@ -22,205 +23,199 @@ enum DiffLineType {
|
||||
}
|
||||
|
||||
pub(crate) fn create_diff_summary(
|
||||
title: &str,
|
||||
changes: &HashMap<PathBuf, FileChange>,
|
||||
event_type: PatchEventType,
|
||||
cwd: &Path,
|
||||
wrap_cols: usize,
|
||||
) -> Vec<RtLine<'static>> {
|
||||
struct FileSummary {
|
||||
display_path: String,
|
||||
added: usize,
|
||||
removed: usize,
|
||||
}
|
||||
|
||||
let count_from_unified = |diff: &str| -> (usize, usize) {
|
||||
if let Ok(patch) = diffy::Patch::from_str(diff) {
|
||||
patch
|
||||
.hunks()
|
||||
.iter()
|
||||
.flat_map(|h| h.lines())
|
||||
.fold((0, 0), |(a, d), l| match l {
|
||||
diffy::Line::Insert(_) => (a + 1, d),
|
||||
diffy::Line::Delete(_) => (a, d + 1),
|
||||
_ => (a, d),
|
||||
})
|
||||
} else {
|
||||
// Fallback: manual scan to preserve counts even for unparsable diffs
|
||||
let mut adds = 0usize;
|
||||
let mut dels = 0usize;
|
||||
for l in diff.lines() {
|
||||
if l.starts_with("+++") || l.starts_with("---") || l.starts_with("@@") {
|
||||
continue;
|
||||
}
|
||||
match l.as_bytes().first() {
|
||||
Some(b'+') => adds += 1,
|
||||
Some(b'-') => dels += 1,
|
||||
_ => {}
|
||||
}
|
||||
let rows = collect_rows(changes);
|
||||
let header_kind = match event_type {
|
||||
PatchEventType::ApplyBegin { auto_approved } => {
|
||||
if auto_approved {
|
||||
HeaderKind::Edited
|
||||
} else {
|
||||
HeaderKind::ChangeApproved
|
||||
}
|
||||
(adds, dels)
|
||||
}
|
||||
PatchEventType::ApprovalRequest => HeaderKind::ProposedChange,
|
||||
};
|
||||
|
||||
let mut files: Vec<FileSummary> = Vec::new();
|
||||
for (path, change) in changes.iter() {
|
||||
match change {
|
||||
FileChange::Add { content } => files.push(FileSummary {
|
||||
display_path: path.display().to_string(),
|
||||
added: content.lines().count(),
|
||||
removed: 0,
|
||||
}),
|
||||
FileChange::Delete => files.push(FileSummary {
|
||||
display_path: path.display().to_string(),
|
||||
added: 0,
|
||||
removed: std::fs::read_to_string(path)
|
||||
.ok()
|
||||
.map(|s| s.lines().count())
|
||||
.unwrap_or(0),
|
||||
}),
|
||||
FileChange::Update {
|
||||
unified_diff,
|
||||
move_path,
|
||||
} => {
|
||||
let (added, removed) = count_from_unified(unified_diff);
|
||||
let display_path = if let Some(new_path) = move_path {
|
||||
format!("{} → {}", path.display(), new_path.display())
|
||||
} else {
|
||||
path.display().to_string()
|
||||
};
|
||||
files.push(FileSummary {
|
||||
display_path,
|
||||
added,
|
||||
removed,
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let file_count = files.len();
|
||||
let total_added: usize = files.iter().map(|f| f.added).sum();
|
||||
let total_removed: usize = files.iter().map(|f| f.removed).sum();
|
||||
let noun = if file_count == 1 { "file" } else { "files" };
|
||||
|
||||
let mut out: Vec<RtLine<'static>> = Vec::new();
|
||||
|
||||
// Header
|
||||
let mut header_spans: Vec<RtSpan<'static>> = Vec::new();
|
||||
header_spans.push(RtSpan::styled(
|
||||
title.to_owned(),
|
||||
Style::default()
|
||||
.fg(Color::Magenta)
|
||||
.add_modifier(Modifier::BOLD),
|
||||
));
|
||||
header_spans.push(RtSpan::raw(" to "));
|
||||
header_spans.push(RtSpan::raw(format!("{file_count} {noun} ")));
|
||||
header_spans.push(RtSpan::raw("("));
|
||||
header_spans.push(RtSpan::styled(
|
||||
format!("+{total_added}"),
|
||||
Style::default().fg(Color::Green),
|
||||
));
|
||||
header_spans.push(RtSpan::raw(" "));
|
||||
header_spans.push(RtSpan::styled(
|
||||
format!("-{total_removed}"),
|
||||
Style::default().fg(Color::Red),
|
||||
));
|
||||
header_spans.push(RtSpan::raw(")"));
|
||||
out.push(RtLine::from(header_spans));
|
||||
|
||||
// Dimmed per-file lines with prefix
|
||||
for (idx, f) in files.iter().enumerate() {
|
||||
let mut spans: Vec<RtSpan<'static>> = Vec::new();
|
||||
spans.push(RtSpan::raw(f.display_path.clone()));
|
||||
// Show per-file +/- counts only when there are multiple files
|
||||
if file_count > 1 {
|
||||
spans.push(RtSpan::raw(" ("));
|
||||
spans.push(RtSpan::styled(
|
||||
format!("+{}", f.added),
|
||||
Style::default().fg(Color::Green),
|
||||
));
|
||||
spans.push(RtSpan::raw(" "));
|
||||
spans.push(RtSpan::styled(
|
||||
format!("-{}", f.removed),
|
||||
Style::default().fg(Color::Red),
|
||||
));
|
||||
spans.push(RtSpan::raw(")"));
|
||||
}
|
||||
|
||||
let mut line = RtLine::from(spans);
|
||||
let prefix = if idx == 0 { " └ " } else { " " };
|
||||
line.spans.insert(0, prefix.into());
|
||||
line.spans
|
||||
.iter_mut()
|
||||
.for_each(|span| span.style = span.style.add_modifier(Modifier::DIM));
|
||||
out.push(line);
|
||||
}
|
||||
|
||||
let show_details = matches!(
|
||||
event_type,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true
|
||||
} | PatchEventType::ApprovalRequest
|
||||
);
|
||||
|
||||
if show_details {
|
||||
out.extend(render_patch_details(changes));
|
||||
}
|
||||
|
||||
out
|
||||
render_changes_block(rows, wrap_cols, header_kind, cwd)
|
||||
}
|
||||
|
||||
fn render_patch_details(changes: &HashMap<PathBuf, FileChange>) -> Vec<RtLine<'static>> {
|
||||
let mut out: Vec<RtLine<'static>> = Vec::new();
|
||||
let term_cols: usize = terminal::size()
|
||||
.map(|(w, _)| w as usize)
|
||||
.unwrap_or(DEFAULT_WRAP_COLS.into());
|
||||
// Shared row for per-file presentation
|
||||
#[derive(Clone)]
|
||||
struct Row {
|
||||
#[allow(dead_code)]
|
||||
path: PathBuf,
|
||||
move_path: Option<PathBuf>,
|
||||
added: usize,
|
||||
removed: usize,
|
||||
change: FileChange,
|
||||
}
|
||||
|
||||
for (index, (path, change)) in changes.iter().enumerate() {
|
||||
let is_first_file = index == 0;
|
||||
// Add separator only between files (not at the very start)
|
||||
if !is_first_file {
|
||||
out.push(RtLine::from(vec![
|
||||
RtSpan::raw(" "),
|
||||
RtSpan::styled("...", style_dim()),
|
||||
]));
|
||||
fn collect_rows(changes: &HashMap<PathBuf, FileChange>) -> Vec<Row> {
|
||||
let mut rows: Vec<Row> = Vec::new();
|
||||
for (path, change) in changes.iter() {
|
||||
let (added, removed) = match change {
|
||||
FileChange::Add { content } => (content.lines().count(), 0),
|
||||
FileChange::Delete { content } => (0, content.lines().count()),
|
||||
FileChange::Update { unified_diff, .. } => calculate_add_remove_from_diff(unified_diff),
|
||||
};
|
||||
let move_path = match change {
|
||||
FileChange::Update {
|
||||
move_path: Some(new),
|
||||
..
|
||||
} => Some(new.clone()),
|
||||
_ => None,
|
||||
};
|
||||
rows.push(Row {
|
||||
path: path.clone(),
|
||||
move_path,
|
||||
added,
|
||||
removed,
|
||||
change: change.clone(),
|
||||
});
|
||||
}
|
||||
rows.sort_by_key(|r| r.path.clone());
|
||||
rows
|
||||
}
|
||||
|
||||
enum HeaderKind {
|
||||
ProposedChange,
|
||||
Edited,
|
||||
ChangeApproved,
|
||||
}
|
||||
|
||||
fn render_changes_block(
|
||||
rows: Vec<Row>,
|
||||
wrap_cols: usize,
|
||||
header_kind: HeaderKind,
|
||||
cwd: &Path,
|
||||
) -> Vec<RtLine<'static>> {
|
||||
let mut out: Vec<RtLine<'static>> = Vec::new();
|
||||
let term_cols = wrap_cols;
|
||||
|
||||
fn render_line_count_summary(added: usize, removed: usize) -> Vec<RtSpan<'static>> {
|
||||
let mut spans = Vec::new();
|
||||
spans.push("(".into());
|
||||
spans.push(format!("+{added}").green());
|
||||
spans.push(" ".into());
|
||||
spans.push(format!("-{removed}").red());
|
||||
spans.push(")".into());
|
||||
spans
|
||||
}
|
||||
|
||||
let render_path = |row: &Row| -> Vec<RtSpan<'static>> {
|
||||
let mut spans = Vec::new();
|
||||
spans.push(display_path_for(&row.path, cwd).into());
|
||||
if let Some(move_path) = &row.move_path {
|
||||
spans.push(format!(" → {}", display_path_for(move_path, cwd)).into());
|
||||
}
|
||||
match change {
|
||||
spans
|
||||
};
|
||||
|
||||
// Header
|
||||
let total_added: usize = rows.iter().map(|r| r.added).sum();
|
||||
let total_removed: usize = rows.iter().map(|r| r.removed).sum();
|
||||
let file_count = rows.len();
|
||||
let noun = if file_count == 1 { "file" } else { "files" };
|
||||
let mut header_spans: Vec<RtSpan<'static>> = vec!["• ".into()];
|
||||
match header_kind {
|
||||
HeaderKind::ProposedChange => {
|
||||
header_spans.push("Proposed Change".bold());
|
||||
if let [row] = &rows[..] {
|
||||
header_spans.push(" ".into());
|
||||
header_spans.extend(render_path(row));
|
||||
header_spans.push(" ".into());
|
||||
header_spans.extend(render_line_count_summary(row.added, row.removed));
|
||||
} else {
|
||||
header_spans.push(format!(" to {file_count} {noun} ").into());
|
||||
header_spans.extend(render_line_count_summary(total_added, total_removed));
|
||||
}
|
||||
}
|
||||
HeaderKind::Edited => {
|
||||
if let [row] = &rows[..] {
|
||||
let verb = match &row.change {
|
||||
FileChange::Add { .. } => "Added",
|
||||
FileChange::Delete { .. } => "Deleted",
|
||||
_ => "Edited",
|
||||
};
|
||||
header_spans.push(verb.bold());
|
||||
header_spans.push(" ".into());
|
||||
header_spans.extend(render_path(row));
|
||||
header_spans.push(" ".into());
|
||||
header_spans.extend(render_line_count_summary(row.added, row.removed));
|
||||
} else {
|
||||
header_spans.push("Edited".bold());
|
||||
header_spans.push(format!(" {file_count} {noun} ").into());
|
||||
header_spans.extend(render_line_count_summary(total_added, total_removed));
|
||||
}
|
||||
}
|
||||
HeaderKind::ChangeApproved => {
|
||||
header_spans.push("Change Approved".bold());
|
||||
if let [row] = &rows[..] {
|
||||
header_spans.push(" ".into());
|
||||
header_spans.extend(render_path(row));
|
||||
header_spans.push(" ".into());
|
||||
header_spans.extend(render_line_count_summary(row.added, row.removed));
|
||||
} else {
|
||||
header_spans.push(format!(" {file_count} {noun} ").into());
|
||||
header_spans.extend(render_line_count_summary(total_added, total_removed));
|
||||
}
|
||||
}
|
||||
}
|
||||
out.push(RtLine::from(header_spans));
|
||||
|
||||
// For Change Approved, we only show the header summary and no per-file/diff details.
|
||||
if matches!(header_kind, HeaderKind::ChangeApproved) {
|
||||
return out;
|
||||
}
|
||||
|
||||
for (idx, r) in rows.into_iter().enumerate() {
|
||||
// Insert a blank separator between file chunks (except before the first)
|
||||
if idx > 0 {
|
||||
out.push("".into());
|
||||
}
|
||||
// File header line (skip when single-file header already shows the name)
|
||||
let skip_file_header =
|
||||
matches!(header_kind, HeaderKind::ProposedChange | HeaderKind::Edited)
|
||||
&& file_count == 1;
|
||||
if !skip_file_header {
|
||||
let mut header: Vec<RtSpan<'static>> = Vec::new();
|
||||
header.push(" └ ".dim());
|
||||
header.extend(render_path(&r));
|
||||
header.push(" ".into());
|
||||
header.extend(render_line_count_summary(r.added, r.removed));
|
||||
out.push(RtLine::from(header));
|
||||
}
|
||||
|
||||
match r.change {
|
||||
FileChange::Add { content } => {
|
||||
for (i, raw) in content.lines().enumerate() {
|
||||
let ln = i + 1;
|
||||
out.extend(push_wrapped_diff_line(
|
||||
ln,
|
||||
i + 1,
|
||||
DiffLineType::Insert,
|
||||
raw,
|
||||
term_cols,
|
||||
));
|
||||
}
|
||||
}
|
||||
FileChange::Delete => {
|
||||
let original = std::fs::read_to_string(path).unwrap_or_default();
|
||||
for (i, raw) in original.lines().enumerate() {
|
||||
let ln = i + 1;
|
||||
FileChange::Delete { content } => {
|
||||
for (i, raw) in content.lines().enumerate() {
|
||||
out.extend(push_wrapped_diff_line(
|
||||
ln,
|
||||
i + 1,
|
||||
DiffLineType::Delete,
|
||||
raw,
|
||||
term_cols,
|
||||
));
|
||||
}
|
||||
}
|
||||
FileChange::Update {
|
||||
unified_diff,
|
||||
move_path: _,
|
||||
} => {
|
||||
if let Ok(patch) = diffy::Patch::from_str(unified_diff) {
|
||||
FileChange::Update { unified_diff, .. } => {
|
||||
if let Ok(patch) = diffy::Patch::from_str(&unified_diff) {
|
||||
let mut is_first_hunk = true;
|
||||
for h in patch.hunks() {
|
||||
// Render a simple separator between non-contiguous hunks
|
||||
// instead of diff-style @@ headers.
|
||||
if !is_first_hunk {
|
||||
out.push(RtLine::from(vec![
|
||||
RtSpan::raw(" "),
|
||||
RtSpan::styled("⋮", style_dim()),
|
||||
]));
|
||||
out.push(RtLine::from(vec![" ".into(), "⋮".dim()]));
|
||||
}
|
||||
is_first_hunk = false;
|
||||
|
||||
@@ -265,13 +260,41 @@ fn render_patch_details(changes: &HashMap<PathBuf, FileChange>) -> Vec<RtLine<'s
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
out.push(RtLine::from(RtSpan::raw("")));
|
||||
}
|
||||
|
||||
out
|
||||
}
|
||||
|
||||
fn display_path_for(path: &Path, cwd: &Path) -> String {
let path_in_same_repo = match (get_git_repo_root(cwd), get_git_repo_root(path)) {
(Some(cwd_repo), Some(path_repo)) => cwd_repo == path_repo,
_ => false,
};
let chosen = if path_in_same_repo {
pathdiff::diff_paths(path, cwd).unwrap_or_else(|| path.to_path_buf())
} else {
relativize_to_home(path).unwrap_or_else(|| path.to_path_buf())
};
chosen.display().to_string()
}

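display_path_for prefers a cwd-relative path when the target and the cwd sit in the same git repository, and otherwise falls back to a home-relative path. A minimal standalone sketch of the cwd-relative half using the pathdiff crate (the helper name and sample paths are illustrative, not part of this change):

use std::path::{Path, PathBuf};

// Illustrative helper: relativize `path` against `base` when possible,
// otherwise fall back to the original path unchanged.
fn relative_or_original(path: &Path, base: &Path) -> PathBuf {
    pathdiff::diff_paths(path, base).unwrap_or_else(|| path.to_path_buf())
}

fn main() {
    let shown = relative_or_original(Path::new("/repo/src/lib.rs"), Path::new("/repo"));
    assert_eq!(shown, PathBuf::from("src/lib.rs"));
    println!("{}", shown.display());
}
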
fn calculate_add_remove_from_diff(diff: &str) -> (usize, usize) {
if let Ok(patch) = diffy::Patch::from_str(diff) {
patch
.hunks()
.iter()
.flat_map(|h| h.lines())
.fold((0, 0), |(a, d), l| match l {
diffy::Line::Insert(_) => (a + 1, d),
diffy::Line::Delete(_) => (a, d + 1),
diffy::Line::Context(_) => (a, d),
})
} else {
// For unparsable diffs, return 0 for both counts.
(0, 0)
}
}

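calculate_add_remove_from_diff tallies inserted and deleted lines by walking the parsed hunks. A self-contained sketch of the same counting approach against a diff produced with diffy (the sample strings are made up for illustration):

fn main() {
    // Build a unified diff, then count its insertions and deletions the same way.
    let patch_text = diffy::create_patch("one\ntwo\n", "one\ntwo changed\nthree\n").to_string();
    let patch = diffy::Patch::from_str(&patch_text).expect("well-formed unified diff");
    let (added, removed) = patch
        .hunks()
        .iter()
        .flat_map(|h| h.lines())
        .fold((0usize, 0usize), |(a, d), line| match line {
            diffy::Line::Insert(_) => (a + 1, d),
            diffy::Line::Delete(_) => (a, d + 1),
            diffy::Line::Context(_) => (a, d),
        });
    assert_eq!((added, removed), (2, 1));
}
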
fn push_wrapped_diff_line(
line_number: usize,
kind: DiffLineType,
@@ -290,10 +313,10 @@ fn push_wrapped_diff_line(
let prefix_cols = indent.len() + ln_str.len() + gap_after_ln;

let mut first = true;
let (sign_opt, line_style) = match kind {
DiffLineType::Insert => (Some('+'), Some(style_add())),
DiffLineType::Delete => (Some('-'), Some(style_del())),
DiffLineType::Context => (None, None),
let (sign_char, line_style) = match kind {
DiffLineType::Insert => ('+', style_add()),
DiffLineType::Delete => ('-', style_del()),
DiffLineType::Context => (' ', style_context()),
};
let mut lines: Vec<RtLine<'static>> = Vec::new();

@@ -301,9 +324,7 @@ fn push_wrapped_diff_line(
// Fit the content for the current terminal row:
// compute how many columns are available after the prefix, then split
// at a UTF-8 character boundary so this row's chunk fits exactly.
let available_content_cols = term_cols
.saturating_sub(if first { prefix_cols + 1 } else { prefix_cols })
.max(1);
let available_content_cols = term_cols.saturating_sub(prefix_cols + 1).max(1);
let split_at_byte_index = remaining_text
.char_indices()
.nth(available_content_cols)
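The comment a few lines above is the load-bearing detail: the wrap point is chosen with char_indices so a row never splits inside a multi-byte UTF-8 character. A small standalone sketch of that split (the helper name and sample text are illustrative):

// Split `text` so the first piece holds at most `max_chars` characters,
// cutting only at a valid UTF-8 character boundary.
fn split_at_char_boundary(text: &str, max_chars: usize) -> (&str, &str) {
    let split_at_byte_index = text
        .char_indices()
        .nth(max_chars)
        .map(|(byte_index, _)| byte_index)
        .unwrap_or(text.len());
    text.split_at(split_at_byte_index)
}

fn main() {
    let (first_row, rest) = split_at_char_boundary("héllo wörld", 6);
    assert_eq!(first_row, "héllo ");
    assert_eq!(rest, "wörld");
}
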
@@ -313,41 +334,22 @@ fn push_wrapped_diff_line(
|
||||
remaining_text = rest;
|
||||
|
||||
if first {
|
||||
let mut spans: Vec<RtSpan<'static>> = Vec::new();
|
||||
spans.push(RtSpan::raw(indent));
|
||||
spans.push(RtSpan::styled(ln_str.clone(), style_dim()));
|
||||
spans.push(RtSpan::raw(" ".repeat(gap_after_ln)));
|
||||
// Always include a sign character at the start of the displayed chunk
|
||||
// ('+' for insert, '-' for delete, ' ' for context) so gutters align.
|
||||
let sign_char = sign_opt.unwrap_or(' ');
|
||||
let display_chunk = format!("{sign_char}{chunk}");
|
||||
let content_span = match line_style {
|
||||
Some(style) => RtSpan::styled(display_chunk, style),
|
||||
None => RtSpan::raw(display_chunk),
|
||||
};
|
||||
spans.push(content_span);
|
||||
let mut line = RtLine::from(spans);
|
||||
if let Some(style) = line_style {
|
||||
line.style = line.style.patch(style);
|
||||
}
|
||||
lines.push(line);
|
||||
// Build gutter (indent + line number + spacing) as a dimmed span
|
||||
let gutter = format!("{indent}{ln_str}{}", " ".repeat(gap_after_ln));
|
||||
// Content with a sign ('+'/'-'/' ') styled per diff kind
|
||||
let content = format!("{sign_char}{chunk}");
|
||||
lines.push(RtLine::from(vec![
|
||||
RtSpan::styled(gutter, style_gutter()),
|
||||
RtSpan::styled(content, line_style),
|
||||
]));
|
||||
first = false;
|
||||
} else {
|
||||
// Continuation lines keep a space for the sign column so content aligns
|
||||
let hang_prefix = format!(
|
||||
"{indent}{}{} ",
|
||||
" ".repeat(ln_str.len()),
|
||||
" ".repeat(gap_after_ln)
|
||||
);
|
||||
let content_span = match line_style {
|
||||
Some(style) => RtSpan::styled(chunk.to_string(), style),
|
||||
None => RtSpan::raw(chunk.to_string()),
|
||||
};
|
||||
let mut line = RtLine::from(vec![RtSpan::raw(hang_prefix), content_span]);
|
||||
if let Some(style) = line_style {
|
||||
line.style = line.style.patch(style);
|
||||
}
|
||||
lines.push(line);
|
||||
let gutter = format!("{indent}{} ", " ".repeat(ln_str.len() + gap_after_ln));
|
||||
lines.push(RtLine::from(vec![
|
||||
RtSpan::styled(gutter, style_gutter()),
|
||||
RtSpan::styled(chunk.to_string(), line_style),
|
||||
]));
|
||||
}
|
||||
if remaining_text.is_empty() {
|
||||
break;
|
||||
@@ -356,10 +358,14 @@ fn push_wrapped_diff_line(
|
||||
lines
|
||||
}
|
||||
|
||||
fn style_dim() -> Style {
|
||||
fn style_gutter() -> Style {
|
||||
Style::default().add_modifier(Modifier::DIM)
|
||||
}
|
||||
|
||||
fn style_context() -> Style {
|
||||
Style::default()
|
||||
}
|
||||
|
||||
fn style_add() -> Style {
|
||||
Style::default().fg(Color::Green)
|
||||
}
|
||||
@@ -378,6 +384,12 @@ mod tests {
|
||||
use ratatui::widgets::Paragraph;
|
||||
use ratatui::widgets::WidgetRef;
|
||||
use ratatui::widgets::Wrap;
|
||||
fn diff_summary_for_tests(
|
||||
changes: &HashMap<PathBuf, FileChange>,
|
||||
event_type: PatchEventType,
|
||||
) -> Vec<RtLine<'static>> {
|
||||
create_diff_summary(changes, event_type, &PathBuf::from("/"), 80)
|
||||
}
|
||||
|
||||
fn snapshot_lines(name: &str, lines: Vec<RtLine<'static>>, width: u16, height: u16) {
|
||||
let mut terminal = Terminal::new(TestBackend::new(width, height)).expect("terminal");
|
||||
@@ -391,6 +403,23 @@ mod tests {
|
||||
assert_snapshot!(name, terminal.backend());
|
||||
}
|
||||
|
||||
fn snapshot_lines_text(name: &str, lines: &[RtLine<'static>]) {
|
||||
// Convert Lines to plain text rows and trim trailing spaces so it's
|
||||
// easier to validate indentation visually in snapshots.
|
||||
let text = lines
|
||||
.iter()
|
||||
.map(|l| {
|
||||
l.spans
|
||||
.iter()
|
||||
.map(|s| s.content.as_ref())
|
||||
.collect::<String>()
|
||||
})
|
||||
.map(|s| s.trim_end().to_string())
|
||||
.collect::<Vec<_>>()
|
||||
.join("\n");
|
||||
assert_snapshot!(name, text);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_add_details() {
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
@@ -401,8 +430,7 @@ mod tests {
|
||||
},
|
||||
);
|
||||
|
||||
let lines =
|
||||
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
|
||||
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
|
||||
|
||||
snapshot_lines("add_details", lines, 80, 10);
|
||||
}
|
||||
@@ -423,8 +451,7 @@ mod tests {
|
||||
},
|
||||
);
|
||||
|
||||
let lines =
|
||||
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
|
||||
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
|
||||
|
||||
snapshot_lines("update_details_with_rename", lines, 80, 12);
|
||||
}
|
||||
@@ -435,11 +462,10 @@ mod tests {
|
||||
let long_line = "this is a very long line that should wrap across multiple terminal columns and continue";
|
||||
|
||||
// Call the wrapping function directly so we can precisely control the width
|
||||
let lines =
|
||||
push_wrapped_diff_line(1, DiffLineType::Insert, long_line, DEFAULT_WRAP_COLS.into());
|
||||
let lines = push_wrapped_diff_line(1, DiffLineType::Insert, long_line, 80);
|
||||
|
||||
// Render into a small terminal to capture the visual layout
|
||||
snapshot_lines("wrap_behavior_insert", lines, DEFAULT_WRAP_COLS + 10, 8);
|
||||
snapshot_lines("wrap_behavior_insert", lines, 90, 8);
|
||||
}
|
||||
|
||||
#[test]
|
||||
@@ -458,8 +484,7 @@ mod tests {
|
||||
},
|
||||
);
|
||||
|
||||
let lines =
|
||||
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
|
||||
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
|
||||
|
||||
snapshot_lines("single_line_replacement_counts", lines, 80, 8);
|
||||
}
|
||||
@@ -480,8 +505,7 @@ mod tests {
|
||||
},
|
||||
);
|
||||
|
||||
let lines =
|
||||
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
|
||||
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
|
||||
|
||||
snapshot_lines("blank_context_line", lines, 80, 10);
|
||||
}
|
||||
@@ -503,10 +527,232 @@ mod tests {
|
||||
},
|
||||
);
|
||||
|
||||
let lines =
|
||||
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
|
||||
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
|
||||
|
||||
// Height is large enough to show both hunks and the separator
|
||||
snapshot_lines("vertical_ellipsis_between_hunks", lines, 80, 16);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_update_block() {
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
let original = "line one\nline two\nline three\n";
|
||||
let modified = "line one\nline two changed\nline three\n";
|
||||
let patch = diffy::create_patch(original, modified).to_string();
|
||||
|
||||
changes.insert(
|
||||
PathBuf::from("example.txt"),
|
||||
FileChange::Update {
|
||||
unified_diff: patch,
|
||||
move_path: None,
|
||||
},
|
||||
);
|
||||
|
||||
for (name, auto_approved) in [
|
||||
("apply_update_block", true),
|
||||
("apply_update_block_manual", false),
|
||||
] {
|
||||
let lines =
|
||||
diff_summary_for_tests(&changes, PatchEventType::ApplyBegin { auto_approved });
|
||||
|
||||
snapshot_lines(name, lines, 80, 12);
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_update_with_rename_block() {
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
let original = "A\nB\nC\n";
|
||||
let modified = "A\nB changed\nC\n";
|
||||
let patch = diffy::create_patch(original, modified).to_string();
|
||||
|
||||
changes.insert(
|
||||
PathBuf::from("old_name.rs"),
|
||||
FileChange::Update {
|
||||
unified_diff: patch,
|
||||
move_path: Some(PathBuf::from("new_name.rs")),
|
||||
},
|
||||
);
|
||||
|
||||
let lines = diff_summary_for_tests(
|
||||
&changes,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true,
|
||||
},
|
||||
);
|
||||
|
||||
snapshot_lines("apply_update_with_rename_block", lines, 80, 12);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_multiple_files_block() {
|
||||
// Two files: one update and one add, to exercise combined header and per-file rows
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
|
||||
// File a.txt: single-line replacement (one delete, one insert)
|
||||
let patch_a = diffy::create_patch("one\n", "one changed\n").to_string();
|
||||
changes.insert(
|
||||
PathBuf::from("a.txt"),
|
||||
FileChange::Update {
|
||||
unified_diff: patch_a,
|
||||
move_path: None,
|
||||
},
|
||||
);
|
||||
|
||||
// File b.txt: newly added with one line
|
||||
changes.insert(
|
||||
PathBuf::from("b.txt"),
|
||||
FileChange::Add {
|
||||
content: "new\n".to_string(),
|
||||
},
|
||||
);
|
||||
|
||||
let lines = diff_summary_for_tests(
|
||||
&changes,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true,
|
||||
},
|
||||
);
|
||||
|
||||
snapshot_lines("apply_multiple_files_block", lines, 80, 14);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_add_block() {
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
changes.insert(
|
||||
PathBuf::from("new_file.txt"),
|
||||
FileChange::Add {
|
||||
content: "alpha\nbeta\n".to_string(),
|
||||
},
|
||||
);
|
||||
|
||||
let lines = diff_summary_for_tests(
|
||||
&changes,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true,
|
||||
},
|
||||
);
|
||||
|
||||
snapshot_lines("apply_add_block", lines, 80, 10);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_delete_block() {
|
||||
// Write a temporary file so the delete renderer can read original content
|
||||
let tmp_path = PathBuf::from("tmp_delete_example.txt");
|
||||
std::fs::write(&tmp_path, "first\nsecond\nthird\n").expect("write tmp file");
|
||||
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
changes.insert(
|
||||
tmp_path.clone(),
|
||||
FileChange::Delete {
|
||||
content: "first\nsecond\nthird\n".to_string(),
|
||||
},
|
||||
);
|
||||
|
||||
let lines = diff_summary_for_tests(
|
||||
&changes,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true,
|
||||
},
|
||||
);
|
||||
|
||||
// Cleanup best-effort; rendering has already read the file
|
||||
let _ = std::fs::remove_file(&tmp_path);
|
||||
|
||||
snapshot_lines("apply_delete_block", lines, 80, 12);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_update_block_wraps_long_lines() {
|
||||
// Create a patch with a long modified line to force wrapping
|
||||
let original = "line 1\nshort\nline 3\n";
|
||||
let modified = "line 1\nshort this_is_a_very_long_modified_line_that_should_wrap_across_multiple_terminal_columns_and_continue_even_further_beyond_eighty_columns_to_force_multiple_wraps\nline 3\n";
|
||||
let patch = diffy::create_patch(original, modified).to_string();
|
||||
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
changes.insert(
|
||||
PathBuf::from("long_example.txt"),
|
||||
FileChange::Update {
|
||||
unified_diff: patch,
|
||||
move_path: None,
|
||||
},
|
||||
);
|
||||
|
||||
let lines = create_diff_summary(
|
||||
&changes,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true,
|
||||
},
|
||||
&PathBuf::from("/"),
|
||||
72,
|
||||
);
|
||||
|
||||
// Render with backend width wider than wrap width to avoid Paragraph auto-wrap.
|
||||
snapshot_lines("apply_update_block_wraps_long_lines", lines, 80, 12);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_update_block_wraps_long_lines_text() {
|
||||
// This mirrors the desired layout example: sign only on first inserted line,
|
||||
// subsequent wrapped pieces start aligned under the line number gutter.
|
||||
let original = "1\n2\n3\n4\n";
|
||||
let modified = "1\nadded long line which wraps and_if_there_is_a_long_token_it_will_be_broken\n3\n4 context line which also wraps across\n";
|
||||
let patch = diffy::create_patch(original, modified).to_string();
|
||||
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
changes.insert(
|
||||
PathBuf::from("wrap_demo.txt"),
|
||||
FileChange::Update {
|
||||
unified_diff: patch,
|
||||
move_path: None,
|
||||
},
|
||||
);
|
||||
|
||||
let mut lines = create_diff_summary(
|
||||
&changes,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true,
|
||||
},
|
||||
&PathBuf::from("/"),
|
||||
28,
|
||||
);
|
||||
// Drop the combined header for this text-only snapshot
|
||||
if !lines.is_empty() {
|
||||
lines.remove(0);
|
||||
}
|
||||
snapshot_lines_text("apply_update_block_wraps_long_lines_text", &lines);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ui_snapshot_apply_update_block_relativizes_path() {
|
||||
let cwd = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("/"));
|
||||
let abs_old = cwd.join("abs_old.rs");
|
||||
let abs_new = cwd.join("abs_new.rs");
|
||||
|
||||
let original = "X\nY\n";
|
||||
let modified = "X changed\nY\n";
|
||||
let patch = diffy::create_patch(original, modified).to_string();
|
||||
|
||||
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
|
||||
changes.insert(
|
||||
abs_old.clone(),
|
||||
FileChange::Update {
|
||||
unified_diff: patch,
|
||||
move_path: Some(abs_new.clone()),
|
||||
},
|
||||
);
|
||||
|
||||
let lines = create_diff_summary(
|
||||
&changes,
|
||||
PatchEventType::ApplyBegin {
|
||||
auto_approved: true,
|
||||
},
|
||||
&cwd,
|
||||
80,
|
||||
);
|
||||
|
||||
snapshot_lines("apply_update_block_relativizes_path", lines, 80, 10);
|
||||
}
|
||||
}
|
||||
|
||||
File diff suppressed because it is too large
@@ -21,7 +21,8 @@ use ratatui::text::Span;
use textwrap::Options as TwOptions;
use textwrap::WordSplitter;

/// Insert `lines` above the viewport.
/// Insert `lines` above the viewport using the terminal's backend writer
/// (avoids direct stdout references).
pub(crate) fn insert_history_lines(terminal: &mut tui::Terminal, lines: Vec<Line>) {
let mut out = std::io::stdout();
insert_history_lines_to_writer(terminal, &mut out, lines);
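The reworded doc comment points at why the writer-based variant exists: anything that implements std::io::Write can stand in for stdout, which keeps the insertion path testable. A generic sketch of that pattern (emit_lines is a hypothetical stand-in, not this crate's API):

use std::io::Write;

// Hypothetical: render lines through any writer instead of a hard-coded stdout.
fn emit_lines<W: Write>(out: &mut W, lines: &[String]) -> std::io::Result<()> {
    for line in lines {
        writeln!(out, "{line}")?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Production path: write to the real terminal.
    emit_lines(&mut std::io::stdout(), &["hello".to_string()])?;

    // Test path: capture the bytes in memory and assert on them.
    let mut captured: Vec<u8> = Vec::new();
    emit_lines(&mut captured, &["hello".to_string()])?;
    assert_eq!(String::from_utf8(captured).unwrap(), "hello\n");
    Ok(())
}
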
@@ -262,7 +263,10 @@ where
}

/// Word-aware wrapping for a list of `Line`s preserving styles.
pub(crate) fn word_wrap_lines(lines: &[Line], width: u16) -> Vec<Line<'static>> {
pub(crate) fn word_wrap_lines<'a, I>(lines: I, width: u16) -> Vec<Line<'static>>
where
I: IntoIterator<Item = &'a Line<'a>>,
{
let mut out = Vec::new();
let w = width.max(1) as usize;
for line in lines {

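The new signature accepts any IntoIterator of borrowed Lines instead of only a slice, so callers can pass a Vec, a slice, or an iterator adapter without collecting first. A standalone sketch with the same parameter shape (total_width is a hypothetical helper, not the crate's function):

use ratatui::text::Line;

// Hypothetical helper sharing the new parameter shape: any iterator of &Line works.
fn total_width<'a, I>(lines: I) -> usize
where
    I: IntoIterator<Item = &'a Line<'a>>,
{
    lines.into_iter().map(Line::width).sum()
}

fn main() {
    let lines = vec![Line::from("alpha"), Line::from("beta")];
    // A borrowed Vec still works...
    assert_eq!(total_width(&lines), 9);
    // ...and so does an adapter, with no intermediate collect().
    assert_eq!(total_width(lines.iter().skip(1)), 4);
}
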
@@ -34,7 +34,6 @@ mod chatwidget;
mod citation_regex;
mod cli;
mod clipboard_paste;
mod common;
pub mod custom_terminal;
mod diff_render;
mod exec_command;
@@ -64,8 +63,6 @@ mod chatwidget_stream_tests;

#[cfg(not(debug_assertions))]
mod updates;
#[cfg(not(debug_assertions))]
use color_eyre::owo_colors::OwoColorize;

pub use cli::Cli;

@@ -1,4 +1,4 @@
use codex_core::util::is_inside_git_repo;
use codex_core::git_info::get_git_repo_root;
use codex_login::AuthManager;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
@@ -88,7 +88,7 @@ impl OnboardingScreen {
auth_manager,
}))
}
let is_git_repo = is_inside_git_repo(&cwd);
let is_git_repo = get_git_repo_root(&cwd).is_some();
let highlighted = if is_git_repo {
TrustDirectorySelection::Trust
} else {

@@ -140,16 +140,6 @@ pub(crate) fn log_inbound_app_event(event: &AppEvent) {
});
LOGGER.write_json_line(value);
}
// Internal UI events; still log for fidelity, but avoid heavy payloads.
AppEvent::InsertHistoryLines(lines) => {
let value = json!({
"ts": now_ts(),
"dir": "to_tui",
"kind": "insert_history",
"lines": lines.len(),
});
LOGGER.write_json_line(value);
}
AppEvent::InsertHistoryCell(cell) => {
let value = json!({
"ts": now_ts(),

@@ -66,8 +66,10 @@ impl SlashCommand {
| SlashCommand::Mention
| SlashCommand::Status
| SlashCommand::Mcp
| SlashCommand::Quit
| SlashCommand::TestApproval => true,
| SlashCommand::Quit => true,

#[cfg(debug_assertions)]
SlashCommand::TestApproval => true,
}
}
}

@@ -1,9 +1,9 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 765
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"proposed patch to 1 file (+2 -0) "
|
||||
" └ README.md "
|
||||
"• Proposed Change README.md (+2 -0) "
|
||||
" 1 +first line "
|
||||
" 2 +second line "
|
||||
" "
|
||||
@@ -12,3 +12,4 @@ expression: terminal.backend()
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
|
||||
@@ -0,0 +1,14 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Added new_file.txt (+2 -0) "
|
||||
" 1 +alpha "
|
||||
" 2 +beta "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -0,0 +1,16 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Deleted tmp_delete_example.txt (+0 -3) "
|
||||
" 1 -first "
|
||||
" 2 -second "
|
||||
" 3 -third "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -0,0 +1,18 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Edited 2 files (+2 -1) "
|
||||
" └ a.txt (+1 -1) "
|
||||
" 1 -one "
|
||||
" 1 +one changed "
|
||||
" "
|
||||
" └ b.txt (+1 -0) "
|
||||
" 1 +new "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -0,0 +1,17 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 748
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Edited example.txt (+1 -1) "
|
||||
" 1 line one "
|
||||
" 2 -line two "
|
||||
" 2 +line two changed "
|
||||
" 3 line three "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -0,0 +1,16 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Change Approved example.txt (+1 -1) "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -0,0 +1,15 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 748
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Edited abs_old.rs → abs_new.rs (+1 -1) "
|
||||
" 1 -X "
|
||||
" 1 +X changed "
|
||||
" 2 Y "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -0,0 +1,17 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 748
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Edited long_example.txt (+1 -1) "
|
||||
" 1 line 1 "
|
||||
" 2 -short "
|
||||
" 2 +short this_is_a_very_long_modified_line_that_should_wrap_acro "
|
||||
" ss_multiple_terminal_columns_and_continue_even_further_beyond "
|
||||
" _eighty_columns_to_force_multiple_wraps "
|
||||
" 3 line 3 "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -0,0 +1,16 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
expression: text
|
||||
---
|
||||
1 1
|
||||
2 -2
|
||||
2 +added long line w
|
||||
hich wraps and_if
|
||||
_there_is_a_long_
|
||||
token_it_will_be_
|
||||
broken
|
||||
3 3
|
||||
4 -4
|
||||
4 +4 context line wh
|
||||
ich also wraps ac
|
||||
ross
|
||||
@@ -0,0 +1,17 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 748
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"• Edited old_name.rs → new_name.rs (+1 -1) "
|
||||
" 1 A "
|
||||
" 2 -B "
|
||||
" 2 +B changed "
|
||||
" 3 C "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
@@ -1,9 +1,9 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 765
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"proposed patch to 1 file (+1 -1) "
|
||||
" └ example.txt "
|
||||
"• Proposed Change example.txt (+1 -1) "
|
||||
" 1 "
|
||||
" 2 -Y "
|
||||
" 2 +Y changed "
|
||||
@@ -12,3 +12,4 @@ expression: terminal.backend()
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
|
||||
@@ -1,12 +1,13 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 765
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"proposed patch to 1 file (+1 -1) "
|
||||
" └ README.md "
|
||||
"• Proposed Change README.md (+1 -1) "
|
||||
" 1 -# Codex CLI (Rust Implementation) "
|
||||
" 1 +# Codex CLI (Rust Implementation) banana "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 765
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"proposed patch to 1 file (+1 -1) "
|
||||
" └ src/lib.rs → src/lib_new.rs "
|
||||
"• Proposed Change src/lib.rs → src/lib_new.rs (+1 -1) "
|
||||
" 1 line one "
|
||||
" 2 -line two "
|
||||
" 2 +line two changed "
|
||||
@@ -14,3 +14,4 @@ expression: terminal.backend()
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
" "
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
---
|
||||
source: tui/src/diff_render.rs
|
||||
assertion_line: 765
|
||||
expression: terminal.backend()
|
||||
---
|
||||
"proposed patch to 1 file (+2 -2) "
|
||||
" └ example.txt "
|
||||
"• Proposed Change example.txt (+2 -2) "
|
||||
" 1 line 1 "
|
||||
" 2 -line 2 "
|
||||
" 2 +line two changed "
|
||||
@@ -18,3 +18,4 @@ expression: terminal.backend()
|
||||
" 9 +line nine changed "
|
||||
" 10 line 10 "
|
||||
" "
|
||||
" "
|
||||
|
||||
@@ -0,0 +1,6 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Explored
|
||||
└ Read auth.rs, shimmer.rs
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Explored
|
||||
└ Search shimmer_spans
|
||||
Read shimmer.rs, status_indicator_widget.rs
|
||||
@@ -0,0 +1,8 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Explored
|
||||
└ Search shimmer_spans
|
||||
Read shimmer.rs
|
||||
Read status_indicator_widget.rs
|
||||
@@ -0,0 +1,9 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Ran
|
||||
└ first_token_is_long_
|
||||
enough_to_wrap
|
||||
second_token_is_also
|
||||
_long_enough_to_wrap
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Ran
|
||||
└ echo one
|
||||
echo two
|
||||
@@ -0,0 +1,9 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Ran
|
||||
└ set -o pipefail
|
||||
cargo test
|
||||
--all-features
|
||||
--quiet
|
||||
@@ -0,0 +1,20 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Updated Plan
|
||||
└ I’ll update Grafana call
|
||||
error handling by adding
|
||||
retries and clearer
|
||||
messages when the backend is
|
||||
unreachable.
|
||||
✔ Investigate existing error
|
||||
paths and logging around
|
||||
HTTP timeouts
|
||||
□ Harden Grafana client
|
||||
error handling with retry/
|
||||
backoff and user‑friendly
|
||||
messages
|
||||
□ Add tests for transient
|
||||
failure scenarios and
|
||||
surfacing to the UI
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
source: tui/src/history_cell.rs
|
||||
expression: rendered
|
||||
---
|
||||
• Updated Plan
|
||||
└ □ Define error taxonomy
|
||||
□ Implement mapping to user messages
|
||||
Some files were not shown because too many files have changed in this diff.