Compare commits


17 Commits

Author SHA1 Message Date
Won Park
a606e85859 tweaked /clear to support clear + new chat, also fix minor bug for macos terminal (#12520)
# /clear feature! 

Use /clear to start a new chat with Codex on a clean terminal!
2026-02-23 09:11:05 -08:00
Ahmed Ibrahim
6e60f724bc remove feature flag collaboration modes (#12028)
All code should now assume that steer is enabled

---------

Co-authored-by: Codex <noreply@openai.com>
2026-02-23 09:06:08 -08:00
jif-oai
3b6c50d925 chore: better bazel test logs (#12576)
## Summary

Improve Bazel CI failure diagnostics by printing the tail of each failed
target’s test.log directly in the GitHub Actions output.

Today, when a large Bazel test target fails (for example tests of
`codex-core`), the workflow often only shows a target-level Exit 101
plus a path to Bazel’s test.log. That makes it hard to see the actual
failing Rust test and panic without digging into artifacts or
reproducing locally.

This change makes the workflow automatically surface that information
inline.

## What Changed

In .github/workflows/bazel.yml:

- Capture Bazel console output via tee
- Preserve the Bazel exit code when piping (PIPESTATUS[0])
- On failure:
  - Parse failed Bazel test targets from FAIL: //... lines
  - Resolve Bazel test log directory via bazel info bazel-testlogs
  - Print tail -n 200 for each failed target’s test.log
  - Group each target’s output in GitHub Actions logs (::group::)

## Bonus
Disable `experimental_remote_repo_contents_cache` to prevent "Permission
Denied"
2026-02-23 08:13:29 -08:00
jif-oai
eace7c6610 feat: land sqlite (#12141) 2026-02-23 16:12:23 +00:00
jif-oai
2119532a81 feat: role metrics multi-agent (#12579)
add metrics for agent role
2026-02-23 15:55:48 +00:00
Eric Traut
862a5b3eb3 Allow exec resume to parse output-last-message flag after command (#12541)
Summary
- mark `output-last-message` as a global exec flag so it can follow
subcommands like `resume`
- add regression tests in both `cli` and `exec` crates verifying the
flag order works when invoking `resume`

Fixes #12538
2026-02-23 07:55:37 -08:00
jif-oai
e8709bc11a chore: rename memory feature flag (#12580)
`memory_tool` -> `memories`
2026-02-23 15:37:12 +00:00
jif-oai
764ac9449f feat: add uuid helper (#12500) 2026-02-23 14:14:36 +00:00
jif-oai
cf0210bf22 feat: agent nick names to model (#12575) 2026-02-23 13:44:37 +00:00
jif-oai
829d1080f6 feat: keep dead agents in the agent picker (#12570) 2026-02-23 12:58:55 +00:00
jif-oai
9d826a20c6 fix: TUI constraint (#12571) 2026-02-23 12:49:54 +00:00
jif-oai
6fbf19ef5f chore: phase 2 name (#12568) 2026-02-23 11:04:55 +00:00
jif-oai
2b9d0c385f chore: add doc to memories (#12565)
2026-02-23 10:52:58 +00:00
jif-oai
cfcbff4c48 chore: awaiter (#12562) 2026-02-23 10:28:24 +00:00
jif-oai
8e9312958d chore: nit name (#12559) 2026-02-23 08:49:41 +00:00
Michael Bolin
956f2f439e refactor: decouple MCP policy construction from escalate server (#12555)
## Why
The current escalate path in `codex-rs/exec-server` still had policy
creation coupled to MCP details, which makes it hard to reuse the shell
execution flow outside the MCP server. This change is part of a broader
goal to split MCP-specific behavior from shared escalation execution so
other handlers (for example a future `ShellCommandHandler`) can reuse it
without depending on MCP request context types.

## What changed
- Added a new `EscalationPolicyFactory` abstraction in `mcp.rs`:
  - `crate`-relative path: `codex-rs/exec-server/src/posix/mcp.rs`
- https://github.com/openai/codex/blob/main/codex-rs/exec-server/src/posix/mcp.rs#L87-L107
- Made `run_escalate_server` in `mcp.rs` accept a policy factory instead
of constructing `McpEscalationPolicy` directly.
- https://github.com/openai/codex/blob/main/codex-rs/exec-server/src/posix/mcp.rs#L178-L201
- Introduced `McpEscalationPolicyFactory` that stores MCP-only state
(`RequestContext`, `preserve_program_paths`) and implements the new
trait.
- https://github.com/openai/codex/blob/main/codex-rs/exec-server/src/posix/mcp.rs#L100-L117
- Updated `shell()` to pass a `McpEscalationPolicyFactory` instance into
`run_escalate_server`, so the server remains the MCP-specific wiring
layer.
- https://github.com/openai/codex/blob/main/codex-rs/exec-server/src/posix/mcp.rs#L163-L170

## Verification
- Build and test execution was not re-run in this pass; changes are
limited to `mcp.rs` and preserve the existing escalation flow semantics
by only extracting policy construction behind a factory.




---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/12555).
* #12556
* __->__ #12555
2026-02-23 00:31:29 -08:00
pakrym-oai
335a4e1cbc Return image content from view_image (#12553)
Responses API supports image content
2026-02-22 23:00:08 -08:00
33 changed files with 826 additions and 336 deletions

View File

@@ -107,6 +107,45 @@ jobs:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
set -o pipefail
bazel_console_log="$(mktemp)"
print_failed_bazel_test_logs() {
  local console_log="$1"
  local testlogs_dir
  testlogs_dir="$(bazel $BAZEL_STARTUP_ARGS info bazel-testlogs 2>/dev/null || echo bazel-testlogs)"
  local failed_targets=()
  while IFS= read -r target; do
    failed_targets+=("$target")
  done < <(
    grep -E '^FAIL: //' "$console_log" \
      | sed -E 's#^FAIL: (//[^ ]+).*#\1#' \
      | sort -u
  )
  if [[ ${#failed_targets[@]} -eq 0 ]]; then
    echo "No failed Bazel test targets were found in console output."
    return
  fi
  for target in "${failed_targets[@]}"; do
    local rel_path="${target#//}"
    rel_path="${rel_path/:/\/}"
    local test_log="${testlogs_dir}/${rel_path}/test.log"
    echo "::group::Bazel test log tail for ${target}"
    if [[ -f "$test_log" ]]; then
      tail -n 200 "$test_log"
    else
      echo "Missing test log: $test_log"
    fi
    echo "::endgroup::"
  done
}
bazel_args=(
  test
  //...
@@ -119,10 +158,19 @@ jobs:
if [[ -n "${BUILDBUDDY_API_KEY:-}" ]]; then
  echo "BuildBuddy API key is available; using remote Bazel configuration."
  # Work around Bazel 9 remote repo contents cache / overlay materialization failures
  # seen in CI (for example "is not a symlink" or permission errors while
  # materializing external repos such as rules_perl). We still use BuildBuddy for
  # remote execution/cache; this only disables the startup-level repo contents cache.
  set +e
  bazel $BAZEL_STARTUP_ARGS \
    --noexperimental_remote_repo_contents_cache \
    --bazelrc=.github/workflows/ci.bazelrc \
    "${bazel_args[@]}" \
    "--remote_header=x-buildbuddy-api-key=$BUILDBUDDY_API_KEY"
    "--remote_header=x-buildbuddy-api-key=$BUILDBUDDY_API_KEY" \
    2>&1 | tee "$bazel_console_log"
  bazel_status=${PIPESTATUS[0]}
  set -e
else
  echo "BuildBuddy API key is not available; using local Bazel configuration."
  # Keep fork/community PRs on Bazel but disable remote services that are
@@ -141,9 +189,18 @@ jobs:
  # clear remote cache/execution endpoints configured in .bazelrc.
  # https://bazel.build/reference/command-line-reference#common_options-flag--remote_cache
  # https://bazel.build/reference/command-line-reference#common_options-flag--remote_executor
  set +e
  bazel $BAZEL_STARTUP_ARGS \
    --noexperimental_remote_repo_contents_cache \
    "${bazel_args[@]}" \
    --remote_cache= \
    --remote_executor=
    --remote_executor= \
    2>&1 | tee "$bazel_console_log"
  bazel_status=${PIPESTATUS[0]}
  set -e
fi
if [[ ${bazel_status:-0} -ne 0 ]]; then
  print_failed_bazel_test_logs "$bazel_console_log"
  exit "$bazel_status"
fi

codex-rs/Cargo.lock generated
View File

@@ -2484,6 +2484,7 @@ name = "codex-utils-string"
version = "0.0.0"
dependencies = [
"pretty_assertions",
"regex-lite",
]
[[package]]

View File

@@ -1112,6 +1112,34 @@ mod tests {
assert_eq!(args.prompt.as_deref(), Some("2+2"));
}
#[test]
fn exec_resume_accepts_output_last_message_flag_after_subcommand() {
let cli = MultitoolCli::try_parse_from([
"codex",
"exec",
"resume",
"session-123",
"-o",
"/tmp/resume-output.md",
"re-review",
])
.expect("parse should succeed");
let Some(Subcommand::Exec(exec)) = cli.subcommand else {
panic!("expected exec subcommand");
};
let Some(codex_exec::Command::Resume(args)) = exec.command else {
panic!("expected exec resume");
};
assert_eq!(
exec.last_message_file,
Some(std::path::PathBuf::from("/tmp/resume-output.md"))
);
assert_eq!(args.session_id.as_deref(), Some("session-123"));
assert_eq!(args.prompt.as_deref(), Some("re-review"));
}
fn app_server_from_args(args: &[&str]) -> AppServerCommand {
let cli = MultitoolCli::try_parse_from(args).expect("parse");
let Subcommand::AppServer(app_server) = cli.subcommand.expect("app-server present") else {

View File

@@ -49,10 +49,12 @@ async fn features_enable_under_development_feature_prints_warning() -> Result<()
let codex_home = TempDir::new()?;
let mut cmd = codex_command(codex_home.path())?;
cmd.args(["features", "enable", "sqlite"])
cmd.args(["features", "enable", "runtime_metrics"])
.assert()
.success()
.stderr(contains("Under-development features enabled: sqlite."));
.stderr(contains(
"Under-development features enabled: runtime_metrics.",
));
Ok(())
}

View File

@@ -349,6 +349,9 @@
"js_repl_tools_only": {
"type": "boolean"
},
"memories": {
"type": "boolean"
},
"memory_tool": {
"type": "boolean"
},
@@ -1627,6 +1630,9 @@
"js_repl_tools_only": {
"type": "boolean"
},
"memories": {
"type": "boolean"
},
"memory_tool": {
"type": "boolean"
},

View File

@@ -1,36 +1,35 @@
background_terminal_max_timeout = 3600000
model_reasoning_effort = "low"
developer_instructions="""You are a waiting agent.
You're name is Superman.
Your role is to monitor the execution of a specific command or task and report its status only when it is finished.
developer_instructions="""You are an awaiter.
Your role is to await the completion of a specific command or task and report its status only when it is finished.
Behavior rules:
1. When given a command or task identifier, you must:
- Execute or monitor it using the appropriate tool
- Continue waiting until the task reaches a terminal state.
- Execute or await it using the appropriate tool
- Continue awaiting until the task reaches a terminal state.
2. You must NOT:
- Modify the task.
- Interpret or optimize the task.
- Perform unrelated actions.
- Stop waiting unless explicitly instructed.
- Stop awaiting unless explicitly instructed.
3. Waiting behavior:
3. Awaiting behavior:
- If the task is still running, continue polling using tool calls.
- Use repeated tool calls if necessary.
- Do not hallucinate completion.
- Use long timeouts when waiting for something. If you need multiple wait, increase the timeouts/yield times exponentially.
- Use long timeouts when awaiting for something. If you need multiple awaits, increase the timeouts/yield times exponentially.
4. If asked for status:
- Return the current known status.
- Immediately resume waiting afterward.
- Immediately resume awaiting afterward.
5. Termination:
- Only exit waiting when:
- Only exit awaiting when:
- The task completes successfully, OR
- The task fails, OR
- You receive an explicit stop instruction.
You must behave deterministically and conservatively.
"""
"""

View File

@@ -118,6 +118,9 @@ impl Guards {
} else {
active_agents.used_agent_nicknames.clear();
active_agents.nickname_reset_count += 1;
if let Some(metrics) = codex_otel::metrics::global() {
let _ = metrics.counter("codex.multi_agent.nickname_pool_reset", 1, &[]);
}
names.choose(&mut rand::rng())?.to_string()
}
};

View File

@@ -13,7 +13,7 @@ use std::path::Path;
use std::sync::LazyLock;
use toml::Value as TomlValue;
const DEFAULT_ROLE_NAME: &str = "default";
pub const DEFAULT_ROLE_NAME: &str = "default";
const AGENT_TYPE_UNAVAILABLE_ERROR: &str = "agent type is currently not available";
/// Applies a role config layer to a mutable config and preserves unspecified keys.
@@ -187,16 +187,16 @@ Rules:
}
),
(
"monitor".to_string(),
"awaiter".to_string(),
AgentRoleConfig {
description: Some(r#"Use a `monitor` agent EVERY TIME you must run a command that might take some time.
description: Some(r#"Use an `awaiter` agent EVERY TIME you must run a command that might take some time.
This includes, but not only:
* testing
* monitoring of a long running process
* explicit ask to wait for something
When YOU wait for the `monitor` agent to be done, use the largest possible timeout."#.to_string()),
config_file: Some("monitor.toml".to_string().parse().unwrap_or_default()),
When YOU wait for the `awaiter` agent to be done, use the largest possible timeout."#.to_string()),
config_file: Some("awaiter.toml".to_string().parse().unwrap_or_default()),
}
)
])
@@ -207,10 +207,10 @@ When YOU wait for the `monitor` agent to be done, use the largest possible timeo
/// Resolves a built-in role `config_file` path to embedded content.
pub(super) fn config_file_contents(path: &Path) -> Option<&'static str> {
const EXPLORER: &str = include_str!("builtins/explorer.toml");
const MONITOR: &str = include_str!("builtins/monitor.toml");
const AWAITER: &str = include_str!("builtins/awaiter.toml");
match path.to_str()? {
"explorer.toml" => Some(EXPLORER),
"monitor.toml" => Some(MONITOR),
"awaiter.toml" => Some(AWAITER),
_ => None,
}
}

View File

@@ -134,6 +134,7 @@ pub enum Feature {
/// Steer feature flag - when enabled, Enter submits immediately instead of queuing.
Steer,
/// Enable collaboration modes (Plan, Default).
/// Kept for config backward compatibility; behavior is always collaboration-modes-enabled.
CollaborationModes,
/// Enable personality selection in the TUI.
Personality,
@@ -494,12 +495,12 @@ pub const FEATURES: &[FeatureSpec] = &[
FeatureSpec {
id: Feature::Sqlite,
key: "sqlite",
stage: Stage::UnderDevelopment,
default_enabled: false,
stage: Stage::Stable,
default_enabled: true,
},
FeatureSpec {
id: Feature::MemoryTool,
key: "memory_tool",
key: "memories",
stage: Stage::UnderDevelopment,
default_enabled: false,
},
@@ -617,7 +618,7 @@ pub const FEATURES: &[FeatureSpec] = &[
FeatureSpec {
id: Feature::CollaborationModes,
key: "collaboration_modes",
stage: Stage::Stable,
stage: Stage::Removed,
default_enabled: true,
},
FeatureSpec {
@@ -728,10 +729,9 @@ mod tests {
fn default_enabled_features_are_stable() {
for spec in FEATURES {
if spec.default_enabled {
assert_eq!(
spec.stage,
Stage::Stable,
"feature `{}` is enabled by default but is not stable ({:?})",
assert!(
matches!(spec.stage, Stage::Stable | Stage::Removed),
"feature `{}` is enabled by default but is not stable/removed ({:?})",
spec.key,
spec.stage
);

View File

@@ -37,6 +37,10 @@ const ALIASES: &[Alias] = &[
legacy_key: "collab",
feature: Feature::Collab,
},
Alias {
legacy_key: "memory_tool",
feature: Feature::MemoryTool,
},
];
pub(crate) fn legacy_feature_keys() -> impl Iterator<Item = &'static str> {

View File

@@ -0,0 +1,92 @@
# Memories Pipeline (Core)
This module runs a startup memory pipeline for eligible sessions.
## When it runs
The pipeline is triggered when a root session starts, and only if:
- the session is not ephemeral
- the memory feature is enabled
- the session is not a sub-agent session
- the state DB is available
It runs asynchronously in the background and executes two phases in order: Phase 1, then Phase 2.
## Phase 1: Rollout Extraction (per-thread)
Phase 1 finds recent eligible rollouts and extracts a structured memory from each one.
Eligible rollouts are selected from the state DB using startup claim rules. In practice this means
the pipeline only considers rollouts that are:
- from allowed interactive session sources
- within the configured age window
- idle long enough (to avoid summarizing still-active/fresh rollouts)
- not already owned by another in-flight phase-1 worker
- within startup scan/claim limits (bounded work per startup)
What it does:
- claims a bounded set of rollout jobs from the state DB (startup claim)
- filters rollout content down to memory-relevant response items
- sends each rollout to a model (in parallel, with a concurrency cap)
- expects structured output containing:
- a detailed `raw_memory`
- a compact `rollout_summary`
- an optional `rollout_slug`
- redacts secrets from the generated memory fields
- stores successful outputs back into the state DB as stage-1 outputs
Concurrency / coordination:
- Phase 1 runs multiple extraction jobs in parallel (with a fixed concurrency cap) so startup memory generation can process several rollouts at once.
- Each job is leased/claimed in the state DB before processing, which prevents duplicate work across concurrent workers/startups.
- Failed jobs are marked with retry backoff, so they are retried later instead of hot-looping.
Job outcomes:
- `succeeded` (memory produced)
- `succeeded_no_output` (valid run but nothing useful generated)
- `failed` (with retry backoff/lease handling in DB)
Phase 1 is the stage that turns individual rollouts into DB-backed memory records.
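The outcome and retry-backoff handling above can be sketched in a few lines. This is a hedged illustration under assumed names and constants (`reschedule_after`, a 60-second base, a one-hour cap), not the actual codex-rs implementation:

```rust
// Hypothetical sketch of phase-1 job outcomes and retry backoff; the real
// codex-rs types, constants, and DB plumbing may differ.
#[derive(Debug, PartialEq)]
enum JobOutcome {
    Succeeded,         // memory produced
    SucceededNoOutput, // valid run, nothing useful generated
    Failed,            // retried later with backoff
}

/// Exponential retry delay in seconds: base * 2^attempt, capped so a
/// repeatedly failing job neither hot-loops nor waits forever.
fn retry_backoff_secs(attempt: u32, base_secs: u64, cap_secs: u64) -> u64 {
    base_secs
        .saturating_mul(1u64 << attempt.min(16))
        .min(cap_secs)
}

/// Delay before the job's lease may be re-claimed, or None when the job is
/// finished for good (both success variants release the lease permanently).
fn reschedule_after(outcome: &JobOutcome, attempt: u32) -> Option<u64> {
    match outcome {
        JobOutcome::Failed => Some(retry_backoff_secs(attempt, 60, 3600)),
        JobOutcome::Succeeded | JobOutcome::SucceededNoOutput => None,
    }
}

fn main() {
    assert_eq!(reschedule_after(&JobOutcome::Succeeded, 0), None);
    assert_eq!(reschedule_after(&JobOutcome::Failed, 0), Some(60));
    assert_eq!(reschedule_after(&JobOutcome::Failed, 3), Some(480));
    // The cap prevents unbounded growth for chronically failing jobs.
    assert_eq!(reschedule_after(&JobOutcome::Failed, 10), Some(3600));
}
```

The key property is that only `Failed` keeps the job alive, which matches the "retried later instead of hot-looping" behavior described above.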
## Phase 2: Global Consolidation
Phase 2 consolidates the latest stage-1 outputs into the filesystem memory artifacts and then runs a dedicated consolidation agent.
What it does:
- claims a single global phase-2 job (so only one consolidation runs at a time)
- loads a bounded set of the most recent stage-1 outputs from the state DB (the per-rollout memories produced by Phase 1, used as the consolidation input set)
- computes a completion watermark from the claimed watermark + newest input timestamps
- syncs local memory artifacts under the memories root:
- `raw_memories.md` (merged raw memories, latest first)
- `rollout_summaries/` (one summary file per retained rollout)
- prunes stale rollout summaries that are no longer retained
- if there are no inputs, marks the job successful and exits
If there is input, it then:
- spawns an internal consolidation sub-agent
- runs it with no approvals, no network, and local write access only
- disables collab for that agent (to prevent recursive delegation)
- watches the agent status and heartbeats the global job lease while it runs
- marks the phase-2 job success/failure in the state DB when the agent finishes
Watermark behavior:
- The global phase-2 job claim includes an input watermark representing the latest input timestamp known when the job was claimed.
- Phase 2 recomputes a `new_watermark` using the max of:
- the claimed watermark
- the newest `source_updated_at` timestamp in the stage-1 inputs it actually loaded
- On success, Phase 2 stores that completion watermark in the DB.
- This lets later phase-2 runs know whether new stage-1 data arrived since the last successful consolidation (dirty vs not dirty), while also avoiding moving the watermark backwards.
In practice, this phase is responsible for refreshing the on-disk memory workspace and producing/updating the higher-level consolidated memory outputs.
## Why it is split into two phases
- Phase 1 scales across many rollouts and produces normalized per-rollout memory records.
- Phase 2 serializes global consolidation so the shared memory artifacts are updated safely and consistently.
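The watermark rule described above reduces to a small monotone max. The following is an illustrative sketch (function name and integer timestamps are assumptions, not the actual codex-rs code):

```rust
// Phase-2 completion watermark sketch: the max of the claimed watermark and
// the newest `source_updated_at` among the stage-1 inputs actually loaded,
// so the watermark never moves backwards.
fn completion_watermark(claimed: u64, input_updated_at: &[u64]) -> u64 {
    input_updated_at
        .iter()
        .copied()
        .max()
        .map_or(claimed, |newest| claimed.max(newest))
}

fn main() {
    // Newer inputs advance the watermark.
    assert_eq!(completion_watermark(100, &[90, 120]), 120);
    // Older inputs never move it backwards.
    assert_eq!(completion_watermark(100, &[40, 60]), 100);
    // With no inputs, the claimed watermark is kept.
    assert_eq!(completion_watermark(100, &[]), 100);
}
```

Storing this value on success is what lets a later phase-2 run tell whether new stage-1 data arrived since the last consolidation.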

View File

@@ -93,6 +93,7 @@ impl ToolHandler for MultiAgentHandler {
mod spawn {
use super::*;
use crate::agent::role::DEFAULT_ROLE_NAME;
use crate::agent::role::apply_role_to_config;
use crate::agent::exceeds_thread_spawn_depth_limit;
@@ -109,6 +110,7 @@ mod spawn {
#[derive(Debug, Serialize)]
struct SpawnAgentResult {
agent_id: String,
nickname: Option<String>,
}
pub async fn handle(
@@ -183,6 +185,7 @@ mod spawn {
.unwrap_or((None, None)),
None => (None, None),
};
let nickname = new_agent_nickname.clone();
session
.send_event(
&turn,
@@ -199,9 +202,13 @@ mod spawn {
)
.await;
let new_thread_id = result?;
let role_tag = role_name.unwrap_or(DEFAULT_ROLE_NAME);
turn.otel_manager
.counter("codex.multi_agent.spawn", 1, &[("role", role_tag)]);
let content = serde_json::to_string(&SpawnAgentResult {
agent_id: new_thread_id.to_string(),
nickname,
})
.map_err(|err| {
FunctionCallError::Fatal(format!("failed to serialize spawn_agent result: {err}"))
@@ -406,6 +413,8 @@ mod resume_agent {
if let Some(err) = error {
return Err(err);
}
turn.otel_manager
.counter("codex.multi_agent.resume", 1, &[]);
let content = serde_json::to_string(&ResumeAgentResult { status }).map_err(|err| {
FunctionCallError::Fatal(format!("failed to serialize resume_agent result: {err}"))
@@ -1085,6 +1094,7 @@ mod tests {
#[derive(Debug, Deserialize)]
struct SpawnAgentResult {
agent_id: String,
nickname: Option<String>,
}
let (mut session, mut turn) = make_session_and_context().await;
@@ -1121,6 +1131,12 @@ mod tests {
let result: SpawnAgentResult =
serde_json::from_str(&content).expect("spawn_agent result should be json");
let agent_id = agent_id(&result.agent_id).expect("agent_id should be valid");
assert!(
result
.nickname
.as_deref()
.is_some_and(|nickname| !nickname.is_empty())
);
let snapshot = manager
.get_thread(agent_id)
.await
@@ -1184,6 +1200,7 @@ mod tests {
#[derive(Debug, Deserialize)]
struct SpawnAgentResult {
agent_id: String,
nickname: Option<String>,
}
let (mut session, mut turn) = make_session_and_context().await;
@@ -1221,6 +1238,12 @@ mod tests {
let result: SpawnAgentResult =
serde_json::from_str(&content).expect("spawn_agent result should be json");
assert!(!result.agent_id.is_empty());
assert!(
result
.nickname
.as_deref()
.is_some_and(|nickname| !nickname.is_empty())
);
assert_eq!(success, Some(true));
}

View File

@@ -1,5 +1,6 @@
use async_trait::async_trait;
use codex_protocol::models::FunctionCallOutputBody;
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::openai_models::InputModality;
use serde::Deserialize;
use tokio::fs;
@@ -14,7 +15,6 @@ use crate::tools::handlers::parse_arguments;
use crate::tools::registry::ToolHandler;
use crate::tools::registry::ToolKind;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::local_image_content_items_with_label_number;
pub struct ViewImageHandler;
@@ -81,21 +81,20 @@ impl ToolHandler for ViewImageHandler {
}
let event_path = abs_path.clone();
let content: Vec<ContentItem> =
local_image_content_items_with_label_number(&abs_path, None);
let input = ResponseInputItem::Message {
role: "user".to_string(),
content,
};
session
.inject_response_items(vec![input])
.await
.map_err(|_| {
FunctionCallError::RespondToModel(
"unable to attach image (no active task)".to_string(),
)
})?;
let content = local_image_content_items_with_label_number(&abs_path, None)
.into_iter()
.map(|item| match item {
ContentItem::InputText { text } => {
FunctionCallOutputContentItem::InputText { text }
}
ContentItem::InputImage { image_url } => {
FunctionCallOutputContentItem::InputImage { image_url }
}
ContentItem::OutputText { text } => {
FunctionCallOutputContentItem::InputText { text }
}
})
.collect();
session
.send_event(
@@ -108,7 +107,7 @@ impl ToolHandler for ViewImageHandler {
.await;
Ok(ToolOutput::Function {
body: FunctionCallOutputBody::Text("attached local image path".to_string()),
body: FunctionCallOutputBody::ContentItems(content),
success: Some(true),
})
}

View File

@@ -65,7 +65,7 @@ impl ToolsConfig {
let include_js_repl_tools_only =
include_js_repl && features.enabled(Feature::JsReplToolsOnly);
let include_collab_tools = features.enabled(Feature::Collab);
let include_collaboration_modes_tools = features.enabled(Feature::CollaborationModes);
let include_collaboration_modes_tools = true;
let include_search_tool = features.enabled(Feature::Apps);
let shell_type = if !features.enabled(Feature::ShellTool) {
@@ -557,7 +557,7 @@ fn create_spawn_agent_tool(config: &ToolsConfig) -> ToolSpec {
ToolSpec::Function(ResponsesApiTool {
name: "spawn_agent".to_string(),
description:
"Spawn a sub-agent for a well-scoped task. Returns the agent id to use to communicate with this agent."
"Spawn a sub-agent for a well-scoped task. Returns the agent id (and user-facing nickname when available) to use to communicate with this agent."
.to_string(),
strict: false,
parameters: JsonSchema::Object {
@@ -1887,34 +1887,6 @@ mod tests {
);
}
#[test]
fn request_user_input_requires_collaboration_modes_feature() {
let config = test_config();
let model_info =
ModelsManager::construct_model_info_offline_for_tests("gpt-5-codex", &config);
let mut features = Features::with_defaults();
features.disable(Feature::CollaborationModes);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None, None, &[]).build();
assert!(
!tools.iter().any(|t| t.spec.name() == "request_user_input"),
"request_user_input should be disabled when collaboration_modes feature is off"
);
features.enable(Feature::CollaborationModes);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None, None, &[]).build();
assert_contains_tool_names(&tools, &["request_user_input"]);
}
#[test]
fn js_repl_requires_feature_flag() {
let config = test_config();

View File

@@ -237,40 +237,29 @@ async fn view_image_tool_attaches_local_image() -> anyhow::Result<()> {
let req = mock.single_request();
let body = req.body_json();
let output_text = req
.function_call_output_content_and_success(call_id)
.and_then(|(content, _)| content)
.expect("output text present");
assert_eq!(output_text, "attached local image path");
assert!(
find_image_message(&body).is_none(),
"view_image tool should not inject a separate image message"
);
let image_message =
find_image_message(&body).expect("pending input image message not included in request");
let content_items = image_message
.get("content")
let function_output = req.function_call_output(call_id);
let output_items = function_output
.get("output")
.and_then(Value::as_array)
.expect("image message has content array");
.expect("function_call_output should be a content item array");
assert_eq!(
content_items.len(),
output_items.len(),
1,
"view_image should inject only the image content item (no tag/label text)"
"view_image should return only the image content item (no tag/label text)"
);
assert_eq!(
content_items[0].get("type").and_then(Value::as_str),
output_items[0].get("type").and_then(Value::as_str),
Some("input_image"),
"view_image should inject only an input_image content item"
"view_image should return only an input_image content item"
);
let image_url = image_message
.get("content")
.and_then(Value::as_array)
.and_then(|content| {
content.iter().find_map(|span| {
if span.get("type").and_then(Value::as_str) == Some("input_image") {
span.get("image_url").and_then(Value::as_str)
} else {
None
}
})
})
let image_url = output_items[0]
.get("image_url")
.and_then(Value::as_str)
.expect("image_url present");
let (prefix, encoded) = image_url
@@ -535,38 +524,36 @@ async fn view_image_tool_placeholder_for_non_image_files() -> anyhow::Result<()>
request.inputs_of_type("input_image").is_empty(),
"non-image file should not produce an input_image message"
);
let function_output = request.function_call_output(call_id);
let output_items = function_output
.get("output")
.and_then(Value::as_array)
.expect("function_call_output should be a content item array");
assert_eq!(
output_items.len(),
1,
"non-image placeholder should be returned as a single content item"
);
assert_eq!(
output_items[0].get("type").and_then(Value::as_str),
Some("input_text"),
"non-image placeholder should be returned as input_text"
);
let placeholder = output_items[0]
.get("text")
.and_then(Value::as_str)
.expect("placeholder text present");
let placeholder = request
.inputs_of_type("message")
.iter()
.find_map(|item| {
let content = item.get("content").and_then(Value::as_array)?;
content.iter().find_map(|span| {
if span.get("type").and_then(Value::as_str) == Some("input_text") {
let text = span.get("text").and_then(Value::as_str)?;
if text.contains("Codex could not read the local image at")
&& text.contains("unsupported MIME type `application/json`")
{
return Some(text.to_string());
}
}
None
})
})
.expect("placeholder text found");
assert!(
placeholder.contains("Codex could not read the local image at")
&& placeholder.contains("unsupported MIME type `application/json`"),
"placeholder should describe the unsupported file type: {placeholder}"
);
assert!(
placeholder.contains(&abs_path.display().to_string()),
"placeholder should mention path: {placeholder}"
);
let output_text = mock
.single_request()
.function_call_output_content_and_success(call_id)
.and_then(|(content, _)| content)
.expect("output text present");
assert_eq!(output_text, "attached local image path");
Ok(())
}

View File

@@ -1,3 +1,4 @@
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
@@ -30,6 +31,7 @@ use tokio::sync::RwLock;
use crate::posix::escalate_server::EscalateServer;
use crate::posix::escalate_server::{self};
use crate::posix::escalation_policy::EscalationPolicy;
use crate::posix::mcp_escalation_policy::McpEscalationPolicy;
use crate::posix::stopwatch::Stopwatch;
@@ -85,6 +87,30 @@ pub struct ExecTool {
sandbox_state: Arc<RwLock<Option<SandboxState>>>,
}
trait EscalationPolicyFactory {
type Policy: EscalationPolicy + Send + Sync + 'static;
fn create_policy(&self, policy: Arc<RwLock<Policy>>, stopwatch: Stopwatch) -> Self::Policy;
}
struct McpEscalationPolicyFactory {
context: RequestContext<RoleServer>,
preserve_program_paths: bool,
}
impl EscalationPolicyFactory for McpEscalationPolicyFactory {
type Policy = McpEscalationPolicy;
fn create_policy(&self, policy: Arc<RwLock<Policy>>, stopwatch: Stopwatch) -> Self::Policy {
McpEscalationPolicy::new(
policy,
self.context.clone(),
stopwatch,
self.preserve_program_paths,
)
}
}
#[tool_router]
impl ExecTool {
pub fn new(
@@ -115,8 +141,6 @@ impl ExecTool {
.timeout_ms
.unwrap_or(codex_core::exec::DEFAULT_EXEC_COMMAND_TIMEOUT_MS),
);
let stopwatch = Stopwatch::new(effective_timeout);
let cancel_token = stopwatch.cancellation_token();
let sandbox_state =
self.sandbox_state
.read()
@@ -128,27 +152,68 @@ impl ExecTool {
sandbox_cwd: PathBuf::from(&params.workdir),
use_linux_sandbox_bwrap: false,
});
let escalate_server = EscalateServer::new(
self.bash_path.clone(),
self.execve_wrapper.clone(),
McpEscalationPolicy::new(
self.policy.clone(),
let result = run_escalate_server(
params,
sandbox_state,
&self.bash_path,
&self.execve_wrapper,
self.policy.clone(),
McpEscalationPolicyFactory {
context,
stopwatch.clone(),
self.preserve_program_paths,
),
);
let result = escalate_server
.exec(params, cancel_token, &sandbox_state)
.await
.map_err(|e| McpError::internal_error(e.to_string(), None))?;
preserve_program_paths: self.preserve_program_paths,
},
effective_timeout,
)
.await
.map_err(|e| McpError::internal_error(e.to_string(), None))?;
Ok(CallToolResult::success(vec![Content::json(
ExecResult::from(result),
)?]))
}
}
/// Runs the escalate server to execute a shell command with potential
/// escalation of execve calls.
///
/// - `exec_params` defines the shell command to run
/// - `sandbox_state` is the sandbox to use to run the shell program
/// - `shell_program` is the path to the shell program to run (e.g. /bin/bash)
/// - `execve_wrapper` is the path to the execve wrapper binary to use for
/// handling execve calls from the shell program. This is likely a symlink to
/// Codex using a special name.
/// - `policy` is the exec policy to use for deciding whether to allow or deny
/// execve calls from the shell program.
/// - `escalation_policy_factory` is a factory for creating an
/// `EscalationPolicy` to use for deciding whether to allow, deny, or prompt
/// the user for execve calls from the shell program. We use a factory here
/// because the `EscalationPolicy` may need to capture request-specific
/// context (e.g. the MCP request context) that is not available at the time
/// we create the `ExecTool`.
/// - `effective_timeout` is the timeout to use for running the shell command.
///   Implementations are encouraged to exclude any time spent prompting the
/// user.
async fn run_escalate_server(
exec_params: ExecParams,
sandbox_state: SandboxState,
shell_program: impl AsRef<Path>,
execve_wrapper: impl AsRef<Path>,
policy: Arc<RwLock<Policy>>,
escalation_policy_factory: impl EscalationPolicyFactory,
effective_timeout: Duration,
) -> anyhow::Result<crate::posix::escalate_server::ExecResult> {
let stopwatch = Stopwatch::new(effective_timeout);
let cancel_token = stopwatch.cancellation_token();
let escalate_server = EscalateServer::new(
shell_program.as_ref().to_path_buf(),
execve_wrapper.as_ref().to_path_buf(),
escalation_policy_factory.create_policy(policy, stopwatch),
);
escalate_server
.exec(exec_params, cancel_token, &sandbox_state)
.await
}
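
As the doc comment notes, the factory exists so the escalation policy can capture per-request context that is not available when the long-lived tool is built. A minimal stand-alone sketch of that shape (trait and type names here are illustrative stand-ins, not the actual codex-core API):

```rust
// Hypothetical, simplified stand-ins for the real escalate-server types.
trait EscalationPolicy {
    fn allow(&self, program: &str) -> bool;
}

trait EscalationPolicyFactory {
    fn create_policy(&self, allowlist: Vec<String>) -> Box<dyn EscalationPolicy>;
}

/// Captures request-scoped context (here just a request id) that does not
/// exist yet when the long-lived tool is constructed.
struct RequestScopedFactory {
    request_id: u64,
}

struct RequestScopedPolicy {
    request_id: u64,
    allowlist: Vec<String>,
}

impl EscalationPolicy for RequestScopedPolicy {
    fn allow(&self, program: &str) -> bool {
        // A real policy would consult the captured context and possibly
        // prompt the user; this one only checks a static allowlist.
        let _ = self.request_id;
        self.allowlist.iter().any(|p| p.as_str() == program)
    }
}

impl EscalationPolicyFactory for RequestScopedFactory {
    fn create_policy(&self, allowlist: Vec<String>) -> Box<dyn EscalationPolicy> {
        Box::new(RequestScopedPolicy {
            request_id: self.request_id,
            allowlist,
        })
    }
}

fn main() {
    let factory = RequestScopedFactory { request_id: 42 };
    let policy = factory.create_policy(vec!["/bin/ls".to_string()]);
    assert!(policy.allow("/bin/ls"));
    assert!(!policy.allow("/bin/rm"));
}
```

Passing a factory instead of a finished policy keeps `ExecTool` construction free of request state while still letting each `exec` call build a policy around its own context.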
#[derive(Default)]
pub struct CodexSandboxStateUpdateMethod;

View File

@@ -96,7 +96,12 @@ pub struct Cli {
pub json: bool,
/// Specifies file where the last message from the agent should be written.
#[arg(long = "output-last-message", short = 'o', value_name = "FILE")]
#[arg(
long = "output-last-message",
short = 'o',
value_name = "FILE",
global = true
)]
pub last_message_file: Option<PathBuf>,
/// Initial instructions for the agent. If not provided as an argument (or
@@ -283,4 +288,27 @@ mod tests {
});
assert_eq!(effective_prompt.as_deref(), Some(PROMPT));
}
#[test]
fn resume_accepts_output_last_message_flag_after_subcommand() {
const PROMPT: &str = "echo resume-with-output-file";
let cli = Cli::parse_from([
"codex-exec",
"resume",
"session-123",
"-o",
"/tmp/resume-output.md",
PROMPT,
]);
assert_eq!(
cli.last_message_file,
Some(PathBuf::from("/tmp/resume-output.md"))
);
let Some(Command::Resume(args)) = cli.command else {
panic!("expected resume command");
};
assert_eq!(args.session_id.as_deref(), Some("session-123"));
assert_eq!(args.prompt.as_deref(), Some(PROMPT));
}
}
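
The effect of `global = true` is that clap accepts the flag anywhere on the command line, including after subcommands such as `resume`. A stdlib-only sketch of that order-independent scan (hand-rolled purely for illustration; clap implements this internally):

```rust
/// Scan the full argv for `-o`/`--output-last-message` wherever it appears,
/// and return the flag's value plus the remaining (positional) arguments.
fn extract_output_flag(args: &[&str]) -> (Option<String>, Vec<String>) {
    let mut out = None;
    let mut rest = Vec::new();
    let mut iter = args.iter();
    while let Some(arg) = iter.next() {
        if *arg == "-o" || *arg == "--output-last-message" {
            // The flag consumes the next token as its value.
            out = iter.next().map(|s| s.to_string());
        } else {
            rest.push(arg.to_string());
        }
    }
    (out, rest)
}

fn main() {
    // Flag after the subcommand, as in `codex-exec resume session-123 -o FILE`.
    let (out, rest) =
        extract_output_flag(&["resume", "session-123", "-o", "/tmp/last.md", "echo hi"]);
    assert_eq!(out.as_deref(), Some("/tmp/last.md"));
    assert_eq!(rest, vec!["resume", "session-123", "echo hi"]);
}
```

Without the global marking, a parser that dispatches to the subcommand first would reject `-o` as an unknown `resume` argument, which is exactly the failure the test above guards against.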

View File

@@ -4,11 +4,7 @@ use codex_utils_cargo_bin::find_resource;
use core_test_support::test_codex_exec::test_codex_exec;
use pretty_assertions::assert_eq;
use serde_json::Value;
use std::fs::FileTimes;
use std::fs::OpenOptions;
use std::string::ToString;
use std::time::Duration;
use std::time::SystemTime;
use tempfile::TempDir;
use uuid::Uuid;
use walkdir::WalkDir;
@@ -257,21 +253,26 @@ fn exec_resume_last_respects_cwd_filter_and_all_flag() -> anyhow::Result<()> {
.success();
let sessions_dir = test.home_path().join("sessions");
let path_a = find_session_file_containing_marker(&sessions_dir, &marker_a)
find_session_file_containing_marker(&sessions_dir, &marker_a)
.expect("no session file found for marker_a");
let path_b = find_session_file_containing_marker(&sessions_dir, &marker_b)
.expect("no session file found for marker_b");
// Files are ordered by `updated_at`, then by `uuid`.
// We mutate the mtimes to ensure file_b is the newest file.
let file_a = OpenOptions::new().write(true).open(&path_a)?;
file_a.set_times(
FileTimes::new().set_modified(SystemTime::UNIX_EPOCH + Duration::from_secs(1)),
)?;
let file_b = OpenOptions::new().write(true).open(&path_b)?;
file_b.set_times(
FileTimes::new().set_modified(SystemTime::UNIX_EPOCH + Duration::from_secs(2)),
)?;
// Make thread B deterministically newest according to rollout metadata.
let session_id_b = extract_conversation_id(&path_b);
let marker_b_touch = format!("resume-cwd-b-touch-{}", Uuid::new_v4());
let prompt_b_touch = format!("echo {marker_b_touch}");
test.cmd()
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(dir_b.path())
.arg("resume")
.arg(&session_id_b)
.arg(&prompt_b_touch)
.assert()
.success();
let marker_b2 = format!("resume-cwd-b-2-{}", Uuid::new_v4());
let prompt_b2 = format!("echo {marker_b2}");
@@ -312,8 +313,8 @@ fn exec_resume_last_respects_cwd_filter_and_all_flag() -> anyhow::Result<()> {
let resumed_path_cwd = find_session_file_containing_marker(&sessions_dir, &marker_a2)
.expect("no resumed session file containing marker_a2");
assert_eq!(
resumed_path_cwd, path_a,
"resume --last should prefer sessions from the same cwd"
resumed_path_cwd, path_b,
"resume --last should prefer sessions whose latest turn context matches the current cwd"
);
Ok(())

View File

@@ -1961,6 +1961,9 @@ impl SessionSource {
SessionSource::SubAgent(SubAgentSource::ThreadSpawn { agent_nickname, .. }) => {
agent_nickname.clone()
}
SessionSource::SubAgent(SubAgentSource::MemoryConsolidation) => {
Some("morpheus".to_string())
}
_ => None,
}
}
@@ -1970,6 +1973,9 @@ impl SessionSource {
SessionSource::SubAgent(SubAgentSource::ThreadSpawn { agent_role, .. }) => {
agent_role.clone()
}
SessionSource::SubAgent(SubAgentSource::MemoryConsolidation) => {
Some("memory builder".to_string())
}
_ => None,
}
}

View File

@@ -23,6 +23,10 @@ use crate::history_cell::UpdateAvailableHistoryCell;
use crate::model_migration::ModelMigrationOutcome;
use crate::model_migration::migration_copy_for_models;
use crate::model_migration::run_model_migration_prompt;
use crate::multi_agents::AgentPickerThreadEntry;
use crate::multi_agents::agent_picker_status_dot_spans;
use crate::multi_agents::format_agent_picker_item_name;
use crate::multi_agents::sort_agent_picker_threads;
use crate::pager_overlay::Overlay;
use crate::render::highlight::highlight_bash_to_lines;
use crate::render::renderable::Renderable;
@@ -576,6 +580,7 @@ pub(crate) struct App {
windows_sandbox: WindowsSandboxState,
thread_event_channels: HashMap<ThreadId, ThreadEventChannel>,
agent_picker_threads: HashMap<ThreadId, AgentPickerThreadEntry>,
active_thread_id: Option<ThreadId>,
active_thread_rx: Option<mpsc::Receiver<Event>>,
primary_thread_id: Option<ThreadId>,
@@ -699,27 +704,18 @@ impl App {
self.clear_ui_header_lines_with_version(width, CODEX_CLI_VERSION)
}
fn clear_terminal_ui(&mut self, tui: &mut tui::Tui) -> Result<()> {
fn clear_terminal_ui(&mut self, tui: &mut tui::Tui, redraw_header: bool) -> Result<()> {
let is_alt_screen_active = tui.is_alt_screen_active();
let use_apple_terminal_clear_workaround = !is_alt_screen_active
&& matches!(
codex_core::terminal::terminal_info().name,
codex_core::terminal::TerminalName::AppleTerminal
);
// Drop queued history insertions so stale transcript lines cannot be flushed after /clear.
tui.clear_pending_history_lines();
if is_alt_screen_active {
tui.terminal.clear_visible_screen()?;
} else if use_apple_terminal_clear_workaround {
// Terminal.app can leave mixed old/new glyphs behind when we purge + clear.
// Use a stricter ANSI reset, then redraw only a fresh session header box instead of
// replaying the initialization transcript preamble.
tui.terminal.clear_scrollback_and_visible_screen_ansi()?;
} else {
tui.terminal.clear_scrollback()?;
tui.terminal.clear_visible_screen()?;
// Some terminals (Terminal.app, Warp) do not reliably drop scrollback when purge and
// clear are emitted as separate backend commands. Prefer a single ANSI sequence.
tui.terminal.clear_scrollback_and_visible_screen_ansi()?;
}
let mut area = tui.terminal.viewport_area;
@@ -731,11 +727,13 @@ impl App {
}
self.has_emitted_history_lines = false;
let width = tui.terminal.last_known_screen_size.width;
let header_lines = self.clear_ui_header_lines(width);
if !header_lines.is_empty() {
tui.insert_history_lines(header_lines);
self.has_emitted_history_lines = true;
if redraw_header {
let width = tui.terminal.last_known_screen_size.width;
let header_lines = self.clear_ui_header_lines(width);
if !header_lines.is_empty() {
tui.insert_history_lines(header_lines);
self.has_emitted_history_lines = true;
}
}
Ok(())
}
@@ -868,50 +866,55 @@ impl App {
async fn open_agent_picker(&mut self) {
let thread_ids: Vec<ThreadId> = self.thread_event_channels.keys().cloned().collect();
let mut agent_threads = Vec::new();
for thread_id in thread_ids {
match self.server.get_thread(thread_id).await {
Ok(thread) => {
let session_source = thread.config_snapshot().await.session_source;
agent_threads.push((
self.upsert_agent_picker_thread(
thread_id,
session_source.get_nickname(),
session_source.get_agent_role(),
));
false,
);
}
Err(_) => {
self.thread_event_channels.remove(&thread_id);
self.mark_agent_picker_thread_closed(thread_id);
}
}
}
if agent_threads.is_empty() {
if self.agent_picker_threads.is_empty() {
self.chat_widget
.add_info_message("No agents available yet.".to_string(), None);
return;
}
agent_threads.sort_by(|(left, ..), (right, ..)| left.to_string().cmp(&right.to_string()));
let mut agent_threads: Vec<(ThreadId, AgentPickerThreadEntry)> = self
.agent_picker_threads
.iter()
.map(|(thread_id, entry)| (*thread_id, entry.clone()))
.collect();
sort_agent_picker_threads(&mut agent_threads);
let mut initial_selected_idx = None;
let items: Vec<SelectionItem> = agent_threads
.iter()
.enumerate()
.map(|(idx, (thread_id, agent_nickname, agent_role))| {
.map(|(idx, (thread_id, entry))| {
if self.active_thread_id == Some(*thread_id) {
initial_selected_idx = Some(idx);
}
let id = *thread_id;
let is_primary = self.primary_thread_id == Some(*thread_id);
let name = format_agent_picker_item_name(
*thread_id,
agent_nickname.as_deref(),
agent_role.as_deref(),
entry.agent_nickname.as_deref(),
entry.agent_role.as_deref(),
is_primary,
);
let uuid = thread_id.to_string();
SelectionItem {
name: name.clone(),
name_prefix_spans: agent_picker_status_dot_spans(entry.is_closed),
description: Some(uuid.clone()),
is_current: self.active_thread_id == Some(*thread_id),
actions: vec![Box::new(move |tx| {
@@ -934,20 +937,51 @@ impl App {
});
}
fn upsert_agent_picker_thread(
&mut self,
thread_id: ThreadId,
agent_nickname: Option<String>,
agent_role: Option<String>,
is_closed: bool,
) {
self.agent_picker_threads.insert(
thread_id,
AgentPickerThreadEntry {
agent_nickname,
agent_role,
is_closed,
},
);
}
fn mark_agent_picker_thread_closed(&mut self, thread_id: ThreadId) {
if let Some(entry) = self.agent_picker_threads.get_mut(&thread_id) {
entry.is_closed = true;
} else {
self.upsert_agent_picker_thread(thread_id, None, None, true);
}
}
async fn select_agent_thread(&mut self, tui: &mut tui::Tui, thread_id: ThreadId) -> Result<()> {
if self.active_thread_id == Some(thread_id) {
return Ok(());
}
let thread = match self.server.get_thread(thread_id).await {
Ok(thread) => thread,
let live_thread = match self.server.get_thread(thread_id).await {
Ok(thread) => Some(thread),
Err(err) => {
self.chat_widget.add_error_message(format!(
"Failed to attach to agent thread {thread_id}: {err}"
));
return Ok(());
if self.thread_event_channels.contains_key(&thread_id) {
self.mark_agent_picker_thread_closed(thread_id);
None
} else {
self.chat_widget.add_error_message(format!(
"Failed to attach to agent thread {thread_id}: {err}"
));
return Ok(());
}
}
};
let is_replay_only = live_thread.is_none();
let previous_thread_id = self.active_thread_id;
self.store_active_thread_receiver().await;
@@ -965,11 +999,22 @@ impl App {
self.active_thread_rx = Some(receiver);
let init = self.chatwidget_init_for_forked_or_resumed_thread(tui, self.config.clone());
let codex_op_tx = crate::chatwidget::spawn_op_forwarder(thread);
let codex_op_tx = if let Some(thread) = live_thread {
crate::chatwidget::spawn_op_forwarder(thread)
} else {
let (tx, _rx) = unbounded_channel();
tx
};
self.chat_widget = ChatWidget::new_with_op_sender(init, codex_op_tx);
self.reset_for_thread_switch(tui)?;
self.replay_thread_snapshot(snapshot);
if is_replay_only {
self.chat_widget.add_info_message(
format!("Agent thread {thread_id} is closed. Replaying saved transcript."),
None,
);
}
self.drain_active_thread_events(tui).await?;
Ok(())
@@ -989,12 +1034,55 @@ impl App {
fn reset_thread_event_state(&mut self) {
self.thread_event_channels.clear();
self.agent_picker_threads.clear();
self.active_thread_id = None;
self.active_thread_rx = None;
self.primary_thread_id = None;
self.pending_primary_events.clear();
}
async fn start_fresh_session_with_summary_hint(&mut self, tui: &mut tui::Tui) {
// Start a fresh in-memory session while preserving resumability via persisted rollout
// history.
let model = self.chat_widget.current_model().to_string();
let summary = session_summary(
self.chat_widget.token_usage(),
self.chat_widget.thread_id(),
self.chat_widget.thread_name(),
);
self.shutdown_current_thread().await;
if let Err(err) = self.server.remove_and_close_all_threads().await {
tracing::warn!(error = %err, "failed to close all threads");
}
let init = crate::chatwidget::ChatWidgetInit {
config: self.config.clone(),
frame_requester: tui.frame_requester(),
app_event_tx: self.app_event_tx.clone(),
// New sessions start without prefilled message content.
initial_user_message: None,
enhanced_keys_supported: self.enhanced_keys_supported,
auth_manager: self.auth_manager.clone(),
models_manager: self.server.get_models_manager(),
feedback: self.feedback.clone(),
is_first_run: false,
feedback_audience: self.feedback_audience,
model: Some(model),
status_line_invalid_items_warned: self.status_line_invalid_items_warned.clone(),
otel_manager: self.otel_manager.clone(),
};
self.chat_widget = ChatWidget::new(init, self.server.clone());
self.reset_thread_event_state();
if let Some(summary) = summary {
let mut lines: Vec<Line<'static>> = vec![summary.usage_line.clone().into()];
if let Some(command) = summary.resume_command {
let spans = vec!["To continue this session, run ".into(), command.cyan()];
lines.push(spans.into());
}
self.chat_widget.add_plain_history_lines(lines);
}
tui.frame_requester().schedule_frame();
}
async fn drain_active_thread_events(&mut self, tui: &mut tui::Tui) -> Result<()> {
let Some(mut rx) = self.active_thread_rx.take() else {
return Ok(());
@@ -1294,6 +1382,7 @@ impl App {
pending_shutdown_exit_thread_id: None,
windows_sandbox: WindowsSandboxState::default(),
thread_event_channels: HashMap::new(),
agent_picker_threads: HashMap::new(),
active_thread_id: None,
active_thread_rx: None,
primary_thread_id: None,
@@ -1481,47 +1570,18 @@ impl App {
async fn handle_event(&mut self, tui: &mut tui::Tui, event: AppEvent) -> Result<AppRunControl> {
match event {
AppEvent::NewSession => {
let model = self.chat_widget.current_model().to_string();
let summary = session_summary(
self.chat_widget.token_usage(),
self.chat_widget.thread_id(),
self.chat_widget.thread_name(),
);
self.shutdown_current_thread().await;
if let Err(err) = self.server.remove_and_close_all_threads().await {
tracing::warn!(error = %err, "failed to close all threads");
}
let init = crate::chatwidget::ChatWidgetInit {
config: self.config.clone(),
frame_requester: tui.frame_requester(),
app_event_tx: self.app_event_tx.clone(),
// New sessions start without prefilled message content.
initial_user_message: None,
enhanced_keys_supported: self.enhanced_keys_supported,
auth_manager: self.auth_manager.clone(),
models_manager: self.server.get_models_manager(),
feedback: self.feedback.clone(),
is_first_run: false,
feedback_audience: self.feedback_audience,
model: Some(model),
status_line_invalid_items_warned: self.status_line_invalid_items_warned.clone(),
otel_manager: self.otel_manager.clone(),
};
self.chat_widget = ChatWidget::new(init, self.server.clone());
self.reset_thread_event_state();
if let Some(summary) = summary {
let mut lines: Vec<Line<'static>> = vec![summary.usage_line.clone().into()];
if let Some(command) = summary.resume_command {
let spans = vec!["To continue this session, run ".into(), command.cyan()];
lines.push(spans.into());
}
self.chat_widget.add_plain_history_lines(lines);
}
tui.frame_requester().schedule_frame();
self.start_fresh_session_with_summary_hint(tui).await;
}
AppEvent::ClearUi => {
self.clear_terminal_ui(tui)?;
tui.frame_requester().schedule_frame();
self.clear_terminal_ui(tui, false)?;
self.overlay = None;
self.transcript_cells.clear();
self.deferred_history_lines.clear();
self.has_emitted_history_lines = false;
self.backtrack = BacktrackState::default();
self.backtrack_render_pending = false;
self.start_fresh_session_with_summary_hint(tui).await;
}
AppEvent::OpenResumePicker => {
match crate::resume_picker::run_resume_picker(tui, &self.config, false).await? {
@@ -2752,7 +2812,7 @@ impl App {
if let Some((closed_thread_id, primary_thread_id)) =
self.active_non_primary_shutdown_target(&event.msg)
{
self.thread_event_channels.remove(&closed_thread_id);
self.mark_agent_picker_thread_closed(closed_thread_id);
self.select_agent_thread(tui, primary_thread_id).await?;
if self.active_thread_id == Some(primary_thread_id) {
self.chat_widget.add_info_message(
@@ -2794,6 +2854,12 @@ impl App {
}
};
let config_snapshot = thread.config_snapshot().await;
self.upsert_agent_picker_thread(
thread_id,
config_snapshot.session_source.get_nickname(),
config_snapshot.session_source.get_agent_role(),
false,
);
let event = Event {
id: String::new(),
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
@@ -3078,28 +3144,6 @@ impl App {
}
}
fn format_agent_picker_item_name(
_thread_id: ThreadId,
agent_nickname: Option<&str>,
agent_role: Option<&str>,
is_primary: bool,
) -> String {
if is_primary {
return "Main [default]".to_string();
}
let agent_nickname = agent_nickname
.map(str::trim)
.filter(|nickname| !nickname.is_empty());
let agent_role = agent_role.map(str::trim).filter(|role| !role.is_empty());
match (agent_nickname, agent_role) {
(Some(agent_nickname), Some(agent_role)) => format!("{agent_nickname} [{agent_role}]"),
(Some(agent_nickname), None) => agent_nickname.to_string(),
(None, Some(agent_role)) => format!("[{agent_role}]"),
(None, None) => "Agent".to_string(),
}
}
#[cfg(test)]
mod tests {
use super::*;
@@ -3268,7 +3312,7 @@ mod tests {
}
#[tokio::test]
async fn open_agent_picker_prunes_missing_threads() -> Result<()> {
async fn open_agent_picker_keeps_missing_threads_for_replay() -> Result<()> {
let mut app = make_test_app().await;
let thread_id = ThreadId::new();
app.thread_event_channels
@@ -3276,7 +3320,44 @@ mod tests {
app.open_agent_picker().await;
assert_eq!(app.thread_event_channels.contains_key(&thread_id), false);
assert_eq!(app.thread_event_channels.contains_key(&thread_id), true);
assert_eq!(
app.agent_picker_threads.get(&thread_id),
Some(&AgentPickerThreadEntry {
agent_nickname: None,
agent_role: None,
is_closed: true,
})
);
Ok(())
}
#[tokio::test]
async fn open_agent_picker_keeps_cached_closed_threads() -> Result<()> {
let mut app = make_test_app().await;
let thread_id = ThreadId::new();
app.thread_event_channels
.insert(thread_id, ThreadEventChannel::new(1));
app.agent_picker_threads.insert(
thread_id,
AgentPickerThreadEntry {
agent_nickname: Some("Robie".to_string()),
agent_role: Some("explorer".to_string()),
is_closed: false,
},
);
app.open_agent_picker().await;
assert_eq!(app.thread_event_channels.contains_key(&thread_id), true);
assert_eq!(
app.agent_picker_threads.get(&thread_id),
Some(&AgentPickerThreadEntry {
agent_nickname: Some("Robie".to_string()),
agent_role: Some("explorer".to_string()),
is_closed: true,
})
);
Ok(())
}
@@ -3287,27 +3368,27 @@ mod tests {
let snapshot = [
format!(
"{} | {}",
format_agent_picker_item_name(thread_id, Some("Robie"), Some("explorer"), true),
format_agent_picker_item_name(Some("Robie"), Some("explorer"), true),
thread_id
),
format!(
"{} | {}",
format_agent_picker_item_name(thread_id, Some("Robie"), Some("explorer"), false),
format_agent_picker_item_name(Some("Robie"), Some("explorer"), false),
thread_id
),
format!(
"{} | {}",
format_agent_picker_item_name(thread_id, Some("Robie"), None, false),
format_agent_picker_item_name(Some("Robie"), None, false),
thread_id
),
format!(
"{} | {}",
format_agent_picker_item_name(thread_id, None, Some("explorer"), false),
format_agent_picker_item_name(None, Some("explorer"), false),
thread_id
),
format!(
"{} | {}",
format_agent_picker_item_name(thread_id, None, None, false),
format_agent_picker_item_name(None, None, false),
thread_id
),
]
@@ -3548,6 +3629,7 @@ mod tests {
pending_shutdown_exit_thread_id: None,
windows_sandbox: WindowsSandboxState::default(),
thread_event_channels: HashMap::new(),
agent_picker_threads: HashMap::new(),
active_thread_id: None,
active_thread_rx: None,
primary_thread_id: None,
@@ -3606,6 +3688,7 @@ mod tests {
pending_shutdown_exit_thread_id: None,
windows_sandbox: WindowsSandboxState::default(),
thread_event_channels: HashMap::new(),
agent_picker_threads: HashMap::new(),
active_thread_id: None,
active_thread_rx: None,
primary_thread_id: None,

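The `upsert_agent_picker_thread` / `mark_agent_picker_thread_closed` pair amounts to a cache with tombstones: a closed thread keeps its metadata so it stays listed for transcript replay, and an unknown thread gets an anonymous closed placeholder. A simplified sketch (entry and key types reduced for illustration):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct Entry {
    nickname: Option<String>,
    is_closed: bool,
}

struct Picker {
    threads: HashMap<u64, Entry>,
}

impl Picker {
    fn upsert(&mut self, id: u64, nickname: Option<String>, is_closed: bool) {
        self.threads.insert(id, Entry { nickname, is_closed });
    }

    /// Keep cached metadata when a thread goes away so it remains selectable
    /// for replay; otherwise record an anonymous closed entry.
    fn mark_closed(&mut self, id: u64) {
        if let Some(entry) = self.threads.get_mut(&id) {
            entry.is_closed = true;
        } else {
            self.upsert(id, None, true);
        }
    }
}

fn main() {
    let mut picker = Picker { threads: HashMap::new() };
    picker.upsert(1, Some("Robie".into()), false);
    picker.mark_closed(1); // metadata survives
    picker.mark_closed(2); // unknown thread gets a closed placeholder
    assert_eq!(
        picker.threads[&1],
        Entry { nickname: Some("Robie".into()), is_closed: true }
    );
    assert_eq!(picker.threads[&2], Entry { nickname: None, is_closed: true });
}
```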
View File

@@ -54,7 +54,8 @@ pub(crate) enum AppEvent {
/// Start a new session.
NewSession,
/// Clear the terminal UI (screen + scrollback) without changing session state.
/// Clear the terminal UI (screen + scrollback), start a fresh session, and keep the
/// previous chat resumable.
ClearUi,
/// Open the resume picker inside the running TUI session.

View File

@@ -220,6 +220,7 @@ impl CommandPopup {
};
GenericDisplayRow {
name,
name_prefix_spans: Vec::new(),
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
display_shortcut: None,
description: Some(description),

View File

@@ -119,6 +119,7 @@ impl WidgetRef for &FileSearchPopup {
.iter()
.map(|m| GenericDisplayRow {
name: m.path.to_string_lossy().to_string(),
name_prefix_spans: Vec::new(),
match_indices: m
.indices
.as_ref()

View File

@@ -112,6 +112,7 @@ pub(crate) type OnCancelCallback = Option<Box<dyn Fn(&AppEventSender) + Send + S
#[derive(Default)]
pub(crate) struct SelectionItem {
pub name: String,
pub name_prefix_spans: Vec<Span<'static>>,
pub display_shortcut: Option<KeyBinding>,
pub description: Option<String>,
pub selected_description: Option<String>,
@@ -372,7 +373,9 @@ impl ListSelectionView {
format!("{prefix} {n}. ")
};
let wrap_prefix_width = UnicodeWidthStr::width(wrap_prefix.as_str());
let display_name = format!("{wrap_prefix}{name_with_marker}");
let mut name_prefix_spans = Vec::new();
name_prefix_spans.push(wrap_prefix.into());
name_prefix_spans.extend(item.name_prefix_spans.clone());
let description = is_selected
.then(|| item.selected_description.clone())
.flatten()
@@ -380,7 +383,8 @@ impl ListSelectionView {
let wrap_indent = description.is_none().then_some(wrap_prefix_width);
let is_disabled = item.is_disabled || item.disabled_reason.is_some();
GenericDisplayRow {
name: display_name,
name: name_with_marker,
name_prefix_spans,
display_shortcut: item.display_shortcut,
match_indices: None,
description,

View File

@@ -28,6 +28,7 @@ use super::scroll_state::ScrollState;
#[derive(Default)]
pub(crate) struct GenericDisplayRow {
pub name: String,
pub name_prefix_spans: Vec<Span<'static>>,
pub display_shortcut: Option<KeyBinding>,
pub match_indices: Option<Vec<usize>>, // indices to bold (char positions)
pub description: Option<String>, // optional grey text after the name
@@ -242,7 +243,8 @@ fn compute_desc_col(
.skip(start_idx)
.take(visible_items)
.map(|(_, row)| {
let mut spans: Vec<Span> = vec![row.name.clone().into()];
let mut spans = row.name_prefix_spans.clone();
spans.push(row.name.clone().into());
if row.disabled_reason.is_some() {
spans.push(" (disabled)".dim());
}
@@ -253,7 +255,8 @@ fn compute_desc_col(
ColumnWidthMode::AutoAllRows => rows_all
.iter()
.map(|row| {
let mut spans: Vec<Span> = vec![row.name.clone().into()];
let mut spans = row.name_prefix_spans.clone();
spans.push(row.name.clone().into());
if row.disabled_reason.is_some() {
spans.push(" (disabled)".dim());
}
@@ -291,6 +294,7 @@ fn should_wrap_name_in_column(row: &GenericDisplayRow) -> bool {
&& row.match_indices.is_none()
&& row.display_shortcut.is_none()
&& row.category_tag.is_none()
&& row.name_prefix_spans.is_empty()
}
fn wrap_two_column_row(row: &GenericDisplayRow, desc_col: usize, width: u16) -> Vec<Line<'static>> {
@@ -508,9 +512,10 @@ fn build_full_line(row: &GenericDisplayRow, desc_col: usize) -> Line<'static> {
// Enforce single-line name: allow at most desc_col - 2 cells for name,
// reserving two spaces before the description column.
let name_prefix_width = Line::from(row.name_prefix_spans.clone()).width();
let name_limit = combined_description
.as_ref()
.map(|_| desc_col.saturating_sub(2))
.map(|_| desc_col.saturating_sub(2).saturating_sub(name_prefix_width))
.unwrap_or(usize::MAX);
let mut name_spans: Vec<Span> = Vec::with_capacity(row.name.len());
@@ -558,8 +563,9 @@ fn build_full_line(row: &GenericDisplayRow, desc_col: usize) -> Line<'static> {
name_spans.push(" (disabled)".dim());
}
let this_name_width = Line::from(name_spans.clone()).width();
let mut full_spans: Vec<Span> = name_spans;
let this_name_width = name_prefix_width + Line::from(name_spans.clone()).width();
let mut full_spans: Vec<Span> = row.name_prefix_spans.clone();
full_spans.extend(name_spans);
if let Some(display_shortcut) = row.display_shortcut {
full_spans.push(" (".into());
full_spans.push(display_shortcut.into());

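The `name_prefix_spans` width fix in `build_full_line` comes down to charging the prefix against the name's column budget before truncating. A stdlib sketch of that arithmetic, using `char` counts in place of the real unicode-width measurement:

```rust
/// Truncate `name` so that `prefix + name` fits within `desc_col - 2` cells,
/// reserving two spaces before the description column.
fn layout_row(prefix: &str, name: &str, desc_col: usize) -> String {
    let prefix_width = prefix.chars().count();
    // Without subtracting the prefix width, a status-dot prefix would push
    // the name past the description column.
    let name_limit = desc_col.saturating_sub(2).saturating_sub(prefix_width);
    let truncated: String = name.chars().take(name_limit).collect();
    format!("{prefix}{truncated}")
}

fn main() {
    // Prefix width 2 leaves 12 - 2 - 2 = 8 cells for the name.
    assert_eq!(layout_row("● ", "very-long-agent-name", 12), "● very-lon");
    assert_eq!(layout_row("", "short", 12), "short");
}
```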
View File

@@ -101,6 +101,7 @@ impl SkillPopup {
let description = mention.description.clone().unwrap_or_default();
GenericDisplayRow {
name,
name_prefix_spans: Vec::new(),
match_indices: indices,
display_shortcut: None,
description: Some(description).filter(|desc| !desc.is_empty()),

View File

@@ -49,6 +49,7 @@ use codex_app_server_protocol::ConfigLayerSource;
use codex_backend_client::Client as BackendClient;
use codex_chatgpt::connectors;
use codex_core::config::Config;
use codex_core::config::Constrained;
use codex_core::config::ConstraintResult;
use codex_core::config::types::Notifications;
use codex_core::config::types::WindowsSandboxModeToml;
@@ -1076,6 +1077,27 @@ impl ChatWidget {
self.forked_from = event.forked_from_id;
self.current_rollout_path = event.rollout_path.clone();
self.current_cwd = Some(event.cwd.clone());
self.config.cwd = event.cwd.clone();
if let Err(err) = self
.config
.permissions
.approval_policy
.set(event.approval_policy)
{
tracing::warn!(%err, "failed to sync approval_policy from SessionConfigured");
self.config.permissions.approval_policy =
Constrained::allow_only(event.approval_policy);
}
if let Err(err) = self
.config
.permissions
.sandbox_policy
.set(event.sandbox_policy.clone())
{
tracing::warn!(%err, "failed to sync sandbox_policy from SessionConfigured");
self.config.permissions.sandbox_policy =
Constrained::allow_only(event.sandbox_policy.clone());
}
let initial_messages = event.initial_messages.clone();
let forked_from_id = event.forked_from_id;
let model_for_header = event.model.clone();
@@ -2809,9 +2831,7 @@ impl ChatWidget {
widget
.bottom_pane
.set_status_line_enabled(!widget.configured_status_line_items().is_empty());
widget.bottom_pane.set_collaboration_modes_enabled(
widget.config.features.enabled(Feature::CollaborationModes),
);
widget.bottom_pane.set_collaboration_modes_enabled(true);
widget.sync_personality_command_enabled();
widget
.bottom_pane
@@ -2979,9 +2999,7 @@ impl ChatWidget {
widget
.bottom_pane
.set_status_line_enabled(!widget.configured_status_line_items().is_empty());
widget.bottom_pane.set_collaboration_modes_enabled(
widget.config.features.enabled(Feature::CollaborationModes),
);
widget.bottom_pane.set_collaboration_modes_enabled(true);
widget.sync_personality_command_enabled();
widget
.bottom_pane
@@ -3138,9 +3156,7 @@ impl ChatWidget {
widget
.bottom_pane
.set_status_line_enabled(!widget.configured_status_line_items().is_empty());
widget.bottom_pane.set_collaboration_modes_enabled(
widget.config.features.enabled(Feature::CollaborationModes),
);
widget.bottom_pane.set_collaboration_modes_enabled(true);
widget.sync_personality_command_enabled();
widget
.bottom_pane
@@ -6325,22 +6341,6 @@ impl ChatWidget {
if feature == Feature::Steer {
self.bottom_pane.set_steer_enabled(enabled);
}
if feature == Feature::CollaborationModes {
self.bottom_pane.set_collaboration_modes_enabled(enabled);
let settings = self.current_collaboration_mode.settings.clone();
self.current_collaboration_mode = CollaborationMode {
mode: ModeKind::Default,
settings,
};
self.active_collaboration_mask = if enabled {
collaboration_modes::default_mask(self.models_manager.as_ref())
} else {
None
};
self.update_collaboration_mode_indicator();
self.refresh_model_display();
self.request_redraw();
}
if feature == Feature::Personality {
self.sync_personality_command_enabled();
}
@@ -6519,17 +6519,14 @@ impl ChatWidget {
}
fn collaboration_modes_enabled(&self) -> bool {
self.config.features.enabled(Feature::CollaborationModes)
true
}
fn initial_collaboration_mask(
config: &Config,
_config: &Config,
models_manager: &ModelsManager,
model_override: Option<&str>,
) -> Option<CollaborationModeMask> {
if !config.features.enabled(Feature::CollaborationModes) {
return None;
}
let mut mask = collaboration_modes::default_mask(models_manager)?;
if let Some(model_override) = model_override {
mask.model = Some(model_override.to_string());

View File
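The permission sync added to the `SessionConfigured` handler follows a set-or-pin pattern: try to set the constrained field, and if the existing constraint rejects the session's value, replace the constraint so that value wins. The real `Constrained` type lives in `codex_core::config`; this reimplementation is purely illustrative:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Constrained<T: PartialEq + Clone> {
    value: T,
    allowed: Option<Vec<T>>, // None = unconstrained
}

impl<T: PartialEq + Clone> Constrained<T> {
    fn set(&mut self, value: T) -> Result<(), &'static str> {
        match &self.allowed {
            Some(allowed) if !allowed.contains(&value) => Err("value not allowed by constraint"),
            _ => {
                self.value = value;
                Ok(())
            }
        }
    }

    /// Pin the field to exactly `value`, discarding the previous constraint.
    fn allow_only(value: T) -> Self {
        Constrained { value: value.clone(), allowed: Some(vec![value]) }
    }
}

fn main() {
    let mut policy = Constrained { value: "on-request", allowed: Some(vec!["on-request"]) };
    // The session reports "never", which the current constraint rejects…
    if policy.set("never").is_err() {
        // …so fall back to pinning the session's value, as the widget does.
        policy = Constrained::allow_only("never");
    }
    assert_eq!(policy.value, "never");
}
```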

@@ -325,6 +325,57 @@ async fn replayed_user_message_preserves_remote_image_urls() {
assert_eq!(stored_remote_image_urls, remote_image_urls);
}
#[tokio::test]
async fn session_configured_syncs_widget_config_permissions_and_cwd() {
let (mut chat, _rx, _ops) = make_chatwidget_manual(None).await;
chat.config
.permissions
.approval_policy
.set(AskForApproval::OnRequest)
.expect("set approval policy");
chat.config
.permissions
.sandbox_policy
.set(SandboxPolicy::new_workspace_write_policy())
.expect("set sandbox policy");
chat.config.cwd = PathBuf::from("/home/user/main");
let expected_sandbox = SandboxPolicy::new_read_only_policy();
let expected_cwd = PathBuf::from("/home/user/sub-agent");
let configured = codex_protocol::protocol::SessionConfiguredEvent {
session_id: ThreadId::new(),
forked_from_id: None,
thread_name: None,
model: "test-model".to_string(),
model_provider_id: "test-provider".to_string(),
approval_policy: AskForApproval::Never,
sandbox_policy: expected_sandbox.clone(),
cwd: expected_cwd.clone(),
reasoning_effort: Some(ReasoningEffortConfig::default()),
history_log_id: 0,
history_entry_count: 0,
initial_messages: None,
network_proxy: None,
rollout_path: None,
};
chat.handle_codex_event(Event {
id: "session-configured".into(),
msg: EventMsg::SessionConfigured(configured),
});
assert_eq!(
chat.config_ref().permissions.approval_policy.value(),
AskForApproval::Never
);
assert_eq!(
chat.config_ref().permissions.sandbox_policy.get(),
&expected_sandbox
);
assert_eq!(&chat.config_ref().cwd, &expected_cwd);
}
#[tokio::test]
async fn replayed_user_message_with_only_remote_images_renders_history_cell() {
let (mut chat, mut rx, _ops) = make_chatwidget_manual(None).await;
@@ -1577,7 +1628,7 @@ async fn make_chatwidget_manual(
skills: None,
});
bottom.set_steer_enabled(true);
bottom.set_collaboration_modes_enabled(cfg.features.enabled(Feature::CollaborationModes));
bottom.set_collaboration_modes_enabled(true);
let auth_manager =
codex_core::test_support::auth_manager_from_auth(CodexAuth::from_api_key("test"));
let codex_home = cfg.codex_home.clone();
@@ -1592,6 +1643,7 @@ async fn make_chatwidget_manual(
},
};
let current_collaboration_mode = base_mode;
let active_collaboration_mask = collaboration_modes::default_mask(models_manager.as_ref());
let mut widget = ChatWidget {
app_event_tx,
codex_op_tx: op_tx,
@@ -1600,7 +1652,7 @@ async fn make_chatwidget_manual(
active_cell_revision: 0,
config: cfg,
current_collaboration_mode,
active_collaboration_mask: None,
active_collaboration_mask,
auth_manager,
models_manager,
otel_manager,
@@ -4004,17 +4056,10 @@ async fn slash_init_skips_when_project_doc_exists() {
}
#[tokio::test]
async fn collab_mode_shift_tab_cycles_only_when_enabled_and_idle() {
async fn collab_mode_shift_tab_cycles_only_when_idle() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual(None).await;
chat.set_feature_enabled(Feature::CollaborationModes, false);
let initial = chat.current_collaboration_mode().clone();
chat.handle_key_event(KeyEvent::from(KeyCode::BackTab));
assert_eq!(chat.current_collaboration_mode(), &initial);
assert_eq!(chat.active_collaboration_mode_kind(), ModeKind::Default);
chat.set_feature_enabled(Feature::CollaborationModes, true);
chat.handle_key_event(KeyEvent::from(KeyCode::BackTab));
assert_eq!(chat.active_collaboration_mode_kind(), ModeKind::Plan);
assert_eq!(chat.current_collaboration_mode(), &initial);
@@ -4379,26 +4424,12 @@ async fn collab_mode_is_sent_after_enabling() {
}
#[tokio::test]
-async fn collab_mode_toggle_on_applies_default_preset() {
+async fn collab_mode_applies_default_preset() {
let (mut chat, _rx, mut op_rx) = make_chatwidget_manual(None).await;
chat.thread_id = Some(ThreadId::new());
chat.bottom_pane
-.set_composer_text("before toggle".to_string(), Vec::new(), Vec::new());
-chat.handle_key_event(KeyEvent::from(KeyCode::Enter));
-match next_submit_op(&mut op_rx) {
-Op::UserTurn {
-collaboration_mode: None,
-personality: Some(Personality::Pragmatic),
-..
-} => {}
-other => panic!("expected Op::UserTurn without collaboration_mode, got {other:?}"),
-}
-chat.set_feature_enabled(Feature::CollaborationModes, true);
-chat.bottom_pane
-.set_composer_text("after toggle".to_string(), Vec::new(), Vec::new());
+.set_composer_text("hello".to_string(), Vec::new(), Vec::new());
chat.handle_key_event(KeyEvent::from(KeyCode::Enter));
match next_submit_op(&mut op_rx) {
Op::UserTurn {

@@ -457,15 +457,16 @@ where
/// Hard-reset scrollback + visible screen using an explicit ANSI sequence.
///
-/// This is a compatibility fallback for terminals that misbehave when purge
-/// and full-screen clear are issued as separate backend commands.
+/// Some terminals behave more reliably when purge + clear are emitted as a
+/// single ANSI sequence instead of separate backend commands.
pub fn clear_scrollback_and_visible_screen_ansi(&mut self) -> io::Result<()> {
if self.viewport_area.is_empty() {
return Ok(());
}
-// Reset scroll region + style state, purge scrollback, clear screen, home cursor.
-write!(self.backend, "\x1b[r\x1b[0m\x1b[3J\x1b[2J\x1b[H")?;
+// Reset scroll region + style state, home cursor, clear screen, purge scrollback.
+// The order matches the common shell `clear && printf '\\e[3J'` behavior.
+write!(self.backend, "\x1b[r\x1b[0m\x1b[H\x1b[2J\x1b[3J\x1b[H")?;
std::io::Write::flush(&mut self.backend)?;
self.last_known_cursor_pos = Position { x: 0, y: 0 };
self.visible_history_rows = 0;

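As a reference for the new ordering, the control string above decomposes into standard ANSI/xterm commands. A minimal standalone sketch — the `clear_sequence` helper is hypothetical, not part of this diff:

```rust
use std::io::{self, Write};

/// The hard-reset sequence from the diff, piece by piece:
/// ESC[r   - reset the scroll region to the full screen
/// ESC[0m  - reset SGR style state
/// ESC[H   - move the cursor home
/// ESC[2J  - clear the visible screen
/// ESC[3J  - purge the scrollback buffer (xterm extension)
/// ESC[H   - home the cursor again
fn clear_sequence() -> &'static str {
    "\x1b[r\x1b[0m\x1b[H\x1b[2J\x1b[3J\x1b[H"
}

fn main() -> io::Result<()> {
    let mut out = io::stdout().lock();
    // Emitting the whole sequence in one write mirrors the single
    // `write!` in the diff, avoiding interleaving with other output.
    out.write_all(clear_sequence().as_bytes())?;
    out.flush()
}
```

Clearing the screen (`2J`) before purging scrollback (`3J`) matters on terminals that push the erased visible rows into scrollback, which is why the order changed.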
@@ -22,6 +22,13 @@ const COLLAB_PROMPT_PREVIEW_GRAPHEMES: usize = 160;
const COLLAB_AGENT_ERROR_PREVIEW_GRAPHEMES: usize = 160;
const COLLAB_AGENT_RESPONSE_PREVIEW_GRAPHEMES: usize = 240;
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct AgentPickerThreadEntry {
pub(crate) agent_nickname: Option<String>,
pub(crate) agent_role: Option<String>,
pub(crate) is_closed: bool,
}
#[derive(Clone, Copy)]
struct AgentLabel<'a> {
thread_id: Option<ThreadId>,
@@ -29,6 +36,44 @@ struct AgentLabel<'a> {
role: Option<&'a str>,
}
pub(crate) fn agent_picker_status_dot_spans(is_closed: bool) -> Vec<Span<'static>> {
let dot = if is_closed {
"".dark_gray()
} else {
"".green()
};
vec![dot, " ".into()]
}
pub(crate) fn format_agent_picker_item_name(
agent_nickname: Option<&str>,
agent_role: Option<&str>,
is_primary: bool,
) -> String {
if is_primary {
return "Main [default]".to_string();
}
let agent_nickname = agent_nickname
.map(str::trim)
.filter(|nickname| !nickname.is_empty());
let agent_role = agent_role.map(str::trim).filter(|role| !role.is_empty());
match (agent_nickname, agent_role) {
(Some(agent_nickname), Some(agent_role)) => format!("{agent_nickname} [{agent_role}]"),
(Some(agent_nickname), None) => agent_nickname.to_string(),
(None, Some(agent_role)) => format!("[{agent_role}]"),
(None, None) => "Agent".to_string(),
}
}
pub(crate) fn sort_agent_picker_threads(agent_threads: &mut [(ThreadId, AgentPickerThreadEntry)]) {
agent_threads.sort_by(|(left_id, left), (right_id, right)| {
left.is_closed
.cmp(&right.is_closed)
.then_with(|| left_id.to_string().cmp(&right_id.to_string()))
});
}
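The labeling rules above can be restated standalone; a sketch mirroring the same match arms (a hypothetical free function, not the crate-private one):

```rust
/// Sketch of the picker label rules: the primary agent is always
/// "Main [default]"; otherwise trimmed nickname/role are combined,
/// with "Agent" as the final fallback.
fn picker_name(nickname: Option<&str>, role: Option<&str>, is_primary: bool) -> String {
    if is_primary {
        return "Main [default]".to_string();
    }
    let nickname = nickname.map(str::trim).filter(|n| !n.is_empty());
    let role = role.map(str::trim).filter(|r| !r.is_empty());
    match (nickname, role) {
        (Some(n), Some(r)) => format!("{n} [{r}]"),
        (Some(n), None) => n.to_string(),
        (None, Some(r)) => format!("[{r}]"),
        (None, None) => "Agent".to_string(),
    }
}

fn main() {
    assert_eq!(picker_name(Some(" scout "), Some("reviewer"), false), "scout [reviewer]");
    assert_eq!(picker_name(Some("scout"), None, false), "scout");
    assert_eq!(picker_name(None, None, false), "Agent");
    // is_primary wins even when a role is present.
    assert_eq!(picker_name(None, Some("planner"), true), "Main [default]");
}
```

Note that the companion sort orders open agents first (`false < true` for `is_closed`), with the thread-id string as the tie-break.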
pub(crate) fn spawn_end(ev: CollabAgentSpawnEndEvent) -> PlainHistoryCell {
let CollabAgentSpawnEndEvent {
call_id: _,

@@ -68,7 +68,7 @@ impl SlashCommand {
SlashCommand::Review => "review my current changes and find issues",
SlashCommand::Rename => "rename the current thread",
SlashCommand::Resume => "resume a saved chat",
-SlashCommand::Clear => "clear the terminal screen and scrollback",
+SlashCommand::Clear => "clear the terminal and start a new chat",
SlashCommand::Fork => "fork the current chat",
// SlashCommand::Undo => "ask Codex to undo a turn",
SlashCommand::Quit | SlashCommand::Exit => "exit Codex",

@@ -7,5 +7,8 @@ license.workspace = true
[lints]
workspace = true
[dependencies]
regex-lite = { workspace = true }
[dev-dependencies]
pretty_assertions = { workspace = true }

@@ -62,11 +62,54 @@ pub fn sanitize_metric_tag_value(value: &str) -> String {
}
}
/// Find all UUIDs in a string.
#[allow(clippy::unwrap_used)]
pub fn find_uuids(s: &str) -> Vec<String> {
static RE: std::sync::OnceLock<regex_lite::Regex> = std::sync::OnceLock::new();
let re = RE.get_or_init(|| {
regex_lite::Regex::new(
r"[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}",
)
.unwrap() // Safe: the pattern is a fixed literal, validated by the tests below.
});
re.find_iter(s).map(|m| m.as_str().to_string()).collect()
}
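The regex encodes the 8-4-4-4-12 hex-group shape of a UUID; a dependency-free sketch of the same rule for a single candidate string — `is_uuid` is hypothetical and avoids pulling in `regex-lite`:

```rust
/// True when `s` is exactly five hyphen-separated hex groups of
/// lengths 8, 4, 4, 4, 12 - the shape the regex above matches.
/// Unlike `find_uuids`, this checks a whole candidate string rather
/// than scanning for embedded matches.
fn is_uuid(s: &str) -> bool {
    let lens = [8usize, 4, 4, 4, 12];
    let groups: Vec<&str> = s.split('-').collect();
    groups.len() == lens.len()
        && groups
            .iter()
            .zip(lens)
            .all(|(g, n)| g.len() == n && g.bytes().all(|b| b.is_ascii_hexdigit()))
}

fn main() {
    assert!(is_uuid("00112233-4455-6677-8899-aabbccddeeff"));
    assert!(!is_uuid("not-a-uuid"));
    // Trailing junk fails this whole-string check, though the regex
    // scanner above would still find the embedded 36-char match.
    assert!(!is_uuid("00112233-4455-6677-8899-aabbccddeeff-k"));
}
```

The unanchored regex in `find_uuids` deliberately matches embedded substrings, which is what the non-ASCII and trailing-junk tests below exercise.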
#[cfg(test)]
mod tests {
use super::find_uuids;
use super::sanitize_metric_tag_value;
use pretty_assertions::assert_eq;
#[test]
fn find_uuids_finds_multiple() {
let input =
"x 00112233-4455-6677-8899-aabbccddeeff-k y 12345678-90ab-cdef-0123-456789abcdef";
assert_eq!(
find_uuids(input),
vec![
"00112233-4455-6677-8899-aabbccddeeff".to_string(),
"12345678-90ab-cdef-0123-456789abcdef".to_string(),
]
);
}
#[test]
fn find_uuids_ignores_invalid() {
let input = "not-a-uuid-1234-5678-9abc-def0-123456789abc";
assert_eq!(find_uuids(input), Vec::<String>::new());
}
#[test]
fn find_uuids_handles_non_ascii_without_overlap() {
let input = "🙂 55e5d6f7-8a7f-4d2a-8d88-123456789012abc";
assert_eq!(
find_uuids(input),
vec!["55e5d6f7-8a7f-4d2a-8d88-123456789012".to_string()]
);
}
#[test]
fn sanitize_metric_tag_value_trims_and_fills_unspecified() {
let msg = "///";