Mirror of https://github.com/openai/codex.git, synced 2026-04-30 01:16:54 +00:00
Add Smart Approvals guardian review across core, app-server, and TUI (#13860)
## Summary
- add `approvals_reviewer = "user" | "guardian_subagent"` as the runtime
control for who reviews approval requests
- route Smart Approvals guardian review through core for command
execution, file changes, managed-network approvals, MCP approvals, and
delegated/subagent approval flows
- expose guardian review in app-server with temporary unstable
`item/autoApprovalReview/{started,completed}` notifications carrying
`targetItemId`, `review`, and `action`
- update the TUI so Smart Approvals can be enabled from `/experimental`,
aligned with the matching `/approvals` mode, and surfaced clearly while
reviews are pending or resolved
## Runtime model
This PR does not introduce a new `approval_policy`.
Instead:
- `approval_policy` still controls when approval is needed
- `approvals_reviewer` controls to whom reviewable approval requests are
routed:
- `user`
- `guardian_subagent`
`guardian_subagent` is a carefully prompted reviewer subagent that
gathers relevant context and applies a risk-based decision framework
before approving or denying the request.
The `smart_approvals` feature flag is a rollout/UI gate. Core runtime
behavior keys off `approvals_reviewer`.
When Smart Approvals is enabled from the TUI, it also switches the
current `/approvals` settings to the matching Smart Approvals mode so
users immediately see guardian review in the active thread:
- `approval_policy = on-request`
- `approvals_reviewer = guardian_subagent`
- `sandbox_mode = workspace-write`
Users can still change `/approvals` afterward.
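As a rough sketch, the session state after enabling Smart Approvals corresponds to config keys like these (the key names come from this PR; whether users would normally persist them in `config.toml` rather than toggle them via the TUI is an assumption here):

```toml
# Rollout/UI gate only; on its own it does not change the reviewer.
smart_approvals = true

# Runtime behavior the TUI switches the session to:
approval_policy = "on-request"
approvals_reviewer = "guardian_subagent"   # or "user" for manual review
sandbox_mode = "workspace-write"
```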
Config-load behavior stays intentionally narrow:
- plain `smart_approvals = true` in `config.toml` remains just the
rollout/UI gate and does not auto-set `approvals_reviewer`
- the deprecated `guardian_approval = true` alias migration does
backfill `approvals_reviewer = "guardian_subagent"` in the same scope
when that reviewer is not already configured there, so old configs
preserve their original guardian-enabled behavior
ARC remains a separate safety check. For MCP tool approvals, ARC
escalations now flow into the configured reviewer instead of always
bypassing guardian and forcing manual review.
## Config stability
The runtime reviewer override is stable, but the config-backed
app-server protocol shape is still settling.
- `thread/start`, `thread/resume`, and `turn/start` keep stable
`approvalsReviewer` overrides
- the config-backed `approvals_reviewer` exposure returned via
`config/read` (including profile-level config) is now marked
`[UNSTABLE]` / experimental in the app-server protocol until we are more
confident in that config surface
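For illustration, a per-turn reviewer override might be passed on `turn/start` roughly like this (only the `approvalsReviewer` field and method name come from this PR; the JSON-RPC envelope and the other params shown are assumptions):

```json
{
  "method": "turn/start",
  "params": {
    "threadId": "thr_123",
    "approvalsReviewer": "guardian_subagent"
  }
}
```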
## App-server surface
This PR intentionally keeps the guardian app-server shape narrow and
temporary.
It adds generic unstable lifecycle notifications:
- `item/autoApprovalReview/started`
- `item/autoApprovalReview/completed`
with payloads of the form:
- `{ threadId, turnId, targetItemId, review, action? }`
`review` is currently:
- `{ status, riskScore?, riskLevel?, rationale? }`
- where `status` is one of `inProgress`, `approved`, `denied`, or
`aborted`
`action` carries the guardian action summary payload from core when
available. This lets clients render temporary standalone pending-review
UI, including parallel reviews, even when the underlying tool item has
not been emitted yet.
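For illustration, a completed-review notification might look like the following sketch (the field names come from this PR; the concrete values and the notification envelope are assumptions):

```json
{
  "method": "item/autoApprovalReview/completed",
  "params": {
    "threadId": "thr_123",
    "turnId": "turn_456",
    "targetItemId": "item_789",
    "review": {
      "status": "approved",
      "riskScore": 2,
      "riskLevel": "low",
      "rationale": "Read-only command within the workspace."
    }
  }
}
```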
These notifications are explicitly documented as `[UNSTABLE]` and
expected to change soon.
This PR does **not** persist guardian review state onto `thread/read`
tool items. The intended follow-up is to attach guardian review state to
the reviewed tool item lifecycle instead, which would improve
consistency with manual approvals and allow thread history / reconnect
flows to replay guardian review state directly.
## TUI behavior
- `/experimental` exposes the rollout gate as `Smart Approvals`
- enabling it in the TUI enables the feature and switches the current
session to the matching Smart Approvals `/approvals` mode
- disabling it in the TUI clears the persisted `approvals_reviewer`
override when appropriate and returns the session to default manual
review when the effective reviewer changes
- `/approvals` still exposes the reviewer choice directly
- the TUI renders:
- pending guardian review state in the live status footer, including
parallel review aggregation
- resolved approval/denial state in history
## Scope notes
This PR includes the supporting core/runtime work needed to make Smart
Approvals usable end-to-end:
- shell / unified-exec / apply_patch / managed-network / MCP guardian
review
- delegated/subagent approval routing into guardian review
- guardian review risk metadata and action summaries for app-server/TUI
- config/profile/TUI handling for `smart_approvals`, `guardian_approval`
alias migration, and `approvals_reviewer`
- a small internal cleanup of delegated approval forwarding to dedupe
fallback paths and simplify guardian-vs-parent approval waiting (no
intended behavior change)
Out of scope for this PR:
- redesigning the existing manual approval protocol shapes
- persisting guardian review state onto app-server `ThreadItem`s
- delegated MCP elicitation auto-review (the current delegated MCP
guardian shim only covers the legacy `RequestUserInput` path)
---------
Co-authored-by: Codex <noreply@openai.com>
Committed by: GitHub
Parent: e3cbf913e8
Commit: bc24017d64
```diff
@@ -1,7 +1,6 @@
 use std::collections::HashMap;
 use std::ffi::OsStr;
 use std::fs;
-use std::sync::OnceLock;
 
 use anyhow::Context;
 use anyhow::Result;
@@ -33,7 +32,6 @@ use core_test_support::wait_for_event;
 use core_test_support::wait_for_event_match;
 use core_test_support::wait_for_event_with_timeout;
 use pretty_assertions::assert_eq;
-use regex_lite::Regex;
 use serde_json::Value;
 use serde_json::json;
 use tokio::time::Duration;
@@ -59,65 +57,49 @@ struct ParsedUnifiedExecOutput {
 
 #[allow(clippy::expect_used)]
 fn parse_unified_exec_output(raw: &str) -> Result<ParsedUnifiedExecOutput> {
-    static OUTPUT_REGEX: OnceLock<Regex> = OnceLock::new();
-    let regex = OUTPUT_REGEX.get_or_init(|| {
-        Regex::new(concat!(
-            r#"(?s)^(?:Total output lines: \d+\n\n)?"#,
-            r#"(?:Chunk ID: (?P<chunk_id>[^\n]+)\n)?"#,
-            r#"Wall time: (?P<wall_time>-?\d+(?:\.\d+)?) seconds\n"#,
-            r#"(?:Process exited with code (?P<exit_code>-?\d+)\n)?"#,
-            r#"(?:Process running with session ID (?P<process_id>-?\d+)\n)?"#,
-            r#"(?:Original token count: (?P<original_token_count>\d+)\n)?"#,
-            r#"Output:\n?(?P<output>.*)$"#,
-        ))
-        .expect("valid unified exec output regex")
-    });
-
-    let cleaned = raw.trim_matches('\r');
-    let captures = regex
-        .captures(cleaned)
+    let cleaned = raw.replace("\r\n", "\n");
+    let (metadata, output) = cleaned
+        .rsplit_once("\nOutput:")
         .ok_or_else(|| anyhow::anyhow!("missing Output section in unified exec output {raw}"))?;
+    let output = output.strip_prefix('\n').unwrap_or(output);
 
-    let chunk_id = captures
-        .name("chunk_id")
-        .map(|value| value.as_str().to_string());
+    let mut chunk_id = None;
+    let mut wall_time_seconds = None;
+    let mut process_id = None;
+    let mut exit_code = None;
+    let mut original_token_count = None;
 
-    let wall_time_seconds = captures
-        .name("wall_time")
-        .expect("wall_time group present")
-        .as_str()
-        .parse::<f64>()
-        .context("failed to parse wall time seconds")?;
+    for line in metadata.lines() {
+        if let Some(value) = line.strip_prefix("Chunk ID: ") {
+            chunk_id = Some(value.to_string());
+        } else if let Some(value) = line.strip_prefix("Wall time: ") {
+            let value = value.strip_suffix(" seconds").ok_or_else(|| {
+                anyhow::anyhow!("invalid wall time line in unified exec output: {line}")
+            })?;
+            wall_time_seconds = Some(
+                value
+                    .parse::<f64>()
+                    .context("failed to parse wall time seconds")?,
+            );
+        } else if let Some(value) = line.strip_prefix("Process exited with code ") {
+            exit_code = Some(
+                value
+                    .parse::<i32>()
+                    .context("failed to parse exit code from unified exec output")?,
+            );
+        } else if let Some(value) = line.strip_prefix("Process running with session ID ") {
+            process_id = Some(value.to_string());
+        } else if let Some(value) = line.strip_prefix("Original token count: ") {
+            original_token_count = Some(
+                value
+                    .parse::<usize>()
+                    .context("failed to parse original token count from unified exec output")?,
+            );
+        }
+    }
 
-    let exit_code = captures
-        .name("exit_code")
-        .map(|value| {
-            value
-                .as_str()
-                .parse::<i32>()
-                .context("failed to parse exit code from unified exec output")
-        })
-        .transpose()?;
-
-    let process_id = captures
-        .name("process_id")
-        .map(|value| value.as_str().to_string());
-
-    let original_token_count = captures
-        .name("original_token_count")
-        .map(|value| {
-            value
-                .as_str()
-                .parse::<usize>()
-                .context("failed to parse original token count from unified exec output")
-        })
-        .transpose()?;
-
-    let output = captures
-        .name("output")
-        .expect("output group present")
-        .as_str()
-        .to_string();
+    let wall_time_seconds = wall_time_seconds
+        .ok_or_else(|| anyhow::anyhow!("missing wall time in unified exec output {raw}"))?;
 
     Ok(ParsedUnifiedExecOutput {
         chunk_id,
@@ -125,7 +107,7 @@ fn parse_unified_exec_output(raw: &str) -> Result<ParsedUnifiedExecOutput> {
         process_id,
         exit_code,
         original_token_count,
-        output,
+        output: output.to_string(),
     })
 }
 
@@ -2567,8 +2549,22 @@ PY
     let large_output = outputs.get(call_id).expect("missing large output summary");
 
     let output_text = large_output.output.replace("\r\n", "\n");
-    let truncated_pattern = r"(?s)^Total output lines: \d+\n\n(token token \n){5,}.*…\d+ tokens truncated….*(token token \n){5,}$";
-    assert_regex_match(truncated_pattern, &output_text);
+    assert!(
+        output_text.starts_with("Total output lines: "),
+        "expected large output summary header, got {output_text:?}"
+    );
+    assert!(
+        output_text.contains("…") && output_text.contains("tokens truncated"),
+        "expected truncation marker in large output summary, got {output_text:?}"
+    );
+    assert!(
+        output_text.contains("token token \ntoken token \ntoken token \n"),
+        "expected preserved output prefix in large output summary, got {output_text:?}"
+    );
+    assert!(
+        output_text.ends_with("token token ") || output_text.ends_with("token token \n"),
+        "expected preserved output suffix in large output summary, got {output_text:?}"
+    );
 
     let original_tokens = large_output
         .original_token_count
@@ -2652,7 +2648,7 @@ async fn unified_exec_runs_under_sandbox() -> Result<()> {
     let outputs = collect_tool_outputs(&bodies)?;
     let output = outputs.get(call_id).expect("missing output");
 
-    assert_regex_match("hello[\r\n]+", &output.output);
+    assert_eq!(output.output.trim_end_matches(['\r', '\n']), "hello");
 
     Ok(())
 }
```
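The diff above replaces a lazily compiled regex with plain line scanning via `strip_prefix`. A minimal standalone sketch of that approach (simplified to two fields; the `parse_metadata` name and the reduced error handling are illustrative, not the real helper):

```rust
// Sketch of the line-based parsing the diff introduces: split the
// metadata header from the payload at the last "Output:" marker, then
// match each metadata line with strip_prefix instead of a regex.
fn parse_metadata(raw: &str) -> Option<(f64, String)> {
    let cleaned = raw.replace("\r\n", "\n");
    let (metadata, output) = cleaned.rsplit_once("\nOutput:")?;
    // "Output:" may or may not be followed by a newline before the payload.
    let output = output.strip_prefix('\n').unwrap_or(output);

    let mut wall_time_seconds = None;
    for line in metadata.lines() {
        if let Some(value) = line.strip_prefix("Wall time: ") {
            let value = value.strip_suffix(" seconds")?;
            wall_time_seconds = value.parse::<f64>().ok();
        }
    }
    // Wall time is the one mandatory field in this sketch.
    Some((wall_time_seconds?, output.to_string()))
}

fn main() {
    let raw = "Wall time: 0.25 seconds\nOutput:\nhello\n";
    let (wall_time, output) = parse_metadata(raw).expect("parse");
    assert!((wall_time - 0.25).abs() < f64::EPSILON);
    assert_eq!(output, "hello\n");
}
```

Compared with the removed regex, this keeps each metadata field's parsing local to one branch, which is what lets the real helper drop the `regex_lite` and `OnceLock` dependencies entirely.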
|
||||
|
||||