feat: execpolicy v2 (#6467)

## Summary
- Introduces the `codex-execpolicy2` crate.
- This PR covers only the prefix-rule subset of the planned execpolicy
v2 language; a richer language will follow.

## Policy
- Policy language centers on `prefix_rule(pattern=[...], decision?,
match?, not_match?)`, where `pattern` is an ordered list of tokens; any
element may be a list to denote alternatives. `decision` defaults to
`allow`; valid values are `allow`, `prompt`, and `forbidden`. `match` /
`not_match` hold example commands that are tokenized and validated at
load time (think of these as unit tests).

## Policy shapes
- Prefix rules use Starlark syntax:
```starlark
prefix_rule(
    pattern = ["cmd", ["alt1", "alt2"]], # ordered tokens; list entries denote alternatives
    decision = "prompt",                # allow | prompt | forbidden; defaults to allow
    match = [["cmd", "alt1"]],          # examples that must match this rule (enforced at compile time)
    not_match = [["cmd", "oops"]],      # examples that must not match this rule (enforced at compile time)
)
```

## Response shapes
- Match:

```json
{
  "match": {
    "decision": "allow|prompt|forbidden",
    "matchedRules": [
      {
        "prefixRuleMatch": {
          "matchedPrefix": ["<token>", "..."],
          "decision": "allow|prompt|forbidden"
        }
      }
    ]
  }
}
```

- No match:

```json
"noMatch"
```

- `matchedRules` lists every rule whose prefix matched the command;
`matchedPrefix` is the exact prefix that matched.
- The effective `decision` is the strictest severity across all matches
(`forbidden` > `prompt` > `allow`).
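
The "strictest severity wins" rule above can be modeled with a standalone sketch that mirrors the crate's `Decision` enum: since derived `Ord` follows declaration order, the effective decision is simply the maximum across all matched rules (names here are illustrative, not the crate's API):

```rust
// Standalone sketch of the severity ordering; mirrors the crate's
// `Decision` enum, where derived `Ord` follows declaration order.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Decision {
    Allow,     // least strict
    Prompt,
    Forbidden, // most strict
}

// The effective decision is the strictest (maximum) across all matches;
// an empty slice means no rule matched.
fn effective_decision(matches: &[Decision]) -> Option<Decision> {
    matches.iter().copied().max()
}

fn main() {
    assert_eq!(
        effective_decision(&[Decision::Allow, Decision::Prompt]),
        Some(Decision::Prompt)
    );
    assert_eq!(
        effective_decision(&[Decision::Prompt, Decision::Forbidden]),
        Some(Decision::Forbidden)
    );
    assert_eq!(effective_decision(&[]), None); // no match
    println!("ok");
}
```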

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
zhao-oai
2025-11-17 10:15:45 -08:00
committed by GitHub
parent 2c665fb1dd
commit a941ae7632
13 changed files with 1006 additions and 0 deletions

codex-rs/Cargo.lock generated

@@ -1202,6 +1202,21 @@ dependencies = [
"tempfile",
]
[[package]]
name = "codex-execpolicy2"
version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"multimap",
"pretty_assertions",
"serde",
"serde_json",
"shlex",
"starlark",
"thiserror 2.0.17",
]
[[package]]
name = "codex-feedback"
version = "0.0.0"


@@ -17,6 +17,7 @@ members = [
"core",
"exec",
"execpolicy",
"execpolicy2",
"keyring-store",
"file-search",
"linux-sandbox",


@@ -0,0 +1,29 @@
[package]
name = "codex-execpolicy2"
version = { workspace = true }
edition = "2024"
description = "Codex exec policy v2: prefix-based Starlark rules for command decisions."
[lib]
name = "codex_execpolicy2"
path = "src/lib.rs"
[[bin]]
name = "codex-execpolicy2"
path = "src/main.rs"
[lints]
workspace = true
[dependencies]
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
multimap = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
shlex = { workspace = true }
starlark = { workspace = true }
thiserror = { workspace = true }
[dev-dependencies]
pretty_assertions = { workspace = true }


@@ -0,0 +1,54 @@
# codex-execpolicy2
## Overview
- Policy engine and CLI built around `prefix_rule(pattern=[...], decision?, match?, not_match?)`.
- This release covers only the prefix-rule subset of the planned execpolicy v2 language; a richer language will follow.
- Tokens are matched in order; any `pattern` element may be a list to denote alternatives. `decision` defaults to `allow`; valid values: `allow`, `prompt`, `forbidden`.
- `match` / `not_match` supply example invocations that are validated at load time (think of them as unit tests); examples can be token arrays or strings (strings are tokenized with `shlex`).
- The CLI always prints the JSON serialization of the evaluation result (whether a match or not).
## Policy shapes
- Prefix rules use Starlark syntax:
```starlark
prefix_rule(
    pattern = ["cmd", ["alt1", "alt2"]],       # ordered tokens; list entries denote alternatives
    decision = "prompt",                       # allow | prompt | forbidden; defaults to allow
    match = [["cmd", "alt1"], "cmd alt2"],     # examples that must match this rule
    not_match = [["cmd", "oops"], "cmd alt3"], # examples that must not match this rule
)
```
## Response shapes
- Match:
```json
{
  "match": {
    "decision": "allow|prompt|forbidden",
    "matchedRules": [
      {
        "prefixRuleMatch": {
          "matchedPrefix": ["<token>", "..."],
          "decision": "allow|prompt|forbidden"
        }
      }
    ]
  }
}
```
- No match:
```json
"noMatch"
```
- `matchedRules` lists every rule whose prefix matched the command; `matchedPrefix` is the exact prefix that matched.
- The effective `decision` is the strictest severity across all matches (`forbidden` > `prompt` > `allow`).
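The prefix matching described above (ordered tokens, with per-position alternatives) can be sketched standalone under the semantics this README states; the types here are illustrative, not the crate's actual API:

```rust
// Standalone sketch of prefix matching with per-position alternatives,
// following the semantics described above (illustrative types only).
#[derive(Debug)]
enum PatternToken {
    Single(String),
    Alts(Vec<String>),
}

impl PatternToken {
    fn matches(&self, token: &str) -> bool {
        match self {
            Self::Single(s) => s == token,
            Self::Alts(alts) => alts.iter().any(|a| a == token),
        }
    }
}

/// Returns the matched prefix if `cmd` starts with the pattern.
fn matches_prefix(pattern: &[PatternToken], cmd: &[&str]) -> Option<Vec<String>> {
    if cmd.len() < pattern.len() {
        return None; // command is shorter than the pattern
    }
    for (p, t) in pattern.iter().zip(cmd) {
        if !p.matches(t) {
            return None;
        }
    }
    Some(cmd[..pattern.len()].iter().map(|s| s.to_string()).collect())
}

fn main() {
    let pattern = vec![
        PatternToken::Single("cmd".into()),
        PatternToken::Alts(vec!["alt1".into(), "alt2".into()]),
    ];
    // Trailing arguments beyond the pattern are allowed: it is a prefix match.
    assert_eq!(
        matches_prefix(&pattern, &["cmd", "alt2", "extra"]),
        Some(vec!["cmd".to_string(), "alt2".to_string()])
    );
    assert_eq!(matches_prefix(&pattern, &["cmd", "oops"]), None);
    println!("ok");
}
```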
## CLI
- Provide a policy file (for example `src/default.codexpolicy`) to check a command:
```bash
cargo run -p codex-execpolicy2 -- check --policy path/to/policy.codexpolicy git status
```
- Example outcomes:
- Match: `{"match": { ... "decision": "allow" ... }}`
- No match: `"noMatch"`


@@ -0,0 +1,77 @@
# Example policy to illustrate syntax; not comprehensive and not recommended for actual use.
prefix_rule(
    pattern = ["git", "reset", "--hard"],
    decision = "forbidden",
    match = [
        ["git", "reset", "--hard"],
    ],
    not_match = [
        ["git", "reset", "--keep"],
        "git reset --merge",
    ],
)
prefix_rule(
    pattern = ["ls"],
    match = [
        ["ls"],
        ["ls", "-l"],
        ["ls", "-a", "."],
    ],
)
prefix_rule(
    pattern = ["cat"],
    match = [
        ["cat", "file.txt"],
        ["cat", "-n", "README.md"],
    ],
)
prefix_rule(
    pattern = ["cp"],
    decision = "prompt",
    match = [
        ["cp", "foo", "bar"],
        "cp -r src dest",
    ],
)
prefix_rule(
    pattern = ["head"],
    match = [
        ["head", "README.md"],
        ["head", "-n", "5", "CHANGELOG.md"],
    ],
    not_match = [
        ["hea", "-n", "1,5p", "CHANGELOG.md"],
    ],
)
prefix_rule(
    pattern = ["printenv"],
    match = [
        ["printenv"],
        ["printenv", "PATH"],
    ],
    not_match = [
        ["print", "-0"],
    ],
)
prefix_rule(
    pattern = ["pwd"],
    match = [
        ["pwd"],
    ],
)
prefix_rule(
    pattern = ["which"],
    match = [
        ["which", "python3"],
        ["which", "-a", "python3"],
    ],
)


@@ -0,0 +1,27 @@
use serde::Deserialize;
use serde::Serialize;
use crate::error::Error;
use crate::error::Result;
#[derive(Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum Decision {
/// Command may run without further approval.
Allow,
/// Request explicit user approval; rejected outright when running with `approval_policy="never"`.
Prompt,
/// Command is blocked without further consideration.
Forbidden,
}
impl Decision {
pub fn parse(raw: &str) -> Result<Self> {
match raw {
"allow" => Ok(Self::Allow),
"prompt" => Ok(Self::Prompt),
"forbidden" => Ok(Self::Forbidden),
other => Err(Error::InvalidDecision(other.to_string())),
}
}
}


@@ -0,0 +1,26 @@
use starlark::Error as StarlarkError;
use thiserror::Error;
pub type Result<T> = std::result::Result<T, Error>;
#[derive(Debug, Error)]
pub enum Error {
#[error("invalid decision: {0}")]
InvalidDecision(String),
#[error("invalid pattern element: {0}")]
InvalidPattern(String),
#[error("invalid example: {0}")]
InvalidExample(String),
#[error(
"expected every example to match at least one rule. rules: {rules:?}; unmatched examples: \
{examples:?}"
)]
ExampleDidNotMatch {
rules: Vec<String>,
examples: Vec<String>,
},
#[error("expected example to not match rule `{rule}`: {example}")]
ExampleDidMatch { rule: String, example: String },
#[error("starlark error: {0}")]
Starlark(StarlarkError),
}


@@ -0,0 +1,15 @@
pub mod decision;
pub mod error;
pub mod parser;
pub mod policy;
pub mod rule;
pub use decision::Decision;
pub use error::Error;
pub use error::Result;
pub use parser::PolicyParser;
pub use policy::Evaluation;
pub use policy::Policy;
pub use rule::Rule;
pub use rule::RuleMatch;
pub use rule::RuleRef;


@@ -0,0 +1,54 @@
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use anyhow::Context;
use anyhow::Result;
use clap::Parser;
use codex_execpolicy2::PolicyParser;
/// CLI for evaluating exec policies
#[derive(Parser)]
#[command(name = "codex-execpolicy2")]
enum Cli {
/// Evaluate a command against a policy.
Check {
#[arg(short, long, value_name = "PATH")]
policy: PathBuf,
/// Command tokens to check.
#[arg(
value_name = "COMMAND",
required = true,
trailing_var_arg = true,
allow_hyphen_values = true
)]
command: Vec<String>,
},
}
fn main() -> Result<()> {
let cli = Cli::parse();
match cli {
Cli::Check { policy, command } => cmd_check(policy, command),
}
}
fn cmd_check(policy_path: PathBuf, args: Vec<String>) -> Result<()> {
let policy = load_policy(&policy_path)?;
let eval = policy.check(&args);
let json = serde_json::to_string_pretty(&eval)?;
println!("{json}");
Ok(())
}
fn load_policy(policy_path: &Path) -> Result<codex_execpolicy2::Policy> {
let policy_file_contents = fs::read_to_string(policy_path)
.with_context(|| format!("failed to read policy at {}", policy_path.display()))?;
let policy_identifier = policy_path.to_string_lossy();
Ok(PolicyParser::parse(
policy_identifier.as_ref(),
&policy_file_contents,
)?)
}


@@ -0,0 +1,247 @@
use multimap::MultiMap;
use shlex;
use starlark::any::ProvidesStaticType;
use starlark::environment::GlobalsBuilder;
use starlark::environment::Module;
use starlark::eval::Evaluator;
use starlark::starlark_module;
use starlark::syntax::AstModule;
use starlark::syntax::Dialect;
use starlark::values::Value;
use starlark::values::list::ListRef;
use starlark::values::list::UnpackList;
use starlark::values::none::NoneType;
use std::cell::RefCell;
use std::cell::RefMut;
use std::sync::Arc;
use crate::decision::Decision;
use crate::error::Error;
use crate::error::Result;
use crate::rule::PatternToken;
use crate::rule::PrefixPattern;
use crate::rule::PrefixRule;
use crate::rule::RuleRef;
use crate::rule::validate_match_examples;
use crate::rule::validate_not_match_examples;
// TODO: support parsing multiple policies
pub struct PolicyParser;
impl PolicyParser {
/// Parses a policy, tagging parser errors with `policy_identifier` so failures include the
/// identifier alongside line numbers.
pub fn parse(
policy_identifier: &str,
policy_file_contents: &str,
) -> Result<crate::policy::Policy> {
let mut dialect = Dialect::Extended.clone();
dialect.enable_f_strings = true;
let ast = AstModule::parse(
policy_identifier,
policy_file_contents.to_string(),
&dialect,
)
.map_err(Error::Starlark)?;
let globals = GlobalsBuilder::standard().with(policy_builtins).build();
let module = Module::new();
let builder = RefCell::new(PolicyBuilder::new());
{
let mut eval = Evaluator::new(&module);
eval.extra = Some(&builder);
eval.eval_module(ast, &globals).map_err(Error::Starlark)?;
}
Ok(builder.into_inner().build())
}
}
#[derive(Debug, ProvidesStaticType)]
struct PolicyBuilder {
rules_by_program: MultiMap<String, RuleRef>,
}
impl PolicyBuilder {
fn new() -> Self {
Self {
rules_by_program: MultiMap::new(),
}
}
fn add_rule(&mut self, rule: RuleRef) {
self.rules_by_program
.insert(rule.program().to_string(), rule);
}
fn build(self) -> crate::policy::Policy {
crate::policy::Policy::new(self.rules_by_program)
}
}
fn parse_pattern<'v>(pattern: UnpackList<Value<'v>>) -> Result<Vec<PatternToken>> {
let tokens: Vec<PatternToken> = pattern
.items
.into_iter()
.map(parse_pattern_token)
.collect::<Result<_>>()?;
if tokens.is_empty() {
Err(Error::InvalidPattern("pattern cannot be empty".to_string()))
} else {
Ok(tokens)
}
}
fn parse_pattern_token<'v>(value: Value<'v>) -> Result<PatternToken> {
if let Some(s) = value.unpack_str() {
Ok(PatternToken::Single(s.to_string()))
} else if let Some(list) = ListRef::from_value(value) {
let tokens: Vec<String> = list
.content()
.iter()
.map(|value| {
value
.unpack_str()
.ok_or_else(|| {
Error::InvalidPattern(format!(
"pattern alternative must be a string (got {})",
value.get_type()
))
})
.map(str::to_string)
})
.collect::<Result<_>>()?;
match tokens.as_slice() {
[] => Err(Error::InvalidPattern(
"pattern alternatives cannot be empty".to_string(),
)),
[single] => Ok(PatternToken::Single(single.clone())),
_ => Ok(PatternToken::Alts(tokens)),
}
} else {
Err(Error::InvalidPattern(format!(
"pattern element must be a string or list of strings (got {})",
value.get_type()
)))
}
}
fn parse_examples<'v>(examples: UnpackList<Value<'v>>) -> Result<Vec<Vec<String>>> {
examples.items.into_iter().map(parse_example).collect()
}
fn parse_example<'v>(value: Value<'v>) -> Result<Vec<String>> {
if let Some(raw) = value.unpack_str() {
parse_string_example(raw)
} else if let Some(list) = ListRef::from_value(value) {
parse_list_example(list)
} else {
Err(Error::InvalidExample(format!(
"example must be a string or list of strings (got {})",
value.get_type()
)))
}
}
fn parse_string_example(raw: &str) -> Result<Vec<String>> {
let tokens = shlex::split(raw).ok_or_else(|| {
Error::InvalidExample("example string has invalid shell syntax".to_string())
})?;
if tokens.is_empty() {
Err(Error::InvalidExample(
"example cannot be an empty string".to_string(),
))
} else {
Ok(tokens)
}
}
fn parse_list_example(list: &ListRef) -> Result<Vec<String>> {
let tokens: Vec<String> = list
.content()
.iter()
.map(|value| {
value
.unpack_str()
.ok_or_else(|| {
Error::InvalidExample(format!(
"example tokens must be strings (got {})",
value.get_type()
))
})
.map(str::to_string)
})
.collect::<Result<_>>()?;
if tokens.is_empty() {
Err(Error::InvalidExample(
"example cannot be an empty list".to_string(),
))
} else {
Ok(tokens)
}
}
fn policy_builder<'v, 'a>(eval: &Evaluator<'v, 'a, '_>) -> RefMut<'a, PolicyBuilder> {
#[expect(clippy::expect_used)]
eval.extra
.as_ref()
.expect("policy_builder requires Evaluator.extra to be populated")
.downcast_ref::<RefCell<PolicyBuilder>>()
.expect("Evaluator.extra must contain a PolicyBuilder")
.borrow_mut()
}
#[starlark_module]
fn policy_builtins(builder: &mut GlobalsBuilder) {
fn prefix_rule<'v>(
pattern: UnpackList<Value<'v>>,
decision: Option<&'v str>,
r#match: Option<UnpackList<Value<'v>>>,
not_match: Option<UnpackList<Value<'v>>>,
eval: &mut Evaluator<'v, '_, '_>,
) -> anyhow::Result<NoneType> {
let decision = match decision {
Some(raw) => Decision::parse(raw)?,
None => Decision::Allow,
};
let pattern_tokens = parse_pattern(pattern)?;
let matches: Vec<Vec<String>> =
r#match.map(parse_examples).transpose()?.unwrap_or_default();
let not_matches: Vec<Vec<String>> = not_match
.map(parse_examples)
.transpose()?
.unwrap_or_default();
let mut builder = policy_builder(eval);
let (first_token, remaining_tokens) = pattern_tokens
.split_first()
.ok_or_else(|| Error::InvalidPattern("pattern cannot be empty".to_string()))?;
let rest: Arc<[PatternToken]> = remaining_tokens.to_vec().into();
let rules: Vec<RuleRef> = first_token
.alternatives()
.iter()
.map(|head| {
Arc::new(PrefixRule {
pattern: PrefixPattern {
first: Arc::from(head.as_str()),
rest: rest.clone(),
},
decision,
}) as RuleRef
})
.collect();
validate_not_match_examples(&rules, &not_matches)?;
validate_match_examples(&rules, &matches)?;
rules.into_iter().for_each(|rule| builder.add_rule(rule));
Ok(NoneType)
}
}


@@ -0,0 +1,58 @@
use crate::decision::Decision;
use crate::rule::RuleMatch;
use crate::rule::RuleRef;
use multimap::MultiMap;
use serde::Deserialize;
use serde::Serialize;
#[derive(Clone, Debug)]
pub struct Policy {
rules_by_program: MultiMap<String, RuleRef>,
}
impl Policy {
pub fn new(rules_by_program: MultiMap<String, RuleRef>) -> Self {
Self { rules_by_program }
}
pub fn rules(&self) -> &MultiMap<String, RuleRef> {
&self.rules_by_program
}
pub fn check(&self, cmd: &[String]) -> Evaluation {
let rules = match cmd.first() {
Some(first) => match self.rules_by_program.get_vec(first) {
Some(rules) => rules,
None => return Evaluation::NoMatch,
},
None => return Evaluation::NoMatch,
};
let matched_rules: Vec<RuleMatch> =
rules.iter().filter_map(|rule| rule.matches(cmd)).collect();
match matched_rules.iter().map(RuleMatch::decision).max() {
Some(decision) => Evaluation::Match {
decision,
matched_rules,
},
None => Evaluation::NoMatch,
}
}
}
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum Evaluation {
NoMatch,
Match {
decision: Decision,
#[serde(rename = "matchedRules")]
matched_rules: Vec<RuleMatch>,
},
}
impl Evaluation {
pub fn is_match(&self) -> bool {
matches!(self, Self::Match { .. })
}
}


@@ -0,0 +1,147 @@
use crate::decision::Decision;
use crate::error::Error;
use crate::error::Result;
use serde::Deserialize;
use serde::Serialize;
use shlex::try_join;
use std::any::Any;
use std::fmt::Debug;
use std::sync::Arc;
/// Matches a single command token, either a fixed string or one of several allowed alternatives.
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum PatternToken {
Single(String),
Alts(Vec<String>),
}
impl PatternToken {
fn matches(&self, token: &str) -> bool {
match self {
Self::Single(expected) => expected == token,
Self::Alts(alternatives) => alternatives.iter().any(|alt| alt == token),
}
}
pub fn alternatives(&self) -> &[String] {
match self {
Self::Single(expected) => std::slice::from_ref(expected),
Self::Alts(alternatives) => alternatives,
}
}
}
/// Prefix matcher for commands, with support for alternative tokens at each position.
/// The first token is fixed because rules are keyed by their first token in the policy.
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct PrefixPattern {
pub first: Arc<str>,
pub rest: Arc<[PatternToken]>,
}
impl PrefixPattern {
pub fn matches_prefix(&self, cmd: &[String]) -> Option<Vec<String>> {
let pattern_length = self.rest.len() + 1;
if cmd.len() < pattern_length || cmd[0] != self.first.as_ref() {
return None;
}
for (pattern_token, cmd_token) in self.rest.iter().zip(&cmd[1..pattern_length]) {
if !pattern_token.matches(cmd_token) {
return None;
}
}
Some(cmd[..pattern_length].to_vec())
}
}
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum RuleMatch {
PrefixRuleMatch {
#[serde(rename = "matchedPrefix")]
matched_prefix: Vec<String>,
decision: Decision,
},
}
impl RuleMatch {
pub fn decision(&self) -> Decision {
match self {
Self::PrefixRuleMatch { decision, .. } => *decision,
}
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct PrefixRule {
pub pattern: PrefixPattern,
pub decision: Decision,
}
pub trait Rule: Any + Debug + Send + Sync {
fn program(&self) -> &str;
fn matches(&self, cmd: &[String]) -> Option<RuleMatch>;
}
pub type RuleRef = Arc<dyn Rule>;
impl Rule for PrefixRule {
fn program(&self) -> &str {
self.pattern.first.as_ref()
}
fn matches(&self, cmd: &[String]) -> Option<RuleMatch> {
self.pattern
.matches_prefix(cmd)
.map(|matched_prefix| RuleMatch::PrefixRuleMatch {
matched_prefix,
decision: self.decision,
})
}
}
/// Count how many rules match each provided example and error if any example is unmatched.
pub(crate) fn validate_match_examples(rules: &[RuleRef], matches: &[Vec<String>]) -> Result<()> {
let mut unmatched_examples = Vec::new();
for example in matches {
if rules.iter().any(|rule| rule.matches(example).is_some()) {
continue;
}
unmatched_examples.push(
try_join(example.iter().map(String::as_str))
.unwrap_or_else(|_| "unable to render example".to_string()),
);
}
if unmatched_examples.is_empty() {
Ok(())
} else {
Err(Error::ExampleDidNotMatch {
rules: rules.iter().map(|rule| format!("{rule:?}")).collect(),
examples: unmatched_examples,
})
}
}
/// Ensure that no rule matches any provided negative example.
pub(crate) fn validate_not_match_examples(
rules: &[RuleRef],
not_matches: &[Vec<String>],
) -> Result<()> {
for example in not_matches {
if let Some(rule) = rules.iter().find(|rule| rule.matches(example).is_some()) {
return Err(Error::ExampleDidMatch {
rule: format!("{rule:?}"),
example: try_join(example.iter().map(String::as_str))
.unwrap_or_else(|_| "unable to render example".to_string()),
});
}
}
Ok(())
}


@@ -0,0 +1,256 @@
use std::any::Any;
use std::sync::Arc;
use codex_execpolicy2::Decision;
use codex_execpolicy2::Evaluation;
use codex_execpolicy2::PolicyParser;
use codex_execpolicy2::RuleMatch;
use codex_execpolicy2::RuleRef;
use codex_execpolicy2::rule::PatternToken;
use codex_execpolicy2::rule::PrefixPattern;
use codex_execpolicy2::rule::PrefixRule;
use pretty_assertions::assert_eq;
fn tokens(cmd: &[&str]) -> Vec<String> {
cmd.iter().map(std::string::ToString::to_string).collect()
}
#[derive(Clone, Debug, Eq, PartialEq)]
enum RuleSnapshot {
Prefix(PrefixRule),
}
fn rule_snapshots(rules: &[RuleRef]) -> Vec<RuleSnapshot> {
rules
.iter()
.map(|rule| {
let rule_any = rule.as_ref() as &dyn Any;
if let Some(prefix_rule) = rule_any.downcast_ref::<PrefixRule>() {
RuleSnapshot::Prefix(prefix_rule.clone())
} else {
panic!("unexpected rule type in RuleRef: {rule:?}");
}
})
.collect()
}
#[test]
fn basic_match() {
let policy_src = r#"
prefix_rule(
pattern = ["git", "status"],
)
"#;
let policy = PolicyParser::parse("test.codexpolicy", policy_src).expect("parse policy");
let cmd = tokens(&["git", "status"]);
let evaluation = policy.check(&cmd);
assert_eq!(
Evaluation::Match {
decision: Decision::Allow,
matched_rules: vec![RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["git", "status"]),
decision: Decision::Allow,
}],
},
evaluation
);
}
#[test]
fn only_first_token_alias_expands_to_multiple_rules() {
let policy_src = r#"
prefix_rule(
pattern = [["bash", "sh"], ["-c", "-l"]],
)
"#;
let policy = PolicyParser::parse("test.codexpolicy", policy_src).expect("parse policy");
let bash_rules = rule_snapshots(policy.rules().get_vec("bash").expect("bash rules"));
let sh_rules = rule_snapshots(policy.rules().get_vec("sh").expect("sh rules"));
assert_eq!(
vec![RuleSnapshot::Prefix(PrefixRule {
pattern: PrefixPattern {
first: Arc::from("bash"),
rest: vec![PatternToken::Alts(vec!["-c".to_string(), "-l".to_string()])].into(),
},
decision: Decision::Allow,
})],
bash_rules
);
assert_eq!(
vec![RuleSnapshot::Prefix(PrefixRule {
pattern: PrefixPattern {
first: Arc::from("sh"),
rest: vec![PatternToken::Alts(vec!["-c".to_string(), "-l".to_string()])].into(),
},
decision: Decision::Allow,
})],
sh_rules
);
let bash_eval = policy.check(&tokens(&["bash", "-c", "echo", "hi"]));
assert_eq!(
Evaluation::Match {
decision: Decision::Allow,
matched_rules: vec![RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["bash", "-c"]),
decision: Decision::Allow,
}],
},
bash_eval
);
let sh_eval = policy.check(&tokens(&["sh", "-l", "echo", "hi"]));
assert_eq!(
Evaluation::Match {
decision: Decision::Allow,
matched_rules: vec![RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["sh", "-l"]),
decision: Decision::Allow,
}],
},
sh_eval
);
}
#[test]
fn tail_aliases_are_not_cartesian_expanded() {
let policy_src = r#"
prefix_rule(
pattern = ["npm", ["i", "install"], ["--legacy-peer-deps", "--no-save"]],
)
"#;
let policy = PolicyParser::parse("test.codexpolicy", policy_src).expect("parse policy");
let rules = rule_snapshots(policy.rules().get_vec("npm").expect("npm rules"));
assert_eq!(
vec![RuleSnapshot::Prefix(PrefixRule {
pattern: PrefixPattern {
first: Arc::from("npm"),
rest: vec![
PatternToken::Alts(vec!["i".to_string(), "install".to_string()]),
PatternToken::Alts(vec![
"--legacy-peer-deps".to_string(),
"--no-save".to_string(),
]),
]
.into(),
},
decision: Decision::Allow,
})],
rules
);
let npm_i = policy.check(&tokens(&["npm", "i", "--legacy-peer-deps"]));
assert_eq!(
Evaluation::Match {
decision: Decision::Allow,
matched_rules: vec![RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["npm", "i", "--legacy-peer-deps"]),
decision: Decision::Allow,
}],
},
npm_i
);
let npm_install = policy.check(&tokens(&["npm", "install", "--no-save", "leftpad"]));
assert_eq!(
Evaluation::Match {
decision: Decision::Allow,
matched_rules: vec![RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["npm", "install", "--no-save"]),
decision: Decision::Allow,
}],
},
npm_install
);
}
#[test]
fn match_and_not_match_examples_are_enforced() {
let policy_src = r#"
prefix_rule(
pattern = ["git", "status"],
match = [["git", "status"], "git status"],
not_match = [
["git", "--config", "color.status=always", "status"],
"git --config color.status=always status",
],
)
"#;
let policy = PolicyParser::parse("test.codexpolicy", policy_src).expect("parse policy");
let match_eval = policy.check(&tokens(&["git", "status"]));
assert_eq!(
Evaluation::Match {
decision: Decision::Allow,
matched_rules: vec![RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["git", "status"]),
decision: Decision::Allow,
}],
},
match_eval
);
let no_match_eval = policy.check(&tokens(&[
"git",
"--config",
"color.status=always",
"status",
]));
assert_eq!(Evaluation::NoMatch, no_match_eval);
}
#[test]
fn strictest_decision_wins_across_matches() {
let policy_src = r#"
prefix_rule(
pattern = ["git", "status"],
decision = "allow",
)
prefix_rule(
pattern = ["git"],
decision = "prompt",
)
prefix_rule(
pattern = ["git", "commit"],
decision = "forbidden",
)
"#;
let policy = PolicyParser::parse("test.codexpolicy", policy_src).expect("parse policy");
let status = policy.check(&tokens(&["git", "status"]));
assert_eq!(
Evaluation::Match {
decision: Decision::Prompt,
matched_rules: vec![
RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["git", "status"]),
decision: Decision::Allow,
},
RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["git"]),
decision: Decision::Prompt,
},
],
},
status
);
let commit = policy.check(&tokens(&["git", "commit", "-m", "hi"]));
assert_eq!(
Evaluation::Match {
decision: Decision::Forbidden,
matched_rules: vec![
RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["git"]),
decision: Decision::Prompt,
},
RuleMatch::PrefixRuleMatch {
matched_prefix: tokens(&["git", "commit"]),
decision: Decision::Forbidden,
},
],
},
commit
);
}