Focus Python SDK approval mode

Default high-level thread and turn starts to auto-review, keep deny_all as the explicit opt-out, and remove the generated AskForApproval alias customization.

Co-authored-by: Codex <noreply@openai.com>
Ahmed Ibrahim
2026-05-10 12:06:14 +03:00
parent 800aa1d6ba
commit bfd11aa1fc
11 changed files with 99 additions and 215 deletions

View File

@@ -3,7 +3,7 @@
Public surface of `openai_codex` for app-server v2.
This SDK surface is experimental. Turn streams are routed by turn ID so one client can consume multiple active turns concurrently.
Thread and turn starts expose `approval_mode`. `ApprovalMode.deny_all` is the default and denies escalated permissions; `ApprovalMode.auto_review` routes escalated permission requests to auto-review.
Thread and turn starts expose `approval_mode`. `ApprovalMode.auto_review` is the default; use `ApprovalMode.deny_all` to deny escalated permissions.
## Package Entry
@@ -47,10 +47,10 @@ Properties/methods:
- `metadata -> InitializeResponse`
- `close() -> None`
- `thread_start(*, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_start(*, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_list(*, archived=None, cursor=None, cwd=None, limit=None, model_providers=None, sort_key=None, source_kinds=None) -> ThreadListResponse`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, sandbox=None) -> Thread`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, sandbox=None) -> Thread`
- `thread_archive(thread_id: str) -> ThreadArchiveResponse`
- `thread_unarchive(thread_id: str) -> Thread`
- `models(*, include_hidden: bool = False) -> ModelListResponse`
@@ -82,10 +82,10 @@ Properties/methods:
- `metadata -> InitializeResponse`
- `close() -> Awaitable[None]`
- `thread_start(*, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_start(*, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_list(*, archived=None, cursor=None, cwd=None, limit=None, model_providers=None, sort_key=None, source_kinds=None) -> Awaitable[ThreadListResponse]`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_archive(thread_id: str) -> Awaitable[ThreadArchiveResponse]`
- `thread_unarchive(thread_id: str) -> Awaitable[AsyncThread]`
- `models(*, include_hidden: bool = False) -> Awaitable[ModelListResponse]`
@@ -103,16 +103,16 @@ async with AsyncCodex() as codex:
### Thread
- `run(input: str | Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> RunResult`
- `turn(input: Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> TurnHandle`
- `run(input: str | Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> RunResult`
- `turn(input: Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> TurnHandle`
- `read(*, include_turns: bool = False) -> ThreadReadResponse`
- `set_name(name: str) -> ThreadSetNameResponse`
- `compact() -> ThreadCompactStartResponse`
### AsyncThread
- `run(input: str | Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> Awaitable[RunResult]`
- `turn(input: Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> Awaitable[AsyncTurnHandle]`
- `run(input: str | Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> Awaitable[RunResult]`
- `turn(input: Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> Awaitable[AsyncTurnHandle]`
- `read(*, include_turns: bool = False) -> Awaitable[ThreadReadResponse]`
- `set_name(name: str) -> Awaitable[ThreadSetNameResponse]`
- `compact() -> Awaitable[ThreadCompactStartResponse]`
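The default flip above can be sketched with a toy stand-in (the `ApprovalMode` enum and `thread_start` here are local mirrors of the documented signatures, not the SDK itself):

```python
from enum import Enum


class ApprovalMode(Enum):
    # Local mirror of the SDK's two public modes.
    auto_review = "auto_review"
    deny_all = "deny_all"


def thread_start(*, approval_mode=ApprovalMode.auto_review, model=None):
    """Toy stand-in mirroring the documented keyword default."""
    return {"approval_mode": approval_mode, "model": model}


# Callers now get auto-review unless they opt out explicitly.
default = thread_start(model="gpt-5.4")
opted_out = thread_start(approval_mode=ApprovalMode.deny_all)
```

The same opt-out shape applies to `thread_resume`, `thread_fork`, `Thread.run`, and `Thread.turn`.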

View File

@@ -18,7 +18,6 @@ ensure_local_sdk_src()
import asyncio
from openai_codex import (
ApprovalMode,
AsyncCodex,
TextInput,
)
@@ -46,18 +45,14 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_MODE = ApprovalMode.auto_review
async def main() -> None:
async with AsyncCodex(config=runtime_config()) as codex:
thread = await codex.thread_start(
model="gpt-5.4", config={"model_reasoning_effort": "high"}
)
thread = await codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
turn = await thread.turn(
TextInput(PROMPT),
approval_mode=APPROVAL_MODE,
output_schema=OUTPUT_SCHEMA,
personality=Personality.pragmatic,
summary=SUMMARY,
@@ -69,16 +64,12 @@ async def main() -> None:
try:
structured = json.loads(structured_text)
except json.JSONDecodeError as exc:
raise RuntimeError(
f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}"
) from exc
raise RuntimeError(f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}") from exc
summary = structured.get("summary")
actions = structured.get("actions")
if (
not isinstance(summary, str)
or not isinstance(actions, list)
or not all(isinstance(action, str) for action in actions)
if not isinstance(summary, str) or not isinstance(actions, list) or not all(
isinstance(action, str) for action in actions
):
raise RuntimeError(
f"Expected structured output with string summary/actions, got: {structured!r}"
@@ -89,9 +80,7 @@ async def main() -> None:
print("actions:")
for action in actions:
print("-", action)
print(
"Items:", 0 if persisted_turn is None else len(persisted_turn.items or [])
)
print("Items:", 0 if persisted_turn is None else len(persisted_turn.items or []))
if __name__ == "__main__":

View File

@@ -16,7 +16,6 @@ from _bootstrap import (
ensure_local_sdk_src()
from openai_codex import (
ApprovalMode,
Codex,
TextInput,
)
@@ -44,16 +43,12 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_MODE = ApprovalMode.auto_review
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(
model="gpt-5.4", config={"model_reasoning_effort": "high"}
)
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
turn = thread.turn(
TextInput(PROMPT),
approval_mode=APPROVAL_MODE,
output_schema=OUTPUT_SCHEMA,
personality=Personality.pragmatic,
summary=SUMMARY,
@@ -65,20 +60,14 @@ with Codex(config=runtime_config()) as codex:
try:
structured = json.loads(structured_text)
except json.JSONDecodeError as exc:
raise RuntimeError(
f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}"
) from exc
raise RuntimeError(f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}") from exc
summary = structured.get("summary")
actions = structured.get("actions")
if (
not isinstance(summary, str)
or not isinstance(actions, list)
or not all(isinstance(action, str) for action in actions)
if not isinstance(summary, str) or not isinstance(actions, list) or not all(
isinstance(action, str) for action in actions
):
raise RuntimeError(
f"Expected structured output with string summary/actions, got: {structured!r}"
)
raise RuntimeError(f"Expected structured output with string summary/actions, got: {structured!r}")
print("Status:", result.status)
print("summary:", summary)

View File

@@ -5,19 +5,13 @@ _EXAMPLES_ROOT = Path(__file__).resolve().parents[1]
if str(_EXAMPLES_ROOT) not in sys.path:
sys.path.insert(0, str(_EXAMPLES_ROOT))
from _bootstrap import (
assistant_text_from_turn,
ensure_local_sdk_src,
find_turn_by_id,
runtime_config,
)
from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn_by_id, runtime_config
ensure_local_sdk_src()
import asyncio
from openai_codex import (
ApprovalMode,
AsyncCodex,
TextInput,
)
@@ -41,16 +35,11 @@ PREFERRED_MODEL = "gpt-5.4"
def _pick_highest_model(models):
visible = [m for m in models if not m.hidden] or models
preferred = next(
(m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL),
None,
)
preferred = next((m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL), None)
if preferred is not None:
return preferred
known_names = {m.id for m in visible} | {m.model for m in visible}
top_candidates = [
m for m in visible if not (m.upgrade and m.upgrade in known_names)
]
top_candidates = [m for m in visible if not (m.upgrade and m.upgrade in known_names)]
pool = top_candidates or visible
return max(pool, key=lambda m: (m.model, m.id))
@@ -85,7 +74,6 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_MODE = ApprovalMode.auto_review
async def main() -> None:
@@ -112,16 +100,10 @@ async def main() -> None:
first_persisted_turn = find_turn_by_id(persisted.thread.turns, first.id)
print("agent.message:", assistant_text_from_turn(first_persisted_turn))
print(
"items:",
0
if first_persisted_turn is None
else len(first_persisted_turn.items or []),
)
print("items:", 0 if first_persisted_turn is None else len(first_persisted_turn.items or []))
second_turn = await thread.turn(
TextInput("Return JSON for a safe feature-flag rollout plan."),
approval_mode=APPROVAL_MODE,
cwd=str(Path.cwd()),
effort=selected_effort,
model=selected_model.model,
@@ -135,12 +117,7 @@ async def main() -> None:
second_persisted_turn = find_turn_by_id(persisted.thread.turns, second.id)
print("agent.message.params:", assistant_text_from_turn(second_persisted_turn))
print(
"items.params:",
0
if second_persisted_turn is None
else len(second_persisted_turn.items or []),
)
print("items.params:", 0 if second_persisted_turn is None else len(second_persisted_turn.items or []))
if __name__ == "__main__":

View File

@@ -5,17 +5,11 @@ _EXAMPLES_ROOT = Path(__file__).resolve().parents[1]
if str(_EXAMPLES_ROOT) not in sys.path:
sys.path.insert(0, str(_EXAMPLES_ROOT))
from _bootstrap import (
assistant_text_from_turn,
ensure_local_sdk_src,
find_turn_by_id,
runtime_config,
)
from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn_by_id, runtime_config
ensure_local_sdk_src()
from openai_codex import (
ApprovalMode,
Codex,
TextInput,
)
@@ -39,16 +33,11 @@ PREFERRED_MODEL = "gpt-5.4"
def _pick_highest_model(models):
visible = [m for m in models if not m.hidden] or models
preferred = next(
(m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL),
None,
)
preferred = next((m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL), None)
if preferred is not None:
return preferred
known_names = {m.id for m in visible} | {m.model for m in visible}
top_candidates = [
m for m in visible if not (m.upgrade and m.upgrade in known_names)
]
top_candidates = [m for m in visible if not (m.upgrade and m.upgrade in known_names)]
pool = top_candidates or visible
return max(pool, key=lambda m: (m.model, m.id))
@@ -83,7 +72,6 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_MODE = ApprovalMode.auto_review
with Codex(config=runtime_config()) as codex:
@@ -112,7 +100,6 @@ with Codex(config=runtime_config()) as codex:
second = thread.turn(
TextInput("Return JSON for a safe feature-flag rollout plan."),
approval_mode=APPROVAL_MODE,
cwd=str(Path.cwd()),
effort=selected_effort,
model=selected_model.model,

View File

@@ -246,9 +246,6 @@
"# Cell 5b: one turn with most optional turn params\n",
"from pathlib import Path\n",
"from openai_codex import (\n",
" ApprovalMode,\n",
")\n",
"from openai_codex.types import (\n",
" Personality,\n",
" ReasoningEffort,\n",
" ReasoningSummary,\n",
@@ -272,7 +269,6 @@
" thread = codex.thread_start(model='gpt-5.4', config={'model_reasoning_effort': 'high'})\n",
" turn = thread.turn(\n",
" TextInput('Propose a safe production feature-flag rollout. Return JSON matching the schema.'),\n",
" approval_mode=ApprovalMode.auto_review,\n",
" cwd=str(Path.cwd()),\n",
" effort=ReasoningEffort.medium,\n",
" model='gpt-5.4',\n",
@@ -298,9 +294,6 @@
"# Cell 5c: choose highest model + highest supported reasoning, then run turns\n",
"from pathlib import Path\n",
"from openai_codex import (\n",
" ApprovalMode,\n",
")\n",
"from openai_codex.types import (\n",
" Personality,\n",
" ReasoningEffort,\n",
" ReasoningSummary,\n",
@@ -365,7 +358,6 @@
"\n",
" second = thread.turn(\n",
" TextInput('Return JSON for a safe feature-flag rollout plan.'),\n",
" approval_mode=ApprovalMode.auto_review,\n",
" cwd=str(Path.cwd()),\n",
" effort=selected_effort,\n",
" model=selected_model.model,\n",

View File

@@ -615,50 +615,6 @@ def generate_v2_all(schema_dir: Path) -> None:
cwd=sdk_root(),
)
_normalize_generated_timestamps(out_path)
_add_ask_for_approval_aliases(out_path)
def _add_ask_for_approval_aliases(out_path: Path) -> None:
"""Add ergonomic approval policy constants to the generated RootModel class."""
source = out_path.read_text()
source = source.replace(
"from typing import Annotated, Any, Literal",
"from typing import Annotated, Any, ClassVar, Literal",
)
if "AskForApproval.never =" in source:
out_path.write_text(source)
return
needle = """class AskForApproval(RootModel[AskForApprovalValue | GranularAskForApproval]):
model_config = ConfigDict(
populate_by_name=True,
)
root: AskForApprovalValue | GranularAskForApproval
"""
replacement = """class AskForApproval(RootModel[AskForApprovalValue | GranularAskForApproval]):
model_config = ConfigDict(
populate_by_name=True,
)
root: AskForApprovalValue | GranularAskForApproval
untrusted: ClassVar[AskForApproval]
on_failure: ClassVar[AskForApproval]
on_request: ClassVar[AskForApproval]
never: ClassVar[AskForApproval]
AskForApproval.untrusted = AskForApproval(root=AskForApprovalValue.untrusted)
AskForApproval.on_failure = AskForApproval(root=AskForApprovalValue.on_failure)
AskForApproval.on_request = AskForApproval(root=AskForApprovalValue.on_request)
AskForApproval.never = AskForApproval(root=AskForApprovalValue.never)
"""
updated, count = source.replace(needle, replacement, 1), source.count(needle)
if count != 1:
raise RuntimeError("Could not add AskForApproval aliases to generated types")
out_path.write_text(updated)
def _notification_specs(schema_dir: Path) -> list[tuple[str, str]]:
@@ -743,7 +699,13 @@ def _type_tuple_source(class_names: list[str]) -> str:
def generate_notification_registry(schema_dir: Path) -> None:
"""Regenerate notification dispatch metadata from the runtime notification schema."""
out = sdk_root() / "src" / "openai_codex" / "generated" / "notification_registry.py"
out = (
sdk_root()
/ "src"
/ "openai_codex"
/ "generated"
/ "notification_registry.py"
)
specs = _notification_specs(schema_dir)
class_names = sorted({class_name for _, class_name in specs})
direct_turn_id_types, nested_turn_types = _notification_turn_id_specs(
@@ -921,7 +883,7 @@ def _kw_signature_lines(fields: list[PublicFieldSpec]) -> list[str]:
def _approval_mode_signature_lines() -> list[str]:
"""Return the public approval mode kwarg emitted on start helpers."""
return [" approval_mode: ApprovalMode = ApprovalMode.deny_all,"]
return [" approval_mode: ApprovalMode = ApprovalMode.auto_review,"]
def _approval_mode_assignment_line(*, indent: str = " ") -> str:
@@ -943,10 +905,7 @@ def _approval_mode_model_arg_lines(*, indent: str = " ") -> list[str]
def _model_arg_lines(
fields: list[PublicFieldSpec], *, indent: str = " "
) -> list[str]:
lines: list[str] = []
for field in fields:
lines.append(f"{indent}{field.wire_name}={field.py_name},")
return lines
return [f"{indent}{field.wire_name}={field.py_name}," for field in fields]
def _replace_generated_block(source: str, block_name: str, body: str) -> str:
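The deleted alias patcher above leaned on a replace-with-count guard so a drifted anchor fails loudly instead of silently emitting unpatched generated code. The pattern in isolation (a hypothetical `replace_once` helper, not part of the SDK):

```python
def replace_once(source: str, needle: str, replacement: str) -> str:
    """Replace exactly one occurrence of needle, or raise."""
    count = source.count(needle)
    if count != 1:
        # Zero matches means the anchor drifted; more than one is ambiguous.
        raise RuntimeError(f"expected exactly one occurrence of needle, found {count}")
    return source.replace(needle, replacement, 1)
```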

View File

@@ -10,6 +10,7 @@ from .client import AppServerClient, AppServerConfig
from .generated.v2_all import (
ApprovalsReviewer,
AskForApproval,
AskForApprovalValue,
ModelListResponse,
Personality,
ReasoningEffort,
@@ -81,10 +82,13 @@ def _approval_mode_settings(
approval_mode: ApprovalMode,
) -> tuple[AskForApproval, ApprovalsReviewer | None]:
"""Map the public approval mode to generated app-server start params."""
if approval_mode == ApprovalMode.deny_all:
return AskForApproval.never, None
if approval_mode == ApprovalMode.auto_review:
return AskForApproval.on_request, ApprovalsReviewer.auto_review
return (
AskForApproval(root=AskForApprovalValue.on_request),
ApprovalsReviewer.auto_review,
)
if approval_mode == ApprovalMode.deny_all:
return AskForApproval(root=AskForApprovalValue.never), None
# TODO: Add a public approval result callback API before exposing more modes.
supported = ", ".join(mode.value for mode in ApprovalMode)
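On the wire, each public mode maps to an approval policy plus an optional reviewer, as the test expectations later in this commit show. A simplified stand-in for `_approval_mode_settings` that returns wire strings instead of the generated models:

```python
from __future__ import annotations

from enum import Enum


class ApprovalMode(Enum):
    # Local mirror of the SDK's two public modes.
    auto_review = "auto_review"
    deny_all = "deny_all"


def approval_settings(mode: ApprovalMode) -> tuple[str, str | None]:
    # auto_review routes escalated permission requests to the automatic reviewer.
    if mode is ApprovalMode.auto_review:
        return ("on-request", "auto_review")
    # deny_all never asks and attaches no reviewer.
    if mode is ApprovalMode.deny_all:
        return ("never", None)
    raise ValueError(f"unsupported approval mode: {mode!r}")
```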
@@ -162,7 +166,7 @@ class Codex:
def thread_start(
self,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -230,7 +234,7 @@ class Codex:
self,
thread_id: str,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -263,7 +267,7 @@ class Codex:
self,
thread_id: str,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -363,7 +367,7 @@ class AsyncCodex:
async def thread_start(
self,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -433,7 +437,7 @@ class AsyncCodex:
self,
thread_id: str,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -467,7 +471,7 @@ class AsyncCodex:
self,
thread_id: str,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -524,7 +528,7 @@ class Thread:
self,
input: RunInput,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
@@ -557,7 +561,7 @@ class Thread:
self,
input: Input,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
@@ -607,7 +611,7 @@ class AsyncThread:
self,
input: RunInput,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
@@ -640,7 +644,7 @@ class AsyncThread:
self,
input: Input,
*,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,

View File

@@ -3,7 +3,7 @@
from __future__ import annotations
from pydantic import BaseModel, ConfigDict, Field, RootModel
from typing import Annotated, Any, ClassVar, Literal
from typing import Annotated, Any, Literal
from enum import Enum
@@ -248,16 +248,6 @@ class AskForApproval(RootModel[AskForApprovalValue | GranularAskForApproval]):
populate_by_name=True,
)
root: AskForApprovalValue | GranularAskForApproval
untrusted: ClassVar[AskForApproval]
on_failure: ClassVar[AskForApproval]
on_request: ClassVar[AskForApproval]
never: ClassVar[AskForApproval]
AskForApproval.untrusted = AskForApproval(root=AskForApprovalValue.untrusted)
AskForApproval.on_failure = AskForApproval(root=AskForApprovalValue.on_failure)
AskForApproval.on_request = AskForApproval(root=AskForApprovalValue.on_request)
AskForApproval.never = AskForApproval(root=AskForApprovalValue.never)
class AuthMode(Enum):

View File

@@ -30,7 +30,6 @@ from openai_codex.api import (
Thread,
TurnHandle,
)
from openai_codex.types import AskForApproval
ROOT = Path(__file__).resolve().parents[1]
@@ -249,21 +248,6 @@ def test_async_codex_initializes_only_once_under_concurrency() -> None:
asyncio.run(scenario())
def test_ask_for_approval_exposes_simple_policy_constants() -> None:
"""AskForApproval should expose enum-like aliases for simple policies."""
assert {
"untrusted": AskForApproval.untrusted.model_dump(mode="json"),
"on_failure": AskForApproval.on_failure.model_dump(mode="json"),
"on_request": AskForApproval.on_request.model_dump(mode="json"),
"never": AskForApproval.never.model_dump(mode="json"),
} == {
"untrusted": "untrusted",
"on_failure": "on-failure",
"on_request": "on-request",
"never": "never",
}
def test_sync_api_maps_approval_modes_for_started_work() -> None:
"""Sync start methods should serialize only supported approval modes."""
captured: list[Any] = []
@@ -307,23 +291,23 @@ def test_sync_api_maps_approval_modes_for_started_work() -> None:
codex.thread_resume("thread-1")
codex.thread_fork("thread-1")
Thread(client, "thread-1").turn(TextInput("hello"))
codex.thread_start(approval_mode=ApprovalMode.auto_review)
codex.thread_resume("thread-1", approval_mode=ApprovalMode.auto_review)
codex.thread_fork("thread-1", approval_mode=ApprovalMode.auto_review)
codex.thread_start(approval_mode=ApprovalMode.deny_all)
codex.thread_resume("thread-1", approval_mode=ApprovalMode.deny_all)
codex.thread_fork("thread-1", approval_mode=ApprovalMode.deny_all)
Thread(client, "thread-1").turn(
TextInput("hello"),
approval_mode=ApprovalMode.auto_review,
approval_mode=ApprovalMode.deny_all,
)
assert _approval_settings(captured) == [
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
]
@@ -373,7 +357,6 @@ def test_sync_api_rejects_unknown_approval_mode_before_rpc() -> None:
def test_async_api_maps_approval_modes_for_started_work() -> None:
"""Async start methods should serialize only supported approval modes."""
async def scenario() -> None:
"""Exercise the async wrappers without spawning a real app server."""
captured: list[Any] = []
@@ -417,26 +400,26 @@ def test_async_api_maps_approval_modes_for_started_work() -> None:
await codex.thread_resume("thread-1")
await codex.thread_fork("thread-1")
await AsyncThread(codex, "thread-1").turn(TextInput("hello"))
await codex.thread_start(approval_mode=ApprovalMode.auto_review)
await codex.thread_start(approval_mode=ApprovalMode.deny_all)
await codex.thread_resume(
"thread-1",
approval_mode=ApprovalMode.auto_review,
approval_mode=ApprovalMode.deny_all,
)
await codex.thread_fork("thread-1", approval_mode=ApprovalMode.auto_review)
await codex.thread_fork("thread-1", approval_mode=ApprovalMode.deny_all)
await AsyncThread(codex, "thread-1").turn(
TextInput("hello"),
approval_mode=ApprovalMode.auto_review,
approval_mode=ApprovalMode.deny_all,
)
assert _approval_settings(captured) == [
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
]
asyncio.run(scenario())
@@ -444,7 +427,6 @@ def test_async_api_maps_approval_modes_for_started_work() -> None:
def test_async_api_rejects_unknown_approval_mode_before_rpc() -> None:
"""Unknown async approval modes should fail before awaiting client calls."""
async def scenario() -> None:
"""Exercise async validation without starting a real app-server process."""
calls: list[str] = []
@@ -524,7 +506,6 @@ def test_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
def test_async_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
"""Two async TurnHandle streams should advance independently on one client."""
async def scenario() -> None:
"""Interleave two async streams backed by separate per-turn queues."""
codex = AsyncCodex()
@@ -607,10 +588,7 @@ def test_thread_run_accepts_string_input_and_returns_run_result() -> None:
client.turn_start = fake_turn_start # type: ignore[method-assign]
result = Thread(client, "thread-1").run(
"hello",
approval_mode=ApprovalMode.auto_review,
)
result = Thread(client, "thread-1").run("hello")
assert (
seen["thread_id"],
@@ -797,7 +775,6 @@ def test_stream_text_registers_and_consumes_turn_notifications() -> None:
def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
"""Async Thread.run should normalize string input and collect routed results."""
async def scenario() -> None:
"""Feed item, usage, and completion events through the async turn stream."""
codex = AsyncCodex()
@@ -832,10 +809,7 @@ def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
codex._client.turn_start = fake_turn_start # type: ignore[method-assign]
codex._client.next_turn_notification = fake_next_notification # type: ignore[method-assign]
result = await AsyncThread(codex, "thread-1").run(
"hello",
approval_mode=ApprovalMode.auto_review,
)
result = await AsyncThread(codex, "thread-1").run("hello")
assert (
seen["thread_id"],
@@ -860,7 +834,6 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_respons
None
):
"""Async run should use the last final assistant message as the response text."""
async def scenario() -> None:
"""Feed two completed agent messages through the async per-turn stream."""
codex = AsyncCodex()
@@ -908,7 +881,6 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_respons
def test_async_thread_run_returns_none_when_only_commentary_messages_complete() -> None:
"""Async Thread.run should ignore commentary-only messages for final text."""
async def scenario() -> None:
"""Feed a commentary item and completion through the async turn stream."""
codex = AsyncCodex()

View File

@@ -97,6 +97,11 @@ def _keyword_only_names(fn: object) -> list[str]:
]
def _keyword_default(fn: object, name: str) -> object:
"""Return the default value for one keyword parameter on a public method."""
return inspect.signature(fn).parameters[name].default
def _assert_no_any_annotations(fn: object) -> None:
"""Reject loose annotations on public wrapper methods."""
signature = inspect.signature(fn)
@@ -378,6 +383,26 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
_assert_no_any_annotations(fn)
def test_generated_public_methods_default_to_auto_review() -> None:
"""Thread and turn starts should use auto-review unless callers opt out."""
funcs = [
Codex.thread_start,
Codex.thread_resume,
Codex.thread_fork,
Thread.turn,
Thread.run,
AsyncCodex.thread_start,
AsyncCodex.thread_resume,
AsyncCodex.thread_fork,
AsyncThread.turn,
AsyncThread.run,
]
assert {fn: _keyword_default(fn, "approval_mode") for fn in funcs} == {
fn: ApprovalMode.auto_review for fn in funcs
}
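The `_keyword_default` helper reads declared defaults straight off `inspect.signature`, so the test pins the default without calling any SDK method. The same check against a local stand-in:

```python
import inspect
from enum import Enum


class ApprovalMode(Enum):
    # Local mirror, not the SDK enum.
    auto_review = "auto_review"
    deny_all = "deny_all"


def thread_start(*, approval_mode=ApprovalMode.auto_review):
    """Stand-in with the documented keyword default."""


def keyword_default(fn, name):
    # Read the declared default off the signature object; no call needed.
    return inspect.signature(fn).parameters[name].default
```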
def test_lifecycle_methods_are_codex_scoped() -> None:
"""Lifecycle operations should hang off the client rather than thread objects."""
assert hasattr(Codex, "thread_resume")