Add high-level Python SDK approval mode

Expose approval_mode with deny_all and auto_review options on the high-level Python SDK, and map those choices to generated app-server approval params internally.

Update examples, docs, notebooks, and public API tests to use the new mode instead of raw generated approval fields.
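For callers migrating off the raw generated fields, a minimal sketch of the new surface (the `openai_codex` import is assumed to come from this commit; a stub enum mirroring the diff is used as a fallback so the snippet stands alone):

```python
from enum import Enum

try:
    from openai_codex import ApprovalMode  # available once this commit lands
except ImportError:
    # Fallback stub mirroring the enum added in this commit.
    class ApprovalMode(str, Enum):
        deny_all = "deny_all"
        auto_review = "auto_review"

# Before: turn = thread.turn(prompt, approval_policy=AskForApproval.never)
# After (default, denies escalated permissions):
#   turn = thread.turn(prompt, approval_mode=ApprovalMode.deny_all)
# Or route escalated permission requests to auto-review:
#   turn = thread.turn(prompt, approval_mode=ApprovalMode.auto_review)
assert ApprovalMode.deny_all.value == "deny_all"
assert ApprovalMode.auto_review.value == "auto_review"
```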

Co-authored-by: Codex <noreply@openai.com>
Ahmed Ibrahim
2026-05-10 11:44:34 +03:00
parent 7edbdc555c
commit ffe6e44a03
11 changed files with 314 additions and 159 deletions

View File

@@ -3,7 +3,7 @@
Public surface of `openai_codex` for app-server v2.
This SDK surface is experimental. Turn streams are routed by turn ID so one client can consume multiple active turns concurrently.
Thread and turn starts currently send `AskForApproval.never` while SDK approval request handling is still pending.
Thread and turn starts expose `approval_mode`. `ApprovalMode.deny_all` is the default and denies escalated permissions; `ApprovalMode.auto_review` routes escalated permission requests to auto-review.
## Package Entry
@@ -11,6 +11,7 @@ Thread and turn starts currently send `AskForApproval.never` while SDK approval
from openai_codex import (
Codex,
AsyncCodex,
ApprovalMode,
RunResult,
Thread,
AsyncThread,
@@ -46,10 +47,10 @@ Properties/methods:
- `metadata -> InitializeResponse`
- `close() -> None`
- `thread_start(*, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_start(*, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_list(*, archived=None, cursor=None, cwd=None, limit=None, model_providers=None, sort_key=None, source_kinds=None) -> ThreadListResponse`
- `thread_resume(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_fork(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, sandbox=None) -> Thread`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, sandbox=None) -> Thread`
- `thread_archive(thread_id: str) -> ThreadArchiveResponse`
- `thread_unarchive(thread_id: str) -> Thread`
- `models(*, include_hidden: bool = False) -> ModelListResponse`
@@ -81,10 +82,10 @@ Properties/methods:
- `metadata -> InitializeResponse`
- `close() -> Awaitable[None]`
- `thread_start(*, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_start(*, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_list(*, archived=None, cursor=None, cwd=None, limit=None, model_providers=None, sort_key=None, source_kinds=None) -> Awaitable[ThreadListResponse]`
- `thread_resume(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_fork(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.deny_all, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_archive(thread_id: str) -> Awaitable[ThreadArchiveResponse]`
- `thread_unarchive(thread_id: str) -> Awaitable[AsyncThread]`
- `models(*, include_hidden: bool = False) -> Awaitable[ModelListResponse]`
@@ -102,16 +103,16 @@ async with AsyncCodex() as codex:
### Thread
- `run(input: str | Input, *, approval_policy=None, approvals_reviewer=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> RunResult`
- `turn(input: Input, *, approval_policy=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> TurnHandle`
- `run(input: str | Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> RunResult`
- `turn(input: Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> TurnHandle`
- `read(*, include_turns: bool = False) -> ThreadReadResponse`
- `set_name(name: str) -> ThreadSetNameResponse`
- `compact() -> ThreadCompactStartResponse`
### AsyncThread
- `run(input: str | Input, *, approval_policy=None, approvals_reviewer=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> Awaitable[RunResult]`
- `turn(input: Input, *, approval_policy=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> Awaitable[AsyncTurnHandle]`
- `run(input: str | Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> Awaitable[RunResult]`
- `turn(input: Input, *, approval_mode=ApprovalMode.deny_all, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> Awaitable[AsyncTurnHandle]`
- `read(*, include_turns: bool = False) -> Awaitable[ThreadReadResponse]`
- `set_name(name: str) -> Awaitable[ThreadSetNameResponse]`
- `compact() -> Awaitable[ThreadCompactStartResponse]`
@@ -175,7 +176,6 @@ The SDK wrappers return and accept public app-server models wherever possible:
```python
from openai_codex.types import (
AskForApproval,
ThreadReadResponse,
Turn,
TurnStatus,

View File

@@ -18,11 +18,11 @@ ensure_local_sdk_src()
import asyncio
from openai_codex import (
ApprovalMode,
AsyncCodex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningSummary,
)
@@ -46,16 +46,18 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_POLICY = AskForApproval.never
APPROVAL_MODE = ApprovalMode.auto_review
async def main() -> None:
async with AsyncCodex(config=runtime_config()) as codex:
thread = await codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
thread = await codex.thread_start(
model="gpt-5.4", config={"model_reasoning_effort": "high"}
)
turn = await thread.turn(
TextInput(PROMPT),
approval_policy=APPROVAL_POLICY,
approval_mode=APPROVAL_MODE,
output_schema=OUTPUT_SCHEMA,
personality=Personality.pragmatic,
summary=SUMMARY,
@@ -67,12 +69,16 @@ async def main() -> None:
try:
structured = json.loads(structured_text)
except json.JSONDecodeError as exc:
raise RuntimeError(f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}") from exc
raise RuntimeError(
f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}"
) from exc
summary = structured.get("summary")
actions = structured.get("actions")
if not isinstance(summary, str) or not isinstance(actions, list) or not all(
isinstance(action, str) for action in actions
if (
not isinstance(summary, str)
or not isinstance(actions, list)
or not all(isinstance(action, str) for action in actions)
):
raise RuntimeError(
f"Expected structured output with string summary/actions, got: {structured!r}"
@@ -83,7 +89,9 @@ async def main() -> None:
print("actions:")
for action in actions:
print("-", action)
print("Items:", 0 if persisted_turn is None else len(persisted_turn.items or []))
print(
"Items:", 0 if persisted_turn is None else len(persisted_turn.items or [])
)
if __name__ == "__main__":

View File

@@ -16,11 +16,11 @@ from _bootstrap import (
ensure_local_sdk_src()
from openai_codex import (
ApprovalMode,
Codex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningSummary,
)
@@ -44,14 +44,16 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_POLICY = AskForApproval.never
APPROVAL_MODE = ApprovalMode.auto_review
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
thread = codex.thread_start(
model="gpt-5.4", config={"model_reasoning_effort": "high"}
)
turn = thread.turn(
TextInput(PROMPT),
approval_policy=APPROVAL_POLICY,
approval_mode=APPROVAL_MODE,
output_schema=OUTPUT_SCHEMA,
personality=Personality.pragmatic,
summary=SUMMARY,
@@ -63,14 +65,20 @@ with Codex(config=runtime_config()) as codex:
try:
structured = json.loads(structured_text)
except json.JSONDecodeError as exc:
raise RuntimeError(f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}") from exc
raise RuntimeError(
f"Expected JSON matching OUTPUT_SCHEMA, got: {structured_text!r}"
) from exc
summary = structured.get("summary")
actions = structured.get("actions")
if not isinstance(summary, str) or not isinstance(actions, list) or not all(
isinstance(action, str) for action in actions
if (
not isinstance(summary, str)
or not isinstance(actions, list)
or not all(isinstance(action, str) for action in actions)
):
raise RuntimeError(f"Expected structured output with string summary/actions, got: {structured!r}")
raise RuntimeError(
f"Expected structured output with string summary/actions, got: {structured!r}"
)
print("Status:", result.status)
print("summary:", summary)

View File

@@ -5,18 +5,23 @@ _EXAMPLES_ROOT = Path(__file__).resolve().parents[1]
if str(_EXAMPLES_ROOT) not in sys.path:
sys.path.insert(0, str(_EXAMPLES_ROOT))
from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn_by_id, runtime_config
from _bootstrap import (
assistant_text_from_turn,
ensure_local_sdk_src,
find_turn_by_id,
runtime_config,
)
ensure_local_sdk_src()
import asyncio
from openai_codex import (
ApprovalMode,
AsyncCodex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningEffort,
ReasoningSummary,
@@ -36,11 +41,16 @@ PREFERRED_MODEL = "gpt-5.4"
def _pick_highest_model(models):
visible = [m for m in models if not m.hidden] or models
preferred = next((m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL), None)
preferred = next(
(m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL),
None,
)
if preferred is not None:
return preferred
known_names = {m.id for m in visible} | {m.model for m in visible}
top_candidates = [m for m in visible if not (m.upgrade and m.upgrade in known_names)]
top_candidates = [
m for m in visible if not (m.upgrade and m.upgrade in known_names)
]
pool = top_candidates or visible
return max(pool, key=lambda m: (m.model, m.id))
@@ -75,7 +85,7 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_POLICY = AskForApproval.never
APPROVAL_MODE = ApprovalMode.auto_review
async def main() -> None:
@@ -102,11 +112,16 @@ async def main() -> None:
first_persisted_turn = find_turn_by_id(persisted.thread.turns, first.id)
print("agent.message:", assistant_text_from_turn(first_persisted_turn))
print("items:", 0 if first_persisted_turn is None else len(first_persisted_turn.items or []))
print(
"items:",
0
if first_persisted_turn is None
else len(first_persisted_turn.items or []),
)
second_turn = await thread.turn(
TextInput("Return JSON for a safe feature-flag rollout plan."),
approval_policy=APPROVAL_POLICY,
approval_mode=APPROVAL_MODE,
cwd=str(Path.cwd()),
effort=selected_effort,
model=selected_model.model,
@@ -120,7 +135,12 @@ async def main() -> None:
second_persisted_turn = find_turn_by_id(persisted.thread.turns, second.id)
print("agent.message.params:", assistant_text_from_turn(second_persisted_turn))
print("items.params:", 0 if second_persisted_turn is None else len(second_persisted_turn.items or []))
print(
"items.params:",
0
if second_persisted_turn is None
else len(second_persisted_turn.items or []),
)
if __name__ == "__main__":

View File

@@ -5,16 +5,21 @@ _EXAMPLES_ROOT = Path(__file__).resolve().parents[1]
if str(_EXAMPLES_ROOT) not in sys.path:
sys.path.insert(0, str(_EXAMPLES_ROOT))
from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn_by_id, runtime_config
from _bootstrap import (
assistant_text_from_turn,
ensure_local_sdk_src,
find_turn_by_id,
runtime_config,
)
ensure_local_sdk_src()
from openai_codex import (
ApprovalMode,
Codex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningEffort,
ReasoningSummary,
@@ -34,11 +39,16 @@ PREFERRED_MODEL = "gpt-5.4"
def _pick_highest_model(models):
visible = [m for m in models if not m.hidden] or models
preferred = next((m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL), None)
preferred = next(
(m for m in visible if m.model == PREFERRED_MODEL or m.id == PREFERRED_MODEL),
None,
)
if preferred is not None:
return preferred
known_names = {m.id for m in visible} | {m.model for m in visible}
top_candidates = [m for m in visible if not (m.upgrade and m.upgrade in known_names)]
top_candidates = [
m for m in visible if not (m.upgrade and m.upgrade in known_names)
]
pool = top_candidates or visible
return max(pool, key=lambda m: (m.model, m.id))
@@ -73,7 +83,7 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_POLICY = AskForApproval.never
APPROVAL_MODE = ApprovalMode.auto_review
with Codex(config=runtime_config()) as codex:
@@ -102,7 +112,7 @@ with Codex(config=runtime_config()) as codex:
second = thread.turn(
TextInput("Return JSON for a safe feature-flag rollout plan."),
approval_policy=APPROVAL_POLICY,
approval_mode=APPROVAL_MODE,
cwd=str(Path.cwd()),
effort=selected_effort,
model=selected_model.model,

View File

@@ -246,7 +246,9 @@
"# Cell 5b: one turn with most optional turn params\n",
"from pathlib import Path\n",
"from openai_codex import (\n",
" AskForApproval,\n",
" ApprovalMode,\n",
")\n",
"from openai_codex.types import (\n",
" Personality,\n",
" ReasoningEffort,\n",
" ReasoningSummary,\n",
@@ -270,7 +272,7 @@
" thread = codex.thread_start(model='gpt-5.4', config={'model_reasoning_effort': 'high'})\n",
" turn = thread.turn(\n",
" TextInput('Propose a safe production feature-flag rollout. Return JSON matching the schema.'),\n",
" approval_policy=AskForApproval.never,\n",
" approval_mode=ApprovalMode.auto_review,\n",
" cwd=str(Path.cwd()),\n",
" effort=ReasoningEffort.medium,\n",
" model='gpt-5.4',\n",
@@ -296,7 +298,9 @@
"# Cell 5c: choose highest model + highest supported reasoning, then run turns\n",
"from pathlib import Path\n",
"from openai_codex import (\n",
" AskForApproval,\n",
" ApprovalMode,\n",
")\n",
"from openai_codex.types import (\n",
" Personality,\n",
" ReasoningEffort,\n",
" ReasoningSummary,\n",
@@ -361,7 +365,7 @@
"\n",
" second = thread.turn(\n",
" TextInput('Return JSON for a safe feature-flag rollout plan.'),\n",
" approval_policy=AskForApproval.never,\n",
" approval_mode=ApprovalMode.auto_review,\n",
" cwd=str(Path.cwd()),\n",
" effort=selected_effort,\n",
" model=selected_model.model,\n",

View File

@@ -743,13 +743,7 @@ def _type_tuple_source(class_names: list[str]) -> str:
def generate_notification_registry(schema_dir: Path) -> None:
"""Regenerate notification dispatch metadata from the runtime notification schema."""
out = (
sdk_root()
/ "src"
/ "openai_codex"
/ "generated"
/ "notification_registry.py"
)
out = sdk_root() / "src" / "openai_codex" / "generated" / "notification_registry.py"
specs = _notification_specs(schema_dir)
class_names = sorted({class_name for _, class_name in specs})
direct_turn_id_types, nested_turn_types = _notification_turn_id_specs(
@@ -925,17 +919,33 @@ def _kw_signature_lines(fields: list[PublicFieldSpec]) -> list[str]:
return lines
def _approval_mode_signature_lines() -> list[str]:
"""Return the public approval mode kwarg emitted on start helpers."""
return [" approval_mode: ApprovalMode = ApprovalMode.deny_all,"]
def _approval_mode_assignment_line(*, indent: str = " ") -> str:
"""Return the local mapping from public mode to app-server params."""
return (
f"{indent}approval_policy, approvals_reviewer = "
"_approval_mode_settings(approval_mode)"
)
def _approval_mode_model_arg_lines(*, indent: str = " ") -> list[str]:
"""Return app-server approval params derived from ApprovalMode."""
return [
f"{indent}approval_policy=approval_policy,",
f"{indent}approvals_reviewer=approvals_reviewer,",
]
def _model_arg_lines(
fields: list[PublicFieldSpec], *, indent: str = " "
) -> list[str]:
lines: list[str] = []
for field in fields:
value = field.py_name
if field.py_name == "approval_policy":
# TODO: Add a public approval callback API that lets callers return
# typed approval results, then honor caller-supplied policies.
value = "_approval_policy_never(approval_policy)"
lines.append(f"{indent}{field.wire_name}={value},")
lines.append(f"{indent}{field.wire_name}={field.py_name},")
return lines
@@ -960,9 +970,12 @@ def _render_codex_block(
" def thread_start(",
" self,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(thread_start_fields),
" ) -> Thread:",
_approval_mode_assignment_line(),
" params = ThreadStartParams(",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(thread_start_fields),
" )",
" started = self._client.thread_start(params)",
@@ -982,10 +995,13 @@ def _render_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(resume_fields),
" ) -> Thread:",
_approval_mode_assignment_line(),
" params = ThreadResumeParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(resume_fields),
" )",
" resumed = self._client.thread_resume(thread_id, params)",
@@ -995,10 +1011,13 @@ def _render_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(fork_fields),
" ) -> Thread:",
_approval_mode_assignment_line(),
" params = ThreadForkParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(fork_fields),
" )",
" forked = self._client.thread_fork(thread_id, params)",
@@ -1024,10 +1043,13 @@ def _render_async_codex_block(
" async def thread_start(",
" self,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(thread_start_fields),
" ) -> AsyncThread:",
" await self._ensure_initialized()",
_approval_mode_assignment_line(),
" params = ThreadStartParams(",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(thread_start_fields),
" )",
" started = await self._client.thread_start(params)",
@@ -1048,11 +1070,14 @@ def _render_async_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(resume_fields),
" ) -> AsyncThread:",
" await self._ensure_initialized()",
_approval_mode_assignment_line(),
" params = ThreadResumeParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(resume_fields),
" )",
" resumed = await self._client.thread_resume(thread_id, params)",
@@ -1062,11 +1087,14 @@ def _render_async_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(fork_fields),
" ) -> AsyncThread:",
" await self._ensure_initialized()",
_approval_mode_assignment_line(),
" params = ThreadForkParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(fork_fields),
" )",
" forked = await self._client.thread_fork(thread_id, params)",
@@ -1092,12 +1120,15 @@ def _render_thread_block(
" self,",
" input: Input,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(turn_fields),
" ) -> TurnHandle:",
" wire_input = _to_wire_input(input)",
_approval_mode_assignment_line(),
" params = TurnStartParams(",
" thread_id=self.id,",
" input=wire_input,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(turn_fields),
" )",
" turn = self._client.turn_start(self.id, wire_input, params=params)",
@@ -1114,13 +1145,16 @@ def _render_async_thread_block(
" self,",
" input: Input,",
" *,",
*_approval_mode_signature_lines(),
*_kw_signature_lines(turn_fields),
" ) -> AsyncTurnHandle:",
" await self._codex._ensure_initialized()",
" wire_input = _to_wire_input(input)",
_approval_mode_assignment_line(),
" params = TurnStartParams(",
" thread_id=self.id,",
" input=wire_input,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(turn_fields),
" )",
" turn = await self._codex._client.turn_start(",
@@ -1144,9 +1178,11 @@ def generate_public_api_flat_methods() -> None:
if src_dir_str not in sys.path:
sys.path.insert(0, src_dir_str)
approval_fields = {"approval_policy", "approvals_reviewer"}
thread_start_fields = _load_public_fields(
"openai_codex.generated.v2_all",
"ThreadStartParams",
exclude=approval_fields,
)
thread_list_fields = _load_public_fields(
"openai_codex.generated.v2_all",
@@ -1155,17 +1191,17 @@ def generate_public_api_flat_methods() -> None:
thread_resume_fields = _load_public_fields(
"openai_codex.generated.v2_all",
"ThreadResumeParams",
exclude={"thread_id"},
exclude={"thread_id", *approval_fields},
)
thread_fork_fields = _load_public_fields(
"openai_codex.generated.v2_all",
"ThreadForkParams",
exclude={"thread_id"},
exclude={"thread_id", *approval_fields},
)
turn_start_fields = _load_public_fields(
"openai_codex.generated.v2_all",
"TurnStartParams",
exclude={"thread_id", "input"},
exclude={"thread_id", "input", *approval_fields},
)
source = public_api_path.read_text()

View File

@@ -14,6 +14,7 @@ from .errors import (
is_retryable_error,
)
from .api import (
ApprovalMode,
AsyncCodex,
AsyncThread,
AsyncTurnHandle,
@@ -37,6 +38,7 @@ __all__ = [
"AppServerConfig",
"Codex",
"AsyncCodex",
"ApprovalMode",
"Thread",
"AsyncThread",
"TurnHandle",

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
import asyncio
from dataclasses import dataclass
from enum import Enum
from typing import AsyncIterator, Iterator
from .async_client import AsyncAppServerClient
@@ -69,10 +70,25 @@ def _split_user_agent(user_agent: str) -> tuple[str | None, str | None]:
return raw, None
def _approval_policy_never(_approval_policy: AskForApproval | None) -> AskForApproval:
# TODO: Add a public approval callback API that lets callers return typed
# approval results, then honor caller-supplied policies.
return AskForApproval.never
class ApprovalMode(str, Enum):
"""High-level approval behavior for escalated permission requests."""
deny_all = "deny_all"
auto_review = "auto_review"
def _approval_mode_settings(
approval_mode: ApprovalMode,
) -> tuple[AskForApproval, ApprovalsReviewer | None]:
"""Map the public approval mode to generated app-server start params."""
if approval_mode == ApprovalMode.deny_all:
return AskForApproval.never, None
if approval_mode == ApprovalMode.auto_review:
return AskForApproval.on_request, ApprovalsReviewer.auto_review
# TODO: Add a public approval result callback API before exposing more modes.
supported = ", ".join(mode.value for mode in ApprovalMode)
raise ValueError(f"approval_mode must be one of: {supported}")
class Codex:
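The `ApprovalMode` → app-server mapping above can be checked in isolation; this sketch uses stub enums for the generated `AskForApproval` and `ApprovalsReviewer` types (wire values inferred from the public API tests later in this commit):

```python
from enum import Enum

class AskForApproval(str, Enum):      # stub for the generated type
    never = "never"
    on_request = "on-request"

class ApprovalsReviewer(str, Enum):   # stub for the generated type
    auto_review = "auto_review"

class ApprovalMode(str, Enum):
    """High-level approval behavior for escalated permission requests."""
    deny_all = "deny_all"
    auto_review = "auto_review"

def approval_mode_settings(mode: ApprovalMode):
    """Map the public mode onto app-server approval params."""
    if mode == ApprovalMode.deny_all:
        return AskForApproval.never, None
    if mode == ApprovalMode.auto_review:
        return AskForApproval.on_request, ApprovalsReviewer.auto_review
    supported = ", ".join(m.value for m in ApprovalMode)
    raise ValueError(f"approval_mode must be one of: {supported}")

assert approval_mode_settings(ApprovalMode.deny_all) == (AskForApproval.never, None)
assert approval_mode_settings(ApprovalMode.auto_review) == (
    AskForApproval.on_request,
    ApprovalsReviewer.auto_review,
)
```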
@@ -146,8 +162,7 @@ class Codex:
def thread_start(
self,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -162,8 +177,9 @@ class Codex:
session_start_source: ThreadStartSource | None = None,
thread_source: ThreadSource | None = None,
) -> Thread:
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadStartParams(
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -214,8 +230,7 @@ class Codex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -226,9 +241,10 @@ class Codex:
sandbox: SandboxMode | None = None,
service_tier: str | None = None,
) -> Thread:
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadResumeParams(
thread_id=thread_id,
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -247,8 +263,7 @@ class Codex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -260,9 +275,10 @@ class Codex:
service_tier: str | None = None,
thread_source: ThreadSource | None = None,
) -> Thread:
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadForkParams(
thread_id=thread_id,
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -347,8 +363,7 @@ class AsyncCodex:
async def thread_start(
self,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -364,8 +379,9 @@ class AsyncCodex:
thread_source: ThreadSource | None = None,
) -> AsyncThread:
await self._ensure_initialized()
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadStartParams(
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -417,8 +433,7 @@ class AsyncCodex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -430,9 +445,10 @@ class AsyncCodex:
service_tier: str | None = None,
) -> AsyncThread:
await self._ensure_initialized()
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadResumeParams(
thread_id=thread_id,
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -451,8 +467,7 @@ class AsyncCodex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -465,9 +480,10 @@ class AsyncCodex:
thread_source: ThreadSource | None = None,
) -> AsyncThread:
await self._ensure_initialized()
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadForkParams(
thread_id=thread_id,
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -508,8 +524,7 @@ class Thread:
self,
input: RunInput,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
@@ -521,8 +536,7 @@ class Thread:
) -> RunResult:
turn = self.turn(
_normalize_run_input(input),
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
approval_mode=approval_mode,
cwd=cwd,
effort=effort,
model=model,
@@ -543,8 +557,7 @@ class Thread:
self,
input: Input,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
@@ -555,10 +568,11 @@ class Thread:
summary: ReasoningSummary | None = None,
) -> TurnHandle:
wire_input = _to_wire_input(input)
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = TurnStartParams(
thread_id=self.id,
input=wire_input,
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
cwd=cwd,
effort=effort,
@@ -593,8 +607,7 @@ class AsyncThread:
self,
input: RunInput,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
@@ -606,8 +619,7 @@ class AsyncThread:
) -> RunResult:
turn = await self.turn(
_normalize_run_input(input),
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
approval_mode=approval_mode,
cwd=cwd,
effort=effort,
model=model,
@@ -628,8 +640,7 @@ class AsyncThread:
self,
input: Input,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.deny_all,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
@@ -641,10 +652,11 @@ class AsyncThread:
) -> AsyncTurnHandle:
await self._codex._ensure_initialized()
wire_input = _to_wire_input(input)
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = TurnStartParams(
thread_id=self.id,
input=wire_input,
approval_policy=_approval_policy_never(approval_policy),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
cwd=cwd,
effort=effort,

View File

@@ -20,6 +20,7 @@ from openai_codex.generated.v2_all import (
)
from openai_codex.models import InitializeResponse, Notification
from openai_codex.api import (
ApprovalMode,
AsyncCodex,
AsyncThread,
AsyncTurnHandle,
@@ -34,10 +35,18 @@ from openai_codex.types import AskForApproval
ROOT = Path(__file__).resolve().parents[1]
def _approval_policy_values(params: list[Any]) -> list[object]:
"""Return serialized approval policies from captured Pydantic params."""
def _approval_settings(params: list[Any]) -> list[dict[str, object]]:
"""Return serialized approval settings from captured Pydantic params."""
return [
param.model_dump(by_alias=True, mode="json").get("approvalPolicy")
{
key: value
for key, value in param.model_dump(
by_alias=True,
exclude_none=True,
mode="json",
).items()
if key in {"approvalPolicy", "approvalsReviewer"}
}
for param in params
]
@@ -255,8 +264,8 @@ def test_ask_for_approval_exposes_simple_policy_constants() -> None:
}
def test_sync_api_forces_approval_policy_never_for_started_work() -> None:
"""Sync start methods should send never until approval handling exists."""
def test_sync_api_maps_approval_modes_for_started_work() -> None:
"""Sync start methods should serialize only supported approval modes."""
captured: list[Any] = []
class FakeClient:
@@ -294,19 +303,33 @@ def test_sync_api_forces_approval_policy_never_for_started_work() -> None:
codex = object.__new__(Codex)
codex._client = client
codex.thread_start(approval_policy=AskForApproval.on_request)
codex.thread_resume("thread-1", approval_policy=AskForApproval.on_request)
codex.thread_fork("thread-1", approval_policy=AskForApproval.on_request)
codex.thread_start()
codex.thread_resume("thread-1")
codex.thread_fork("thread-1")
Thread(client, "thread-1").turn(TextInput("hello"))
codex.thread_start(approval_mode=ApprovalMode.auto_review)
codex.thread_resume("thread-1", approval_mode=ApprovalMode.auto_review)
codex.thread_fork("thread-1", approval_mode=ApprovalMode.auto_review)
Thread(client, "thread-1").turn(
TextInput("hello"),
approval_policy=AskForApproval.on_request,
approval_mode=ApprovalMode.auto_review,
)
assert _approval_policy_values(captured) == ["never", "never", "never", "never"]
assert _approval_settings(captured) == [
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
]
def test_async_api_forces_approval_policy_never_for_started_work() -> None:
"""Async start methods should send never until approval handling exists."""
def test_async_api_maps_approval_modes_for_started_work() -> None:
"""Async start methods should serialize only supported approval modes."""
async def scenario() -> None:
"""Exercise the async wrappers without spawning a real app server."""
captured: list[Any] = []
@@ -346,19 +369,30 @@ def test_async_api_forces_approval_policy_never_for_started_work() -> None:
codex._client = FakeAsyncClient()
codex._initialized = True
await codex.thread_start(approval_policy=AskForApproval.on_request)
await codex.thread_resume("thread-1", approval_policy=AskForApproval.on_request)
await codex.thread_fork("thread-1", approval_policy=AskForApproval.on_request)
await codex.thread_start()
await codex.thread_resume("thread-1")
await codex.thread_fork("thread-1")
await AsyncThread(codex, "thread-1").turn(TextInput("hello"))
await codex.thread_start(approval_mode=ApprovalMode.auto_review)
await codex.thread_resume(
"thread-1",
approval_mode=ApprovalMode.auto_review,
)
await codex.thread_fork("thread-1", approval_mode=ApprovalMode.auto_review)
await AsyncThread(codex, "thread-1").turn(
TextInput("hello"),
approval_policy=AskForApproval.on_request,
approval_mode=ApprovalMode.auto_review,
)
assert _approval_policy_values(captured) == [
"never",
"never",
"never",
"never",
assert _approval_settings(captured) == [
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "never"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"},
]
asyncio.run(scenario())
@@ -397,6 +431,7 @@ def test_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
def test_async_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
"""Two async TurnHandle streams should advance independently on one client."""
async def scenario() -> None:
"""Interleave two async streams backed by separate per-turn queues."""
codex = AsyncCodex()
@@ -479,14 +514,25 @@ def test_thread_run_accepts_string_input_and_returns_run_result() -> None:
client.turn_start = fake_turn_start # type: ignore[method-assign]
result = Thread(client, "thread-1").run("hello")
result = Thread(client, "thread-1").run(
"hello",
approval_mode=ApprovalMode.auto_review,
)
assert seen["thread_id"] == "thread-1"
assert seen["wire_input"] == [{"type": "text", "text": "hello"}]
assert result == RunResult(
final_response="Hello.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
assert (
seen["thread_id"],
seen["wire_input"],
_approval_settings([seen["params"]]),
result,
) == (
"thread-1",
[{"type": "text", "text": "hello"}],
[{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"}],
RunResult(
final_response="Hello.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
),
)
@@ -658,6 +704,7 @@ def test_stream_text_registers_and_consumes_turn_notifications() -> None:
def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
"""Async Thread.run should normalize string input and collect routed results."""
async def scenario() -> None:
"""Feed item, usage, and completion events through the async turn stream."""
codex = AsyncCodex()
@@ -692,14 +739,25 @@ def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
codex._client.turn_start = fake_turn_start # type: ignore[method-assign]
codex._client.next_turn_notification = fake_next_notification # type: ignore[method-assign]
result = await AsyncThread(codex, "thread-1").run("hello")
result = await AsyncThread(codex, "thread-1").run(
"hello",
approval_mode=ApprovalMode.auto_review,
)
assert seen["thread_id"] == "thread-1"
assert seen["wire_input"] == [{"type": "text", "text": "hello"}]
assert result == RunResult(
final_response="Hello async.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
assert (
seen["thread_id"],
seen["wire_input"],
_approval_settings([seen["params"]]),
result,
) == (
"thread-1",
[{"type": "text", "text": "hello"}],
[{"approvalPolicy": "on-request", "approvalsReviewer": "auto_review"}],
RunResult(
final_response="Hello async.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
),
)
asyncio.run(scenario())
@@ -709,6 +767,7 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_respons
None
):
"""Async run should use the last final assistant message as the response text."""
async def scenario() -> None:
"""Feed two completed agent messages through the async per-turn stream."""
codex = AsyncCodex()
@@ -756,6 +815,7 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_respons
def test_async_thread_run_returns_none_when_only_commentary_messages_complete() -> None:
"""Async Thread.run should ignore commentary-only messages for final text."""
async def scenario() -> None:
"""Feed a commentary item and completion through the async turn stream."""
codex = AsyncCodex()


@@ -10,6 +10,7 @@ import openai_codex
import openai_codex.types as public_types
from openai_codex import (
AppServerConfig,
ApprovalMode,
AsyncCodex,
AsyncThread,
Codex,
@@ -23,6 +24,7 @@ EXPECTED_ROOT_EXPORTS = [
"AppServerConfig",
"Codex",
"AsyncCodex",
"ApprovalMode",
"Thread",
"AsyncThread",
"TurnHandle",
@@ -117,6 +119,11 @@ def test_root_exports_run_result() -> None:
assert RunResult.__name__ == "RunResult"
def test_root_exports_approval_mode() -> None:
"""The root package should expose the high-level approval mode enum."""
assert ApprovalMode.deny_all.value == "deny_all"
def test_package_and_default_client_versions_follow_project_version() -> None:
"""The importable package version should stay aligned with pyproject metadata."""
pyproject_path = Path(__file__).resolve().parents[1] / "pyproject.toml"
@@ -135,18 +142,16 @@ def test_package_includes_py_typed_marker() -> None:
def test_package_root_exports_only_public_api() -> None:
"""The package root should expose the supported SDK surface, not internals."""
assert openai_codex.__all__ == EXPECTED_ROOT_EXPORTS
assert {
name: hasattr(openai_codex, name) for name in EXPECTED_ROOT_EXPORTS
} == {name: True for name in EXPECTED_ROOT_EXPORTS}
assert {name: hasattr(openai_codex, name) for name in EXPECTED_ROOT_EXPORTS} == {
name: True for name in EXPECTED_ROOT_EXPORTS
}
assert {
"AppServerClient": hasattr(openai_codex, "AppServerClient"),
"AsyncAppServerClient": hasattr(openai_codex, "AsyncAppServerClient"),
"InitializeResponse": hasattr(openai_codex, "InitializeResponse"),
"ThreadStartParams": hasattr(openai_codex, "ThreadStartParams"),
"TurnStartParams": hasattr(openai_codex, "TurnStartParams"),
"TurnCompletedNotification": hasattr(
openai_codex, "TurnCompletedNotification"
),
"TurnCompletedNotification": hasattr(openai_codex, "TurnCompletedNotification"),
"TurnStatus": hasattr(openai_codex, "TurnStatus"),
} == {
"AppServerClient": False,
@@ -210,8 +215,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"""Generated convenience methods should expose typed Pythonic keyword names."""
expected = {
Codex.thread_start: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -239,8 +243,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"use_state_db_only",
],
Codex.thread_resume: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -252,8 +255,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_tier",
],
Codex.thread_fork: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -266,8 +268,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"thread_source",
],
Thread.turn: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",
@@ -278,8 +279,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"summary",
],
Thread.run: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",
@@ -290,8 +290,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"summary",
],
AsyncCodex.thread_start: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -319,8 +318,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"use_state_db_only",
],
AsyncCodex.thread_resume: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -332,8 +330,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_tier",
],
AsyncCodex.thread_fork: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -346,8 +343,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"thread_source",
],
AsyncThread.turn: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",
@@ -358,8 +354,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"summary",
],
AsyncThread.run: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",