## Why

The SDK package root should be the ergonomic public client API, not a dump of every generated app-server schema type. Generated models still need a supported import path, but callers should be able to tell which names are high-level SDK entrypoints and which names are protocol value models.

## What

- Define a curated root `__all__` for clients, handles, input helpers, retry helpers, config, and public errors.
- Add a `types` module as the supported home for generated app-server response, event, enum, and helper models.
- Update docs and examples to import protocol/value models from the `types` module.
- Add tests that lock root exports, type-module exports, star-import behavior, and example import hygiene.

## Stack

1. #21891 `[1/8]` Pin Python SDK runtime dependency
2. #21893 `[2/8]` Generate Python SDK types from pinned runtime
3. #21895 `[3/8]` Run Python SDK tests in CI
4. This PR `[4/8]` Define Python SDK public API surface
5. #21905 `[5/8]` Rename Python SDK package to `openai-codex`
6. #21910 `[6/8]` Add high-level Python SDK approval mode
7. #22014 `[7/8]` Add Python SDK app-server integration harness
8. #22021 `[8/8]` Add Python SDK Ruff formatting

## Verification

- Added public API signature tests for root exports, `types` exports, and example imports.

---------

Co-authored-by: Codex <noreply@openai.com>
# Codex App Server Python SDK (Experimental)

Experimental Python SDK for the Codex `app-server` JSON-RPC v2 protocol over stdio, with a small default surface optimized for real scripts and apps.
The generated wire-model layer is sourced from the pinned `openai-codex-cli-bin` runtime package and exposed as Pydantic models with snake_case Python fields that serialize back to the app-server's camelCase wire format.
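The snake_case-to-camelCase contract can be sketched in plain Python. The real generated models are Pydantic classes with field aliases; this standalone helper only illustrates the naming convention, and the field names used below are made-up examples:

```python
def to_camel(name: str) -> str:
    """Convert a snake_case field name to its camelCase wire form."""
    first, *rest = name.split("_")
    return first + "".join(part.title() for part in rest)


def to_wire(payload: dict) -> dict:
    """Rename snake_case Python fields to camelCase wire keys."""
    return {to_camel(key): value for key, value in payload.items()}


print(to_wire({"thread_id": "t_123", "final_response": "hi"}))
# → {'threadId': 't_123', 'finalResponse': 'hi'}
```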
The package root exports the ergonomic client API; public app-server value and event types live in `codex_app_server.types`.
## Install

```shell
cd sdk/python
uv sync
source .venv/bin/activate
```
Published SDK builds pin an exact `openai-codex-cli-bin` runtime dependency with the same version as the SDK. For local repo development, either pass `AppServerConfig(codex_bin=...)` to point at a local build explicitly, or use the repo examples/notebook bootstrap, which installs the pinned runtime package automatically.
## Quickstart

```python
from codex_app_server import Codex

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5")
    result = thread.run("Say hello in one sentence.")
    print(result.final_response)
    print(len(result.items))
```

`result.final_response` is `None` when the turn completes without a final-answer or phase-less assistant message item.
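Callers can normalize that `None` case before printing. This hypothetical helper is plain Python and does not depend on the SDK; its name and the result fields it mirrors are illustrative:

```python
from typing import Optional


def describe_final_response(final_response: Optional[str], item_count: int) -> str:
    """Return a printable summary, tolerating turns with no final answer."""
    if final_response is None:
        return f"(no final answer; {item_count} items)"
    return final_response


print(describe_final_response(None, 3))      # → (no final answer; 3 items)
print(describe_final_response("Hello!", 1))  # → Hello!
```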
## Docs map

- Golden path tutorial: `docs/getting-started.md`
- API reference (signatures + behavior): `docs/api-reference.md`
- Common decisions and pitfalls: `docs/faq.md`
- Runnable examples index: `examples/README.md`
- Jupyter walkthrough notebook: `notebooks/sdk_walkthrough.ipynb`
## Examples

Start here:

```shell
cd sdk/python
python examples/01_quickstart_constructor/sync.py
python examples/01_quickstart_constructor/async.py
```
## Runtime packaging

The repo no longer checks codex binaries into `sdk/python`. Published SDK builds are pinned to an exact `openai-codex-cli-bin` package version, and that runtime package carries the platform-specific binary for the target wheel. The SDK package version and runtime package version must match.

For local repo development, the checked-in `sdk/python-runtime` package is only a template for staged release artifacts. Editable installs should use an explicit `codex_bin` override for manual SDK usage; the repo examples and notebook bootstrap the pinned runtime package automatically.
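The version-match invariant can also be checked at runtime. This is a hypothetical sketch using standard package metadata, not an SDK API; `check_runtime_pin` is invented here, while the distribution names come from this README:

```python
from importlib import metadata


def check_runtime_pin(
    sdk_dist: str = "openai-codex-app-server-sdk",
    runtime_dist: str = "openai-codex-cli-bin",
) -> None:
    """Fail fast when the SDK and runtime package versions drift apart."""
    try:
        sdk_version = metadata.version(sdk_dist)
        runtime_version = metadata.version(runtime_dist)
    except metadata.PackageNotFoundError as exc:
        raise RuntimeError(f"missing distribution: {exc}") from exc
    if sdk_version != runtime_version:
        raise RuntimeError(
            f"{sdk_dist}=={sdk_version} does not match "
            f"{runtime_dist}=={runtime_version}"
        )
```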
## Maintainer workflow

```shell
cd sdk/python
uv sync
python scripts/update_sdk_artifacts.py generate-types
python scripts/update_sdk_artifacts.py \
  stage-sdk \
  /tmp/codex-python-release/openai-codex-app-server-sdk \
  --codex-version <codex-release-tag-or-pep440-version>
python scripts/update_sdk_artifacts.py \
  stage-runtime \
  /tmp/codex-python-release/openai-codex-cli-bin \
  /path/to/codex \
  --codex-version <codex-release-tag-or-pep440-version>
```
Pass `--platform-tag ...` to `stage-runtime` when the wheel should be tagged for a Rust target that differs from the Python build host. The intended one-off matrix is `macosx_11_0_arm64`, `macosx_10_9_x86_64`, `musllinux_1_1_aarch64`, `musllinux_1_1_x86_64`, `win_arm64`, and `win_amd64`.
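One way to pick the tag is a small lookup table. The platform tags below come from the matrix above; the Rust target triples they map from are assumptions for illustration, not something this README specifies:

```python
# Hypothetical mapping from Rust target triples to the wheel platform tags
# listed above; only the tags are taken from this document.
RUST_TARGET_TO_PLATFORM_TAG = {
    "aarch64-apple-darwin": "macosx_11_0_arm64",
    "x86_64-apple-darwin": "macosx_10_9_x86_64",
    "aarch64-unknown-linux-musl": "musllinux_1_1_aarch64",
    "x86_64-unknown-linux-musl": "musllinux_1_1_x86_64",
    "aarch64-pc-windows-msvc": "win_arm64",
    "x86_64-pc-windows-msvc": "win_amd64",
}


def platform_tag_for(rust_target: str) -> str:
    """Look up the --platform-tag value for a Rust build target."""
    return RUST_TARGET_TO_PLATFORM_TAG[rust_target]


print(platform_tag_for("aarch64-apple-darwin"))  # → macosx_11_0_arm64
```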
This supports the CI release flow:

- run `generate-types` before packaging
- stage `openai-codex-app-server-sdk` once with an exact `openai-codex-cli-bin==...` dependency
- stage `openai-codex-cli-bin` on each supported platform runner with the same pinned runtime version
- build and publish `openai-codex-cli-bin` as platform wheels only through PyPI trusted publishing; do not publish an sdist
## Compatibility and versioning

- Package: `openai-codex-app-server-sdk`
- Runtime package: `openai-codex-cli-bin`
- Python: `>=3.10`
- Target protocol: Codex `app-server` JSON-RPC v2
- Versioning rule: the SDK package version is the underlying Codex runtime version
## Notes

- `Codex()` is eager and performs startup + `initialize` in the constructor.
- Use context managers (`with Codex() as codex:`) to ensure shutdown.
- Prefer `thread.run("...")` for the common case. Use `thread.turn(...)` when you need streaming, steering, or interrupt control.
- For transient overload, use `retry_on_overload` from the package root.
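This README does not specify `retry_on_overload`'s signature, so the following is only a generic, SDK-independent sketch of the retry-with-backoff idea it suggests; `OverloadedError` and every other name here is hypothetical:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


class OverloadedError(Exception):
    """Stand-in for the SDK's transient-overload error (name assumed)."""


def retry_on_overload_sketch(
    fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.5
) -> T:
    """Call fn, retrying with exponential backoff on overload errors."""
    for attempt in range(attempts):
        try:
            return fn()
        except OverloadedError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the overload error
            time.sleep(base_delay * 2**attempt)
    raise AssertionError("unreachable")
```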