## Why

The SDK package root should be the ergonomic public client API, not a dump of every generated app-server schema type. Generated models still need a supported import path, but callers should be able to tell which names are high-level SDK entrypoints and which names are protocol value models.

## What

- Define a curated root `__all__` for clients, handles, input helpers, retry helpers, config, and public errors.
- Add a `types` module as the supported home for generated app-server response, event, enum, and helper models.
- Update docs and examples to import protocol/value models from the `types` module.
- Add tests that lock root exports, type-module exports, star-import behavior, and example import hygiene.

## Stack

1. #21891 `[1/8]` Pin Python SDK runtime dependency
2. #21893 `[2/8]` Generate Python SDK types from pinned runtime
3. #21895 `[3/8]` Run Python SDK tests in CI
4. This PR `[4/8]` Define Python SDK public API surface
5. #21905 `[5/8]` Rename Python SDK package to `openai-codex`
6. #21910 `[6/8]` Add high-level Python SDK approval mode
7. #22014 `[7/8]` Add Python SDK app-server integration harness
8. #22021 `[8/8]` Add Python SDK Ruff formatting

## Verification

- Added public API signature tests for root exports, `types` exports, and example imports.

---------

Co-authored-by: Codex <noreply@openai.com>
# Python SDK Examples
Each example folder contains runnable versions:

- `sync.py` (public sync surface: `Codex`)
- `async.py` (public async surface: `AsyncCodex`)

All examples intentionally use only public SDK exports from `codex_app_server` and `codex_app_server.types`.
## Prerequisites

- Python >= 3.10
- Install SDK dependencies for the same Python interpreter you will use to run the examples
Recommended setup (from `sdk/python`):

```shell
uv sync
source .venv/bin/activate
```
When running examples from this repo checkout, the SDK is imported from the local source tree and does not bundle a runtime binary. The helper in `examples/_bootstrap.py` uses the installed `openai-codex-cli-bin` runtime package.

If the pinned `openai-codex-cli-bin` runtime is not already installed, the bootstrap will download the matching GitHub release artifact, stage a temporary local `openai-codex-cli-bin` package, install it into your active interpreter, and clean up the temporary files afterward. The pinned runtime version comes from the SDK package dependency.
## Run examples

From `sdk/python`:

```shell
python examples/<example-folder>/sync.py
python examples/<example-folder>/async.py
```
The examples bootstrap local imports from `sdk/python/src` automatically, so no SDK wheel install is required. You only need the Python dependencies for your active interpreter and an installed `openai-codex-cli-bin` runtime package (either already present or automatically provisioned by the bootstrap).
## Recommended first run

```shell
python examples/01_quickstart_constructor/sync.py
python examples/01_quickstart_constructor/async.py
```
## Index

- `01_quickstart_constructor/` - first run / sanity check
- `02_turn_run/` - inspect full turn output fields
- `03_turn_stream_events/` - stream a turn with a small curated event view
- `04_models_and_metadata/` - discover visible models for the connected runtime
- `05_existing_thread/` - resume a real existing thread (created in-script)
- `06_thread_lifecycle_and_controls/` - thread lifecycle + control calls
- `07_image_and_text/` - remote image URL + text multimodal turn
- `08_local_image_and_text/` - local image + text multimodal turn using a generated temporary sample image
- `09_async_parity/` - parity-style sync flow (see async parity in other examples)
- `10_error_handling_and_retry/` - overload retry pattern + typed error handling structure
- `11_cli_mini_app/` - interactive chat loop
- `12_turn_params_kitchen_sink/` - structured output with a curated advanced `turn(...)` configuration
- `13_model_select_and_turn_params/` - list models, pick highest model + highest supported reasoning effort, run turns, print message and usage
- `14_turn_controls/` - separate best-effort `steer()` and `interrupt()` demos with concise summaries