# Codex App Server Python SDK (Experimental)
Experimental Python SDK for the `codex app-server` JSON-RPC v2 protocol over stdio, with a small default surface optimized for real scripts and apps.

The generated wire-model layer is sourced from the pinned runtime's `codex app-server generate-json-schema` output and exposed as Pydantic models with snake_case Python fields that serialize back to the app-server's camelCase wire format.
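As an illustrative sketch of the field-name mapping described above (not the SDK's actual implementation, which goes through generated Pydantic models), the snake_case/camelCase conversion works like this:

```python
import re


def snake_to_camel(name: str) -> str:
    """Convert a snake_case Python field name to the camelCase wire form."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)


def camel_to_snake(name: str) -> str:
    """Convert a camelCase wire field name back to snake_case."""
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()


print(snake_to_camel("final_response"))  # -> finalResponse
print(camel_to_snake("finalResponse"))   # -> final_response
```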
## Install

```sh
cd sdk/python
python -m pip install -e .
```
Published SDK builds pin an exact `openai-codex-cli-bin` runtime dependency. For local repo development, either pass `AppServerConfig(codex_bin=...)` to point at a local build explicitly, or use the repo examples/notebook bootstrap, which installs the pinned runtime package automatically.

Once published, normal installs should use:

```sh
python -m pip install openai-codex
```
## Quickstart

```python
from codex_app_server import Codex

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5")
    result = thread.run("Say hello in one sentence.")
    print(result.final_response)
    print(len(result.items))
```
`result.final_response` is `None` when the turn completes without a final-answer or phase-less assistant message item.
## Docs map

- Golden path tutorial: `docs/getting-started.md`
- API reference (signatures + behavior): `docs/api-reference.md`
- Common decisions and pitfalls: `docs/faq.md`
- Runnable examples index: `examples/README.md`
- Jupyter walkthrough notebook: `notebooks/sdk_walkthrough.ipynb`
## Examples

Start here:

```sh
cd sdk/python
python examples/01_quickstart_constructor/sync.py
python examples/01_quickstart_constructor/async.py
```
## Runtime packaging

The repo no longer checks codex binaries into `sdk/python`. Published SDK builds are pinned to an exact `openai-codex-cli-bin` package version, and that runtime package carries the platform-specific binary bundle for the target wheel.

For local repo development, the checked-in `sdk/python-runtime` package is only a template for staged release artifacts. Editable installs should use an explicit `codex_bin` override for manual SDK usage; the repo examples and notebook bootstrap the pinned runtime package automatically.
## Maintainer workflow

```sh
cd sdk/python
python scripts/update_sdk_artifacts.py generate-types
python scripts/update_sdk_artifacts.py \
  stage-sdk \
  /tmp/codex-python-release/openai-codex \
  --runtime-version 1.2.3
python scripts/update_sdk_artifacts.py \
  stage-runtime \
  /tmp/codex-python-release/openai-codex-cli-bin \
  /path/to/runtime-bundle-dir \
  --runtime-version 1.2.3
```
This supports the CI release flow:

- run `generate-types` before packaging: generate types from the pinned runtime schema, then convert that schema to Python
- stage `openai-codex` once with an exact `openai-codex-cli-bin==...` dependency
- stage `openai-codex-cli-bin` on each supported platform runner with the same pinned runtime version
- build and publish `openai-codex-cli-bin` as platform wheels only; do not publish an sdist
## Compatibility and versioning

- Package: `openai-codex`
- Runtime package: `openai-codex-cli-bin`
- Current SDK version in this repo: `0.2.0`
- Python: `>=3.10`
- Target protocol: Codex `app-server` JSON-RPC v2
- Release tags map to Python package versions as follows: `rust-v1.2.3` -> `1.2.3`, `rust-v1.2.3-alpha.4` -> `1.2.3a4`, and `rust-v1.2.3-beta.5` -> `1.2.3b5`.
- Recommendation: keep the SDK and the `codex` CLI at the exact same published version.
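The release-tag mapping above can be expressed as a small converter. This is an illustrative sketch of the documented mapping, not the repo's actual release tooling:

```python
import re

# Matches rust-vX.Y.Z with an optional -alpha.N / -beta.N suffix.
TAG_RE = re.compile(r"^rust-v(\d+\.\d+\.\d+)(?:-(alpha|beta)\.(\d+))?$")


def tag_to_python_version(tag: str) -> str:
    """Map a release tag to its Python package version per the table above."""
    m = TAG_RE.match(tag)
    if m is None:
        raise ValueError(f"unrecognized release tag: {tag}")
    base, phase, num = m.groups()
    if phase is None:
        return base
    # PEP 440 pre-release suffixes: alpha -> aN, beta -> bN
    return f"{base}{phase[0]}{num}"


print(tag_to_python_version("rust-v1.2.3"))          # 1.2.3
print(tag_to_python_version("rust-v1.2.3-alpha.4"))  # 1.2.3a4
```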
## Notes

- `Codex()` is eager and performs startup + `initialize` in the constructor.
- Use context managers (`with Codex() as codex:`) to ensure shutdown.
- Prefer `thread.run("...")` for the common case. Use `thread.turn(...)` when you need streaming, steering, or interrupt control.
- For transient overload, use `codex_app_server.retry.retry_on_overload`.
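Conceptually, that overload helper is a retry-with-backoff wrapper. The following is a generic sketch of the pattern, not the SDK helper's actual signature or error types (`OverloadedError` is a stand-in):

```python
import time


class OverloadedError(Exception):
    """Stand-in for a transient overload error from the server."""


def retry_with_backoff(fn, attempts: int = 4, base_delay: float = 0.5):
    """Generic exponential-backoff retry sketch; the SDK's own helper
    is codex_app_server.retry.retry_on_overload."""
    for attempt in range(attempts):
        try:
            return fn()
        except OverloadedError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the overload error
            time.sleep(base_delay * (2 ** attempt))
```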