
Getting Started

This is the fastest path from install to a multi-turn thread using the public SDK surface.

The SDK is experimental. Treat the API, bundled runtime strategy, and packaging details as unstable until the first public release.

1) Install

From repo root:

cd sdk/python
python -m pip install -e .

Requirements:

  • Python >=3.10
  • the openai-codex-cli-bin runtime package installed, or an explicit codex_bin override (see the sketch after this list)
  • local Codex auth/session configured
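
If the runtime wheel is available on your package index, install it alongside the SDK (package name per the requirement above):

python -m pip install openai-codex-cli-bin

If you instead want the client to use a specific local binary, here is a minimal sketch. The codex_bin keyword is an assumption based on the "codex_bin override" wording above; verify the real parameter name in docs/api-reference.md.

from codex_app_server import Codex

# Hypothetical override: point the client at a local codex binary
# instead of the bundled openai-codex-cli-bin runtime.
with Codex(codex_bin="/usr/local/bin/codex") as codex:
    print(codex.metadata.serverInfo)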

2) Run your first turn (sync)

from codex_app_server import Codex

with Codex() as codex:
    server = codex.metadata.serverInfo
    if server is not None:
        print("Server:", server.name, server.version)

    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
    result = thread.run("Say hello in one sentence.")

    print("Thread:", thread.id)
    print("Text:", result.final_response)
    print("Items:", len(result.items))

What happened:

  • Codex() started and initialized codex app-server.
  • thread_start(...) created a thread.
  • thread.run("...") started a turn, consumed events until completion, and returned the final assistant response plus collected items and usage.
  • result.final_response is None when no final-answer or phase-less assistant message item completes for the turn.
  • Use thread.turn(...) when you need a TurnHandle for streaming, steering, interrupting, or turn IDs/status; see the sketch after this list.
  • One client can have only one active turn consumer (thread.run(...), TurnHandle.stream(), or TurnHandle.run()) at a time in the current experimental build.
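
A minimal streaming sketch, assuming thread.turn(...) returns a TurnHandle whose stream() yields events until the turn finishes. It prints only event type names, since the concrete event schema is not covered here:

from codex_app_server import Codex

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
    handle = thread.turn("Summarize Rust ownership in 2 bullets.")

    # Consume the stream; printing type names avoids assuming anything
    # about individual event fields.
    for event in handle.stream():
        print(type(event).__name__)

Note the single-consumer rule above: while this loop is draining the stream, the same client cannot start another thread.run(...).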

3) Continue the same thread (multi-turn)

from codex_app_server import Codex

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

    first = thread.run("Summarize Rust ownership in 2 bullets.")
    second = thread.run("Now explain it to a Python developer.")

    print("first:", first.final_response)
    print("second:", second.final_response)

4) Async parity

Use async with AsyncCodex() as the normal async entrypoint. AsyncCodex initializes lazily, and context entry makes startup/shutdown explicit.

import asyncio
from codex_app_server import AsyncCodex


async def main() -> None:
    async with AsyncCodex() as codex:
        thread = await codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
        result = await thread.run("Continue where we left off.")
        print(result.final_response)


asyncio.run(main())

5) Resume an existing thread

from codex_app_server import Codex

THREAD_ID = "thr_123"  # replace with a real id

with Codex() as codex:
    thread = codex.thread_resume(THREAD_ID)
    result = thread.run("Continue where we left off.")
    print(result.final_response)
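
On the async client the flow is the same; a sketch assuming thread_resume mirrors the sync signature under the async parity described in section 4:

import asyncio
from codex_app_server import AsyncCodex

THREAD_ID = "thr_123"  # replace with a real id


async def main() -> None:
    async with AsyncCodex() as codex:
        # Assumes parity with the sync client: await thread_resume, then run.
        thread = await codex.thread_resume(THREAD_ID)
        result = await thread.run("Continue where we left off.")
        print(result.final_response)


asyncio.run(main())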

6) Generated models

The convenience wrappers live at the package root, but the canonical app-server models live under codex_app_server.generated.v2_all:

from codex_app_server.generated.v2_all import Turn, TurnStatus, ThreadReadResponse
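
These generated classes are ordinary Python types, so you can use them in your own annotations. A sketch; the helper below is purely illustrative and assumes nothing about the models' fields:

from codex_app_server.generated.v2_all import Turn, TurnStatus


def describe(turn: Turn, status: TurnStatus) -> str:
    # Illustrative only: repr() works on any object, so this makes no
    # assumptions about the generated models' attributes.
    return f"{turn!r} ({status!r})"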

7) Next stops

  • API surface and signatures: docs/api-reference.md
  • Common decisions/pitfalls: docs/faq.md
  • End-to-end runnable examples: examples/README.md