# Getting Started

This is the fastest path from install to a multi-turn thread using the public SDK surface.

The SDK is experimental. Treat the API, bundled runtime strategy, and packaging details as unstable until the first public release.

## 1) Install

From the repo root:

```bash
cd sdk/python
uv sync
source .venv/bin/activate
```

Requirements:

- Python `>=3.10`
- uv
- an installed `openai-codex-cli-bin` runtime package, or an explicit `codex_bin` override
- local Codex auth/session configured
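
If the runtime wheel is absent, point the client at a Codex binary yourself. A minimal sketch of that override, assuming `codex_bin` is accepted as a constructor keyword and using an illustrative path; see `docs/api-reference.md` for the actual signature:

```python
from openai_codex import Codex

# Assumed keyword argument: the requirements above call for "an explicit
# `codex_bin` override" when `openai-codex-cli-bin` is not installed.
# The path is illustrative; use wherever your `codex` binary lives.
with Codex(codex_bin="/usr/local/bin/codex") as codex:
    print("Server:", codex.metadata.serverInfo)
```
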
## 2) Run your first turn (sync)

```python
from openai_codex import Codex

with Codex() as codex:
    server = codex.metadata.serverInfo
    server_name = None if server is None else server.name
    server_version = None if server is None else server.version
    print("Server:", server_name, server_version)

    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
    result = thread.run("Say hello in one sentence.")

    print("Thread:", thread.id)
    print("Text:", result.final_response)
    print("Items:", len(result.items))
```

What happened:

- `Codex()` started and initialized `codex app-server`.
- `thread_start(...)` created a thread.
- `thread.run("...")` started a turn, consumed events until completion, and returned the final assistant response plus the collected items and usage.
- `result.final_response` is `None` when no final-answer or phase-less assistant message item completes for the turn.
- Use `thread.turn(...)` when you need a `TurnHandle` for streaming, steering, interrupting, or turn IDs/status; see the sketch after this list.
- One client can consume multiple active turns concurrently; turn streams are routed by turn ID.
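
For streaming rather than blocking, here is a hedged sketch of the `TurnHandle` flow. The handle comes from `thread.turn(...)` as noted above, but the `id` attribute and `events()` iterator below are assumptions about its surface, not confirmed names; `docs/api-reference.md` has the real signatures:

```python
from openai_codex import Codex

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
    handle = thread.turn("Say hello in one sentence.")

    print("Turn:", handle.id)  # assumed attribute: the ID that routes this turn's events
    for event in handle.events():  # assumed iterator over this turn's event stream
        print(type(event).__name__)
```
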
## 3) Continue the same thread (multi-turn)

```python
from openai_codex import Codex

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

    first = thread.run("Summarize Rust ownership in 2 bullets.")
    second = thread.run("Now explain it to a Python developer.")

    print("first:", first.final_response)
    print("second:", second.final_response)
```

## 4) Async parity

Use `async with AsyncCodex()` as the normal async entrypoint. `AsyncCodex` initializes lazily, and context entry makes startup/shutdown explicit.

```python
import asyncio
from openai_codex import AsyncCodex


async def main() -> None:
    async with AsyncCodex() as codex:
        thread = await codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
        result = await thread.run("Continue where we left off.")
        print(result.final_response)


asyncio.run(main())
```
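
Because turn streams are routed by turn ID, one async client can drive several turns at once. A minimal sketch with `asyncio.gather`, using only the calls shown above; the two prompts, and omitting `config`, are illustrative choices:

```python
import asyncio
from openai_codex import AsyncCodex


async def main() -> None:
    async with AsyncCodex() as codex:
        # Two independent threads on one client; their event streams are
        # routed by turn ID, so the runs can safely overlap.
        outline = await codex.thread_start(model="gpt-5.4")
        review = await codex.thread_start(model="gpt-5.4")

        first, second = await asyncio.gather(
            outline.run("Draft a README outline for a CLI tool."),
            review.run("List edge cases for parsing URLs."),
        )
        print(first.final_response)
        print(second.final_response)


asyncio.run(main())
```
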
## 5) Resume an existing thread

```python
from openai_codex import Codex

THREAD_ID = "thr_123"  # replace with a real id

with Codex() as codex:
    thread = codex.thread_resume(THREAD_ID)
    result = thread.run("Continue where we left off.")
    print(result.final_response)
```

## 6) Public app-server types

The convenience wrappers live at the package root. Public app-server value and event types live under:

```python
from openai_codex.types import ThreadReadResponse, Turn, TurnStatus
```
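
These names also work as ordinary annotations, which keeps helper code type-checkable without importing private modules. A small sketch; the helper itself is illustrative, not part of the SDK:

```python
from openai_codex.types import Turn, TurnStatus


def describe(turn: Turn, status: TurnStatus) -> str:
    # Illustrative helper: only the annotations matter here; the concrete
    # fields on `Turn` are documented in `docs/api-reference.md`.
    return f"{type(turn).__name__} with status {status!r}"
```
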
## 7) Next stops

- API surface and signatures: `docs/api-reference.md`
- Common decisions/pitfalls: `docs/faq.md`
- End-to-end runnable examples: `examples/README.md`