Mirror of https://github.com/openai/codex.git (synced 2026-04-29 00:55:38 +00:00)

Add Python SDK public API and examples (#14446)

## TL;DR

WIP, especially the examples.

Thin the Python SDK public surface so the wrapper layer returns canonical app-server generated models directly.

- keeps `Codex` / `AsyncCodex` / `Thread` / `Turn` and input helpers, but removes alias-only type layers and custom result models
- `metadata` now returns `InitializeResponse`, and `run()` returns the generated app-server `Turn`
- updates docs, examples, notebook, and tests to use canonical generated types, and regenerates `v2_all.py` against the current schema
- keeps the pinned runtime-package integration flow and real integration coverage

## Validation

- `PYTHONPATH=sdk/python/src python3 -m pytest sdk/python/tests`
- `GH_TOKEN="$(gh auth token)" RUN_REAL_CODEX_TESTS=1 PYTHONPATH=sdk/python/src python3 -m pytest sdk/python/tests -rs`

Co-authored-by: Codex <noreply@openai.com>

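Roughly, the consumer-visible change is that both objects below are the generated models themselves. A minimal sketch; the wrapper names and the `Turn` import path are taken from the updated docs below, and everything else is illustrative:

```python
from codex_app_server import Codex, TextInput
from codex_app_server.generated.v2_all import Turn

with Codex() as codex:
    # `metadata` is now the generated InitializeResponse rather than a wrapper metadata model.
    print(type(codex.metadata).__name__)

    thread = codex.thread_start(model="gpt-5.4")
    completed_turn = thread.turn(TextInput("Say hello in one sentence.")).run()
    # `run()` now returns the canonical generated Turn rather than a custom TurnResult.
    assert isinstance(completed_turn, Turn)
```
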
# Getting Started

This is the fastest path from install to a multi-turn thread using the public SDK surface.

The SDK is experimental. Treat the API, bundled runtime strategy, and packaging details as unstable until the first public release.

## 1) Install

Requirements:

- Python `>=3.10`
- installed `codex-cli-bin` runtime package, or an explicit `codex_bin` override (see the sketch after this list)
- local Codex auth/session configured

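If you are not using the `codex-cli-bin` runtime package, the `codex_bin` override mentioned above points the client at a binary directly. A minimal sketch, assuming `codex_bin` is accepted by the `Codex` constructor; the path is illustrative:

```python
from codex_app_server import Codex

# Assumption: `codex_bin` is a constructor argument; point it at your local codex binary.
with Codex(codex_bin="/usr/local/bin/codex") as codex:
    print(codex.metadata.serverInfo)
```
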

## 2) Run your first turn (sync)

```python
from codex_app_server import Codex, TextInput

with Codex() as codex:
    # serverInfo may be absent, so guard before reading its name/version.
    server = codex.metadata.serverInfo
    print("Server:", None if server is None else server.name, None if server is None else server.version)

    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
    completed_turn = thread.turn(TextInput("Say hello in one sentence.")).run()

    print("Thread:", thread.id)
    print("Turn:", completed_turn.id)
    print("Status:", completed_turn.status)
    print("Items:", len(completed_turn.items or []))
```

What happened:

- `Codex()` started and initialized `codex app-server`.
- `thread_start(...)` created a thread.
- `turn(...).run()` consumed events until `turn/completed` and returned the canonical generated app-server `Turn` model.
- one client can have only one active `TurnHandle.stream()` / `TurnHandle.run()` consumer at a time in the current experimental build; see the streaming sketch below

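If you want incremental output instead of waiting for `run()`, the same handle exposes `stream()`. A minimal sketch, assuming `stream()` yields app-server events until `turn/completed`; the concrete event payloads come from the generated schema:

```python
from codex_app_server import Codex, TextInput

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5.4")
    handle = thread.turn(TextInput("Say hello in one sentence."))

    # Assumption: iterating stream() drains events as they arrive; only one
    # stream()/run() consumer may be active per client at a time.
    for event in handle.stream():
        print(type(event).__name__)
```
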
## 3) Continue the same thread (multi-turn)

```python
from codex_app_server import Codex, TextInput

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

    first = thread.turn(TextInput("Summarize Rust ownership in 2 bullets.")).run()
    second = thread.turn(TextInput("Now explain it to a Python developer.")).run()

    print("first:", first.id, first.status)
    print("second:", second.id, second.status)
```
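
The examples above only print ids and status; the assistant's actual output is carried on the turn's `items`. The item payload shapes come from the generated schema, so this sketch just inspects whatever came back:

```python
from codex_app_server import Codex, TextInput

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5.4")
    completed_turn = thread.turn(TextInput("Summarize Rust ownership in 2 bullets.")).run()

    # `items` can be None; each element is one of the generated item models.
    for item in completed_turn.items or []:
        print(type(item).__name__, item)
```
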

## 4) Async parity

Use `async with AsyncCodex()` as the normal async entrypoint. `AsyncCodex` initializes lazily, and context entry makes startup/shutdown explicit.

```python
import asyncio
from codex_app_server import AsyncCodex, TextInput


async def main() -> None:
    async with AsyncCodex() as codex:
        thread = await codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
        turn = await thread.turn(TextInput("Continue where we left off."))
        completed_turn = await turn.run()
        print(completed_turn.id, completed_turn.status)


asyncio.run(main())
```

## 5) Resume an existing thread

```python
from codex_app_server import Codex, TextInput

THREAD_ID = "thr_123"  # replace with a real id

with Codex() as codex:
    thread = codex.thread_resume(THREAD_ID)
    completed_turn = thread.turn(TextInput("Continue where we left off.")).run()
    print(completed_turn.id, completed_turn.status)
```
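
A real id comes from an earlier session: `thread_start(...)` returns a thread whose `id` you can persist and feed back into `thread_resume(...)` later. A minimal sketch, assuming the id is a plain string; the file-based persistence is only an illustration:

```python
from pathlib import Path

from codex_app_server import Codex, TextInput

STATE = Path("codex_thread_id.txt")  # illustration: persist the id however you like

with Codex() as codex:
    if STATE.exists():
        thread = codex.thread_resume(STATE.read_text().strip())
    else:
        thread = codex.thread_start(model="gpt-5.4")
        STATE.write_text(thread.id)

    completed_turn = thread.turn(TextInput("Continue where we left off.")).run()
    print(completed_turn.id, completed_turn.status)
```
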

## 6) Generated models

The convenience wrappers live at the package root, but the canonical app-server models live under:

```python
from codex_app_server.generated.v2_all import Turn, TurnStatus, ThreadReadResponse
```
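
`Turn`, for example, is exactly what `run()` returns, so the generated classes work directly in annotations. A small sketch; the model name is illustrative:

```python
from codex_app_server import Codex, TextInput
from codex_app_server.generated.v2_all import Turn


def run_once(prompt: str) -> Turn:
    # The wrapper hands back the canonical generated Turn, so it works as a return annotation.
    with Codex() as codex:
        thread = codex.thread_start(model="gpt-5.4")
        return thread.turn(TextInput(prompt)).run()


completed_turn = run_once("Say hello in one sentence.")
print(completed_turn.id, completed_turn.status)
```
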

## 7) Next stops

- API surface and signatures: `docs/api-reference.md`
- Common decisions/pitfalls: `docs/faq.md`