# Getting Started
This is the fastest path from install to a multi-turn thread using the minimal SDK surface.
## 1) Install
From repo root:
```bash
cd sdk/python
python -m pip install -e .
```
Requirements:

- Python `>=3.10`
- Bundled runtime binary for your platform (shipped in the package)
- Local Codex auth/session configured

## 2) Run your first turn

```python
from codex_app_server import Codex, TextInput

with Codex() as codex:
    print("Server:", codex.metadata.server_name, codex.metadata.server_version)

    thread = codex.thread_start(model="gpt-5")
    result = thread.turn(TextInput("Say hello in one sentence.")).run()

    print("Thread:", result.thread_id)
    print("Turn:", result.turn_id)
    print("Status:", result.status)
    print("Text:", result.text)
```
What happened:

- `Codex()` started and initialized `codex app-server`.
- `thread_start(...)` created a thread.
- `turn(...).run()` consumed events until `turn/completed` and returned a `TurnResult`.

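Since `.run()` blocks until the turn completes, transient failures surface as exceptions from that call. If you want a turn to tolerate them, one option is a small generic retry wrapper. This is an illustrative sketch, not part of the SDK: `run_with_retries`, the attempt count, and the backoff values are all assumptions you should tune, and the bare `except Exception` should be narrowed to the SDK's actual error type.

```python
import time

def run_with_retries(run, attempts=3, backoff=0.5):
    """Call a zero-argument `run` callable, retrying with exponential backoff.

    `run` is expected to be something like `lambda: thread.turn(...).run()`.
    Raises the last exception if every attempt fails.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return run()
        except Exception as exc:  # narrow to the SDK's error type in real code
            last_exc = exc
            time.sleep(backoff * (2 ** attempt))
    raise last_exc
```

Usage would look like `result = run_with_retries(lambda: thread.turn(TextInput("hi")).run())`.
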
## 3) Continue the same thread (multi-turn)

```python
from codex_app_server import Codex, TextInput

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5")

    first = thread.turn(TextInput("Summarize Rust ownership in 2 bullets.")).run()
    second = thread.turn(TextInput("Now explain it to a Python developer.")).run()

    print("first:", first.text)
    print("second:", second.text)
```

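Both turns above run on the same `thread` object, which is what makes the second prompt a continuation rather than a fresh conversation. If you want to keep a local record of such a session, a generic helper like the following works; `collect_transcript` is an illustrative sketch, not an SDK API:

```python
def collect_transcript(turn_fn, prompts):
    """Run prompts in order through `turn_fn`, pairing each with its reply text.

    `turn_fn` stands in for `lambda text: thread.turn(TextInput(text)).run()`.
    """
    transcript = []
    for prompt in prompts:
        result = turn_fn(prompt)
        transcript.append((prompt, result.text))
    return transcript
```
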
## 4) Resume an existing thread

```python
from codex_app_server import Codex, TextInput

THREAD_ID = "thr_123"  # replace with a real id

with Codex() as codex:
    thread = codex.thread(THREAD_ID)
    result = thread.turn(TextInput("Continue where we left off.")).run()
    print(result.text)
```

## 5) Next stops

- API surface and signatures: `docs/api-reference.md`
- Common decisions/pitfalls: `docs/faq.md`
- End-to-end runnable examples: `examples/README.md`