mirror of
https://github.com/openai/codex.git
synced 2026-05-14 08:12:36 +00:00
[5/8] Rename Python SDK package to openai-codex (#21905)
## Why

The SDK should publish under the reserved public distribution name `openai-codex`, and its import module should match that name in the Python style. Since package names can contain hyphens but import modules cannot, the public import path becomes `openai_codex`. Keeping the rename separate from the public API surface change makes the naming change easy to review and avoids mixing it with API curation.

## What

- Rename the SDK distribution from `openai-codex-app-server-sdk` to `openai-codex`.
- Rename the import package from `codex_app_server` to `openai_codex`.
- Keep the runtime wheel as the separate `openai-codex-cli-bin` dependency.
- Update docs, examples, notebooks, artifact scripts, lockfile metadata, and tests for the new distribution/module names.

## Stack

1. #21891 `[1/8]` Pin Python SDK runtime dependency
2. #21893 `[2/8]` Generate Python SDK types from pinned runtime
3. #21895 `[3/8]` Run Python SDK tests in CI
4. #21896 `[4/8]` Define Python SDK public API surface
5. This PR `[5/8]` Rename Python SDK package to `openai-codex`
6. #21910 `[6/8]` Add high-level Python SDK approval mode
7. #22014 `[7/8]` Add Python SDK app-server integration harness
8. #22021 `[8/8]` Add Python SDK Ruff formatting

## Verification

- Updated package metadata and public API tests to assert the distribution and import names.

Co-authored-by: Codex <noreply@openai.com>
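The hyphen-vs-underscore rule the description relies on can be sketched with PEP 503-style name normalization. This is a minimal illustration of the convention, not code from the SDK; the helper name is hypothetical:

```python
import re


def import_name_for(distribution: str) -> str:
    """Map a PyPI distribution name to its conventional import module name.

    Collapses runs of '-', '_', and '.' to a single underscore, using the
    same character class PEP 503 uses for name normalization.
    """
    return re.sub(r"[-_.]+", "_", distribution).lower()


print(import_name_for("openai-codex"))  # openai_codex
```

The same mapping explains why the runtime wheel `openai-codex-cli-bin` would import as `openai_codex_cli_bin` if it exposed a module.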
````diff
@@ -1,13 +1,13 @@
-# Codex App Server SDK — API Reference
+# OpenAI Codex SDK — API Reference
 
-Public surface of `codex_app_server` for app-server v2.
+Public surface of `openai_codex` for app-server v2.
 
 This SDK surface is experimental. Turn streams are routed by turn ID so one client can consume multiple active turns concurrently.
 
 ## Package Entry
 
 ```python
-from codex_app_server import (
+from openai_codex import (
     Codex,
     AsyncCodex,
     RunResult,
@@ -23,7 +23,7 @@ from codex_app_server import (
     SkillInput,
     MentionInput,
 )
-from codex_app_server.types import (
+from openai_codex.types import (
     InitializeResponse,
     ThreadItem,
     ThreadTokenUsage,
@@ -31,9 +31,9 @@ from codex_app_server.types import (
 )
 ```
 
-- Version: `codex_app_server.__version__`
+- Version: `openai_codex.__version__`
 - Requires Python >= 3.10
-- Public app-server value and event types live in `codex_app_server.types`
+- Public app-server value and event types live in `openai_codex.types`
 
 ## Codex (sync)
 
@@ -136,7 +136,7 @@ Use `turn(...)` when you need low-level turn control (`stream()`, `steer()`,
 - `steer(input: Input) -> TurnSteerResponse`
 - `interrupt() -> TurnInterruptResponse`
 - `stream() -> Iterator[Notification]`
-- `run() -> codex_app_server.types.Turn`
+- `run() -> openai_codex.types.Turn`
 
 Behavior notes:
 
@@ -148,7 +148,7 @@ Behavior notes:
 - `steer(input: Input) -> Awaitable[TurnSteerResponse]`
 - `interrupt() -> Awaitable[TurnInterruptResponse]`
 - `stream() -> AsyncIterator[Notification]`
-- `run() -> Awaitable[codex_app_server.types.Turn]`
+- `run() -> Awaitable[openai_codex.types.Turn]`
 
 Behavior notes:
 
@@ -173,7 +173,7 @@ Input = list[InputItem] | InputItem
 The SDK wrappers return and accept public app-server models wherever possible:
 
 ```python
-from codex_app_server.types import (
+from openai_codex.types import (
     AskForApproval,
     ThreadReadResponse,
     Turn,
@@ -184,7 +184,7 @@ from codex_app_server.types import (
 ## Retry + errors
 
 ```python
-from codex_app_server import (
+from openai_codex import (
     retry_on_overload,
     JsonRpcError,
     MethodNotFoundError,
@@ -200,7 +200,7 @@ from codex_app_server import (
 ## Example
 
 ```python
-from codex_app_server import Codex
+from openai_codex import Codex
 
 with Codex() as codex:
     thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
```
````
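The `retry_on_overload` export touched by the rename suggests a familiar retry-with-backoff pattern. A stand-alone sketch of that pattern follows; the exception type, decorator signature, and backoff policy here are illustrative assumptions, not the SDK's actual implementation:

```python
import time
from functools import wraps


class OverloadedError(Exception):
    """Stand-in for a server 'overloaded' error; the real SDK raises its own type."""


def retry_on_overload(max_attempts: int = 3, base_delay: float = 0.1):
    """Retry the wrapped call on OverloadedError with exponential backoff."""

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except OverloadedError:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * 2**attempt)

        return wrapper

    return decorator


calls = []


@retry_on_overload(max_attempts=3, base_delay=0.0)
def flaky():
    # Fails twice with an overload, then succeeds on the third attempt.
    calls.append(1)
    if len(calls) < 3:
        raise OverloadedError
    return "ok"


print(flaky())  # ok
```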
````diff
@@ -8,7 +8,7 @@
 
 ## `run()` vs `stream()`
 
-- `TurnHandle.run()` / `AsyncTurnHandle.run()` is the easiest path. It consumes events until completion and returns the public app-server `Turn` model from `codex_app_server.types`.
+- `TurnHandle.run()` / `AsyncTurnHandle.run()` is the easiest path. It consumes events until completion and returns the public app-server `Turn` model from `openai_codex.types`.
 - `TurnHandle.stream()` / `AsyncTurnHandle.stream()` yields raw notifications (`Notification`) so you can react event-by-event.
 
 Choose `run()` for most apps. Choose `stream()` for progress UIs, custom timeout logic, or custom parsing.
````
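The `run()`-versus-`stream()` split in the hunk above mirrors a common pattern: either drain an event stream to a terminal value, or react per event. A stand-alone sketch with stubbed notifications (the event shapes are illustrative, not the SDK's models):

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Note:
    kind: str  # e.g. "delta" or "completed"
    payload: str


def fake_stream() -> Iterator[Note]:
    # Stubbed turn notifications; a real TurnHandle.stream() yields SDK models.
    yield Note("delta", "Hel")
    yield Note("delta", "lo")
    yield Note("completed", "Hello")


def run(stream: Iterator[Note]) -> str:
    """run()-style: consume all events, return the final result."""
    final = ""
    for note in stream:
        if note.kind == "completed":
            final = note.payload
    return final


# stream()-style: react event-by-event, e.g. to drive a progress UI.
chunks = [n.payload for n in fake_stream() if n.kind == "delta"]

print(run(fake_stream()), chunks)  # Hello ['Hel', 'lo']
```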
````diff
@@ -68,7 +68,7 @@ cd sdk/python
 python scripts/update_sdk_artifacts.py generate-types
 python scripts/update_sdk_artifacts.py \
   stage-sdk \
-  /tmp/codex-python-release/openai-codex-app-server-sdk \
+  /tmp/codex-python-release/openai-codex \
   --codex-version <codex-release-tag-or-pep440-version>
 python scripts/update_sdk_artifacts.py \
   stage-runtime \
````
````diff
@@ -24,7 +24,7 @@ Requirements:
 ## 2) Run your first turn (sync)
 
 ```python
-from codex_app_server import Codex
+from openai_codex import Codex
 
 with Codex() as codex:
     server = codex.metadata.serverInfo
@@ -50,7 +50,7 @@ What happened:
 ## 3) Continue the same thread (multi-turn)
 
 ```python
-from codex_app_server import Codex
+from openai_codex import Codex
 
 with Codex() as codex:
     thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
@@ -69,7 +69,7 @@ initializes lazily, and context entry makes startup/shutdown explicit.
 
 ```python
 import asyncio
-from codex_app_server import AsyncCodex
+from openai_codex import AsyncCodex
 
 
 async def main() -> None:
@@ -85,7 +85,7 @@ asyncio.run(main())
 ## 5) Resume an existing thread
 
 ```python
-from codex_app_server import Codex
+from openai_codex import Codex
 
 THREAD_ID = "thr_123" # replace with a real id
 
@@ -101,7 +101,7 @@ The convenience wrappers live at the package root. Public app-server value and
 event types live under:
 
 ```python
-from codex_app_server.types import ThreadReadResponse, Turn, TurnStatus
+from openai_codex.types import ThreadReadResponse, Turn, TurnStatus
 ```
 
 ## 7) Next stops
````