mirror of
https://github.com/openai/codex.git
synced 2026-05-17 01:32:32 +00:00
[codex] Refine Python SDK user-facing docs (#22941)
## Summary

- Remove maintainer and release-process wording from the Python SDK README and docs.
- Rewrite SDK-facing comments/docstrings so they read as standalone product documentation.
- Add a real app-server integration smoke test that follows the public quickstart-style `Codex() -> thread_start() -> run()` path.

## Integration coverage

- Add `test_real_quickstart_style_flow_smoke` in the real app-server integration suite.

## Validation

- Local tests were not run, per repo guidance. CI should validate this branch once the PR is online.
@@ -17,10 +17,8 @@ source .venv/bin/activate
 ```

 Published SDK builds pin an exact `openai-codex-cli-bin` runtime dependency
-with the same version as the SDK. For local repo development, either pass
-`AppServerConfig(codex_bin=...)` to point at a local build explicitly, or use
-the repo examples/notebook bootstrap which installs the pinned runtime package
-automatically.
+with the same version as the SDK. Pass `AppServerConfig(codex_bin=...)` only
+when you intentionally want to run against a specific local app-server binary.

 ## Quickstart

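The exact pin described in the hunk above can be stated as a small invariant. The sketch below is illustrative only: the helper name and version strings are made up, and only the `openai-codex-cli-bin` package name comes from the README text.

```python
# Illustrative sketch: a published openai-codex wheel depends on
# openai-codex-cli-bin pinned to the SDK's own exact version.
def pinned_runtime_requirement(sdk_version: str) -> str:
    """Return the exact-pin requirement string for a given SDK version."""
    return f"openai-codex-cli-bin=={sdk_version}"

print(pinned_runtime_requirement("0.4.0"))  # openai-codex-cli-bin==0.4.0
```

An exact `==` pin (rather than a lower bound) is what lets the SDK assume the bundled binary always matches its own version.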
@@ -55,49 +53,12 @@ python examples/01_quickstart_constructor/sync.py
 python examples/01_quickstart_constructor/async.py
 ```

-## Runtime packaging
-
-The repo no longer checks `codex` binaries into `sdk/python`.
+## Runtime

 Published SDK builds are pinned to an exact `openai-codex-cli-bin` package
 version, and that runtime package carries the platform-specific binary for the
 target wheel. The SDK package version and runtime package version must match.

-For local repo development, the checked-in `sdk/python-runtime` package is only
-a template for staged release artifacts. Editable installs should use an
-explicit `codex_bin` override for manual SDK usage; the repo examples and
-notebook bootstrap the pinned runtime package automatically.
-
-## Maintainer workflow
-
-```bash
-cd sdk/python
-uv sync
-python scripts/update_sdk_artifacts.py generate-types
-python scripts/update_sdk_artifacts.py \
-  stage-sdk \
-  /tmp/codex-python-release/openai-codex \
-  --codex-version <codex-release-tag-or-pep440-version>
-python scripts/update_sdk_artifacts.py \
-  stage-runtime \
-  /tmp/codex-python-release/openai-codex-cli-bin \
-  /path/to/codex \
-  --codex-version <codex-release-tag-or-pep440-version>
-```
-
-Pass `--platform-tag ...` to `stage-runtime` when the wheel should be tagged for
-a Rust target that differs from the Python build host. The intended one-off
-matrix is `macosx_11_0_arm64`, `macosx_10_9_x86_64`,
-`musllinux_1_1_aarch64`, `musllinux_1_1_x86_64`, `win_arm64`, and
-`win_amd64`.
-
-This supports the CI release flow:
-
-- run `generate-types` before packaging
-- stage `openai-codex` once with an exact `openai-codex-cli-bin==...` dependency
-- stage `openai-codex-cli-bin` on each supported platform runner with the same pinned runtime version
-- build and publish `openai-codex-cli-bin` as platform wheels only through PyPI trusted publishing; do not publish an sdist

 ## Compatibility and versioning

 - Package: `openai-codex`
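The removed maintainer section pairs each runtime wheel with one platform tag. As a hedged sketch of how that one-off matrix might be driven, the mapping below keys the tags listed in the diff by Rust target triple; the triples are assumptions for illustration, and only the tag values come from the text.

```python
# Wheel platform tags from the removed README section, keyed by assumed
# Rust target triples (the triples are illustrative, not from the repo).
PLATFORM_TAGS = {
    "aarch64-apple-darwin": "macosx_11_0_arm64",
    "x86_64-apple-darwin": "macosx_10_9_x86_64",
    "aarch64-unknown-linux-musl": "musllinux_1_1_aarch64",
    "x86_64-unknown-linux-musl": "musllinux_1_1_x86_64",
    "aarch64-pc-windows-msvc": "win_arm64",
    "x86_64-pc-windows-msvc": "win_amd64",
}


def platform_tag_args(rust_target: str) -> list[str]:
    """Build the --platform-tag flag for a cross-target stage-runtime run."""
    return ["--platform-tag", PLATFORM_TAGS[rust_target]]


print(platform_tag_args("aarch64-apple-darwin"))
```

The flag is only needed when the wheel's target differs from the Python build host, which is why the docs describe the matrix as one-off.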
@@ -59,29 +59,6 @@ Common causes:
 - local auth/session is missing
 - incompatible/old app-server

-Maintainers stage releases by building the SDK once and the runtime once per
-platform with the same pinned runtime version. Publish `openai-codex-cli-bin`
-as platform wheels only; do not publish an sdist:
-
-```bash
-cd sdk/python
-python scripts/update_sdk_artifacts.py generate-types
-python scripts/update_sdk_artifacts.py \
-  stage-sdk \
-  /tmp/codex-python-release/openai-codex \
-  --codex-version <codex-release-tag-or-pep440-version>
-python scripts/update_sdk_artifacts.py \
-  stage-runtime \
-  /tmp/codex-python-release/openai-codex-cli-bin \
-  /path/to/codex \
-  --codex-version <codex-release-tag-or-pep440-version>
-```
-
-If you are packaging a binary for a different target than the Python build
-host, pass `--platform-tag ...` to `stage-runtime`. The intended one-off matrix
-is `macosx_11_0_arm64`, `macosx_10_9_x86_64`, `musllinux_1_1_aarch64`,
-`musllinux_1_1_x86_64`, `win_arm64`, and `win_amd64`.

 ## Why does a turn "hang"?

 A turn is complete only when `turn/completed` arrives for that turn ID.
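The turn-completion rule kept by this hunk can be sketched offline. In this illustrative snippet the notification shape (`method` and `turn_id` fields on plain dicts) is an assumption; only the `turn/completed` method name comes from the doc.

```python
# Illustrative only: a turn is finished when a turn/completed notification
# arrives carrying that turn's ID; events for other turns are ignored.
def turn_is_complete(events: list[dict], turn_id: str) -> bool:
    return any(
        e.get("method") == "turn/completed" and e.get("turn_id") == turn_id
        for e in events
    )


events = [
    {"method": "item/agent_message", "turn_id": "t1"},
    {"method": "turn/completed", "turn_id": "t1"},
]
assert turn_is_complete(events, "t1")
assert not turn_is_complete(events, "t2")
```

This is why a client that stops reading events after the first message can appear to "hang": completion is signaled by a later notification, not by the message itself.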
@@ -2,7 +2,7 @@

 This is the fastest path from install to a multi-turn thread using the public SDK surface.

-The SDK is experimental. Treat the API, bundled runtime strategy, and packaging details as unstable until the first public release.
+The SDK is experimental, so the public API and runtime requirements may keep evolving before the first public release.

 ## 1) Install

@@ -113,7 +113,7 @@ def _approval_mode_override_settings(


 class Codex:
-    """Minimal typed SDK surface for app-server v2."""
+    """Typed Python client for app-server v2 workflows."""

     def __init__(self, config: AppServerConfig | None = None) -> None:
         self._client = AppServerClient(config=config)
@@ -1,4 +1,4 @@
-"""Public generated app-server model exports for type annotations and matching."""
+"""Public app-server model exports for type annotations and matching."""

 from __future__ import annotations

@@ -295,6 +295,38 @@ def test_real_thread_run_convenience_smoke(runtime_env: PreparedRuntimeEnv) -> N
     assert isinstance(data["has_usage"], bool)


+def test_real_quickstart_style_flow_smoke(runtime_env: PreparedRuntimeEnv) -> None:
+    data = _run_json_python(
+        runtime_env,
+        textwrap.dedent(
+            """
+            import json
+            from openai_codex import Codex
+
+            with Codex() as codex:
+                thread = codex.thread_start()
+                result = thread.run("Say hello in one sentence.")
+                print(json.dumps({
+                    "thread_id": thread.id,
+                    "final_response": result.final_response,
+                    "items_count": len(result.items),
+                }))
+            """
+        ),
+    )
+
+    assert {
+        "thread_id_is_text": isinstance(data["thread_id"], str) and bool(data["thread_id"].strip()),
+        "final_response_is_text": isinstance(data["final_response"], str)
+        and bool(data["final_response"].strip()),
+        "items_count_is_int": isinstance(data["items_count"], int),
+    } == {
+        "thread_id_is_text": True,
+        "final_response_is_text": True,
+        "items_count_is_int": True,
+    }
+
+
 def test_real_async_thread_turn_usage_and_ids_smoke(
     runtime_env: PreparedRuntimeEnv,
 ) -> None:
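The shape assertions in the new smoke test can be exercised offline against a stub payload. A minimal sketch, with made-up values, mirroring the checks the test performs on the JSON the quickstart script prints:

```python
import json

# Stub payload shaped like the JSON the quickstart-style smoke test prints;
# the values here are invented for illustration.
payload = json.loads(
    '{"thread_id": "thr_123", "final_response": "Hello!", "items_count": 3}'
)

checks = {
    "thread_id_is_text": isinstance(payload["thread_id"], str)
    and bool(payload["thread_id"].strip()),
    "final_response_is_text": isinstance(payload["final_response"], str)
    and bool(payload["final_response"].strip()),
    "items_count_is_int": isinstance(payload["items_count"], int),
}
assert all(checks.values())
```

Comparing a dict of named booleans against an all-`True` dict, as the test does, makes a failure report say which property broke rather than just that one assertion failed.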