python-sdk: expose canonical app-server types (2026-03-16)

Remove the SDK alias/result layers so the wrapper surface returns canonical generated app-server models directly.

- delete public type alias modules and regenerate v2_all.py against current schema
- return InitializeResponse from metadata and generated Turn from run()
- update docs, examples, notebook, and tests to use canonical generated models and repo-only text extraction helpers
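For context on the "repo-only text extraction helpers": the diff below imports `find_turn_by_id` and `assistant_text_from_turn` from `_bootstrap`. A minimal sketch of what such helpers could look like is shown here; the `Turn` and `AgentMessageItem` dataclasses are hypothetical stand-ins, since the canonical generated models live in the regenerated v2_all.py and are not shown in this commit.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical stand-ins for the generated app-server models; the real
# canonical types come from the regenerated v2_all.py in this commit.
@dataclass
class AgentMessageItem:
    text: str

@dataclass
class Turn:
    id: str
    items: List[AgentMessageItem] = field(default_factory=list)

def find_turn_by_id(turns: List[Turn], turn_id: str) -> Optional[Turn]:
    # Return the persisted turn whose id matches, or None if absent.
    return next((t for t in turns if t.id == turn_id), None)

def assistant_text_from_turn(turn: Turn) -> str:
    # Join the assistant message text carried on the turn's items.
    return "\n".join(item.text for item in turn.items)

turns = [Turn(id="t1", items=[AgentMessageItem("hello")])]
match = find_turn_by_id(turns, "t1")
print(assistant_text_from_turn(match))  # prints "hello"
```

The point of keeping these helpers repo-side is that the SDK surface can return the canonical generated `Turn` unmodified, while examples and tests do their own text extraction.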

Validation:
- PYTHONPATH=sdk/python/src python3 -m pytest sdk/python/tests
- GH_TOKEN="<redacted>" RUN_REAL_CODEX_TESTS=1 PYTHONPATH=sdk/python/src python3 -m pytest sdk/python/tests -rs

Co-authored-by: Codex <noreply@openai.com>
Author: Shaqayeq
Date: 2026-03-16 14:04:55 -07:00
Parent: 3bb1a1325f
Commit: d864b8c836

36 changed files with 481 additions and 2966 deletions


@@ -5,16 +5,24 @@ _EXAMPLES_ROOT = Path(__file__).resolve().parents[1]
 if str(_EXAMPLES_ROOT) not in sys.path:
     sys.path.insert(0, str(_EXAMPLES_ROOT))
-from _bootstrap import ensure_local_sdk_src, runtime_config
+from _bootstrap import (
+    assistant_text_from_turn,
+    ensure_local_sdk_src,
+    find_turn_by_id,
+    runtime_config,
+    server_label,
+)
 
 ensure_local_sdk_src()
 
 from codex_app_server import Codex, TextInput
 
 with Codex(config=runtime_config()) as codex:
-    print("Server:", codex.metadata.server_name, codex.metadata.server_version)
+    print("Server:", server_label(codex.metadata))
     thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
     result = thread.turn(TextInput("Say hello in one sentence.")).run()
+    persisted = thread.read(include_turns=True)
+    persisted_turn = find_turn_by_id(persisted.thread.turns, result.id)
     print("Status:", result.status)
-    print("Text:", result.text)
+    print("Text:", assistant_text_from_turn(persisted_turn))