Mirror of https://github.com/openai/codex.git, synced 2026-05-15 00:32:51 +00:00.
Summary:
- Add codex-thread-manager-sample, a one-shot binary that starts a ThreadManager thread, submits a prompt, and prints the final assistant output.
- Pass ThreadStore into ThreadManager::new and expose thread_store_from_config for existing callsites.
- Build the sample Config directly with only --model and prompt inputs.

Verification:
- just fmt
- cargo check -p codex-thread-manager-sample -p codex-app-server -p codex-mcp-server
- git diff --check

Tests: Not run per request.
ThreadManager Sample
A small one-shot binary that starts a Codex thread with ThreadManager, submits a
single user turn, and prints the final assistant message.
cargo run -p codex-thread-manager-sample -- "Say hello"
Use --model to override the configured default model:
cargo run -p codex-thread-manager-sample -- --model gpt-5.2 "Say hello"
The prompt can also be piped through stdin:
printf 'Say hello\n' | cargo run -p codex-thread-manager-sample
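The overall shape of the sample (spawn a worker, hand it one prompt, wait for one final reply, print it) can be sketched with plain standard-library channels. Note this is only an illustration of the one-shot pattern: the real sample goes through ThreadManager and a Config, and every name below (run_one_shot, the channels, the echo "model") is hypothetical, not the codex API.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the ThreadManager flow: a worker thread
// receives exactly one prompt and sends back exactly one reply.
fn run_one_shot(prompt: &str) -> String {
    let (tx_prompt, rx_prompt) = mpsc::channel::<String>();
    let (tx_reply, rx_reply) = mpsc::channel::<String>();

    let worker = thread::spawn(move || {
        // Stand-in for the model turn: echo the prompt back.
        let prompt = rx_prompt.recv().expect("prompt channel closed");
        tx_reply
            .send(format!("assistant: {prompt}"))
            .expect("reply channel closed");
    });

    tx_prompt.send(prompt.to_string()).expect("worker gone");
    let reply = rx_reply.recv().expect("no reply received");
    worker.join().expect("worker panicked");
    reply
}

fn main() {
    // Mirrors `cargo run -p codex-thread-manager-sample -- "Say hello"`:
    // one prompt in, one final message out, then exit.
    println!("{}", run_one_shot("Say hello"));
}
```

The one-shot structure is what makes the binary useful as a sample: no REPL loop, no session state, just prompt in and final message out.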