Mirror of https://github.com/openai/codex.git, synced 2026-05-14 08:12:36 +00:00.
## Why

For reproducibility. A hand-written `config.toml` is not enough to recreate what a Codex session actually ran with, because layered config, CLI overrides, defaults, feature aliases, resolved feature config, prompt setup, and model-catalog/session values can all affect the final runtime behavior. This PR adds an effective-config lockfile path: one run can export the resolved session config, and a later run can replay that lockfile and fail early if the regenerated effective config drifts.

## What Changed

- Add a dedicated `ConfigLockfileToml` wrapper with top-level lockfile metadata plus the replayable config:

  ```toml
  version = 1
  codex_version = "..."

  [config]
  # effective ConfigToml fields
  ```

- Keep lockfile metadata out of regular `ConfigToml`; replay loads `ConfigLockfileToml` and then uses its nested `config` as the authoritative config layer.
- Add `debug.config_lockfile.export_dir` to write `<thread_id>.config.lock.toml` when a root session starts.
- Add `debug.config_lockfile.load_path` to replay a saved lockfile and validate the regenerated session lockfile against it.
- Add `debug.config_lockfile.allow_codex_version_mismatch` to optionally tolerate Codex binary version drift while still comparing the rest of the lockfile.
- Add `debug.config_lockfile.save_fields_resolved_from_model_catalog` so lock creation can either save model-catalog/session-resolved fields or intentionally leave those fields dynamic.
- Build lockfiles from the effective config plus resolved runtime values such as model selection, reasoning settings, prompts, service tier, web search mode, feature states/config, memories config, skill instructions, and agent limits.
- Materialize feature aliases and custom feature config into the lockfile so replay compares canonical resolved behavior instead of the user-authored alias shape.
- Strip profile/debug/file-include/environment-specific inputs from generated lockfiles so they contain replayable values rather than the inputs that produced those values.
- Surface JSON-RPC server error code/data in the app-server client and TUI bootstrap errors so config-lock replay failures include the actual TOML diff.
- Regenerate the config schema for the new debug config keys.

## Review Notes

The main flow is split across these files:

- `config/src/config_toml.rs`: lockfile/debug TOML shapes.
- `core/src/config/mod.rs`: loading `debug.config_lockfile.*`, replaying a lockfile as a config layer, and preserving the expected lockfile for validation.
- `core/src/session/config_lock.rs`: exporting the current session lockfile and materializing resolved session/config values.
- `core/src/config_lock.rs`: lockfile parsing, metadata/version checks, replay comparison, and diff formatting.

## Usage

Export a lockfile from a normal session:

```sh
codex -c 'debug.config_lockfile.export_dir="/tmp/codex-locks"'
```

Export a lockfile without saving model-catalog/session-resolved fields:

```sh
codex -c 'debug.config_lockfile.export_dir="/tmp/codex-locks"' \
  -c 'debug.config_lockfile.save_fields_resolved_from_model_catalog=false'
```

Replay a saved lockfile in a later session:

```sh
codex -c 'debug.config_lockfile.load_path="/tmp/codex-locks/<thread_id>.config.lock.toml"'
```

If replay resolves to a different effective config, startup fails with a TOML diff.

To tolerate Codex binary version drift during replay:

```sh
codex -c 'debug.config_lockfile.load_path="/tmp/codex-locks/<thread_id>.config.lock.toml"' \
  -c 'debug.config_lockfile.allow_codex_version_mismatch=true'
```

## Limitations

This does not support custom rules/network policies.

## Verification

- `cargo test -p codex-core config_lock`
- `cargo test -p codex-config`
- `cargo test -p codex-thread-manager-sample`
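The replay-validation step can be sketched as follows. This is a simplified, hypothetical model of the comparison (the real implementation lives in `core/src/config_lock.rs` and parses actual TOML); here the effective config is flattened into key/value pairs, and the names `ConfigLockfile` and `validate_replay` are illustrative, not the PR's actual identifiers:

```rust
use std::collections::BTreeMap;

/// Hypothetical, simplified stand-in for `ConfigLockfileToml`:
/// top-level lockfile metadata plus the effective config, flattened
/// here to key -> TOML-value-as-string pairs for illustration.
#[derive(Debug, Clone, PartialEq)]
struct ConfigLockfile {
    version: u32,
    codex_version: String,
    config: BTreeMap<String, String>,
}

/// Compare a saved lockfile against the regenerated session lockfile and
/// fail early with a human-readable diff when the effective config drifts.
fn validate_replay(
    expected: &ConfigLockfile,
    regenerated: &ConfigLockfile,
    allow_codex_version_mismatch: bool,
) -> Result<(), String> {
    if expected.version != regenerated.version {
        return Err(format!(
            "lockfile version mismatch: {} vs {}",
            expected.version, regenerated.version
        ));
    }
    if !allow_codex_version_mismatch && expected.codex_version != regenerated.codex_version {
        return Err(format!(
            "codex_version mismatch: {} vs {}",
            expected.codex_version, regenerated.codex_version
        ));
    }
    // Collect every key that drifted, was removed, or was added.
    let mut diff = Vec::new();
    for (key, want) in &expected.config {
        match regenerated.config.get(key) {
            Some(got) if got == want => {}
            Some(got) => diff.push(format!("~ {key}: {want} -> {got}")),
            None => diff.push(format!("- {key} = {want}")),
        }
    }
    for (key, got) in &regenerated.config {
        if !expected.config.contains_key(key) {
            diff.push(format!("+ {key} = {got}"));
        }
    }
    if diff.is_empty() {
        Ok(())
    } else {
        Err(format!("effective config drift:\n{}", diff.join("\n")))
    }
}

fn main() {
    let mut config = BTreeMap::new();
    config.insert("model".to_string(), "\"gpt-5.2\"".to_string());
    let saved = ConfigLockfile {
        version: 1,
        codex_version: "0.1.0".to_string(),
        config,
    };
    let mut drifted = saved.clone();
    drifted
        .config
        .insert("model".to_string(), "\"gpt-5.1\"".to_string());
    println!("{:?}", validate_replay(&saved, &saved, false));
    println!("{:?}", validate_replay(&saved, &drifted, false));
}
```

The `allow_codex_version_mismatch` branch mirrors the new debug key: metadata drift can be waived, but the nested config is always compared.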
# ThreadManager Sample

Small one-shot binary that starts a Codex thread with `ThreadManager` from `codex-core-api`, submits a single user turn, and prints the final assistant message.

```sh
cargo run -p codex-thread-manager-sample -- "Say hello"
```

Use `--model` to override the configured default model:

```sh
cargo run -p codex-thread-manager-sample -- --model gpt-5.2 "Say hello"
```

The prompt can also be piped through stdin:

```sh
printf 'Say hello\n' | cargo run -p codex-thread-manager-sample
```
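The args-or-stdin fallback described above can be sketched as a small helper. This is a hypothetical illustration, not the sample's actual code; `resolve_prompt` is an invented name, and a real CLI parser would have consumed flags like `--model` before the positional arguments are inspected:

```rust
use std::io::{self, IsTerminal, Read};

/// Hypothetical helper: prefer a prompt given as positional arguments,
/// otherwise fall back to whatever was piped through stdin.
fn resolve_prompt(args: &[String], stdin_text: &str) -> Option<String> {
    if !args.is_empty() {
        return Some(args.join(" "));
    }
    let trimmed = stdin_text.trim();
    if trimmed.is_empty() {
        None
    } else {
        Some(trimmed.to_string())
    }
}

fn main() -> io::Result<()> {
    // Positional args after the binary name.
    let args: Vec<String> = std::env::args().skip(1).collect();
    // Only read stdin when something was actually piped in.
    let mut stdin_text = String::new();
    if !io::stdin().is_terminal() {
        io::stdin().read_to_string(&mut stdin_text)?;
    }
    match resolve_prompt(&args, &stdin_text) {
        Some(prompt) => println!("prompt: {prompt}"),
        None => eprintln!("no prompt given via args or stdin"),
    }
    Ok(())
}
```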