## Why

SQLite state was still being opened from consumer paths, including lazy `OnceCell`-backed thread-store call sites. That let one process construct multiple state DB connections for the same Codex home, which makes SQLite lock contention and `database is locked` failures much easier to hit. State DB lifetime should be chosen by main-like entrypoints and tests, then passed through explicitly. Consumers should use the supplied `Option<StateDbHandle>` or `StateDbHandle` and keep their existing filesystem fallback or error behavior when no handle is available.

The startup path also needs to keep the rollout crate in charge of SQLite state initialization. Opening `codex_state::StateRuntime` directly bypasses rollout metadata backfill, so entrypoints should initialize through `codex_rollout::state_db` and receive a handle only after required rollout backfills have completed (a sketch of this wiring follows the change list).

## What Changed

- Initialize the state DB in main-like entrypoints for CLI, TUI, app-server, exec, MCP server, and the thread-manager sample.
- Pass `Option<StateDbHandle>` through `ThreadManager`, `LocalThreadStore`, app-server processors, TUI app wiring, rollout listing/recording, personality migration, shell snapshot cleanup, session-name lookup, and memory/device-key consumers.
- Remove the lazy local state DB wrapper from the thread store so non-test consumers use only the supplied handle or their existing fallback path.
- Make `codex_rollout::state_db::init` the local state startup path: it opens/migrates SQLite, runs rollout metadata backfill when needed, waits for concurrent backfill workers up to a bounded timeout, verifies completion, and then returns the initialized handle.
- Keep optional/non-owning SQLite helpers, such as remote TUI local reads, as open-only paths that do not run startup backfill.
- Switch app-server startup from direct `codex_state::StateRuntime::init` to the rollout state initializer so app-server cannot skip rollout backfill.
- Collapse split rollout lookup/list APIs so callers use the normal methods with an optional state handle instead of `_with_state_db` variants.
- Restore `getConversationSummary(ThreadId)` to delegate through `ThreadStore::read_thread` instead of a LocalThreadStore-specific rollout path special case.
- Keep DB-backed rollout path lookup keyed on the DB row and file existence, without imposing the filesystem filename convention on existing DB rows.
- Verify readable DB-backed rollout paths against `session_meta.id` before returning them, so a stale SQLite row that points at another thread's JSONL falls back to filesystem search and read-repairs the DB row.
- Keep `debug prompt-input` filesystem-only so a one-off debug command does not initialize or backfill SQLite state just to print prompt input.
- Keep goal-session test Codex homes alive only in the goal-specific helper, rather than leaking tempdirs from the shared session test helper.
- Update tests and call sites to pass explicit state handles where DB behavior is expected and explicit `None` where filesystem-only behavior is intended.
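A minimal sketch of the intended wiring. Only the names `codex_rollout::state_db`, `StateDbHandle`, and `ThreadManager` come from this change; every import path, signature, and the return shape of `init` below is an assumption for illustration, not the real API:

```rust
use std::path::PathBuf;

use anyhow::Result;
// Hypothetical import paths; only the names come from this change.
use codex_core::ThreadManager;
use codex_rollout::state_db::{self, StateDbHandle};

// A main-like entrypoint owns the state DB lifetime. Initializing through
// the rollout crate (never codex_state::StateRuntime directly) means rollout
// metadata backfill has completed before any consumer sees the handle.
async fn entrypoint(codex_home: PathBuf) -> Result<()> {
    let state_db: StateDbHandle = state_db::init(&codex_home).await?;

    // Consumers receive the handle explicitly instead of opening SQLite
    // lazily; passing `None` would keep their filesystem fallback behavior.
    let _manager = ThreadManager::new(codex_home, Some(state_db));
    Ok(())
}
```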
## Validation

- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo check -p codex-rollout -p codex-thread-store -p codex-app-server -p codex-core -p codex-tui -p codex-exec -p codex-cli --tests`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-rollout state_db_`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-rollout find_thread_path`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-rollout find_thread_path -- --nocapture`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-rollout try_init_ -- --nocapture`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-rollout`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo clippy -p codex-rollout --lib -- -D warnings`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-thread-store read_thread_falls_back_when_sqlite_path_points_to_another_thread -- --nocapture`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-thread-store`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-core shell_snapshot`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-core --test all personality_migration`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-core --test all rollout_list_find`
- `RUST_MIN_STACK=8388608 CODEX_SKIP_VENDORED_BWRAP=1 CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-core --test all rollout_list_find::find_prefers_sqlite_path_by_id -- --nocapture`
- `RUST_MIN_STACK=8388608 CODEX_SKIP_VENDORED_BWRAP=1 CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-core --test all rollout_list_find -- --nocapture`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-core interrupt_accounts_active_goal_before_pausing`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-app-server get_auth_status -- --test-threads=1`
- `CODEX_SKIP_VENDORED_BWRAP=1 CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo test -p codex-app-server --lib`
- `CODEX_SKIP_VENDORED_BWRAP=1 CARGO_TARGET_DIR=/tmp/codex-target-state-db cargo check -p codex-rollout -p codex-app-server --tests`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db just fix -p codex-rollout -p codex-thread-store -p codex-core -p codex-app-server -p codex-tui -p codex-exec -p codex-cli`
- `CODEX_SKIP_VENDORED_BWRAP=1 CARGO_TARGET_DIR=/tmp/codex-target-state-db just fix -p codex-rollout -p codex-app-server`
- `CARGO_TARGET_DIR=/tmp/codex-target-state-db just fix -p codex-rollout`
- `CODEX_SKIP_VENDORED_BWRAP=1 CARGO_TARGET_DIR=/tmp/codex-target-state-db just fix -p codex-core`
- `just argument-comment-lint -p codex-core`
- `just argument-comment-lint -p codex-rollout`

Focused coverage added in `codex-rollout`:

- `recorder::tests::state_db_init_backfills_before_returning` verifies the rollout metadata row exists before startup init returns.
- `state_db::tests::try_init_waits_for_concurrent_startup_backfill` verifies startup waits for another worker to finish backfill instead of disabling the handle for the process.
- `state_db::tests::try_init_times_out_waiting_for_stuck_startup_backfill` verifies startup does not hang indefinitely on a stuck backfill lease.
- `tests::find_thread_path_accepts_existing_state_db_path_without_canonical_filename` verifies DB-backed lookup accepts valid existing rollout paths even when the filename does not include the thread UUID.
- `tests::find_thread_path_falls_back_when_db_path_points_to_another_thread` verifies DB-backed lookup ignores a stale row whose existing path belongs to another thread and read-repairs the row after filesystem fallback; a sketch of that lookup follows this section.

Focused coverage updated in `codex-core`:

- `rollout_list_find::find_prefers_sqlite_path_by_id` now uses a DB-preferred rollout file with matching `session_meta.id`, so it still verifies that valid SQLite paths win without depending on stale/empty rollout contents.

`cargo test -p codex-app-server thread_list_respects_search_term_filter -- --test-threads=1 --nocapture` was attempted locally but timed out waiting for the app-server test harness `initialize` response before reaching the changed thread-list code path.

`bazel test //codex-rs/thread-store:thread-store-unit-tests --test_output=errors` was attempted locally after the thread-store fix, but this container failed before target analysis while fetching `v8+` through BuildBuddy/direct GitHub. The equivalent local crate coverage, including `cargo test -p codex-thread-store`, passes.

A plain local `cargo check -p codex-rollout -p codex-app-server --tests` also requires system `libcap.pc` for `codex-linux-sandbox`; the follow-up app-server check above used `CODEX_SKIP_VENDORED_BWRAP=1` in this container.
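For the stale-row behavior those tests pin down, a minimal sketch of the lookup described above. The control flow mirrors the description; `StateDbHandle`, `ThreadId`, and every helper (`rollout_path_row`, `read_session_meta_id`, `filesystem_search`, `repair_rollout_path_row`) are hypothetical stand-ins, not the real codex-rollout internals:

```rust
use std::path::{Path, PathBuf};

use anyhow::Result;

// Sketch only: all names below are placeholders for the real internals.
async fn db_backed_thread_path(
    db: &StateDbHandle,
    codex_home: &Path,
    thread_id: ThreadId,
) -> Result<Option<PathBuf>> {
    if let Some(path) = db.rollout_path_row(thread_id).await? {
        // Trust the DB row only if the file exists and its session_meta.id
        // matches the requested thread; no filename convention is imposed.
        if path.exists() && read_session_meta_id(&path).await? == Some(thread_id) {
            return Ok(Some(path));
        }
    }
    // Missing or stale row: fall back to filesystem search, then read-repair
    // the row so the next lookup takes the fast path again.
    let found = filesystem_search(codex_home, thread_id).await?;
    if let Some(path) = &found {
        db.repair_rollout_path_row(thread_id, path).await?;
    }
    Ok(found)
}
```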
658 lines · 21 KiB · Rust
```rust
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadArchiveResponse;
use codex_app_server_protocol::ThreadArchivedNotification;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStatus;
use codex_app_server_protocol::ThreadUnarchiveParams;
use codex_app_server_protocol::ThreadUnarchiveResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_core::find_archived_thread_path_by_id_str;
use codex_core::find_thread_path_by_id_str;
use codex_protocol::ThreadId;
use codex_state::DirectionalThreadSpawnEdgeStatus;
use codex_state::StateRuntime;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;

const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);

#[tokio::test]
async fn thread_archive_requires_materialized_rollout() -> Result<()> {
    let server = create_mock_responses_server_repeating_assistant("Done").await;
    let codex_home = TempDir::new()?;
    create_config_toml(codex_home.path(), &server.uri())?;

    let mut mcp = McpProcess::new(codex_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

    // Start a thread.
    let start_id = mcp
        .send_thread_start_request(ThreadStartParams {
            model: Some("mock-model".to_string()),
            ..Default::default()
        })
        .await?;
    let start_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
    )
    .await??;
    let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
    assert!(!thread.id.is_empty());

    let rollout_path = thread.path.clone().expect("thread path");
    assert!(
        !rollout_path.exists(),
        "fresh thread rollout should not exist yet at {}",
        rollout_path.display()
    );
    assert!(
        find_thread_path_by_id_str(codex_home.path(), &thread.id, /*state_db_ctx*/ None)
            .await?
            .is_none(),
        "thread id should not be discoverable before rollout materialization"
    );

    // Archive should fail before the rollout is materialized.
    let archive_id = mcp
        .send_thread_archive_request(ThreadArchiveParams {
            thread_id: thread.id.clone(),
        })
        .await?;
    let archive_err: JSONRPCError = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_error_message(RequestId::Integer(archive_id)),
    )
    .await??;
    assert!(
        archive_err
            .error
            .message
            .contains("no rollout found for thread id"),
        "unexpected archive error: {}",
        archive_err.error.message
    );

    // Materialize rollout via a real user turn and confirm archive succeeds.
    let turn_start_id = mcp
        .send_turn_start_request(TurnStartParams {
            thread_id: thread.id.clone(),
            input: vec![UserInput::Text {
                text: "materialize".to_string(),
                text_elements: Vec::new(),
            }],
            ..Default::default()
        })
        .await?;
    let turn_start_response: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(turn_start_id)),
    )
    .await??;
    let _: TurnStartResponse = to_response::<TurnStartResponse>(turn_start_response)?;
    timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_notification_message("turn/completed"),
    )
    .await??;

    assert!(
        rollout_path.exists(),
        "expected rollout path {} to exist after first user message",
        rollout_path.display()
    );

    let discovered_path =
        find_thread_path_by_id_str(codex_home.path(), &thread.id, /*state_db_ctx*/ None)
            .await?
            .expect("expected rollout path for thread id to exist after materialization");
    assert_paths_match_on_disk(&discovered_path, &rollout_path)?;

    let archive_id = mcp
        .send_thread_archive_request(ThreadArchiveParams {
            thread_id: thread.id.clone(),
        })
        .await?;
    let archive_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(archive_id)),
    )
    .await??;
    let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;
    let archive_notification = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_notification_message("thread/archived"),
    )
    .await??;
    let archived_notification: ThreadArchivedNotification = serde_json::from_value(
        archive_notification
            .params
            .expect("thread/archived notification params"),
    )?;
    assert_eq!(archived_notification.thread_id, thread.id);

    // Verify file moved.
    let archived_directory = codex_home.path().join(ARCHIVED_SESSIONS_SUBDIR);
    // The archived file keeps the original filename (rollout-...-<id>.jsonl).
    let archived_rollout_path =
        archived_directory.join(rollout_path.file_name().expect("rollout file name"));
    assert!(
        !rollout_path.exists(),
        "expected rollout path {} to be moved",
        rollout_path.display()
    );
    assert!(
        archived_rollout_path.exists(),
        "expected archived rollout path {} to exist",
        archived_rollout_path.display()
    );

    Ok(())
}

#[tokio::test]
async fn thread_archive_archives_spawned_descendants() -> Result<()> {
    let server = create_mock_responses_server_repeating_assistant("Done").await;
    let codex_home = TempDir::new()?;
    create_config_toml(codex_home.path(), &server.uri())?;

    let parent_id = create_fake_rollout(
        codex_home.path(),
        "2025-01-01T00-00-00",
        "2025-01-01T00:00:00Z",
        "parent",
        Some("mock_provider"),
        /*git_info*/ None,
    )?;
    let child_id = create_fake_rollout(
        codex_home.path(),
        "2025-01-01T00-01-00",
        "2025-01-01T00:01:00Z",
        "child",
        Some("mock_provider"),
        /*git_info*/ None,
    )?;
    let grandchild_id = create_fake_rollout(
        codex_home.path(),
        "2025-01-01T00-02-00",
        "2025-01-01T00:02:00Z",
        "grandchild",
        Some("mock_provider"),
        /*git_info*/ None,
    )?;

    let parent_thread_id = ThreadId::from_string(&parent_id)?;
    let child_thread_id = ThreadId::from_string(&child_id)?;
    let grandchild_thread_id = ThreadId::from_string(&grandchild_id)?;
    let state_db =
        StateRuntime::init(codex_home.path().to_path_buf(), "mock_provider".into()).await?;
    state_db
        .mark_backfill_complete(/*last_watermark*/ None)
        .await?;
    state_db
        .upsert_thread_spawn_edge(
            parent_thread_id,
            child_thread_id,
            DirectionalThreadSpawnEdgeStatus::Closed,
        )
        .await?;
    state_db
        .upsert_thread_spawn_edge(
            child_thread_id,
            grandchild_thread_id,
            DirectionalThreadSpawnEdgeStatus::Open,
        )
        .await?;

    let mut mcp = McpProcess::new(codex_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

    let archive_id = mcp
        .send_thread_archive_request(ThreadArchiveParams {
            thread_id: parent_id.clone(),
        })
        .await?;
    let archive_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(archive_id)),
    )
    .await??;
    let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;

    let mut archived_ids = Vec::new();
    for _ in 0..3 {
        let notification = timeout(
            DEFAULT_READ_TIMEOUT,
            mcp.read_stream_until_notification_message("thread/archived"),
        )
        .await??;
        let archived_notification: ThreadArchivedNotification = serde_json::from_value(
            notification
                .params
                .expect("thread/archived notification params"),
        )?;
        archived_ids.push(archived_notification.thread_id);
    }
    assert_eq!(archived_ids, vec![parent_id, grandchild_id, child_id]);

    for thread_id in [parent_thread_id, child_thread_id, grandchild_thread_id] {
        assert!(
            find_thread_path_by_id_str(
                codex_home.path(),
                &thread_id.to_string(),
                /*state_db_ctx*/ None,
            )
            .await?
            .is_none(),
            "expected active rollout for {thread_id} to be archived"
        );
        assert!(
            find_archived_thread_path_by_id_str(
                codex_home.path(),
                &thread_id.to_string(),
                /*state_db_ctx*/ None,
            )
            .await?
            .is_some(),
            "expected archived rollout for {thread_id} to exist"
        );
    }

    Ok(())
}

#[tokio::test]
async fn thread_archive_succeeds_when_descendant_archive_fails() -> Result<()> {
    let server = create_mock_responses_server_repeating_assistant("Done").await;
    let codex_home = TempDir::new()?;
    create_config_toml(codex_home.path(), &server.uri())?;

    let parent_id = create_fake_rollout(
        codex_home.path(),
        "2025-01-01T00-00-00",
        "2025-01-01T00:00:00Z",
        "parent",
        Some("mock_provider"),
        /*git_info*/ None,
    )?;
    let child_id = create_fake_rollout(
        codex_home.path(),
        "2025-01-01T00-01-00",
        "2025-01-01T00:01:00Z",
        "child",
        Some("mock_provider"),
        /*git_info*/ None,
    )?;
    let grandchild_id = create_fake_rollout(
        codex_home.path(),
        "2025-01-01T00-02-00",
        "2025-01-01T00:02:00Z",
        "grandchild",
        Some("mock_provider"),
        /*git_info*/ None,
    )?;

    let parent_thread_id = ThreadId::from_string(&parent_id)?;
    let child_thread_id = ThreadId::from_string(&child_id)?;
    let grandchild_thread_id = ThreadId::from_string(&grandchild_id)?;
    let state_db =
        StateRuntime::init(codex_home.path().to_path_buf(), "mock_provider".into()).await?;
    state_db
        .mark_backfill_complete(/*last_watermark*/ None)
        .await?;
    state_db
        .upsert_thread_spawn_edge(
            parent_thread_id,
            child_thread_id,
            DirectionalThreadSpawnEdgeStatus::Closed,
        )
        .await?;
    state_db
        .upsert_thread_spawn_edge(
            child_thread_id,
            grandchild_thread_id,
            DirectionalThreadSpawnEdgeStatus::Open,
        )
        .await?;

    let child_rollout_path =
        find_thread_path_by_id_str(codex_home.path(), &child_id, /*state_db_ctx*/ None)
            .await?
            .expect("child rollout path");
    let archived_child_path = codex_home
        .path()
        .join(ARCHIVED_SESSIONS_SUBDIR)
        .join(child_rollout_path.file_name().expect("rollout file name"));
    // Pre-create a directory at the child's archive destination so archiving
    // the child fails while the rest of the spawn tree still archives.
    std::fs::create_dir_all(&archived_child_path)?;

    let mut mcp = McpProcess::new(codex_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

    let archive_id = mcp
        .send_thread_archive_request(ThreadArchiveParams {
            thread_id: parent_id.clone(),
        })
        .await?;
    let archive_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(archive_id)),
    )
    .await??;
    let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;

    let mut archived_ids = Vec::new();
    for _ in 0..2 {
        let notification = timeout(
            DEFAULT_READ_TIMEOUT,
            mcp.read_stream_until_notification_message("thread/archived"),
        )
        .await??;
        let archived_notification: ThreadArchivedNotification = serde_json::from_value(
            notification
                .params
                .expect("thread/archived notification params"),
        )?;
        archived_ids.push(archived_notification.thread_id);
    }
    assert_eq!(archived_ids, vec![parent_id, grandchild_id]);

    assert!(
        timeout(
            std::time::Duration::from_millis(250),
            mcp.read_stream_until_notification_message("thread/archived"),
        )
        .await
        .is_err()
    );

    assert!(
        child_rollout_path.exists(),
        "child should stay active after descendant archive failure"
    );
    assert!(
        archived_child_path.is_dir(),
        "test conflict should remain in archived sessions"
    );
    for thread_id in [parent_thread_id, grandchild_thread_id] {
        assert!(
            find_thread_path_by_id_str(
                codex_home.path(),
                &thread_id.to_string(),
                /*state_db_ctx*/ None,
            )
            .await?
            .is_none(),
            "expected active rollout for {thread_id} to be archived"
        );
        assert!(
            find_archived_thread_path_by_id_str(
                codex_home.path(),
                &thread_id.to_string(),
                /*state_db_ctx*/ None,
            )
            .await?
            .is_some(),
            "expected archived rollout for {thread_id} to exist"
        );
    }

    Ok(())
}

#[tokio::test]
async fn thread_archive_succeeds_when_spawned_descendant_is_missing() -> Result<()> {
    let server = create_mock_responses_server_repeating_assistant("Done").await;
    let codex_home = TempDir::new()?;
    create_config_toml(codex_home.path(), &server.uri())?;

    let parent_id = create_fake_rollout(
        codex_home.path(),
        "2025-01-01T00-00-00",
        "2025-01-01T00:00:00Z",
        "parent",
        Some("mock_provider"),
        /*git_info*/ None,
    )?;
    let parent_thread_id = ThreadId::from_string(&parent_id)?;
    let missing_child_thread_id = ThreadId::from_string("00000000-0000-0000-0000-000000000901")?;

    let state_db =
        StateRuntime::init(codex_home.path().to_path_buf(), "mock_provider".into()).await?;
    state_db
        .mark_backfill_complete(/*last_watermark*/ None)
        .await?;
    state_db
        .upsert_thread_spawn_edge(
            parent_thread_id,
            missing_child_thread_id,
            DirectionalThreadSpawnEdgeStatus::Closed,
        )
        .await?;

    let mut mcp = McpProcess::new(codex_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

    let archive_id = mcp
        .send_thread_archive_request(ThreadArchiveParams {
            thread_id: parent_id.clone(),
        })
        .await?;
    let archive_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(archive_id)),
    )
    .await??;
    let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;

    let notification = timeout(
        DEFAULT_READ_TIMEOUT,
        mcp.read_stream_until_notification_message("thread/archived"),
    )
    .await??;
    let archived_notification: ThreadArchivedNotification = serde_json::from_value(
        notification
            .params
            .expect("thread/archived notification params"),
    )?;
    assert_eq!(archived_notification.thread_id, parent_id);

    assert!(
        find_thread_path_by_id_str(codex_home.path(), &parent_id, /*state_db_ctx*/ None)
            .await?
            .is_none(),
        "parent should be archived even when a descendant is missing"
    );
    assert!(
        find_archived_thread_path_by_id_str(
            codex_home.path(),
            &parent_id,
            /*state_db_ctx*/ None,
        )
        .await?
        .is_some(),
        "parent should be moved into archived sessions"
    );

    Ok(())
}

#[tokio::test]
async fn thread_archive_clears_stale_subscriptions_before_resume() -> Result<()> {
    let server = create_mock_responses_server_repeating_assistant("Done").await;
    let codex_home = TempDir::new()?;
    create_config_toml(codex_home.path(), &server.uri())?;

    let mut primary = McpProcess::new(codex_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, primary.initialize()).await??;

    let start_id = primary
        .send_thread_start_request(ThreadStartParams {
            model: Some("mock-model".to_string()),
            ..Default::default()
        })
        .await?;
    let start_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        primary.read_stream_until_response_message(RequestId::Integer(start_id)),
    )
    .await??;
    let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;

    let turn_start_id = primary
        .send_turn_start_request(TurnStartParams {
            thread_id: thread.id.clone(),
            input: vec![UserInput::Text {
                text: "materialize".to_string(),
                text_elements: Vec::new(),
            }],
            ..Default::default()
        })
        .await?;
    let turn_start_response: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        primary.read_stream_until_response_message(RequestId::Integer(turn_start_id)),
    )
    .await??;
    let _: TurnStartResponse = to_response::<TurnStartResponse>(turn_start_response)?;
    timeout(
        DEFAULT_READ_TIMEOUT,
        primary.read_stream_until_notification_message("turn/completed"),
    )
    .await??;
    primary.clear_message_buffer();

    let mut secondary = McpProcess::new(codex_home.path()).await?;
    timeout(DEFAULT_READ_TIMEOUT, secondary.initialize()).await??;

    let archive_id = primary
        .send_thread_archive_request(ThreadArchiveParams {
            thread_id: thread.id.clone(),
        })
        .await?;
    let archive_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        primary.read_stream_until_response_message(RequestId::Integer(archive_id)),
    )
    .await??;
    let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;
    timeout(
        DEFAULT_READ_TIMEOUT,
        primary.read_stream_until_notification_message("thread/archived"),
    )
    .await??;

    let unarchive_id = primary
        .send_thread_unarchive_request(ThreadUnarchiveParams {
            thread_id: thread.id.clone(),
        })
        .await?;
    let unarchive_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        primary.read_stream_until_response_message(RequestId::Integer(unarchive_id)),
    )
    .await??;
    let _: ThreadUnarchiveResponse = to_response::<ThreadUnarchiveResponse>(unarchive_resp)?;
    timeout(
        DEFAULT_READ_TIMEOUT,
        primary.read_stream_until_notification_message("thread/unarchived"),
    )
    .await??;
    primary.clear_message_buffer();

    let resume_id = secondary
        .send_thread_resume_request(ThreadResumeParams {
            thread_id: thread.id.clone(),
            ..Default::default()
        })
        .await?;
    let resume_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        secondary.read_stream_until_response_message(RequestId::Integer(resume_id)),
    )
    .await??;
    let resume: ThreadResumeResponse = to_response::<ThreadResumeResponse>(resume_resp)?;
    assert_eq!(resume.thread.status, ThreadStatus::Idle);
    primary.clear_message_buffer();
    secondary.clear_message_buffer();

    let resumed_turn_id = secondary
        .send_turn_start_request(TurnStartParams {
            thread_id: thread.id,
            input: vec![UserInput::Text {
                text: "secondary turn".to_string(),
                text_elements: Vec::new(),
            }],
            ..Default::default()
        })
        .await?;
    let resumed_turn_resp: JSONRPCResponse = timeout(
        DEFAULT_READ_TIMEOUT,
        secondary.read_stream_until_response_message(RequestId::Integer(resumed_turn_id)),
    )
    .await??;
    let _: TurnStartResponse = to_response::<TurnStartResponse>(resumed_turn_resp)?;

    // The primary's pre-archive subscription must be stale by now: it should
    // not receive the turn started by the secondary after unarchive + resume.
    assert!(
        timeout(
            std::time::Duration::from_millis(250),
            primary.read_stream_until_notification_message("turn/started"),
        )
        .await
        .is_err()
    );

    timeout(
        DEFAULT_READ_TIMEOUT,
        secondary.read_stream_until_notification_message("turn/completed"),
    )
    .await??;

    Ok(())
}

fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
    let config_toml = codex_home.join("config.toml");
    std::fs::write(config_toml, config_contents(server_uri))
}

fn config_contents(server_uri: &str) -> String {
    format!(
        r#"model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"

model_provider = "mock_provider"

[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
    )
}

fn assert_paths_match_on_disk(actual: &Path, expected: &Path) -> std::io::Result<()> {
    let actual = actual.canonicalize()?;
    let expected = expected.canonicalize()?;
    assert_eq!(actual, expected);
    Ok(())
}
```