Compare commits

2 Commits

Author SHA1 Message Date
aibrahim-oai
61e968ce26 Update models.json 2026-03-18 20:10:27 +00:00
Felipe Coury
334164a6f7 feat(tui): restore composer history in app-server tui (#14945)
## Problem

The app-server TUI (`tui_app_server`) lacked composer history support.
Pressing Up/Down to recall previous prompts hit a stub that logged a
warning and displayed "Not available in app-server TUI yet." New
submissions were silently dropped from the shared history file, so
nothing persisted for future sessions.

## Mental model

Codex maintains a single, append-only history file
(`$CODEX_HOME/history.jsonl`) shared across all TUI processes on the
same machine. The legacy (in-process) TUI already reads/writes this file
through `codex_core::message_history`. The app-server TUI delegates most
operations to a separate process over RPC, but history is intentionally
*not* an RPC concern — it's a client-local file.
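
The file is plain JSONL: one JSON object per line, appended under an advisory lock. A minimal sketch of that shape (the field names and helper here are illustrative, not the exact schema or API used by `codex_core::message_history`):

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Minimal JSON string escaping for the sketch; the real code serializes
// entries with a proper JSON library.
fn json_escape(s: &str) -> String {
    let mut out = String::from("\"");
    for c in s.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            c => out.push(c),
        }
    }
    out.push('"');
    out
}

// Append one entry as a single JSON line. The real implementation also
// takes an advisory file lock so concurrent TUI processes on the same
// machine do not interleave their writes.
fn append_history_line(path: &std::path::Path, ts: u64, text: &str) -> std::io::Result<()> {
    let line = format!("{{\"ts\":{},\"text\":{}}}", ts, json_escape(text));
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(file, "{line}")
}
```

Because every entry is exactly one line, "entry count" reduces to counting newline bytes, which is what the bootstrap path below relies on.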

This PR makes the app-server TUI access the same history file directly,
bypassing the app-server process entirely. The composer's Up/Down
navigation and submit-time persistence now follow the same code paths as
the legacy TUI, with the only difference being *where* the call is
dispatched (locally in `App`, rather than inside `CodexThread`).

The branch is rebuilt directly on top of `upstream/main`, so it keeps the existing app-server restore architecture intact. `AppServerStartedThread` still restores transcript history from the server `Thread` snapshot via `thread_snapshot_events`; this PR only adds composer-history support.

## Non-goals

- Adding history support to the app-server protocol. History remains
client-local.
- Changing the on-disk format or location of `history.jsonl`.
- Surfacing history I/O errors to the user (failures are logged and
silently swallowed, matching the legacy TUI).

## Tradeoffs

| Decision | Why | Risk |
|----------|-----|------|
| Widen `message_history` from `pub(crate)` to `pub` | Avoids duplicating file I/O logic; the module already has a clean, minimal API surface. | Other workspace crates can now call these functions — the contract is no longer crate-private. However, this is consistent with recent precedent: `590cfa617` exposed `mention_syntax` for TUI consumption, `752402c4f` exposed plugin APIs (`PluginsManager`), and `14fcb6645`/`edacbf7b6` widened internal core APIs for other crates. These were all narrow, intentional exposures of specific APIs — not broad "make internals public" moves. `1af2a37ad` even went the other direction, reducing broad re-exports to tighten boundaries. This change follows the same pattern: a small, deliberate API surface (3 functions) rather than a wholesale visibility change. |
| Intercept `AddToHistory` / `GetHistoryEntryRequest` in `App` before RPC fallback | Keeps history ops out of the "unsupported op" error path without changing app-server protocol. | This now routes through a single `submit_thread_op` entry point, which is safer than the original duplicated dispatch. The remaining risk is organizational: future thread-op submission paths need to keep using that shared entry point. |
| `session_configured_from_thread_response` is now `async` | Needs `await` on `history_metadata()` to populate real `history_log_id` / `history_entry_count`. | Adds an async file-stat + full-file newline scan to the session bootstrap path. The scan is bounded by `history.max_bytes` and matches the legacy TUI's cost profile, but startup latency still scales with file size. |
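
The newline scan in that last row has a simple shape — a sketch under the assumption that entries are newline-terminated JSONL lines; the function name, signature, and byte cap are illustrative, not the actual `history_metadata` internals:

```rust
use std::fs::File;
use std::io::Read;
use std::path::Path;

// Count history entries by counting newline bytes, reading at most
// `max_bytes` from the start of the file. Cost is O(min(file size,
// max_bytes)), which is why bootstrap latency scales with file size
// up to the configured cap.
fn count_entries_bounded(path: &Path, max_bytes: u64) -> std::io::Result<usize> {
    let mut reader = File::open(path)?.take(max_bytes);
    let mut buf = [0u8; 8192];
    let mut count = 0;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        count += buf[..n].iter().filter(|&&b| b == b'\n').count();
    }
    Ok(count)
}
```

A streaming read with a fixed buffer keeps peak memory constant regardless of history size; only wall-clock time grows with the file.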

## Architecture

```
User presses Up                     User submits a prompt
       │                                    │
       ▼                                    ▼
ChatComposerHistory                 ChatWidget::do_submit_turn
  navigate_up()                       encode_history_mentions()
       │                                    │
       ▼                                    ▼
  AppEvent::CodexOp                  Op::AddToHistory { text }
  (GetHistoryEntryRequest)                  │
       │                                    ▼
       ▼                            App::try_handle_local_history_op
  App::try_handle_local_history_op    message_history::append_entry()
    spawn_blocking {                        │
      message_history::lookup()             ▼
    }                                $CODEX_HOME/history.jsonl
       │
       ▼
  AppEvent::ThreadEvent
  (GetHistoryEntryResponse)
       │
       ▼
  ChatComposerHistory::on_entry_response()
```

## Observability

- `tracing::warn` on `append_entry` failure (includes thread ID).
- `tracing::warn` on `spawn_blocking` lookup join error.
- `tracing::warn` from `message_history` internals on file-open, lock,
or parse failures.

## Tests

- `chat_composer_history::tests::navigation_with_async_fetch` — verifies
that Up emits `Op::GetHistoryEntryRequest` (was: checked for stub error
cell).
- `app::tests::history_lookup_response_is_routed_to_requesting_thread` —
verifies multi-thread composer recall routes the lookup result back to
the originating thread.
- `app_server_session::tests::resume_response_relies_on_snapshot_replay_not_initial_messages` — verifies app-server session restore still uses the upstream thread-snapshot path.
- `app_server_session::tests::session_configured_populates_history_metadata` — verifies bootstrap sets nonzero `history_log_id` / `history_entry_count` from the shared local history file.
2026-03-18 11:54:11 -06:00
10 changed files with 605 additions and 377 deletions

File diff suppressed because one or more lines are too long

View File

@@ -63,7 +63,7 @@ mod mcp_tool_call;
mod memories;
pub mod mention_syntax;
mod mentions;
mod message_history;
pub mod message_history;
mod model_provider_info;
pub mod path_utils;
pub mod personality_migration;

View File

@@ -66,14 +66,22 @@ fn history_filepath(config: &Config) -> PathBuf {
path
}
/// Append a `text` entry associated with `conversation_id` to the history file. Uses
/// advisory file locking to ensure that concurrent writes do not interleave,
/// which entails a small amount of blocking I/O internally.
pub(crate) async fn append_entry(
text: &str,
conversation_id: &ThreadId,
config: &Config,
) -> Result<()> {
/// Append a `text` entry associated with `conversation_id` to the history file.
///
/// Uses advisory file locking (`File::try_lock`) with a retry loop to ensure
/// concurrent writes from multiple TUI processes do not interleave. The lock
/// acquisition and write are performed inside `spawn_blocking` so the caller's
/// async runtime is not blocked.
///
/// The entry is silently skipped when `config.history.persistence` is
/// [`HistoryPersistence::None`].
///
/// # Errors
///
/// Returns an I/O error if the history file cannot be opened/created, the
/// system clock is before the Unix epoch, or the exclusive lock cannot be
/// acquired after [`MAX_RETRIES`] attempts.
pub async fn append_entry(text: &str, conversation_id: &ThreadId, config: &Config) -> Result<()> {
match config.history.persistence {
HistoryPersistence::SaveAll => {
// Save everything: proceed.
@@ -243,22 +251,29 @@ fn trim_target_bytes(max_bytes: u64, newest_entry_len: u64) -> u64 {
soft_cap_bytes.max(newest_entry_len)
}
/// Asynchronously fetch the history file's *identifier* (inode on Unix) and
/// the current number of entries by counting newline characters.
pub(crate) async fn history_metadata(config: &Config) -> (u64, usize) {
/// Asynchronously fetch the history file's *identifier* and current entry count.
///
/// The identifier is the file's inode on Unix or creation time on Windows.
/// The entry count is derived by counting newline bytes in the file. Returns
/// `(0, 0)` when the file does not exist or its metadata cannot be read. If
/// metadata succeeds but the file cannot be opened or scanned, returns
/// `(log_id, 0)` so callers can still detect that a history file exists.
pub async fn history_metadata(config: &Config) -> (u64, usize) {
let path = history_filepath(config);
history_metadata_for_file(&path).await
}
/// Given a `log_id` (on Unix this is the file's inode number,
/// on Windows this is the file's creation time) and a zero-based
/// `offset`, return the corresponding `HistoryEntry` if the identifier matches
/// the current history file **and** the requested offset exists. Any I/O or
/// parsing errors are logged and result in `None`.
/// Look up a single history entry by file identity and zero-based offset.
///
/// Note this function is not async because it uses a sync advisory file
/// locking API.
pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
/// Returns `Some(entry)` when the current history file's identifier (inode on
/// Unix, creation time on Windows) matches `log_id` **and** a valid JSON
/// record exists at `offset`. Returns `None` on any mismatch, I/O error, or
/// parse failure, all of which are logged at `warn` level.
///
/// This function is synchronous because it acquires a shared advisory file lock
/// via `File::try_lock_shared`. Callers on an async runtime should wrap it in
/// `spawn_blocking`.
pub fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
let path = history_filepath(config);
lookup_history_entry(&path, log_id, offset)
}

View File

@@ -69,6 +69,7 @@ use codex_core::config::types::ApprovalsReviewer;
use codex_core::config::types::ModelAvailabilityNuxConfig;
use codex_core::config_loader::ConfigLayerStackOrdering;
use codex_core::features::Feature;
use codex_core::message_history;
use codex_core::models_manager::collaboration_mode_presets::CollaborationModesConfig;
use codex_core::models_manager::model_presets::HIDE_GPT_5_1_CODEX_MAX_MIGRATION_PROMPT_CONFIG;
use codex_core::models_manager::model_presets::HIDE_GPT5_1_MIGRATION_PROMPT_CONFIG;
@@ -86,10 +87,10 @@ use codex_protocol::openai_models::ModelUpgrade;
use codex_protocol::openai_models::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::FinalOutput;
use codex_protocol::protocol::GetHistoryEntryResponseEvent;
use codex_protocol::protocol::ListSkillsResponseEvent;
#[cfg(test)]
use codex_protocol::protocol::McpAuthStatus;
#[cfg(test)]
use codex_protocol::protocol::Op;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SessionSource;
@@ -457,6 +458,7 @@ struct ThreadEventSnapshot {
enum ThreadBufferedEvent {
Notification(ServerNotification),
Request(ServerRequest),
HistoryEntryResponse(GetHistoryEntryResponseEvent),
LegacyWarning(String),
LegacyRollback { num_turns: u32 },
}
@@ -616,6 +618,7 @@ impl ThreadEventStore {
.pending_interactive_replay
.should_replay_snapshot_request(request),
ThreadBufferedEvent::Notification(_)
| ThreadBufferedEvent::HistoryEntryResponse(_)
| ThreadBufferedEvent::LegacyWarning(_)
| ThreadBufferedEvent::LegacyRollback { .. } => true,
})
@@ -1763,8 +1766,21 @@ impl App {
return Ok(());
};
self.submit_thread_op(app_server, thread_id, op).await
}
async fn submit_thread_op(
&mut self,
app_server: &mut AppServerSession,
thread_id: ThreadId,
op: AppCommand,
) -> Result<()> {
crate::session_log::log_outbound_op(&op);
if self.try_handle_local_history_op(thread_id, &op).await? {
return Ok(());
}
if self
.try_resolve_app_server_request(app_server, thread_id, &op)
.await?
@@ -1777,7 +1793,7 @@ impl App {
.await?
{
if ThreadEventStore::op_can_change_pending_replay_state(&op) {
self.note_active_thread_outbound_op(&op).await;
self.note_thread_outbound_op(thread_id, &op).await;
self.refresh_pending_thread_approvals().await;
}
return Ok(());
@@ -1855,6 +1871,66 @@ impl App {
}
}
/// Intercept composer-history operations and handle them locally against
/// `$CODEX_HOME/history.jsonl`, bypassing the app-server RPC layer.
async fn try_handle_local_history_op(
&mut self,
thread_id: ThreadId,
op: &AppCommand,
) -> Result<bool> {
match op.view() {
AppCommandView::Other(Op::AddToHistory { text }) => {
let text = text.clone();
let config = self.chat_widget.config_ref().clone();
tokio::spawn(async move {
if let Err(err) =
message_history::append_entry(&text, &thread_id, &config).await
{
tracing::warn!(
thread_id = %thread_id,
error = %err,
"failed to append to message history"
);
}
});
Ok(true)
}
AppCommandView::Other(Op::GetHistoryEntryRequest { offset, log_id }) => {
let offset = *offset;
let log_id = *log_id;
let config = self.chat_widget.config_ref().clone();
let app_event_tx = self.app_event_tx.clone();
tokio::spawn(async move {
let entry_opt = tokio::task::spawn_blocking(move || {
message_history::lookup(log_id, offset, &config)
})
.await
.unwrap_or_else(|err| {
tracing::warn!(error = %err, "history lookup task failed");
None
});
app_event_tx.send(AppEvent::ThreadHistoryEntryResponse {
thread_id,
event: GetHistoryEntryResponseEvent {
offset,
log_id,
entry: entry_opt.map(|entry| {
codex_protocol::message_history::HistoryEntry {
conversation_id: entry.session_id,
ts: entry.ts,
text: entry.text,
}
}),
},
});
});
Ok(true)
}
_ => Ok(false),
}
}
async fn try_submit_active_thread_op_via_app_server(
&mut self,
app_server: &mut AppServerSession,
@@ -2213,6 +2289,50 @@ impl App {
Ok(())
}
async fn enqueue_thread_history_entry_response(
&mut self,
thread_id: ThreadId,
event: GetHistoryEntryResponseEvent,
) -> Result<()> {
let (sender, store) = {
let channel = self.ensure_thread_channel(thread_id);
(channel.sender.clone(), Arc::clone(&channel.store))
};
let should_send = {
let mut guard = store.lock().await;
guard
.buffer
.push_back(ThreadBufferedEvent::HistoryEntryResponse(event.clone()));
if guard.buffer.len() > guard.capacity
&& let Some(removed) = guard.buffer.pop_front()
&& let ThreadBufferedEvent::Request(request) = &removed
{
guard
.pending_interactive_replay
.note_evicted_server_request(request);
}
guard.active
};
if should_send {
match sender.try_send(ThreadBufferedEvent::HistoryEntryResponse(event)) {
Ok(()) => {}
Err(TrySendError::Full(event)) => {
tokio::spawn(async move {
if let Err(err) = sender.send(event).await {
tracing::warn!("thread {thread_id} event channel closed: {err}");
}
});
}
Err(TrySendError::Closed(_)) => {
tracing::warn!("thread {thread_id} event channel closed");
}
}
}
Ok(())
}
async fn enqueue_thread_legacy_rollback(
&mut self,
thread_id: ThreadId,
@@ -2304,6 +2424,10 @@ impl App {
ThreadBufferedEvent::Request(request) => {
self.enqueue_thread_request(thread_id, request).await?;
}
ThreadBufferedEvent::HistoryEntryResponse(event) => {
self.enqueue_thread_history_entry_response(thread_id, event)
.await?;
}
ThreadBufferedEvent::LegacyWarning(message) => {
self.enqueue_thread_legacy_warning(thread_id, message)
.await?;
@@ -3465,22 +3589,12 @@ impl App {
self.submit_active_thread_op(app_server, op.into()).await?;
}
AppEvent::SubmitThreadOp { thread_id, op } => {
let app_command: AppCommand = op.into();
if self
.try_resolve_app_server_request(app_server, thread_id, &app_command)
.await?
{
return Ok(AppRunControl::Continue);
}
crate::session_log::log_outbound_op(&app_command);
tracing::error!(
thread_id = %thread_id,
op = ?app_command,
"unexpected unresolved thread-scoped app command"
);
self.chat_widget.add_error_message(format!(
"Thread-scoped request is no longer pending for thread {thread_id}."
));
self.submit_thread_op(app_server, thread_id, op.into())
.await?;
}
AppEvent::ThreadHistoryEntryResponse { thread_id, event } => {
self.enqueue_thread_history_entry_response(thread_id, event)
.await?;
}
AppEvent::DiffResult(text) => {
// Clear the in-progress state in the bottom pane
@@ -4639,6 +4753,9 @@ impl App {
self.chat_widget
.handle_server_request(request, /*replay_kind*/ None);
}
ThreadBufferedEvent::HistoryEntryResponse(event) => {
self.chat_widget.handle_history_entry_response(event);
}
ThreadBufferedEvent::LegacyWarning(message) => {
self.chat_widget.add_warning_message(message);
}
@@ -4660,6 +4777,9 @@ impl App {
ThreadBufferedEvent::Request(request) => self
.chat_widget
.handle_server_request(request, Some(ReplayKind::ThreadSnapshot)),
ThreadBufferedEvent::HistoryEntryResponse(event) => {
self.chat_widget.handle_history_entry_response(event)
}
ThreadBufferedEvent::LegacyWarning(message) => {
self.chat_widget.add_warning_message(message);
}
@@ -5520,6 +5640,44 @@ mod tests {
.expect("listener task drop notification should succeed");
}
#[tokio::test]
async fn history_lookup_response_is_routed_to_requesting_thread() -> Result<()> {
let (mut app, mut app_event_rx, _op_rx) = make_test_app_with_channels().await;
let thread_id = ThreadId::new();
let handled = app
.try_handle_local_history_op(
thread_id,
&Op::GetHistoryEntryRequest {
offset: 0,
log_id: 1,
}
.into(),
)
.await?;
assert!(handled);
let app_event = tokio::time::timeout(Duration::from_secs(1), app_event_rx.recv())
.await
.expect("history lookup should emit an app event")
.expect("app event channel should stay open");
let AppEvent::ThreadHistoryEntryResponse {
thread_id: routed_thread_id,
event,
} = app_event
else {
panic!("expected thread-routed history response");
};
assert_eq!(routed_thread_id, thread_id);
assert_eq!(event.offset, 0);
assert_eq!(event.log_id, 1);
assert!(event.entry.is_none());
Ok(())
}
#[tokio::test]
async fn enqueue_thread_event_does_not_block_when_channel_full() -> Result<()> {
let mut app = make_test_app().await;

View File

@@ -15,6 +15,7 @@ use codex_chatgpt::connectors::AppInfo;
use codex_file_search::FileMatch;
use codex_protocol::ThreadId;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::protocol::GetHistoryEntryResponseEvent;
use codex_protocol::protocol::Op;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_utils_approval_presets::ApprovalPreset;
@@ -81,6 +82,12 @@ pub(crate) enum AppEvent {
op: Op,
},
/// Deliver a synthetic history lookup response to a specific thread channel.
ThreadHistoryEntryResponse {
thread_id: ThreadId,
event: GetHistoryEntryResponseEvent,
},
/// Start a new session.
NewSession,

View File

@@ -54,6 +54,7 @@ use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnSteerParams;
use codex_app_server_protocol::TurnSteerResponse;
use codex_core::config::Config;
use codex_core::message_history;
use codex_otel::TelemetryAuthMode;
use codex_protocol::ThreadId;
use codex_protocol::openai_models::ModelAvailabilityNux;
@@ -277,7 +278,7 @@ impl AppServerSession {
})
.await
.wrap_err("thread/start failed during TUI bootstrap")?;
started_thread_from_start_response(response)
started_thread_from_start_response(response, config).await
}
pub(crate) async fn resume_thread(
@@ -291,14 +292,14 @@ impl AppServerSession {
.request_typed(ClientRequest::ThreadResume {
request_id,
params: thread_resume_params_from_config(
config,
config.clone(),
thread_id,
self.thread_params_mode(),
),
})
.await
.wrap_err("thread/resume failed during TUI bootstrap")?;
started_thread_from_resume_response(&response)
started_thread_from_resume_response(response, &config).await
}
pub(crate) async fn fork_thread(
@@ -312,14 +313,14 @@ impl AppServerSession {
.request_typed(ClientRequest::ThreadFork {
request_id,
params: thread_fork_params_from_config(
config,
config.clone(),
thread_id,
self.thread_params_mode(),
),
})
.await
.wrap_err("thread/fork failed during TUI bootstrap")?;
started_thread_from_fork_response(&response)
started_thread_from_fork_response(response, &config).await
}
fn thread_params_mode(&self) -> ThreadParamsMode {
@@ -843,10 +844,12 @@ fn thread_cwd_from_config(config: &Config, thread_params_mode: ThreadParamsMode)
}
}
fn started_thread_from_start_response(
async fn started_thread_from_start_response(
response: ThreadStartResponse,
config: &Config,
) -> Result<AppServerStartedThread> {
let session = thread_session_state_from_thread_start_response(&response)
let session = thread_session_state_from_thread_start_response(&response, config)
.await
.map_err(color_eyre::eyre::Report::msg)?;
Ok(AppServerStartedThread {
session,
@@ -854,30 +857,35 @@ fn started_thread_from_start_response(
})
}
fn started_thread_from_resume_response(
response: &ThreadResumeResponse,
async fn started_thread_from_resume_response(
response: ThreadResumeResponse,
config: &Config,
) -> Result<AppServerStartedThread> {
let session = thread_session_state_from_thread_resume_response(response)
let session = thread_session_state_from_thread_resume_response(&response, config)
.await
.map_err(color_eyre::eyre::Report::msg)?;
Ok(AppServerStartedThread {
session,
turns: response.thread.turns.clone(),
turns: response.thread.turns,
})
}
fn started_thread_from_fork_response(
response: &ThreadForkResponse,
async fn started_thread_from_fork_response(
response: ThreadForkResponse,
config: &Config,
) -> Result<AppServerStartedThread> {
let session = thread_session_state_from_thread_fork_response(response)
let session = thread_session_state_from_thread_fork_response(&response, config)
.await
.map_err(color_eyre::eyre::Report::msg)?;
Ok(AppServerStartedThread {
session,
turns: response.thread.turns.clone(),
turns: response.thread.turns,
})
}
fn thread_session_state_from_thread_start_response(
async fn thread_session_state_from_thread_start_response(
response: &ThreadStartResponse,
config: &Config,
) -> Result<ThreadSessionState, String> {
thread_session_state_from_thread_response(
&response.thread.id,
@@ -891,11 +899,14 @@ fn thread_session_state_from_thread_start_response(
response.sandbox.to_core(),
response.cwd.clone(),
response.reasoning_effort,
config,
)
.await
}
fn thread_session_state_from_thread_resume_response(
async fn thread_session_state_from_thread_resume_response(
response: &ThreadResumeResponse,
config: &Config,
) -> Result<ThreadSessionState, String> {
thread_session_state_from_thread_response(
&response.thread.id,
@@ -909,11 +920,14 @@ fn thread_session_state_from_thread_resume_response(
response.sandbox.to_core(),
response.cwd.clone(),
response.reasoning_effort,
config,
)
.await
}
fn thread_session_state_from_thread_fork_response(
async fn thread_session_state_from_thread_fork_response(
response: &ThreadForkResponse,
config: &Config,
) -> Result<ThreadSessionState, String> {
thread_session_state_from_thread_response(
&response.thread.id,
@@ -927,7 +941,9 @@ fn thread_session_state_from_thread_fork_response(
response.sandbox.to_core(),
response.cwd.clone(),
response.reasoning_effort,
config,
)
.await
}
fn review_target_to_app_server(
@@ -953,7 +969,7 @@ fn review_target_to_app_server(
clippy::too_many_arguments,
reason = "session mapping keeps explicit fields"
)]
fn thread_session_state_from_thread_response(
async fn thread_session_state_from_thread_response(
thread_id: &str,
thread_name: Option<String>,
rollout_path: Option<PathBuf>,
@@ -965,9 +981,12 @@ fn thread_session_state_from_thread_response(
sandbox_policy: SandboxPolicy,
cwd: PathBuf,
reasoning_effort: Option<codex_protocol::openai_models::ReasoningEffort>,
config: &Config,
) -> Result<ThreadSessionState, String> {
let thread_id = ThreadId::from_string(thread_id)
.map_err(|err| format!("thread id `{thread_id}` is invalid: {err}"))?;
let (history_log_id, history_entry_count) = message_history::history_metadata(config).await;
let history_entry_count = u64::try_from(history_entry_count).unwrap_or(u64::MAX);
Ok(ThreadSessionState {
thread_id,
@@ -981,8 +1000,8 @@ fn thread_session_state_from_thread_response(
sandbox_policy,
cwd,
reasoning_effort,
history_log_id: 0,
history_entry_count: 0,
history_log_id,
history_entry_count,
network_proxy: None,
rollout_path,
})
@@ -1084,8 +1103,10 @@ mod tests {
assert_eq!(fork.model_provider, None);
}
#[test]
fn resume_response_restores_turns_from_thread_items() {
#[tokio::test]
async fn resume_response_restores_turns_from_thread_items() {
let temp_dir = tempfile::tempdir().expect("tempdir");
let config = build_config(&temp_dir).await;
let thread_id = ThreadId::new();
let response = ThreadResumeResponse {
thread: codex_app_server_protocol::Thread {
@@ -1135,9 +1156,44 @@ mod tests {
reasoning_effort: None,
};
let started =
started_thread_from_resume_response(&response).expect("resume response should map");
let started = started_thread_from_resume_response(response.clone(), &config)
.await
.expect("resume response should map");
assert_eq!(started.turns.len(), 1);
assert_eq!(started.turns[0], response.thread.turns[0]);
}
#[tokio::test]
async fn session_configured_populates_history_metadata() {
let temp_dir = tempfile::tempdir().expect("tempdir");
let config = build_config(&temp_dir).await;
let thread_id = ThreadId::new();
message_history::append_entry("older", &thread_id, &config)
.await
.expect("history append should succeed");
message_history::append_entry("newer", &thread_id, &config)
.await
.expect("history append should succeed");
let session = thread_session_state_from_thread_response(
&thread_id.to_string(),
Some("restore".to_string()),
None,
"gpt-5.4".to_string(),
"openai".to_string(),
None,
AskForApproval::Never,
codex_protocol::config_types::ApprovalsReviewer::User,
SandboxPolicy::new_read_only_policy(),
PathBuf::from("/tmp/project"),
None,
&config,
)
.await
.expect("session should map");
assert_ne!(session.history_log_id, 0);
assert_eq!(session.history_entry_count, 2);
}
}

View File

@@ -740,7 +740,6 @@ impl ChatComposer {
/// composer rehydrates the entry immediately. This path intentionally routes through
/// [`Self::apply_history_entry`] so cursor placement remains aligned with keyboard history
/// recall semantics.
#[cfg(test)]
pub(crate) fn on_history_entry_response(
&mut self,
log_id: u64,

View File

@@ -4,10 +4,9 @@ use std::path::PathBuf;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use crate::bottom_pane::MentionBinding;
use crate::history_cell;
use crate::mention_codec::decode_history_mentions;
use codex_protocol::protocol::Op;
use codex_protocol::user_input::TextElement;
use tracing::warn;
/// A composer history entry that can rehydrate draft state.
#[derive(Debug, Clone, PartialEq)]
@@ -237,7 +236,6 @@ impl ChatComposerHistory {
}
/// Integrate a GetHistoryEntryResponse event.
#[cfg(test)]
pub fn on_entry_response(
&mut self,
log_id: u64,
@@ -280,16 +278,10 @@ impl ChatComposerHistory {
self.last_history_text = Some(entry.text.clone());
return Some(entry);
} else if let Some(log_id) = self.history_log_id {
warn!(
app_event_tx.send(AppEvent::CodexOp(Op::GetHistoryEntryRequest {
offset: global_idx,
log_id,
offset = global_idx,
"composer history fetch is unavailable in app-server TUI"
);
app_event_tx.send(AppEvent::InsertHistoryCell(Box::new(
history_cell::new_error_event(
"Composer history fetch: Not available in app-server TUI yet.".to_string(),
),
)));
}));
}
None
}
@@ -344,17 +336,18 @@ mod tests {
assert!(history.should_handle_navigation("", 0));
assert!(history.navigate_up(&tx).is_none()); // don't replace the text yet
// Verify that the app-server TUI emits an explicit user-facing stub error instead.
// Verify that a history lookup request was sent.
let event = rx.try_recv().expect("expected AppEvent to be sent");
let AppEvent::InsertHistoryCell(cell) = event else {
let AppEvent::CodexOp(op) = event else {
panic!("unexpected event variant");
};
let rendered = cell
.display_lines(80)
.into_iter()
.map(|line| line.to_string())
.collect::<String>();
assert!(rendered.contains("Composer history fetch: Not available in app-server TUI yet."));
assert_eq!(
Op::GetHistoryEntryRequest {
log_id: 1,
offset: 2,
},
op
);
// Inject the async response.
assert_eq!(
@@ -365,17 +358,18 @@ mod tests {
// Next Up should move to offset 1.
assert!(history.navigate_up(&tx).is_none()); // don't replace the text yet
// Verify second stub error for offset 1.
// Verify second lookup request for offset 1.
let event2 = rx.try_recv().expect("expected second event");
let AppEvent::InsertHistoryCell(cell) = event2 else {
let AppEvent::CodexOp(op) = event2 else {
panic!("unexpected event variant");
};
let rendered = cell
.display_lines(80)
.into_iter()
.map(|line| line.to_string())
.collect::<String>();
assert!(rendered.contains("Composer history fetch: Not available in app-server TUI yet."));
assert_eq!(
Op::GetHistoryEntryRequest {
log_id: 1,
offset: 1,
},
op
);
assert_eq!(
Some(HistoryEntry::new("older".to_string())),

View File

@@ -1073,7 +1073,6 @@ impl BottomPane {
|| self.composer.is_in_paste_burst()
}
#[cfg(test)]
pub(crate) fn on_history_entry_response(
&mut self,
log_id: u64,

View File

@@ -46,6 +46,8 @@ use crate::audio_device::list_realtime_audio_device_names;
use crate::bottom_pane::StatusLineItem;
use crate::bottom_pane::StatusLinePreviewData;
use crate::bottom_pane::StatusLineSetupView;
use crate::mention_codec::LinkedMention;
use crate::mention_codec::encode_history_mentions;
use crate::model_catalog::ModelCatalog;
use crate::multi_agents;
use crate::status::RateLimitWindowDisplay;
@@ -3474,8 +3476,7 @@ impl ChatWidget {
}
}
#[cfg(test)]
fn on_get_history_entry_response(
pub(crate) fn handle_history_entry_response(
&mut self,
event: codex_protocol::protocol::GetHistoryEntryResponseEvent,
) {
@@ -5316,9 +5317,19 @@ impl ChatWidget {
return;
}
// Persist the text to cross-session message history.
// Persist the text to cross-session message history. Mentions are
// encoded into placeholder syntax so recall can reconstruct the
// mention bindings in a future session.
if !text.is_empty() {
warn!("skipping composer history persistence in app-server TUI");
let encoded_mentions = mention_bindings
.iter()
.map(|binding| LinkedMention {
mention: binding.mention.clone(),
path: binding.path.clone(),
})
.collect::<Vec<_>>();
let history_text = encode_history_mentions(&text, &encoded_mentions);
self.submit_op(Op::AddToHistory { text: history_text });
}
if let Some(pending_steer) = pending_steer {
@@ -6440,7 +6451,7 @@ impl ChatWidget {
EventMsg::McpToolCallEnd(ev) => self.on_mcp_tool_call_end(ev),
EventMsg::WebSearchBegin(ev) => self.on_web_search_begin(ev),
EventMsg::WebSearchEnd(ev) => self.on_web_search_end(ev),
EventMsg::GetHistoryEntryResponse(ev) => self.on_get_history_entry_response(ev),
EventMsg::GetHistoryEntryResponse(ev) => self.handle_history_entry_response(ev),
EventMsg::McpListToolsResponse(ev) => self.on_list_mcp_tools(ev),
EventMsg::ListCustomPromptsResponse(_) => {
tracing::warn!(