tui: restore interactive slash queue behavior

Keep bare /model and /review interactive while preserving serialized queue replay, restore queued slash drafts into the composer on interrupt, and align queued slash parsing with the same feature-gated lookup used by the composer.
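The dispatch rule the message describes can be sketched as follows. This is a hedged illustration only — `SlashDispatch` and `dispatch_slash` are hypothetical names, not the actual codex TUI API: a bare `/model` or `/review` opens its interactive flow immediately, while any other slash input (including the same commands with arguments) goes through the serialized queue replay.

```rust
// Hypothetical sketch of the interactive-vs-queued split for slash input.
#[derive(Debug, PartialEq)]
enum SlashDispatch {
    Interactive,    // open the interactive flow (model/effort picker, review UI)
    Queued(String), // replay serially as a queued follow-up message
}

fn dispatch_slash(input: &str) -> SlashDispatch {
    match input.trim() {
        // Bare commands stay interactive.
        "/model" | "/review" => SlashDispatch::Interactive,
        // Everything else is queued for serialized replay.
        other => SlashDispatch::Queued(other.to_string()),
    }
}
```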

commit fa50564579
parent 35e8aa9ce0
Author: Charles Cunningham
Date: 2026-03-11 11:28:38 -07:00
Co-authored-by: Codex <noreply@openai.com>

5 changed files with 315 additions and 95 deletions
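The "restore queued slash drafts into the composer on interrupt" behavior can be sketched like this. The types here (`ChatWidget`, `Composer`) are hypothetical stand-ins, not the real structs: on interrupt, pending drafts are drained back into the composer text rather than dropped, so the user can edit and resubmit them.

```rust
// Hypothetical sketch: drafts queued behind a running turn return to the
// composer when the turn is interrupted.
struct Composer {
    text: String,
}

struct ChatWidget {
    queued: Vec<String>,
    composer: Composer,
}

impl ChatWidget {
    fn on_interrupt(&mut self) {
        // Drain every queued draft back into the composer, one per line.
        let restored: Vec<String> = self.queued.drain(..).collect();
        if !restored.is_empty() {
            self.composer.text = restored.join("\n");
        }
    }
}
```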


@@ -0,0 +1,22 @@
---
source: tui/src/chatwidget/tests.rs
expression: term.backend().vt100().screen().contents()
---
Select Model and Effort
Access legacy models by running codex -m <model_name> or in your config.toml
1. gpt-5.3-codex (default) Latest frontier agentic coding model.
2. gpt-5.4 Latest frontier agentic coding model.
3. gpt-5.2-codex Frontier agentic coding model.
4. gpt-5.1-codex-max Codex-optimized flagship for deep and fast
reasoning.
5. gpt-5.2 Latest frontier model with improvements across
knowledge, reasoning and coding
6. gpt-5.1-codex-mini Optimized for codex. Cheaper, faster, but less
capable.
Press enter to select reasoning effort, or esc to dismiss.


@@ -1,22 +0,0 @@
---
source: tui/src/chatwidget/tests.rs
expression: term.backend().vt100().screen().contents()
---
• Working (0s • esc to interrupt)
• Queued follow-up messages
↳ /model
⌥ + ↑ edit last queued message
Ask Codex to do anything
? for shortcuts 100% context left