mirror of
https://github.com/openai/codex.git
synced 2026-05-03 19:06:58 +00:00
tui: restore interactive slash queue behavior
Keep bare /model and /review interactive while preserving serialized queue replay, restore queued slash drafts into the composer on interrupt, and align queued slash parsing with the same feature-gated lookup used by the composer.

Co-authored-by: Codex <noreply@openai.com>
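The decision the commit message describes can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: `SlashCommand`, `lookup_slash_command`, `is_interactive`, and the `review_enabled` flag are hypothetical names standing in for whatever shared, feature-gated lookup the composer and queue parser use.

```rust
// Hypothetical sketch of routing queued slash drafts: the same lookup the
// composer uses decides whether a queued draft opens an interactive popup
// or gets replayed as serialized text. All names here are illustrative.

/// Built-in slash commands the composer knows about.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SlashCommand {
    Model,
    Review,
}

/// Feature-gated lookup shared by the composer and the queue parser, so a
/// queued "/model" resolves exactly like one typed directly.
fn lookup_slash_command(input: &str, review_enabled: bool) -> Option<SlashCommand> {
    match input.trim() {
        "/model" => Some(SlashCommand::Model),
        "/review" if review_enabled => Some(SlashCommand::Review),
        _ => None,
    }
}

/// Bare /model and /review open an interactive popup instead of being
/// replayed as plain text when the queue drains.
fn is_interactive(cmd: SlashCommand) -> bool {
    matches!(cmd, SlashCommand::Model | SlashCommand::Review)
}

fn main() {
    let queued = ["/model", "/review", "hello"];
    for draft in queued {
        match lookup_slash_command(draft, true) {
            Some(cmd) if is_interactive(cmd) => {
                println!("{draft}: open interactive popup");
            }
            Some(_) => println!("{draft}: replay serialized command"),
            None => println!("{draft}: replay as a plain message"),
        }
    }
}
```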
@@ -0,0 +1,22 @@
---
source: tui/src/chatwidget/tests.rs
expression: term.backend().vt100().screen().contents()
---

Select Model and Effort
Access legacy models by running codex -m <model_name> or in your config.toml

› 1. gpt-5.3-codex (default)  Latest frontier agentic coding model.
  2. gpt-5.4                  Latest frontier agentic coding model.
  3. gpt-5.2-codex            Frontier agentic coding model.
  4. gpt-5.1-codex-max        Codex-optimized flagship for deep and fast
                              reasoning.
  5. gpt-5.2                  Latest frontier model with improvements across
                              knowledge, reasoning and coding
  6. gpt-5.1-codex-mini       Optimized for codex. Cheaper, faster, but less
                              capable.

Press enter to select reasoning effort, or esc to dismiss.
@@ -1,22 +0,0 @@
---
source: tui/src/chatwidget/tests.rs
expression: term.backend().vt100().screen().contents()
---

• Working (0s • esc to interrupt)

• Queued follow-up messages
  ↳ /model
  ⌥ + ↑ edit last queued message

› Ask Codex to do anything

? for shortcuts                                              100% context left
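For context, the two hunks above are insta snapshot files: the `expression:` header records what the test asserts on, and the body is the captured terminal screen. A minimal, runnable sketch of that pattern, with a stand-in string instead of the real terminal dump (the test name and string are illustrative):

```rust
// Minimal illustration of the insta snapshot pattern these files belong to;
// `screen_contents` stands in for the real
// term.backend().vt100().screen().contents() expression recorded above.
#[test]
fn model_popup_screen_snapshot() {
    let screen_contents = "Select Model and Effort"; // stand-in screen dump
    // On first run insta records a .snap file like the hunks in this diff;
    // later runs fail if the rendered screen drifts from the stored snapshot.
    insta::assert_snapshot!(screen_contents);
}
```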