Summary:
- Fixes issue #9932: https://github.com/openai/codex/issues/9932
- Prevents `$CODEX_HOME` (typically `~/.codex`) from being discovered as
a project `.codex` layer by skipping it during project layer traversal.
We compare both normalized absolute paths and best-effort canonicalized
paths to handle symlinks.
- Adds regression tests for home-directory invocation and for the case
where `CODEX_HOME` points to a project `.codex` directory (e.g.,
worktrees/editor integrations).
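For illustration, a minimal sketch of the skip check described in the first bullet above, using hypothetical helper names (`is_codex_home`, `project_layers`); the real traversal lives in the core config-layer code:

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper: true when a candidate project `.codex` directory is
// actually `$CODEX_HOME` itself and should be skipped during layer traversal.
fn is_codex_home(candidate: &Path, codex_home: &Path) -> bool {
    // Compare normalized absolute paths first.
    if candidate == codex_home {
        return true;
    }
    // Best-effort canonicalization handles symlinks; ignore errors
    // (e.g. the directory may not exist yet).
    match (candidate.canonicalize(), codex_home.canonicalize()) {
        (Ok(a), Ok(b)) => a == b,
        _ => false,
    }
}

// Hypothetical traversal: collect `.codex` layers from the project root
// upward, skipping the one that is `$CODEX_HOME`.
fn project_layers(project_root: &Path, codex_home: &Path) -> Vec<PathBuf> {
    project_root
        .ancestors()
        .map(|dir| dir.join(".codex"))
        .filter(|dot_codex| !is_codex_home(dot_codex, codex_home))
        .collect()
}
```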
Testing:
- `cargo build -p codex-cli --bin codex`
- `cargo build -p codex-rmcp-client --bin test_stdio_server`
- `cargo test -p codex-core`
- `cargo test --all-features`
- Manual: ran `target/debug/codex` from `~` and confirmed the
disabled-folder warning and trust prompt no longer appear.
When using ChatGPT in names of types, we should be consistent, so this
renames some types with `ChatGpt` in the name to `Chatgpt`. From
https://rust-lang.github.io/api-guidelines/naming.html:
> In `UpperCamelCase`, acronyms and contractions of compound words count
as one word: use `Uuid` rather than `UUID`, `Usize` rather than `USize`
or `Stdin` rather than `StdIn`. In `snake_case`, acronyms and
contractions are lower-cased: `is_xid_start`.
This PR updates existing uses of `ChatGpt` to `Chatgpt`. In every case where the change could affect the wire format, I visually inspected the code and confirmed nothing changes there. That said, this _will_ change the codegen, because it affects the spelling of type names.
For example, this renames `AuthMode::ChatGPT` to `AuthMode::Chatgpt` in
`app-server-protocol`, but the wire format is still `"chatgpt"`.
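As a sketch of that rename (illustrative only; the actual serde attributes in `app-server-protocol` may differ), the serialized form can stay stable while the variant follows the naming guideline:

```rust
use serde::{Deserialize, Serialize};

// Illustrative only: renaming the Rust variant does not have to change the
// serialized form.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
enum AuthMode {
    #[serde(rename = "apiKey")]
    ApiKey,
    #[serde(rename = "chatgpt")]
    Chatgpt,
}

fn main() {
    // The wire format is still "chatgpt".
    assert_eq!(serde_json::to_string(&AuthMode::Chatgpt).unwrap(), "\"chatgpt\"");
}
```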
This PR also updates a number of types in `codex-rs/core/src/auth.rs`.
Title
Hide Code mode footer label/cycle hint; add Plan footer-collapse
snapshots
Summary
- Keep Code mode internal naming but suppress the footer mode label +
cycle hint when Code is active.
- Only show the cycle hint when a non‑Code mode indicator is present.
- Add Plan-mode footer collapse snapshot coverage (empty + queued,
across widths) and update existing footer collapse snapshots for the new
Code behavior.
Notes
- The test run currently fails in codex-cloud-requirements on
origin/main due to a stale auth.mode field; no fix is included in this
PR to keep the diff minimal.
Codex author
`codex resume 019c0296-cfd4-7193-9b0a-6949048e4546`
## Summary
- Stream proposed plans in Plan Mode using `<proposed_plan>` tags parsed
in core, emitting plan deltas plus a plan `ThreadItem`, while stripping
tags from normal assistant output.
- Persist plan items and rebuild them on resume so proposed plans show
in thread history.
- Wire plan items/deltas through app-server protocol v2 and render a
dedicated proposed-plan view in the TUI, including the “Implement this
plan?” prompt only when a plan item is present.
## Changes
### Core (`codex-rs/core`)
- Added a generic, line-based tag parser that buffers each line until it
can rule out a tag prefix, and auto-closes unterminated tags on
`finish()` (see the sketch after this section).
`codex-rs/core/src/tagged_block_parser.rs`
- Refactored proposed plan parsing to wrap the generic parser.
`codex-rs/core/src/proposed_plan_parser.rs`
- In plan mode, stream assistant deltas as:
- **Normal text** → `AgentMessageContentDelta`
- **Plan text** → `PlanDelta` + `TurnItem::Plan` start/completion
(`codex-rs/core/src/codex.rs`)
- Final plan item content is derived from the completed assistant
message (authoritative), not necessarily the concatenated deltas.
- Strips `<proposed_plan>` blocks from assistant text in plan mode so
tags don’t appear in normal messages.
(`codex-rs/core/src/stream_events_utils.rs`)
- Persist `ItemCompleted` events only for plan items, so they can be
replayed from the rollout. (`codex-rs/core/src/rollout/policy.rs`)
- Guard `update_plan` tool in Plan Mode with a clear error message.
(`codex-rs/core/src/tools/handlers/plan.rs`)
- Updated Plan Mode prompt to:
- keep `<proposed_plan>` out of non-final reasoning/preambles
- require exact tag formatting
- allow only one `<proposed_plan>` block per turn
(`codex-rs/core/templates/collaboration_mode/plan.md`)
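For context, a rough sketch of the line-based tag parser mentioned above (names and details are illustrative; the real implementation is `codex-rs/core/src/tagged_block_parser.rs`). `Text` chunks correspond to normal assistant deltas and `Tagged` chunks to plan deltas:

```rust
// Illustrative sketch only; see codex-rs/core/src/tagged_block_parser.rs for
// the real implementation.
#[derive(Debug, PartialEq)]
enum Chunk {
    Text(String),   // normal assistant output
    Tagged(String), // content inside the tagged block (e.g. <proposed_plan>)
}

struct TaggedBlockParser {
    open_tag: String,
    close_tag: String,
    buffer: String, // trailing partial line from the stream
    in_block: bool,
    out: Vec<Chunk>,
}

impl TaggedBlockParser {
    fn new(tag: &str) -> Self {
        Self {
            open_tag: format!("<{tag}>"),
            close_tag: format!("</{tag}>"),
            buffer: String::new(),
            in_block: false,
            out: Vec::new(),
        }
    }

    /// Feed a streamed delta. Complete lines are classified immediately; a
    /// trailing partial line is held back until it can be ruled out as a tag.
    fn push(&mut self, delta: &str) {
        self.buffer.push_str(delta);
        while let Some(idx) = self.buffer.find('\n') {
            let line: String = self.buffer.drain(..=idx).collect();
            self.classify(line.trim_end_matches('\n'));
        }
        let expected = if self.in_block { &self.open_tag } else { &self.open_tag };
        let expected = if self.in_block { &self.close_tag } else { expected };
        if !self.buffer.is_empty() && !expected.starts_with(self.buffer.as_str()) {
            // The partial line can no longer become a tag: flush it.
            let partial = std::mem::take(&mut self.buffer);
            self.emit(&partial);
        }
    }

    fn classify(&mut self, line: &str) {
        if !self.in_block && line == self.open_tag {
            self.in_block = true;
        } else if self.in_block && line == self.close_tag {
            self.in_block = false;
        } else {
            self.emit(line);
        }
    }

    fn emit(&mut self, text: &str) {
        let chunk = if self.in_block {
            Chunk::Tagged(text.to_string())
        } else {
            Chunk::Text(text.to_string())
        };
        self.out.push(chunk);
    }

    /// Auto-close an unterminated block at end of message.
    fn finish(mut self) -> Vec<Chunk> {
        if !self.buffer.is_empty() {
            let partial = std::mem::take(&mut self.buffer);
            self.classify(&partial);
        }
        self.out
    }
}
```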
### Protocol / App-server protocol
- Added `TurnItem::Plan` and `PlanDeltaEvent` to core protocol items.
(`codex-rs/protocol/src/items.rs`, `codex-rs/protocol/src/protocol.rs`)
- Added v2 `ThreadItem::Plan` and `PlanDeltaNotification` with
EXPERIMENTAL markers and a note that deltas may not match the final plan
item. (`codex-rs/app-server-protocol/src/protocol/v2.rs`)
- Added plan delta route in app-server protocol common mapping.
(`codex-rs/app-server-protocol/src/protocol/common.rs`)
- Rebuild plan items from persisted `ItemCompleted` events on resume.
(`codex-rs/app-server-protocol/src/protocol/thread_history.rs`)
### App-server
- Forward plan deltas to v2 clients and map core plan items to v2 plan
items. (`codex-rs/app-server/src/bespoke_event_handling.rs`,
`codex-rs/app-server/src/codex_message_processor.rs`)
- Added v2 plan item tests.
(`codex-rs/app-server/tests/suite/v2/plan_item.rs`)
### TUI
- Added a dedicated proposed plan history cell with special background
and padding, and moved “• Proposed Plan” outside the highlighted block.
(`codex-rs/tui/src/history_cell.rs`, `codex-rs/tui/src/style.rs`)
- Only show “Implement this plan?” when a plan item exists.
(`codex-rs/tui/src/chatwidget.rs`,
`codex-rs/tui/src/chatwidget/tests.rs`)
<img width="831" height="847" alt="Screenshot 2026-01-29 at 7 06 24 PM"
src="https://github.com/user-attachments/assets/69794c8c-f96b-4d36-92ef-c1f5c3a8f286"
/>
### Docs / Misc
- Updated protocol docs to mention plan deltas.
(`codex-rs/docs/protocol_v1.md`)
- Minor plumbing updates in exec/debug clients to tolerate plan deltas.
(`codex-rs/debug-client/src/reader.rs`, `codex-rs/exec/...`)
## Tests
- Added core integration tests:
- Plan mode strips plan from agent messages.
- Missing `</proposed_plan>` closes at end-of-message.
(`codex-rs/core/tests/suite/items.rs`)
- Added unit tests for generic tag parser (prefix buffering, non-tag
lines, auto-close). (`codex-rs/core/src/tagged_block_parser.rs`)
- Existing app-server plan item tests in v2.
(`codex-rs/app-server/tests/suite/v2/plan_item.rs`)
## Notes / Behavior
- Plan output no longer appears in standard assistant text in Plan Mode;
it streams via `PlanDelta` and completes as a `TurnItem::Plan`.
- The final plan item content is authoritative and may diverge from
streamed deltas (documented as experimental).
- Reasoning summaries are not filtered; prompt instructs the model not
to include `<proposed_plan>` outside the final plan message.
## Codex Author
`codex fork 019bec2d-b09d-7450-b292-d7bcdddcdbfb`
## Summary
- Tightens Plan Mode to encourage exploration-first behavior and more
back-and-forth alignment.
- Adds a required TL;DR checkpoint before drafting the full plan.
- Clarifies client behavior that can cause premature “Implement this
plan?” prompts.
## What changed
- Require at least one targeted non-mutating exploration pass before the
first user question.
- Insert a TL;DR checkpoint between Phase 2 (intent) and Phase 3
(implementation).
- TL;DR checkpoint guidance:
- Label: “Proposed Plan (TL;DR)”
- Format: 3–5 bullets using `- `
- Options: exactly one option, “Approve”
- `isOther: true`, with explicit guidance that “None of the above” is
the edit path in the current UI.
- Require the final plan to include a TL;DR consistent with the approved
checkpoint.
## Why
- In Plan Mode, any normal assistant message at turn completion is
treated as plan content by the client. This can trigger premature
“Implement this plan?” prompts.
- The TL;DR checkpoint aligns on direction before Codex drafts a long,
decision-complete plan.
## Testing
- Manual: built the local CLI and verified the flow now explores first,
presents a TL;DR checkpoint, and only drafts the full plan after
approval.
---------
Co-authored-by: Nick Baumann <@openai.com>
`requirements.toml` should be able to specify rules which always run.
My intention here was that these rules could only ever be restrictive,
which means the decision can be "prompt" or "forbidden" but never
"allow". A requirement of "you must always allow this command" didn't
make sense to me, but happy to be gaveled otherwise.
Rules already applies the most restrictive decision, so we can safely
merge these with rules found in other config folders.
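A minimal sketch of that "most restrictive wins" merge, with hypothetical type names; the real decision types live in the rules code:

```rust
// Hypothetical decision type; ordering is from least to most restrictive.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Decision {
    Allow,
    Prompt,
    Forbidden,
}

/// Merge decisions from multiple config folders: the most restrictive wins.
fn merge(decisions: impl IntoIterator<Item = Decision>) -> Decision {
    decisions.into_iter().max().unwrap_or(Decision::Allow)
}

fn main() {
    // A requirements.toml rule can contribute Prompt or Forbidden, never
    // Allow, so merging it with other layers can only tighten the result.
    assert_eq!(merge([Decision::Allow, Decision::Prompt]), Decision::Prompt);
    assert_eq!(merge([Decision::Prompt, Decision::Forbidden]), Decision::Forbidden);
}
```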
Previously, `CodexAuth` was defined as follows:
d550fbf41a/codex-rs/core/src/auth.rs (L39-L46)
But if you looked at its constructors, we had creation for
`AuthMode::ApiKey` where `storage` was built using a nonsensical path
(`PathBuf::new()`) and `auth_dot_json` was `None`:
d550fbf41a/codex-rs/core/src/auth.rs (L212-L220)
By comparison, when `AuthMode::ChatGPT` was used, `api_key` was always
`None`:
d550fbf41a/codex-rs/core/src/auth.rs (L665-L671)
https://github.com/openai/codex/pull/10012 took things further because
it introduced a new `ChatgptAuthTokens` variant to `AuthMode`, which is
important when invoking `account/login/start` via the app server, but
most logic _internal_ to the app server should just reason about two
`AuthMode` variants: `ApiKey` and `ChatGPT`.
This PR tries to clean things up as follows:
- `LoginAccountParams` and `AuthMode` in `codex-rs/app-server-protocol/`
both continue to have the `ChatgptAuthTokens` variant, though it is used
exclusively for the on-the-wire messaging.
- `codex-rs/core/src/auth.rs` now has its own `AuthMode` enum, which
only has two variants: `ApiKey` and `ChatGPT`.
- `CodexAuth` has been changed from a struct to an enum. It is a
disjoint union where each variant (`ApiKey`, `ChatGpt`, and
`ChatGptAuthTokens`) carries only the associated fields that make sense
for that variant.
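A rough sketch of the resulting shape (field names are illustrative; see `codex-rs/core/src/auth.rs` for the real definition):

```rust
use std::path::PathBuf;

// Illustrative only: each variant carries just the fields that make sense for
// it, instead of one struct padded with nonsensical defaults.
#[allow(dead_code)]
enum CodexAuth {
    ApiKey {
        api_key: String,
    },
    ChatGpt {
        storage: PathBuf, // where auth.json lives; refresh state etc. elided
    },
    ChatGptAuthTokens {
        id_token: String,     // account metadata
        access_token: String, // bearer token, held only in memory
    },
}
```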
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/10208).
* #10224
* __->__ #10208
Load requirements from the Codex backend. This only happens for
enterprise customers signed in with ChatGPT.
Todo in follow-up PRs:
* Add to app-server and exec too
* Switch from fail-open to fail-closed on failure
Session renaming:
- `/rename my_session`
- `/rename` without arg and passing an argument in `customViewPrompt`
- `AppExitInfo` shows the resume hint using the session name if set,
falling back to the UUID otherwise
- Names are stored in `CODEX_HOME/sessions.jsonl`
Session resuming:
- `codex resume <name>` looks up the first entry in
`CODEX_HOME/sessions.jsonl` matching the name and resumes that session
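A minimal sketch of that lookup (the record fields here are hypothetical; the real entries in `CODEX_HOME/sessions.jsonl` may differ):

```rust
use std::fs;
use std::path::Path;

use serde::Deserialize;

// Hypothetical record shape for a sessions.jsonl entry.
#[derive(Deserialize)]
struct SessionEntry {
    name: String,
    id: String,
}

/// `codex resume <name>`: return the id of the first entry whose name matches.
fn resolve_session(sessions_jsonl: &Path, name: &str) -> std::io::Result<Option<String>> {
    let contents = fs::read_to_string(sessions_jsonl)?;
    Ok(contents
        .lines()
        .filter_map(|line| serde_json::from_str::<SessionEntry>(line).ok())
        .find(|entry| entry.name == name)
        .map(|entry| entry.id))
}
```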
---------
Co-authored-by: jif-oai <jif@openai.com>
## Summary
- add startup tooltip for OpenAI community Discord
- add startup tooltip for Codex community forum
## Testing
- not run (text-only tooltip change)
## Summary
Let's dial in this API contract a bit more, with more robust fallback
behavior when `model_instructions_template` is false.
Switches to a more explicit template / variables structure, with more
fallbacks.
## Testing
- [x] Adding unit tests
- [x] Tested locally
## Problem
OpenAI employees were sent to the public GitHub issue flow after
`/feedback`, which is the wrong follow-up path internally.
## Mental model
After feedback upload completes, we render a follow-up link/message.
That link should be audience-aware but must not change the upload
pipeline itself.
## Non-goals
- Changing how feedback is captured or uploaded
- Changing external user behavior
## Tradeoffs
We detect employees via the authenticated account email suffix
(`@openai.com`). If the email is unavailable (e.g., API key auth), we
default to the external behavior.
## Architecture
- Introduce `FeedbackAudience` and thread it from `App` -> `ChatWidget`
-> `FeedbackNoteView`
- Gate internal messaging/links on `FeedbackAudience::OpenAiEmployee`
- Internal follow-up link is now `http://go/codex-feedback-internal`
- External GitHub URL remains byte-for-byte identical
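A minimal sketch of the audience detection and gating described above (the external URL is elided here, not reproduced):

```rust
// Illustrative sketch; the real type is threaded App -> ChatWidget ->
// FeedbackNoteView, and the external URL is unchanged (placeholder below).
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum FeedbackAudience {
    OpenAiEmployee,
    External,
}

const INTERNAL_FOLLOW_UP_URL: &str = "http://go/codex-feedback-internal";
const EXTERNAL_FOLLOW_UP_URL: &str = "<existing external GitHub URL, unchanged>"; // placeholder

/// Detect the audience from the authenticated account email, if any.
/// API-key auth exposes no email, so it falls back to External.
fn feedback_audience(account_email: Option<&str>) -> FeedbackAudience {
    match account_email {
        Some(email) if email.ends_with("@openai.com") => FeedbackAudience::OpenAiEmployee,
        _ => FeedbackAudience::External,
    }
}

fn follow_up_url(audience: FeedbackAudience) -> &'static str {
    match audience {
        FeedbackAudience::OpenAiEmployee => INTERNAL_FOLLOW_UP_URL,
        FeedbackAudience::External => EXTERNAL_FOLLOW_UP_URL,
    }
}
```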
## Observability
No new telemetry; this only changes rendered follow-up instructions.
## Tests
- `just fmt`
- `cargo test -p codex-tui --lib`
Add dynamic tools to the rollout file for persistence, and read them
from the rollout on resume. Ran a real example and spotted the following
in the rollout file:
```
{"timestamp":"2026-01-29T01:27:57.468Z","type":"session_meta","payload":{"id":"019c075d-3f0b-77e3-894e-c1c159b04b1e","timestamp":"2026-01-29T01:27:57.451Z","...."dynamic_tools":[{"name":"demo_tool","description":"Demo dynamic tool","inputSchema":{"additionalProperties":false,"properties":{"city":{"type":"string"}},"required":["city"],"type":"object"}}],"git":{"commit_hash":"ebc573f15c01b8af158e060cfedd401f043e9dfa","branch":"dev/cc/dynamic-tools","repository_url":"https://github.com/openai/codex.git"}}}
```
## Summary
Adds a simple `/plan` slash command in the TUI that switches the active
collaboration mode to Plan mode. The command is only available when the
`collaboration_modes` feature is enabled.
## Changes
- Add `plan_mask` helper in `codex-rs/tui/src/collaboration_modes.rs`
- Add `SlashCommand::Plan` metadata in
`codex-rs/tui/src/slash_command.rs`
- Implement and hard-gate `/plan` dispatch in
`codex-rs/tui/src/chatwidget.rs`
- Hide `/plan` when collaboration modes are disabled in
`codex-rs/tui/src/bottom_pane/slash_commands.rs`
- Update command popup tests in
`codex-rs/tui/src/bottom_pane/command_popup.rs`
- Add a focused unit test for `/plan` in
`codex-rs/tui/src/chatwidget/tests.rs`
## Behavior notes
- `/plan` is now a no-op if `Feature::CollaborationModes` is disabled.
- When enabled, `/plan` switches directly to Plan mode without opening
the picker.
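An illustrative sketch of the hard gate (the real dispatch lives in `codex-rs/tui/src/chatwidget.rs` and uses the actual feature/mode types):

```rust
// Hypothetical types; names are illustrative only.
#[derive(Clone, Copy, PartialEq, Eq)]
enum CollaborationMode {
    Default,
    Plan,
}

struct ChatWidget {
    collaboration_modes_enabled: bool,
    mode: CollaborationMode,
}

impl ChatWidget {
    fn dispatch_plan_command(&mut self) {
        // `/plan` is a no-op unless the collaboration_modes feature is enabled.
        if !self.collaboration_modes_enabled {
            return;
        }
        // Switch directly to Plan mode without opening the mode picker.
        self.mode = CollaborationMode::Plan;
    }
}
```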
## Codex author
`codex resume 019c05da-d7c3-7322-ae2c-3ca38d0ef702`
This enables a new use case where `codex app-server` is embedded into a
parent application that directly owns the user's ChatGPT auth lifecycle,
meaning it owns the user's auth tokens and refreshes them when
necessary. The parent application just needs a way to pass those auth
tokens in for codex to use directly.
The idea is that we are introducing a new "auth mode", currently only
exposed via the app server: **`chatgptAuthTokens`**, which consists of
the `id_token` (stores account metadata) and the `access_token` (the
bearer token used directly for backend API calls). These auth tokens are
only stored in memory. This new mode is in addition to the existing
`apiKey` and `chatgpt` auth modes.
This PR reuses the shape of our existing app-server account APIs as much
as possible:
- Update `account/login/start` with a new `chatgptAuthTokens` variant,
which will allow the client to pass in the tokens and have codex
app-server use them directly. Upon success, the server emits
`account/login/completed` and `account/updated` notifications.
- A new server->client request called
`account/chatgptAuthTokens/refresh` which the server can use whenever
the access token previously passed in has expired and it needs a new one
from the parent application.
I leveraged the core 401 retry loop which typically triggers auth token
refreshes automatically, but made it pluggable:
- **chatgpt** mode refreshes internally, as usual.
- **chatgptAuthTokens** mode calls the client via
`account/chatgptAuthTokens/refresh`, the client responds with updated
tokens, codex updates its in-memory auth, then retries. This RPC has a
10s timeout and handles JSON-RPC errors from the client.
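A rough sketch of that pluggable shape (trait and function names are hypothetical; the real retry loop lives in core and the RPC is `account/chatgptAuthTokens/refresh`):

```rust
// Hypothetical refresh hook used by the 401 retry loop.
trait TokenRefresher {
    fn refresh(&self) -> Result<String, String>;
}

// chatgpt mode: refresh internally, as usual.
struct InternalRefresher;
impl TokenRefresher for InternalRefresher {
    fn refresh(&self) -> Result<String, String> {
        // ...existing internal refresh flow (elided in this sketch)...
        Err("sketch only".to_string())
    }
}

// chatgptAuthTokens mode: ask the parent application over JSON-RPC
// (account/chatgptAuthTokens/refresh, 10s timeout), store the returned
// tokens in memory, then retry the original request.
struct ClientRpcRefresher;
impl TokenRefresher for ClientRpcRefresher {
    fn refresh(&self) -> Result<String, String> {
        // ...send server->client request, await response or timeout (elided)...
        Err("sketch only".to_string())
    }
}

/// Core 401 retry loop, made pluggable over the refresher.
fn call_with_retry(
    refresher: &dyn TokenRefresher,
    mut do_request: impl FnMut(&str) -> Result<(), u16>, // returns HTTP status on error
    mut access_token: String,
) -> Result<(), String> {
    match do_request(&access_token) {
        Ok(()) => Ok(()),
        Err(401) => {
            access_token = refresher.refresh()?;
            do_request(&access_token).map_err(|status| format!("request failed: {status}"))
        }
        Err(status) => Err(format!("request failed: {status}")),
    }
}
```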
A few additional details:
- chatgpt logins are blocked while external auth is active (you have to
log out first; typically clients will pick one or the other, not support
both)
- `account/logout` clears external auth in memory
- Ensures that if `forced_chatgpt_workspace_id` is set via the user's
config, we respect it in both:
- `account/login/start` with `chatgptAuthTokens` (returns a JSON-RPC
error back to the client)
- `account/chatgptAuthTokens/refresh` (fails the turn, and on next
request app-server will send another `account/chatgptAuthTokens/refresh`
request to the client).
###### Problem
Users get generic 429s with no guidance when a model is at capacity.
###### Solution
Detect model-cap headers, surface a clear “try a different model”
message, and keep behavior non‑intrusive (no auto‑switch).
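A minimal sketch of the detection, with a placeholder header name since the actual header isn't spelled out here:

```rust
// Placeholder header name; the real header and error-mapping types differ.
const MODEL_CAP_HEADER: &str = "x-model-at-capacity";

fn user_facing_429_message(status: u16, headers: &[(String, String)]) -> String {
    let model_capped = status == 429
        && headers
            .iter()
            .any(|(name, _)| name.eq_ignore_ascii_case(MODEL_CAP_HEADER));
    if model_capped {
        // Surface guidance instead of a generic 429; do not auto-switch models.
        "This model is at capacity right now; try a different model.".to_string()
    } else {
        "Rate limited (HTTP 429); please retry shortly.".to_string()
    }
}
```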
###### Scope
CLI/TUI only; protocol + error mapping updated to carry model‑cap info.
###### Tests
- just fmt
- cargo test -p codex-tui
- cargo test -p codex-core --lib
shell_snapshot::tests::try_new_creates_and_deletes_snapshot_file --
--nocapture (ran in isolated env)
- validate local build with backend
<img width="719" height="845" alt="image"
src="https://github.com/user-attachments/assets/1470b33d-0974-4b1f-b8e6-d11f892f4b54"
/>
## Summary
- add `codex features enable <feature>` and `codex features disable
<feature>`
- persist feature flag changes to `config.toml` (respecting profile)
- print the under-development feature warning when enabling prerelease
features
- keep `features list` behavior unchanged and add unit/integration tests
## Testing
- cargo test -p codex-cli
An experimental flow for env var skill dependencies. Skills can now
declare required env vars in SKILL.md; if any are missing, the CLI
prompts the user for the value, and Core stores it in memory (eventually
in a local persistent store).
<img width="790" height="169" alt="image"
src="https://github.com/user-attachments/assets/cd928918-9403-43cb-a7e7-b8d59bcccd9a"
/>
On the back of:
https://github.com/openai/codex/pull/10138
Let's ensure that every folder with a `package.json` is listed in
`pnpm-workspace.yaml` (not sure why `docs` was in there...) and that we
are using `pnpm` over `npm` consistently (which is why this PR deletes
`codex-cli/package-lock.json`).