Commit Graph

3221 Commits

Author SHA1 Message Date
pakrym-oai
aacd530a41 Update copy (#10256)
<img width="839" height="62" alt="image"
src="https://github.com/user-attachments/assets/ca987cdb-9e8c-403e-8856-a9b37baa7673"
/>
2026-01-30 12:57:19 -08:00
daniel-oai
dd6c1d3787 Skip loading codex home as project layer (#10207)
Summary:
- Fixes issue #9932: https://github.com/openai/codex/issues/9932
- Prevents `$CODEX_HOME` (typically `~/.codex`) from being discovered as
a project `.codex` layer by skipping it during project layer traversal.
We compare both normalized absolute paths and best-effort canonicalized
paths to handle symlinks (see the sketch below).
- Adds regression tests for home-directory invocation and for the case
where `CODEX_HOME` points to a project `.codex` directory (e.g.,
worktrees/editor integrations).
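For illustration, the skip check can be as small as comparing the candidate
layer against `$CODEX_HOME` both directly and after best-effort
canonicalization; this is only a sketch with made-up helper names, not the
actual codex-core code:

```
use std::fs;
use std::path::{Path, PathBuf};

// Sketch only: helper names are hypothetical, not the real codex-core API.

/// Best-effort canonicalization: fall back to the input path when the
/// filesystem call fails (e.g. the path does not exist).
fn canonicalize_or_self(path: &Path) -> PathBuf {
    fs::canonicalize(path).unwrap_or_else(|_| path.to_path_buf())
}

/// True when a discovered `.codex` directory is actually `$CODEX_HOME`,
/// in which case project layer traversal should skip it.
fn is_codex_home(candidate: &Path, codex_home: &Path) -> bool {
    candidate == codex_home
        || canonicalize_or_self(candidate) == canonicalize_or_self(codex_home)
}

fn main() {
    let home = Path::new("/home/user/.codex");
    assert!(is_codex_home(Path::new("/home/user/.codex"), home));
}
```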

Testing:
- `cargo build -p codex-cli --bin codex`
- `cargo build -p codex-rmcp-client --bin test_stdio_server`
- `cargo test -p codex-core`
- `cargo test --all-features`
- Manual: ran `target/debug/codex` from `~` and confirmed the
disabled-folder warning and trust prompt no longer appear.
2026-01-30 12:42:07 -08:00
Charley Cunningham
83317ed4bf Make plan highlight use popup grey background (#10253)
## Summary
- align proposed plan background with popup surface color by reusing
`user_message_bg`
- remove the custom blue-tinted plan background

<img width="1572" height="1568" alt="image"
src="https://github.com/user-attachments/assets/63a5341e-4342-4c07-b6b0-c4350c3b2639"
/>
2026-01-30 12:39:15 -08:00
Ahmed Ibrahim
b7351f7f53 plan prompt (#10255)
2026-01-30 12:22:37 -08:00
Charley Cunningham
2457bb3c40 Fix deploy (#10251)
Fix
https://github.com/openai/codex/actions/runs/21527697445/job/62035898666
2026-01-30 11:57:13 -08:00
Ahmed Ibrahim
9b29a48a09 Plan mode prompt (#10238)
2026-01-30 11:48:03 -08:00
Michael Bolin
e6d913af2d chore: rename ChatGpt -> Chatgpt in type names (#10244)
When using ChatGPT in names of types, we should be consistent, so this
renames some types with `ChatGpt` in the name to `Chatgpt`. From
https://rust-lang.github.io/api-guidelines/naming.html:

> In `UpperCamelCase`, acronyms and contractions of compound words count
as one word: use `Uuid` rather than `UUID`, `Usize` rather than `USize`
or `Stdin` rather than `StdIn`. In `snake_case`, acronyms and
contractions are lower-cased: `is_xid_start`.

This PR updates existing uses of `ChatGpt` and changes them to
`Chatgpt`. In every case where the rename could affect the wire format,
I visually inspected that nothing changes there. That said, this _will_
change the codegen, because it affects the spelling of type names.

For example, this renames `AuthMode::ChatGPT` to `AuthMode::Chatgpt` in
`app-server-protocol`, but the wire format is still `"chatgpt"`.
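For illustration only, one way such a rename stays wire-compatible is an
explicit serde rename; this sketch is an assumption about the mechanism, not
necessarily how `app-server-protocol` actually derives its serialization:

```
use serde::{Deserialize, Serialize};

// Illustrative only: the Rust identifier changes (ChatGPT -> Chatgpt) while
// the serialized string stays "chatgpt".
#[derive(Serialize, Deserialize, Debug, PartialEq)]
enum AuthMode {
    #[serde(rename = "apiKey")]
    ApiKey,
    #[serde(rename = "chatgpt")]
    Chatgpt,
}

fn main() {
    let wire = serde_json::to_string(&AuthMode::Chatgpt).unwrap();
    assert_eq!(wire, "\"chatgpt\"");
}
```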

This PR also updates a number of types in `codex-rs/core/src/auth.rs`.
2026-01-30 11:18:39 -08:00
Charley Cunningham
2d10aa6859 Tui: hide Code mode footer label (#10063)
Title
Hide Code mode footer label/cycle hint; add Plan footer-collapse
snapshots

Summary
- Keep Code mode internal naming but suppress the footer mode label +
cycle hint when Code is active.
- Only show the cycle hint when a non‑Code mode indicator is present
(sketched below).
- Add Plan-mode footer collapse snapshot coverage (empty + queued,
across widths) and update existing footer collapse snapshots for the new
Code behavior.
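A minimal sketch of that gating, using stand-in types rather than the real
codex-tui ones:

```
#[derive(Clone, Copy, PartialEq, Eq)]
enum CollaborationMode {
    Code,
    Plan,
}

/// Footer label: only non-Code modes get an indicator.
fn footer_mode_label(mode: CollaborationMode) -> Option<&'static str> {
    match mode {
        CollaborationMode::Code => None,
        CollaborationMode::Plan => Some("plan"),
    }
}

/// The cycle hint is only rendered when a mode indicator is present.
fn show_cycle_hint(mode: CollaborationMode) -> bool {
    footer_mode_label(mode).is_some()
}

fn main() {
    assert_eq!(footer_mode_label(CollaborationMode::Code), None);
    assert!(show_cycle_hint(CollaborationMode::Plan));
}
```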

Notes
- The test run currently fails in codex-cloud-requirements on
origin/main due to a stale auth.mode field; no fix is included in this
PR to keep the diff minimal.

Codex author
`codex resume 019c0296-cfd4-7193-9b0a-6949048e4546`
2026-01-30 11:15:21 -08:00
Charley Cunningham
ec4a2d07e4 Plan mode: stream proposed plans, emit plan items, and render in TUI (#9786)
## Summary
- Stream proposed plans in Plan Mode using `<proposed_plan>` tags parsed
in core, emitting plan deltas plus a plan `ThreadItem`, while stripping
tags from normal assistant output.
- Persist plan items and rebuild them on resume so proposed plans show
in thread history.
- Wire plan items/deltas through app-server protocol v2 and render a
dedicated proposed-plan view in the TUI, including the “Implement this
plan?” prompt only when a plan item is present.

## Changes

### Core (`codex-rs/core`)
- Added a generic, line-based tag parser that buffers each line until it
can disprove a tag prefix and auto-closes on `finish()` for unterminated
tags (see the sketch at the end of this section).
`codex-rs/core/src/tagged_block_parser.rs`
- Refactored proposed plan parsing to wrap the generic parser.
`codex-rs/core/src/proposed_plan_parser.rs`
- In plan mode, stream assistant deltas as:
  - **Normal text** → `AgentMessageContentDelta`
  - **Plan text** → `PlanDelta` + `TurnItem::Plan` start/completion  
  (`codex-rs/core/src/codex.rs`)
- Final plan item content is derived from the completed assistant
message (authoritative), not necessarily the concatenated deltas.
- Strips `<proposed_plan>` blocks from assistant text in plan mode so
tags don’t appear in normal messages.
(`codex-rs/core/src/stream_events_utils.rs`)
- Persist `ItemCompleted` events only for plan items for rollout replay.
(`codex-rs/core/src/rollout/policy.rs`)
- Guard `update_plan` tool in Plan Mode with a clear error message.
(`codex-rs/core/src/tools/handlers/plan.rs`)
- Updated Plan Mode prompt to:  
  - keep `<proposed_plan>` out of non-final reasoning/preambles  
  - require exact tag formatting  
  - allow only one `<proposed_plan>` block per turn  
  (`codex-rs/core/templates/collaboration_mode/plan.md`)
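For a rough feel of the line-based idea, here is a heavily simplified sketch;
the real `tagged_block_parser.rs` is generic over tags and buffers partial
lines, which this toy version does not attempt:

```
// Toy sketch only; types and behavior are simplified assumptions.
#[derive(Debug, PartialEq)]
enum Chunk {
    Normal(String),
    Plan(String),
}

struct PlanTagParser {
    in_plan: bool,
    out: Vec<Chunk>,
}

impl PlanTagParser {
    fn new() -> Self {
        Self { in_plan: false, out: Vec::new() }
    }

    /// Feed one complete line of assistant output.
    fn push_line(&mut self, line: &str) {
        match line.trim() {
            "<proposed_plan>" => self.in_plan = true,
            "</proposed_plan>" => self.in_plan = false,
            _ if self.in_plan => self.out.push(Chunk::Plan(line.to_string())),
            _ => self.out.push(Chunk::Normal(line.to_string())),
        }
    }

    /// Auto-close: an unterminated <proposed_plan> block still yields plan
    /// content when the message ends.
    fn finish(self) -> Vec<Chunk> {
        self.out
    }
}

fn main() {
    let mut parser = PlanTagParser::new();
    for line in ["hello", "<proposed_plan>", "1. do the thing"] {
        parser.push_line(line);
    }
    assert_eq!(
        parser.finish(),
        vec![
            Chunk::Normal("hello".into()),
            Chunk::Plan("1. do the thing".into()),
        ]
    );
}
```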

### Protocol / App-server protocol
- Added `TurnItem::Plan` and `PlanDeltaEvent` to core protocol items.
(`codex-rs/protocol/src/items.rs`, `codex-rs/protocol/src/protocol.rs`)
- Added v2 `ThreadItem::Plan` and `PlanDeltaNotification` with
EXPERIMENTAL markers and note that deltas may not match the final plan
item. (`codex-rs/app-server-protocol/src/protocol/v2.rs`)
- Added plan delta route in app-server protocol common mapping.
(`codex-rs/app-server-protocol/src/protocol/common.rs`)
- Rebuild plan items from persisted `ItemCompleted` events on resume.
(`codex-rs/app-server-protocol/src/protocol/thread_history.rs`)

### App-server
- Forward plan deltas to v2 clients and map core plan items to v2 plan
items. (`codex-rs/app-server/src/bespoke_event_handling.rs`,
`codex-rs/app-server/src/codex_message_processor.rs`)
- Added v2 plan item tests.
(`codex-rs/app-server/tests/suite/v2/plan_item.rs`)

### TUI
- Added a dedicated proposed plan history cell with special background
and padding, and moved “• Proposed Plan” outside the highlighted block.
(`codex-rs/tui/src/history_cell.rs`, `codex-rs/tui/src/style.rs`)
- Only show “Implement this plan?” when a plan item exists.
(`codex-rs/tui/src/chatwidget.rs`,
`codex-rs/tui/src/chatwidget/tests.rs`)

<img width="831" height="847" alt="Screenshot 2026-01-29 at 7 06 24 PM"
src="https://github.com/user-attachments/assets/69794c8c-f96b-4d36-92ef-c1f5c3a8f286"
/>

### Docs / Misc
- Updated protocol docs to mention plan deltas.
(`codex-rs/docs/protocol_v1.md`)
- Minor plumbing updates in exec/debug clients to tolerate plan deltas.
(`codex-rs/debug-client/src/reader.rs`, `codex-rs/exec/...`)

## Tests
- Added core integration tests:
  - Plan mode strips plan from agent messages.
  - Missing `</proposed_plan>` closes at end-of-message.  
  (`codex-rs/core/tests/suite/items.rs`)
- Added unit tests for generic tag parser (prefix buffering, non-tag
lines, auto-close). (`codex-rs/core/src/tagged_block_parser.rs`)
- Existing app-server plan item tests in v2.
(`codex-rs/app-server/tests/suite/v2/plan_item.rs`)

## Notes / Behavior
- Plan output no longer appears in standard assistant text in Plan Mode;
it streams via `PlanDelta` and completes as a `TurnItem::Plan`.
- The final plan item content is authoritative and may diverge from
streamed deltas (documented as experimental).
- Reasoning summaries are not filtered; prompt instructs the model not
to include `<proposed_plan>` outside the final plan message.

## Codex Author
`codex fork 019bec2d-b09d-7450-b292-d7bcdddcdbfb`
2026-01-30 18:59:30 +00:00
Michael Bolin
40bf11bd52 chore: fix the build breakage that came from a merge race (#10239)
I think I needed to rebase on top of
https://github.com/openai/codex/pull/10167 before merging
https://github.com/openai/codex/pull/10208.
2026-01-30 10:29:54 -08:00
baumann-oai
1ce722ed2e plan mode: add TL;DR checkpoint and client behavior note (#10195)
## Summary
- Tightens Plan Mode to encourage exploration-first behavior and more
back-and-forth alignment.
- Adds a required TL;DR checkpoint before drafting the full plan.
- Clarifies client behavior that can cause premature “Implement this
plan?” prompts.

## What changed
- Require at least one targeted non-mutating exploration pass before the
first user question.
- Insert a TL;DR checkpoint between Phase 2 (intent) and Phase 3
(implementation).
- TL;DR checkpoint guidance:
  - Label: “Proposed Plan (TL;DR)”
  - Format: 3–5 bullets using `- `
  - Options: exactly one option, “Approve”
  - `isOther: true`, with explicit guidance that “None of the above” is
    the edit path in the current UI.
- Require the final plan to include a TL;DR consistent with the approved
checkpoint.

## Why
- In Plan Mode, any normal assistant message at turn completion is
treated as plan content by the client. This can trigger premature
“Implement this plan?” prompts.
- The TL;DR checkpoint aligns on direction before Codex drafts a long,
decision-complete plan.

## Testing
- Manual: built the local CLI and verified the flow now explores first,
presents a TL;DR checkpoint, and only drafts the full plan after
approval.

---------

Co-authored-by: Nick Baumann <@openai.com>
2026-01-30 10:14:46 -08:00
gt-oai
5662eb8b75 Load exec policy rules from requirements (#10190)
`requirements.toml` should be able to specify rules which always run. 

My intention here was that these rules could only ever be restrictive,
which means the decision can be "prompt" or "forbidden" but never
"allow". A requirement of "you must always allow this command" didn't
make sense to me, but happy to be gaveled otherwise.

Rules already applies the most restrictive decision, so we can safely
merge these with rules found in other config folders.
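“Most restrictive wins” merging can be sketched as an ordered enum; the names
here are made up and this is not the actual rules engine:

```
// Sketch of "most restrictive wins" merging (illustrative names only).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Decision {
    Allow,     // least restrictive
    Prompt,
    Forbidden, // most restrictive
}

/// Merge decisions from multiple config layers: the most restrictive wins,
/// so requirements-level rules can tighten but never loosen behavior.
fn merge(decisions: impl IntoIterator<Item = Decision>) -> Decision {
    decisions.into_iter().max().unwrap_or(Decision::Allow)
}

fn main() {
    let layers = [Decision::Allow, Decision::Prompt];
    assert_eq!(merge(layers), Decision::Prompt);
}
```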
2026-01-30 18:04:09 +00:00
Dylan Hurd
23db79fae2 chore(feature) Experimental: Smart Approvals (#10211)
## Summary
Let's start getting feedback on this feature 😅 

## Testing
- [x] existing tests pass
2026-01-30 10:41:37 -07:00
Dylan Hurd
dfafc546ab chore(feature) Experimental: Personality (#10212)
## Summary
Let users start opting in to trying out personalities

## Testing
- [x] existing tests pass
2026-01-30 10:41:22 -07:00
Michael Bolin
377ab0c77c feat: refactor CodexAuth so invalid state cannot be represented (#10208)
Previously, `CodexAuth` was defined as follows:


d550fbf41a/codex-rs/core/src/auth.rs (L39-L46)

But if you looked at its constructors, the `AuthMode::ApiKey` one built
`storage` from a nonsensical path (`PathBuf::new()`) and left
`auth_dot_json` as `None`:


d550fbf41a/codex-rs/core/src/auth.rs (L212-L220)

By comparison, when `AuthMode::ChatGPT` was used, `api_key` was always
`None`:


d550fbf41a/codex-rs/core/src/auth.rs (L665-L671)

https://github.com/openai/codex/pull/10012 took things further because
it introduced a new `ChatgptAuthTokens` variant to `AuthMode`, which is
important when invoking `account/login/start` via the app server, but
most logic _internal_ to the app server should just reason about two
`AuthMode` variants: `ApiKey` and `ChatGPT`.

This PR tries to clean things up as follows:

- `LoginAccountParams` and `AuthMode` in `codex-rs/app-server-protocol/`
both continue to have the `ChatgptAuthTokens` variant, though it is used
exclusively for the on-the-wire messaging.
- `codex-rs/core/src/auth.rs` now has its own `AuthMode` enum, which
only has two variants: `ApiKey` and `ChatGPT`.
- `CodexAuth` has been changed from a struct to an enum. It is a
disjoint union where each variant (`ApiKey`, `ChatGpt`, and
`ChatGptAuthTokens`) has only the associated fields that make sense for
that variant (sketched below).
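A rough sketch of the resulting shape; the field names are assumptions, not
the real `auth.rs`:

```
use std::path::PathBuf;

// Illustrative only: each variant carries only the state that makes sense for
// it, so the old "nonsensical path + None" combinations cannot be built.
enum CodexAuth {
    ApiKey { api_key: String },
    ChatGpt { storage_path: PathBuf },
    ChatGptAuthTokens { id_token: String, access_token: String },
}

fn main() {
    let auth = CodexAuth::ApiKey { api_key: "sk-example".into() };
    if let CodexAuth::ApiKey { .. } = auth {
        println!("api key auth");
    }
}
```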

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/10208).
* #10224
* __->__ #10208
2026-01-30 09:33:23 -08:00
jif-oai
0212f4010e nit: fix db with multiple metadata lines (#10237) 2026-01-30 17:32:10 +00:00
jif-oai
079f4952e0 feat: heuristic coloring of logs (#10228) 2026-01-30 18:26:49 +01:00
jif-oai
eff11f792b feat: improve logs client (#10229) 2026-01-30 18:23:18 +01:00
jif-oai
887bec0dee chore: do not clean the DB anymore (#10232) 2026-01-30 18:23:00 +01:00
jif-oai
09d25e91e9 fix: make sure the shell exists (#10222) 2026-01-30 14:18:31 +01:00
jif-oai
6cee538380 explorer prompt (#10225) 2026-01-30 13:50:33 +01:00
gt-oai
e85d019daa Fetch Requirements from cloud (#10167)
Load requirements from the Codex Backend. This only happens for
enterprise customers signed in with ChatGPT.

Todo in follow-up PRs:
* Add to app-server and exec too
* Switch from fail-open to fail-closed on failure
2026-01-30 12:03:29 +00:00
pap-openai
1ef5455eb6 Conversation naming (#8991)
Session renaming:
- `/rename my_session`
- `/rename` without an argument, passing the name via `customViewPrompt`
- `AppExitInfo` shows the resume hint using the session name if set, and
falls back to the uuid otherwise
- Names are stored in `CODEX_HOME/sessions.jsonl`

Session resuming:
- `codex resume <name>` looks up the first entry in
`CODEX_HOME/sessions.jsonl` matching the name and resumes that session
(sketched below)
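A minimal sketch of the name lookup; the entry fields are assumptions about
the `sessions.jsonl` format, not its documented schema:

```
use std::fs;
use std::path::Path;

use serde::Deserialize;

// Field names here are illustrative assumptions about sessions.jsonl.
#[derive(Deserialize)]
struct SessionEntry {
    name: String,
    id: String,
}

/// Return the id of the first entry whose name matches, if any.
fn lookup_session(codex_home: &Path, name: &str) -> Option<String> {
    let contents = fs::read_to_string(codex_home.join("sessions.jsonl")).ok()?;
    contents
        .lines()
        .filter_map(|line| serde_json::from_str::<SessionEntry>(line).ok())
        .find(|entry| entry.name == name)
        .map(|entry| entry.id)
}

fn main() {
    let home = Path::new("/tmp/example-codex-home");
    match lookup_session(home, "my_session") {
        Some(id) => println!("resume {id}"),
        None => println!("no session named my_session"),
    }
}
```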

---------

Co-authored-by: jif-oai <jif@openai.com>
2026-01-30 10:40:09 +00:00
jif-oai
25ad414680 chore: unify metric (#10220) 2026-01-30 11:13:43 +01:00
jif-oai
129787493f feat: backfill timing metric (#10218)
1. Add a metric to measure the backfill time
2. Add a unit to the timing histogram
2026-01-30 10:19:41 +01:00
Shijie Rao
a0ccef9d5c Chore: plan mode do not include free form question and always include isOther (#10210)
We should never ask a freeform question when planning and we should
always include isOther as an escape hatch.
2026-01-30 01:19:24 -08:00
Josh McKinney
c0cad80668 Add community links to startup tooltips (#10177)
## Summary
- add startup tooltip for OpenAI community Discord
- add startup tooltip for Codex community forum

## Testing
- not run (text-only tooltip change)
2026-01-30 10:14:15 +01:00
jif-oai
f8056e62d4 nit: actually run tests (#10217) 2026-01-30 10:02:46 +01:00
jif-oai
a270a28a06 feat: add output to /ps (#10154)
<img width="599" height="238" alt="Screenshot 2026-01-29 at 13 24 57"
src="https://github.com/user-attachments/assets/1e9a5af2-f649-476c-b310-ae4938814538"
/>
2026-01-30 09:00:44 +01:00
Matthew Zeng
34f89b12d0 MCP tool call approval (simplified version) (#10200)
Add elicitation approval request for MCP tool call requests.
2026-01-29 23:40:32 -08:00
Dylan Hurd
e3ab0bd973 chore(personality) new schema with fallbacks (#10147)
## Summary
Let's dial in this API contract a bit more, with more robust fallback
behavior when `model_instructions_template` is false.

Switches to a more explicit template / variables structure, with more
fallbacks.

## Testing
- [x] Adding unit tests
- [x] Tested locally
2026-01-30 00:10:12 -07:00
alexsong-oai
d550fbf41a load from yaml (#10194) 2026-01-29 21:44:12 -05:00
Josh McKinney
36f2fe8af9 feat(tui): route employee feedback follow-ups to internal link (#10198)
## Problem
OpenAI employees were sent to the public GitHub issue flow after
`/feedback`, which is the wrong follow-up path internally.

## Mental model
After feedback upload completes, we render a follow-up link/message.
That link should be audience-aware but must not change the upload
pipeline itself.

## Non-goals
- Changing how feedback is captured or uploaded
- Changing external user behavior

## Tradeoffs
We detect employees via the authenticated account email suffix
(`@openai.com`). If the email is unavailable (e.g., API key auth), we
default to the external behavior.
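A minimal sketch of the detection; `FeedbackAudience` and `OpenAiEmployee`
come from this PR, while the helper and its signature are assumptions:

```
// Sketch only: the helper name and signature are made up.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum FeedbackAudience {
    OpenAiEmployee,
    External,
}

/// Default to the external flow whenever the account email is unavailable
/// (e.g. API key auth) or does not carry the employee suffix.
fn feedback_audience(account_email: Option<&str>) -> FeedbackAudience {
    match account_email {
        Some(email) if email.ends_with("@openai.com") => FeedbackAudience::OpenAiEmployee,
        _ => FeedbackAudience::External,
    }
}

fn main() {
    assert_eq!(feedback_audience(None), FeedbackAudience::External);
    assert_eq!(
        feedback_audience(Some("dev@openai.com")),
        FeedbackAudience::OpenAiEmployee
    );
}
```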

## Architecture
- Introduce `FeedbackAudience` and thread it from `App` -> `ChatWidget`
-> `FeedbackNoteView`
- Gate internal messaging/links on `FeedbackAudience::OpenAiEmployee`
- Internal follow-up link is now `http://go/codex-feedback-internal`
- External GitHub URL remains byte-for-byte identical

## Observability
No new telemetry; this only changes rendered follow-up instructions.

## Tests
- `just fmt`
- `cargo test -p codex-tui --lib`
2026-01-30 02:12:46 +00:00
willwang-openai
a9cf449a80 add error messages for the go plan type (#10181)
Adds support for the Go plan type
Updates rate limit error messages to point to the usage page
2026-01-30 01:17:25 +00:00
Celia Chen
7151387474 [feat] persist dynamic tools in session rollout file (#10130)
Add dynamic tools to rollout file for persistence & read from rollout on
resume. Ran a real example and spotted the following in the rollout
file:
```
{"timestamp":"2026-01-29T01:27:57.468Z","type":"session_meta","payload":{"id":"019c075d-3f0b-77e3-894e-c1c159b04b1e","timestamp":"2026-01-29T01:27:57.451Z","...."dynamic_tools":[{"name":"demo_tool","description":"Demo dynamic tool","inputSchema":{"additionalProperties":false,"properties":{"city":{"type":"string"}},"required":["city"],"type":"object"}}],"git":{"commit_hash":"ebc573f15c01b8af158e060cfedd401f043e9dfa","branch":"dev/cc/dynamic-tools","repository_url":"https://github.com/openai/codex.git"}}}
```
2026-01-30 01:10:00 +00:00
Owen Lin
c6e1288ef1 chore(app-server): document AuthMode (#10191)
Explain what this is and what it's used for.
2026-01-29 16:48:15 -08:00
Charley Cunningham
11958221a3 tui: add feature-gated /plan slash command to switch to Plan mode (#10103)
## Summary
Adds a simple `/plan` slash command in the TUI that switches the active
collaboration mode to Plan mode. The command is only available when the
`collaboration_modes` feature is enabled.

## Changes
- Add `plan_mask` helper in `codex-rs/tui/src/collaboration_modes.rs`
- Add `SlashCommand::Plan` metadata in
`codex-rs/tui/src/slash_command.rs`
- Implement and hard-gate `/plan` dispatch in
`codex-rs/tui/src/chatwidget.rs`
- Hide `/plan` when collaboration modes are disabled in
`codex-rs/tui/src/bottom_pane/slash_commands.rs`
- Update command popup tests in
`codex-rs/tui/src/bottom_pane/command_popup.rs`
- Add a focused unit test for `/plan` in
`codex-rs/tui/src/chatwidget/tests.rs`

## Behavior notes
- `/plan` is now a no-op if `Feature::CollaborationModes` is disabled.
- When enabled, `/plan` switches directly to Plan mode without opening
the picker.

## Codex author
`codex resume 019c05da-d7c3-7322-ae2c-3ca38d0ef702`
2026-01-29 16:40:43 -08:00
Owen Lin
81a17bb2c1 feat(app-server): support external auth mode (#10012)
This enables a new use case where `codex app-server` is embedded into a
parent application that will directly own the user's ChatGPT auth
lifecycle, which means it owns the user’s auth tokens and refreshes it
when necessary. The parent application would just want a way to pass in
the auth tokens for codex to use directly.

The idea is that we are introducing a new "auth mode" currently only
exposed via app server: **`chatgptAuthTokens`** which consist of the
`id_token` (stores account metadata) and `access_token` (the bearer
token used directly for backend API calls). These auth tokens are only
stored in-memory. This new mode is in addition to the existing `apiKey`
and `chatgpt` auth modes.

This PR reuses the shape of our existing app-server account APIs as much
as possible:
- Update `account/login/start` with a new `chatgptAuthTokens` variant,
which will allow the client to pass in the tokens and have codex
app-server use them directly. Upon success, the server emits
`account/login/completed` and `account/updated` notifications.
- A new server->client request called
`account/chatgptAuthTokens/refresh` which the server can use whenever
the access token previously passed in has expired and it needs a new one
from the parent application.

I leveraged the core 401 retry loop which typically triggers auth token
refreshes automatically, but made it pluggable:
- **chatgpt** mode refreshes internally, as usual.
- **chatgptAuthTokens** mode calls the client via
`account/chatgptAuthTokens/refresh`; the client responds with updated
tokens, codex updates its in-memory auth, and then retries (sketched
below). This RPC has a 10s timeout and handles JSON-RPC errors from the
client.
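A hedged, synchronous sketch of the pluggable refresh (the real flow is async,
with the 10s RPC timeout); the trait and type names are made up:

```
// Hedged sketch: trait and type names are assumptions, not app-server code.
trait TokenRefresher {
    fn refresh_access_token(&mut self) -> Result<String, String>;
}

/// chatgpt mode: refresh internally, as before.
struct InternalRefresher;
impl TokenRefresher for InternalRefresher {
    fn refresh_access_token(&mut self) -> Result<String, String> {
        Ok("token-refreshed-internally".to_string())
    }
}

/// chatgptAuthTokens mode: ask the parent client over
/// account/chatgptAuthTokens/refresh and keep whatever it returns in memory.
struct ClientRefresher {
    request_from_client: fn() -> Result<String, String>,
}
impl TokenRefresher for ClientRefresher {
    fn refresh_access_token(&mut self) -> Result<String, String> {
        (self.request_from_client)()
    }
}

/// On a 401, the retry loop just asks the active refresher for a new token
/// and retries the request, regardless of which mode is in use.
fn handle_unauthorized(refresher: &mut dyn TokenRefresher) -> Result<String, String> {
    refresher.refresh_access_token()
}

fn main() {
    let mut external = ClientRefresher {
        request_from_client: || Ok("token-from-parent-app".to_string()),
    };
    println!("{:?}", handle_unauthorized(&mut external));
}
```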

Also some additional things:
- chatgpt logins are blocked while external auth is active (you have to
log out first; typically clients will pick one OR the other, not support
both)
- `account/logout` clears external auth in memory
- Ensures that if `forced_chatgpt_workspace_id` is set via the user's
config, we respect it in both:
- `account/login/start` with `chatgptAuthTokens` (returns a JSON-RPC
error back to the client)
- `account/chatgptAuthTokens/refresh` (fails the turn, and on next
request app-server will send another `account/chatgptAuthTokens/refresh`
request to the client).
2026-01-29 23:46:04 +00:00
Colin Young
b79bf69af6 [Codex][CLI] Show model-capacity guidance on 429 (#10118)
###### Problem
Users get generic 429s with no guidance when a model is at capacity.
###### Solution
Detect model-cap headers, surface a clear “try a different model”
message, and keep behavior non‑intrusive (no auto‑switch).
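A sketch of the detection; the header name below is a placeholder, since the
message doesn't say which header the backend actually sets:

```
use std::collections::HashMap;

/// Hypothetical header name; stands in for whatever model-capacity marker
/// rides on the 429 response.
const MODEL_CAPACITY_HEADER: &str = "x-model-capacity-exceeded";

/// Returns the user-facing guidance when a 429 carries the capacity marker.
fn capacity_guidance(status: u16, headers: &HashMap<String, String>) -> Option<String> {
    if status == 429 && headers.contains_key(MODEL_CAPACITY_HEADER) {
        Some("The selected model is at capacity; try a different model.".to_string())
    } else {
        None
    }
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert(MODEL_CAPACITY_HEADER.to_string(), "true".to_string());
    println!("{:?}", capacity_guidance(429, &headers));
}
```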
###### Scope
CLI/TUI only; protocol + error mapping updated to carry model‑cap info.
###### Tests
- just fmt
- cargo test -p codex-tui
- cargo test -p codex-core --lib
  shell_snapshot::tests::try_new_creates_and_deletes_snapshot_file --
  --nocapture (ran in isolated env)
- validate local build with backend
<img width="719" height="845" alt="image"
src="https://github.com/user-attachments/assets/1470b33d-0974-4b1f-b8e6-d11f892f4b54"
/>
2026-01-29 14:59:07 -08:00
natea-oai
ca9d417633 updating comment to better indicate intent of skipping quit in the main slash command menu (#10186)
Updates comment indicating intent for skipping `quit` in the main slash
command dropdown.
2026-01-29 14:41:42 -08:00
pakrym-oai
fbb3a30953 Remove WebSocket wire format (#10179)
I'd like WireApi to go away (when chat is removed), and WebSockets is
still the Responses API, just over a different transport.
2026-01-29 13:50:53 -08:00
Michael Bolin
2d9ac8227a fix: /approvals -> /permissions (#10184)
I believe we should be recommending `/permissions` in light of
https://github.com/openai/codex/pull/9561.
2026-01-29 20:36:53 +00:00
Josh McKinney
03aee7140f Add features enable/disable subcommands (#10180)
## Summary
- add `codex features enable <feature>` and `codex features disable
<feature>`
- persist feature flag changes to `config.toml` (respecting profile;
sketched below)
- print the under-development feature warning when enabling prerelease
features
- keep `features list` behavior unchanged and add unit/integration tests
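A hedged sketch of the persistence step, assuming a simple `[features]` table
of booleans (the real command also handles profiles and the prerelease
warning); requires the `toml` crate:

```
use std::fs;
use std::path::Path;

use toml::Value;

// Hedged sketch only: the `[features]` table of booleans is an assumed
// layout, not the documented config.toml schema.
fn set_feature(config_path: &Path, feature: &str, enabled: bool) -> std::io::Result<()> {
    let text = fs::read_to_string(config_path).unwrap_or_default();
    let mut doc: Value =
        toml::from_str(&text).unwrap_or(Value::Table(toml::value::Table::new()));

    let root = doc.as_table_mut().expect("config.toml root is a table");
    if !root.contains_key("features") {
        root.insert("features".to_string(), Value::Table(toml::value::Table::new()));
    }
    if let Some(features) = root.get_mut("features").and_then(Value::as_table_mut) {
        features.insert(feature.to_string(), Value::Boolean(enabled));
    }

    fs::write(config_path, toml::to_string_pretty(&doc).expect("serialize toml"))
}

fn main() -> std::io::Result<()> {
    set_feature(Path::new("/tmp/example-config.toml"), "collaboration_modes", true)
}
```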

## Testing
- cargo test -p codex-cli
2026-01-29 20:35:03 +00:00
Michael Bolin
48f203120d fix: unify npm publish call across shell-tool-mcp.yml and rust-release.yml (#10182)
We are seeing flakiness in the `npm publish` step for
https://www.npmjs.com/package/@openai/codex-shell-tool-mcp, so this is a
shot in the dark for a fix:

https://github.com/openai/codex/actions/runs/21490679301/job/61913765060

Note this removes `actions/checkout@v6` and `pnpm/action-setup@v4`
steps, which I believe are superfluous for the `npm publish` call.
2026-01-29 11:51:33 -08:00
xl-openai
bdd8a7d58b Better handling of skill dependencies on ENV VAR (#9017)
An experimental flow for env var skill dependencies. Skills can now
declare required env vars in SKILL.md; if any are missing, the CLI
prompts the user for the value, and Core stores it in memory (eventually
in a local persistent store).
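A minimal sketch of the prompt-for-missing-values flow; `MY_SERVICE_TOKEN` is
a hypothetical example and the in-memory store is just a `HashMap` stand-in:

```
use std::collections::HashMap;
use std::env;
use std::io::{self, Write};

// Hedged sketch of the flow described above; SKILL.md parsing is out of
// scope here and the in-memory store is just a HashMap stand-in.
fn collect_missing_env_vars(required: &[&str]) -> io::Result<HashMap<String, String>> {
    let mut provided = HashMap::new();
    for name in required {
        if env::var(name).is_ok() {
            continue; // already set in the environment
        }
        print!("Skill requires {name}; enter a value: ");
        io::stdout().flush()?;
        let mut value = String::new();
        io::stdin().read_line(&mut value)?;
        provided.insert((*name).to_string(), value.trim().to_string());
    }
    Ok(provided)
}

fn main() -> io::Result<()> {
    // MY_SERVICE_TOKEN is a hypothetical skill requirement.
    let stored = collect_missing_env_vars(&["MY_SERVICE_TOKEN"])?;
    println!("stored {} value(s) in memory", stored.len());
    Ok(())
}
```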
<img width="790" height="169" alt="image"
src="https://github.com/user-attachments/assets/cd928918-9403-43cb-a7e7-b8d59bcccd9a"
/>
2026-01-29 14:13:30 -05:00
Michael Bolin
b7f26d74f0 chore: ensure pnpm-workspace.yaml is up-to-date (#10140)
On the back of:

https://github.com/openai/codex/pull/10138

Let's ensure that every folder with a `package.json` is listed in
`pnpm-workspace.yaml` (not sure why `docs` was in there...) and that we
are using `pnpm` over `npm` consistently (which is why this PR deletes
`codex-cli/package-lock.json`).
2026-01-29 10:49:03 -08:00
pakrym-oai
3b1cddf001 Fall back to http when websockets fail (#10139)
I expect not all proxies work with websockets, so fall back to http when
websockets fail.
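The fallback pattern, sketched generically (not the real transport code):

```
// Generic sketch of the fallback described above.
fn connect_with_fallback<T, E>(
    try_websocket: impl FnOnce() -> Result<T, E>,
    try_http: impl FnOnce() -> Result<T, E>,
) -> Result<T, E> {
    // Prefer websockets, but some proxies break them; fall back to plain HTTP.
    try_websocket().or_else(|_| try_http())
}

fn main() {
    let transport: Result<&str, &str> = connect_with_fallback(
        || Err("proxy rejected websocket upgrade"),
        || Ok("http"),
    );
    assert_eq!(transport, Ok("http"));
}
```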
2026-01-29 10:36:21 -08:00
jif-oai
798c4b3260 feat: reduce span exposition (#10171)
This only avoids the creation of duplicate spans.
2026-01-29 18:15:22 +00:00
Josh McKinney
3e798c5a7d Add OpenAI docs MCP tooltip (#10175) 2026-01-29 17:34:59 +00:00
jif-oai
e6c4f548ab chore: unify log queries (#10152)
Unify log queries to only have SQLX code in the runtime and use it for
both the log client and for tests
2026-01-29 16:28:15 +00:00