Commit Graph

785 Commits

Author SHA1 Message Date
Michael Bolin
b00a738989 fix: add more fields to ThreadStartResponse and ThreadResumeResponse 2025-11-18 17:22:26 -08:00
iceweasel-oai
cf57320b9f windows sandbox: support multiple workspace roots (#6854)
The Windows sandbox did not previously support multiple workspace roots
via config. Now it does.
2025-11-18 16:35:00 -08:00
jif-oai
c56d0c159b fix: local compaction (#6844) 2025-11-18 22:18:10 +00:00
Anton Panasenko
f7a921039c [codex][otel] support mtls configuration (#6228)
Fix for https://github.com/openai/codex/issues/6153

Supports mTLS configuration and includes TLS features in the library
build to enable secure HTTPS connections with custom root certificates.

References:
- gRPC (tonic): https://docs.rs/tonic/0.13.1/src/tonic/transport/channel/endpoint.rs.html#63
- HTTPS (reqwest): https://docs.rs/reqwest/0.12.23/src/reqwest/async_impl/client.rs.html#516
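For the HTTPS side, a minimal sketch of a reqwest client with a custom root
certificate and client identity (paths are placeholders, this assumes
reqwest's `native-tls` or `rustls-tls` feature is enabled, and it is not the
actual codex wiring):

```
use std::fs;

fn build_mtls_client() -> anyhow::Result<reqwest::Client> {
    // Placeholder paths; real values would come from the otel config.
    let ca = reqwest::Certificate::from_pem(&fs::read("ca.pem")?)?;
    let identity = reqwest::Identity::from_pem(&fs::read("client.pem")?)?;
    let client = reqwest::Client::builder()
        .add_root_certificate(ca) // trust a custom root CA
        .identity(identity)       // present a client cert + key (mTLS)
        .build()?;
    Ok(client)
}
```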
2025-11-18 14:01:01 -08:00
jif-oai
8ddae8cde3 feat: review in app server (#6613) 2025-11-18 21:58:54 +00:00
Dylan Hurd
29ca89c414 chore(config) enable shell_command (#6843)
## Summary
Enables shell_command by default for `gpt-5*` and `codex-*` models.

## Testing
- [x] Updated unit tests
2025-11-18 12:46:02 -08:00
iceweasel-oai
4bada5a84d Prompt to turn on windows sandbox when auto mode selected. (#6618)
- Stop prompting users to install WSL.
- Prompt users to turn on the Windows sandbox when auto mode is requested.

<img width="1660" height="195" alt="Screenshot 2025-11-17 110612"
src="https://github.com/user-attachments/assets/c67fc239-a227-417e-94bb-599a8ed8f11e"
/>
<img width="1684" height="168" alt="Screenshot 2025-11-17 110637"
src="https://github.com/user-attachments/assets/d18c3370-830d-4971-8746-04757ae2f709"
/>
<img width="1655" height="293" alt="Screenshot 2025-11-17 110719"
src="https://github.com/user-attachments/assets/d21f6ce9-c23e-4842-baf6-8938b77c16db"
/>
2025-11-18 11:38:18 -08:00
Ahmed Ibrahim
3de8790714 Add the utility to truncate by tokens (#6746)
- This PR puts us on the path to truncating by tokens. This path will
initially be used by unified exec and the context manager (responsible
mainly for MCP calls).
- We expose a new config, `calls_output_max_tokens`.
- Use `tokens` as the main budget unit, but truncate based on the model
family by introducing `TruncationPolicy` (sketched below).
- Introduce `truncate_text` as a router for truncation based on the mode.

In next PRs:
- Remove `truncate_with_line_bytes_budget`.
- Add the ability for the model to override the token budget.
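A minimal sketch of the policy/router pair, with illustrative variants (only
the names `TruncationPolicy`, `truncate_text`, and `calls_output_max_tokens`
come from this PR):

```
// Illustrative shapes only; the variants and the chars-per-token
// approximation are not from the PR.
enum TruncationPolicy {
    Bytes(usize),  // legacy path: budget in bytes/lines
    Tokens(usize), // new path: budget in tokens, per model family
}

fn truncate_text(text: &str, policy: &TruncationPolicy) -> String {
    match policy {
        TruncationPolicy::Bytes(max) => text.chars().take(*max).collect(),
        // The token budget might come from the new calls_output_max_tokens
        // config; a real implementation would use the model's tokenizer.
        TruncationPolicy::Tokens(max) => text.chars().take(max * 4).collect(),
    }
}
```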
2025-11-18 11:36:23 -08:00
zhao-oai
e9e644a119 fixing localshell tool calls (#6823)
- Local-shell tool responses were always tagged as
`ExecCommandSource::UserShell` because the handler would call
`run_exec_like` with `is_user_shell_cmd` set to true.
- Treat `ToolPayload::LocalShell` the same as other model-generated
shell tool calls by deleting `is_user_shell_cmd` from `run_exec_like`
(actual user shell commands follow a separate code path).
2025-11-18 17:28:26 +00:00
jif-oai
f5d9939cda feat: enable parallel tool calls (#6796) 2025-11-18 17:10:14 +00:00
jif-oai
838531d3e4 feat: remote compaction (#6795)
Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-11-18 16:51:16 +00:00
jif-oai
c20df79a38 nit: mark ghost commit as stable (#6833) 2025-11-18 16:05:49 +00:00
Dylan Hurd
28ebe1c97a fix(windows) shell_command on windows, minor parsing (#6811)
## Summary
Enables shell_command for Windows users, and starts adding some basic
command parsing, to at least remove PowerShell prefixes. We'll follow
this up with fuller command parsing, but I wanted to land this change
separately with some basic UX.

**NOTE**: This implementation parses bash and PowerShell on both
platforms. In theory this is possible, since you can use Git Bash on
Windows or PowerShell on Linux. In practice, it may not be worth the
complexity of supporting, so I don't feel strongly about the current
approach vs. platform-specific branching.
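For illustration only, a toy version of that prefix stripping (the helper
name and the accepted flags are hypothetical, not the actual parser):

```
// Hypothetical helper: recognize `powershell -Command <cmd>` (or pwsh -c)
// and return the inner command for display/parsing.
fn strip_powershell_prefix(argv: &[String]) -> Option<&str> {
    match argv {
        [shell, flag, cmd]
            if (shell.ends_with("powershell")
                || shell.ends_with("powershell.exe")
                || shell.ends_with("pwsh"))
                && matches!(flag.as_str(), "-Command" | "-c") =>
        {
            Some(cmd.as_str())
        }
        _ => None,
    }
}
```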

## Testing
- [x] Added a bunch of tests
- [x] Ran on both Windows and macOS
2025-11-17 22:23:53 -08:00
Dylan Hurd
2b7378ac77 chore(core) Add shell_serialization coverage (#6810)
## Summary
Similar to #6545, this PR updates the shell_serialization test suite to
cover the various `shell` tool invocations we have. Note that this does
not cover unified_exec, which has its own suite of tests. This should
provide some test coverage for when we eventually consolidate
serialization logic.

## Testing
- [x] These are tests
2025-11-17 19:10:56 -08:00
Ahmed Ibrahim
ddcc60a085 Update defaults to gpt-5.1 (#6652)
## Summary
- update documentation, example configs, and automation defaults to
reference gpt-5.1 / gpt-5.1-codex
- bump the CLI and core configuration defaults, model presets, and error
messaging to the new models while keeping the model-family/tool coverage
for legacy slugs
- refresh tests, fixtures, and TUI snapshots so they expect the upgraded
defaults

## Testing
- `cargo test -p codex-core
config::tests::test_precedence_fixture_with_gpt5_profile`


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6916c5b3c2b08321ace04ee38604fc6b)
2025-11-17 17:40:11 -08:00
Owen Lin
cecbd5b021 [app-server] feat: add v2 command execution approval flow (#6758)
This PR adds the API V2 version of the command‑execution approval flow
for the shell tool.

This PR wires the new RPC (`item/commandExecution/requestApproval`, V2
only) and related events (`item/started`, `item/completed`, and
`item/commandExecution/delta`, which are emitted in both V1 and V2)
through the app-server
protocol. The new approval RPC is only sent when the user initiates a
turn with the new `turn/start` API so we don't break backwards
compatibility with VSCE.

The approach I took was to make as few changes to the Codex core as
possible, leveraging existing `EventMsg` core events, and translating
those in app-server. I did have to add additional fields to
`EventMsg::ExecCommandEndEvent` to capture the command's input so that
app-server can statelessly transform these events to a
`ThreadItem::CommandExecution` item for the `item/completed` event.

Once we stabilize the API and it's complete enough for our partners, we
can work on migrating the core to be aware of command execution items as
a first-class concept.

**Note**: We'll need follow-up work to make sure these APIs work for the
unified exec tool, but we'll wait until that's stable and landed before
doing a pass on app-server.

Example payloads below:
```
{
  "method": "item/started",
  "params": {
    "item": {
      "aggregatedOutput": null,
      "command": "/bin/zsh -lc 'touch /tmp/should-trigger-approval'",
      "cwd": "/Users/owen/repos/codex/codex-rs",
      "durationMs": null,
      "exitCode": null,
      "id": "call_lNWWsbXl1e47qNaYjFRs0dyU",
      "parsedCmd": [
        {
          "cmd": "touch /tmp/should-trigger-approval",
          "type": "unknown"
        }
      ],
      "status": "inProgress",
      "type": "commandExecution"
    }
  }
}
```

```
{
  "id": 0,
  "method": "item/commandExecution/requestApproval",
  "params": {
    "itemId": "call_lNWWsbXl1e47qNaYjFRs0dyU",
    "parsedCmd": [
      {
        "cmd": "touch /tmp/should-trigger-approval",
        "type": "unknown"
      }
    ],
    "reason": "Need to create file in /tmp which is outside workspace sandbox",
    "risk": null,
    "threadId": "019a93e8-0a52-7fe3-9808-b6bc40c0989a",
    "turnId": "1"
  }
}
```

```
{
  "id": 0,
  "result": {
    "acceptSettings": {
      "forSession": false
    },
    "decision": "accept"
  }
}
```

```
{
  "params": {
    "item": {
      "aggregatedOutput": null,
      "command": "/bin/zsh -lc 'touch /tmp/should-trigger-approval'",
      "cwd": "/Users/owen/repos/codex/codex-rs",
      "durationMs": 224,
      "exitCode": 0,
      "id": "call_lNWWsbXl1e47qNaYjFRs0dyU",
      "parsedCmd": [
        {
          "cmd": "touch /tmp/should-trigger-approval",
          "type": "unknown"
        }
      ],
      "status": "completed",
      "type": "commandExecution"
    }
  }
}
```
2025-11-18 00:23:54 +00:00
iceweasel-oai
e032d338f2 move cap_sid file into ~/.codex so the sandbox cannot overwrite it (#6798)
The `cap_sid` file contains the IDs of the two custom SIDs that the
Windows sandbox creates/manages to implement read-only and
workspace-write sandbox policies.

It previously lived in `<cwd>/.codex`, which meant the sandbox could
write to it and degrade its own efficacy. This change moves it to
`~/.codex/` (or wherever `CODEX_HOME` points) so that it is outside the
workspace.
2025-11-17 15:49:41 -08:00
Jeremy Rose
ab2e7499f8 core: add a feature to disable the shell tool (#6481)
`--disable shell_tool` disables the built-in shell tool. This is useful
for MCP-only operation.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-11-17 22:56:19 +00:00
Dylan Hurd
daf77b8452 chore(core) Update shell instructions (#6679)
## Summary
Consolidates `shell` and `shell_command` tool instructions.
## Testing 
- [x] Updated tests, tested locally
2025-11-17 13:05:15 -08:00
rugvedS07
837bc98a1d LM Studio OSS Support (#2312)
## Overview

Adds LM Studio OSS support. Closes #1883


### Changes
This PR enhances the behavior of the `--oss` flag to support LM Studio
as a provider. Additionally, it introduces a new flag, `--local-provider`,
which can take `lmstudio` or `ollama` as values if the user wants to
explicitly choose which one to use.

If no provider is specified `codex --oss` will auto-select the provider
based on whichever is running.

#### Additional enhancements 
The default can be set using `oss_provider` in config like:

```
oss_provider = "lmstudio"
```

Non-interactive users will need to either pass the provider as an
argument or set it in their `config.toml`.

### Notes
For best performance, [set the default context
length](https://lmstudio.ai/docs/app/advanced/per-model) for gpt-oss to
the maximum your machine can support

---------

Co-authored-by: Matt Clayton <matt@lmstudio.ai>
Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-17 11:49:09 -08:00
Jeremy Rose
03ffe4d595 core/tui: non-blocking MCP startup (#6334)
MCP startup no longer blocks TUI startup; messages sent while MCP
servers are booting are queued.
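A toy model of the queueing behavior, not the actual TUI code: submissions
land in a channel immediately and are drained once startup finishes.

```
use std::time::Duration;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();

    // The UI stays responsive and can enqueue right away.
    tx.send("message sent while MCPs boot".into()).unwrap();
    drop(tx); // close the channel so the drain loop below terminates

    // Stand-in for MCP startup running off the UI thread.
    tokio::time::sleep(Duration::from_millis(100)).await;

    // Once startup completes, drain everything that was queued.
    while let Some(msg) = rx.recv().await {
        println!("processing: {msg}");
    }
}
```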


https://github.com/user-attachments/assets/96e1d234-5d8f-4932-a935-a675d35c05e0


Fixes #6317

---------

Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-11-17 11:26:11 -08:00
Dylan Hurd
497fb4a19c fix(core) serialize shell_command (#6744)
## Summary
Ensures we're serializing calls to `shell_command`

## Testing
- [x] Added unit test
2025-11-16 23:16:51 -08:00
Xiao-Yong Jin
5860481bc4 Fix FreeBSD/OpenBSD builds: target-specific keyring features and BSD hardening (#6680)
## Summary
Builds on FreeBSD and OpenBSD were failing due to globally enabled
Linux-specific keyring features and hardening code paths not gated by
OS. This PR scopes keyring native backends to the appropriate targets,
disables default features at the workspace root, and adds a BSD-specific
hardening function. Linux/macOS/Windows behavior remains unchanged,
while FreeBSD/OpenBSD now build and run with a supported backend.

## Key Changes

- Keyring features:
  - Disable keyring default features at the workspace root to avoid
    pulling Linux backends on non-Linux.
  - Move native backend features into target-specific sections in the
    affected crates:
    - Linux: `linux-native-async-persistent`
    - macOS: `apple-native`
    - Windows: `windows-native`
    - FreeBSD/OpenBSD: `sync-secret-service`
- Process hardening:
  - Add `pre_main_hardening_bsd()` for FreeBSD/OpenBSD (sketched after
    this list), applying:
    - Set `RLIMIT_CORE` to 0
    - Clear `LD_*` environment variables
  - Simplify process-hardening Cargo deps to unconditional libc (avoid
    conflicting OS fragments).
- No changes to CODEX_SANDBOX_* behavior.
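A sketch of what the BSD-gated hardening could look like, assuming the
`libc` crate and the two steps above (illustrative, not the exact code from
this PR):

```
// Gated to the BSD targets this PR adds support for.
#[cfg(any(target_os = "freebsd", target_os = "openbsd"))]
pub fn pre_main_hardening_bsd() {
    // Disable core dumps: RLIMIT_CORE = 0.
    let rl = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    unsafe {
        libc::setrlimit(libc::RLIMIT_CORE, &rl);
    }

    // Clear dynamic-linker override variables (LD_PRELOAD etc.).
    let ld_vars: Vec<String> = std::env::vars()
        .map(|(key, _)| key)
        .filter(|key| key.starts_with("LD_"))
        .collect();
    for key in ld_vars {
        // Note: remove_var is unsafe in Rust 2024; wrap accordingly there.
        std::env::remove_var(&key);
    }
}
```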

## Rationale

- Previously, enabling keyring native backends globally pulled
Linux-only features on BSD, causing build errors.
- Hardening logic was tailored for Linux/macOS; BSD builds lacked a
gated path with equivalent safeguards.
- Target-scoped features and BSD hardening make the crates portable
across these OSes without affecting existing behavior elsewhere.

## Impact by Platform

- Linux: No functional change; backends now selected via target cfg.
- macOS: No functional change; explicit apple-native mapping.
- Windows: No functional change; explicit windows-native mapping.
- FreeBSD/OpenBSD: Builds succeed using sync-secret-service; BSD
  hardening applied during startup.

## Testing

- Verified compilation across affected crates with target-specific
features.
- Smoke-checked that Linux/macOS/Windows feature sets remain identical
functionally after scoping.
- On BSD, confirmed keyring resolves to sync-secret-service and
hardening compiles.

## Risks / Compatibility

- Minimal risk: only feature scoping and OS-gated additions.
- No public API changes in the crates; runtime behavior on non-BSD
  platforms is preserved.
- On BSD, the new hardening clears LD_*; this is consistent with
  security posture on other Unix platforms.

## Reviewer Notes

- Pay attention to target-specific sections for keyring in the affected
Cargo.toml files.
- Confirm pre_main_hardening_bsd() mirrors the safe subset of
Linux/macOS hardening without introducing Linux-only calls.
- Confirm no references to CODEX_SANDBOX_ENV_VAR or
CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR were added/modified.

## Checklist

  - Disable keyring default features at workspace root.
- Target-specific keyring features mapped per OS
(Linux/macOS/Windows/BSD).
  - Add BSD hardening (RLIMIT_CORE=0, clear LD_*).
  - Simplify process-hardening dependencies to unconditional libc.
  - No changes to sandbox env var code.
  - Formatting and linting: just fmt + just fix -p for changed crates.
  - Project tests pass for changed crates; broader suite unchanged.

---------

Co-authored-by: celia-oai <celia@openai.com>
2025-11-17 05:07:34 +00:00
dulikaifazr
de1768d3ba Fix: Claude models return incomplete responses due to empty finish_reason handling (#6728)
## Summary
Fixes a streaming issue where Claude models return only 1-4 characters
instead of full responses when used through certain API
providers/proxies.

## Environment
- **OS**: Windows
- **Models affected**: Claude models (e.g., claude-haiku-4-5-20251001)
- **API Provider**: AAAI API proxy (https://api.aaai.vip/v1)
- **Working models**: GLM, Google models work correctly

## Problem
When using Claude models in both TUI and exec modes, only 1-4 characters
are displayed despite the backend receiving the full response. Debug
logs revealed that some API providers send SSE chunks with an empty
string finish_reason during active streaming, rather than null or
omitting the field entirely.

The current code treats any non-null finish_reason as a termination
signal, causing the stream to exit prematurely after the first chunk.
The problematic chunks contain finish_reason with an empty string
instead of null.

## Solution
Fix empty finish_reason handling in chat_completions.rs by adding a
check to only process non-empty finish_reason values. This ensures empty
strings are ignored and streaming continues normally.
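The gist of the check as a self-contained sketch (the helper name is made
up; the real change lives in chat_completions.rs):

```
// Made-up helper name; treat "" the same as a missing finish_reason, so
// only a non-empty value terminates the stream.
fn is_terminal(finish_reason: Option<&str>) -> bool {
    matches!(finish_reason, Some(reason) if !reason.is_empty())
}

fn main() {
    assert!(!is_terminal(Some(""))); // proxy quirk: keep streaming
    assert!(!is_terminal(None)); // normal mid-stream chunk
    assert!(is_terminal(Some("stop"))); // genuine end of stream
}
```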

## Testing
- Tested on Windows with Claude Haiku model via AAAI API proxy
- Full responses now received and displayed correctly in both TUI and
exec modes
- Other models (GLM, Google) continue to work as expected
- No regression in existing functionality

## Impact
- Improves compatibility with API providers that send empty
finish_reason during streaming
- Enables Claude models to work correctly in Windows environment
- No breaking changes to existing functionality

## Related Issues
This fix resolves the issue where Claude models appeared to return
incomplete responses. The root cause was identified as a compatibility
issue in parsing SSE responses from certain API providers/proxies,
rather than a model-specific problem. This change improves overall
robustness when working with various API endpoints.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-16 19:50:36 -08:00
Ahmed Ibrahim
3f1c4b9add Tighten panic on double truncation (#6701) 2025-11-15 07:28:59 +00:00
Ahmed Ibrahim
0b28e72b66 Improve compact (#6692)
This PR does the following:
- Add compact prefix to the summary
- Change the compaction prompt
- Allow multiple compaction for long running tasks
- Filter out summary messages on the following compaction

Considerations:
- Filtering out the summary message isn't the most clean
- Theoretically, we can end up in infinite compaction loop if the user
messages > compaction limit . However, that's not possible in today's
code because we have hard cap on user messages.
- We need to address having multiple user messages because it confuses
the model.

Testing:
- Making sure that after compact we always end up with one user message
(task) and one summary, even on multiple compaction.
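As a toy illustration of the summary-filtering step above (the
`[compacted]` marker is a placeholder, not the actual prefix the PR adds):

```
// Placeholder marker; the PR adds a compact prefix to summaries.
const COMPACT_PREFIX: &str = "[compacted]";

// Drop summaries produced by earlier compactions so only the task
// message(s) and the newest summary remain.
fn strip_old_summaries(history: Vec<String>) -> Vec<String> {
    history
        .into_iter()
        .filter(|msg| !msg.starts_with(COMPACT_PREFIX))
        .collect()
}
```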
2025-11-15 07:17:51 +00:00
Ahmed Ibrahim
94dfb211af Refactor truncation helpers into their own file (#6683)
This centralizes truncation in one place. The next step is to make only
two methods public: one for bytes/lines and one for tokens.
2025-11-15 06:44:23 +00:00
Vinicius da Motta
89ecc00b79 Handle "Don't Trust" directory selection in onboarding (#4941)
Fixes #4940
Fixes #4892

When selecting "No, ask me to approve edits and commands" during
onboarding, the code wasn't applying the correct approval policy,
causing Codex to block all write operations instead of requesting
approval.

This PR fixes the issue by persisting the "DontTrust" decision in
config.toml as `trust_level = "untrusted"` and handling it in the
sandbox and approval policy logic, so Codex correctly asks for approval
before making changes.
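Roughly the shape of the fix (illustrative names; only the persisted
`trust_level = "untrusted"` value comes from the PR description):

```
// Illustrative names for the onboarding decision.
enum TrustDecision {
    Trust,
    DontTrust,
}

fn trust_level_for(choice: TrustDecision) -> &'static str {
    match choice {
        TrustDecision::Trust => "trusted",
        // Previously this branch was not persisted, so Codex blocked all
        // writes; now it lands in config.toml and the approval policy
        // asks before making changes.
        TrustDecision::DontTrust => "untrusted",
    }
}
```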

## Before (bug)
<img width="709" height="500" alt="bef"
src="https://github.com/user-attachments/assets/5aced26d-d810-4754-879a-89d9e4e0073b"
/>

## After (fixed)
<img width="713" height="359" alt="aft"
src="https://github.com/user-attachments/assets/9887bbcb-a9a5-4e54-8e76-9125a782226b"
/>

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-14 15:23:35 -08:00
pakrym-oai
018a2d2e50 Ignore unified_exec_respects_workdir_override (#6693) 2025-11-14 15:00:31 -08:00
pakrym-oai
cfcc87a953 Order outputs before inputs (#6691)
For better caching performance, all output items should be rendered, in
the order they were produced, before all new input items (for example,
all function_call items before all function_call_output items).
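A minimal sketch of the reordering with illustrative types (the real code
operates on protocol items): a stable partition keeps each group in
production order while moving all outputs ahead of new inputs.

```
// Illustrative item type; the real code orders protocol items.
enum Item {
    Output(&'static str), // e.g. a function_call
    Input(&'static str),  // e.g. a function_call_output
}

// Iterator::partition is stable: both groups keep their relative order.
fn order_for_cache(items: Vec<Item>) -> Vec<Item> {
    let (outputs, inputs): (Vec<Item>, Vec<Item>) =
        items.into_iter().partition(|i| matches!(i, Item::Output(_)));
    outputs.into_iter().chain(inputs).collect()
}
```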
2025-11-14 14:54:11 -08:00
Jeremy Rose
799364de87 Enable TUI notifications by default (#6633)
## Summary
- default the `tui.notifications` setting to enabled so desktop
notifications work out of the box
- update configuration tests and documentation to reflect the new
default

## Testing
- `cargo test -p codex-core` *(fails:
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
is flaky in this sandbox because the spawned grandchild process stays
alive)*
- `cargo test -p codex-core
exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
*(fails: same sandbox limitation as above)*

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69166f811144832c9e8aaf8ee2642373)
2025-11-14 09:28:09 -08:00
jif-oai
f17b392470 feat: cache tokenizer (#6609) 2025-11-14 17:05:00 +01:00
jif-oai
63c8c01f40 feat: better UI for unified_exec (#6515)
<img width="376" height="132" alt="Screenshot 2025-11-12 at 17 36 22"
src="https://github.com/user-attachments/assets/ce693f0d-5ca0-462e-b170-c20811dcc8d5"
/>
2025-11-14 16:31:12 +01:00
pakrym-oai
6c384eb9c6 tests: replace mount_sse_once_match with mount_sse_once for SSE mocking (#6640) 2025-11-13 18:04:05 -08:00
Ahmed Ibrahim
2a6e9b20df Promote shared helpers for suite tests (#6460)
## Summary
- add `TestCodex::submit_turn_with_policies` and extend the response
helpers with reusable tool-call utilities
- update the grep_files, read_file, list_dir, shell_serialization, and
tools suites to rely on the shared helpers instead of local copies
- make the list_dir helper return `anyhow::Result` so clippy no longer
warns about `expect`

## Testing
- `just fix -p codex-core`
- `cargo test -p codex-core --test all
suite::grep_files::grep_files_tool_collects_matches`
- `cargo test -p codex-core
suite::grep_files::grep_files_tool_collects_matches -- --ignored`
(filter requests ignored tests so nothing runs, but the build stays
clean)


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69112d53abac83219813cab4d7cb6446)
2025-11-13 17:12:10 -08:00
Ahmed Ibrahim
f3c6b1334b Use shared network gating helper in chat completion tests (#6461)
## Summary
- replace the bespoke network check in the chat completion payload and
SSE tests with the existing `skip_if_no_network!` helper so they follow
the same gating convention as the rest of the suite

## Testing
- `just fmt`


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69112d4cb9f08321ba773e8ccf39778e)
2025-11-13 17:11:43 -08:00
Ahmed Ibrahim
9890ceb939 Avoid double truncation (#6631)
1. Avoid double truncation by allowing 10% above the tool default constant.
2. Add tests that fail when the constant is 1.
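In other words, with hypothetical numbers (the PR does not state the
actual constant):

```
// Hypothetical default; the actual constant is not stated in the PR.
const TOOL_DEFAULT_TOKENS: usize = 10_000;
// 10% headroom so output already truncated upstream is not cut again.
const TRUNCATION_BUDGET: usize = TOOL_DEFAULT_TOKENS + TOOL_DEFAULT_TOKENS / 10;
```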
2025-11-13 16:59:31 -08:00
pakrym-oai
7b027e7536 Revert "Revert "Overhaul shell detection and centralize command generation for unified exec"" (#6607)
Reverts openai/codex#6606
2025-11-13 16:45:17 -08:00
Celia Chen
b8ec97c0ef [App-server] add new v2 events:item/reasoning/delta, item/agentMessage/delta & item/reasoning/summaryPartAdded (#6559)
Core event to app-server event mapping:
1. `codex/event/reasoning_content_delta` ->
`item/reasoning/summaryTextDelta`
2. `codex/event/reasoning_raw_content_delta` ->
`item/reasoning/textDelta`
3. `codex/event/agent_message_content_delta` ->
`item/agentMessage/delta`
4. `codex/event/agent_reasoning_section_break` ->
`item/reasoning/summaryPartAdded`

Also added a change in core to pass down content index, summary index
and item id from events.
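The same mapping expressed as a string-level translation table
(illustrative; the real code maps typed `EventMsg` variants rather than
comparing strings):

```
fn v2_event_name(core_event: &str) -> Option<&'static str> {
    Some(match core_event {
        "codex/event/reasoning_content_delta" => "item/reasoning/summaryTextDelta",
        "codex/event/reasoning_raw_content_delta" => "item/reasoning/textDelta",
        "codex/event/agent_message_content_delta" => "item/agentMessage/delta",
        "codex/event/agent_reasoning_section_break" => "item/reasoning/summaryPartAdded",
        _ => return None, // everything else keeps its v1 handling
    })
}
```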

Tested with `git checkout owen/app_server_test_client && cargo run
-p codex-app-server-test-client -- send-message-v2 "hello"` and verified
that the new events are emitted correctly.
2025-11-14 00:25:01 +00:00
Dylan Hurd
2c1b693da4 chore(core) Consolidate apply_patch tests (#6545)
## Summary
Consolidates our apply_patch tests into one suite, and ensures each test
case tests the various ways the harness supports apply_patch:
1. Freeform custom tool call
2. JSON function tool
3. Simple shell call
4. Heredoc shell call

There are a few test cases that are specific to a particular variant;
I've left those alone.

## Testing
- [x] This adds a significant number of tests
2025-11-13 15:52:39 -08:00
pakrym-oai
0792a7953d Update default yield time (#6610)
10s for exec and 250ms for write_stdin
2025-11-13 10:24:41 -08:00
pakrym-oai
041d6ad902 Migrate prompt caching tests to test_codex (#6605)
To hopefully fix the flakiness
2025-11-13 09:19:38 -08:00
pakrym-oai
e6995174c1 Revert "Overhaul shell detection and centralize command generation for unified exec" (#6606)
Reverts openai/codex#6577
2025-11-13 08:43:00 -08:00
pakrym-oai
d28e912214 Overhaul shell detection and centralize command generation for unified exec (#6577)
This fixes command display for unified exec. All `cd`s and `ls`es are
now parsed.

<img width="452" height="237" alt="image"
src="https://github.com/user-attachments/assets/ce92d81f-f74c-485a-9b34-1eaa29290ec6"
/>

Deletes a ton of tests from shell.rs that were doing nothing.

---------

Co-authored-by: Pavel Krymets <pavel@krymets.com>
2025-11-13 08:28:09 -08:00
jif-oai
2a417c47ac feat: proxy context left after compaction (#6597) 2025-11-13 16:54:03 +01:00
Dylan Hurd
8dcbd29edd chore(core) Update prompt for gpt-5.1 (#6588)
## Summary
Updates the prompt for GPT-5.1
2025-11-13 07:51:28 -08:00
pakrym-oai
34621166d5 Default to explicit medium reasoning for 5.1 (#6593) 2025-11-13 07:58:42 +00:00
Ahmed Ibrahim
b1979b70a8 remove porcupine model slug (#6580) 2025-11-13 04:43:31 +00:00
Eric Traut
73ed30d7e5 Avoid hang when tool's process spawns grandchild that shares stderr/stdout (#6575)
We've received many reports of codex hanging when calling certain tools.
[Here](https://github.com/openai/codex/issues/3204) is one example. This
is likely a major cause. The problem occurs when
`consume_truncated_output` waits for `stdout` and `stderr` to be closed
once the child process terminates. This normally works fine, but it
doesn't handle the case where the child has spawned grandchild processes
that inherit `stdout` and `stderr`.

The fix was originally written by @md-oai in [this
PR](https://github.com/openai/codex/pull/1852), which has gone stale.
I've copied the original fix (which looks sound to me) and added an
integration test to prevent future regressions.
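One common shape for this kind of mitigation, sketched with a tokio child
process (heavily hedged: this is not necessarily the fix from #1852, just
the general idea of bounding the pipe drain instead of waiting for EOF):

```
use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::process::ChildStdout;

// Bound the drain with a grace period once the child has exited, since a
// grandchild may keep the write end of the pipe open indefinitely.
async fn drain_after_exit(mut stdout: ChildStdout) -> Vec<u8> {
    let mut buf = Vec::new();
    let _ = tokio::time::timeout(
        Duration::from_millis(500), // grace period, illustrative value
        stdout.read_to_end(&mut buf),
    )
    .await;
    buf
}
```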
2025-11-12 20:08:12 -08:00
Ahmed Ibrahim
ad7eaa80f9 Change model picker to include gpt-5.1 (#6569)
- Change the presets
- Change the tests that make sure we keep the list of tools updated
- Filter out deprecated models
2025-11-12 19:44:53 -08:00