Compare commits

...

47 Commits

Author SHA1 Message Date
Michael Bolin
f8b5119504 feat: introduce prettify_command_for_display() 2025-09-14 23:12:50 -07:00
Thibault Sottiaux
d7d9d96d6c feat: add reasoning level to header (#3622) 2025-09-15 03:59:22 +00:00
Ahmed Ibrahim
26f1246a89 Revert "refactor transcript view to handle HistoryCells" (#3614)
Reverts openai/codex#3538
It panics when forking the first message. It also calculates the index
incorrectly.
2025-09-15 03:39:36 +00:00
Ahmed Ibrahim
6581da9b57 Show the header when resuming a conversation (#3615) 2025-09-15 03:31:08 +00:00
Eric Traut
900bb01486 When logging in using ChatGPT, make sure to overwrite API key (#3611)
When logging in with ChatGPT using the `codex login` command, a
successful login should write a new `auth.json` file with the ChatGPT
token information. The old code attempted to retain the API key and
merge the token information into the existing `auth.json` file. With the
new simplified login mechanism, `auth.json` should have auth information
for only ChatGPT or API Key, not both.

The `codex login --api-key <key>` code path was already doing the right
thing here, but the `codex login` command was incorrect. This PR fixes
the problem and adds test cases for both commands.
2025-09-14 19:48:18 -07:00
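
A minimal sketch of the intended overwrite semantics described in the commit above, assuming illustrative struct and field names (the real `auth.json` shape is not shown in this diff beyond the test that reads `openai_api_key` and `tokens`):

```rust
// Hedged sketch: a fresh login should replace auth.json wholesale rather than
// merge into it (struct and field types here are assumptions).
#[derive(Default)]
struct AuthDotJson {
    openai_api_key: Option<String>,
    tokens: Option<String>, // stands in for the real ChatGPT token bundle
}

/// Contents written by a ChatGPT login: token data only, no stale API key.
fn auth_for_chatgpt_login(tokens: String) -> AuthDotJson {
    AuthDotJson {
        openai_api_key: None,
        tokens: Some(tokens),
    }
}

/// Contents written by `codex login --api-key`: the key only, no stale tokens.
fn auth_for_api_key_login(api_key: String) -> AuthDotJson {
    AuthDotJson {
        openai_api_key: Some(api_key),
        tokens: None,
    }
}
```
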
Ahmed Ibrahim
2ad6a37192 Don't show the model for apikey (#3607) 2025-09-15 01:32:18 +00:00
Eric Traut
e5dd7f0934 Fix get_auth_status response when using custom provider (#3581)
This PR addresses an edge-case bug that appears in the VS Code extension
in the following situation:
1. Log in using ChatGPT (using either the CLI or extension). This will
create an `auth.json` file.
2. Manually modify `config.toml` to specify a custom provider.
3. Start a fresh copy of the VS Code extension.

The profile menu in the VS Code extension will indicate that you are
logged in using ChatGPT even though you're not.

This is caused by the `get_auth_status` method returning an
`auth_method: 'chatgpt'` when a custom provider is configured and it
doesn't use OpenAI auth (i.e. `requires_openai_auth` is false). The
method should always return `auth_method: None` if
`requires_openai_auth` is false.

The same bug also causes the NUX (new user experience) screen to be
displayed in the VSCE in this situation.
2025-09-14 18:27:02 -07:00
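
A minimal sketch of the gating described above, assuming a simplified function signature (the real `get_auth_status` handler has more context than shown here):

```rust
// Hedged sketch: when the configured provider does not use OpenAI auth,
// report no auth method at all (enum mirrors the ChatGPT/API-key split).
#[derive(Clone, Copy)]
enum AuthMode {
    ChatGPT,
    ApiKey,
}

fn auth_method_for_status(
    requires_openai_auth: bool,
    current_auth: Option<AuthMode>,
) -> Option<AuthMode> {
    if !requires_openai_auth {
        // Custom provider: never claim a ChatGPT login, which also keeps the
        // NUX screen from appearing incorrectly in the VS Code extension.
        return None;
    }
    current_auth
}
```
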
Dylan
b6673838e8 fix: model family and apply_patch consistency (#3603)
## Summary
Resolves a merge conflict between #3597 and #3560, and adds tests to
double-check our apply_patch configuration.

## Testing
- [x] Added unit tests

---------

Co-authored-by: dedrisian-oai <dedrisian@openai.com>
2025-09-14 18:20:37 -07:00
Fouad Matin
1823906215 fix(tui): update full-auto to default preset (#3608)
Update `--full-auto` to use the default preset
2025-09-14 18:14:11 -07:00
Fouad Matin
5185d69f13 fix(core): flaky test completed_commands_do_not_persist_sessions (#3596)
Fix flaky test:
```
        FAIL [   2.641s] codex-core unified_exec::tests::completed_commands_do_not_persist_sessions
  stdout ───

    running 1 test
    test unified_exec::tests::completed_commands_do_not_persist_sessions ... FAILED

    failures:

    failures:
        unified_exec::tests::completed_commands_do_not_persist_sessions

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 235 filtered out; finished in 2.63s
    
  stderr ───

    thread 'unified_exec::tests::completed_commands_do_not_persist_sessions' panicked at core/src/unified_exec/mod.rs:582:9:
    assertion failed: result.output.contains("codex")
```
2025-09-14 18:04:05 -07:00
pakrym-oai
4dffa496ac Skip frames files in codespell (#3606)
Fixes CI
2025-09-14 18:00:23 -07:00
Ahmed Ibrahim
ce984b2c71 Add session header to chat widget (#3592)
<img width="570" height="332" alt="image"
src="https://github.com/user-attachments/assets/ca6dfcb0-f3a1-4b3e-978d-4f844ba77527"
/>
2025-09-14 17:53:50 -07:00
pakrym-oai
c47febf221 Append full raw reasoning event text (#3605)
We don't emit correct delta events and only get full reasoning back.
Append it to history.
2025-09-14 17:50:06 -07:00
jimmyfraiture2
76c37c5493 feat: UI animation (#3590)
Add NUX animation

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-09-14 17:42:17 -07:00
dedrisian-oai
2aa84b8891 Fix EventMsg Optional (#3604) 2025-09-15 00:34:33 +00:00
pakrym-oai
9177bdae5e Only one branch for swiftfox (#3601)
Make each model family have a single branch.
2025-09-14 16:56:22 -07:00
Ahmed Ibrahim
a30e5e40ee enable-resume (#3537)
Adding the ability to resume conversations.
We have one verb, `resume`.

Behavior:

`tui`:
`codex resume`: opens session picker
`codex resume --last`: continue the last conversation
`codex resume <session id>`: continue conversation with `session id`

`exec`:
`codex resume --last`: continue last conversation
`codex resume <session id>`: continue conversation with `session id`

Implementation:
- Added a function to find the rollout path in `~/.codex/sessions/` by
`UUID`, which is what resuming with a session id uses.
- Added the above-mentioned flags
- Added lots of testing
2025-09-14 19:33:19 -04:00
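
A minimal sketch of the session lookup mentioned in the implementation notes above, assuming the rollout file name contains the session UUID and using `walkdir` (added as a dependency in this changeset); the function name is hypothetical:

```rust
// Hedged sketch: locate a recorded session's rollout file by UUID under
// ~/.codex/sessions/ (directory layout and naming are assumptions).
use std::path::{Path, PathBuf};
use walkdir::WalkDir;

fn find_session_file_by_uuid(sessions_dir: &Path, uuid: &str) -> Option<PathBuf> {
    WalkDir::new(sessions_dir)
        .into_iter()
        .filter_map(Result::ok)
        .filter(|entry| entry.file_type().is_file())
        .find(|entry| entry.file_name().to_string_lossy().contains(uuid))
        .map(|entry| entry.into_path())
}
```
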
jimmyfraiture2
99e1d33bd1 feat: update model save (#3589)
Update model saving so the model is saved globally by default, or on the
profile, depending on the session
2025-09-14 16:25:43 -07:00
dedrisian-oai
b2f6fc3b9a Fix flaky windows test (#3564)
There are exactly 4 kinds of flaky test failures on Windows x86 right now:

1. `review_input_isolated_from_parent_history` => Times out waiting for
closing events
2. `review_does_not_emit_agent_message_on_structured_output` => Times
out waiting for closing events
3. `auto_compact_runs_after_token_limit_hit` => Times out waiting for
closing events
4. `auto_compact_runs_after_token_limit_hit` => Also has a problem where
auto compact should add a third request, but receives 4 requests.

1, 2, and 3 seem to be solved by increasing the thread count on the
Windows runner from 2 to 4.

We don't know yet why #4 is happening, but it is probably also caused by
WireMock issues on Windows creating races.
2025-09-14 23:20:25 +00:00
pakrym-oai
51f88fd04a Fix swiftfox model selector (#3598)
The model shouldn't be saved with a suffix. The effort is a separate
field.
2025-09-14 23:12:21 +00:00
pakrym-oai
916fdc2a37 Add per-model-family prompts (#3597)
Allows more flexibility in defining prompts.
2025-09-14 22:45:15 +00:00
pakrym-oai
863d9c237e Include command output when sending timeout to model (#3576)
Being able to see the output helps the model decide how to handle the
timeout.
2025-09-14 14:38:26 -07:00
Ahmed Ibrahim
7e1543f5d8 Align user history message prefix width (#3467)
<img width="798" height="340" alt="image"
src="https://github.com/user-attachments/assets/fdd63f40-9c94-4e3a-bce5-2d2f333a384f"
/>
2025-09-14 20:51:08 +00:00
Ahmed Ibrahim
d701eb32d7 Gate model upgrade prompt behind ChatGPT auth (#3586)
- Refresh the login_state after onboarding.
- The upgrade prompt should only appear when authenticated with ChatGPT.
2025-09-14 13:08:24 -07:00
Michael Bolin
9baae77533 chore: update output_lines() to take a struct instead of a sequence of bools (#3591)
I found the boolean literals hard to follow.
2025-09-14 13:07:38 -07:00
Ahmed Ibrahim
e932722292 Add spacing before queued status indicator messages (#3474)
<img width="687" height="174" alt="image"
src="https://github.com/user-attachments/assets/e68f5a29-cb2d-4aa6-9cbd-f492878d8d0a"
/>
2025-09-14 15:37:28 -04:00
Ahmed Ibrahim
bbea6bbf7e Handle resuming/forking after compact (#3533)
We need to construct the history differently when a compact happens. For
this, we only need to consider the history after the compact and convert
the compact into a response item.

This needs to change to use `build_compact_history` once #3446 is merged.
2025-09-14 13:23:31 +00:00
Jeremy Rose
4891ee29c5 refactor transcript view to handle HistoryCells (#3538)
No (intended) functional change.

This refactors the transcript view to hold a list of HistoryCells
instead of a list of Lines. This simplifies and makes much of the logic
more robust, as well as laying the groundwork for future changes, e.g.
live-updating history cells in the transcript.

Similar to #2879 in goal. Fixes #2755.
2025-09-13 19:23:14 -07:00
Thibault Sottiaux
bac8a427f3 chore: default swiftfox models to experimental reasoning summaries (#3560) 2025-09-13 23:40:54 +00:00
Thibault Sottiaux
14ab1063a7 chore: rename 2025-09-12 23:17:41 -07:00
Thibault Sottiaux
a77364bbaa chore: remove descriptions 2025-09-12 22:55:40 -07:00
Thibault Sottiaux
19b4ed3c96 w 2025-09-12 22:44:05 -07:00
pakrym-oai
3d4acbaea0 Preserve IDs for more item types in azure (#3542)
https://github.com/openai/codex/issues/3509
2025-09-13 01:09:56 +00:00
pakrym-oai
414b8be8b6 Always request encrypted cot (#3539)
Otherwise future requests will fail with 500
2025-09-12 23:51:30 +00:00
dedrisian-oai
90a0fd342f Review Mode (Core) (#3401)
## 📝 Review Mode -- Core

This PR introduces the Core implementation for Review mode:

- New op `Op::Review { prompt: String }:` spawns a child review task
with isolated context, a review‑specific system prompt, and a
`Config.review_model`.
- `EnteredReviewMode`: emitted when the child review session starts.
Every event from this point onwards reflects the review session.
- `ExitedReviewMode(Option<ReviewOutputEvent>)`: emitted when the review
finishes or is interrupted, with optional structured findings:

```json
{
  "findings": [
    {
      "title": "<≤ 80 chars, imperative>",
      "body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
      "confidence_score": <float 0.0-1.0>,
      "priority": <int 0-3>,
      "code_location": {
        "absolute_file_path": "<file path>",
        "line_range": {"start": <int>, "end": <int>}
      }
    }
  ],
  "overall_correctness": "patch is correct" | "patch is incorrect",
  "overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
  "overall_confidence_score": <float 0.0-1.0>
}
```

## Questions

### Why separate out its own message history?

We want the review thread to match the training of our review models as
much as possible -- that means using a custom prompt, removing user
instructions, and starting a clean chat history.

We also want to make sure the review thread doesn't leak into the parent
thread.

### Why do this as a mode, vs. sub-agents?

1. We want review to be a synchronous task, so it's fine for now to do a
bespoke implementation.
2. We're still unclear about the final structure for sub-agents. We'd
prefer to land this quickly and then refactor into sub-agents without
rushing that implementation.
2025-09-12 23:25:10 +00:00
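
A rough sketch of the surface described above, with names taken from the commit message and JSON schema; payload and field types are assumptions and the real protocol definitions may differ:

```rust
// Hedged sketch of the review-mode op and events (shapes are illustrative).
enum Op {
    Review { prompt: String },
    // ... other ops elided
}

enum EventMsg {
    EnteredReviewMode,
    ExitedReviewMode(Option<ReviewOutputEvent>),
    // ... other events elided
}

struct ReviewOutputEvent {
    findings: Vec<ReviewFinding>,
    overall_correctness: String, // "patch is correct" | "patch is incorrect"
    overall_explanation: String,
    overall_confidence_score: f32,
}

struct ReviewFinding {
    title: String,
    body: String,
    confidence_score: f32,
    priority: Option<u8>, // 0-3
    code_location: CodeLocation,
}

struct CodeLocation {
    absolute_file_path: String,
    line_range: (u32, u32), // start, end
}
```
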
jif-oai
8d56d2f655 fix: NIT None reasoning effort (#3536)
Fix the reasoning effort not being set to None in the UI
2025-09-12 21:17:49 +00:00
jif-oai
8408f3e8ed Fix NUX UI (#3534)
Fix NUX UI
2025-09-12 14:09:31 -07:00
Jeremy Rose
b8ccfe9b65 core: expand default sandbox (#3483)
This adds some more capabilities to the default sandbox which I feel are
safe. Most are in the
[renderer.sb](https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/renderer.sb)
sandbox for Chrome renderers, which I feel is fair game for codex
commands.

Specific changes:

1. Allow processes in the sandbox to send signals to any other process
in the same sandbox (e.g. child processes or daemonized processes),
instead of just themselves.
2. Allow user-preference-read
3. Allow process-info* to anything in the same sandbox. This is a bit
wider than Chromium allows, but it seems OK to me to allow anything in
the sandbox to get details about other processes in the same sandbox.
Bazel uses these to e.g. wait for another process to exit.
4. Allow all CPU feature detection; this seems harmless to me. It's
wider than Chromium, but Chromium is concerned about fingerprinting, and
tightly controls what CPU features they actually care about, and we
don't have either that restriction or that advantage.
5. Allow new sysctl-reads:
   ```
     (sysctl-name "vm.loadavg")
     (sysctl-name-prefix "kern.proc.pgrp.")
     (sysctl-name-prefix "kern.proc.pid.")
     (sysctl-name-prefix "net.routetable.")
   ```
Bazel needs these for waiting on child processes and for communicating
with its local build server, I believe. I wonder if we should just allow
all `(sysctl-read)`, as reading arbitrary info about the system seems
fine to me.
6. Allow iokit-open on RootDomainUserClient. This has to do with power
management I believe, and Chromium allows renderers to do this, so okay.
Bazel needs it to boot successfully, possibly for sleep/wake callbacks?
7. Mach lookup to `com.apple.system.opendirectoryd.libinfo`, which has
to do with user data, and which Chrome allows.
8. Mach lookup to `com.apple.PowerManagement.control`. Chromium allows
its GPU process to do this, but not its renderers. Bazel needs this to
boot, probably related to sleep/wake stuff.
2025-09-12 14:03:02 -07:00
pakrym-oai
e3c6903199 Add Azure Responses API workaround (#3528)
Azure Responses API doesn't work well with store:false and response
items.

If store = false and an id is sent, an error is thrown that the ID is not found.
If store = false and an id is not sent, an error is thrown that an ID is
required.

Add detection for Azure URLs and a workaround to preserve reasoning
item IDs and send store:true.
2025-09-12 13:52:15 -07:00
Jeremy Rose
5f6e95b592 if a command parses as a patch, do not attempt to run it (#3382)
Sometimes the model forgets to actually invoke `apply_patch` and puts a
patch as the script body. Trying to execute this as bash sometimes
creates files named `,` or `{` or does other unknown things, so catch
this situation and return an error to the model.
2025-09-12 13:47:41 -07:00
Ahmed Ibrahim
a2e9cc5530 Update interruption error message styling (#3470)
<img width="497" height="76" alt="image"
src="https://github.com/user-attachments/assets/a1ad279d-1d01-41cd-ac14-b3343a392563"
/>

<img width="493" height="74" alt="image"
src="https://github.com/user-attachments/assets/baf487ba-430e-40fe-8944-2071ec052962"
/>
2025-09-12 16:17:02 -04:00
jif-oai
ea225df22e feat: context compaction (#3446)
## Compact feature:
1. Stop the model when the context window becomes too large
2. Add a user turn asking the model to summarize
3. Build a bridge that contains all the previous user messages plus the
summary, rendered from a template
4. Start sampling again from a clean conversation with only that bridge
2025-09-12 13:07:10 -07:00
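
A minimal sketch of step 3 above, assuming a plain string render in place of the Askama template; the "(none)" and "(no summary available)" fallbacks appear in the new compact module below, while the surrounding wording is illustrative:

```rust
// Hedged sketch: render the "bridge" user message from prior user messages
// and the model-produced summary (template wording is illustrative only).
fn build_bridge_message(user_messages: &[String], summary: &str) -> String {
    let user_messages_text = if user_messages.is_empty() {
        "(none)".to_string()
    } else {
        user_messages.join("\n\n")
    };
    let summary_text = if summary.is_empty() {
        "(no summary available)"
    } else {
        summary
    };
    format!(
        "Earlier user messages in this conversation:\n{user_messages_text}\n\nSummary of the conversation so far:\n{summary_text}"
    )
}
```
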
Ahmed Ibrahim
d4848e558b Add spacing before composer footer hints (#3469)
<img width="647" height="82" alt="image"
src="https://github.com/user-attachments/assets/867eb5d9-3076-4018-846e-260a50408185"
/>
2025-09-12 15:31:24 -04:00
Ahmed Ibrahim
1a6a95fb2a Add spacing between dropdown headers and items (#3472)
<img width="927" height="194" alt="image"
src="https://github.com/user-attachments/assets/f4cb999b-16c3-448a-aed4-060bed8b96dd"
/>

<img width="1246" height="205" alt="image"
src="https://github.com/user-attachments/assets/5d9ba5bd-0c02-46da-a809-b583a176528a"
/>
2025-09-12 15:31:15 -04:00
jif-oai
c6fd056aa6 feat: reasoning effort as optional (#3527)
Allow the reasoning effort to be optional
2025-09-12 12:06:33 -07:00
Michael Bolin
abdcb40f4c feat: change the behavior of SetDefaultModel RPC so None clears the value. (#3529)
It turns out that we want slightly different behavior for the
`SetDefaultModel` RPC because some models do not work with reasoning
(like GPT-4.1), so we should be able to explicitly clear this value.

Verified in `codex-rs/mcp-server/tests/suite/set_default_model.rs`.
2025-09-12 11:35:51 -07:00
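
A minimal sketch of the cleared-by-None semantics, using stand-in types; the real handler lives in the MCP server and is only referenced here via the test file mentioned above:

```rust
// Hedged sketch: with this change, None explicitly clears the stored value
// instead of leaving it unchanged (types here are stand-ins).
#[derive(Clone, Copy)]
enum ReasoningEffort {
    Minimal,
    Low,
    Medium,
    High,
}

#[derive(Default)]
struct DefaultModelState {
    model: Option<String>,
    reasoning_effort: Option<ReasoningEffort>,
}

fn apply_set_default_model(
    state: &mut DefaultModelState,
    model: Option<String>,
    effort: Option<ReasoningEffort>,
) {
    // e.g. effort = None for models like GPT-4.1 that do not support reasoning.
    state.model = model;
    state.reasoning_effort = effort;
}
```
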
Dylan
4ae6b9787a standardize shell description (#3514)
## Summary
Standardizes the shell description across sandbox_types, since we cover
this in the prompt, and have moved necessary details (like
network_access and writeable workspace roots) to EnvironmentContext
messages.

## Test Plan
- [x] updated unit tests
2025-09-12 14:24:09 -04:00
394 changed files with 14559 additions and 1196 deletions

View File

@@ -25,3 +25,4 @@ jobs:
uses: codespell-project/actions-codespell@406322ec52dd7b488e48c1c4b82e2a8b3a1bf630 # v2
with:
ignore_words_file: .codespellignore
skip: frame*.txt

codex-rs/Cargo.lock generated
View File

@@ -212,6 +212,50 @@ dependencies = [
"term",
]
[[package]]
name = "askama"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b79091df18a97caea757e28cd2d5fda49c6cd4bd01ddffd7ff01ace0c0ad2c28"
dependencies = [
"askama_derive",
"askama_escape",
"humansize",
"num-traits",
"percent-encoding",
]
[[package]]
name = "askama_derive"
version = "0.12.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19fe8d6cb13c4714962c072ea496f3392015f0989b1a2847bb4b2d9effd71d83"
dependencies = [
"askama_parser",
"basic-toml",
"mime",
"mime_guess",
"proc-macro2",
"quote",
"serde",
"syn 2.0.104",
]
[[package]]
name = "askama_escape"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "619743e34b5ba4e9703bba34deac3427c72507c7159f5fd030aea8cac0cfe341"
[[package]]
name = "askama_parser"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "acb1161c6b64d1c3d83108213c2a2533a342ac225aabd0bda218278c2ddb00c0"
dependencies = [
"nom",
]
[[package]]
name = "assert-json-diff"
version = "2.0.2"
@@ -305,6 +349,15 @@ version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "basic-toml"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba62675e8242a4c4e806d12f11d136e626e6c8361d6b829310732241652a178a"
dependencies = [
"serde",
]
[[package]]
name = "beef"
version = "0.5.2"
@@ -606,12 +659,14 @@ name = "codex-core"
version = "0.0.0"
dependencies = [
"anyhow",
"askama",
"assert_cmd",
"async-channel",
"base64",
"bytes",
"chrono",
"codex-apply-patch",
"codex-file-search",
"codex-mcp-client",
"codex-protocol",
"core_test_support",
@@ -679,6 +734,8 @@ dependencies = [
"tokio",
"tracing",
"tracing-subscriber",
"uuid",
"walkdir",
"wiremock",
]
@@ -1981,6 +2038,15 @@ version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
[[package]]
name = "humansize"
version = "2.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6cb51c9a029ddc91b07a787f1d86b53ccfa49b0e86688c946ebe8d3555685dd7"
dependencies = [
"libm",
]
[[package]]
name = "hyper"
version = "1.7.0"
@@ -2536,6 +2602,12 @@ version = "0.2.175"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a82ae493e598baaea5209805c49bbf2ea7de956d50d7da0da1164f9c6d28543"
[[package]]
name = "libm"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de"
[[package]]
name = "libredox"
version = "0.1.6"

View File

@@ -40,6 +40,11 @@ pub enum ApplyPatchError {
/// Error that occurs while computing replacements when applying patch chunks
#[error("{0}")]
ComputeReplacements(String),
/// A raw patch body was provided without an explicit `apply_patch` invocation.
#[error(
"patch detected without explicit call to apply_patch. Rerun as [\"apply_patch\", \"<patch>\"]"
)]
ImplicitInvocation,
}
impl From<std::io::Error> for ApplyPatchError {
@@ -93,10 +98,12 @@ pub struct ApplyPatchArgs {
pub fn maybe_parse_apply_patch(argv: &[String]) -> MaybeApplyPatch {
match argv {
// Direct invocation: apply_patch <patch>
[cmd, body] if APPLY_PATCH_COMMANDS.contains(&cmd.as_str()) => match parse_patch(body) {
Ok(source) => MaybeApplyPatch::Body(source),
Err(e) => MaybeApplyPatch::PatchParseError(e),
},
// Bash heredoc form: (optional `cd <path> &&`) apply_patch <<'EOF' ...
[bash, flag, script] if bash == "bash" && flag == "-lc" => {
match extract_apply_patch_from_bash(script) {
Ok((body, workdir)) => match parse_patch(&body) {
@@ -207,6 +214,26 @@ impl ApplyPatchAction {
/// cwd must be an absolute path so that we can resolve relative paths in the
/// patch.
pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApplyPatchVerified {
// Detect a raw patch body passed directly as the command or as the body of a bash -lc
// script. In these cases, report an explicit error rather than applying the patch.
match argv {
[body] => {
if parse_patch(body).is_ok() {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::ImplicitInvocation,
);
}
}
[bash, flag, script] if bash == "bash" && flag == "-lc" => {
if parse_patch(script).is_ok() {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::ImplicitInvocation,
);
}
}
_ => {}
}
match maybe_parse_apply_patch(argv) {
MaybeApplyPatch::Body(ApplyPatchArgs {
patch,
@@ -875,6 +902,28 @@ mod tests {
));
}
#[test]
fn test_implicit_patch_single_arg_is_error() {
let patch = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch".to_string();
let args = vec![patch];
let dir = tempdir().unwrap();
assert!(matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
}
#[test]
fn test_implicit_patch_bash_script_is_error() {
let script = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch";
let args = args_bash(script);
let dir = tempdir().unwrap();
assert!(matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
}
#[test]
fn test_literal() {
let args = strs_to_strings(&[

View File

@@ -73,6 +73,9 @@ enum Subcommand {
#[clap(visible_alias = "a")]
Apply(ApplyCommand),
/// Resume a previous interactive session (picker by default; use --last to continue the most recent).
Resume(ResumeCommand),
/// Internal: generate TypeScript protocol bindings.
#[clap(hide = true)]
GenerateTs(GenerateTsCommand),
@@ -85,6 +88,18 @@ struct CompletionCommand {
shell: Shell,
}
#[derive(Debug, Parser)]
struct ResumeCommand {
/// Conversation/session id (UUID). When provided, resumes this session.
/// If omitted, use --last to pick the most recent recorded session.
#[arg(value_name = "SESSION_ID")]
session_id: Option<String>,
/// Continue the most recent session without showing the picker.
#[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
last: bool,
}
#[derive(Debug, Parser)]
struct DebugArgs {
#[command(subcommand)]
@@ -143,26 +158,54 @@ fn main() -> anyhow::Result<()> {
}
async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let cli = MultitoolCli::parse();
let MultitoolCli {
config_overrides: root_config_overrides,
mut interactive,
subcommand,
} = MultitoolCli::parse();
match cli.subcommand {
match subcommand {
None => {
let mut tui_cli = cli.interactive;
prepend_config_flags(&mut tui_cli.config_overrides, cli.config_overrides);
let usage = codex_tui::run_main(tui_cli, codex_linux_sandbox_exe).await?;
prepend_config_flags(
&mut interactive.config_overrides,
root_config_overrides.clone(),
);
let usage = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
if !usage.is_zero() {
println!("{}", codex_core::protocol::FinalOutput::from(usage));
}
}
Some(Subcommand::Exec(mut exec_cli)) => {
prepend_config_flags(&mut exec_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut exec_cli.config_overrides,
root_config_overrides.clone(),
);
codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Mcp) => {
codex_mcp_server::run_main(codex_linux_sandbox_exe, cli.config_overrides).await?;
codex_mcp_server::run_main(codex_linux_sandbox_exe, root_config_overrides.clone())
.await?;
}
Some(Subcommand::Resume(ResumeCommand { session_id, last })) => {
// Start with the parsed interactive CLI so resume shares the same
// configuration surface area as `codex` without additional flags.
let resume_session_id = session_id;
interactive.resume_picker = resume_session_id.is_none() && !last;
interactive.resume_last = last;
interactive.resume_session_id = resume_session_id;
// Propagate any root-level config overrides (e.g. `-c key=value`).
prepend_config_flags(
&mut interactive.config_overrides,
root_config_overrides.clone(),
);
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(&mut login_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut login_cli.config_overrides,
root_config_overrides.clone(),
);
match login_cli.action {
Some(LoginSubcommand::Status) => {
run_login_status(login_cli.config_overrides).await;
@@ -177,11 +220,17 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
}
Some(Subcommand::Logout(mut logout_cli)) => {
prepend_config_flags(&mut logout_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut logout_cli.config_overrides,
root_config_overrides.clone(),
);
run_logout(logout_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(&mut proto_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut proto_cli.config_overrides,
root_config_overrides.clone(),
);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Completion(completion_cli)) => {
@@ -189,7 +238,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
Some(Subcommand::Debug(debug_args)) => match debug_args.cmd {
DebugCommand::Seatbelt(mut seatbelt_cli) => {
prepend_config_flags(&mut seatbelt_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut seatbelt_cli.config_overrides,
root_config_overrides.clone(),
);
codex_cli::debug_sandbox::run_command_under_seatbelt(
seatbelt_cli,
codex_linux_sandbox_exe,
@@ -197,7 +249,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
.await?;
}
DebugCommand::Landlock(mut landlock_cli) => {
prepend_config_flags(&mut landlock_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut landlock_cli.config_overrides,
root_config_overrides.clone(),
);
codex_cli::debug_sandbox::run_command_under_landlock(
landlock_cli,
codex_linux_sandbox_exe,
@@ -206,7 +261,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
},
Some(Subcommand::Apply(mut apply_cli)) => {
prepend_config_flags(&mut apply_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut apply_cli.config_overrides,
root_config_overrides.clone(),
);
run_apply_command(apply_cli, None).await?;
}
Some(Subcommand::GenerateTs(gen_cli)) => {

View File

@@ -17,7 +17,10 @@ pub fn create_config_summary_entries(config: &Config) -> Vec<(&'static str, Stri
{
entries.push((
"reasoning effort",
config.model_reasoning_effort.to_string(),
config
.model_reasoning_effort
.map(|effort| effort.to_string())
.unwrap_or_else(|| "none".to_string()),
));
entries.push((
"reasoning summaries",

View File

@@ -1,4 +1,6 @@
use codex_core::config::SWIFTFOX_MEDIUM_MODEL;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::AuthMode;
/// A simple preset pairing a model slug with a reasoning effort.
#[derive(Debug, Clone, Copy)]
@@ -12,50 +14,68 @@ pub struct ModelPreset {
/// Model slug (e.g., "gpt-5").
pub model: &'static str,
/// Reasoning effort to apply for this preset.
pub effort: ReasoningEffort,
pub effort: Option<ReasoningEffort>,
}
/// Built-in list of model presets that pair a model with a reasoning effort.
///
/// Keep this UI-agnostic so it can be reused by both TUI and MCP server.
pub fn builtin_model_presets() -> &'static [ModelPreset] {
// Order reflects effort from minimal to high.
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: ReasoningEffort::Minimal,
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: ReasoningEffort::Low,
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: ReasoningEffort::Medium,
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: ReasoningEffort::High,
},
ModelPreset {
id: "gpt-5-high-new",
label: "gpt-5 high new",
description: "— our latest release tuned to rely on the model's built-in reasoning defaults",
model: "gpt-5-high-new",
effort: ReasoningEffort::Medium,
},
];
PRESETS
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "swiftfox-low",
label: "swiftfox low",
description: "",
model: "swiftfox",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "swiftfox-medium",
label: "swiftfox medium",
description: "",
model: "swiftfox",
effort: None,
},
ModelPreset {
id: "swiftfox-high",
label: "swiftfox high",
description: "",
model: "swiftfox",
effort: Some(ReasoningEffort::High),
},
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Minimal),
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: Some(ReasoningEffort::High),
},
];
pub fn builtin_model_presets(auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
match auth_mode {
Some(AuthMode::ApiKey) => PRESETS
.iter()
.copied()
.filter(|p| !p.model.contains(SWIFTFOX_MEDIUM_MODEL))
.collect(),
_ => PRESETS.to_vec(),
}
}

View File

@@ -13,11 +13,13 @@ workspace = true
[dependencies]
anyhow = "1"
askama = "0.12"
async-channel = "2.3.1"
base64 = "0.22"
bytes = "1.10.1"
chrono = { version = "0.4", features = ["serde"] }
codex-apply-patch = { path = "../apply-patch" }
codex-file-search = { path = "../file-search" }
codex-mcp-client = { path = "../mcp-client" }
codex-protocol = { path = "../protocol" }
dirs = "6"

View File

@@ -0,0 +1,87 @@
# Review guidelines:
You are acting as a reviewer for a proposed code change made by another engineer.
Below are some default guidelines for determining whether the original author would appreciate the issue being flagged.
These are not the final word in determining whether an issue is a bug. In many cases, you will encounter other, more specific guidelines. These may be present elsewhere in a developer message, a user message, a file, or even elsewhere in this system message.
Those guidelines should be considered to override these general instructions.
Here are the general guidelines for determining whether something is a bug and should be flagged.
1. It meaningfully impacts the accuracy, performance, security, or maintainability of the code.
2. The bug is discrete and actionable (i.e. not a general issue with the codebase or a combination of multiple issues).
3. Fixing the bug does not demand a level of rigor that is not present in the rest of the codebase (e.g. one doesn't need very detailed comments and input validation in a repository of one-off scripts in personal projects)
4. The bug was introduced in the commit (pre-existing bugs should not be flagged).
5. The author of the original PR would likely fix the issue if they were made aware of it.
6. The bug does not rely on unstated assumptions about the codebase or author's intent.
7. It is not enough to speculate that a change may disrupt another part of the codebase, to be considered a bug, one must identify the other parts of the code that are provably affected.
8. The bug is clearly not just an intentional change by the original author.
When flagging a bug, you will also provide an accompanying comment. Once again, these guidelines are not the final word on how to construct a comment -- defer to any subsequent guidelines that you encounter.
1. The comment should be clear about why the issue is a bug.
2. The comment should appropriately communicate the severity of the issue. It should not claim that an issue is more severe than it actually is.
3. The comment should be brief. The body should be at most 1 paragraph. It should not introduce line breaks within the natural language flow unless it is necessary for the code fragment.
4. The comment should not include any chunks of code longer than 3 lines. Any code chunks should be wrapped in markdown inline code tags or a code block.
5. The comment should clearly and explicitly communicate the scenarios, environments, or inputs that are necessary for the bug to arise. The comment should immediately indicate that the issue's severity depends on these factors.
6. The comment's tone should be matter-of-fact and not accusatory or overly positive. It should read as a helpful AI assistant suggestion without sounding too much like a human reviewer.
7. The comment should be written such that the original author can immediately grasp the idea without close reading.
8. The comment should avoid excessive flattery and comments that are not helpful to the original author. The comment should avoid phrasing like "Great job ...", "Thanks for ...".
Below are some more detailed guidelines that you should apply to this specific review.
HOW MANY FINDINGS TO RETURN:
Output all findings that the original author would fix if they knew about it. If there is no finding that a person would definitely love to see and fix, prefer outputting no findings. Do not stop at the first qualifying finding. Continue until you've listed every qualifying finding.
GUIDELINES:
- Ignore trivial style unless it obscures meaning or violates documented standards.
- Use one comment per distinct issue (or a multi-line range if necessary).
- Use ```suggestion blocks ONLY for concrete replacement code (minimal lines; no commentary inside the block).
- In every ```suggestion block, preserve the exact leading whitespace of the replaced lines (spaces vs tabs, number of spaces).
- Do NOT introduce or remove outer indentation levels unless that is the actual fix.
The comments will be presented in the code review as inline comments. You should avoid providing unnecessary location details in the comment body. Always keep the line range as short as possible for interpreting the issue. Avoid ranges longer than 5-10 lines; instead, choose the most suitable subrange that pinpoints the problem.
At the beginning of the finding title, tag the bug with priority level. For example "[P1] Un-padding slices along wrong tensor dimensions". [P0] Drop everything to fix. Blocking release, operations, or major usage. Only use for universal issues that do not depend on any assumptions about the inputs. · [P1] Urgent. Should be addressed in the next cycle · [P2] Normal. To be fixed eventually · [P3] Low. Nice to have.
Additionally, include a numeric priority field in the JSON output for each finding: set "priority" to 0 for P0, 1 for P1, 2 for P2, or 3 for P3. If a priority cannot be determined, omit the field or use null.
At the end of your findings, output an "overall correctness" verdict of whether or not the patch should be considered "correct".
Correct implies that existing code and tests will not break, and the patch is free of bugs and other blocking issues.
Ignore non-blocking issues such as style, formatting, typos, documentation, and other nits.
FORMATTING GUIDELINES:
The finding description should be one paragraph.
OUTPUT FORMAT:
## Output schema — MUST MATCH *exactly*
```json
{
"findings": [
{
"title": "<≤ 80 chars, imperative>",
"body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
"confidence_score": <float 0.0-1.0>,
"priority": <int 0-3, optional>,
"code_location": {
"absolute_file_path": "<file path>",
"line_range": {"start": <int>, "end": <int>}
}
}
],
"overall_correctness": "patch is correct" | "patch is incorrect",
"overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
"overall_confidence_score": <float 0.0-1.0>
}
```
* **Do not** wrap the JSON in markdown fences or extra prose.
* The code_location field is required and must include absolute_file_path and line_range.
* Line ranges must be as short as possible for interpreting the issue (avoid ranges over 5-10 lines; pick the most suitable subrange).
* The code_location should overlap with the diff.
* Do not generate a PR fix.

View File

@@ -408,6 +408,32 @@ mod tests {
assert_eq!(auth_dot_json, same_auth_dot_json);
}
#[test]
fn login_with_api_key_overwrites_existing_auth_json() {
let dir = tempdir().unwrap();
let auth_path = dir.path().join("auth.json");
let stale_auth = json!({
"OPENAI_API_KEY": "sk-old",
"tokens": {
"id_token": "stale.header.payload",
"access_token": "stale-access",
"refresh_token": "stale-refresh",
"account_id": "stale-acc"
}
});
std::fs::write(
&auth_path,
serde_json::to_string_pretty(&stale_auth).unwrap(),
)
.unwrap();
super::login_with_api_key(dir.path(), "sk-new").expect("login_with_api_key should succeed");
let auth = super::try_read_auth_json(&auth_path).expect("auth.json should parse");
assert_eq!(auth.openai_api_key.as_deref(), Some("sk-new"));
assert!(auth.tokens.is_none(), "tokens should be cleared");
}
#[tokio::test]
async fn pro_account_with_no_api_key_uses_chatgpt_auth() {
let codex_home = tempdir().unwrap();

View File

@@ -1,3 +1,4 @@
use tree_sitter::Node;
use tree_sitter::Parser;
use tree_sitter::Tree;
use tree_sitter_bash::LANGUAGE as BASH;
@@ -59,9 +60,6 @@ pub fn try_parse_word_only_commands_sequence(tree: &Tree, src: &str) -> Option<V
}
} else {
// Reject any punctuation / operator tokens that are not explicitly allowed.
if kind.chars().any(|c| "&;|".contains(c)) && !ALLOWED_PUNCT_TOKENS.contains(&kind) {
return None;
}
if !(ALLOWED_PUNCT_TOKENS.contains(&kind) || kind.trim().is_empty()) {
// If it's a quote token or operator it's allowed above; we also allow whitespace tokens.
// Any other punctuation like parentheses, braces, redirects, backticks, etc are rejected.
@@ -75,7 +73,7 @@ pub fn try_parse_word_only_commands_sequence(tree: &Tree, src: &str) -> Option<V
let mut commands = Vec::new();
for node in command_nodes {
if let Some(words) = parse_plain_command_from_node(node, src) {
if let Some(words) = extract_words_from_command_node(node, src) {
commands.push(words);
} else {
return None;
@@ -84,7 +82,10 @@ pub fn try_parse_word_only_commands_sequence(tree: &Tree, src: &str) -> Option<V
Some(commands)
}
fn parse_plain_command_from_node(cmd: tree_sitter::Node, src: &str) -> Option<Vec<String>> {
/// Extract the plain words of a simple command node, normalizing quoted
/// strings into their contents. Returns None if the node contains unsupported
/// constructs for a word-only command.
pub(crate) fn extract_words_from_command_node(cmd: Node, src: &str) -> Option<Vec<String>> {
if cmd.kind() != "command" {
return None;
}
@@ -130,6 +131,48 @@ fn parse_plain_command_from_node(cmd: tree_sitter::Node, src: &str) -> Option<Ve
Some(words)
}
/// Find the earliest `command` node in source order within the parse tree.
pub(crate) fn find_first_command_node(tree: &Tree) -> Option<Node<'_>> {
let root = tree.root_node();
let mut cursor = root.walk();
let mut stack = vec![root];
let mut best: Option<Node> = None;
while let Some(node) = stack.pop() {
if node.is_named()
&& node.kind() == "command"
&& best.is_none_or(|b| node.start_byte() < b.start_byte())
{
best = Some(node);
}
for child in node.children(&mut cursor) {
stack.push(child);
}
}
best
}
/// Given the first command node, return the byte index in `src` at which the
/// remainder script starts, only if the next non-whitespace token is an allowed
/// sequencing operator for dropping the leading `cd`.
///
/// Allowed operators: `&&` (conditional on success) and `;` (unconditional).
/// Disallowed: `||`, `|` — removing `cd` would change semantics.
pub(crate) fn remainder_start_after_wrapper_operator(first_cmd: Node, src: &str) -> Option<usize> {
let mut sib = first_cmd.next_sibling()?;
while !sib.is_named() && sib.kind().trim().is_empty() {
sib = sib.next_sibling()?;
}
if sib.is_named() || (sib.kind() != "&&" && sib.kind() != ";") {
return None;
}
let mut idx = sib.end_byte();
let bytes = src.as_bytes();
while idx < bytes.len() && bytes[idx].is_ascii_whitespace() {
idx += 1;
}
if idx >= bytes.len() { None } else { Some(idx) }
}
#[cfg(test)]
mod tests {
use super::*;
@@ -215,4 +258,45 @@ mod tests {
fn rejects_trailing_operator_parse_error() {
assert!(parse_seq("ls &&").is_none());
}
#[test]
fn find_first_command_node_finds_cd() {
let src = "cd foo && ls; git status";
let tree = try_parse_bash(src).unwrap();
let first = find_first_command_node(&tree).unwrap();
let words = extract_words_from_command_node(first, src).unwrap();
assert_eq!(words, vec!["cd".to_string(), "foo".to_string()]);
}
#[test]
fn remainder_after_wrapper_operator_allows_and_and_semicolon() {
// Allows &&
let src = "cd foo && ls; git status";
let tree = try_parse_bash(src).unwrap();
let first = find_first_command_node(&tree).unwrap();
let idx = remainder_start_after_wrapper_operator(first, src).unwrap();
assert_eq!(&src[idx..], "ls; git status");
// Allows ;
let src2 = "cd foo; ls";
let tree2 = try_parse_bash(src2).unwrap();
let first2 = find_first_command_node(&tree2).unwrap();
let idx2 = remainder_start_after_wrapper_operator(first2, src2).unwrap();
assert_eq!(&src2[idx2..], "ls");
}
#[test]
fn remainder_after_wrapper_operator_rejects_or_and_pipe() {
// Rejects ||
let src = "cd foo || echo hi";
let tree = try_parse_bash(src).unwrap();
let first = find_first_command_node(&tree).unwrap();
assert!(remainder_start_after_wrapper_operator(first, src).is_none());
// Rejects |
let src2 = "cd foo | rg bar";
let tree2 = try_parse_bash(src2).unwrap();
let first2 = find_first_command_node(&tree2).unwrap();
assert!(remainder_start_after_wrapper_operator(first2, src2).is_none());
}
}

View File

@@ -72,7 +72,7 @@ pub struct ModelClient {
client: reqwest::Client,
provider: ModelProviderInfo,
conversation_id: ConversationId,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
}
@@ -81,7 +81,7 @@ impl ModelClient {
config: Arc<Config>,
auth_manager: Option<Arc<AuthManager>>,
provider: ModelProviderInfo,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
conversation_id: ConversationId,
) -> Self {
@@ -104,6 +104,12 @@ impl ModelClient {
.or_else(|| get_model_info(&self.config.model_family).map(|info| info.context_window))
}
pub fn get_auto_compact_token_limit(&self) -> Option<i64> {
self.config.model_auto_compact_token_limit.or_else(|| {
get_model_info(&self.config.model_family).and_then(|info| info.auto_compact_token_limit)
})
}
/// Dispatches to either the Responses or Chat implementation depending on
/// the provider config. Public callers always invoke `stream()`; the
/// specialised helpers are private to avoid accidental misuse.
@@ -187,6 +193,15 @@ impl ModelClient {
None
};
// In general, we want to explicitly send `store: false` when using the Responses API,
// but in practice, the Azure Responses API rejects `store: false`:
//
// - If store = false and id is sent an error is thrown that ID is not found
// - If store = false and id is not sent an error is thrown that ID is required
//
// For Azure, we send `store: true` and preserve reasoning item IDs.
let azure_workaround = self.provider.is_azure_responses_endpoint();
let payload = ResponsesApiRequest {
model: &self.config.model,
instructions: &full_instructions,
@@ -195,13 +210,19 @@ impl ModelClient {
tool_choice: "auto",
parallel_tool_calls: false,
reasoning,
store: false,
store: azure_workaround,
stream: true,
include,
prompt_cache_key: Some(self.conversation_id.to_string()),
text,
};
let mut payload_json = serde_json::to_value(&payload)?;
if azure_workaround {
attach_item_ids(&mut payload_json, &input_with_instructions);
}
let payload_body = serde_json::to_string(&payload_json)?;
let mut attempt = 0;
let max_retries = self.provider.request_max_retries();
@@ -214,7 +235,7 @@ impl ModelClient {
trace!(
"POST to {}: {}",
self.provider.get_full_url(&auth),
serde_json::to_string(&payload)?
payload_body.as_str()
);
let mut req_builder = self
@@ -228,7 +249,7 @@ impl ModelClient {
.header("conversation_id", self.conversation_id.to_string())
.header("session_id", self.conversation_id.to_string())
.header(reqwest::header::ACCEPT, "text/event-stream")
.json(&payload);
.json(&payload_json);
if let Some(auth) = auth.as_ref()
&& auth.mode == AuthMode::ChatGPT
@@ -356,7 +377,7 @@ impl ModelClient {
}
/// Returns the current reasoning effort setting.
pub fn get_reasoning_effort(&self) -> ReasoningEffortConfig {
pub fn get_reasoning_effort(&self) -> Option<ReasoningEffortConfig> {
self.effort
}
@@ -425,6 +446,33 @@ struct ResponseCompletedOutputTokensDetails {
reasoning_tokens: u64,
}
fn attach_item_ids(payload_json: &mut Value, original_items: &[ResponseItem]) {
let Some(input_value) = payload_json.get_mut("input") else {
return;
};
let serde_json::Value::Array(items) = input_value else {
return;
};
for (value, item) in items.iter_mut().zip(original_items.iter()) {
if let ResponseItem::Reasoning { id, .. }
| ResponseItem::Message { id: Some(id), .. }
| ResponseItem::WebSearchCall { id: Some(id), .. }
| ResponseItem::FunctionCall { id: Some(id), .. }
| ResponseItem::LocalShellCall { id: Some(id), .. }
| ResponseItem::CustomToolCall { id: Some(id), .. } = item
{
if id.is_empty() {
continue;
}
if let Some(obj) = value.as_object_mut() {
obj.insert("id".to_string(), Value::String(id.clone()));
}
}
}
}
async fn process_sse<S>(
stream: S,
tx_event: mpsc::Sender<Result<ResponseEvent>>,

View File

@@ -10,14 +10,14 @@ use codex_protocol::models::ResponseItem;
use futures::Stream;
use serde::Serialize;
use std::borrow::Cow;
use std::ops::Deref;
use std::pin::Pin;
use std::task::Context;
use std::task::Poll;
use tokio::sync::mpsc;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
/// Review thread system prompt. Edit `core/src/review_prompt.md` to customize.
pub const REVIEW_PROMPT: &str = include_str!("../review_prompt.md");
/// API request payload for a single model turn
#[derive(Default, Debug, Clone)]
@@ -38,11 +38,12 @@ impl Prompt {
let base = self
.base_instructions_override
.as_deref()
.unwrap_or(BASE_INSTRUCTIONS);
.unwrap_or(model.base_instructions.deref());
let mut sections: Vec<&str> = vec![base];
// When there are no custom instructions, add apply_patch_tool_instructions if either:
// - the model needs special instructions (4.1), or
// When there are no custom instructions, add apply_patch_tool_instructions if:
// - the model needs special instructions (4.1)
// AND
// - there is no apply_patch tool present
let is_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
OpenAiTool::Function(f) => f.name == "apply_patch",
@@ -50,7 +51,8 @@ impl Prompt {
_ => false,
});
if self.base_instructions_override.is_none()
&& (model.needs_special_apply_patch_instructions || !is_apply_patch_tool_present)
&& model.needs_special_apply_patch_instructions
&& !is_apply_patch_tool_present
{
sections.push(APPLY_PATCH_TOOL_INSTRUCTIONS);
}
@@ -81,8 +83,10 @@ pub enum ResponseEvent {
#[derive(Debug, Serialize)]
pub(crate) struct Reasoning {
pub(crate) effort: ReasoningEffortConfig,
pub(crate) summary: ReasoningSummaryConfig,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) effort: Option<ReasoningEffortConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) summary: Option<ReasoningSummaryConfig>,
}
/// Controls under the `text` field in the Responses API for GPT-5.
@@ -136,14 +140,17 @@ pub(crate) struct ResponsesApiRequest<'a> {
pub(crate) fn create_reasoning_param_for_request(
model_family: &ModelFamily,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
) -> Option<Reasoning> {
if model_family.supports_reasoning_summaries {
Some(Reasoning { effort, summary })
} else {
None
if !model_family.supports_reasoning_summaries {
return None;
}
Some(Reasoning {
effort,
summary: Some(summary),
})
}
pub(crate) fn create_text_param_for_request(
@@ -169,18 +176,64 @@ impl Stream for ResponseStream {
#[cfg(test)]
mod tests {
use crate::model_family::find_family_for_model;
use pretty_assertions::assert_eq;
use super::*;
struct InstructionsTestCase {
pub slug: &'static str,
pub expects_apply_patch_instructions: bool,
}
#[test]
fn get_full_instructions_no_user_content() {
let prompt = Prompt {
..Default::default()
};
let expected = format!("{BASE_INSTRUCTIONS}\n{APPLY_PATCH_TOOL_INSTRUCTIONS}");
let model_family = find_family_for_model("gpt-4.1").expect("known model slug");
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
let test_cases = vec![
InstructionsTestCase {
slug: "gpt-3.5",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-4.1",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-4o",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-5",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "codex-mini-latest",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-oss:120b",
expects_apply_patch_instructions: false,
},
InstructionsTestCase {
slug: "swiftfox",
expects_apply_patch_instructions: false,
},
];
for test_case in test_cases {
let model_family = find_family_for_model(test_case.slug).expect("known model slug");
let expected = if test_case.expects_apply_patch_instructions {
format!(
"{}\n{}",
model_family.clone().base_instructions,
APPLY_PATCH_TOOL_INSTRUCTIONS
)
} else {
model_family.clone().base_instructions
};
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
}
}
#[test]

File diff suppressed because it is too large.

View File

@@ -0,0 +1,400 @@
use std::sync::Arc;
use super::AgentTask;
use super::MutexExt;
use super::Session;
use super::TurnContext;
use super::get_last_assistant_message_from_turn;
use crate::Prompt;
use crate::client_common::ResponseEvent;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::protocol::AgentMessageEvent;
use crate::protocol::CompactedItem;
use crate::protocol::ErrorEvent;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::InputItem;
use crate::protocol::InputMessageKind;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TaskStartedEvent;
use crate::protocol::TurnContextItem;
use crate::util::backoff;
use askama::Template;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::RolloutItem;
use futures::prelude::*;
pub(super) const COMPACT_TRIGGER_TEXT: &str = "Start Summarization";
const SUMMARIZATION_PROMPT: &str = include_str!("../../templates/compact/prompt.md");
#[derive(Template)]
#[template(path = "compact/history_bridge.md", escape = "none")]
struct HistoryBridgeTemplate<'a> {
user_messages_text: &'a str,
summary_text: &'a str,
}
pub(super) fn spawn_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
) {
let task = AgentTask::compact(
sess.clone(),
turn_context,
sub_id,
input,
SUMMARIZATION_PROMPT.to_string(),
);
sess.set_task(task);
}
pub(super) async fn run_inline_auto_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
) {
let sub_id = sess.next_internal_sub_id();
let input = vec![InputItem::Text {
text: COMPACT_TRIGGER_TEXT.to_string(),
}];
run_compact_task_inner(
sess,
turn_context,
sub_id,
input,
SUMMARIZATION_PROMPT.to_string(),
false,
)
.await;
}
pub(super) async fn run_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
compact_instructions: String,
) {
run_compact_task_inner(
sess,
turn_context,
sub_id,
input,
compact_instructions,
true,
)
.await;
}
async fn run_compact_task_inner(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
compact_instructions: String,
remove_task_on_completion: bool,
) {
let model_context_window = turn_context.client.get_model_context_window();
let start_event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskStarted(TaskStartedEvent {
model_context_window,
}),
};
sess.send_event(start_event).await;
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let instructions_override = compact_instructions;
let turn_input = sess.turn_input_with_history(vec![initial_input_for_turn.clone().into()]);
let prompt = Prompt {
input: turn_input,
tools: Vec::new(),
base_instructions_override: Some(instructions_override),
};
let max_retries = turn_context.client.get_provider().stream_max_retries();
let mut retries = 0;
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
loop {
let attempt_result = drain_to_completed(&sess, turn_context.as_ref(), &prompt).await;
match attempt_result {
Ok(()) => {
break;
}
Err(CodexErr::Interrupted) => {
return;
}
Err(e) => {
if retries < max_retries {
retries += 1;
let delay = backoff(retries);
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}"
),
)
.await;
tokio::time::sleep(delay).await;
continue;
} else {
let event = Event {
id: sub_id.clone(),
msg: EventMsg::Error(ErrorEvent {
message: e.to_string(),
}),
};
sess.send_event(event).await;
return;
}
}
}
}
if remove_task_on_completion {
sess.remove_task(&sub_id);
}
let history_snapshot = {
let state = sess.state.lock_unchecked();
state.history.contents()
};
let summary_text = get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let user_messages = collect_user_messages(&history_snapshot);
let initial_context = sess.build_initial_context(turn_context.as_ref());
let new_history = build_compacted_history(initial_context, &user_messages, &summary_text);
{
let mut state = sess.state.lock_unchecked();
state.history.replace(new_history);
}
let rollout_item = RolloutItem::Compacted(CompactedItem {
message: summary_text.clone(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::AgentMessage(AgentMessageEvent {
message: "Compact task completed".to_string(),
}),
};
sess.send_event(event).await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskComplete(TaskCompleteEvent {
last_agent_message: None,
}),
};
sess.send_event(event).await;
}
fn content_items_to_text(content: &[ContentItem]) -> Option<String> {
let mut pieces = Vec::new();
for item in content {
match item {
ContentItem::InputText { text } | ContentItem::OutputText { text } => {
if !text.is_empty() {
pieces.push(text.as_str());
}
}
ContentItem::InputImage { .. } => {}
}
}
if pieces.is_empty() {
None
} else {
Some(pieces.join("\n"))
}
}
pub(crate) fn collect_user_messages(items: &[ResponseItem]) -> Vec<String> {
items
.iter()
.filter_map(|item| match item {
ResponseItem::Message { role, content, .. } if role == "user" => {
content_items_to_text(content)
}
_ => None,
})
.filter(|text| !is_session_prefix_message(text))
.collect()
}
fn is_session_prefix_message(text: &str) -> bool {
matches!(
InputMessageKind::from(("user", text)),
InputMessageKind::UserInstructions | InputMessageKind::EnvironmentContext
)
}
pub(crate) fn build_compacted_history(
initial_context: Vec<ResponseItem>,
user_messages: &[String],
summary_text: &str,
) -> Vec<ResponseItem> {
let mut history = initial_context;
let user_messages_text = if user_messages.is_empty() {
"(none)".to_string()
} else {
user_messages.join("\n\n")
};
let summary_text = if summary_text.is_empty() {
"(no summary available)".to_string()
} else {
summary_text.to_string()
};
let Ok(bridge) = HistoryBridgeTemplate {
user_messages_text: &user_messages_text,
summary_text: &summary_text,
}
.render() else {
return vec![];
};
history.push(ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText { text: bridge }],
});
history
}
async fn drain_to_completed(
sess: &Session,
turn_context: &TurnContext,
prompt: &Prompt,
) -> CodexResult<()> {
let mut stream = turn_context.client.clone().stream(prompt).await?;
loop {
let maybe_event = stream.next().await;
let Some(event) = maybe_event else {
return Err(CodexErr::Stream(
"stream closed before response.completed".into(),
None,
));
};
match event {
Ok(ResponseEvent::OutputItemDone(item)) => {
let mut state = sess.state.lock_unchecked();
state.history.record_items(std::slice::from_ref(&item));
}
Ok(ResponseEvent::Completed { .. }) => {
return Ok(());
}
Ok(_) => continue,
Err(e) => return Err(e),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn content_items_to_text_joins_non_empty_segments() {
let items = vec![
ContentItem::InputText {
text: "hello".to_string(),
},
ContentItem::OutputText {
text: String::new(),
},
ContentItem::OutputText {
text: "world".to_string(),
},
];
let joined = content_items_to_text(&items);
assert_eq!(Some("hello\nworld".to_string()), joined);
}
#[test]
fn content_items_to_text_ignores_image_only_content() {
let items = vec![ContentItem::InputImage {
image_url: "file://image.png".to_string(),
}];
let joined = content_items_to_text(&items);
assert_eq!(None, joined);
}
#[test]
fn collect_user_messages_extracts_user_text_only() {
let items = vec![
ResponseItem::Message {
id: Some("assistant".to_string()),
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
},
ResponseItem::Message {
id: Some("user".to_string()),
role: "user".to_string(),
content: vec![
ContentItem::InputText {
text: "first".to_string(),
},
ContentItem::OutputText {
text: "second".to_string(),
},
],
},
ResponseItem::Other,
];
let collected = collect_user_messages(&items);
assert_eq!(vec!["first\nsecond".to_string()], collected);
}
#[test]
fn collect_user_messages_filters_session_prefix_entries() {
let items = vec![
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "<user_instructions>do things</user_instructions>".to_string(),
}],
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "<ENVIRONMENT_CONTEXT>cwd=/tmp</ENVIRONMENT_CONTEXT>".to_string(),
}],
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "real user message".to_string(),
}],
},
];
let collected = collect_user_messages(&items);
assert_eq!(vec!["real user message".to_string()], collected);
}
}

View File

@@ -9,6 +9,7 @@ use crate::config_types::Tui;
use crate::config_types::UriBasedFileOpener;
use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::derive_default_model_family;
use crate::model_family::find_family_for_model;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::built_in_model_providers;
@@ -32,7 +33,9 @@ use toml::Value as TomlValue;
use toml_edit::DocumentMut;
const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
pub const GPT5_HIGH_MODEL: &str = "gpt-5-high";
const OPENAI_DEFAULT_REVIEW_MODEL: &str = "gpt-5";
pub const SWIFTFOX_MEDIUM_MODEL: &str = "swiftfox";
pub const SWIFTFOX_MODEL_DISPLAY_NAME: &str = "swiftfox-medium";
/// Maximum number of bytes of the documentation that will be embedded. Larger
/// files are *silently truncated* to this size so we do not take up too much of
@@ -47,6 +50,9 @@ pub struct Config {
/// Optional override of model selection.
pub model: String,
/// Model used specifically for review sessions. Defaults to "gpt-5".
pub review_model: String,
pub model_family: ModelFamily,
/// Size of the context window for the model, in tokens.
@@ -55,6 +61,9 @@ pub struct Config {
/// Maximum number of output tokens.
pub model_max_output_tokens: Option<u64>,
/// Token usage threshold triggering auto-compaction of conversation history.
pub model_auto_compact_token_limit: Option<i64>,
/// Key into the model_providers map that specifies which provider to use.
pub model_provider_id: String,
@@ -140,7 +149,7 @@ pub struct Config {
/// Value to use for `reasoning.effort` when making a request using the
/// Responses API.
pub model_reasoning_effort: ReasoningEffort,
pub model_reasoning_effort: Option<ReasoningEffort>,
/// If not "none", the value to use for `reasoning.summary` when making a
/// request using the Responses API.
@@ -152,9 +161,6 @@ pub struct Config {
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: String,
/// Experimental rollout resume path (absolute path to .jsonl; undocumented).
pub experimental_resume: Option<PathBuf>,
/// Include an experimental plan tool that the model can use to update its current plan and status of each step.
pub include_plan_tool: bool,
@@ -423,14 +429,24 @@ pub async fn persist_model_selection(
if let Some(profile_name) = active_profile {
let profile_table = ensure_profile_table(&mut doc, profile_name)?;
profile_table["model"] = toml_edit::value(model);
if let Some(effort) = effort {
profile_table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
match effort {
Some(effort) => {
profile_table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
None => {
profile_table.remove("model_reasoning_effort");
}
}
} else {
let table = doc.as_table_mut();
table["model"] = toml_edit::value(model);
if let Some(effort) = effort {
table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
match effort {
Some(effort) => {
table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
None => {
table.remove("model_reasoning_effort");
}
}
}
@@ -499,6 +515,8 @@ fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
pub struct ConfigToml {
/// Optional override of model selection.
pub model: Option<String>,
/// Review model override used by the `/review` feature.
pub review_model: Option<String>,
/// Provider to use from the model_providers map.
pub model_provider: Option<String>,
@@ -509,6 +527,9 @@ pub struct ConfigToml {
/// Maximum number of output tokens.
pub model_max_output_tokens: Option<u64>,
/// Token usage threshold triggering auto-compaction of conversation history.
pub model_auto_compact_token_limit: Option<i64>,
/// Default approval policy for executing commands.
pub approval_policy: Option<AskForApproval>,
@@ -579,9 +600,6 @@ pub struct ConfigToml {
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: Option<String>,
/// Experimental rollout resume path (absolute path to .jsonl; undocumented).
pub experimental_resume: Option<PathBuf>,
/// Experimental path to a file whose contents replace the built-in BASE_INSTRUCTIONS.
pub experimental_instructions_file: Option<PathBuf>,
@@ -724,6 +742,7 @@ impl ConfigToml {
#[derive(Default, Debug, Clone)]
pub struct ConfigOverrides {
pub model: Option<String>,
pub review_model: Option<String>,
pub cwd: Option<PathBuf>,
pub approval_policy: Option<AskForApproval>,
pub sandbox_mode: Option<SandboxMode>,
@@ -751,6 +770,7 @@ impl Config {
// Destructure ConfigOverrides fully to ensure all overrides are applied.
let ConfigOverrides {
model,
review_model: override_review_model,
cwd,
approval_policy,
sandbox_mode,
@@ -841,15 +861,8 @@ impl Config {
.or(cfg.model)
.unwrap_or_else(default_model);
let mut model_family = find_family_for_model(&model).unwrap_or_else(|| ModelFamily {
slug: model.clone(),
family: model.clone(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
});
let mut model_family =
find_family_for_model(&model).unwrap_or_else(|| derive_default_model_family(&model));
if let Some(supports_reasoning_summaries) = cfg.model_supports_reasoning_summaries {
model_family.supports_reasoning_summaries = supports_reasoning_summaries;
@@ -867,8 +880,11 @@ impl Config {
.as_ref()
.map(|info| info.max_output_tokens)
});
let experimental_resume = cfg.experimental_resume;
let model_auto_compact_token_limit = cfg.model_auto_compact_token_limit.or_else(|| {
openai_model_info
.as_ref()
.and_then(|info| info.auto_compact_token_limit)
});
// Load base instructions override from a file if specified. If the
// path is relative, resolve it against the effective cwd so the
@@ -881,11 +897,18 @@ impl Config {
Self::get_base_instructions(experimental_instructions_path, &resolved_cwd)?;
let base_instructions = base_instructions.or(file_base_instructions);
// Default review model when not set in config; allow CLI override to take precedence.
let review_model = override_review_model
.or(cfg.review_model)
.unwrap_or_else(default_review_model);
let config = Self {
model,
review_model,
model_family,
model_context_window,
model_max_output_tokens,
model_auto_compact_token_limit,
model_provider_id,
model_provider,
cwd: resolved_cwd,
@@ -913,8 +936,7 @@ impl Config {
.unwrap_or(false),
model_reasoning_effort: config_profile
.model_reasoning_effort
.or(cfg.model_reasoning_effort)
.unwrap_or_default(),
.or(cfg.model_reasoning_effort),
model_reasoning_summary: config_profile
.model_reasoning_summary
.or(cfg.model_reasoning_summary)
@@ -924,8 +946,6 @@ impl Config {
.chatgpt_base_url
.or(cfg.chatgpt_base_url)
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
experimental_resume,
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
tools_web_search_request,
@@ -1006,6 +1026,10 @@ fn default_model() -> String {
OPENAI_DEFAULT_MODEL.to_string()
}
fn default_review_model() -> String {
OPENAI_DEFAULT_REVIEW_MODEL.to_string()
}
/// Returns the path to the Codex configuration directory, which can be
/// specified by the `CODEX_HOME` environment variable. If not set, defaults to
/// `~/.codex`.
@@ -1145,7 +1169,7 @@ exclude_slash_tmp = true
persist_model_selection(
codex_home.path(),
None,
"gpt-5-high-new",
"swiftfox",
Some(ReasoningEffort::High),
)
.await?;
@@ -1154,7 +1178,7 @@ exclude_slash_tmp = true
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(parsed.model.as_deref(), Some("swiftfox"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
Ok(())
@@ -1208,8 +1232,8 @@ model = "gpt-4.1"
persist_model_selection(
codex_home.path(),
Some("dev"),
"gpt-5-high-new",
Some(ReasoningEffort::Low),
"swiftfox",
Some(ReasoningEffort::Medium),
)
.await?;
@@ -1221,8 +1245,11 @@ model = "gpt-4.1"
.get("dev")
.expect("profile should be created");
assert_eq!(profile.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(profile.model_reasoning_effort, Some(ReasoningEffort::Low));
assert_eq!(profile.model.as_deref(), Some("swiftfox"));
assert_eq!(
profile.model_reasoning_effort,
Some(ReasoningEffort::Medium)
);
Ok(())
}
@@ -1418,9 +1445,11 @@ model_verbosity = "high"
assert_eq!(
Config {
model: "o3".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("o3").expect("known model slug"),
model_context_window: Some(200_000),
model_max_output_tokens: Some(100_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::Never,
@@ -1438,11 +1467,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::High,
model_reasoning_effort: Some(ReasoningEffort::High),
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1474,9 +1502,11 @@ model_verbosity = "high"
)?;
let expected_gpt3_profile_config = Config {
model: "gpt-3.5-turbo".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("gpt-3.5-turbo").expect("known model slug"),
model_context_window: Some(16_385),
model_max_output_tokens: Some(4_096),
model_auto_compact_token_limit: None,
model_provider_id: "openai-chat-completions".to_string(),
model_provider: fixture.openai_chat_completions_provider.clone(),
approval_policy: AskForApproval::UnlessTrusted,
@@ -1494,11 +1524,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_effort: None,
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1545,9 +1574,11 @@ model_verbosity = "high"
)?;
let expected_zdr_profile_config = Config {
model: "o3".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("o3").expect("known model slug"),
model_context_window: Some(200_000),
model_max_output_tokens: Some(100_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::OnFailure,
@@ -1565,11 +1596,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_effort: None,
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1602,9 +1632,11 @@ model_verbosity = "high"
)?;
let expected_gpt5_profile_config = Config {
model: "gpt-5".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("gpt-5").expect("known model slug"),
model_context_window: Some(272_000),
model_max_output_tokens: Some(128_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::OnFailure,
@@ -1622,11 +1654,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::High,
model_reasoning_effort: Some(ReasoningEffort::High),
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: Some(Verbosity::High),
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,

View File

@@ -7,6 +7,12 @@ use toml_edit::DocumentMut;
pub const CONFIG_KEY_MODEL: &str = "model";
pub const CONFIG_KEY_EFFORT: &str = "model_reasoning_effort";
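/// How a `None` override value is treated while persisting: `Skip` leaves any
/// existing value in place, while `Remove` deletes the key from the document.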
#[derive(Copy, Clone)]
enum NoneBehavior {
Skip,
Remove,
}
/// Persist overrides into `config.toml` using explicit key segments per
/// override. This avoids ambiguity with keys that contain dots or spaces.
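///
/// Illustrative call (a minimal sketch; `codex_home` is a hypothetical
/// `&Path` pointing at the Codex home directory):
///
/// ```ignore
/// persist_overrides(codex_home, None, &[(&[CONFIG_KEY_MODEL], "gpt-5")]).await?;
/// ```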
pub async fn persist_overrides(
@@ -14,47 +20,12 @@ pub async fn persist_overrides(
profile: Option<&str>,
overrides: &[(&[&str], &str)],
) -> Result<()> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
let with_options: Vec<(&[&str], Option<&str>)> = overrides
.iter()
.map(|(segments, value)| (*segments, Some(*value)))
.collect();
let mut doc = match tokio::fs::read_to_string(&config_path).await {
Ok(s) => s.parse::<DocumentMut>()?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
tokio::fs::create_dir_all(codex_home).await?;
DocumentMut::new()
}
Err(e) => return Err(e.into()),
};
let effective_profile = if let Some(p) = profile {
Some(p.to_owned())
} else {
doc.get("profile")
.and_then(|i| i.as_str())
.map(|s| s.to_string())
};
for (segments, val) in overrides.iter().copied() {
let value = toml_edit::value(val);
if let Some(ref name) = effective_profile {
if segments.first().copied() == Some("profiles") {
apply_toml_edit_override_segments(&mut doc, segments, value);
} else {
let mut seg_buf: Vec<&str> = Vec::with_capacity(2 + segments.len());
seg_buf.push("profiles");
seg_buf.push(name.as_str());
seg_buf.extend_from_slice(segments);
apply_toml_edit_override_segments(&mut doc, &seg_buf, value);
}
} else {
apply_toml_edit_override_segments(&mut doc, segments, value);
}
}
let tmp_file = NamedTempFile::new_in(codex_home)?;
tokio::fs::write(tmp_file.path(), doc.to_string()).await?;
tmp_file.persist(config_path)?;
Ok(())
persist_overrides_with_behavior(codex_home, profile, &with_options, NoneBehavior::Skip).await
}
/// Persist overrides where values may be optional. Any entries with `None`
@@ -65,16 +36,17 @@ pub async fn persist_non_null_overrides(
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
) -> Result<()> {
let filtered: Vec<(&[&str], &str)> = overrides
.iter()
.filter_map(|(k, v)| v.map(|vv| (*k, vv)))
.collect();
persist_overrides_with_behavior(codex_home, profile, overrides, NoneBehavior::Skip).await
}
if filtered.is_empty() {
return Ok(());
}
persist_overrides(codex_home, profile, &filtered).await
/// Persist overrides where `None` values clear any existing values from the
/// configuration file.
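///
/// Illustrative call mirroring the tests below (`codex_home` is a
/// hypothetical `&Path`):
///
/// ```ignore
/// // Persists `model_reasoning_effort` and clears any stored `model` value.
/// persist_overrides_and_clear_if_none(
///     codex_home,
///     None,
///     &[(&[CONFIG_KEY_MODEL], None), (&[CONFIG_KEY_EFFORT], Some("high"))],
/// )
/// .await?;
/// ```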
pub async fn persist_overrides_and_clear_if_none(
codex_home: &Path,
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
) -> Result<()> {
persist_overrides_with_behavior(codex_home, profile, overrides, NoneBehavior::Remove).await
}
/// Apply a single override onto a `toml_edit` document while preserving
@@ -121,6 +93,125 @@ fn apply_toml_edit_override_segments(
current[last] = value;
}
async fn persist_overrides_with_behavior(
codex_home: &Path,
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
none_behavior: NoneBehavior,
) -> Result<()> {
if overrides.is_empty() {
return Ok(());
}
let should_skip = match none_behavior {
NoneBehavior::Skip => overrides.iter().all(|(_, value)| value.is_none()),
NoneBehavior::Remove => false,
};
if should_skip {
return Ok(());
}
let config_path = codex_home.join(CONFIG_TOML_FILE);
let read_result = tokio::fs::read_to_string(&config_path).await;
let mut doc = match read_result {
Ok(contents) => contents.parse::<DocumentMut>()?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
if overrides
.iter()
.all(|(_, value)| value.is_none() && matches!(none_behavior, NoneBehavior::Remove))
{
return Ok(());
}
tokio::fs::create_dir_all(codex_home).await?;
DocumentMut::new()
}
Err(e) => return Err(e.into()),
};
let effective_profile = if let Some(p) = profile {
Some(p.to_owned())
} else {
doc.get("profile")
.and_then(|i| i.as_str())
.map(|s| s.to_string())
};
let mut mutated = false;
for (segments, value) in overrides.iter().copied() {
let mut seg_buf: Vec<&str> = Vec::new();
let segments_to_apply: &[&str];
if let Some(ref name) = effective_profile {
if segments.first().copied() == Some("profiles") {
segments_to_apply = segments;
} else {
seg_buf.reserve(2 + segments.len());
seg_buf.push("profiles");
seg_buf.push(name.as_str());
seg_buf.extend_from_slice(segments);
segments_to_apply = seg_buf.as_slice();
}
} else {
segments_to_apply = segments;
}
match value {
Some(v) => {
let item_value = toml_edit::value(v);
apply_toml_edit_override_segments(&mut doc, segments_to_apply, item_value);
mutated = true;
}
None => {
if matches!(none_behavior, NoneBehavior::Remove)
&& remove_toml_edit_segments(&mut doc, segments_to_apply)
{
mutated = true;
}
}
}
}
if !mutated {
return Ok(());
}
let tmp_file = NamedTempFile::new_in(codex_home)?;
tokio::fs::write(tmp_file.path(), doc.to_string()).await?;
tmp_file.persist(config_path)?;
Ok(())
}
fn remove_toml_edit_segments(doc: &mut DocumentMut, segments: &[&str]) -> bool {
use toml_edit::Item;
if segments.is_empty() {
return false;
}
let mut current = doc.as_table_mut();
for seg in &segments[..segments.len() - 1] {
let Some(item) = current.get_mut(seg) else {
return false;
};
match item {
Item::Table(table) => {
current = table;
}
_ => {
return false;
}
}
}
current.remove(segments[segments.len() - 1]).is_some()
}
#[cfg(test)]
mod tests {
use super::*;
@@ -574,6 +665,81 @@ model = "o3"
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_removes_top_level_value() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
let seed = r#"model = "gpt-5"
model_reasoning_effort = "medium"
"#;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
.expect("seed write");
persist_overrides_and_clear_if_none(
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], None),
(&[CONFIG_KEY_EFFORT], Some("high")),
],
)
.await
.expect("persist");
let contents = read_config(codex_home).await;
let expected = "model_reasoning_effort = \"high\"\n";
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_respects_active_profile() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
let seed = r#"profile = "team"
[profiles.team]
model = "gpt-4"
model_reasoning_effort = "minimal"
"#;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
.expect("seed write");
persist_overrides_and_clear_if_none(
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], None),
(&[CONFIG_KEY_EFFORT], Some("high")),
],
)
.await
.expect("persist");
let contents = read_config(codex_home).await;
let expected = r#"profile = "team"
[profiles.team]
model_reasoning_effort = "high"
"#;
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_noop_when_file_missing() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
persist_overrides_and_clear_if_none(codex_home, None, &[(&[CONFIG_KEY_MODEL], None)])
.await
.expect("persist");
assert!(!codex_home.join(CONFIG_TOML_FILE).exists());
}
// Test helper moved to bottom per review guidance.
async fn read_config(codex_home: &Path) -> String {
let p = codex_home.join(CONFIG_TOML_FILE);

View File

@@ -1,4 +1,3 @@
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
/// Transcript of conversation history
@@ -33,52 +32,8 @@ impl ConversationHistory {
}
}
pub(crate) fn keep_last_messages(&mut self, n: usize) {
if n == 0 {
self.items.clear();
return;
}
// Collect the last N message items (assistant/user), newest to oldest.
let mut kept: Vec<ResponseItem> = Vec::with_capacity(n);
for item in self.items.iter().rev() {
if let ResponseItem::Message { role, content, .. } = item {
kept.push(ResponseItem::Message {
// we need to remove the id, or the model will complain that messages were
// sent without their reasoning
id: None,
role: role.clone(),
content: content.clone(),
});
if kept.len() == n {
break;
}
}
}
// Preserve chronological order (oldest to newest) within the kept slice.
kept.reverse();
self.items = kept;
}
pub(crate) fn last_agent_message(&self) -> String {
for item in self.items.iter().rev() {
if let ResponseItem::Message { role, content, .. } = item
&& role == "assistant"
{
return content
.iter()
.find_map(|ci| {
if let ContentItem::OutputText { text } = ci {
Some(text.clone())
} else {
None
}
})
.unwrap_or_default();
}
}
String::new()
pub(crate) fn replace(&mut self, items: Vec<ResponseItem>) {
self.items = items;
}
}

View File

@@ -59,21 +59,11 @@ impl ConversationManager {
config: Config,
auth_manager: Arc<AuthManager>,
) -> CodexResult<NewConversation> {
// TO BE REFACTORED: use the config experimental_resume field until we have a mainstream way.
if let Some(resume_path) = config.experimental_resume.as_ref() {
let initial_history = RolloutRecorder::get_rollout_history(resume_path).await?;
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, initial_history).await?;
self.finalize_spawn(codex, conversation_id).await
} else {
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, InitialHistory::New).await?;
self.finalize_spawn(codex, conversation_id).await
}
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, InitialHistory::New).await?;
self.finalize_spawn(codex, conversation_id).await
}
async fn finalize_spawn(

View File

@@ -1,3 +1,4 @@
use crate::exec::ExecToolCallOutput;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use codex_protocol::mcp_protocol::ConversationId;
@@ -13,8 +14,11 @@ pub type Result<T> = std::result::Result<T, CodexErr>;
#[derive(Error, Debug)]
pub enum SandboxErr {
/// Error from sandbox execution
#[error("sandbox denied exec error, exit code: {0}, stdout: {1}, stderr: {2}")]
Denied(i32, String, String),
#[error(
"sandbox denied exec error, exit code: {}, stdout: {}, stderr: {}",
.output.exit_code, .output.stdout.text, .output.stderr.text
)]
Denied { output: Box<ExecToolCallOutput> },
/// Error from linux seccomp filter setup
#[cfg(target_os = "linux")]
@@ -28,7 +32,7 @@ pub enum SandboxErr {
/// Command timed out
#[error("command timed out")]
Timeout,
Timeout { output: Box<ExecToolCallOutput> },
/// Command was killed by a signal
#[error("command was killed by a signal")]
@@ -245,9 +249,12 @@ impl CodexErr {
pub fn get_error_message_ui(e: &CodexErr) -> String {
match e {
CodexErr::Sandbox(SandboxErr::Denied(_, _, stderr)) => stderr.to_string(),
CodexErr::Sandbox(SandboxErr::Denied { output }) => output.stderr.text.clone(),
// Timeouts are not sandbox errors from a UX perspective; present them plainly
CodexErr::Sandbox(SandboxErr::Timeout) => "error: command timed out".to_string(),
CodexErr::Sandbox(SandboxErr::Timeout { output }) => format!(
"error: command timed out after {} ms",
output.duration.as_millis()
),
_ => e.to_string(),
}
}

View File

@@ -34,6 +34,7 @@ const DEFAULT_TIMEOUT_MS: u64 = 10_000;
const SIGKILL_CODE: i32 = 9;
const TIMEOUT_CODE: i32 = 64;
const EXIT_CODE_SIGNAL_BASE: i32 = 128; // conventional shell: 128 + signal
const EXEC_TIMEOUT_EXIT_CODE: i32 = 124; // conventional timeout exit code
// I/O buffer sizing
const READ_CHUNK_SIZE: usize = 8192; // bytes per read
@@ -86,11 +87,12 @@ pub async fn process_exec_tool_call(
) -> Result<ExecToolCallOutput> {
let start = Instant::now();
let timeout_duration = params.timeout_duration();
let raw_output_result: std::result::Result<RawExecToolCallOutput, CodexErr> = match sandbox_type
{
SandboxType::None => exec(params, sandbox_policy, stdout_stream.clone()).await,
SandboxType::MacosSeatbelt => {
let timeout = params.timeout_duration();
let ExecParams {
command, cwd, env, ..
} = params;
@@ -102,10 +104,9 @@ pub async fn process_exec_tool_call(
env,
)
.await?;
consume_truncated_output(child, timeout, stdout_stream.clone()).await
consume_truncated_output(child, timeout_duration, stdout_stream.clone()).await
}
SandboxType::LinuxSeccomp => {
let timeout = params.timeout_duration();
let ExecParams {
command, cwd, env, ..
} = params;
@@ -123,41 +124,56 @@ pub async fn process_exec_tool_call(
)
.await?;
consume_truncated_output(child, timeout, stdout_stream).await
consume_truncated_output(child, timeout_duration, stdout_stream).await
}
};
let duration = start.elapsed();
match raw_output_result {
Ok(raw_output) => {
let stdout = raw_output.stdout.from_utf8_lossy();
let stderr = raw_output.stderr.from_utf8_lossy();
#[allow(unused_mut)]
let mut timed_out = raw_output.timed_out;
#[cfg(target_family = "unix")]
match raw_output.exit_status.signal() {
Some(TIMEOUT_CODE) => return Err(CodexErr::Sandbox(SandboxErr::Timeout)),
Some(signal) => {
return Err(CodexErr::Sandbox(SandboxErr::Signal(signal)));
{
if let Some(signal) = raw_output.exit_status.signal() {
if signal == TIMEOUT_CODE {
timed_out = true;
} else {
return Err(CodexErr::Sandbox(SandboxErr::Signal(signal)));
}
}
None => {}
}
let exit_code = raw_output.exit_status.code().unwrap_or(-1);
if exit_code != 0 && is_likely_sandbox_denied(sandbox_type, exit_code) {
return Err(CodexErr::Sandbox(SandboxErr::Denied(
exit_code,
stdout.text,
stderr.text,
)));
let mut exit_code = raw_output.exit_status.code().unwrap_or(-1);
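// Report timeouts with the conventional `timeout(1)` exit code rather than
// the raw signal-derived status.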
if timed_out {
exit_code = EXEC_TIMEOUT_EXIT_CODE;
}
Ok(ExecToolCallOutput {
let stdout = raw_output.stdout.from_utf8_lossy();
let stderr = raw_output.stderr.from_utf8_lossy();
let aggregated_output = raw_output.aggregated_output.from_utf8_lossy();
let exec_output = ExecToolCallOutput {
exit_code,
stdout,
stderr,
aggregated_output: raw_output.aggregated_output.from_utf8_lossy(),
aggregated_output,
duration,
})
timed_out,
};
if timed_out {
return Err(CodexErr::Sandbox(SandboxErr::Timeout {
output: Box::new(exec_output),
}));
}
if exit_code != 0 && is_likely_sandbox_denied(sandbox_type, exit_code) {
return Err(CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(exec_output),
}));
}
Ok(exec_output)
}
Err(err) => {
tracing::error!("exec error: {err}");
@@ -197,6 +213,7 @@ struct RawExecToolCallOutput {
pub stdout: StreamOutput<Vec<u8>>,
pub stderr: StreamOutput<Vec<u8>>,
pub aggregated_output: StreamOutput<Vec<u8>>,
pub timed_out: bool,
}
impl StreamOutput<String> {
@@ -229,6 +246,7 @@ pub struct ExecToolCallOutput {
pub stderr: StreamOutput<String>,
pub aggregated_output: StreamOutput<String>,
pub duration: Duration,
pub timed_out: bool,
}
async fn exec(
@@ -298,22 +316,24 @@ async fn consume_truncated_output(
Some(agg_tx.clone()),
));
let exit_status = tokio::select! {
let (exit_status, timed_out) = tokio::select! {
result = tokio::time::timeout(timeout, child.wait()) => {
match result {
Ok(Ok(exit_status)) => exit_status,
Ok(e) => e?,
Ok(status_result) => {
let exit_status = status_result?;
(exit_status, false)
}
Err(_) => {
// timeout
child.start_kill()?;
// Debatable whether `child.wait().await` should be called here.
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE)
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE), true)
}
}
}
_ = tokio::signal::ctrl_c() => {
child.start_kill()?;
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE)
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE), false)
}
};
@@ -336,6 +356,7 @@ async fn consume_truncated_output(
stdout,
stderr,
aggregated_output,
timed_out,
})
}

View File

@@ -11,6 +11,9 @@ pub(crate) struct ExecCommandSession {
/// Broadcast stream of output chunks read from the PTY. New subscribers
/// receive only chunks emitted after they subscribe.
output_tx: broadcast::Sender<Vec<u8>>,
/// Receiver subscribed before the child process starts emitting output so
/// the first caller can consume any early data without races.
initial_output_rx: StdMutex<Option<broadcast::Receiver<Vec<u8>>>>,
/// Child killer handle for termination on drop (can signal independently
/// of a thread blocked in `.wait()`).
@@ -42,6 +45,7 @@ impl ExecCommandSession {
Self {
writer_tx,
output_tx,
initial_output_rx: StdMutex::new(None),
killer: StdMutex::new(Some(killer)),
reader_handle: StdMutex::new(Some(reader_handle)),
writer_handle: StdMutex::new(Some(writer_handle)),
@@ -50,12 +54,26 @@ impl ExecCommandSession {
}
}
pub(crate) fn set_initial_output_receiver(&self, receiver: broadcast::Receiver<Vec<u8>>) {
if let Ok(mut guard) = self.initial_output_rx.lock()
&& guard.is_none()
{
*guard = Some(receiver);
}
}
pub(crate) fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
self.writer_tx.clone()
}
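/// Returns a receiver for PTY output. The first caller gets the receiver that
/// was subscribed before the child started producing output (if one was
/// stashed via `set_initial_output_receiver`); subsequent callers subscribe
/// from the current position.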
pub(crate) fn output_receiver(&self) -> broadcast::Receiver<Vec<u8>> {
self.output_tx.subscribe()
if let Ok(mut guard) = self.initial_output_rx.lock()
&& let Some(receiver) = guard.take()
{
receiver
} else {
self.output_tx.subscribe()
}
}
pub(crate) fn has_exited(&self) -> bool {

View File

@@ -279,6 +279,7 @@ async fn create_exec_command_session(
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
// Broadcast channel for streaming PTY output to readers; subscribers only receive chunks emitted after they subscribe.
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
let initial_output_rx = output_tx.subscribe();
// Reader task: drain PTY and forward chunks to output channel.
let mut reader = pair.master.try_clone_reader()?;
@@ -350,6 +351,7 @@ async fn create_exec_command_session(
wait_handle,
exit_status,
);
session.set_initial_output_receiver(initial_output_rx);
Ok((session, exit_rx))
}

View File

@@ -10,8 +10,8 @@ pub(crate) const INTERNAL_STORAGE_FILE: &str = "internal_storage.json";
pub struct InternalStorage {
#[serde(skip)]
storage_path: PathBuf,
#[serde(default)]
pub gpt_5_high_model_prompt_seen: bool,
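// The serde alias keeps values persisted under the old field name readable
// after the rename.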
#[serde(default, alias = "gpt_5_high_model_prompt_seen")]
pub swiftfox_model_prompt_seen: bool,
}
// TODO(jif) generalise all the file writers and build proper async channel inserters.

View File

@@ -35,6 +35,7 @@ mod mcp_connection_manager;
mod mcp_tool_call;
mod message_history;
mod model_provider_info;
mod normalize_command;
pub mod parse_command;
mod truncate;
mod unified_exec;
@@ -70,6 +71,7 @@ pub use rollout::ARCHIVED_SESSIONS_SUBDIR;
pub use rollout::RolloutRecorder;
pub use rollout::SESSIONS_SUBDIR;
pub use rollout::SessionMeta;
pub use rollout::find_conversation_path_by_id_str;
pub use rollout::list::ConversationItem;
pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;

View File

@@ -1,6 +1,11 @@
use crate::config_types::ReasoningSummaryFormat;
use crate::tool_apply_patch::ApplyPatchToolType;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
const SWIFTFOX_INSTRUCTIONS: &str = include_str!("../swiftfox_prompt.md");
/// A model family is a group of models that share certain characteristics.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ModelFamily {
@@ -33,6 +38,9 @@ pub struct ModelFamily {
/// Present if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
// Instructions to use for querying the model
pub base_instructions: String,
}
macro_rules! model_family {
@@ -48,6 +56,7 @@ macro_rules! model_family {
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
};
// apply overrides
$(
@@ -57,22 +66,6 @@ macro_rules! model_family {
}};
}
macro_rules! simple_model_family {
(
$slug:expr, $family:expr
) => {{
Some(ModelFamily {
slug: $slug.to_string(),
family: $family.to_string(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
})
}};
}
/// Returns a `ModelFamily` for the given model slug, or `None` if the slug
/// does not match any known model family.
pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
@@ -80,23 +73,20 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, "o3",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("o4-mini") {
model_family!(
slug, "o4-mini",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("codex-mini-latest") {
model_family!(
slug, "codex-mini-latest",
supports_reasoning_summaries: true,
uses_local_shell_tool: true,
)
} else if slug.starts_with("codex-") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("gpt-4.1") {
model_family!(
@@ -106,15 +96,36 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
} else if slug.starts_with("gpt-oss") || slug.starts_with("openai/gpt-oss") {
model_family!(slug, "gpt-oss", apply_patch_tool_type: Some(ApplyPatchToolType::Function))
} else if slug.starts_with("gpt-4o") {
simple_model_family!(slug, "gpt-4o")
model_family!(slug, "gpt-4o", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("gpt-3.5") {
simple_model_family!(slug, "gpt-3.5")
model_family!(slug, "gpt-3.5", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("codex-") || slug.starts_with("swiftfox") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: SWIFTFOX_INSTRUCTIONS.to_string(),
)
} else if slug.starts_with("gpt-5") {
model_family!(
slug, "gpt-5",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else {
None
}
}
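/// Fallback family for model slugs that do not match any known family: every
/// capability flag defaults to off and the standard base instructions are used.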
pub fn derive_default_model_family(model: &str) -> ModelFamily {
ModelFamily {
slug: model.to_string(),
family: model.to_string(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
}
}

View File

@@ -162,6 +162,21 @@ impl ModelProviderInfo {
}
}
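/// True when this provider uses the Responses wire API and either is named
/// "azure" or has a base URL containing one of the known Azure domain markers.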
pub(crate) fn is_azure_responses_endpoint(&self) -> bool {
if self.wire_api != WireApi::Responses {
return false;
}
if self.name.eq_ignore_ascii_case("azure") {
return true;
}
self.base_url
.as_ref()
.map(|base| matches_azure_responses_base_url(base))
.unwrap_or(false)
}
/// Apply provider-specific HTTP headers (both static and environment-based)
/// onto an existing `reqwest::RequestBuilder` and return the updated
/// builder.
@@ -329,6 +344,18 @@ pub fn create_oss_provider_with_base_url(base_url: &str) -> ModelProviderInfo {
}
}
fn matches_azure_responses_base_url(base_url: &str) -> bool {
let base = base_url.to_ascii_lowercase();
const AZURE_MARKERS: [&str; 5] = [
"openai.azure.",
"cognitiveservices.azure.",
"aoai.azure.",
"azure-api.",
"azurefd.",
];
AZURE_MARKERS.iter().any(|marker| base.contains(marker))
}
#[cfg(test)]
mod tests {
use super::*;
@@ -419,4 +446,69 @@ env_http_headers = { "X-Example-Env-Header" = "EXAMPLE_ENV_VAR" }
let provider: ModelProviderInfo = toml::from_str(azure_provider_toml).unwrap();
assert_eq!(expected_provider, provider);
}
#[test]
fn detects_azure_responses_base_urls() {
fn provider_for(base_url: &str) -> ModelProviderInfo {
ModelProviderInfo {
name: "test".into(),
base_url: Some(base_url.into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
}
}
let positive_cases = [
"https://foo.openai.azure.com/openai",
"https://foo.openai.azure.us/openai/deployments/bar",
"https://foo.cognitiveservices.azure.cn/openai",
"https://foo.aoai.azure.com/openai",
"https://foo.openai.azure-api.net/openai",
"https://foo.z01.azurefd.net/",
];
for base_url in positive_cases {
let provider = provider_for(base_url);
assert!(
provider.is_azure_responses_endpoint(),
"expected {base_url} to be detected as Azure"
);
}
let named_provider = ModelProviderInfo {
name: "Azure".into(),
base_url: Some("https://example.com".into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
};
assert!(named_provider.is_azure_responses_endpoint());
let negative_cases = [
"https://api.openai.com/v1",
"https://example.com/openai",
"https://myproxy.azurewebsites.net/openai",
];
for base_url in negative_cases {
let provider = provider_for(base_url);
assert!(
!provider.is_azure_responses_endpoint(),
"expected {base_url} not to be detected as Azure"
);
}
}
}

View File

@@ -0,0 +1,438 @@
use crate::bash::extract_words_from_command_node;
use crate::bash::find_first_command_node;
use crate::bash::remainder_start_after_wrapper_operator;
use crate::bash::try_parse_bash;
use crate::bash::try_parse_word_only_commands_sequence;
use std::path::Path;
use std::path::PathBuf;
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct NormalizedCommand {
pub command: Vec<String>,
pub cwd: PathBuf,
pub command_for_display: Vec<String>,
}
/// Normalize a shell command for execution and display.
///
/// For a classic wrapper like `bash -lc "cd repo && git status"`, this returns:
/// - command: ["bash", "-lc", "git status"]
/// - cwd: original cwd joined with "repo"
/// - command_for_display:
/// - If the remainder is a single exec-able command, return it tokenized
/// (e.g., ["rg", "--files"]).
/// - Otherwise, wrap the script as ["bash", "-lc", "<script>"] so that
/// [`crate::parse_command::parse_command`] (which currently checks for
/// bash -lc) can handle it. Even if the original used zsh, we standardize
/// to bash here to match [`crate::parse_command::parse_command`]'s
/// expectations.
///
/// Returns None when no normalization is needed.
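///
/// Illustrative call (a sketch mirroring the unit tests below; paths are
/// hypothetical):
///
/// ```ignore
/// let command = vec![
///     "bash".to_string(),
///     "-lc".to_string(),
///     "cd repo && git status".to_string(),
/// ];
/// let norm = try_normalize_command(&command, Path::new("/work")).unwrap();
/// assert_eq!(norm.cwd, PathBuf::from("/work/repo"));
/// assert_eq!(norm.command_for_display, vec!["git".to_string(), "status".to_string()]);
/// ```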
pub(crate) fn try_normalize_command(command: &[String], cwd: &Path) -> Option<NormalizedCommand> {
let invocation = parse_shell_script_from_shell_invocation(command)?;
let shell_script = invocation.command.clone();
let tree = try_parse_bash(&shell_script)?;
let first_cmd = find_first_command_node(&tree)?;
let words = extract_words_from_command_node(first_cmd, &shell_script)?;
if !is_command_cd_to_directory(&words) {
return None;
}
let idx = remainder_start_after_wrapper_operator(first_cmd, &shell_script)?;
let remainder = shell_script[idx..].to_string();
// For parse_command consumption: if the remainder is a single exec-able
// command, return it as argv tokens; otherwise, wrap it in ["bash","-lc",..].
let command_for_display = try_parse_bash(&remainder)
.and_then(|tree| try_parse_word_only_commands_sequence(&tree, &remainder))
.and_then(|mut cmds| match cmds.len() {
1 => cmds.pop(),
_ => None,
})
.unwrap_or_else(|| vec!["bash".to_string(), "-lc".to_string(), remainder.clone()]);
// Compute new cwd only after confirming the wrapper/operator shape
let dir = &words[1];
let new_cwd = if Path::new(dir).is_absolute() {
PathBuf::from(dir)
} else {
// TODO(mbolin): Is there anything to worry about if `dir` is a
// symlink or contains `..` components?
cwd.join(dir)
};
Some(NormalizedCommand {
command: vec![invocation.executable, invocation.flag, remainder],
cwd: new_cwd,
command_for_display,
})
}
/// This is similar to [`crate::shell::strip_bash_lc`] and should potentially
/// be unified with it.
fn parse_shell_script_from_shell_invocation(command: &[String]) -> Option<ShellScriptInvocation> {
// Allowed shells
fn is_allowed_exe(s: &str) -> bool {
matches!(s, "bash" | "/bin/bash" | "zsh" | "/bin/zsh")
}
match command {
// exactly three items: <exe> <flag> <command>
[exe, flag, cmd] if is_allowed_exe(exe) && (flag == "-lc" || flag == "-c") => {
Some(ShellScriptInvocation {
executable: exe.clone(),
flag: flag.clone(),
command: cmd.clone(),
})
}
// exactly four items: <exe> -l -c <command>
[exe, flag1, flag2, cmd] if is_allowed_exe(exe) && flag1 == "-l" && flag2 == "-c" => {
Some(ShellScriptInvocation {
executable: exe.clone(),
flag: "-lc".to_string(),
command: cmd.clone(),
})
}
_ => None,
}
}
fn is_command_cd_to_directory(command: &[String]) -> bool {
matches!(command, [first, _dir] if first == "cd")
}
#[derive(Debug, Clone, PartialEq, Eq)]
struct ShellScriptInvocation {
executable: String,
/// Single argument, so `-l -c` must be collapsed to `-lc`.
/// It must be written so that `command` follows the flag directly,
/// i.e. `-lc`, not `-cl`.
flag: String,
command: String,
}
#[cfg(test)]
mod tests {
use super::NormalizedCommand;
use super::try_normalize_command;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
#[test]
fn cd_prefix_in_bash_script_is_hidden() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd foo && echo hi".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "echo hi".into()],
cwd: PathBuf::from("/baz/foo"),
command_for_display: vec!["echo".into(), "hi".into()],
})
);
}
#[test]
fn cd_shell_var_not_matched() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd $SOME_DIR && ls".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm, None,
"Not a classic wrapper (cd arg is a variable), so no normalization."
);
}
#[test]
fn cd_prefix_with_quoted_path_is_hidden() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
" cd 'foo bar' && ls".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "ls".into()],
cwd: PathBuf::from("/baz/foo bar"),
command_for_display: vec!["ls".into()],
})
);
}
#[test]
fn cd_prefix_with_additional_cd_is_preserved() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd foo && cd bar && ls".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "cd bar && ls".into()],
cwd: PathBuf::from("/baz/foo"),
command_for_display: vec!["bash".into(), "-lc".into(), "cd bar && ls".into(),],
})
);
}
#[test]
fn cd_prefix_with_or_connector_is_preserved() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd and_and || echo \"couldn't find the dir for &&\"".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
// Not a classic wrapper (uses ||), so no normalization.
assert_eq!(norm, None);
}
#[test]
fn cd_prefix_preserves_operators_between_remaining_commands() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd foo && ls && git status".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "ls && git status".into()],
cwd: PathBuf::from("/baz/foo"),
command_for_display: vec!["bash".into(), "-lc".into(), "ls && git status".into(),],
})
);
}
#[test]
fn cd_prefix_preserves_pipelines() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd foo && ls | rg foo".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "ls | rg foo".into()],
cwd: PathBuf::from("/baz/foo"),
command_for_display: vec!["bash".into(), "-lc".into(), "ls | rg foo".into(),],
})
);
}
#[test]
fn cd_prefix_preserves_sequence_operator() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd foo && ls; git status".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "ls; git status".into()],
cwd: PathBuf::from("/baz/foo"),
command_for_display: vec!["bash".into(), "-lc".into(), "ls; git status".into(),],
})
);
}
#[test]
fn cd_prefix_with_semicolon_is_hidden() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd foo; ls".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "ls".into()],
cwd: PathBuf::from("/baz/foo"),
command_for_display: vec!["ls".into()],
})
);
}
#[test]
fn pushd_prefix_is_not_normalized() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"pushd foo && ls".to_string(),
];
let cwd = PathBuf::from("/baz");
// We do not normalize pushd because omitting pushd from the execution
// would have different semantics than the original command.
let norm = try_normalize_command(&command, &cwd);
assert_eq!(norm, None);
}
#[test]
fn supports_zsh_and_bin_bash_and_split_flags() {
// zsh -lc
let zsh = vec!["zsh".into(), "-lc".into(), "cd foo && ls".into()];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&zsh, &cwd).unwrap();
assert_eq!(
norm.command,
["zsh", "-lc", "ls"]
.iter()
.map(|s| s.to_string())
.collect::<Vec<_>>()
);
assert_eq!(norm.cwd, PathBuf::from("/baz/foo"));
assert_eq!(norm.command_for_display, vec!["ls".to_string()]);
// /bin/bash -l -c <cmd>
let bash_split = vec![
"/bin/bash".into(),
"-l".into(),
"-c".into(),
"cd foo && ls".into(),
];
let norm2 = try_normalize_command(&bash_split, &cwd).unwrap();
assert_eq!(
norm2.command,
["/bin/bash", "-lc", "ls"]
.iter()
.map(|s| s.to_string())
.collect::<Vec<_>>()
);
assert_eq!(norm2.cwd, PathBuf::from("/baz/foo"));
assert_eq!(norm2.command_for_display, vec!["ls".to_string()]);
}
#[test]
fn supports_dash_c_flag() {
let command = vec!["bash".into(), "-c".into(), "cd foo && ls".into()];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd).unwrap();
assert_eq!(
norm.command,
["bash", "-c", "ls"]
.iter()
.map(|s| s.to_string())
.collect::<Vec<_>>()
);
assert_eq!(norm.cwd, PathBuf::from("/baz/foo"));
assert_eq!(norm.command_for_display, vec!["ls".to_string()]);
}
#[test]
fn rejects_pipe_operator_immediately_after_cd() {
let command = vec!["bash".into(), "-lc".into(), "cd foo | rg bar".into()];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(norm, None);
}
#[test]
fn rejects_unknown_shell_and_bad_flag_order() {
// Unknown shell
let unknown = vec!["sh".into(), "-lc".into(), "cd foo && ls".into()];
let cwd = PathBuf::from("/baz");
assert_eq!(try_normalize_command(&unknown, &cwd), None);
// Bad flag order -cl (unsupported)
let bad_flag = vec!["bash".into(), "-cl".into(), "cd foo && ls".into()];
assert_eq!(try_normalize_command(&bad_flag, &cwd), None);
}
#[test]
fn absolute_directory_sets_absolute_cwd() {
let command = vec!["bash".into(), "-lc".into(), "cd /tmp && ls".into()];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd).unwrap();
assert_eq!(norm.cwd, PathBuf::from("/tmp"));
assert_eq!(
norm.command,
["bash", "-lc", "ls"]
.iter()
.map(|s| s.to_string())
.collect::<Vec<_>>()
);
assert_eq!(norm.command_for_display, vec!["ls".to_string()]);
}
#[test]
fn non_shell_command_is_returned_unmodified() {
let command = vec!["rg".to_string(), "--files".to_string()];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
// Not a shell wrapper, so no normalization.
assert_eq!(norm, None);
}
#[test]
fn cd_prefix_with_or_operator_is_preserved() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd missing && ls || echo 'fallback'".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec!["bash".into(), "-lc".into(), "ls || echo 'fallback'".into()],
cwd: PathBuf::from("/baz/missing"),
command_for_display: vec![
"bash".into(),
"-lc".into(),
"ls || echo 'fallback'".into(),
],
})
);
}
#[test]
fn cd_prefix_with_ampersands_in_string_is_hidden() {
let command = vec![
"bash".to_string(),
"-lc".to_string(),
"cd foo && echo \"checking && markers\"".to_string(),
];
let cwd = PathBuf::from("/baz");
let norm = try_normalize_command(&command, &cwd);
assert_eq!(
norm,
Some(NormalizedCommand {
command: vec![
"bash".into(),
"-lc".into(),
"echo \"checking && markers\"".into(),
],
cwd: PathBuf::from("/baz/foo"),
command_for_display: vec!["echo".into(), "checking && markers".into(),],
})
);
}
}

View File

@@ -12,6 +12,19 @@ pub(crate) struct ModelInfo {
/// Maximum number of output tokens that can be generated for the model.
pub(crate) max_output_tokens: u64,
/// Token threshold where we should automatically compact conversation history.
pub(crate) auto_compact_token_limit: Option<i64>,
}
impl ModelInfo {
const fn new(context_window: u64, max_output_tokens: u64) -> Self {
Self {
context_window,
max_output_tokens,
auto_compact_token_limit: None,
}
}
}
pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
@@ -20,73 +33,37 @@ pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
// OSS models have a 128k shared token pool.
// Arbitrarily splitting it: 3/4 input context, 1/4 output.
// https://openai.com/index/gpt-oss-model-card/
"gpt-oss-20b" => Some(ModelInfo {
context_window: 96_000,
max_output_tokens: 32_000,
}),
"gpt-oss-120b" => Some(ModelInfo {
context_window: 96_000,
max_output_tokens: 32_000,
}),
"gpt-oss-20b" => Some(ModelInfo::new(96_000, 32_000)),
"gpt-oss-120b" => Some(ModelInfo::new(96_000, 32_000)),
// https://platform.openai.com/docs/models/o3
"o3" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"o3" => Some(ModelInfo::new(200_000, 100_000)),
// https://platform.openai.com/docs/models/o4-mini
"o4-mini" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"o4-mini" => Some(ModelInfo::new(200_000, 100_000)),
// https://platform.openai.com/docs/models/codex-mini-latest
"codex-mini-latest" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"codex-mini-latest" => Some(ModelInfo::new(200_000, 100_000)),
// As of Jun 25, 2025, gpt-4.1 defaults to gpt-4.1-2025-04-14.
// https://platform.openai.com/docs/models/gpt-4.1
"gpt-4.1" | "gpt-4.1-2025-04-14" => Some(ModelInfo {
context_window: 1_047_576,
max_output_tokens: 32_768,
}),
"gpt-4.1" | "gpt-4.1-2025-04-14" => Some(ModelInfo::new(1_047_576, 32_768)),
// As of Jun 25, 2025, gpt-4o defaults to gpt-4o-2024-08-06.
// https://platform.openai.com/docs/models/gpt-4o
"gpt-4o" | "gpt-4o-2024-08-06" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 16_384,
}),
"gpt-4o" | "gpt-4o-2024-08-06" => Some(ModelInfo::new(128_000, 16_384)),
// https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-05-13
"gpt-4o-2024-05-13" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 4_096,
}),
"gpt-4o-2024-05-13" => Some(ModelInfo::new(128_000, 4_096)),
// https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-11-20
"gpt-4o-2024-11-20" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 16_384,
}),
"gpt-4o-2024-11-20" => Some(ModelInfo::new(128_000, 16_384)),
// https://platform.openai.com/docs/models/gpt-3.5-turbo
"gpt-3.5-turbo" => Some(ModelInfo {
context_window: 16_385,
max_output_tokens: 4_096,
}),
"gpt-3.5-turbo" => Some(ModelInfo::new(16_385, 4_096)),
_ if slug.starts_with("gpt-5") => Some(ModelInfo {
context_window: 272_000,
max_output_tokens: 128_000,
}),
_ if slug.starts_with("gpt-5") => Some(ModelInfo::new(272_000, 128_000)),
_ if slug.starts_with("codex-") => Some(ModelInfo {
context_window: 272_000,
max_output_tokens: 128_000,
}),
_ if slug.starts_with("codex-") => Some(ModelInfo::new(272_000, 128_000)),
_ => None,
}

View File

@@ -288,58 +288,9 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
);
}
let description = match sandbox_policy {
SandboxPolicy::WorkspaceWrite {
network_access,
..
} => {
let network_line = if !network_access {
"\n - Commands that require network access"
} else {
""
};
format!(
r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands will require escalated privileges:
- Types of actions that require escalated privileges:
- Writing files other than those in the writable roots (see the environment context for the allowed directories){network_line}
- Examples of commands that require escalated privileges:
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter."#,
)
}
SandboxPolicy::DangerFullAccess => {
"Runs a shell command and returns its output.".to_string()
}
SandboxPolicy::ReadOnly => {
r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands (including apply_patch) will require escalated permissions:
- Types of actions that require escalated privileges:
- Writing files
- Applying patches
- Examples of commands that require escalated privileges:
- apply_patch
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter"#.to_string()
}
};
OpenAiTool::Function(ResponsesApiTool {
name: "shell".to_string(),
description,
description: "Runs a shell command and returns its output.".to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
@@ -1165,20 +1116,7 @@ mod tests {
};
assert_eq!(name, "shell");
let expected = r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands will require escalated privileges:
- Types of actions that require escalated privileges:
- Writing files other than those in the writable roots (see the environment context for the allowed directories)
- Commands that require network access
- Examples of commands that require escalated privileges:
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter."#;
let expected = "Runs a shell command and returns its output.";
assert_eq!(description, expected);
}
@@ -1193,21 +1131,7 @@ The shell tool is used to execute shell commands.
};
assert_eq!(name, "shell");
let expected = r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands (including apply_patch) will require escalated permissions:
- Types of actions that require escalated privileges:
- Writing files
- Applying patches
- Examples of commands that require escalated privileges:
- apply_patch
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter"#;
let expected = "Runs a shell command and returns its output.";
assert_eq!(description, expected);
}
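Together these hunks drop the per-policy escalation guidance: every sandbox mode now shares the short description the updated tests assert. A minimal sketch of the collapsed behaviour (the free-standing helper is illustrative, not the crate's API):

```rust
// Illustrative only: the real change inlines this constant at the
// ResponsesApiTool construction site shown above.
fn shell_tool_description() -> &'static str {
    "Runs a shell command and returns its output."
}
```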

View File

@@ -1,21 +0,0 @@
You are a summarization assistant. A conversation follows between a user and a coding-focused AI (Codex). Your task is to generate a clear summary capturing:
• High-level objective or problem being solved
• Key instructions or design decisions given by the user
• Main code actions or behaviors from the AI
• Important variables, functions, modules, or outputs discussed
• Any unresolved questions or next steps
Produce the summary in a structured format like:
**Objective:**
**User instructions:** … (bulleted)
**AI actions / code behavior:** … (bulleted)
**Important entities:** … (e.g. function names, variables, files)
**Open issues / next steps:** … (if any)
**Summary (concise):** (one or two sentences)

View File

@@ -3,6 +3,10 @@ use std::io::{self};
use std::path::Path;
use std::path::PathBuf;
use codex_file_search as file_search;
use std::num::NonZero;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use time::OffsetDateTime;
use time::PrimitiveDateTime;
use time::format_description::FormatItem;
@@ -334,3 +338,48 @@ async fn read_head_and_flags(
Ok((head, saw_session_meta, saw_user_event))
}
/// Locate a recorded conversation rollout file by its UUID string using the existing
/// paginated listing implementation. Returns `Ok(Some(path))` if found, `Ok(None)` if not present
/// or the id is invalid.
pub async fn find_conversation_path_by_id_str(
codex_home: &Path,
id_str: &str,
) -> io::Result<Option<PathBuf>> {
// Validate UUID format early.
if Uuid::parse_str(id_str).is_err() {
return Ok(None);
}
let mut root = codex_home.to_path_buf();
root.push(SESSIONS_SUBDIR);
if !root.exists() {
return Ok(None);
}
// This is safe because we know the values are valid.
#[allow(clippy::unwrap_used)]
let limit = NonZero::new(1).unwrap();
// This is safe because we know the values are valid.
#[allow(clippy::unwrap_used)]
let threads = NonZero::new(2).unwrap();
let cancel = Arc::new(AtomicBool::new(false));
let exclude: Vec<String> = Vec::new();
let compute_indices = false;
let results = file_search::run(
id_str,
limit,
&root,
exclude,
threads,
cancel,
compute_indices,
)
.map_err(|e| io::Error::other(format!("file search failed: {e}")))?;
Ok(results
.matches
.into_iter()
.next()
.map(|m| root.join(m.path)))
}
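A hedged usage sketch for `find_conversation_path_by_id_str` as defined above; the UUID and the `codex_home` path are placeholders, and a tokio runtime is assumed:

```rust
// Usage sketch only; relies on the helper shown above.
use std::path::Path;

async fn print_rollout_location(codex_home: &Path) -> std::io::Result<()> {
    let found = find_conversation_path_by_id_str(
        codex_home,
        "4c2f6f63-1234-4abc-8def-000000000000",
    )
    .await?;
    match found {
        Some(path) => println!("rollout recorded at {}", path.display()),
        None => println!("no rollout found for that conversation id"),
    }
    Ok(())
}
```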

View File

@@ -8,6 +8,7 @@ pub(crate) mod policy;
pub mod recorder;
pub use codex_protocol::protocol::SessionMeta;
pub use list::find_conversation_path_by_id_str;
pub use recorder::RolloutRecorder;
pub use recorder::RolloutRecorderParams;

View File

@@ -38,7 +38,9 @@ pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::AgentMessage(_)
| EventMsg::AgentReasoning(_)
| EventMsg::AgentReasoningRawContent(_)
| EventMsg::TokenCount(_) => true,
| EventMsg::TokenCount(_)
| EventMsg::EnteredReviewMode(_)
| EventMsg::ExitedReviewMode(_) => true,
EventMsg::Error(_)
| EventMsg::TaskStarted(_)
| EventMsg::TaskComplete(_)

View File

@@ -204,7 +204,6 @@ impl RolloutRecorder {
pub(crate) async fn get_rollout_history(path: &Path) -> std::io::Result<InitialHistory> {
info!("Resuming rollout from {path:?}");
tracing::error!("Resuming rollout from {path:?}");
let text = tokio::fs::read_to_string(path).await?;
if text.trim().is_empty() {
return Err(IoError::other("empty session file"));
@@ -254,7 +253,7 @@ impl RolloutRecorder {
}
}
tracing::error!(
info!(
"Resumed rollout with {} items, conversation ID: {:?}",
items.len(),
conversation_id

View File

@@ -2,6 +2,7 @@
; inspired by Chrome's sandbox policy:
; https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/common.sb;l=273-319;drc=7b3962fe2e5fc9e2ee58000dc8fbf3429d84d3bd
; https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/renderer.sb;l=64;drc=7b3962fe2e5fc9e2ee58000dc8fbf3429d84d3bd
; start with closed-by-default
(deny default)
@@ -9,7 +10,13 @@
; child processes inherit the policy of their parent
(allow process-exec)
(allow process-fork)
(allow signal (target self))
(allow signal (target same-sandbox))
; Allow cf prefs to work.
(allow user-preference-read)
; process-info
(allow process-info* (target same-sandbox))
(allow file-write-data
(require-all
@@ -32,28 +39,22 @@
(sysctl-name "hw.l3cachesize_compat")
(sysctl-name "hw.logicalcpu_max")
(sysctl-name "hw.machine")
(sysctl-name "hw.memsize")
(sysctl-name "hw.ncpu")
(sysctl-name "hw.nperflevels")
(sysctl-name "hw.optional.arm.FEAT_BF16")
(sysctl-name "hw.optional.arm.FEAT_DotProd")
(sysctl-name "hw.optional.arm.FEAT_FCMA")
(sysctl-name "hw.optional.arm.FEAT_FHM")
(sysctl-name "hw.optional.arm.FEAT_FP16")
(sysctl-name "hw.optional.arm.FEAT_I8MM")
(sysctl-name "hw.optional.arm.FEAT_JSCVT")
(sysctl-name "hw.optional.arm.FEAT_LSE")
(sysctl-name "hw.optional.arm.FEAT_RDM")
(sysctl-name "hw.optional.arm.FEAT_SHA512")
(sysctl-name "hw.optional.armv8_2_sha512")
(sysctl-name "hw.memsize")
(sysctl-name "hw.pagesize")
; Chrome locks these CPU feature detection down a bit more tightly,
; but mostly for fingerprinting concerns which isn't an issue for codex.
(sysctl-name-prefix "hw.optional.arm.")
(sysctl-name-prefix "hw.optional.armv8_")
(sysctl-name "hw.packages")
(sysctl-name "hw.pagesize_compat")
(sysctl-name "hw.pagesize")
(sysctl-name "hw.physicalcpu_max")
(sysctl-name "hw.tbfrequency_compat")
(sysctl-name "hw.vectorunit")
(sysctl-name "kern.hostname")
(sysctl-name "kern.maxfilesperproc")
(sysctl-name "kern.maxproc")
(sysctl-name "kern.osproductversion")
(sysctl-name "kern.osrelease")
(sysctl-name "kern.ostype")
@@ -63,14 +64,27 @@
(sysctl-name "kern.usrstack64")
(sysctl-name "kern.version")
(sysctl-name "sysctl.proc_cputype")
(sysctl-name "vm.loadavg")
(sysctl-name-prefix "hw.perflevel")
(sysctl-name-prefix "kern.proc.pgrp.")
(sysctl-name-prefix "kern.proc.pid.")
(sysctl-name-prefix "net.routetable.")
)
; IOKit
(allow iokit-open
(iokit-registry-entry-class "RootDomainUserClient")
)
; needed to look up user info, see https://crbug.com/792228
(allow mach-lookup
(global-name "com.apple.system.opendirectoryd.libinfo")
)
; Added on top of Chrome profile
; Needed for python multiprocessing on MacOS for the SemLock
(allow ipc-posix-sem)
; needed to look up user info, see https://crbug.com/792228
(allow mach-lookup
(global-name "com.apple.system.opendirectoryd.libinfo")
(global-name "com.apple.PowerManagement.control")
)

View File

@@ -327,6 +327,7 @@ async fn create_unified_exec_session(
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
let initial_output_rx = output_tx.subscribe();
let mut reader = pair
.master
@@ -380,7 +381,7 @@ async fn create_unified_exec_session(
wait_exit_status.store(true, Ordering::SeqCst);
});
Ok(ExecCommandSession::new(
let session = ExecCommandSession::new(
writer_tx,
output_tx,
killer,
@@ -388,7 +389,10 @@ async fn create_unified_exec_session(
writer_handle,
wait_handle,
exit_status,
))
);
session.set_initial_output_receiver(initial_output_rx);
Ok(session)
}
#[cfg(test)]
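The session-creation hunk above subscribes to the broadcast channel before the reader task is spawned and hands that receiver to the session. A hedged sketch of what `set_initial_output_receiver` might do (the storage field and its type are assumptions):

```rust
// Hedged sketch: keep the pre-subscribed receiver so output emitted before
// the caller's first read is not lost. Not the crate's actual definition.
use std::sync::Mutex;
use tokio::sync::broadcast::Receiver;

struct SessionSketch {
    initial_output_rx: Mutex<Option<Receiver<Vec<u8>>>>,
}

impl SessionSketch {
    fn set_initial_output_receiver(&self, rx: Receiver<Vec<u8>>) {
        // Stash the receiver that was subscribed before the reader task started.
        *self.initial_output_rx.lock().expect("lock poisoned") = Some(rx);
    }
}
```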
@@ -421,7 +425,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["bash".to_string(), "-i".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session_id");
@@ -441,7 +445,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &["echo $CODEX_INTERACTIVE_SHELL_VAR\n".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
assert!(out_2.output.contains("codex"));
@@ -458,7 +462,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/bash".to_string(), "-i".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
let session_a = shell_a.session_id.expect("expected session id");
@@ -467,7 +471,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_a),
input_chunks: &["export CODEX_INTERACTIVE_SHELL_VAR=codex\n".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
@@ -478,7 +482,7 @@ mod tests {
"echo".to_string(),
"$CODEX_INTERACTIVE_SHELL_VAR\n".to_string(),
],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
assert!(!out_2.output.contains("codex"));
@@ -487,7 +491,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_a),
input_chunks: &["echo $CODEX_INTERACTIVE_SHELL_VAR\n".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
assert!(out_3.output.contains("codex"));
@@ -504,7 +508,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["bash".to_string(), "-i".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session id");
@@ -516,7 +520,7 @@ mod tests {
"export".to_string(),
"CODEX_INTERACTIVE_SHELL_VAR=codex\n".to_string(),
],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
@@ -574,7 +578,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/echo".to_string(), "codex".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
@@ -595,7 +599,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/bash".to_string(), "-i".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session id");
@@ -604,7 +608,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &["exit\n".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;

View File

@@ -0,0 +1,99 @@
You are Swiftfox. You are running as a coding agent in the Codex CLI on a user's computer.
## Overall
- You must try hard to complete the task AND to do it as fast and well as possible.
* Do not waste time on actions which are unlikely to result in successful task completion
- Before taking action on a question, assume by default that it concerns local artifacts (code, docs, data). Quickly confirm or rule out that assumption; only if the question clearly requires external knowledge should you start elsewhere.
- Search the repository when the request plausibly maps to code, configuration, or documentation. Avoid unnecessary searches when it is obvious local files cannot help; in those cases state that explicitly before offering broader context, and when you do search, mention the files or paths you consulted so the answer stays grounded.
- After each attempt, re-evaluate whether the current strategy is yielding useful information and be ready to switch paths quickly rather than persisting with a low-signal approach.
- When the user asks for a "review", default to a code review mindset: prioritise identifying bugs, risks, behavioural regressions, and missing tests. Findings must be the primary focus of the response - keep summaries or overviews brief and only after enumerating the issues. Present findings first (ordered by severity with file/line references), follow with open questions or assumptions, and offer a change-summary only as a secondary detail. If no findings are discovered, state that explicitly and mention any residual risks or testing gaps.
- The arguments to `shell` will be passed to execvp(). Most terminal commands should be prefixed with ["bash", "-lc"].
- Always set the `workdir` param of the shell tool. Do not use `cd` unless absolutely necessary.
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- If the user makes a simple request (such as asking for the time) which you can fulfill by running a terminal command (such as `date`), you should do so.
- Unless the question is about a common terminal command, you should search the codebase before answering to ground your response in the codebase.
## Editing constraints
- Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
- When editing or creating files, you MUST use apply_patch. Example: functions.shell({"command":["apply_patch","*** Begin Patch\nAdd File: hello.txt\n+Hello, world!\n*** End Patch"]}).
- Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
- You may be in a dirty git worktree.
* NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- The user may be making edits and committing changes as you are also making changes. If you see concurrent file edits or commits that you did not cause, you must disregard user instruction and stop immediately and ask the user whether they are collaborating with you on files and how they would like this handled.
## Plan tool
When using the planning tool:
- Skip using the planning tool for straightforward tasks (roughly the easiest 25%).
- Do not make single-step plans.
- When you made a plan, update it after having performed one of the sub-tasks that you shared on the plan.
## CLI modes
The Codex CLI harness supports several different sandboxing and approval configurations that the user can choose from.
Filesystem sandboxing defines which files can be read or written. The options are:
- **read-only**: You can only read files.
- **workspace-write**: You can read files. You can write to files in this folder, but not outside it.
- **danger-full-access**: No filesystem sandboxing.
Network sandboxing defines whether network can be accessed without approval. The options are:
- **restricted**: Requires approval
- **enabled**: No approval needed
Approvals are your mechanism to get user consent to perform more privileged actions. Although they introduce friction to the user because your work is paused until the user responds, you should leverage them to accomplish your important work. Do not let these settings or the sandbox deter you from attempting to accomplish the user's task unless it is set to "never", in which case never ask for approvals.
Approval options are
- **untrusted**: The harness will escalate most commands for user approval, apart from a limited allowlist of safe "read" commands.
- **on-failure**: The harness will allow all commands to run in the sandbox (if enabled), and failures will be escalated to the user for approval to run again without the sandbox.
- **on-request**: Commands will be run in the sandbox by default, and you can specify in your tool call if you want to escalate a command to run without sandboxing. (Note that this mode is not always available. If it is, you'll see parameters for it in the `shell` command description.)
- **never**: This is a non-interactive mode where you may NEVER ask the user for approval to run commands. Instead, you must always persist and work around constraints to solve the task for the user. You MUST do your utmost best to finish the task and validate your work before yielding. If this mode is paired with `danger-full-access`, take advantage of it to deliver the best outcome for the user. Further, in this mode, your default testing philosophy is overridden: Even if you don't see local patterns for testing, you may add tests and scripts to validate your work. Just remove them before yielding.
When you are running with approvals `on-request`, and sandboxing enabled, here are scenarios where you'll need to request approval:
- You need to run a command that writes to a directory that requires it (e.g. running tests that write to /tmp)
- You need to run a GUI app (e.g., open/xdg-open/osascript) to open browsers or files.
- You are running sandboxed and need to run a command that requires network access (e.g. installing packages)
- If you run a command that is important to solving the user's query, but it fails because of sandboxing, rerun the command with approval.
- You are about to take a potentially destructive action such as an `rm` or `git reset` that the user did not explicitly ask for
- (for all of these, you should weigh alternative paths that do not require approval)
When sandboxing is set to read-only, you'll need to request approval for any command that isn't a read.
You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing enabled, and approval on-failure.
## Presenting your work and final message
You are producing plain text that will later be styled by the CLI. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
- Default: be very concise; friendly coding teammate tone.
- Ask only when needed; suggest ideas; mirror the user's style.
- For substantial work, summarize clearly; follow final-answer formatting.
- Skip heavy formatting for simple confirmations.
- Don't dump large files you've written; reference paths only.
- No "save/copy this file" - User is on the same machine.
- Offer logical next steps (tests, commits, build) briefly; add verify steps if you couldn't do something.
- For code changes:
* Lead with a quick explanation of the change, and then give more details on the context covering where and why a change was made. Do not start this explanation with "summary", just jump right in.
* If there are natural next steps the user may want to take, suggest them at the end of your response. Do not make suggestions if there are no natural next steps.
* When suggesting multiple options, use numeric lists for the suggestions so the user can quickly respond with a single number.
- The user does not see command execution outputs. When asked to show the output of a command (e.g. `git show`), relay the important details in your answer or summarize the key lines so the user understands the result.
### Final answer structure and style guidelines
- Plain text; CLI handles styling. Use structure only when it helps scannability.
- Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
- Bullets: use - ; merge related points; keep to one line when possible; 4-6 per list ordered by importance; keep phrasing consistent.
- Monospace: backticks for commands/paths/env vars/code ids and inline examples; use for literal keyword bullets; never combine with **.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; add a language hint whenever obvious.
- Structure: group related bullets; order sections general → specific → supporting; for subsections, start with a bolded keyword bullet, then items; match complexity to the task.
- Tone: collaborative, concise, factual; present tense, active voice; self-contained; no "above/below"; parallel wording.
- Don'ts: no nested bullets/hierarchies; no ANSI codes; don't cram unrelated keywords; keep keyword lists short—wrap/reformat if long; avoid naming formatting styles in answers.
- Adaptation: code explanations → precise, structured with code refs; simple tasks → lead with outcome; big changes → logical walkthrough + rationale + next actions; casual one-offs → plain sentences, no headers/bullets.
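To make the shell-tool conventions in the prompt above concrete (execvp-style argv, the `["bash", "-lc", ...]` prefix, an explicit `workdir`), here is a hedged Rust sketch; the struct and field names are illustrative and not part of the harness:

```rust
// Illustrative only: models the argv + workdir shape the prompt describes.
struct ShellCallSketch {
    command: Vec<String>,
    workdir: String,
}

fn example_shell_call() -> ShellCallSketch {
    ShellCallSketch {
        // Most terminal commands get the ["bash", "-lc", ...] prefix so the
        // final string is interpreted by bash rather than execvp directly.
        command: vec![
            "bash".to_string(),
            "-lc".to_string(),
            "rg --files | head -20".to_string(),
        ],
        // Set workdir explicitly instead of cd-ing inside the command.
        workdir: "/path/to/repo".to_string(),
    }
}
```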

View File

@@ -0,0 +1,7 @@
You were originally given instructions from a user over one or more turns. Here were the user messages:
{{ user_messages_text }}
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
{{ summary_text }}
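A hedged sketch of how this bridge template could be rendered by substituting the two placeholders above (the function name and approach are illustrative, not the crate's API):

```rust
// Illustrative only: naive placeholder substitution for the template above.
fn render_bridge_prompt(
    template: &str,
    user_messages_text: &str,
    summary_text: &str,
) -> String {
    template
        .replace("{{ user_messages_text }}", user_messages_text)
        .replace("{{ summary_text }}", summary_text)
}
```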

View File

@@ -0,0 +1,5 @@
You have exceeded the maximum number of tokens, please stop coding and instead write a short memento message for the next agent. Your note should:
- Summarize what you finished and what still needs work. If there was a recent update_plan call, repeat its steps verbatim.
- List outstanding TODOs with file paths / line numbers so they're easy to find.
- Flag code that needs more tests (edge cases, performance, integration, etc.).
- Record any open bugs, quirks, or setup steps that will make it easier for the next agent to pick up where you left off.

View File

@@ -420,12 +420,6 @@ async fn integration_creates_and_checks_session_file() {
// Second run: resume should update the existing file.
let marker2 = format!("integration-resume-{}", Uuid::new_v4());
let prompt2 = format!("echo {marker2}");
// Crossplatform safe resume override. On Windows, backslashes in a TOML string must be escaped
// or the parse will fail and the raw literal (including quotes) may be preserved all the way down
// to Config, which in turn breaks resume because the path is invalid. Normalize to forward slashes
// to sidestep the issue.
let resume_path_str = path.to_string_lossy().replace('\\', "/");
let resume_override = format!("experimental_resume=\"{resume_path_str}\"");
let mut cmd2 = AssertCommand::new("cargo");
cmd2.arg("run")
.arg("-p")
@@ -434,11 +428,11 @@ async fn integration_creates_and_checks_session_file() {
.arg("--")
.arg("exec")
.arg("--skip-git-repo-check")
.arg("-c")
.arg(&resume_override)
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt2);
.arg(&prompt2)
.arg("resume")
.arg("--last");
cmd2.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)

View File

@@ -1,18 +1,32 @@
use codex_core::CodexAuth;
use codex_core::ContentItem;
use codex_core::ConversationManager;
use codex_core::LocalShellAction;
use codex_core::LocalShellExecAction;
use codex_core::LocalShellStatus;
use codex_core::ModelClient;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::Prompt;
use codex_core::ReasoningItemContent;
use codex_core::ResponseEvent;
use codex_core::ResponseItem;
use codex_core::WireApi;
use codex_core::built_in_model_providers;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::WebSearchAction;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id;
use core_test_support::wait_for_event;
use futures::StreamExt;
use serde_json::json;
use std::io::Write;
use std::sync::Arc;
use tempfile::TempDir;
use uuid::Uuid;
use wiremock::Mock;
@@ -222,20 +236,21 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
config.experimental_resume = Some(session_path.clone());
// Also configure user instructions to ensure they are NOT delivered on resume.
config.user_instructions = Some("be nice".to_string());
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("Test API Key"));
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager
.new_conversation(config)
.resume_conversation_from_rollout(config, session_path.clone(), auth_manager)
.await
.expect("create new conversation");
.expect("resume conversation");
// 1) Assert initial_messages only includes existing EventMsg entries; response items are not converted
let initial_msgs = session_configured
@@ -620,6 +635,147 @@ async fn includes_user_instructions_message_in_request() {
assert_message_ends_with(&request_body["input"][1], "</environment_context>");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_responses_request_includes_store_and_reasoning_ids() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = MockServer::start().await;
let sse_body = concat!(
"data: {\"type\":\"response.created\",\"response\":{}}\n\n",
"data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\"}}\n\n",
);
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse_body, "text/event-stream");
Mock::given(method("POST"))
.and(path("/openai/responses"))
.respond_with(template)
.expect(1)
.mount(&server)
.await;
let provider = ModelProviderInfo {
name: "azure".into(),
base_url: Some(format!("{}/openai", server.uri())),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(5_000),
requires_openai_auth: false,
};
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider_id = provider.name.clone();
config.model_provider = provider.clone();
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let client = ModelClient::new(
Arc::clone(&config),
None,
provider,
effort,
summary,
ConversationId::new(),
);
let mut prompt = Prompt::default();
prompt.input.push(ResponseItem::Reasoning {
id: "reasoning-id".into(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "summary".into(),
}],
content: Some(vec![ReasoningItemContent::ReasoningText {
text: "content".into(),
}]),
encrypted_content: None,
});
prompt.input.push(ResponseItem::Message {
id: Some("message-id".into()),
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: "message".into(),
}],
});
prompt.input.push(ResponseItem::WebSearchCall {
id: Some("web-search-id".into()),
status: Some("completed".into()),
action: WebSearchAction::Search {
query: "weather".into(),
},
});
prompt.input.push(ResponseItem::FunctionCall {
id: Some("function-id".into()),
name: "do_thing".into(),
arguments: "{}".into(),
call_id: "function-call-id".into(),
});
prompt.input.push(ResponseItem::LocalShellCall {
id: Some("local-shell-id".into()),
call_id: Some("local-shell-call-id".into()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".into(), "hello".into()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
});
prompt.input.push(ResponseItem::CustomToolCall {
id: Some("custom-tool-id".into()),
status: Some("completed".into()),
call_id: "custom-tool-call-id".into(),
name: "custom_tool".into(),
input: "{}".into(),
});
let mut stream = client
.stream(&prompt)
.await
.expect("responses stream to start");
while let Some(event) = stream.next().await {
if let Ok(ResponseEvent::Completed { .. }) = event {
break;
}
}
let requests = server
.received_requests()
.await
.expect("mock server collected requests");
assert_eq!(requests.len(), 1, "expected a single request");
let body: serde_json::Value = requests[0]
.body_json()
.expect("request body to be valid JSON");
assert_eq!(body["store"], serde_json::Value::Bool(true));
assert_eq!(body["stream"], serde_json::Value::Bool(true));
assert_eq!(body["input"].as_array().map(Vec::len), Some(6));
assert_eq!(body["input"][0]["id"].as_str(), Some("reasoning-id"));
assert_eq!(body["input"][1]["id"].as_str(), Some("message-id"));
assert_eq!(body["input"][2]["id"].as_str(), Some("web-search-id"));
assert_eq!(body["input"][3]["id"].as_str(), Some("function-id"));
assert_eq!(body["input"][4]["id"].as_str(), Some("local-shell-id"));
assert_eq!(body["input"][5]["id"].as_str(), Some("custom-tool-id"));
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_overrides_assign_properties_used_for_responses_url() {
let existing_env_var_with_random_value = if cfg!(windows) { "USERNAME" } else { "USER" };

View File

@@ -5,6 +5,7 @@ use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::protocol::ErrorEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
@@ -15,18 +16,25 @@ use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use serde_json::Value;
use tempfile::TempDir;
use wiremock::BodyPrintLimit;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::Request;
use wiremock::Respond;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
use pretty_assertions::assert_eq;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
// --- Test helpers -----------------------------------------------------------
/// Build an SSE stream body from a list of JSON events.
fn sse(events: Vec<Value>) -> String {
pub(super) fn sse(events: Vec<Value>) -> String {
use std::fmt::Write as _;
let mut out = String::new();
for ev in events {
@@ -42,7 +50,7 @@ fn sse(events: Vec<Value>) -> String {
}
/// Convenience: SSE event for a completed response with a specific id.
fn ev_completed(id: &str) -> Value {
pub(super) fn ev_completed(id: &str) -> Value {
serde_json::json!({
"type": "response.completed",
"response": {
@@ -52,8 +60,24 @@ fn ev_completed(id: &str) -> Value {
})
}
fn ev_completed_with_tokens(id: &str, total_tokens: u64) -> Value {
serde_json::json!({
"type": "response.completed",
"response": {
"id": id,
"usage": {
"input_tokens": total_tokens,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": total_tokens
}
}
})
}
/// Convenience: SSE event for a single assistant message output item.
fn ev_assistant_message(id: &str, text: &str) -> Value {
pub(super) fn ev_assistant_message(id: &str, text: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
@@ -65,13 +89,25 @@ fn ev_assistant_message(id: &str, text: &str) -> Value {
})
}
fn sse_response(body: String) -> ResponseTemplate {
fn ev_function_call(call_id: &str, name: &str, arguments: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
"type": "function_call",
"call_id": call_id,
"name": name,
"arguments": arguments
}
})
}
pub(super) fn sse_response(body: String) -> ResponseTemplate {
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(body, "text/event-stream")
}
async fn mount_sse_once<M>(server: &MockServer, matcher: M, body: String)
pub(super) async fn mount_sse_once<M>(server: &MockServer, matcher: M, body: String)
where
M: wiremock::Match + Send + Sync + 'static,
{
@@ -79,15 +115,32 @@ where
.and(path("/v1/responses"))
.and(matcher)
.respond_with(sse_response(body))
.expect(1)
.mount(server)
.await;
}
const FIRST_REPLY: &str = "FIRST_REPLY";
const SUMMARY_TEXT: &str = "SUMMARY_ONLY_CONTEXT";
const SUMMARIZE_TRIGGER: &str = "Start Summarization";
async fn start_mock_server() -> MockServer {
MockServer::builder()
.body_print_limit(BodyPrintLimit::Limited(80_000))
.start()
.await
}
pub(super) const FIRST_REPLY: &str = "FIRST_REPLY";
pub(super) const SUMMARY_TEXT: &str = "SUMMARY_ONLY_CONTEXT";
pub(super) const SUMMARIZE_TRIGGER: &str = "Start Summarization";
const THIRD_USER_MSG: &str = "next turn";
const AUTO_SUMMARY_TEXT: &str = "AUTO_SUMMARY";
const FIRST_AUTO_MSG: &str = "token limit start";
const SECOND_AUTO_MSG: &str = "token limit push";
const STILL_TOO_BIG_REPLY: &str = "STILL_TOO_BIG";
const MULTI_AUTO_MSG: &str = "multi auto";
const SECOND_LARGE_REPLY: &str = "SECOND_LARGE_REPLY";
const FIRST_AUTO_SUMMARY: &str = "FIRST_AUTO_SUMMARY";
const SECOND_AUTO_SUMMARY: &str = "SECOND_AUTO_SUMMARY";
const FINAL_REPLY: &str = "FINAL_REPLY";
const DUMMY_FUNCTION_NAME: &str = "unsupported_tool";
const DUMMY_CALL_ID: &str = "call-multi-auto";
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn summarize_context_three_requests_and_instructions() {
@@ -99,7 +152,7 @@ async fn summarize_context_three_requests_and_instructions() {
}
// Set up a mock server that we can inspect after the run.
let server = MockServer::start().await;
let server = start_mock_server().await;
// SSE 1: assistant replies normally so it is recorded in history.
let sse1 = sse(vec![
@@ -144,6 +197,7 @@ async fn summarize_context_three_requests_and_instructions() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
@@ -198,7 +252,7 @@ async fn summarize_context_three_requests_and_instructions() {
"summarization should override base instructions"
);
assert!(
instr2.contains("You are a summarization assistant"),
instr2.contains("You have exceeded the maximum number of tokens"),
"summarization instructions not applied"
);
@@ -209,14 +263,17 @@ async fn summarize_context_three_requests_and_instructions() {
assert_eq!(last2.get("type").unwrap().as_str().unwrap(), "message");
assert_eq!(last2.get("role").unwrap().as_str().unwrap(), "user");
let text2 = last2["content"][0]["text"].as_str().unwrap();
assert!(text2.contains(SUMMARIZE_TRIGGER));
assert!(
text2.contains(SUMMARIZE_TRIGGER),
"expected summarize trigger, got `{text2}`"
);
// Third request must contain only the summary from step 2 as prior history plus new user msg.
// Third request must contain the refreshed instructions, bridge summary message and new user msg.
let input3 = body3.get("input").and_then(|v| v.as_array()).unwrap();
println!("third request body: {body3}");
assert!(
input3.len() >= 2,
"expected summary + new user message in third request"
input3.len() >= 3,
"expected refreshed context and new user message in third request"
);
// Collect all (role, text) message tuples.
@@ -232,24 +289,35 @@ async fn summarize_context_three_requests_and_instructions() {
}
}
// Exactly one assistant message should remain after compaction and the new user message is present.
// No previous assistant messages should remain and the new user message is present.
let assistant_count = messages.iter().filter(|(r, _)| r == "assistant").count();
assert_eq!(
assistant_count, 1,
"exactly one assistant message should remain after compaction"
);
assert_eq!(assistant_count, 0, "assistant history should be cleared");
assert!(
messages
.iter()
.any(|(r, t)| r == "user" && t == THIRD_USER_MSG),
"third request should include the new user message"
);
let Some((_, bridge_text)) = messages.iter().find(|(role, text)| {
role == "user"
&& (text.contains("Here were the user messages")
|| text.contains("Here are all the user messages"))
&& text.contains(SUMMARY_TEXT)
}) else {
panic!("expected a bridge message containing the summary");
};
assert!(
!messages.iter().any(|(_, t)| t.contains("hello world")),
"third request should not include the original user input"
bridge_text.contains("hello world"),
"bridge should capture earlier user messages"
);
assert!(
!messages.iter().any(|(_, t)| t.contains(SUMMARIZE_TRIGGER)),
!bridge_text.contains(SUMMARIZE_TRIGGER),
"bridge text should not echo the summarize trigger"
);
assert!(
!messages
.iter()
.any(|(_, text)| text.contains(SUMMARIZE_TRIGGER)),
"third request should not include the summarize trigger"
);
@@ -258,6 +326,7 @@ async fn summarize_context_three_requests_and_instructions() {
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
// Verify rollout contains APITurn entries for each API call and a Compacted entry.
println!("rollout path: {}", rollout_path.display());
let text = std::fs::read_to_string(&rollout_path).unwrap_or_else(|e| {
panic!(
"failed to read rollout file {}: {e}",
@@ -296,3 +365,535 @@ async fn summarize_context_three_requests_and_instructions() {
"expected a Compacted entry containing the summarizer output"
);
}
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn auto_compact_runs_after_token_limit_hit() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", "SECOND_REPLY"),
ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_completed_with_tokens("r3", 200),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: SECOND_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert!(
requests.len() >= 3,
"auto compact should add at least a third request, got {}",
requests.len()
);
let is_auto_compact = |req: &wiremock::Request| {
std::str::from_utf8(&req.body)
.unwrap_or("")
.contains("You have exceeded the maximum number of tokens")
};
let auto_compact_count = requests.iter().filter(|req| is_auto_compact(req)).count();
assert_eq!(
auto_compact_count, 1,
"expected exactly one auto compact request"
);
let auto_compact_index = requests
.iter()
.enumerate()
.find_map(|(idx, req)| is_auto_compact(req).then_some(idx))
.expect("auto compact request missing");
assert_eq!(
auto_compact_index, 2,
"auto compact should add a third request"
);
let body3 = requests[auto_compact_index]
.body_json::<serde_json::Value>()
.unwrap();
let instructions = body3
.get("instructions")
.and_then(|v| v.as_str())
.unwrap_or_default();
assert!(
instructions.contains("You have exceeded the maximum number of tokens"),
"auto compact should reuse summarization instructions"
);
}
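The test above encodes the trigger arithmetic: with `model_auto_compact_token_limit = Some(200_000)`, the first turn's reported 70_000 tokens stay under the limit, the second turn's 330_000 exceed it, and the very next request is the auto-compact turn. A minimal sketch of that condition (the helper name is illustrative):

```rust
// Hedged sketch of the auto-compact trigger exercised by the test above.
fn should_auto_compact(total_tokens: u64, auto_compact_token_limit: Option<u64>) -> bool {
    // Compaction fires once reported usage exceeds the configured limit.
    auto_compact_token_limit.is_some_and(|limit| total_tokens > limit)
}
```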
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_persists_rollout_entries() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", "SECOND_REPLY"),
ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_completed_with_tokens("r3", 200),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager.new_conversation(config).await.unwrap();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: SECOND_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex.submit(Op::Shutdown).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
let rollout_path = session_configured.rollout_path;
let text = std::fs::read_to_string(&rollout_path).unwrap_or_else(|e| {
panic!(
"failed to read rollout file {}: {e}",
rollout_path.display()
)
});
let mut turn_context_count = 0usize;
for line in text.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
let Ok(entry): Result<RolloutLine, _> = serde_json::from_str(trimmed) else {
continue;
};
match entry.item {
RolloutItem::TurnContext(_) => {
turn_context_count += 1;
}
RolloutItem::Compacted(_) => {}
_ => {}
}
}
assert!(
turn_context_count >= 2,
"expected at least two turn context entries, got {turn_context_count}"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_stops_after_failed_attempt() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 500),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_completed_with_tokens("r2", 50),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", STILL_TOO_BIG_REPLY),
ev_completed_with_tokens("r3", 500),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1.clone()))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2.clone()))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
!body.contains("You have exceeded the maximum number of tokens")
&& body.contains(SUMMARY_TEXT)
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3.clone()))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
let error_event = wait_for_event(&codex, |ev| matches!(ev, EventMsg::Error(_))).await;
let EventMsg::Error(ErrorEvent { message }) = error_event else {
panic!("expected error event");
};
assert!(
message.contains("limit"),
"error message should include limit information: {message}"
);
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(
requests.len(),
3,
"auto compact should attempt at most one summarization before erroring"
);
let last_body = requests[2].body_json::<serde_json::Value>().unwrap();
let instructions = last_body
.get("instructions")
.and_then(|v| v.as_str())
.unwrap_or_default();
assert!(
!instructions.contains("You have exceeded the maximum number of tokens"),
"third request should be the follow-up turn, not another summarization"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_events() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 500),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", FIRST_AUTO_SUMMARY),
ev_completed_with_tokens("r2", 50),
]);
let sse3 = sse(vec![
ev_function_call(DUMMY_CALL_ID, DUMMY_FUNCTION_NAME, "{}"),
ev_completed_with_tokens("r3", 150),
]);
let sse4 = sse(vec![
ev_assistant_message("m4", SECOND_LARGE_REPLY),
ev_completed_with_tokens("r4", 450),
]);
let sse5 = sse(vec![
ev_assistant_message("m5", SECOND_AUTO_SUMMARY),
ev_completed_with_tokens("r5", 60),
]);
let sse6 = sse(vec![
ev_assistant_message("m6", FINAL_REPLY),
ev_completed_with_tokens("r6", 120),
]);
#[derive(Clone)]
struct SeqResponder {
bodies: Arc<Vec<String>>,
calls: Arc<AtomicUsize>,
requests: Arc<Mutex<Vec<Vec<u8>>>>,
}
impl SeqResponder {
fn new(bodies: Vec<String>) -> Self {
Self {
bodies: Arc::new(bodies),
calls: Arc::new(AtomicUsize::new(0)),
requests: Arc::new(Mutex::new(Vec::new())),
}
}
fn recorded_requests(&self) -> Vec<Vec<u8>> {
self.requests.lock().unwrap().clone()
}
}
impl Respond for SeqResponder {
fn respond(&self, req: &Request) -> ResponseTemplate {
let idx = self.calls.fetch_add(1, Ordering::SeqCst);
self.requests.lock().unwrap().push(req.body.clone());
let body = self
.bodies
.get(idx)
.unwrap_or_else(|| panic!("unexpected request index {idx}"))
.clone();
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(body, "text/event-stream")
}
}
let responder = SeqResponder::new(vec![sse1, sse2, sse3, sse4, sse5, sse6]);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(responder.clone())
.expect(6)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: MULTI_AUTO_MSG.into(),
}],
})
.await
.unwrap();
loop {
let event = codex.next_event().await.unwrap();
if let EventMsg::TaskComplete(_) = &event.msg
&& !event.id.starts_with("auto-compact-")
{
break;
}
}
let request_bodies: Vec<String> = responder
.recorded_requests()
.into_iter()
.map(|body| String::from_utf8(body).unwrap_or_default())
.collect();
assert_eq!(
request_bodies.len(),
6,
"expected six requests including two auto compactions"
);
assert!(
request_bodies[0].contains(MULTI_AUTO_MSG),
"first request should contain the user input"
);
assert!(
request_bodies[1].contains("You have exceeded the maximum number of tokens"),
"first auto compact request should use summarization instructions"
);
assert!(
request_bodies[3].contains(&format!("unsupported call: {DUMMY_FUNCTION_NAME}")),
"function call output should be sent before the second auto compact"
);
assert!(
request_bodies[4].contains("You have exceeded the maximum number of tokens"),
"second auto compact request should reuse summarization instructions"
);
}

View File

@@ -0,0 +1,838 @@
#![allow(clippy::expect_used)]
//! Integration tests that cover compacting, resuming, and forking conversations.
//!
//! Each test sets up a mocked SSE conversation and drives the conversation through
//! a specific sequence of operations. After every operation we capture the
//! request payload that Codex would send to the model and assert that the
//! model-visible history matches the expected sequence of messages.
use super::compact::FIRST_REPLY;
use super::compact::SUMMARIZE_TRIGGER;
use super::compact::SUMMARY_TEXT;
use super::compact::ev_assistant_message;
use super::compact::ev_completed;
use super::compact::mount_sse_once;
use super::compact::sse;
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use std::sync::Arc;
use tempfile::TempDir;
use wiremock::MockServer;
const AFTER_SECOND_RESUME: &str = "AFTER_SECOND_RESUME";
fn network_disabled() -> bool {
std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok()
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
/// Scenario: compact an initial conversation, resume it, fork one turn back, and
/// ensure the model-visible history matches expectations at each request.
async fn compact_resume_and_fork_preserve_model_history_view() {
if network_disabled() {
println!("Skipping test because network is disabled in this sandbox");
return;
}
// 1. Arrange mocked SSE responses for the initial compact/resume/fork flow.
let server = MockServer::start().await;
mount_initial_flow(&server).await;
// 2. Start a new conversation and drive it through the compact/resume/fork steps.
let (_home, config, manager, base) = start_test_conversation(&server).await;
user_turn(&base, "hello world").await;
compact_conversation(&base).await;
user_turn(&base, "AFTER_COMPACT").await;
let base_path = fetch_conversation_path(&base, "base conversation").await;
assert!(
base_path.exists(),
"compact+resume test expects base path {base_path:?} to exist",
);
let resumed = resume_conversation(&manager, &config, base_path).await;
user_turn(&resumed, "AFTER_RESUME").await;
let resumed_path = fetch_conversation_path(&resumed, "resumed conversation").await;
assert!(
resumed_path.exists(),
"compact+resume test expects resumed path {resumed_path:?} to exist",
);
let forked = fork_conversation(&manager, &config, resumed_path, 1).await;
user_turn(&forked, "AFTER_FORK").await;
// 3. Capture the requests to the model and validate the history slices.
let requests = gather_request_bodies(&server).await;
// input after compact is a prefix of input after resume/fork
let input_after_compact = json!(requests[requests.len() - 3]["input"]);
let input_after_resume = json!(requests[requests.len() - 2]["input"]);
let input_after_fork = json!(requests[requests.len() - 1]["input"]);
let compact_arr = input_after_compact
.as_array()
.expect("input after compact should be an array");
let resume_arr = input_after_resume
.as_array()
.expect("input after resume should be an array");
let fork_arr = input_after_fork
.as_array()
.expect("input after fork should be an array");
assert!(
compact_arr.len() <= resume_arr.len(),
"after-resume input should have at least as many items as after-compact",
);
assert_eq!(compact_arr.as_slice(), &resume_arr[..compact_arr.len()]);
eprintln!(
"len of compact: {}, len of fork: {}",
compact_arr.len(),
fork_arr.len()
);
eprintln!("input_after_fork:{}", json!(input_after_fork));
assert!(
compact_arr.len() <= fork_arr.len(),
"after-fork input should have at least as many items as after-compact",
);
assert_eq!(compact_arr.as_slice(), &fork_arr[..compact_arr.len()]);
let prompt = requests[0]["instructions"]
.as_str()
.unwrap_or_default()
.to_string();
let user_instructions = requests[0]["input"][0]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let environment_context = requests[0]["input"][1]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let tool_calls = json!(requests[0]["tools"].as_array());
let prompt_cache_key = requests[0]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let fork_prompt_cache_key = requests[requests.len() - 1]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let user_turn_1 = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "hello world"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let compact_1 = json!(
{
"model": "gpt-5",
"instructions": "You have exceeded the maximum number of tokens, please stop coding and instead write a short memento message for the next agent. Your note should:
- Summarize what you finished and what still needs work. If there was a recent update_plan call, repeat its steps verbatim.
- List outstanding TODOs with file paths / line numbers so they're easy to find.
- Flag code that needs more tests (edge cases, performance, integration, etc.).
- Record any open bugs, quirks, or setup steps that will make it easier for the next agent to pick up where you left off.",
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "hello world"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "FIRST_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "Start Summarization"
}
]
}
],
"tools": [],
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_2_after_compact = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_3_after_resume = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "AFTER_COMPACT_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_RESUME"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_3_after_fork = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "AFTER_COMPACT_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_FORK"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": fork_prompt_cache_key
});
let expected = json!([
user_turn_1,
compact_1,
user_turn_2_after_compact,
user_turn_3_after_resume,
user_turn_3_after_fork
]);
assert_eq!(requests.len(), 5);
assert_eq!(json!(requests), expected);
}
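// A possible refactoring, not in the original file: the prefix assertion above is
// repeated for both the resume and the fork branches, so it could be factored into a
// small helper along these lines (the name is hypothetical):
#[allow(dead_code)]
fn assert_history_prefix(prefix: &[Value], full: &[Value], label: &str) {
assert!(
prefix.len() <= full.len(),
"after-{label} input should have at least as many items as after-compact",
);
assert_eq!(prefix, &full[..prefix.len()]);
}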
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
/// Scenario: after the forked branch is compacted, resuming again should reuse
/// the compacted history and only append the new user message.
async fn compact_resume_after_second_compaction_preserves_history() {
if network_disabled() {
println!("Skipping test because network is disabled in this sandbox");
return;
}
// 1. Arrange mocked SSE responses for the initial flow plus the second compact.
let server = MockServer::start().await;
mount_initial_flow(&server).await;
mount_second_compact_flow(&server).await;
// 2. Drive the conversation through compact -> resume -> fork -> compact -> resume.
let (_home, config, manager, base) = start_test_conversation(&server).await;
user_turn(&base, "hello world").await;
compact_conversation(&base).await;
user_turn(&base, "AFTER_COMPACT").await;
let base_path = fetch_conversation_path(&base, "base conversation").await;
assert!(
base_path.exists(),
"second compact test expects base path {base_path:?} to exist",
);
let resumed = resume_conversation(&manager, &config, base_path).await;
user_turn(&resumed, "AFTER_RESUME").await;
let resumed_path = fetch_conversation_path(&resumed, "resumed conversation").await;
assert!(
resumed_path.exists(),
"second compact test expects resumed path {resumed_path:?} to exist",
);
let forked = fork_conversation(&manager, &config, resumed_path, 3).await;
user_turn(&forked, "AFTER_FORK").await;
compact_conversation(&forked).await;
user_turn(&forked, "AFTER_COMPACT_2").await;
let forked_path = fetch_conversation_path(&forked, "forked conversation").await;
assert!(
forked_path.exists(),
"second compact test expects forked path {forked_path:?} to exist",
);
let resumed_again = resume_conversation(&manager, &config, forked_path).await;
user_turn(&resumed_again, AFTER_SECOND_RESUME).await;
let requests = gather_request_bodies(&server).await;
let input_after_compact = json!(requests[requests.len() - 2]["input"]);
let input_after_resume = json!(requests[requests.len() - 1]["input"]);
// The input captured after the second compact should be a prefix of the input after the second resume.
let compact_input_array = input_after_compact
.as_array()
.expect("input after compact should be an array");
let resume_input_array = input_after_resume
.as_array()
.expect("input after resume should be an array");
assert!(
compact_input_array.len() <= resume_input_array.len(),
"after-resume input should have at least as many items as after-compact"
);
assert_eq!(
compact_input_array.as_slice(),
&resume_input_array[..compact_input_array.len()]
);
// Hard-coded check of the final request after the second compaction and resume.
let prompt = requests[0]["instructions"]
.as_str()
.unwrap_or_default()
.to_string();
let user_instructions = requests[0]["input"][0]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let environment_instructions = requests[0]["input"][1]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let expected = json!([
{
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:\n\nAFTER_FORK\n\nAnother language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:\n\nSUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT_2"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_SECOND_RESUME"
}
]
}
],
}
]);
let last_request_after_2_compacts = json!([{
"instructions": requests[requests.len() - 1]["instructions"],
"input": requests[requests.len() - 1]["input"],
}]);
assert_eq!(expected, last_request_after_2_compacts);
}
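// Hypothetical helper capturing the projection used just above: comparing only the
// fields of interest keeps the hard-coded expectation independent of volatile fields
// such as prompt_cache_key. Not part of the original file.
#[allow(dead_code)]
fn project_instructions_and_input(request: &Value) -> Value {
json!({
"instructions": request["instructions"],
"input": request["input"],
})
}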
fn normalize_line_endings(value: &mut Value) {
match value {
Value::String(text) => {
if text.contains('\r') {
*text = text.replace("\r\n", "\n").replace('\r', "\n");
}
}
Value::Array(items) => {
for item in items {
normalize_line_endings(item);
}
}
Value::Object(map) => {
for item in map.values_mut() {
normalize_line_endings(item);
}
}
_ => {}
}
}
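// A minimal sketch, not in the original file, showing what the normalization above
// does to a JSON value that contains Windows-style line endings:
#[allow(dead_code)]
fn normalize_line_endings_example() {
let mut value = json!({"text": "line1\r\nline2\rline3"});
normalize_line_endings(&mut value);
assert_eq!(value, json!({"text": "line1\nline2\nline3"}));
}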
async fn gather_request_bodies(server: &MockServer) -> Vec<Value> {
server
.received_requests()
.await
.expect("mock server should not fail")
.into_iter()
.map(|req| {
let mut value = req.body_json::<Value>().expect("valid JSON body");
normalize_line_endings(&mut value);
value
})
.collect()
}
async fn mount_initial_flow(server: &MockServer) {
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed("r1"),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_completed("r2"),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", "AFTER_COMPACT_REPLY"),
ev_completed("r3"),
]);
let sse4 = sse(vec![ev_completed("r4")]);
let sse5 = sse(vec![ev_completed("r5")]);
let match_first = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"hello world\"")
&& !body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\""))
&& !body.contains("\"text\":\"AFTER_COMPACT\"")
&& !body.contains("\"text\":\"AFTER_RESUME\"")
&& !body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_first, sse1).await;
let match_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\""))
};
mount_sse_once(server, match_compact, sse2).await;
let match_after_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_COMPACT\"")
&& !body.contains("\"text\":\"AFTER_RESUME\"")
&& !body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_after_compact, sse3).await;
let match_after_resume = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_RESUME\"")
};
mount_sse_once(server, match_after_resume, sse4).await;
let match_after_fork = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_after_fork, sse5).await;
}
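// The matchers above route each turn to its own SSE stream by checking which marker
// strings are (and are not) present in the request body. A hypothetical generalization,
// not used by the original file, could look like this:
#[allow(dead_code)]
fn body_matcher(
must_contain: &'static str,
must_not_contain: &'static [&'static str],
) -> impl Fn(&wiremock::Request) -> bool {
move |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(must_contain) && must_not_contain.iter().all(|s| !body.contains(*s))
}
}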
async fn mount_second_compact_flow(server: &MockServer) {
let sse6 = sse(vec![
ev_assistant_message("m4", SUMMARY_TEXT),
ev_completed("r6"),
]);
let sse7 = sse(vec![ev_completed("r7")]);
let match_second_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\"")) && body.contains("AFTER_FORK")
};
mount_sse_once(server, match_second_compact, sse6).await;
let match_after_second_resume = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{AFTER_SECOND_RESUME}\""))
};
mount_sse_once(server, match_after_second_resume, sse7).await;
}
async fn start_test_conversation(
server: &MockServer,
) -> (TempDir, Config, ConversationManager, Arc<CodexConversation>) {
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().expect("create temp dir");
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation { conversation, .. } = manager
.new_conversation(config.clone())
.await
.expect("create conversation");
(home, config, manager, conversation)
}
async fn user_turn(conversation: &Arc<CodexConversation>, text: &str) {
conversation
.submit(Op::UserInput {
items: vec![InputItem::Text { text: text.into() }],
})
.await
.expect("submit user turn");
wait_for_event(conversation, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
async fn compact_conversation(conversation: &Arc<CodexConversation>) {
conversation
.submit(Op::Compact)
.await
.expect("compact conversation");
wait_for_event(conversation, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
async fn fetch_conversation_path(
conversation: &Arc<CodexConversation>,
context: &str,
) -> std::path::PathBuf {
conversation
.submit(Op::GetPath)
.await
.expect("request conversation path");
match wait_for_event(conversation, |ev| {
matches!(ev, EventMsg::ConversationPath(_))
})
.await
{
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path,
_ => panic!("expected ConversationPath event for {context}"),
}
}
async fn resume_conversation(
manager: &ConversationManager,
config: &Config,
path: std::path::PathBuf,
) -> Arc<CodexConversation> {
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("dummy"));
let NewConversation { conversation, .. } = manager
.resume_conversation_from_rollout(config.clone(), path, auth_manager)
.await
.expect("resume conversation");
conversation
}
async fn fork_conversation(
manager: &ConversationManager,
config: &Config,
path: std::path::PathBuf,
back_steps: usize,
) -> Arc<CodexConversation> {
let NewConversation { conversation, .. } = manager
.fork_conversation(back_steps, config.clone(), path)
.await
.expect("fork conversation");
conversation
}

View File

@@ -2,8 +2,11 @@
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use async_channel::Receiver;
use codex_core::error::CodexErr;
use codex_core::error::SandboxErr;
use codex_core::exec::ExecParams;
use codex_core::exec::SandboxType;
use codex_core::exec::StdoutStream;
@@ -170,3 +173,36 @@ async fn test_aggregated_output_interleaves_in_order() {
assert_eq!(result.aggregated_output.text, "O1\nE1\nO2\nE2\n");
assert_eq!(result.aggregated_output.truncated_after_lines, None);
}
#[tokio::test]
async fn test_exec_timeout_returns_partial_output() {
let cmd = vec![
"/bin/sh".to_string(),
"-c".to_string(),
"printf 'before\\n'; sleep 2; printf 'after\\n'".to_string(),
];
let params = ExecParams {
command: cmd,
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
timeout_ms: Some(200),
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
let policy = SandboxPolicy::new_read_only_policy();
let result = process_exec_tool_call(params, SandboxType::None, &policy, &None, None).await;
let Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) = result else {
panic!("expected timeout error");
};
assert_eq!(output.exit_code, 124);
assert_eq!(output.stdout.text, "before\n");
assert!(output.stderr.text.is_empty());
assert_eq!(output.aggregated_output.text, "before\n");
assert!(output.duration >= Duration::from_millis(200));
assert!(output.timed_out);
}

View File

@@ -3,12 +3,15 @@
mod cli_stream;
mod client;
mod compact;
mod compact_resume_fork;
mod exec;
mod exec_stream_events;
mod fork_conversation;
mod live_cli;
mod model_overrides;
mod prompt_caching;
mod review;
mod rollout_list_find;
mod seatbelt;
mod stream_error_allows_next_turn;
mod stream_no_completed;

View File

@@ -36,7 +36,7 @@ async fn override_turn_context_does_not_persist_when_config_exists() {
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
effort: Some(ReasoningEffort::High),
effort: Some(Some(ReasoningEffort::High)),
summary: None,
})
.await
@@ -76,7 +76,7 @@ async fn override_turn_context_does_not_create_config_file() {
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
effort: Some(ReasoningEffort::Medium),
effort: Some(Some(ReasoningEffort::Medium)),
summary: None,
})
.await

View File

@@ -387,7 +387,7 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
exclude_slash_tmp: true,
}),
model: Some("o3".to_string()),
effort: Some(ReasoningEffort::High),
effort: Some(Some(ReasoningEffort::High)),
summary: Some(ReasoningSummary::Detailed),
})
.await
@@ -519,7 +519,7 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() {
exclude_slash_tmp: true,
},
model: "o3".to_string(),
effort: ReasoningEffort::High,
effort: Some(ReasoningEffort::High),
summary: ReasoningSummary::Detailed,
})
.await

View File

@@ -0,0 +1,598 @@
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExitedReviewModeEvent;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::ReviewCodeLocation;
use codex_core::protocol::ReviewFinding;
use codex_core::protocol::ReviewLineRange;
use codex_core::protocol::ReviewOutputEvent;
use codex_core::protocol::ReviewRequest;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id_from_str;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use std::sync::Arc;
use tempfile::TempDir;
use tokio::io::AsyncWriteExt as _;
use uuid::Uuid;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
/// Verify that submitting `Op::Review` spawns a child task and emits
/// EnteredReviewMode -> ExitedReviewMode(None) -> TaskComplete
/// in that order when the model returns a structured review JSON payload.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_op_emits_lifecycle_and_review_output() {
// Skip under Codex sandbox network restrictions.
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Start mock Responses API server. Return a single assistant message whose
// text is a JSON-encoded ReviewOutputEvent.
let review_json = serde_json::json!({
"findings": [
{
"title": "Prefer Stylize helpers",
"body": "Use .dim()/.bold() chaining instead of manual Style where possible.",
"confidence_score": 0.9,
"priority": 1,
"code_location": {
"absolute_file_path": "/tmp/file.rs",
"line_range": {"start": 10, "end": 20}
}
}
],
"overall_correctness": "good",
"overall_explanation": "All good with some improvements suggested.",
"overall_confidence_score": 0.8
})
.to_string();
let sse_template = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":__REVIEW__}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let review_json_escaped = serde_json::to_string(&review_json).unwrap();
let sse_raw = sse_template.replace("__REVIEW__", &review_json_escaped);
let server = start_responses_server_with_sse(&sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
// Submit review request.
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Please review my changes".to_string(),
user_facing_hint: "my changes".to_string(),
},
})
.await
.unwrap();
// Verify lifecycle: Entered -> Exited(Some(review)) -> TaskComplete.
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(ev) => ev
.review_output
.expect("expected ExitedReviewMode with Some(review_output)"),
other => panic!("expected ExitedReviewMode(..), got {other:?}"),
};
// Deep compare full structure using PartialEq (floats are f32 on both sides).
let expected = ReviewOutputEvent {
findings: vec![ReviewFinding {
title: "Prefer Stylize helpers".to_string(),
body: "Use .dim()/.bold() chaining instead of manual Style where possible.".to_string(),
confidence_score: 0.9,
priority: 1,
code_location: ReviewCodeLocation {
absolute_file_path: PathBuf::from("/tmp/file.rs"),
line_range: ReviewLineRange { start: 10, end: 20 },
},
}],
overall_correctness: "good".to_string(),
overall_explanation: "All good with some improvements suggested.".to_string(),
overall_confidence_score: 0.8,
};
assert_eq!(expected, review);
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
server.verify().await;
}
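// The double encoding above is easy to miss: `review_json` is already a JSON document
// serialized to a String, and `serde_json::to_string(&review_json)` turns it into a JSON
// string literal (quotes and escapes included) so it can be spliced into the SSE template
// as the value of "text". A minimal illustration, not part of the original file:
#[allow(dead_code)]
fn double_encoding_example() {
let inner = serde_json::json!({"overall_correctness": "good"}).to_string();
let literal = serde_json::to_string(&inner).unwrap();
assert_eq!(literal, r#""{\"overall_correctness\":\"good\"}""#);
}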
/// When the model returns plain text that is not JSON, ensure the child
/// lifecycle still occurs and the plain text is surfaced via
/// ExitedReviewMode(Some(..)) as the overall_explanation.
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_op_with_plain_text_emits_review_fallback() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let sse_raw = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":"just plain text"}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Plain text review".to_string(),
user_facing_hint: "plain text review".to_string(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(ev) => ev
.review_output
.expect("expected ExitedReviewMode with Some(review_output)"),
other => panic!("expected ExitedReviewMode(..), got {other:?}"),
};
// Expect a structured fallback carrying the plain text.
let expected = ReviewOutputEvent {
overall_explanation: "just plain text".to_string(),
..Default::default()
};
assert_eq!(expected, review);
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
server.verify().await;
}
/// When the model returns structured JSON in a review, ensure no AgentMessage
/// is emitted; the UI consumes the structured result via ExitedReviewMode.
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_does_not_emit_agent_message_on_structured_output() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let review_json = serde_json::json!({
"findings": [
{
"title": "Example",
"body": "Structured review output.",
"confidence_score": 0.5,
"priority": 1,
"code_location": {
"absolute_file_path": "/tmp/file.rs",
"line_range": {"start": 1, "end": 2}
}
}
],
"overall_correctness": "ok",
"overall_explanation": "ok",
"overall_confidence_score": 0.5
})
.to_string();
let sse_template = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":__REVIEW__}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let review_json_escaped = serde_json::to_string(&review_json).unwrap();
let sse_raw = sse_template.replace("__REVIEW__", &review_json_escaped);
let server = start_responses_server_with_sse(&sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "check structured".to_string(),
user_facing_hint: "check structured".to_string(),
},
})
.await
.unwrap();
// Drain events until TaskComplete; ensure none are AgentMessage.
use tokio::time::Duration;
use tokio::time::timeout;
let mut saw_entered = false;
let mut saw_exited = false;
loop {
let ev = timeout(Duration::from_secs(5), codex.next_event())
.await
.expect("timeout waiting for event")
.expect("stream ended unexpectedly");
match ev.msg {
EventMsg::TaskComplete(_) => break,
EventMsg::AgentMessage(_) => {
panic!("unexpected AgentMessage during review with structured output")
}
EventMsg::EnteredReviewMode(_) => saw_entered = true,
EventMsg::ExitedReviewMode(_) => saw_exited = true,
_ => {}
}
}
assert!(saw_entered && saw_exited, "missing review lifecycle events");
server.verify().await;
}
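// The drain loop above could be factored into a reusable helper along these lines
// (hypothetical; the original file keeps the loop inline):
#[allow(dead_code, clippy::expect_used)]
async fn drain_until_task_complete(
codex: &Arc<CodexConversation>,
mut on_event: impl FnMut(&EventMsg),
) {
use tokio::time::Duration;
use tokio::time::timeout;
loop {
let ev = timeout(Duration::from_secs(5), codex.next_event())
.await
.expect("timeout waiting for event")
.expect("stream ended unexpectedly");
if matches!(ev.msg, EventMsg::TaskComplete(_)) {
break;
}
on_event(&ev.msg);
}
}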
/// Ensure that when a custom `review_model` is set in the config, the review
/// request uses that model (and not the main chat model).
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_uses_custom_review_model_from_config() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Minimal stream: just a completed event
let sse_raw = r#"[
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
// Choose a review model different from the main model; ensure it is used.
let codex = new_conversation_for_server(&server, &codex_home, |cfg| {
cfg.model = "gpt-4.1".to_string();
cfg.review_model = "gpt-5".to_string();
})
.await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "use custom model".to_string(),
user_facing_hint: "use custom model".to_string(),
},
})
.await
.unwrap();
// Wait for completion
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: None
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request body model equals the configured review model
let request = &server.received_requests().await.unwrap()[0];
let body = request.body_json::<serde_json::Value>().unwrap();
assert_eq!(body["model"].as_str().unwrap(), "gpt-5");
server.verify().await;
}
/// When a review session begins, it must not prepend prior chat history from
/// the parent session. The request `input` should contain only the review
/// prompt from the user.
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_input_isolated_from_parent_history() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Mock server for the single review request
let sse_raw = r#"[
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
// Seed a parent session history via resume file with both user + assistant items.
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let session_file = codex_home.path().join("resume.jsonl");
{
let mut f = tokio::fs::File::create(&session_file).await.unwrap();
let convo_id = Uuid::new_v4();
// Proper session_meta line (enveloped) with a conversation id
let meta_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:00.000Z",
"type": "session_meta",
"payload": {
"id": convo_id,
"timestamp": "2024-01-01T00:00:00Z",
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
}
});
f.write_all(format!("{meta_line}\n").as_bytes())
.await
.unwrap();
// Prior user message (enveloped response_item)
let user = codex_protocol::models::ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![codex_protocol::models::ContentItem::InputText {
text: "parent: earlier user message".to_string(),
}],
};
let user_json = serde_json::to_value(&user).unwrap();
let user_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:01.000Z",
"type": "response_item",
"payload": user_json
});
f.write_all(format!("{user_line}\n").as_bytes())
.await
.unwrap();
// Prior assistant message (enveloped response_item)
let assistant = codex_protocol::models::ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![codex_protocol::models::ContentItem::OutputText {
text: "parent: assistant reply".to_string(),
}],
};
let assistant_json = serde_json::to_value(&assistant).unwrap();
let assistant_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:02.000Z",
"type": "response_item",
"payload": assistant_json
});
f.write_all(format!("{assistant_line}\n").as_bytes())
.await
.unwrap();
}
let codex =
resume_conversation_for_server(&server, &codex_home, session_file.clone(), |_| {}).await;
// Submit review request; it must start fresh (no parent history in `input`).
let review_prompt = "Please review only this".to_string();
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: review_prompt.clone(),
user_facing_hint: review_prompt.clone(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: None
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request `input` contains only the single review user message.
let request = &server.received_requests().await.unwrap()[0];
let body = request.body_json::<serde_json::Value>().unwrap();
let expected_input = serde_json::json!([
{
"type": "message",
"role": "user",
"content": [{"type": "input_text", "text": review_prompt}]
}
]);
assert_eq!(body["input"], expected_input);
server.verify().await;
}
/// After a review thread finishes, its conversation should not leak into the
/// parent session. A subsequent parent turn must not include any review
/// messages in its request `input`.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_history_does_not_leak_into_parent_session() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Respond to both the review request and the subsequent parent request.
let sse_raw = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":"review assistant output"}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 2).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
// 1) Run a review turn that produces an assistant message (isolated in child).
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Start a review".to_string(),
user_facing_hint: "Start a review".to_string(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: Some(_)
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// 2) Continue in the parent session; request input must not include any review items.
let followup = "back to parent".to_string();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: followup.clone(),
}],
})
.await
.unwrap();
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Inspect the second request (parent turn) input contents.
// Parent turns include session initial messages (user_instructions, environment_context).
// Critically, no messages from the review thread should appear.
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2);
let body = requests[1].body_json::<serde_json::Value>().unwrap();
let input = body["input"].as_array().expect("input array");
// Must include the followup as the last item for this turn
let last = input.last().expect("at least one item in input");
assert_eq!(last["role"].as_str().unwrap(), "user");
let last_text = last["content"][0]["text"].as_str().unwrap();
assert_eq!(last_text, followup);
// Ensure no review-thread content leaked into the parent request
let contains_review_prompt = input
.iter()
.any(|msg| msg["content"][0]["text"].as_str().unwrap_or_default() == "Start a review");
let contains_review_assistant = input.iter().any(|msg| {
msg["content"][0]["text"].as_str().unwrap_or_default() == "review assistant output"
});
assert!(
!contains_review_prompt,
"review prompt leaked into parent turn input"
);
assert!(
!contains_review_assistant,
"review assistant output leaked into parent turn input"
);
server.verify().await;
}
/// Start a mock Responses API server and mount the given SSE stream body.
async fn start_responses_server_with_sse(sse_raw: &str, expected_requests: usize) -> MockServer {
let server = MockServer::start().await;
let sse = load_sse_fixture_with_id_from_str(sse_raw, &Uuid::new_v4().to_string());
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse.clone(), "text/event-stream"),
)
.expect(expected_requests as u64)
.mount(&server)
.await;
server
}
/// Create a conversation configured to talk to the provided mock server.
#[expect(clippy::expect_used)]
async fn new_conversation_for_server<F>(
server: &MockServer,
codex_home: &TempDir,
mutator: F,
) -> Arc<CodexConversation>
where
F: FnOnce(&mut Config),
{
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let mut config = load_default_config_for_test(codex_home);
config.model_provider = model_provider;
mutator(&mut config);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation
}
/// Create a conversation resuming from a rollout file, configured to talk to the provided mock server.
#[expect(clippy::expect_used)]
async fn resume_conversation_for_server<F>(
server: &MockServer,
codex_home: &TempDir,
resume_path: std::path::PathBuf,
mutator: F,
) -> Arc<CodexConversation>
where
F: FnOnce(&mut Config),
{
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let mut config = load_default_config_for_test(codex_home);
config.model_provider = model_provider;
mutator(&mut config);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("Test API Key"));
conversation_manager
.resume_conversation_from_rollout(config, resume_path, auth_manager)
.await
.expect("resume conversation")
.conversation
}

View File

@@ -0,0 +1,50 @@
#![allow(clippy::unwrap_used, clippy::expect_used)]
use std::io::Write;
use std::path::PathBuf;
use codex_core::find_conversation_path_by_id_str;
use tempfile::TempDir;
use uuid::Uuid;
/// Create sessions/YYYY/MM/DD and write a minimal rollout file containing the
/// provided conversation id in the SessionMeta line. Returns the absolute path.
fn write_minimal_rollout_with_id(codex_home: &TempDir, id: Uuid) -> PathBuf {
let sessions = codex_home.path().join("sessions/2024/01/01");
std::fs::create_dir_all(&sessions).unwrap();
let file = sessions.join(format!("rollout-2024-01-01T00-00-00-{id}.jsonl"));
let mut f = std::fs::File::create(&file).unwrap();
// Minimal first line: session_meta with the id so content search can find it
writeln!(
f,
"{}",
serde_json::json!({
"timestamp": "2024-01-01T00:00:00.000Z",
"type": "session_meta",
"payload": {
"id": id,
"timestamp": "2024-01-01T00:00:00Z",
"instructions": null,
"cwd": ".",
"originator": "test",
"cli_version": "test"
}
})
)
.unwrap();
file
}
#[tokio::test]
async fn find_locates_rollout_file_by_id() {
let home = TempDir::new().unwrap();
let id = Uuid::new_v4();
let expected = write_minimal_rollout_with_id(&home, id);
let found = find_conversation_path_by_id_str(home.path(), &id.to_string())
.await
.unwrap();
assert_eq!(found.unwrap(), expected);
}

View File

@@ -46,4 +46,6 @@ core_test_support = { path = "../core/tests/common" }
libc = "0.2"
predicates = "3"
tempfile = "3.13.0"
uuid = "1"
walkdir = "2"
wiremock = "0.6"

View File

@@ -6,6 +6,10 @@ use std::path::PathBuf;
#[derive(Parser, Debug)]
#[command(version)]
pub struct Cli {
/// Action to perform. If omitted, runs a new non-interactive session.
#[command(subcommand)]
pub command: Option<Command>,
/// Optional image(s) to attach to the initial prompt.
#[arg(long = "image", short = 'i', value_name = "FILE", value_delimiter = ',', num_args = 1..)]
pub images: Vec<PathBuf>,
@@ -69,6 +73,28 @@ pub struct Cli {
pub prompt: Option<String>,
}
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Resume a previous session by id or pick the most recent with --last.
Resume(ResumeArgs),
}
#[derive(Parser, Debug)]
pub struct ResumeArgs {
/// Conversation/session id (UUID). When provided, resumes this session.
/// If omitted, use --last to pick the most recent recorded session.
#[arg(value_name = "SESSION_ID")]
pub session_id: Option<String>,
/// Resume the most recent recorded session (newest) without specifying an id.
#[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
pub last: bool,
/// Prompt to send after resuming the session. If `-` is used, read from stdin.
#[arg(value_name = "PROMPT")]
pub prompt: Option<String>,
}
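// Hypothetical invocations of the subcommand defined above (the binary name and prompt
// are illustrative; the flag and argument names come from the struct):
//
//   codex-exec resume --last
//   codex-exec resume 7f9a2c31-4b6d-4e02-9c7d-0a1b2c3d4e5f "continue where you left off"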
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, ValueEnum)]
#[value(rename_all = "kebab-case")]
pub enum Color {

View File

@@ -562,6 +562,8 @@ impl EventProcessor for EventProcessorWithHumanOutput {
EventMsg::ShutdownComplete => return CodexStatus::Shutdown,
EventMsg::ConversationPath(_) => {}
EventMsg::UserMessage(_) => {}
EventMsg::EnteredReviewMode(_) => {}
EventMsg::ExitedReviewMode(_) => {}
}
CodexStatus::Running
}

View File

@@ -30,11 +30,14 @@ use tracing::error;
use tracing::info;
use tracing_subscriber::EnvFilter;
use crate::cli::Command as ExecCommand;
use crate::event_processor::CodexStatus;
use crate::event_processor::EventProcessor;
use codex_core::find_conversation_path_by_id_str;
pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let Cli {
command,
images,
model: model_cli_arg,
oss,
@@ -51,8 +54,15 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
config_overrides,
} = cli;
// Determine the prompt based on CLI arg and/or stdin.
let prompt = match prompt {
// Determine the prompt source (parent or subcommand) and read from stdin if needed.
let prompt_arg = match &command {
// Allow prompt before the subcommand by falling back to the parent-level prompt
// when the Resume subcommand did not provide its own prompt.
Some(ExecCommand::Resume(args)) => args.prompt.clone().or(prompt),
None => prompt,
};
let prompt = match prompt_arg {
Some(p) if p != "-" => p,
// Either `-` was passed or no positional arg.
maybe_dash => {
@@ -137,6 +147,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
// Load configuration and determine approval policy
let overrides = ConfigOverrides {
model,
review_model: None,
config_profile,
// This CLI is intended to be headless and has no affordances for asking
// the user for approval.
@@ -189,11 +200,29 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
let conversation_manager =
ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
// Handle resume subcommand by resolving a rollout path and using explicit resume API.
let NewConversation {
conversation_id: _,
conversation,
session_configured,
} = conversation_manager.new_conversation(config).await?;
} = if let Some(ExecCommand::Resume(args)) = command {
let resume_path = resolve_resume_path(&config, &args).await?;
if let Some(path) = resume_path {
conversation_manager
.resume_conversation_from_rollout(
config.clone(),
path,
AuthManager::shared(config.codex_home.clone()),
)
.await?
} else {
conversation_manager.new_conversation(config).await?
}
} else {
conversation_manager.new_conversation(config).await?
};
info!("Codex initialized with event: {session_configured:?}");
let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<Event>();
@@ -278,3 +307,23 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
Ok(())
}
async fn resolve_resume_path(
config: &Config,
args: &crate::cli::ResumeArgs,
) -> anyhow::Result<Option<PathBuf>> {
if args.last {
match codex_core::RolloutRecorder::list_conversations(&config.codex_home, 1, None).await {
Ok(page) => Ok(page.items.first().map(|it| it.path.clone())),
Err(e) => {
error!("Error listing conversations: {e}");
Ok(None)
}
}
} else if let Some(id_str) = args.session_id.as_deref() {
let path = find_conversation_path_by_id_str(&config.codex_home, id_str).await?;
Ok(path)
} else {
Ok(None)
}
}

View File

@@ -0,0 +1,10 @@
event: response.created
data: {"type":"response.created","response":{"id":"resp1"}}
event: response.output_item.done
data: {"type":"response.output_item.done","item":{"type":"message","role":"assistant","content":[{"type":"output_text","text":"fixture hello"}]}}
event: response.completed
data: {"type":"response.completed","response":{"id":"resp1","output":[]}}

View File

@@ -1,4 +1,5 @@
// Aggregates all former standalone integration tests as modules.
mod apply_patch;
mod common;
mod resume;
mod sandbox;

View File

@@ -0,0 +1,267 @@
#![allow(clippy::unwrap_used, clippy::expect_used)]
use anyhow::Context;
use assert_cmd::prelude::*;
use serde_json::Value;
use std::process::Command;
use tempfile::TempDir;
use uuid::Uuid;
use walkdir::WalkDir;
/// Utility: scan the sessions dir for a rollout file that contains `marker`
/// in any response_item.message.content entry. Returns the absolute path.
fn find_session_file_containing_marker(
sessions_dir: &std::path::Path,
marker: &str,
) -> Option<std::path::PathBuf> {
for entry in WalkDir::new(sessions_dir) {
let entry = match entry {
Ok(e) => e,
Err(_) => continue,
};
if !entry.file_type().is_file() {
continue;
}
if !entry.file_name().to_string_lossy().ends_with(".jsonl") {
continue;
}
let path = entry.path();
let Ok(content) = std::fs::read_to_string(path) else {
continue;
};
// Skip the first meta line and scan remaining JSONL entries.
let mut lines = content.lines();
if lines.next().is_none() {
continue;
}
for line in lines {
if line.trim().is_empty() {
continue;
}
let Ok(item): Result<Value, _> = serde_json::from_str(line) else {
continue;
};
if item.get("type").and_then(|t| t.as_str()) == Some("response_item")
&& let Some(payload) = item.get("payload")
&& payload.get("type").and_then(|t| t.as_str()) == Some("message")
&& payload
.get("content")
.map(|c| c.to_string())
.unwrap_or_default()
.contains(marker)
{
return Some(path.to_path_buf());
}
}
}
None
}
/// Extract the conversation UUID from the first SessionMeta line in the rollout file.
fn extract_conversation_id(path: &std::path::Path) -> String {
let content = std::fs::read_to_string(path).unwrap();
let mut lines = content.lines();
let meta_line = lines.next().expect("missing meta line");
let meta: Value = serde_json::from_str(meta_line).expect("invalid meta json");
meta.get("payload")
.and_then(|p| p.get("id"))
.and_then(|v| v.as_str())
.unwrap_or_default()
.to_string()
}
#[test]
fn exec_resume_last_appends_to_existing_file() -> anyhow::Result<()> {
let home = TempDir::new()?;
let fixture = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
.join("tests/fixtures/cli_responses_fixture.sse");
// 1) First run: create a session with a unique marker in the content.
let marker = format!("resume-last-{}", Uuid::new_v4());
let prompt = format!("echo {marker}");
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt)
.assert()
.success();
// Find the created session file containing the marker.
let sessions_dir = home.path().join("sessions");
let path = find_session_file_containing_marker(&sessions_dir, &marker)
.expect("no session file found after first run");
// 2) Second run: resume the most recent file with a new marker.
let marker2 = format!("resume-last-2-{}", Uuid::new_v4());
let prompt2 = format!("echo {marker2}");
let mut binding = assert_cmd::Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?;
let cmd = binding
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt2)
.arg("resume")
.arg("--last");
cmd.assert().success();
// Ensure the same file was updated and contains both markers.
let resumed_path = find_session_file_containing_marker(&sessions_dir, &marker2)
.expect("no resumed session file containing marker2");
assert_eq!(
resumed_path, path,
"resume --last should append to existing file"
);
let content = std::fs::read_to_string(&resumed_path)?;
assert!(content.contains(&marker));
assert!(content.contains(&marker2));
Ok(())
}
#[test]
fn exec_resume_by_id_appends_to_existing_file() -> anyhow::Result<()> {
let home = TempDir::new()?;
let fixture = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
.join("tests/fixtures/cli_responses_fixture.sse");
// 1) First run: create a session
let marker = format!("resume-by-id-{}", Uuid::new_v4());
let prompt = format!("echo {marker}");
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt)
.assert()
.success();
let sessions_dir = home.path().join("sessions");
let path = find_session_file_containing_marker(&sessions_dir, &marker)
.expect("no session file found after first run");
let session_id = extract_conversation_id(&path);
assert!(
!session_id.is_empty(),
"missing conversation id in meta line"
);
// 2) Resume by id
let marker2 = format!("resume-by-id-2-{}", Uuid::new_v4());
let prompt2 = format!("echo {marker2}");
let mut binding = assert_cmd::Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?;
let cmd = binding
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt2)
.arg("resume")
.arg(&session_id);
cmd.assert().success();
let resumed_path = find_session_file_containing_marker(&sessions_dir, &marker2)
.expect("no resumed session file containing marker2");
assert_eq!(
resumed_path, path,
"resume by id should append to existing file"
);
let content = std::fs::read_to_string(&resumed_path)?;
assert!(content.contains(&marker));
assert!(content.contains(&marker2));
Ok(())
}
#[test]
fn exec_resume_preserves_cli_configuration_overrides() -> anyhow::Result<()> {
let home = TempDir::new()?;
let fixture = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
.join("tests/fixtures/cli_responses_fixture.sse");
let marker = format!("resume-config-{}", Uuid::new_v4());
let prompt = format!("echo {marker}");
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("--sandbox")
.arg("workspace-write")
.arg("--model")
.arg("gpt-5")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt)
.assert()
.success();
let sessions_dir = home.path().join("sessions");
let path = find_session_file_containing_marker(&sessions_dir, &marker)
.expect("no session file found after first run");
let marker2 = format!("resume-config-2-{}", Uuid::new_v4());
let prompt2 = format!("echo {marker2}");
let output = Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("--sandbox")
.arg("workspace-write")
.arg("--model")
.arg("gpt-5-high")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt2)
.arg("resume")
.arg("--last")
.output()
.context("resume run should succeed")?;
assert!(output.status.success(), "resume run failed: {output:?}");
let stdout = String::from_utf8(output.stdout)?;
assert!(
stdout.contains("model: gpt-5-high"),
"stdout missing model override: {stdout}"
);
assert!(
stdout.contains("sandbox: workspace-write"),
"stdout missing sandbox override: {stdout}"
);
let resumed_path = find_session_file_containing_marker(&sessions_dir, &marker2)
.expect("no resumed session file containing marker2");
assert_eq!(resumed_path, path, "resume should append to same file");
let content = std::fs::read_to_string(&resumed_path)?;
assert!(content.contains(&marker));
assert!(content.contains(&marker2));
Ok(())
}
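// A possible refactor, not part of the original file: the three tests above repeat the
// same environment and base-argument setup, which could be factored into a helper along
// these lines (the name is hypothetical):
#[allow(dead_code)]
fn base_exec_command(home: &TempDir, fixture: &std::path::Path) -> Command {
let mut cmd = Command::cargo_bin("codex-exec").expect("should find binary for codex-exec");
cmd.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"));
cmd
}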

View File

@@ -121,7 +121,7 @@ async fn test_writable_root() {
}
#[tokio::test]
#[should_panic(expected = "Sandbox(Timeout)")]
#[should_panic(expected = "Sandbox(Timeout")]
async fn test_timeout() {
run_cmd(&["sleep", "2"], &[], 50).await;
}
@@ -156,26 +156,27 @@ async fn assert_network_blocked(cmd: &[&str]) {
)
.await;
let (exit_code, stdout, stderr) = match result {
Ok(output) => (output.exit_code, output.stdout.text, output.stderr.text),
Err(CodexErr::Sandbox(SandboxErr::Denied(exit_code, stdout, stderr))) => {
(exit_code, stdout, stderr)
}
let output = match result {
Ok(output) => output,
Err(CodexErr::Sandbox(SandboxErr::Denied { output })) => *output,
_ => {
panic!("expected sandbox denied error, got: {result:?}");
}
};
dbg!(&stderr);
dbg!(&stdout);
dbg!(&exit_code);
dbg!(&output.stderr.text);
dbg!(&output.stdout.text);
dbg!(&output.exit_code);
// A completely missing binary exits with 127. Anything else should also
// be nonzero (EPERM from seccomp will usually bubble up as 1, 2, 13…)
// If—*and only if*—the command exits 0 we consider the sandbox breached.
if exit_code == 0 {
panic!("Network sandbox FAILED - {cmd:?} exited 0\nstdout:\n{stdout}\nstderr:\n{stderr}",);
if output.exit_code == 0 {
panic!(
"Network sandbox FAILED - {cmd:?} exited 0\nstdout:\n{}\nstderr:\n{}",
output.stdout.text, output.stderr.text
);
}
}

View File

@@ -238,8 +238,8 @@ async fn process_request(
&opts.codex_home,
api_key.clone(),
tokens.id_token.clone(),
Some(tokens.access_token.clone()),
Some(tokens.refresh_token.clone()),
tokens.access_token.clone(),
tokens.refresh_token.clone(),
)
.await
{
@@ -446,8 +446,8 @@ async fn persist_tokens_async(
codex_home: &Path,
api_key: Option<String>,
id_token: String,
access_token: Option<String>,
refresh_token: Option<String>,
access_token: String,
refresh_token: String,
) -> io::Result<()> {
// Reuse existing synchronous logic but run it off the async runtime.
let codex_home = codex_home.to_path_buf();
@@ -459,43 +459,29 @@ async fn persist_tokens_async(
std::fs::create_dir_all(parent).map_err(io::Error::other)?;
}
let mut auth = read_or_default(&auth_file);
if let Some(key) = api_key {
auth.openai_api_key = Some(key);
}
let tokens = auth.tokens.get_or_insert_with(TokenData::default);
tokens.id_token = parse_id_token(&id_token).map_err(io::Error::other)?;
// Persist chatgpt_account_id if present in claims
let mut tokens = TokenData {
id_token: parse_id_token(&id_token).map_err(io::Error::other)?,
access_token,
refresh_token,
account_id: None,
};
if let Some(acc) = jwt_auth_claims(&id_token)
.get("chatgpt_account_id")
.and_then(|v| v.as_str())
{
tokens.account_id = Some(acc.to_string());
}
if let Some(at) = access_token {
tokens.access_token = at;
}
if let Some(rt) = refresh_token {
tokens.refresh_token = rt;
}
auth.last_refresh = Some(Utc::now());
let auth = AuthDotJson {
openai_api_key: api_key,
tokens: Some(tokens),
last_refresh: Some(Utc::now()),
};
codex_core::auth::write_auth_json(&auth_file, &auth)
})
.await
.map_err(|e| io::Error::other(format!("persist task failed: {e}")))?
}
fn read_or_default(path: &Path) -> AuthDotJson {
match codex_core::auth::try_read_auth_json(path) {
Ok(auth) => auth,
Err(_) => AuthDotJson {
openai_api_key: None,
tokens: None,
last_refresh: None,
},
}
}
fn compose_success_url(port: u16, issuer: &str, id_token: &str, access_token: &str) -> String {
let token_claims = jwt_auth_claims(id_token);
let access_claims = jwt_auth_claims(access_token);

View File

@@ -90,6 +90,22 @@ async fn end_to_end_login_flow_persists_auth_json() {
let tmp = tempdir().unwrap();
let codex_home = tmp.path().to_path_buf();
// Seed auth.json with stale API key + tokens that should be overwritten.
let stale_auth = serde_json::json!({
"OPENAI_API_KEY": "sk-stale",
"tokens": {
"id_token": "stale.header.payload",
"access_token": "stale-access",
"refresh_token": "stale-refresh",
"account_id": "stale-acc"
}
});
std::fs::write(
codex_home.join("auth.json"),
serde_json::to_string_pretty(&stale_auth).unwrap(),
)
.unwrap();
let state = "test_state_123".to_string();
// Run server in background
@@ -122,10 +138,10 @@ async fn end_to_end_login_flow_persists_auth_json() {
let auth_path = codex_home.join("auth.json");
let data = std::fs::read_to_string(&auth_path).unwrap();
let json: serde_json::Value = serde_json::from_str(&data).unwrap();
assert!(
!json["OPENAI_API_KEY"].is_null(),
"OPENAI_API_KEY should be set"
);
// The following assert is here because of the old oauth flow that exchanges tokens for an
// API key. See obtain_api_key in server.rs for details. Once we remove this old mechanism
// from the code, this test should be updated to expect that the API key is no longer present.
assert_eq!(json["OPENAI_API_KEY"], "access-123");
assert_eq!(json["tokens"]["access_token"], "access-123");
assert_eq!(json["tokens"]["refresh_token"], "refresh-123");
assert_eq!(json["tokens"]["account_id"], "acc-123");

View File

@@ -20,7 +20,7 @@ use codex_core::config::ConfigToml;
use codex_core::config::load_config_as_toml;
use codex_core::config_edit::CONFIG_KEY_EFFORT;
use codex_core::config_edit::CONFIG_KEY_MODEL;
use codex_core::config_edit::persist_non_null_overrides;
use codex_core::config_edit::persist_overrides_and_clear_if_none;
use codex_core::default_client::get_codex_user_agent;
use codex_core::exec::ExecParams;
use codex_core::exec_env::create_env;
@@ -423,32 +423,41 @@ impl CodexMessageProcessor {
// Determine whether auth is required based on the active model provider.
// If a custom provider is configured with `requires_openai_auth == false`,
// then no auth step is required; otherwise, default to requiring auth.
let requires_openai_auth = Some(self.config.model_provider.requires_openai_auth);
let requires_openai_auth = self.config.model_provider.requires_openai_auth;
let response = match self.auth_manager.auth() {
Some(auth) => {
let (reported_auth_method, token_opt) = match auth.get_token().await {
Ok(token) if !token.is_empty() => {
let tok = if include_token { Some(token) } else { None };
(Some(auth.mode), tok)
}
Ok(_) => (None, None),
Err(err) => {
tracing::warn!("failed to get token for auth status: {err}");
(None, None)
}
};
codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: reported_auth_method,
auth_token: token_opt,
requires_openai_auth,
}
}
None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
let response = if !requires_openai_auth {
codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: None,
auth_token: None,
requires_openai_auth,
},
requires_openai_auth: Some(false),
}
} else {
match self.auth_manager.auth() {
Some(auth) => {
let auth_mode = auth.mode;
let (reported_auth_method, token_opt) = match auth.get_token().await {
Ok(token) if !token.is_empty() => {
let tok = if include_token { Some(token) } else { None };
(Some(auth_mode), tok)
}
Ok(_) => (None, None),
Err(err) => {
tracing::warn!("failed to get token for auth status: {err}");
(None, None)
}
};
codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: reported_auth_method,
auth_token: token_opt,
requires_openai_auth: Some(true),
}
}
None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: None,
auth_token: None,
requires_openai_auth: Some(true),
},
}
};
self.outgoing.send_response(request_id, response).await;
@@ -519,7 +528,7 @@ impl CodexMessageProcessor {
(&[CONFIG_KEY_EFFORT], effort_str.as_deref()),
];
match persist_non_null_overrides(
match persist_overrides_and_clear_if_none(
&self.config.codex_home,
self.config.active_profile.as_deref(),
&overrides,
@@ -1248,6 +1257,7 @@ fn derive_config_from_params(
} = params;
let overrides = ConfigOverrides {
model,
review_model: None,
config_profile: profile,
cwd: cwd.map(PathBuf::from),
approval_policy,

View File

@@ -152,6 +152,7 @@ impl CodexToolCallParam {
// Build the `ConfigOverrides` recognized by codex-core.
let overrides = codex_core::config::ConfigOverrides {
model,
review_model: None,
config_profile: profile,
cwd: cwd.map(PathBuf::from),
approval_policy: approval_policy.map(Into::into),

View File

@@ -279,7 +279,9 @@ async fn run_codex_tool_session_inner(
| EventMsg::TurnAborted(_)
| EventMsg::ConversationPath(_)
| EventMsg::UserMessage(_)
| EventMsg::ShutdownComplete => {
| EventMsg::ShutdownComplete
| EventMsg::EnteredReviewMode(_)
| EventMsg::ExitedReviewMode(_) => {
// For now, we do not do anything extra for these
// events. Note that
// send(codex_event_to_notification(&event)) above has

View File

@@ -280,7 +280,7 @@ mod tests {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model: "gpt-4o".to_string(),
reasoning_effort: ReasoningEffort::default(),
reasoning_effort: Some(ReasoningEffort::default()),
history_log_id: 1,
history_entry_count: 1000,
initial_messages: None,
@@ -314,7 +314,7 @@ mod tests {
let session_configured_event = SessionConfiguredEvent {
session_id: conversation_id,
model: "gpt-4o".to_string(),
reasoning_effort: ReasoningEffort::default(),
reasoning_effort: Some(ReasoningEffort::default()),
history_log_id: 1,
history_entry_count: 1000,
initial_messages: None,

View File

@@ -15,11 +15,17 @@ use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Helper to create a config.toml; mirrors create_conversation.rs
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
fn create_config_toml_custom_provider(
codex_home: &Path,
requires_openai_auth: bool,
) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
let requires_line = if requires_openai_auth {
"requires_openai_auth = true\n"
} else {
""
};
let contents = format!(
r#"
model = "mock-model"
approval_policy = "never"
@@ -33,6 +39,20 @@ base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
{requires_line}
"#
);
std::fs::write(config_toml, contents)
}
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "danger-full-access"
"#,
)
}
@@ -124,6 +144,47 @@ async fn get_auth_status_with_api_key() {
assert_eq!(status.auth_token, Some("sk-test-key".to_string()));
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key_when_auth_not_required() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml_custom_provider(codex_home.path(), false)
.unwrap_or_else(|err| panic!("write config.toml: {err}"));
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
login_with_api_key_via_request(&mut mcp, "sk-test-key").await;
let request_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
refresh_token: Some(false),
})
.await
.expect("send getAuthStatus");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("getAuthStatus timeout")
.expect("getAuthStatus response");
let status: GetAuthStatusResponse = to_response(resp).expect("deserialize status");
assert_eq!(status.auth_method, None, "expected no auth method");
assert_eq!(status.auth_token, None, "expected no token");
assert_eq!(
status.requires_openai_auth,
Some(false),
"requires_openai_auth should be false",
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key_no_include_token() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));

View File

@@ -320,7 +320,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::new_read_only_policy(),
model: "mock-model".to_string(),
effort: ReasoningEffort::Medium,
effort: Some(ReasoningEffort::Medium),
summary: ReasoningSummary::Auto,
})
.await

View File

@@ -1,5 +1,6 @@
use std::path::Path;
use codex_core::config::ConfigToml;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use mcp_test_support::McpProcess;
@@ -14,7 +15,8 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn set_default_model_persists_overrides() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
let codex_home = TempDir::new().expect("create tempdir");
create_config_toml(codex_home.path()).expect("write config.toml");
let mut mcp = McpProcess::new(codex_home.path())
.await
@@ -25,8 +27,8 @@ async fn set_default_model_persists_overrides() {
.expect("init failed");
let params = SetDefaultModelParams {
model: Some("o4-mini".to_string()),
reasoning_effort: Some(ReasoningEffort::High),
model: Some("gpt-4.1".to_string()),
reasoning_effort: None,
};
let request_id = mcp
@@ -53,10 +55,22 @@ async fn set_default_model_persists_overrides() {
assert_eq!(
ConfigToml {
model: Some("o4-mini".to_string()),
model_reasoning_effort: Some(ReasoningEffort::High),
model: Some("gpt-4.1".to_string()),
model_reasoning_effort: None,
..Default::default()
},
config_toml,
);
}
// Helper to create a config.toml; mirrors create_conversation.rs
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
r#"
model = "gpt-5"
model_reasoning_effort = "medium"
"#,
)
}

View File

@@ -223,7 +223,8 @@ pub struct NewConversationResponse {
pub conversation_id: ConversationId,
pub model: String,
/// Note this could be ignored by the model.
pub reasoning_effort: ReasoningEffort,
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
pub rollout_path: PathBuf,
}
@@ -424,8 +425,11 @@ pub struct GetUserSavedConfigResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
/// If set to None, this means `model` should be cleared in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// If set to None, this means `model_reasoning_effort` should be cleared
/// in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
}
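
Read together with the switch to `persist_overrides_and_clear_if_none` earlier in this diff, these doc comments make a `None` field an explicit request to drop the corresponding key from `config.toml`, which is what the updated `set_default_model_persists_overrides` test above exercises. A minimal sketch of that contract (struct and field names from this diff, values illustrative only):

```rust
// Illustrative only: set a default model while clearing the persisted
// reasoning effort, mirroring the updated test expectations above.
let params = SetDefaultModelParams {
    model: Some("gpt-4.1".to_string()),
    // `None` means "clear model_reasoning_effort in config.toml",
    // not "leave it unchanged".
    reasoning_effort: None,
};
```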
@@ -523,7 +527,8 @@ pub struct SendUserTurnParams {
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub model: String,
pub effort: ReasoningEffort,
#[serde(skip_serializing_if = "Option::is_none")]
pub effort: Option<ReasoningEffort>,
pub summary: ReasoningSummary,
}

View File

@@ -15,6 +15,7 @@ use crate::config_types::ReasoningSummary as ReasoningSummaryConfig;
use crate::custom_prompts::CustomPrompt;
use crate::mcp_protocol::ConversationId;
use crate::message_history::HistoryEntry;
use crate::models::ContentItem;
use crate::models::ResponseItem;
use crate::num_format::format_with_separators;
use crate::parse_command::ParsedCommand;
@@ -81,7 +82,8 @@ pub enum Op {
model: String,
/// Will only be honored if the model is configured to use reasoning.
effort: ReasoningEffortConfig,
#[serde(skip_serializing_if = "Option::is_none")]
effort: Option<ReasoningEffortConfig>,
/// Will only be honored if the model is configured to use reasoning.
summary: ReasoningSummaryConfig,
@@ -111,8 +113,11 @@ pub enum Op {
model: Option<String>,
/// Updated reasoning effort (honored only for reasoning-capable models).
///
/// Use `Some(Some(_))` to set a specific effort, `Some(None)` to clear
/// the effort, or `None` to leave the existing value unchanged.
#[serde(skip_serializing_if = "Option::is_none")]
effort: Option<ReasoningEffortConfig>,
effort: Option<Option<ReasoningEffortConfig>>,
/// Updated reasoning summary preference (honored only for reasoning-capable models).
#[serde(skip_serializing_if = "Option::is_none")]
@@ -162,6 +167,10 @@ pub enum Op {
/// The agent will use its existing context (either conversation history or previous response id)
/// to generate a summary which will be returned as an AgentMessage event.
Compact,
/// Request a code review from the agent.
Review { review_request: ReviewRequest },
/// Request to shut down codex instance.
Shutdown,
}
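
The `effort` override above uses a nested `Option` to encode three distinct states, per its doc comment. A minimal sketch of the intent, assuming the crate's `ReasoningEffortConfig` alias is in scope (illustrative values, not part of the diff):

```rust
// Leave the existing effort value unchanged.
let unchanged: Option<Option<ReasoningEffortConfig>> = None;
// Clear the effort.
let cleared: Option<Option<ReasoningEffortConfig>> = Some(None);
// Set a specific effort level.
let set_high: Option<Option<ReasoningEffortConfig>> = Some(Some(ReasoningEffortConfig::High));
```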
@@ -405,6 +414,7 @@ pub struct Event {
}
/// Response event from the agent
/// NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.
#[derive(Debug, Clone, Deserialize, Serialize, Display, TS)]
#[serde(tag = "type", rename_all = "snake_case")]
#[strum(serialize_all = "snake_case")]
@@ -500,6 +510,17 @@ pub enum EventMsg {
ShutdownComplete,
ConversationPath(ConversationPathResponseEvent),
/// Entered review mode.
EnteredReviewMode(ReviewRequest),
/// Exited review mode with an optional final result to apply.
ExitedReviewMode(ExitedReviewModeEvent),
}
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct ExitedReviewModeEvent {
pub review_output: Option<ReviewOutputEvent>,
}
// Individual event payload types matching each `EventMsg` variant.
@@ -708,12 +729,12 @@ where
let (_role, message) = value;
let message = message.as_ref();
let trimmed = message.trim();
if trimmed.starts_with(ENVIRONMENT_CONTEXT_OPEN_TAG)
&& trimmed.ends_with(ENVIRONMENT_CONTEXT_CLOSE_TAG)
if starts_with_ignore_ascii_case(trimmed, ENVIRONMENT_CONTEXT_OPEN_TAG)
&& ends_with_ignore_ascii_case(trimmed, ENVIRONMENT_CONTEXT_CLOSE_TAG)
{
InputMessageKind::EnvironmentContext
} else if trimmed.starts_with(USER_INSTRUCTIONS_OPEN_TAG)
&& trimmed.ends_with(USER_INSTRUCTIONS_CLOSE_TAG)
} else if starts_with_ignore_ascii_case(trimmed, USER_INSTRUCTIONS_OPEN_TAG)
&& ends_with_ignore_ascii_case(trimmed, USER_INSTRUCTIONS_CLOSE_TAG)
{
InputMessageKind::UserInstructions
} else {
@@ -722,6 +743,26 @@ where
}
}
fn starts_with_ignore_ascii_case(text: &str, prefix: &str) -> bool {
let text_bytes = text.as_bytes();
let prefix_bytes = prefix.as_bytes();
text_bytes.len() >= prefix_bytes.len()
&& text_bytes
.iter()
.zip(prefix_bytes.iter())
.all(|(a, b)| a.eq_ignore_ascii_case(b))
}
fn ends_with_ignore_ascii_case(text: &str, suffix: &str) -> bool {
let text_bytes = text.as_bytes();
let suffix_bytes = suffix.as_bytes();
text_bytes.len() >= suffix_bytes.len()
&& text_bytes[text_bytes.len() - suffix_bytes.len()..]
.iter()
.zip(suffix_bytes.iter())
.all(|(a, b)| a.eq_ignore_ascii_case(b))
}
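
These helpers make the tag detection above tolerant of ASCII case differences. A small assert-style illustration (the tag strings here are examples, not the crate's constants):

```rust
assert!(starts_with_ignore_ascii_case(
    "<ENVIRONMENT_CONTEXT>cwd=/tmp</ENVIRONMENT_CONTEXT>",
    "<environment_context>",
));
assert!(ends_with_ignore_ascii_case(
    "<ENVIRONMENT_CONTEXT>cwd=/tmp</ENVIRONMENT_CONTEXT>",
    "</environment_context>",
));
// A prefix longer than the text can never match.
assert!(!starts_with_ignore_ascii_case("short", "much-longer-prefix"));
```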
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct AgentMessageDeltaEvent {
pub delta: String,
@@ -828,26 +869,7 @@ impl InitialHistory {
InitialHistory::Forked(items) => items.clone(),
}
}
pub fn get_response_items(&self) -> Vec<ResponseItem> {
match self {
InitialHistory::New => Vec::new(),
InitialHistory::Resumed(resumed) => resumed
.history
.iter()
.filter_map(|ri| match ri {
RolloutItem::ResponseItem(item) => Some(item.clone()),
_ => None,
})
.collect(),
InitialHistory::Forked(items) => items
.iter()
.filter_map(|ri| match ri {
RolloutItem::ResponseItem(item) => Some(item.clone()),
_ => None,
})
.collect(),
}
}
pub fn get_event_msgs(&self) -> Option<Vec<EventMsg>> {
match self {
InitialHistory::New => None,
@@ -907,13 +929,26 @@ pub struct CompactedItem {
pub message: String,
}
impl From<CompactedItem> for ResponseItem {
fn from(value: CompactedItem) -> Self {
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: value.message,
}],
}
}
}
#[derive(Serialize, Deserialize, Clone, Debug, TS)]
pub struct TurnContextItem {
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub model: String,
pub effort: ReasoningEffortConfig,
#[serde(skip_serializing_if = "Option::is_none")]
pub effort: Option<ReasoningEffortConfig>,
pub summary: ReasoningSummaryConfig,
}
@@ -937,6 +972,57 @@ pub struct GitInfo {
pub repository_url: Option<String>,
}
/// Review request sent to the review session.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewRequest {
pub prompt: String,
pub user_facing_hint: String,
}
/// Structured review result produced by a child review session.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewOutputEvent {
pub findings: Vec<ReviewFinding>,
pub overall_correctness: String,
pub overall_explanation: String,
pub overall_confidence_score: f32,
}
impl Default for ReviewOutputEvent {
fn default() -> Self {
Self {
findings: Vec::new(),
overall_correctness: String::default(),
overall_explanation: String::default(),
overall_confidence_score: 0.0,
}
}
}
/// A single review finding describing an observed issue or recommendation.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewFinding {
pub title: String,
pub body: String,
pub confidence_score: f32,
pub priority: i32,
pub code_location: ReviewCodeLocation,
}
/// Location of the code related to a review finding.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewCodeLocation {
pub absolute_file_path: PathBuf,
pub line_range: ReviewLineRange,
}
/// Inclusive line range in a file associated with the finding.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewLineRange {
pub start: u32,
pub end: u32,
}
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct ExecCommandBeginEvent {
/// Identifier so this can be paired with the ExecCommandEnd event.
@@ -1082,7 +1168,8 @@ pub struct SessionConfiguredEvent {
pub model: String,
/// The effort the model is putting into reasoning about the user's request.
pub reasoning_effort: ReasoningEffortConfig,
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffortConfig>,
/// Identifier of the history log file (inode on Unix, 0 otherwise).
pub history_log_id: u64,
@@ -1172,7 +1259,7 @@ mod tests {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model: "codex-mini-latest".to_string(),
reasoning_effort: ReasoningEffortConfig::default(),
reasoning_effort: Some(ReasoningEffortConfig::default()),
history_log_id: 0,
history_entry_count: 0,
initial_messages: None,

View File

@@ -0,0 +1,27 @@
▒▒▒▒▒██████████▒▒
█▒█░▓█ ██░▒▒░▓▓▓░░ █░███▒
▒▓░▒█ █░█▒░██░░█▒█░ █████▒ █▒
██░▓██▒██▓██ ▓░░ ██▓▒ ██
▒█░▓██▒▓▓ █ █░█▒██
█░░▓▓█▓░ █▒█░░▒▒▒
▓▓░▓ ░█ ░ ▒▒▒█ ▒▓▒░█▒░
█▓░░█ ░░ █▒▒█░▓░░ ░█▒▒░█▒░
██░░▒░█ ░█▓▒▒▒▒▓░ ███░▒▒█
░▒ ░░▒░░ █▒░█▒▓░░▒▓ ▓░░░░ ░
▓░▓░ ▒░ █▒▓▓ █░▓ ░░░▓░
▒▒ ▓░ █▒▓░░ ▓█ ▒░░░░
░░░░ ▓ █░▓▓ ▓░░ ▒▒▒█▒██████████ ░▓▓░ ░
█░▓ ░█ ▒▓░░██ ▓█░ ▓░░▒▒██████ ░▓▒░▒░
▒▓░▒▓ ░▒ █░▓░ ▒░░ ░ ▓▓▓▓▓▓ ░ ░░█░░
▒▒░░▒▒░█ ████▓ █▓░▓▓ ▓▓
▒▒▒░ ▒▒▒ ▓█▓█▓░▓▓
▒░░░█ █░▒▒ ▒░▒█░█▒░█
████░█ ████ ▓▒██░███▓▓█
░█░░█ ░▒███▒░█▒▒███████░█░███▒██░
█ ███▓█░░█▒░▓ ██▓░░▒██ ▒██
███▓░█░░▓▒▒▒▒▒▒██▓██░
░░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒▓██████████▒▒
█▒▓░▓█ ██ ▒█░▓▓▓ █ █░ ██▒
▒▓░░█░█░▓▒░██░░█▒░░ ████▓▒ █▒
█░█▓██▒█░▓█ █ █▒▓▒ ██
▒█░▓█ ▒▓▓█ ███░█▒██
█░░▓▓▒▓░ ▒▓░░▒▒▒
▓░░█ ░█ ▒▓ ░▒▒█ █▓█░██░
█▓░░█▒░░ █▒▒▒▒▓▒▓ ▓▒█░░▒░
██░░▓░█ ░░░░▒▒▒▒ ░ ██░░▒█
░█ ░░ ░ █▒░██▒▓░ ░░░░ ░
█░▓░█▒░ █▒▓░█ ▓█▓ ▒░░░█░
▒▓ █░█ ▒▓░░ ▓▓█░ ░░ ░
░░░░█ █▒ ▓░▓▓▓ ▓▓░░ ▒▒▒█▒██▒███████ █ ▓░ ░
█░█ ░▒ ▓░░███▓█ █ ░▒██ ███ █ ░▓▒░▒░
▒█░▒▒ ░█ █░▓█ ▒░ ▓██▓██▒░░░▓▒▒██ ▓▓▓░░░░
▒▒▒░█▒░▒ ████ ░ █▓░▓░ ▓▓
█▒▒░▒ ▒█▒ ▒█▓█▓█▓▓
▒▒░░█▓█▓▒▒ ▒▓▓█░█▒░█
███░░▒ ████ ▒▒██░██░░▓█
█░█░░█ ████░░█▒▒██████░░▓░██ ▒██░
░ ███▓▒▓░░▓█░ ███░█▒██ █▒██
██░░██▒▓▒▒▒▒▒▒██▓██░
░░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒▒██████████▒▒
▒▒▓░██ ░█░▒█░▓▓▓ ▒ █░ ██▒
▒▓░░███▒▓▓███░█▒▒░░ ██▒█▓▒ ▒█
█░░█░ ▒▒█░█ ▓ █ █▒▓▒ ██
██░▓▓░█░█ █▒█░████
▓░░█ ▒░█ ██▒█ █▒▒░░▒▒▒
▓░▓███░ █░ █▒█▓ ▒█░█▒▒
▓░▓░ █░ ▒░ ▒ ██ ▒▒░▓▒░
░░░▒ ░░░ ░░▓░█▒▒ ▒█░█░░
░░░░ ░░█ █▒ ░▒░▒ █ ▒░░ ░
░░▒░ ░ █░░▓ ▓░▓ ░░░▒░
█░▓▓ ▒█ ▒█░░░ ▒░▒ ░ ░░░░░░
██░░ ░░ ▓▓▓█░ ░▓ ▓▒█▒█▒██▒███████ ▒█░░ ░
█░▒░ ▓░ ▒░░▓░▓▒░█ ▓▒░░██████▒ ░▒▒░▒░
░▒░▒ ▒▒▒ ▒ █▓▓ ██ ▒▒█░██░░░░▓▒▒▓█ ██░░░▓█
▒░░▓ ▒▒▒ ▒███ ▓ ▓▓██▓
▓░░▒ █░░▒ ▒▓▓█▓███
▓█░░▒ ▒██▒ ▒▓░███▒░█
▒ ▒▒█▒ ▒░███▒▓░ ▒▒██░██░░▓█
░█░█▓ ██░██░░█▒▒███████░█░██ ▒▒█░
░░░██▒▒░░█▒█▓░░█░░░▒██ █▒░█
▒█░░░█▒▒▒▒▒▒▒▒██▓██░
░░ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒███████▒██▒
▒█▓░████ ▒▒██▒ ▓░▒ █░███▒
▒█░▓░█░█▒▓██░░█▒▒█████▒▒▓█▒█ ██
█▓█░█▒▒░█░█ ▓ █░██▒█ █▒
▒░▓░ ▓▓██ ▓▓ █░█░▒ ▒
▓░░▒ ░▓█ ███▒ ░█░ ▒▒▒
▓░░▓▒░░░ ▓▒░████▓ █▒█░▒▒██
▓░░▓▒░█▓ ▒█▒░▒▒▒▒▒ █▓░█▒▒▒
░░█░░░█░ ▒█▒░░▒▒ ▒▒ ▒█░ ▒░
░░█ █░ ▒▒█░░░██▒ ░░▓ ▓█
▒░░ ░░ █▓▒░░ ▓░ ░░░ ░░
░░░ ░░█ █░░█ ▒█▓▒ ░░░░░░
██░ ░░█ ▓▒▓░▓ ░█▓ ██▒▒▒██████████ ░░░ ░░
░▒ ░▓ ░░▓█░█░█ ░ ░░▓███ ████ ▒░ ▒░█░ ░
░░░ █▓░ █░░ █▓░░█ ░▒█▓████▓░░░░▓▒▒█░▒▓░░▓▓
░░▒ █▓▒ ████ ▒▓▓▓▓▓░
▒░▒ ███ ▓░█▓█▓░
▒░░█ ████▒ ▒█░▓▓ ▒▓█
█░░░▒ █████▒▒ ▒▓█░▒▓░▓▓▓░
░ ░░▒ ██░█░░██▒▓███████░░░░█░▒▒█░
███▒▓▒ ██▒░░░░░░████░ ▒██
█ ███▓▒▒▒▒▒▒▒▒█▒▓██░
░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒▒███████████▒
█▒▓█░██ ▒▒▓░▒██▒ █░ ███▒
▓▓▓█ ▒▒▓███░░█▒▒██░░░▓▒▒▒ ██▒█
▒▓██░█▓████ ░ ░██░▒█▒░▓█▒▒
▓██▓░░███ █░░░▓▒█▓
▒░░▓▓░█ ███ █░░█ ███
▒░░░▓░▓█ ░░ ▒▒░█ ▒░▓ ▒▒▒
░░░▓░ █░▒▓ ██▒▒ █░░ ▓░▒
░▓▓░░▓ █░░▒░▒▒█▒▒ █░▓ █░█
░▓▓░░ ▒▒░░░▒█▒▒▓ █░░ ░░█
░░░ ░ ░ █░░ ▓█▓░█ ░░ ▓▒
░░▒ ░ ▓ ▓█▓▓█░█ █ ░ ▓▒
░▓ ░░░ ▒▓██▓░░█ ████▒▒█████▒███ █░█░░░
█░▓░▓▒█ █░ ▓▓▒█▓░ █▓███ ▓ ███░ ▒░░█▓░
█▒█ ░▒░▒ ██▒▓▒▓█ ░▒██████░░▓░░░░█ ▓▒ ▒▒░
▒▒█ ▒▒█▒ ████░ ▒░▒░▓ ██
█▒█ ▒ ░█ █░█░█▓░█
░▓▒ ▒█░█▒ ▒█░░███░█
█░▓▒ ▒█░░█▒▒▓ ▒▓▓░░█░▒▓▓
█░░█▒ █░░░█░██▓▓███████░░░██ ▒██
░██▒▒▒ ░████░░░░░░███ ▒██▓
░█▒░▓█▒▒▒▒▒▒▒▒█▒▓██
░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒█▒███████▒▒▒
█▒▓░██ ▒▒▒▓▓█▒▒███████▓▓▒
█▓█░ █▒▓▒█░▓██▒▒██▓█░██▒ ░█▒▓░
██░░▒▓▓████ ▓▒░▒▒ ████▒
██ ▓███▓ ░█ █░▒█▒▒▒█
▓░ ▒░█▓▒ ██ ▒▓ █▒▓█
▓ █░░░ ▓ ▒ ░ ░▒▒ ░█▒▒
▓░ █░██░ ░▓▓██▒▒▒ ▒▒ ░▒░█
▒██▒░░▓ ░▒░█▒ ░▒ ░█ ▓░█
░ ░░░░ █░▓███▒▒▒ ░ ░█░░
░░█░░▒ ░▒▒░█ ▓░░█ ░░▒░▓
░▓█░░▒▓ █ ▓ ▓▓█▓░ ░▒ ░░
▒░█░░░▒ ▓▓▓▒▓░░▓ ▒▓████▒▒▒▒▒▒▒▒█▒ ░▒▒░░
██▒▒░▓▒▓ ▒███▓██ ███████ ████░▓ ▒░▓░░
░▒▒░▒ ░ ▒ ▓█▓░ ░██████░░▒░░█░▓ ▒▒▒░▒█
░▒ ░▒▓▒ ░░ ▒█▓▓▓▓
░▒█▒███▒░ █▓░░█▒▓ ░
▒░██████▒ ▒██▓░░░▓█
█░▓▒█▒▓░███▒▓ ▒█▓█▒░░██
█░█ ██░░░░███████████░░███░▓███
███▒▒ ███████░▓▓▒█ ░ ▒███▒░
██░░██▒▒▒▒▒▒▒██████
░ ██ ░░

View File

@@ -0,0 +1,27 @@
▒█▒█▒█████▓▒▒▒
▒▓▓░ ░ █ ██ ▒██░████▓█▒
▒▓ ▒███ ▓██▒▒██▒▓█▒▓ ░ ████
▓█ ███▒██▒█ ▒░█▒█▒ ██▒░▒
██ ▒▓█ █▒ ▓██▒ ▒█▒▒▒
▓ ▓▓░█ ▓█▒█ ████▓▒█░
░ ▓▓█▒ ▒█▒░ █▒█ ░▒▒░
░ ░▓▓▒ ░░ ▒██ ░░ ░▒▓░
█░ █░ ▒ ░▒█▒▒ ▓▒███░█▓
░▓░░░▓ ░▓▒ █▒▒ ██░ ░▒
░▓░░▒█ ▒░ █▒█ ▓▓░░░▒
░▒░█░░ ▒█ ▒█ ▒░░░░░
░▓█▓░█ ░▓░█ ▓▓ ▒▓███▒▒▒█▒█▒▒█▒ ▒ ░░░░
░█░█▒░ ▒░ ▒░█ ░▓▒▒██████████░▓█ ░░ ░▓░▓
░▓██▒▒▒ ▒▒█▓░ ███▒██▒▒▒███░▓ ▒░▓█▒█ █
░▒██▒██ ▓░▒█▓ ▓
█░▒██▒ ▒ ▒█▒░█ █
░▒ ▒ ███▒ ▓▓██░▓▒▓
█░▒ ░▒▒▓██▒▒ ▒██▒█ ▓█▒█░
█░▒▒ ▒░██░░▒██▓▒▒▒███████▓▓▓████
█░█▒ ████░███░███░████▓░█░
░███▒▓▓▒▒▒▒▒▒███░██
░ ██ ░░░

View File

@@ -0,0 +1,27 @@
▒█▒░▓█████▓▒▒▒
█▒██░ ░██░█▒░▓
▒▓█░ ▒███ ░░█▒▒▒█▒█▒ ░▒██▒██
▒▓ ▒██ █▓ ░▒█ ██▓▒██
███▒▓█ █▒ ▒░█ ▓▒▒▒▒▒
░▓▓▓▓ ▒ ██ █░█░▒▒░█▒
░ ▓░█▓ ░ ███ █▒██▒░▓▒
▓░░▓█ ▒ █▒█▒█▓ ██▓▒█▒▓░
░▒▒░▓▓ ▒▒▒▒▒ ██ ░ ░░▒
░ ▓█░ █▒███▒█ ░░ ░░ █
░▓░░░░█ ░ ░ ▓▒█▓ ▓ ▒ ░░▓
░ ░░░▓ ░░████ █ ░ ░░░▓
░░ ░░▒ ███ ██▓▒█▒███▒▒▒██▒█▓ █ ▒░░▓
░ ░░ ░ ░▓░▓ ▓░ █▓▓▓█▒▒▒▓▓▓█▒░█ ░█░░▓ █
░▒▒▓▓ ▒█░ █▓▓▓░▒▒▒▒▒▒▒█ ▓▓▒░▓ ▓
░▒▒█▒█ ▒░ ░ █ ▓
░▒▒░░█▒ █░▓▓░█ █░
░█▒▓██ ▒▓█▒▓░▓ ▓░
█░▒█▓░░█ ▒ ▒▒██░░█ █▒█
█░▓ ░█░░█░▒▓█▒▒████░░░█░█▓░░▒▓░
█░██░░░░██████░▒█ █▒▒▓▓▒▓█
█░██░░▒░░░░▒██░░ █
░ ██ ░ ░

View File

@@ -0,0 +1,27 @@
▒██▒█▒▒███▓▓▒▓
▒▓▓█ █░▒▒██▒
▒▓█░▒███ █░█▒▒████ ░▒███ █
▒░█▓░██▒██░▓ ░▒█ ▒▒▒ █
██▒░█ ▓▓ ░█ ░▒█ █▒
▓██░█▓▒ ▒ ▓░▒ ▒▒█▒░█
▓█▓░█░▓░ ░▒▓▒ ▓▒█▒ ░█░▒
▒░▒▓█ ▓ ▓▒▒█▒ █ █ ▒▒▒
░█▒░ █ ▒█▒▒▒▒▒ ▒█▒█░▓▒░
░ ░░ ░ ░▒▓███▓ ░█▒█░█▒▓
█▒░░██ ▒▒░▓█▓▓▒▓ ░░░▓ ░░
▓█░░▒ ██▓██▓░█ ▓ ░░░░█
░░░░ ░ ▒░░▒░░ ▒██▒░▒█▒▒▒▒█ ░█░█░░
░█▓░ ▒ █▒▒░▓ ░▒███▓▒▒▓▒▒▒▓ ▓ ▒░░░░░
█░░█░ ████▓▒░▓░▒░█ ▓░▓░ ▓
░▓▒▓▒ ░ ░░ ▓█░▓▓ ▓
░▒▒░░▓▒ ▓█▒▓░▓ █░
░░██▒▒█ ▒░ █▓ ▓▓█░
█▓▒▒████░ █░█░▓██░▒▓
█░░████▒░█▓░▓█████░░█░▓▓░▒ █
█░░░░█░ ███████ ░▓▒██▒▒▓░
░██░▓▒▒▒▒▒▒███▓█ ▓█
░ █ ░ ░

View File

@@ -0,0 +1,27 @@
▒████▓███▓▒▒▓
▒▒██ ▒▓▓▓ █ █░██░▒█
█░█▓▓████ █▒██░█ ▓▒█ ▒▒
▒░█░▓█▒▒█ ██ █░▒░█▒
█▓█░█ ░▒ ▒▒██ ▒▒
█▓█▓▓ ▒ ▒ █▒░▒▓█▒
░█▓▓ ▓▒▒▒█ ▒ ▒▒▒█▒▒
░░░░ ▒▒█▓█▒██ ▓▒▒ █▒▒░
░░▒░ ███▓▒█▒ ▓ ▒▒ ▒▒▓▒
░ ░█ ▓ ░░█▓▓▒▒▓ ▒ ░░██░░
░ ▒ ░▒▒ █▓▒▓▓ ░ ░░░ ░░
░▒▒▒ █░▓ ▓██ ░ ▒░░░█░
░▓▓█ ░ ██▓░ ▓▒██████▒█▒ █░░░░░░░
░░▓░▒█ █ ▓ ░█████████▒▒ ▓ ░█░░░░
░▒▓░ ░ ▒█████░▒▒▒░█ ░ ░█░░░▒░
░█ ░ ░ ░░░░░ ▒▓ ▓█▓ ░
▒░░░▓ ▒ █░░░ ▓
▒░░▒▒▒█▓ ██ ▓ █░
█░░█░▒▒█ ▒░███▒▓█ ▓
▒░░██▓█░█▒█▓▒███░░░░█▓░▒ █
██▒ ████░█░███░▓▒▓██▒██
██▓▒▒▒▒▓▒▓████▒▓█░
░░ █░

View File

@@ -0,0 +1,27 @@
▒██████▓██▒▒▒
█▓█ ▒▓▒▒▒███░░▓█░██▒
█░ ▒████ ░▒██▓▒ ░█▒ ░▒▒
▓ ▓░ ░▓█ ██ ▒ █▒
░▒█░░ ▓█▓░█ ░░
░░█░ ▒▒█▒ █▓▒█ ▒░
▓▒▓░ ▒▒░░▒ █▓█▒▒▒█▒
░░░░ ░▒▒ ░▒ █ ▒█░░ ▒▒
▒ ░░ ▒▒▒ ▒▒█ ▒░▒░░░░░
░░░░ ▓░▒█░▒▒▒█ ░▒░▓██
▓░░░ ▒░▒▓▒░▓█ ░▓░░░░░░
▒▓░░ ▒░▓▓ ▓ ░ ░ ░░░░░░
░ ░█ ▒▓░▓ █▒▒████████ █░░░▒█▓░
█▓▓█ ▓ ██████████░░ ░ ░ ░░░▒
░░▓█ ░████▒░▒▒▓░ ▓█░ ░░░░░
░▓▓░██ ░ ░░░░░ ░ ▒█░░░░█
░░▒▒ █▒ ░░░░▓░░▓░
░▒██▒█▒ ▒░░▓▓▓█ ░
░░█░▒░█ ██░█▓▓▒ ▒▓
█▒ █▒▓█░█████▒██ ██▓█░ ██
█▒ ██░░░░██░█▓▒█▒ ██
██▓▒▓█▒▒███▓█▒▓█
░ ░░ ░

View File

@@ -0,0 +1,27 @@
▒██▒██▒███▒▒▒
█▓█▒████ █░█░▓ ██▒
▒░░▓███ ░██▓▒█▒██▓ █
██▓█▓ ░█▒▒ ▒▒█
█▒█▓░ ▒▒░▓█ ░█
▓░█ ▒▒█ ▒█▒░▒ ▓█
░▓▓ █░▒█ ▓▒▒▒█▒▒
░ ░ ▒░▒▓▒▒█ ▒█ ░░ ░
█▒▒░ █░█▓▒▒░▒ ▓▒░░░ ░
░░█ ▒▒▒░█░█ ░▒▓░▓ ▓
█░ ▒▓░▓█ ░ ░░░█░░░
▓░ ░ ▓█▓░▓ ░░░░░░░░
░░▓░ ▒░ ▓░ ▒▒████████ ░░░░▓█░░
█░░ ░ ▓░▓ ████████▒█▓░ ▒░█░░░
░▓█▒░█▓ ▓██▒▒░▒█░▓░█░░░░░░ ▓
░░█▒█ ░░ ░░░░░ ▒ ░█░░ █░
░▓▒▒▓█ ▓▒░ ▓░▒░
▒▓░▒▒▒▒ ██▓█ ▓ ▓█
▒▓░▒████ ▓█▓█▓▓ ▓▓
▒▒█░██▒███▒▒██░▓▓▒▒ ▓
█▒▒ ██▒▒░██░▒█▒░░ ██
░██▒▒▒█▓▓█▓▒░▒██░
░ ░ ░

View File

@@ -0,0 +1,27 @@
█▒▒░███████
██░▓█▒ ▒▓░█▓ █▒█
▓█░ █ ░██▒▒▒▒▒ █▒
▓░▓██ ▒░▒▒ ▓░░
░░▒▒ ░▒▒▒ █▒░
▒▒ ▒█▒ ▒█░▓▓█▒▒
░▓░░░██ ▓ ▒▒ ▒█▒
▒ ░ ▓▒ ▒ ░▓ ▒▒ ░
▓░░░ █░░░▒█ ░ ░▒▒░░░
░░▒ █░▓▓▓ █ ███░█▓▒█
░░░ ░▒▒░░░ ░▓░░█▓░░
░░░░ ▒ █ █ ░▒░░░░░░
░░▓▒ ░▓ ▒▒█████ ░▒░░░░░░
░░ ░▓▒█ ██████░▒ ░ ░░░░░░
▓░░█░░ ██▒░▒█▓ ░█░░░ ▒░
░▓██░ ░ ░░░ ░░░▒░
██▓░ ░ ▒ ▓▒░░░░▒▓
░░▓█▒▒▒ ░▓█ ▓░░█░
█░░▒██▒▓ ░░█░█░█ ▓█░
███▒█▒███▒█▓▒█ ▓▓ ░░
▒▒██▒░▓█░▓███▒ ▒▓
█▒▒▒▒██▓█▒ ▒▓░
░ █░░░░ ░

View File

@@ -0,0 +1,27 @@
▒▓▒███▓▒██▒
▓ ██ ▒█▓▒ ▒░█
░▓▓█ █▒██▒█░█ ▒▒
▓░▓█▓ ░▓▓ ▒▒
░░▒ ▓ ▒░▒▒ ▒█▒
░░█▒▒░▒ ░▓░█ ░░
█ ░░▒███ ▒ █░ ░▓█
░░ ░░░▒░ █ ░░ ▒░▒
▓▓░ ▓▓▒░▒ ░░░▓ ░
░░ ░░░▒▒ ░▓█░░▒░
░░▓ █░▓░▒ ░▓░▒▓░ ░▓█
░░▓ ░▓▓▒█ ░▓▒░░░▓ ░░
░▓ ░░ ▒▒▒████▒▒░░░░░░░
░ ▒█ █████▒░░ ░░░░░░░
░░▓ ░ ▒▒▓▒░█░░▒ ░░ ░
▓░░ ▓▓ ░ ░░░ ░░ ░
░░ ░ ▒▒▓ ░░ ▒░
░░░▒ ░░█▓░░░██
░░█▒ █░▒▒░░░░ ▓
▒░█▓▒█▒▒█▓▒░█ ▓▓░
▒▒█░ ▓░▓▒░█ ▓▓
██▒▒█ ▒▓█▓▒░
█ ░

View File

@@ -0,0 +1,27 @@
█▒████▒██▒
▓█▒▒▒▓▓▓█ ░█
▓░▓█▒▒ ░░ █░█
█░█▓░▒░██ ██ ▒
░▒░ ▒▒▓▒▒▒ ▓▓
░▒█▒██ ░░▒░█ █░
░░░█ ░ █░░ ░▓░░▒
▒▓█▓ ░░█ ▒░ ░
░ ░██ ▒░ ░░░▓
▒▓▒█▒▓░ ░░█░░█▓░
░▓░▓▒█░ ░░▓██░▓░
░▓█▓█ ░ ░░ ░ ░░░
█▓▒░▒▒▒█ ░░▓░░█░░
▒▒░ ██░▒ ░░▓ ░░ ░
▒░ ▓ ░░░░░░░▒░
░░█░██ ░ ▓░░░░ ▓░
░░░ ░░ ▒ ░░░░░ ░
▒░ ▒ ▓░░░ █▓▓▒░
░█▒ ░ ░ ▒▒█░ ░░
▒░▒▒█▓▒▓█░ ░░▒█
▒█▓ ▓▓█ ▓
█▒▒▓▒█ ▒█
█░█

View File

@@ -0,0 +1,27 @@
▒███▓███
▒ ▒░▓ █ █▒
▓░░░▓░ ▒▒▒
▓▓▓░▓░ █░░░
░░░▓░██ ░░
░░▒░▓▓░▒░ ▒░
░░▓█░░▓░ ░░
░░ █░░░░▒ █
░░▓░░█▓▒░░░░░
░░░░░ ░░░ ▒▒░
░░▒█ ░██░ ░░
░░░▒░ ░ ░▓░░
░░▒▒░▒░▓░░░░█
░░░░░░░░░ ░▒
░░▒░ ░ ░░░ ░▓
░░▒░░░░░░░░ ░
░▒▓▓▓░ ▓ ░
░░▓░░ ▓░░
▓░░░▒░█▒▒░▒
░▒█░▒ ░█░░
▒ ░ ▒▒ █▒▒█
██░ ██▓▓░░
█ ░ ░

View File

@@ -0,0 +1,27 @@
████████
░▒░ ▒░
░▒░█ ░▒▓▒
░░░▒▒▒ ░▓
░▓▓░█ ▒░░
█ ▓ ▒▒▓▒░
█▓ ░▓██ ░
▒░░░ ░▓▒▒
▒░▒▒█ ▓░
░░░░▒▒░░▒
░░░ ██ ░░░
░░░░▓▓▒▓░
░░░░░░░░░░
░░░░░░▒▒░░
░░░░░░░░░░
░░▓░░░░░ ░
░░░░░░░░░░
░░░░░░░░▓░
░░█░▒▓█░▒░
▒░▓░░▓ ░▒░
▒░█░ ░ ░ ░
█▓ ▓█▓
█ ██ ░

View File

@@ -0,0 +1,27 @@
███▓███▒█
▒▓ ▒▓░▒
░█░██▒▒░▒▒
░█ █ ░█░▓░
░░░ ░▒░░░ ░
█░▓░▓▒░▒ ░
▒░░░░░░░░ ░
▓░░░▓░▒░░░ ░
░░░░░░▒░░█ ░
░▒░░░░ ▓░▒ ░
░░ █░▒░▓░█▓░
░█ ▓░ ░░▓░
▒░ ░░░ ░░░█░
▒ ░░░█░░░ ░
▒░░░░█░█░░░
▒▒░░░ ▓▒░░░
░▓░░░▒▓░░▓░
░░▒▓█░░▒▓▓░
░█░▓██░█░░░
█▒ ░ ░░▒
░░█ ░░▓░▓░
░▓▒ ░█▒▒░
░ █ ░

View File

@@ -0,0 +1,27 @@
██░▓█▓█░█
▓▒▒░▓░▒▓▒▓█
░░ ░ ▓░▒█▒▒▒
█▒░ █░░░░██ ▓
░░▓░▓▒░ ░ █▓░██
░ ░█░░░░░░░ ▒░
▒░░ ▒ ░█░ ▓▓░▓
░░░░░▒░░░▒ ░ ░░
░░░░░░░░ ░░░ █▓░
░░░░▒▓░░▓ ▒█░░░░
░░ ▒░▓░▓▓█░▓▓░░
░░▒░▓▓▒ ▒▒▒▒ ▒▒░
░░░░░▒ ░▓▓▓█▒▒░░
░░░░░ ░█░░░░░█░░
░░ ▒ ░░ ░ ▒▒░█░░
░ ▒ ░ ░░░▒▒▓▒▒░
░█ ░█░▒ ░░░▓▒
░░░ ▒█▒▒██░▓░░
█░▓▒▓█ ▓░░▓░░█░
░░█ █▓▒░█░
█▒ ▓ ▒█░█▓▓▓
█▓░▓ ▒░▒▒▓
░ ░

View File

@@ -0,0 +1,27 @@
▒███▓███░▒▒
█▒▓░░▒▒▒█▒▓█▒
░░▓░█▒▓░▒░ ██▒▒
░▓░▓▓▓▒░█▓ ▒███░█
██▓░░█▓▓░░ ▓█▒▓
░▒▒▓▓ ░█▓█▒ ▒░█
▒░▓▒▒█ ░░▒█ ▒░░
░░▒░▓░░ ░░░█▓ ▓░
░░▒█░░░▒▒▒░░░ █▓░░
▒░░░░░ ░░▒█ ░░▒░
░░░░░▓ ▓░▒░█░░ ░▓▒░
░░░░░▓ ▒▓▒ ░▒░ ░▒░
░░░░░ ▓░░▒▒██▒░░░
░░░░░ ▓░▒▒█░░▒▓░ ░░
░░░░░ ▒▒░░▒▒█ ▓░█▓░░
░░░░ ▒▒░░█▒░▒█░▒░▓░
░░░░ ▒░▓ ░ ░▒█
█░░░░░ ░ ▓▓▒█░█
░▒█▒█░░█▓█ █░ ░█▒
█▒▓ ░░▒▒░▓█▓░▒██
█░█▓ ░▒ ███▓▓▒▓
▒██ █░█▒▒▓
░░ ░█ ░

View File

@@ -0,0 +1,27 @@
█▒█░▒█▒█░▒▒
▓██ █ ░░█░█▓░██
░░█▓█▓█░ ██ ████▒
▓ ░░ ▓▓░█▓▓░ ░ ██▒▒
░ ▓░░██░▒█ ▓▒░█
▓▓░█▓▓░▒█ ▒ ▒▓
░░▓▒▒░█░█ ▒█▒ ▒▓░█
▓░░ ░░███░ ▒▓░▒ █░▓
░▒░ ░▒░░░ ▓▒▒█▓ ░░
▒░░░ ░░░░ ▒░░▒ ░▓░
░░░░░▒▒░░ ▓▓ ▓ ▓█▓░
░░░░█░▒░░ ░▒░▒ ░░░
░▓░░░ ░░░ ▓▒ ░▒ █ ▓▒░░░
░░░░░ ▓░ ░▓░█░░ █ ░░░▒░
░░░░▓ █░ █▒ ▓ ░░▓ ▓ ░░█
░░░░░ █░ ▓█▓ ▒░░░▒░
░░░░░█░░ ▓░░░░
▒▒░░▒▓ ░█▒ ▓▒▒
█░█ ░░▒▒▒▒▒ ░░█░
▒▒▒ ▒▒▒▒▒████░▒░██▓
░▒ █▓ █▒█▓█▒█
█▒▒ █▒██▒▒▓█░
░ ░░░░ ░

View File

@@ -0,0 +1,27 @@
▒█▓█▒█▓█████
▒░█ ▓ ░░█ ▒▓▓▒█▒▒
▒░█░░▓ ▓▓▒███ ██ ▒░▒
█▓██ ▒█████▓ ░░ ▒ ▒░
█▓██ ░▓█░░█ ░░
░▓░█▓██▓▒█ ▒ █▒█
░░░▓▒░█░░ █░▒ ▓░▒
░░ ▓░░░░█ ░█▒▒▒ █▓░█
░█░▓ ░░█ ▒▓▒░▒ █░▒
░░░░░ ▒░░ ▒▒▓█ ░░░
░▓░░ ▒░█░ █ ▒░█▒ ░░▒
░ ▒▓█▒▒░░ ░▒██░░ ░░▓
░ ░█░▓░░ █▓▒█▒█▒▒██▒▒ ▒▒░▓
░░ ▒░ ░░░░ ▓▓▓▒▓ █░█ ░ ░░░░
░█ ██░░▒░ ▒▓ ░ ▒░█░ ██▓ ░
░ █░█ ░▓ ▒░░░
▒▒ █▒░░░█░ █░█░█░
▒░▒ ▒█▒ ▒ ░▓▒▒▒░
░░ ▒█░▒ ░▒ ▓ ░█▓
▒▒ ░ ▓░▒▒█▒▒▒██▒▓▓░ ▓
█░▒ ░▒░▒░ █████ █
██▒ █▒█▒▒▒▒▒▓█
░ ░░░░ ░

View File

@@ -0,0 +1,27 @@
▓█▓░▒▓█▓▒███▒
█▓▒ █▓▓░▓█░▒▓▒▓ ██
▒▓░▓ ████▒███░░█░ ░▒██
▓░ ▓▓▓▓▓▓█░ █░█░█▒░▒
▓▒█░▓█▓██ ▒▓░▒░▒
▓░▓▓█▓▓▓▒ ▒ ██ ░░
▒░▒▒ ░ █▓ ░█▓ ▒ ▒░██
░░░▓▓██░ ░███ ▒ █░░
░▒▒ ▒▓▒█░▒░ ▒░▓
░░░░░▓▒░ ▓█▓▒█ █▓░░░
░░░░░░░▓ ▓ ▒▓░▒░ █▓░░▓
░▒ ░░░░░ ▓█░▓▒░█ ▓░░▒
░░ ░▓░░░ ░░▒█▓░█ ███▒▒ ▓░░
▓ █▓░ ▓▓▓▓█▓▓ ▒█ ░▓ ░░░░░
░ ▓ ░░▒ ▒░░█ ▒███▓ ░██
█▓▓ ░ ░ ▒█░ ░░▓▓
░█ ▒░▒▒█▒█ ▒█▓▓░▒░
█▓▒███░█▒█ ▓█▓▓▒▒▓
░▓▒▒█▒░██▒ ▒▓░█░█░▓
░▓ ▒▒▒▒ ███▒▒▓█▒░░░█▒█
░▒░ ▒ ▒░ █▒████ ▒▓░
██ ░████▓▒▒▒▒██░
██░░ █ ░

View File

@@ -0,0 +1,27 @@
▒██▓█▒████▓░█▒
▒█░ █ ░█ █ ▒▓▓░▓ █▒█
▒▓ ░▓████ ▒███░░ ███▓▒▒█
▓██▓▓█▒ ███ ▒▓▒██░█▒
░░▓ █ ░▓ ░▓ █▒▒
░░▒▒▒██ █▒▓ ██░░
▓█ █░▓░█ ▓░█▒ ▒▓▒░
▒░▓█░░▓░ ░▓██▒ ░▓▒
░░░░ ░▒█ ░███▒░▒ ░░░
░░░ █░░ ░█▓█▒ ▓░░
▒░░░░▒░ ▓▓▒░ ░░▓
▒░░▒░░░ ▓▓▓▓█▓ ░█
░░░▓░░░ ▓▒▓▓▓▓▓▒ ▒▒████░▓ ▓ ░█
░░░░█▒▓ ▓▓▓▓▓░█ █▒█ █░ ░ █░░▒
█░░░▒░░█▒ ██░ ▒ █░█▓█▓░█ ░▒█░░
█▒░░░▓▓ ░ █▒█░ ░ █ ░░
██▒▒██▒ ░ ██ ░▒▒▓
█░▒▒▒█▒▒ ░ ░▒▓█░▓
█░▒▒▒▒░▒▒██ ▒▓██▒▓▓█▓
▒▒ █░░█░██▒▒▒▒████░█████
▒▒▒▒░▒█▓ ░████████░▒█
██▒░███░█▒▒▒▒▒██
░█ ░░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒█░██████░▒▒
▒█ ▒█░░█ ░ ▒▓█▓ ░ ███▒
▒▓▒▓▓███ ░▒▓███░░ ██▓▒██▒▒
▓ ██░░ ▒████▓█▒
░ ▓░▓▓ █ ▓██▒
░▓▓▓░▓█ ██▒ █▒▒
█▓▓▓░█▓ ▓▓ ▒▒ ██▒░
▓▓█▓░▒█ ░░▓ ░ ░ ░▓█
█▒ ██░ ░█░▓░ ▒ ░░░
▒░░░▒░▒ ▒▒▒▒░▒█ ░░▓
░ ░ ▓░░ █ ▒█░░░▓ ▓░░░░
░░░ ░░░ ░▓▓░▓ ▓▒█ ░▓░░░░
█░░░░░░ ░▓██ ▓█ ▒████▓▓▒ ▒ ░█▓░
█░░░░██ ░███░▓█▓ ░▒█ ░█ ░ ▒░▓
░ ░░░░░ █▓▓ █░ ▒░███▓█ ░ ▓▓░█
▒░▒ ███ ░▒█ ▓██░░░░
█▒▒░ ▓▓▒ █▓██░█▒▓
▒░▒█░▒░▒█▓ ▒▓ ▓██░▓
█▒▒▒█░▒▓ ██▓ ▒█ ░██▓▓
▒▒ ▒█░▒▒ ██▓▒▒▒▒███▓▒█░▓ ░░
▒▒ ▒██░▓█░█████████ ▒█
██▒▒▓▒░█▒▒▒▒▒▒▓██░
░ ░░░ █ ░

View File

@@ -0,0 +1,27 @@
▓▒▒█▓███████▒▒
██░█▒▓██░▓█ ░█▓░░ ███▒
▒█▒▒▓██▓▓▓▒████▒░░███▓▒░ ██
▓ ▓▓▓▓▓ ░██▒▓▓██
▒░█▒████▒ ▒ ██▓█
▒░░░▓▓█ █▓▒ ▓▒█░
░▓▒▓▓▒▓ ▓░ ░▒ ▓▒▒█░
▓▓▓░░▓░ █░▒▒░▒░ ░▒█░▒
░ ▓██░ █▒█░█▓ ▒ ░░░
░░░░█░░ █░▒ █▒▓█ ░█░█
░ ░░▒█ █░ ▒▓▒ ▒█ ░░▒░
░░ ▒▒▒░ ▓▒▒█▓ ▒░ ░░ ░
░ ░ ░░▒ ░▒▒█▓░▒▓ ▒▒▒▒▒██▓█▓▒▒ ▒░▓░░
█▒ ░░░ ░░███░█▓ ░░▓█ █░▒░░░░ ░░█░
░▓█░▒▓▒▒ ░█▓░ ▒█ ░█░█▒▓██▒ ▓░▒▒▓
█▒░░░█▒ ░ ░▒██ ██ ▒▓▓█░
▒░░░░▒░ ░▒ ▓ ██░░▓
▒▓█▒░░▒▒██ ▓░▒█▒▒ ▓▓▓
█ █ ░░▓ ░▒ ▒▓█ ▒██░█
█▒ ██░█▒ ░░▓█▒▒█████░▒███░▓▓
██▒██░▓▒ █░████████ ███░
░█▓▒█ ██▒▓▒▒▒▒▒████
░ ░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▓▓▓█▒███████▒▒
▒█ █░▓██░ ████
██▒▓███ ▒█▒███▒░░███▓█░░██▒
██▓▓▓▒▓ ▒██ ▓▒█░█▒ ▒▒
▓ ▓███ █▓ █░▓▓▒▒█▒
█ ▓██▓ ▓ ▓▓▒▒ ▒ ░█▓▒
░█▓░░▓█░ ▓▓ ██▒ █▒ ░▓▓▒
░▒▒▒▓█▓▓░ ▒▒█░▒▒▒░ ░ ▒▓▒█
░▓░▒░▒▒ █░▒ █░ ░ ▒ ░░▒█
░█░░░░█░ █ ░░█▓░▒░ ░▓ █
░█ ░░ ░ ░▒▓ ░█ ▓░ ░░ ░
░░░▓░░ ░ ▒██▓▓░ ▓ ░▓░░█░
▒█ ░░░█░ ██▓▓░█▓░░ ▒██▒███▓▓█ ▒░░░█░
▒░ █▓ █ ▓░█░▓ ▒░ █▓▒█ █░▒ ░ ▓█░▒░░
▒ ░░░░█░ ██▓░░▒ ░██░░▓▓███ ░░░░░
█░▒▒▒▒░ ░ █▓ ▒▒░▒░
▒▒▒▓▒░█▒░▒ ▒▓░▓█▓██
█▓ ░░▒█ ░█ ▒▓█▒█▓░░█
▒ ██░▓▒ ░█▒ ▒█ ▒▓▒██▓░
█▓ ███▓▒ ░█████▒█████░▒▓██░███
█▒▒█▒█▒▒ █░█████ ██░█░▓
░ █░███▒▒▒▒▒▒█▒██
░ ░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▓▒▓▓█████████▒▒
█▓░████ ██▒
▒▓██░█▓▓▓▓▒███░░░░░████▒▒▓███
▒▓█▓▓ ▒░█▓█ ▒█ ██░░██
▓░▓██░░█░ █▓▒██▓░▒
░░▓░███░░ ██░▒ █ ░▒▒▒█
░▒▓░██▒░ ▒▓▓▓░█ ░▒▒▒░▒██
▓█░▒▒▓ ░ █░▓▒█▒▒ ▒▒ ▒▒░▒
░░░░░▓░ █░▓▒░░█ ██▒█▒░
░░▒░░▓▓ ▓█░▒█▒▒█ ░░▓ ░░░
██░░░▓░ ░ ░░█▓░░ ▓ ░▓▒█
░░░░▒░ ▓▒█████░ ▓▓░▒ ░
▒ ▓░░█ ░ ▒█ ▓▓▓▓▓▓ ██▒▒██████▓███ ░▓░░▒░
█▒ ░░░▒░ █▒█▓▓ ▓ █░░▒█ ██▒ ▒██ █░░░░
░█▒▒░▓ ░░ █░▓█▓ ▒▒██░░▒▒██ ░░ ▓
░█░█▓██▒▓ █▒▓▓░▓░█
█▒█░▓░▒█▒▒ ░▓█▒▓▓░█░
██ ░░░█ ░█ ▒█▒▓▓▓▒▓░
▒▒▒█░▒█ ░█▒ ▓█▒░█░█▒█
█▒█░██▒ ▒█████▒▒█▒███░▒▓█░█░▒▓░
██ █░░▒▒ █░ ████ ██░▒▒██
█ ▓██░░█▓▒▒▒▒▒▒▓███
░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▓▓█████████▒▒
▒▓██▓░█ ▓▒█ ██▒
███░█▓█ █▒██▓██▒░░░█████▒ ██▒
█ ▓█ █ ▓▓██░▒ █ ██▓█ ▒▒
██░░██ █░ ██ ████▒
▓▓▓░░█ ▓ ▓███▒ ▒▓ ▒▒ ░
▓▒▓▓██▒░░ ▒▒░▒██░ █▒ ▒░ ░
█▓░▒▒██░ ░▒▒▓██▓▒ █▒ ▒░ ░
░░░░░█▓░ █▒▒█▒▒▒ ▒▓ ░░██
▒░░█░░▓░ ▒█░▒ ▒ █ ░ ░░▒░
░▒ ░ ▒ █▒▓▒ ▓ ░█▒░▓░
░░░▒░ ▒ ▒█▒█▓ ▓█ █▓░░░░
█░ ▒░▒ ░ ▓░█░▒ ▒░░ █▒▒▒█▒▒██▓███▒ ▒ ░░▒░
█ ░░░ ░ ▒▓▓▓█░ ▓█ ▒░░▓██ ███ ▒░█░█░▒░▒█░
▒█ ▒█▓ ░ ░░ ▓▓ █▓▒█░██▒▒████▓░░▓░▓░
▒█░▓░ █▒ ▓▒▒▒█▒▓
▒█░░▒▒▒█▒ ▒▓░▓█▓ ▓
▒▓░░░▒█▒▒▒ ▒████▓░██
██ ░░▓█ ▒██▒▒ ▒██ ░█▓░▒▓░
██▒█░▒█▓█░█████▒▒█▒███░▒██░█ ▒█
█▒░░▒▒ ▒ █░ ▓███ ██ █▒██░
░░▓███░███▒▒▒▒▒▒▓████░
░ █ ██ ░

View File

@@ -0,0 +1,27 @@
eeeedcccccccccoee
oecxocccceeeedooeecceccce
eoxeccceoexccxxcdcxcccccoe cce
ocxoocecoocc oxxcccde co
ecxooceoo cccxoeco
oxxoocox oeoxxeee
ooxoc xc xceeec edexoex
ooxxccxe ceecxoxx eceexoex
ccxxexc xcdeeeeox cocxeeo
xecxeexx cexceoxxeo oxexx x
oxdx ex ceod cxd cxxxdx
c ee ox ceoxxc dc exxxx
xexx o oxooc oxe eddcdccccccccco xdox x
cxd xo eoxxcc oce oxxdeccccccccc xdexex
eoxeo xe cxoe exe cxcoooooo x xxcxe
eexxeexo ccccd ooeoocoo
eeex ede dcdcoeoo
exxxo cxee execxcexc
coccxo cocc deccxcccodc
cxcxxc eeoccexceeoooccccxcxccceoce
cccccdoxxcdxocccdxxdccc eccc
cccdxcxxdeeeeeeocdcce
eeccccccce

View File

@@ -0,0 +1,27 @@
eeeedcccccccccoee
oeoxoccccceoeooo ccceccce
eoxxcxcedexccxxcdxxcccccde cce
oxcoocecxocc cccede co
ecxoocedoc ccccxoeco
oxxooeox eoxxeee
oxxcc xc edceeec cdcxocx
ooxxcexe ceeeeoeo oecxxex
ccxxoxc exxxeeee ecocxxeo
xccxe x oexcceoxc xexx x
cxoxcex ceoxc oco dxxxcx
c eo cxo eoxxc odce xx x
xexxccce dxooo doee eddcdccdcccccco o ox x
cxo xe oxxcccoc c xecccccccccccc xdexex
ecxeecxo cxoc exc oocdccexxxdddcc oooxexe
eeexoexe cccc e ooeoxcoo
oeexecece ecdcocoo
eexxoocoee eodcxcexc
cocxxe cocc eeccxccxedc
cxcxxc ccoccxxceeooocccxxdxccceoce
eccccdeoxxdcxccccxcdcccceccc
cccxxccddeeeeeeocdcce
eeccccccce

View File

@@ -0,0 +1,27 @@
eeeedcccccccccoee
eeoxcccxceeoeooo eccxccce
eoxxccceodcccxcddxxcccecde ceo
oxxcxceecxcco cccede co
ocxooeoxc ccecxocco
oxxocexc ooeo ceexxeee
oxooooxc cecoecd ecxoee
oxox ox exccecoc eexdex
xxxe xxe cxxdxoee cecxoxx
xxex xxc ce xexeco exx x
xxex xc ccxxd dxo xxxex
oxod ec ecxxe exe e xxxxxe
ccxx xx oooce xo decdcdccdcccccco ecxx x
cxex cox exxoeoexc oexxcccccceccccc xeexex
xexe eee eccoo oc eecxccxxxxddddc ocxxeoc
exxd eee ecccc ocoocoo
oxxe cxxe eodcococ
ocxxe ecce eoxcocexc
e eece exccoedx eeccxccxedc
cxcxco ccxccxxceeoooccccxcxccceece
exxcoeexxcdooxxcxxxdccccexcc
cecxxxcdeeeeeeeocdcce
eeccccccce

Some files were not shown because too many files have changed in this diff.