Compare commits

...

12 Commits

Author SHA1 Message Date
jif-oai
615c71fa11 Release 0.105.0-alpha.2 2026-02-18 13:56:00 +00:00
jif-oai
cc3bbd7852 nit: change model for phase 1 (#12137) 2026-02-18 13:55:30 +00:00
jif-oai
7b65b05e87 feat: validate agent config file paths (#12133) 2026-02-18 13:48:52 +00:00
jif-oai
a9f5f633b2 feat: memory usage metrics (#12120) 2026-02-18 12:45:19 +00:00
jif-oai
2293ab0e21 feat: phase 2 usage (#12121) 2026-02-18 11:33:55 +00:00
jif-oai
f0ee2d9f67 feat: phase 1 and phase 2 e2e latencies (#12124) 2026-02-18 11:30:20 +00:00
jif-oai
0dcf8d9c8f Enable default status line indicators in TUI config (#12015)
Default statusline to something
<img width="307" height="83" alt="Screenshot 2026-02-17 at 18 16 12" src="https://github.com/user-attachments/assets/44e16153-0aa2-4c1a-9b4a-02e2feb8b7f6" />
2026-02-18 09:51:15 +00:00
Leo Shimonaka
1946a4c48b fix: Restricted Read: /System is too permissive for macOS platform default (#11798)

Update the list of platform defaults included for `ReadOnlyAccess`.

When `ReadOnlyAccess::Restricted::include_platform_defaults` is `true`,
the policy defined in
`codex-rs/core/src/seatbelt_platform_defaults.sbpl` is appended to
enable macOS programs to function properly.
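The opt-in append described above can be sketched as follows. This is a minimal, illustrative sketch only — the struct, field, and policy strings are assumptions for demonstration, not the actual codex-rs types or the real `.sbpl` contents:

```rust
// Hypothetical sketch: append platform-default seatbelt rules only when the
// restricted read policy opts in. Names and policy text are illustrative.
struct RestrictedRead {
    include_platform_defaults: bool,
}

// Stand-ins for the base policy and seatbelt_platform_defaults.sbpl contents.
const BASE_POLICY: &str = "(version 1)\n(deny default)";
const PLATFORM_DEFAULTS: &str = "(allow file-read* (subpath \"/usr/lib\"))";

fn compose_policy(access: &RestrictedRead) -> String {
    let mut policy = String::from(BASE_POLICY);
    if access.include_platform_defaults {
        // Platform defaults are appended after the base policy.
        policy.push('\n');
        policy.push_str(PLATFORM_DEFAULTS);
    }
    policy
}
```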
2026-02-17 23:56:35 -08:00
aaronl-openai
f600453699 [js_repl] paths for node module resolution can be specified for js_repl (#11944)

In `js_repl` mode, module resolution currently starts from
`js_repl_kernel.js`, which is written to a per-kernel temp dir. This
effectively means that bare imports will not resolve.

This PR adds a new config option, `js_repl_node_module_dirs`, which is a
list of dirs that are used (in order) to resolve a bare import. If none
of those work, the current working directory of the thread is used.

For example:
```toml
js_repl_node_module_dirs = [
    "/path/to/node_modules/",
    "/other/path/to/node_modules/",
]
```
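The lookup order above — try each configured directory in turn, then fall back to the thread's working directory — can be sketched like this. The function name and the exact fallback shape (`cwd/node_modules`) are assumptions for illustration, not the actual `js_repl` implementation:

```rust
use std::path::{Path, PathBuf};

// Illustrative sketch of the described resolution order for a bare import.
fn resolve_bare_import(module_dirs: &[PathBuf], cwd: &Path, name: &str) -> PathBuf {
    // Configured dirs are consulted in order; first hit wins.
    for dir in module_dirs {
        let candidate = dir.join(name);
        if candidate.exists() {
            return candidate;
        }
    }
    // If none match, resolution falls back to the thread's working directory
    // (assumed here to mean its node_modules subdirectory).
    cwd.join("node_modules").join(name)
}
```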
2026-02-17 23:29:49 -08:00
Eric Traut
57f4e37539 Updated issue labeler script to include safety-check label (#12096)
Also deleted obsolete prompt file
2026-02-17 22:44:42 -08:00
Charley Cunningham
c16f9daaaf Add model-visible context layout snapshot tests (#12073)
## Summary
- add a dedicated `core/tests/suite/model_visible_layout.rs` snapshot
suite to materialize model-visible request layout in high-value
scenarios
- add three reviewer-focused snapshot scenarios:
  - turn-level context updates (cwd / permissions / personality)
  - first post-resume turn with model hydration + personality change
  - first post-resume turn where pre-turn model override matches rollout model
- wire the new suite into `core/tests/suite/mod.rs`
- commit generated `insta` snapshots under `core/tests/suite/snapshots/`

## Why
This creates a stable, reviewable baseline of model-visible context
layout against `main` before follow-on context-management refactors. It
lets subsequent PRs show focused snapshot diffs for behavior changes
instead of introducing the test surface and behavior changes at once.

## Testing
- `just fmt`
- `INSTA_UPDATE=always cargo test -p codex-core model_visible_layout`
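The snapshot idea behind the suite — render the model-visible layout to a stable string and compare it against a committed baseline — can be shown with a dependency-free sketch. The real suite uses `insta` snapshots; this function and its layout format are purely illustrative:

```rust
// Illustrative stand-in for a snapshot test: render the model-visible
// context layout deterministically, then compare against a stored baseline.
fn render_model_visible_layout(cwd: &str, permissions: &str, personality: &str) -> String {
    format!("cwd: {cwd}\npermissions: {permissions}\npersonality: {personality}")
}
```

With `insta`, the comparison line would instead be something like `insta::assert_snapshot!(rendered)`, with the baseline committed under `snapshots/`.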
2026-02-17 22:30:29 -08:00
Ahmed Ibrahim
03ce01e71f codex-api: realtime websocket session.create + typed inbound events (#12036)
## Summary
- add realtime websocket client transport in codex-api
- send session.create on connect with backend prompt and optional
conversation_id
- keep session.update for prompt changes after connect
- switch inbound event parsing to a tagged enum (typed variants instead
of optional field bag)
- add a websocket e2e integration test in
codex-rs/codex-api/tests/realtime_websocket_e2e.rs

## Why
This moves the realtime transport to an explicit session-create
handshake and improves protocol safety with typed inbound events.

## Testing
- Added e2e integration test coverage for session create + event flow in
the API crate.
2026-02-17 22:17:01 -08:00
39 changed files with 2948 additions and 181 deletions


@@ -1,26 +0,0 @@
You are an assistant that reviews GitHub issues for the repository.
Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
Follow these rules:
- Only pick labels out of the list below.
- Prefer a small set of precise labels over many broad ones.
- If none of the labels fit, respond with an empty JSON array: []
- Output must be a JSON array of label names (strings) with no additional commentary.
Labels to apply:
1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
3. extension — VS Code (or other IDE) extension-specific issues.
4. windows-os — Bugs or friction specific to Windows environments (PowerShell behavior, path handling, copy/paste, OS-specific auth or tooling failures).
5. mcp — Topics involving Model Context Protocol servers/clients.
6. codex-web — Issues targeting the Codex web UI/Cloud experience.
8. azure — Problems or requests tied to Azure OpenAI deployments.
9. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
Issue information is available in environment variables:
ISSUE_NUMBER
ISSUE_TITLE
ISSUE_BODY
REPO_FULL_NAME


@@ -50,14 +50,15 @@ jobs:
4. azure — Problems or requests tied to Azure OpenAI deployments.
5. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
6. code-review — Issues related to the code review feature or functionality.
-7. auth - Problems related to authentication, login, or access tokens.
-8. codex-exec - Problems related to the "codex exec" command or functionality.
-9. context-management - Problems related to compaction, context windows, or available context reporting.
-10. custom-model - Problems that involve using custom model providers, local models, or OSS models.
-11. rate-limits - Problems related to token limits, rate limits, or token usage reporting.
-12. sandbox - Issues related to local sandbox environments or tool call approvals to override sandbox restrictions.
-13. tool-calls - Problems related to specific tool call invocations including unexpected errors, failures, or hangs.
-14. TUI - Problems with the terminal user interface (TUI) including keyboard shortcuts, copy & pasting, menus, or screen update issues.
+7. safety-check - Issues related to cyber risk detection or trusted access verification.
+8. auth - Problems related to authentication, login, or access tokens.
+9. codex-exec - Problems related to the "codex exec" command or functionality.
+10. context-management - Problems related to compaction, context windows, or available context reporting.
+11. custom-model - Problems that involve using custom model providers, local models, or OSS models.
+12. rate-limits - Problems related to token limits, rate limits, or token usage reporting.
+13. sandbox - Issues related to local sandbox environments or tool call approvals to override sandbox restrictions.
+14. tool-calls - Problems related to specific tool call invocations including unexpected errors, failures, or hangs.
+15. TUI - Problems with the terminal user interface (TUI) including keyboard shortcuts, copy & pasting, menus, or screen update issues.
Issue number: ${{ github.event.issue.number }}


@@ -66,7 +66,7 @@ members = [
resolver = "2"
[workspace.package]
-version = "0.0.0"
+version = "0.105.0-alpha.2"
# Track the edition for all workspace crates in one place. Individual
# crates can still override this value, but keeping it here means new
# crates created with `cargo new -w ...` automatically inherit the 2024


@@ -2,6 +2,7 @@ pub mod aggregate;
pub mod compact;
pub mod memories;
pub mod models;
pub mod realtime_websocket;
pub mod responses;
pub mod responses_websocket;
mod session;


@@ -0,0 +1,824 @@
use crate::endpoint::realtime_websocket::protocol::ConversationItem;
use crate::endpoint::realtime_websocket::protocol::ConversationItemContent;
use crate::endpoint::realtime_websocket::protocol::RealtimeAudioFrame;
use crate::endpoint::realtime_websocket::protocol::RealtimeEvent;
use crate::endpoint::realtime_websocket::protocol::RealtimeOutboundMessage;
use crate::endpoint::realtime_websocket::protocol::RealtimeSessionConfig;
use crate::endpoint::realtime_websocket::protocol::SessionCreateSession;
use crate::endpoint::realtime_websocket::protocol::SessionUpdateSession;
use crate::endpoint::realtime_websocket::protocol::parse_realtime_event;
use crate::error::ApiError;
use crate::provider::Provider;
use codex_utils_rustls_provider::ensure_rustls_crypto_provider;
use futures::SinkExt;
use futures::StreamExt;
use http::HeaderMap;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use tokio::net::TcpStream;
use tokio::sync::Mutex;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
use tokio_tungstenite::MaybeTlsStream;
use tokio_tungstenite::WebSocketStream;
use tokio_tungstenite::tungstenite::Error as WsError;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::tungstenite::client::IntoClientRequest;
use tracing::info;
use tracing::trace;
use tungstenite::protocol::WebSocketConfig;
use url::Url;
struct WsStream {
tx_command: mpsc::Sender<WsCommand>,
pump_task: tokio::task::JoinHandle<()>,
}
enum WsCommand {
Send {
message: Message,
tx_result: oneshot::Sender<Result<(), WsError>>,
},
Close {
tx_result: oneshot::Sender<Result<(), WsError>>,
},
}
impl WsStream {
fn new(
inner: WebSocketStream<MaybeTlsStream<TcpStream>>,
) -> (Self, mpsc::UnboundedReceiver<Result<Message, WsError>>) {
let (tx_command, mut rx_command) = mpsc::channel::<WsCommand>(32);
let (tx_message, rx_message) = mpsc::unbounded_channel::<Result<Message, WsError>>();
let pump_task = tokio::spawn(async move {
let mut inner = inner;
loop {
tokio::select! {
command = rx_command.recv() => {
let Some(command) = command else {
break;
};
match command {
WsCommand::Send { message, tx_result } => {
let result = inner.send(message).await;
let should_break = result.is_err();
let _ = tx_result.send(result);
if should_break {
break;
}
}
WsCommand::Close { tx_result } => {
let result = inner.close(None).await;
let _ = tx_result.send(result);
break;
}
}
}
message = inner.next() => {
let Some(message) = message else {
break;
};
match message {
Ok(Message::Ping(payload)) => {
if let Err(err) = inner.send(Message::Pong(payload)).await {
let _ = tx_message.send(Err(err));
break;
}
}
Ok(Message::Pong(_)) => {}
Ok(message @ (Message::Text(_)
| Message::Binary(_)
| Message::Close(_)
| Message::Frame(_))) => {
let is_close = matches!(message, Message::Close(_));
if tx_message.send(Ok(message)).is_err() {
break;
}
if is_close {
break;
}
}
Err(err) => {
let _ = tx_message.send(Err(err));
break;
}
}
}
}
}
});
(
Self {
tx_command,
pump_task,
},
rx_message,
)
}
async fn request(
&self,
make_command: impl FnOnce(oneshot::Sender<Result<(), WsError>>) -> WsCommand,
) -> Result<(), WsError> {
let (tx_result, rx_result) = oneshot::channel();
if self.tx_command.send(make_command(tx_result)).await.is_err() {
return Err(WsError::ConnectionClosed);
}
rx_result.await.unwrap_or(Err(WsError::ConnectionClosed))
}
async fn send(&self, message: Message) -> Result<(), WsError> {
self.request(|tx_result| WsCommand::Send { message, tx_result })
.await
}
async fn close(&self) -> Result<(), WsError> {
self.request(|tx_result| WsCommand::Close { tx_result })
.await
}
}
impl Drop for WsStream {
fn drop(&mut self) {
self.pump_task.abort();
}
}
pub struct RealtimeWebsocketConnection {
writer: RealtimeWebsocketWriter,
events: RealtimeWebsocketEvents,
}
#[derive(Clone)]
pub struct RealtimeWebsocketWriter {
stream: Arc<WsStream>,
is_closed: Arc<AtomicBool>,
}
#[derive(Clone)]
pub struct RealtimeWebsocketEvents {
rx_message: Arc<Mutex<mpsc::UnboundedReceiver<Result<Message, WsError>>>>,
is_closed: Arc<AtomicBool>,
}
impl RealtimeWebsocketConnection {
pub async fn send_audio_frame(&self, frame: RealtimeAudioFrame) -> Result<(), ApiError> {
self.writer.send_audio_frame(frame).await
}
pub async fn send_conversation_item_create(&self, text: String) -> Result<(), ApiError> {
self.writer.send_conversation_item_create(text).await
}
pub async fn send_session_update(
&self,
backend_prompt: String,
conversation_id: Option<String>,
) -> Result<(), ApiError> {
self.writer
.send_session_update(backend_prompt, conversation_id)
.await
}
pub async fn send_session_create(
&self,
backend_prompt: String,
conversation_id: Option<String>,
) -> Result<(), ApiError> {
self.writer
.send_session_create(backend_prompt, conversation_id)
.await
}
pub async fn close(&self) -> Result<(), ApiError> {
self.writer.close().await
}
pub async fn next_event(&self) -> Result<Option<RealtimeEvent>, ApiError> {
self.events.next_event().await
}
pub fn writer(&self) -> RealtimeWebsocketWriter {
self.writer.clone()
}
pub fn events(&self) -> RealtimeWebsocketEvents {
self.events.clone()
}
fn new(
stream: WsStream,
rx_message: mpsc::UnboundedReceiver<Result<Message, WsError>>,
) -> Self {
let stream = Arc::new(stream);
let is_closed = Arc::new(AtomicBool::new(false));
Self {
writer: RealtimeWebsocketWriter {
stream: Arc::clone(&stream),
is_closed: Arc::clone(&is_closed),
},
events: RealtimeWebsocketEvents {
rx_message: Arc::new(Mutex::new(rx_message)),
is_closed,
},
}
}
}
impl RealtimeWebsocketWriter {
pub async fn send_audio_frame(&self, frame: RealtimeAudioFrame) -> Result<(), ApiError> {
self.send_json(RealtimeOutboundMessage::InputAudioDelta {
delta: frame.data,
sample_rate: frame.sample_rate,
num_channels: frame.num_channels,
samples_per_channel: frame.samples_per_channel,
})
.await
}
pub async fn send_conversation_item_create(&self, text: String) -> Result<(), ApiError> {
self.send_json(RealtimeOutboundMessage::ConversationItemCreate {
item: ConversationItem {
kind: "message".to_string(),
role: "user".to_string(),
content: vec![ConversationItemContent {
kind: "text".to_string(),
text,
}],
},
})
.await
}
pub async fn send_session_update(
&self,
backend_prompt: String,
conversation_id: Option<String>,
) -> Result<(), ApiError> {
self.send_json(RealtimeOutboundMessage::SessionUpdate {
session: Some(SessionUpdateSession {
backend_prompt,
conversation_id,
}),
})
.await
}
pub async fn send_session_create(
&self,
backend_prompt: String,
conversation_id: Option<String>,
) -> Result<(), ApiError> {
self.send_json(RealtimeOutboundMessage::SessionCreate {
session: SessionCreateSession {
backend_prompt,
conversation_id,
},
})
.await
}
pub async fn close(&self) -> Result<(), ApiError> {
if self.is_closed.swap(true, Ordering::SeqCst) {
return Ok(());
}
if let Err(err) = self.stream.close().await
&& !matches!(err, WsError::ConnectionClosed | WsError::AlreadyClosed)
{
return Err(ApiError::Stream(format!(
"failed to close websocket: {err}"
)));
}
Ok(())
}
async fn send_json(&self, message: RealtimeOutboundMessage) -> Result<(), ApiError> {
let payload = serde_json::to_string(&message)
.map_err(|err| ApiError::Stream(format!("failed to encode realtime request: {err}")))?;
trace!("realtime websocket request: {payload}");
if self.is_closed.load(Ordering::SeqCst) {
return Err(ApiError::Stream(
"realtime websocket connection is closed".to_string(),
));
}
self.stream
.send(Message::Text(payload.into()))
.await
.map_err(|err| ApiError::Stream(format!("failed to send realtime request: {err}")))?;
Ok(())
}
}
impl RealtimeWebsocketEvents {
pub async fn next_event(&self) -> Result<Option<RealtimeEvent>, ApiError> {
if self.is_closed.load(Ordering::SeqCst) {
return Ok(None);
}
loop {
let msg = match self.rx_message.lock().await.recv().await {
Some(Ok(msg)) => msg,
Some(Err(err)) => {
self.is_closed.store(true, Ordering::SeqCst);
return Err(ApiError::Stream(format!(
"failed to read websocket message: {err}"
)));
}
None => {
self.is_closed.store(true, Ordering::SeqCst);
return Ok(None);
}
};
match msg {
Message::Text(text) => {
if let Some(event) = parse_realtime_event(&text) {
return Ok(Some(event));
}
}
Message::Close(_) => {
self.is_closed.store(true, Ordering::SeqCst);
return Ok(None);
}
Message::Binary(_) => {
return Ok(Some(RealtimeEvent::Error(
"unexpected binary realtime websocket event".to_string(),
)));
}
Message::Frame(_) | Message::Ping(_) | Message::Pong(_) => {}
}
}
}
}
pub struct RealtimeWebsocketClient {
provider: Provider,
}
impl RealtimeWebsocketClient {
pub fn new(provider: Provider) -> Self {
Self { provider }
}
pub async fn connect(
&self,
config: RealtimeSessionConfig,
extra_headers: HeaderMap,
default_headers: HeaderMap,
) -> Result<RealtimeWebsocketConnection, ApiError> {
ensure_rustls_crypto_provider();
let ws_url = websocket_url_from_api_url(config.api_url.as_str())?;
let mut request = ws_url
.as_str()
.into_client_request()
.map_err(|err| ApiError::Stream(format!("failed to build websocket request: {err}")))?;
let headers = merge_request_headers(&self.provider.headers, extra_headers, default_headers);
request.headers_mut().extend(headers);
info!("connecting realtime websocket: {ws_url}");
let (stream, _) =
tokio_tungstenite::connect_async_with_config(request, Some(websocket_config()), false)
.await
.map_err(|err| {
ApiError::Stream(format!("failed to connect realtime websocket: {err}"))
})?;
let (stream, rx_message) = WsStream::new(stream);
let connection = RealtimeWebsocketConnection::new(stream, rx_message);
connection
.send_session_create(config.prompt, config.session_id)
.await?;
Ok(connection)
}
}
fn merge_request_headers(
provider_headers: &HeaderMap,
extra_headers: HeaderMap,
default_headers: HeaderMap,
) -> HeaderMap {
let mut headers = provider_headers.clone();
headers.extend(extra_headers);
for (name, value) in &default_headers {
if let http::header::Entry::Vacant(entry) = headers.entry(name) {
entry.insert(value.clone());
}
}
headers
}
fn websocket_config() -> WebSocketConfig {
WebSocketConfig::default()
}
fn websocket_url_from_api_url(api_url: &str) -> Result<Url, ApiError> {
let mut url = Url::parse(api_url)
.map_err(|err| ApiError::Stream(format!("failed to parse realtime api_url: {err}")))?;
match url.scheme() {
"ws" | "wss" => {
if url.path().is_empty() || url.path() == "/" {
url.set_path("/ws");
}
Ok(url)
}
"http" | "https" => {
if url.path().is_empty() || url.path() == "/" {
url.set_path("/ws");
}
let scheme = if url.scheme() == "http" { "ws" } else { "wss" };
let _ = url.set_scheme(scheme);
Ok(url)
}
scheme => Err(ApiError::Stream(format!(
"unsupported realtime api_url scheme: {scheme}"
))),
}
}
#[cfg(test)]
mod tests {
use super::*;
use http::HeaderValue;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use std::collections::HashMap;
use std::time::Duration;
use tokio::net::TcpListener;
use tokio_tungstenite::accept_async;
use tokio_tungstenite::tungstenite::Message;
#[test]
fn parse_session_created_event() {
let payload = json!({
"type": "session.created",
"session": {"id": "sess_123"}
})
.to_string();
assert_eq!(
parse_realtime_event(payload.as_str()),
Some(RealtimeEvent::SessionCreated {
session_id: "sess_123".to_string()
})
);
}
#[test]
fn parse_audio_delta_event() {
let payload = json!({
"type": "response.output_audio.delta",
"delta": "AAA=",
"sample_rate": 48000,
"num_channels": 1,
"samples_per_channel": 960
})
.to_string();
assert_eq!(
parse_realtime_event(payload.as_str()),
Some(RealtimeEvent::AudioOut(RealtimeAudioFrame {
data: "AAA=".to_string(),
sample_rate: 48000,
num_channels: 1,
samples_per_channel: Some(960),
}))
);
}
#[test]
fn parse_conversation_item_added_event() {
let payload = json!({
"type": "conversation.item.added",
"item": {"type": "spawn_transcript", "seq": 7}
})
.to_string();
assert_eq!(
parse_realtime_event(payload.as_str()),
Some(RealtimeEvent::ConversationItemAdded(
json!({"type": "spawn_transcript", "seq": 7})
))
);
}
#[test]
fn merge_request_headers_matches_http_precedence() {
let mut provider_headers = HeaderMap::new();
provider_headers.insert(
"originator",
HeaderValue::from_static("provider-originator"),
);
provider_headers.insert("x-priority", HeaderValue::from_static("provider"));
let mut extra_headers = HeaderMap::new();
extra_headers.insert("x-priority", HeaderValue::from_static("extra"));
let mut default_headers = HeaderMap::new();
default_headers.insert("originator", HeaderValue::from_static("default-originator"));
default_headers.insert("x-priority", HeaderValue::from_static("default"));
default_headers.insert("x-default-only", HeaderValue::from_static("default-only"));
let merged = merge_request_headers(&provider_headers, extra_headers, default_headers);
assert_eq!(
merged.get("originator"),
Some(&HeaderValue::from_static("provider-originator"))
);
assert_eq!(
merged.get("x-priority"),
Some(&HeaderValue::from_static("extra"))
);
assert_eq!(
merged.get("x-default-only"),
Some(&HeaderValue::from_static("default-only"))
);
}
#[test]
fn websocket_url_from_http_base_defaults_to_ws_path() {
let url = websocket_url_from_api_url("http://127.0.0.1:8011").expect("build ws url");
assert_eq!(url.as_str(), "ws://127.0.0.1:8011/ws");
}
#[test]
fn websocket_url_from_ws_base_defaults_to_ws_path() {
let url = websocket_url_from_api_url("wss://example.com").expect("build ws url");
assert_eq!(url.as_str(), "wss://example.com/ws");
}
#[tokio::test]
async fn e2e_connect_and_exchange_events_against_mock_ws_server() {
let listener = TcpListener::bind("127.0.0.1:0").await.expect("bind");
let addr = listener.local_addr().expect("local addr");
let server = tokio::spawn(async move {
let (stream, _) = listener.accept().await.expect("accept");
let mut ws = accept_async(stream).await.expect("accept ws");
let first = ws
.next()
.await
.expect("first msg")
.expect("first msg ok")
.into_text()
.expect("text");
let first_json: Value = serde_json::from_str(&first).expect("json");
assert_eq!(first_json["type"], "session.create");
assert_eq!(
first_json["session"]["backend_prompt"],
Value::String("backend prompt".to_string())
);
assert_eq!(
first_json["session"]["conversation_id"],
Value::String("conv_1".to_string())
);
ws.send(Message::Text(
json!({
"type": "session.created",
"session": {"id": "sess_mock"}
})
.to_string()
.into(),
))
.await
.expect("send session.created");
let second = ws
.next()
.await
.expect("second msg")
.expect("second msg ok")
.into_text()
.expect("text");
let second_json: Value = serde_json::from_str(&second).expect("json");
assert_eq!(second_json["type"], "response.input_audio.delta");
let third = ws
.next()
.await
.expect("third msg")
.expect("third msg ok")
.into_text()
.expect("text");
let third_json: Value = serde_json::from_str(&third).expect("json");
assert_eq!(third_json["type"], "conversation.item.create");
assert_eq!(third_json["item"]["content"][0]["text"], "hello agent");
ws.send(Message::Text(
json!({
"type": "response.output_audio.delta",
"delta": "AQID",
"sample_rate": 48000,
"num_channels": 1
})
.to_string()
.into(),
))
.await
.expect("send audio");
ws.send(Message::Text(
json!({
"type": "conversation.item.added",
"item": {"type": "spawn_transcript", "seq": 2}
})
.to_string()
.into(),
))
.await
.expect("send item added");
});
let provider = Provider {
name: "test".to_string(),
base_url: "http://localhost".to_string(),
query_params: Some(HashMap::new()),
headers: HeaderMap::new(),
retry: crate::provider::RetryConfig {
max_attempts: 1,
base_delay: Duration::from_millis(1),
retry_429: false,
retry_5xx: false,
retry_transport: false,
},
stream_idle_timeout: Duration::from_secs(5),
};
let client = RealtimeWebsocketClient::new(provider);
let connection = client
.connect(
RealtimeSessionConfig {
api_url: format!("ws://{addr}"),
prompt: "backend prompt".to_string(),
session_id: Some("conv_1".to_string()),
},
HeaderMap::new(),
HeaderMap::new(),
)
.await
.expect("connect");
let created = connection
.next_event()
.await
.expect("next event")
.expect("event");
assert_eq!(
created,
RealtimeEvent::SessionCreated {
session_id: "sess_mock".to_string()
}
);
connection
.send_audio_frame(RealtimeAudioFrame {
data: "AQID".to_string(),
sample_rate: 48000,
num_channels: 1,
samples_per_channel: Some(960),
})
.await
.expect("send audio");
connection
.send_conversation_item_create("hello agent".to_string())
.await
.expect("send item");
let audio_event = connection
.next_event()
.await
.expect("next event")
.expect("event");
assert_eq!(
audio_event,
RealtimeEvent::AudioOut(RealtimeAudioFrame {
data: "AQID".to_string(),
sample_rate: 48000,
num_channels: 1,
samples_per_channel: None,
})
);
let added_event = connection
.next_event()
.await
.expect("next event")
.expect("event");
assert_eq!(
added_event,
RealtimeEvent::ConversationItemAdded(json!({
"type": "spawn_transcript",
"seq": 2
}))
);
connection.close().await.expect("close");
server.await.expect("server task");
}
#[tokio::test]
async fn send_does_not_block_while_next_event_waits_for_inbound_data() {
let listener = TcpListener::bind("127.0.0.1:0").await.expect("bind");
let addr = listener.local_addr().expect("local addr");
let server = tokio::spawn(async move {
let (stream, _) = listener.accept().await.expect("accept");
let mut ws = accept_async(stream).await.expect("accept ws");
let first = ws
.next()
.await
.expect("first msg")
.expect("first msg ok")
.into_text()
.expect("text");
let first_json: Value = serde_json::from_str(&first).expect("json");
assert_eq!(first_json["type"], "session.create");
let second = ws
.next()
.await
.expect("second msg")
.expect("second msg ok")
.into_text()
.expect("text");
let second_json: Value = serde_json::from_str(&second).expect("json");
assert_eq!(second_json["type"], "response.input_audio.delta");
ws.send(Message::Text(
json!({
"type": "session.created",
"session": {"id": "sess_after_send"}
})
.to_string()
.into(),
))
.await
.expect("send session.created");
});
let provider = Provider {
name: "test".to_string(),
base_url: "http://localhost".to_string(),
query_params: Some(HashMap::new()),
headers: HeaderMap::new(),
retry: crate::provider::RetryConfig {
max_attempts: 1,
base_delay: Duration::from_millis(1),
retry_429: false,
retry_5xx: false,
retry_transport: false,
},
stream_idle_timeout: Duration::from_secs(5),
};
let client = RealtimeWebsocketClient::new(provider);
let connection = client
.connect(
RealtimeSessionConfig {
api_url: format!("ws://{addr}"),
prompt: "backend prompt".to_string(),
session_id: Some("conv_1".to_string()),
},
HeaderMap::new(),
HeaderMap::new(),
)
.await
.expect("connect");
let (send_result, next_result) = tokio::join!(
async {
tokio::time::timeout(
Duration::from_millis(200),
connection.send_audio_frame(RealtimeAudioFrame {
data: "AQID".to_string(),
sample_rate: 48000,
num_channels: 1,
samples_per_channel: Some(960),
}),
)
.await
},
connection.next_event()
);
send_result
.expect("send should not block on next_event")
.expect("send audio");
let next_event = next_result.expect("next event").expect("event");
assert_eq!(
next_event,
RealtimeEvent::SessionCreated {
session_id: "sess_after_send".to_string()
}
);
connection.close().await.expect("close");
server.await.expect("server task");
}
}


@@ -0,0 +1,10 @@
pub mod methods;
pub mod protocol;
pub use methods::RealtimeWebsocketClient;
pub use methods::RealtimeWebsocketConnection;
pub use methods::RealtimeWebsocketEvents;
pub use methods::RealtimeWebsocketWriter;
pub use protocol::RealtimeAudioFrame;
pub use protocol::RealtimeEvent;
pub use protocol::RealtimeSessionConfig;


@@ -0,0 +1,161 @@
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value;
use tracing::debug;
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct RealtimeSessionConfig {
pub api_url: String,
pub prompt: String,
pub session_id: Option<String>,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct RealtimeAudioFrame {
pub data: String,
pub sample_rate: u32,
pub num_channels: u16,
#[serde(skip_serializing_if = "Option::is_none")]
pub samples_per_channel: Option<u32>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum RealtimeEvent {
SessionCreated { session_id: String },
SessionUpdated { backend_prompt: Option<String> },
AudioOut(RealtimeAudioFrame),
ConversationItemAdded(Value),
Error(String),
}
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "type")]
pub(super) enum RealtimeOutboundMessage {
#[serde(rename = "response.input_audio.delta")]
InputAudioDelta {
delta: String,
sample_rate: u32,
num_channels: u16,
#[serde(skip_serializing_if = "Option::is_none")]
samples_per_channel: Option<u32>,
},
#[serde(rename = "session.create")]
SessionCreate { session: SessionCreateSession },
#[serde(rename = "session.update")]
SessionUpdate {
#[serde(skip_serializing_if = "Option::is_none")]
session: Option<SessionUpdateSession>,
},
#[serde(rename = "conversation.item.create")]
ConversationItemCreate { item: ConversationItem },
}
#[derive(Debug, Clone, Serialize)]
pub(super) struct SessionUpdateSession {
pub(super) backend_prompt: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub(super) conversation_id: Option<String>,
}
#[derive(Debug, Clone, Serialize)]
pub(super) struct SessionCreateSession {
pub(super) backend_prompt: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub(super) conversation_id: Option<String>,
}
#[derive(Debug, Clone, Serialize)]
pub(super) struct ConversationItem {
#[serde(rename = "type")]
pub(super) kind: String,
pub(super) role: String,
pub(super) content: Vec<ConversationItemContent>,
}
#[derive(Debug, Clone, Serialize)]
pub(super) struct ConversationItemContent {
#[serde(rename = "type")]
pub(super) kind: String,
pub(super) text: String,
}
pub(super) fn parse_realtime_event(payload: &str) -> Option<RealtimeEvent> {
let parsed: Value = match serde_json::from_str(payload) {
Ok(msg) => msg,
Err(err) => {
debug!("failed to parse realtime event: {err}, data: {payload}");
return None;
}
};
let message_type = match parsed.get("type").and_then(Value::as_str) {
Some(message_type) => message_type,
None => {
debug!("received realtime event without type field: {payload}");
return None;
}
};
match message_type {
"session.created" => {
let session = parsed.get("session").and_then(Value::as_object);
let session_id = session
.and_then(|session| session.get("id"))
.and_then(Value::as_str)
.map(str::to_string)
.or_else(|| {
parsed
.get("session_id")
.and_then(Value::as_str)
.map(str::to_string)
});
session_id.map(|id| RealtimeEvent::SessionCreated { session_id: id })
}
"session.updated" => {
let backend_prompt = parsed
.get("session")
.and_then(Value::as_object)
.and_then(|session| session.get("backend_prompt"))
.and_then(Value::as_str)
.map(str::to_string);
Some(RealtimeEvent::SessionUpdated { backend_prompt })
}
"response.output_audio.delta" => {
let data = parsed
.get("delta")
.and_then(Value::as_str)
.or_else(|| parsed.get("data").and_then(Value::as_str))
.map(str::to_string)?;
let sample_rate = parsed
.get("sample_rate")
.and_then(Value::as_u64)
.and_then(|v| u32::try_from(v).ok())?;
let num_channels = parsed
.get("num_channels")
.and_then(Value::as_u64)
.and_then(|v| u16::try_from(v).ok())?;
Some(RealtimeEvent::AudioOut(RealtimeAudioFrame {
data,
sample_rate,
num_channels,
samples_per_channel: parsed
.get("samples_per_channel")
.and_then(Value::as_u64)
.and_then(|v| u32::try_from(v).ok()),
}))
}
"conversation.item.added" => parsed
.get("item")
.cloned()
.map(RealtimeEvent::ConversationItemAdded),
"error" => parsed
.get("message")
.and_then(Value::as_str)
.map(str::to_string)
.or_else(|| parsed.get("error").map(std::string::ToString::to_string))
.map(RealtimeEvent::Error),
_ => {
debug!("received unsupported realtime event type: {message_type}, data: {payload}");
None
}
}
}


@@ -29,6 +29,11 @@ pub use crate::endpoint::aggregate::AggregateStreamExt;
pub use crate::endpoint::compact::CompactClient;
pub use crate::endpoint::memories::MemoriesClient;
pub use crate::endpoint::models::ModelsClient;
pub use crate::endpoint::realtime_websocket::RealtimeAudioFrame;
pub use crate::endpoint::realtime_websocket::RealtimeEvent;
pub use crate::endpoint::realtime_websocket::RealtimeSessionConfig;
pub use crate::endpoint::realtime_websocket::RealtimeWebsocketClient;
pub use crate::endpoint::realtime_websocket::RealtimeWebsocketConnection;
pub use crate::endpoint::responses::ResponsesClient;
pub use crate::endpoint::responses::ResponsesOptions;
pub use crate::endpoint::responses_websocket::ResponsesWebsocketClient;

View File

@@ -0,0 +1,368 @@
use std::collections::HashMap;
use std::future::Future;
use std::time::Duration;
use codex_api::RealtimeAudioFrame;
use codex_api::RealtimeEvent;
use codex_api::RealtimeSessionConfig;
use codex_api::RealtimeWebsocketClient;
use codex_api::provider::Provider;
use codex_api::provider::RetryConfig;
use futures::SinkExt;
use futures::StreamExt;
use http::HeaderMap;
use serde_json::Value;
use serde_json::json;
use tokio::net::TcpListener;
use tokio_tungstenite::accept_async;
use tokio_tungstenite::tungstenite::Message;
type RealtimeWsStream = tokio_tungstenite::WebSocketStream<tokio::net::TcpStream>;
async fn spawn_realtime_ws_server<Handler, Fut>(
handler: Handler,
) -> (String, tokio::task::JoinHandle<()>)
where
Handler: FnOnce(RealtimeWsStream) -> Fut + Send + 'static,
Fut: Future<Output = ()> + Send + 'static,
{
let listener = match TcpListener::bind("127.0.0.1:0").await {
Ok(listener) => listener,
Err(err) => panic!("failed to bind test websocket listener: {err}"),
};
let addr = match listener.local_addr() {
Ok(addr) => addr.to_string(),
Err(err) => panic!("failed to read local websocket listener address: {err}"),
};
let server = tokio::spawn(async move {
let (stream, _) = match listener.accept().await {
Ok(stream) => stream,
Err(err) => panic!("failed to accept test websocket connection: {err}"),
};
let ws = match accept_async(stream).await {
Ok(ws) => ws,
Err(err) => panic!("failed to complete websocket handshake: {err}"),
};
handler(ws).await;
});
(addr, server)
}
fn test_provider() -> Provider {
Provider {
name: "test".to_string(),
base_url: "http://localhost".to_string(),
query_params: Some(HashMap::new()),
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,
base_delay: Duration::from_millis(1),
retry_429: false,
retry_5xx: false,
retry_transport: false,
},
stream_idle_timeout: Duration::from_secs(5),
}
}
#[tokio::test]
async fn realtime_ws_e2e_session_create_and_event_flow() {
let (addr, server) = spawn_realtime_ws_server(|mut ws: RealtimeWsStream| async move {
let first = ws
.next()
.await
.expect("first msg")
.expect("first msg ok")
.into_text()
.expect("text");
let first_json: Value = serde_json::from_str(&first).expect("json");
assert_eq!(first_json["type"], "session.create");
assert_eq!(
first_json["session"]["backend_prompt"],
Value::String("backend prompt".to_string())
);
assert_eq!(
first_json["session"]["conversation_id"],
Value::String("conv_123".to_string())
);
ws.send(Message::Text(
json!({
"type": "session.created",
"session": {"id": "sess_mock"}
})
.to_string()
.into(),
))
.await
.expect("send session.created");
let second = ws
.next()
.await
.expect("second msg")
.expect("second msg ok")
.into_text()
.expect("text");
let second_json: Value = serde_json::from_str(&second).expect("json");
assert_eq!(second_json["type"], "response.input_audio.delta");
ws.send(Message::Text(
json!({
"type": "response.output_audio.delta",
"delta": "AQID",
"sample_rate": 48000,
"num_channels": 1
})
.to_string()
.into(),
))
.await
.expect("send audio out");
})
.await;
let client = RealtimeWebsocketClient::new(test_provider());
let connection = client
.connect(
RealtimeSessionConfig {
api_url: format!("ws://{addr}"),
prompt: "backend prompt".to_string(),
session_id: Some("conv_123".to_string()),
},
HeaderMap::new(),
HeaderMap::new(),
)
.await
.expect("connect");
let created = connection
.next_event()
.await
.expect("next event")
.expect("event");
assert_eq!(
created,
RealtimeEvent::SessionCreated {
session_id: "sess_mock".to_string()
}
);
connection
.send_audio_frame(RealtimeAudioFrame {
data: "AQID".to_string(),
sample_rate: 48000,
num_channels: 1,
samples_per_channel: Some(960),
})
.await
.expect("send audio");
let audio_event = connection
.next_event()
.await
.expect("next event")
.expect("event");
assert_eq!(
audio_event,
RealtimeEvent::AudioOut(RealtimeAudioFrame {
data: "AQID".to_string(),
sample_rate: 48000,
num_channels: 1,
samples_per_channel: None,
})
);
connection.close().await.expect("close");
server.await.expect("server task");
}
#[tokio::test]
async fn realtime_ws_e2e_send_while_next_event_waits() {
let (addr, server) = spawn_realtime_ws_server(|mut ws: RealtimeWsStream| async move {
let first = ws
.next()
.await
.expect("first msg")
.expect("first msg ok")
.into_text()
.expect("text");
let first_json: Value = serde_json::from_str(&first).expect("json");
assert_eq!(first_json["type"], "session.create");
let second = ws
.next()
.await
.expect("second msg")
.expect("second msg ok")
.into_text()
.expect("text");
let second_json: Value = serde_json::from_str(&second).expect("json");
assert_eq!(second_json["type"], "response.input_audio.delta");
ws.send(Message::Text(
json!({
"type": "session.created",
"session": {"id": "sess_after_send"}
})
.to_string()
.into(),
))
.await
.expect("send session.created");
})
.await;
let client = RealtimeWebsocketClient::new(test_provider());
let connection = client
.connect(
RealtimeSessionConfig {
api_url: format!("ws://{addr}"),
prompt: "backend prompt".to_string(),
session_id: Some("conv_123".to_string()),
},
HeaderMap::new(),
HeaderMap::new(),
)
.await
.expect("connect");
let (send_result, next_result) = tokio::join!(
async {
tokio::time::timeout(
Duration::from_millis(200),
connection.send_audio_frame(RealtimeAudioFrame {
data: "AQID".to_string(),
sample_rate: 48000,
num_channels: 1,
samples_per_channel: Some(960),
}),
)
.await
},
connection.next_event()
);
send_result
.expect("send should not block on next_event")
.expect("send audio");
let next_event = next_result.expect("next event").expect("event");
assert_eq!(
next_event,
RealtimeEvent::SessionCreated {
session_id: "sess_after_send".to_string()
}
);
connection.close().await.expect("close");
server.await.expect("server task");
}
#[tokio::test]
async fn realtime_ws_e2e_disconnected_emitted_once() {
let (addr, server) = spawn_realtime_ws_server(|mut ws: RealtimeWsStream| async move {
let first = ws
.next()
.await
.expect("first msg")
.expect("first msg ok")
.into_text()
.expect("text");
let first_json: Value = serde_json::from_str(&first).expect("json");
assert_eq!(first_json["type"], "session.create");
ws.send(Message::Close(None)).await.expect("send close");
})
.await;
let client = RealtimeWebsocketClient::new(test_provider());
let connection = client
.connect(
RealtimeSessionConfig {
api_url: format!("ws://{addr}"),
prompt: "backend prompt".to_string(),
session_id: Some("conv_123".to_string()),
},
HeaderMap::new(),
HeaderMap::new(),
)
.await
.expect("connect");
let first = connection.next_event().await.expect("next event");
assert_eq!(first, None);
let second = connection.next_event().await.expect("next event");
assert_eq!(second, None);
server.await.expect("server task");
}
#[tokio::test]
async fn realtime_ws_e2e_ignores_unknown_text_events() {
let (addr, server) = spawn_realtime_ws_server(|mut ws: RealtimeWsStream| async move {
let first = ws
.next()
.await
.expect("first msg")
.expect("first msg ok")
.into_text()
.expect("text");
let first_json: Value = serde_json::from_str(&first).expect("json");
assert_eq!(first_json["type"], "session.create");
ws.send(Message::Text(
json!({
"type": "response.created",
"response": {"id": "resp_unknown"}
})
.to_string()
.into(),
))
.await
.expect("send unknown event");
ws.send(Message::Text(
json!({
"type": "session.created",
"session": {"id": "sess_after_unknown"}
})
.to_string()
.into(),
))
.await
.expect("send session.created");
})
.await;
let client = RealtimeWebsocketClient::new(test_provider());
let connection = client
.connect(
RealtimeSessionConfig {
api_url: format!("ws://{addr}"),
prompt: "backend prompt".to_string(),
session_id: Some("conv_123".to_string()),
},
HeaderMap::new(),
HeaderMap::new(),
)
.await
.expect("connect");
let event = connection
.next_event()
.await
.expect("next event")
.expect("event");
assert_eq!(
event,
RealtimeEvent::SessionCreated {
session_id: "sess_after_unknown".to_string()
}
);
connection.close().await.expect("close");
server.await.expect("server task");
}

View File

@@ -15,7 +15,7 @@
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Path to a role-specific config layer."
"description": "Path to a role-specific config layer. Relative paths are resolved relative to the `config.toml` that defines them."
},
"description": {
"description": "Human-facing role documentation used in spawn tool guidance.",
@@ -331,6 +331,13 @@
"include_apply_patch_tool": {
"type": "boolean"
},
"js_repl_node_module_dirs": {
"description": "Ordered list of directories to search for Node modules in `js_repl`.",
"items": {
"$ref": "#/definitions/AbsolutePathBuf"
},
"type": "array"
},
"js_repl_node_path": {
"$ref": "#/definitions/AbsolutePathBuf"
},
@@ -1178,7 +1185,7 @@
},
"status_line": {
"default": null,
"description": "Ordered list of status line item identifiers.\n\nWhen set, the TUI renders the selected items as the status line.",
"description": "Ordered list of status line item identifiers.\n\nWhen set, the TUI renders the selected items as the status line. When unset, the TUI defaults to: `model-with-reasoning`, `context-remaining`, and `current-dir`.",
"items": {
"type": "string"
},
@@ -1518,6 +1525,13 @@
"description": "System instructions.",
"type": "string"
},
"js_repl_node_module_dirs": {
"description": "Ordered list of directories to search for Node modules in `js_repl`.",
"items": {
"$ref": "#/definitions/AbsolutePathBuf"
},
"type": "array"
},
"js_repl_node_path": {
"allOf": [
{

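The schema additions above correspond to config entries like the following (paths and item names are illustrative; the status-line items shown are the documented defaults):

```toml
# Illustrative config.toml fragment.
js_repl_node_module_dirs = [
    "/path/to/node_modules",
]

[tui]
status_line = ["model-with-reasoning", "context-remaining", "current-dir"]
```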
View File

@@ -6,6 +6,7 @@ use crate::thread_manager::ThreadManagerState;
use codex_protocol::ThreadId;
use codex_protocol::protocol::Op;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::TokenUsage;
use codex_protocol::user_input::UserInput;
use std::path::PathBuf;
use std::sync::Arc;
@@ -153,6 +154,16 @@ impl AgentControl {
Ok(thread.subscribe_status())
}
pub(crate) async fn get_total_token_usage(&self, agent_id: ThreadId) -> Option<TokenUsage> {
let Ok(state) = self.upgrade() else {
return None;
};
let Ok(thread) = state.get_thread(agent_id).await else {
return None;
};
thread.total_token_usage().await
}
fn upgrade(&self) -> CodexResult<Arc<ThreadManagerState>> {
self.manager
.upgrade()

View File

@@ -1327,7 +1327,7 @@ impl Session {
};
let js_repl = Arc::new(JsReplHandle::with_node_path(
config.js_repl_node_path.clone(),
config.js_repl_node_module_dirs.clone(),
));
let prewarm_model_info = models_manager
@@ -1498,6 +1498,11 @@ impl Session {
state.history.get_total_token_usage_breakdown()
}
pub(crate) async fn total_token_usage(&self) -> Option<TokenUsage> {
let state = self.state.lock().await;
state.token_info().map(|info| info.total_token_usage)
}
pub(crate) async fn get_estimated_token_count(
&self,
turn_context: &TurnContext,
@@ -7413,7 +7418,7 @@ mod tests {
};
let js_repl = Arc::new(JsReplHandle::with_node_path(
config.js_repl_node_path.clone(),
config.js_repl_node_module_dirs.clone(),
));
let turn_context = Session::make_turn_context(
@@ -7562,7 +7567,7 @@ mod tests {
};
let js_repl = Arc::new(JsReplHandle::with_node_path(
config.js_repl_node_path.clone(),
config.js_repl_node_module_dirs.clone(),
));
let turn_context = Arc::new(Session::make_turn_context(

View File

@@ -11,6 +11,7 @@ use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::TokenUsage;
use codex_protocol::user_input::UserInput;
use std::path::PathBuf;
use tokio::sync::watch;
@@ -73,6 +74,10 @@ impl CodexThread {
self.codex.agent_status.clone()
}
pub(crate) async fn total_token_usage(&self) -> Option<TokenUsage> {
self.codex.session.total_token_usage().await
}
pub fn rollout_path(&self) -> Option<PathBuf> {
self.rollout_path.clone()
}

View File

@@ -56,12 +56,6 @@ pub enum ConfigEdit {
}
pub fn status_line_items_edit(items: &[String]) -> ConfigEdit {
let mut array = toml_edit::Array::new();
for item in items {
array.push(item.clone());

View File

@@ -258,6 +258,9 @@ pub struct Config {
pub tui_alternate_screen: AltScreenMode,
/// Ordered list of status line item identifiers for the TUI.
///
/// When unset, the TUI defaults to: `model-with-reasoning`, `context-remaining`, and
/// `current-dir`.
pub tui_status_line: Option<Vec<String>>,
/// The directory that should be treated as the current working directory
@@ -336,6 +339,10 @@ pub struct Config {
/// Optional absolute path to the Node runtime used by `js_repl`.
pub js_repl_node_path: Option<PathBuf>,
/// Ordered list of directories to search for Node modules in `js_repl`.
pub js_repl_node_module_dirs: Vec<PathBuf>,
/// Optional absolute path to patched zsh used by zsh-exec-bridge-backed shell execution.
pub zsh_path: Option<PathBuf>,
@@ -977,6 +984,10 @@ pub struct ConfigToml {
/// Optional absolute path to the Node runtime used by `js_repl`.
pub js_repl_node_path: Option<AbsolutePathBuf>,
/// Ordered list of directories to search for Node modules in `js_repl`.
pub js_repl_node_module_dirs: Option<Vec<AbsolutePathBuf>>,
/// Optional absolute path to patched zsh used by zsh-exec-bridge-backed shell execution.
pub zsh_path: Option<AbsolutePathBuf>,
@@ -1193,6 +1204,7 @@ pub struct AgentRoleToml {
pub description: Option<String>,
/// Path to a role-specific config layer.
/// Relative paths are resolved relative to the `config.toml` that defines them.
pub config_file: Option<AbsolutePathBuf>,
}
@@ -1357,6 +1369,7 @@ pub struct ConfigOverrides {
pub config_profile: Option<String>,
pub codex_linux_sandbox_exe: Option<PathBuf>,
pub js_repl_node_path: Option<PathBuf>,
pub js_repl_node_module_dirs: Option<Vec<PathBuf>>,
pub zsh_path: Option<PathBuf>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
@@ -1485,6 +1498,7 @@ impl Config {
config_profile: config_profile_key,
codex_linux_sandbox_exe,
js_repl_node_path: js_repl_node_path_override,
js_repl_node_module_dirs: js_repl_node_module_dirs_override,
zsh_path: zsh_path_override,
base_instructions,
developer_instructions,
@@ -1646,19 +1660,20 @@ impl Config {
.roles
.iter()
.map(|(name, role)| {
let config_file =
role.config_file.as_ref().map(AbsolutePathBuf::to_path_buf);
Self::validate_agent_role_config_file(name, config_file.as_deref())?;
Ok((
name.clone(),
AgentRoleConfig {
description: role.description.clone(),
config_file,
},
))
})
.collect::<std::io::Result<BTreeMap<_, _>>>()
})
.transpose()?
.unwrap_or_default();
let ghost_snapshot = {
@@ -1746,6 +1761,17 @@ impl Config {
let js_repl_node_path = js_repl_node_path_override
.or(config_profile.js_repl_node_path.map(Into::into))
.or(cfg.js_repl_node_path.map(Into::into));
let js_repl_node_module_dirs = js_repl_node_module_dirs_override
.or_else(|| {
config_profile
.js_repl_node_module_dirs
.map(|dirs| dirs.into_iter().map(Into::into).collect::<Vec<PathBuf>>())
})
.or_else(|| {
cfg.js_repl_node_module_dirs
.map(|dirs| dirs.into_iter().map(Into::into).collect::<Vec<PathBuf>>())
})
.unwrap_or_default();
let zsh_path = zsh_path_override
.or(config_profile.zsh_path.map(Into::into))
.or(cfg.zsh_path.map(Into::into));
@@ -1873,6 +1899,7 @@ impl Config {
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
codex_linux_sandbox_exe,
js_repl_node_path,
js_repl_node_module_dirs,
zsh_path,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
@@ -2002,6 +2029,36 @@ impl Config {
}
}
fn validate_agent_role_config_file(
role_name: &str,
config_file: Option<&Path>,
) -> std::io::Result<()> {
let Some(config_file) = config_file else {
return Ok(());
};
let metadata = std::fs::metadata(config_file).map_err(|e| {
std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!(
"agents.{role_name}.config_file must point to an existing file at {}: {e}",
config_file.display()
),
)
})?;
if metadata.is_file() {
Ok(())
} else {
Err(std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!(
"agents.{role_name}.config_file must point to a file: {}",
config_file.display()
),
))
}
}
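The validation above can be exercised standalone. This is a std-only copy of the same logic (the free function `validate_role_config_file` is an illustrative stand-in for the private method):

```rust
use std::io;
use std::path::Path;

// Std-only copy of the role-config validation: the path must exist and be a
// regular file; otherwise an InvalidInput error names the offending role.
fn validate_role_config_file(role: &str, path: &Path) -> io::Result<()> {
    let metadata = std::fs::metadata(path).map_err(|e| {
        io::Error::new(
            io::ErrorKind::InvalidInput,
            format!(
                "agents.{role}.config_file must point to an existing file at {}: {e}",
                path.display()
            ),
        )
    })?;
    if metadata.is_file() {
        Ok(())
    } else {
        Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            format!(
                "agents.{role}.config_file must point to a file: {}",
                path.display()
            ),
        ))
    }
}

fn main() {
    // A directory exists but is not a regular file, so validation rejects it.
    let err = validate_role_config_file("researcher", &std::env::temp_dir()).unwrap_err();
    assert_eq!(err.kind(), io::ErrorKind::InvalidInput);
    // A missing path is rejected as well.
    assert!(validate_role_config_file("researcher", Path::new("/definitely/missing.toml")).is_err());
}
```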
pub fn set_windows_sandbox_enabled(&mut self, value: bool) {
self.permissions.windows_sandbox_mode = if value {
Some(WindowsSandboxModeToml::Unelevated)
@@ -4037,6 +4094,74 @@ model = "gpt-5.1-codex"
Ok(())
}
#[test]
fn load_config_rejects_missing_agent_role_config_file() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let missing_path = codex_home.path().join("agents").join("researcher.toml");
let cfg = ConfigToml {
agents: Some(AgentsToml {
max_threads: None,
roles: BTreeMap::from([(
"researcher".to_string(),
AgentRoleToml {
description: Some("Research role".to_string()),
config_file: Some(AbsolutePathBuf::from_absolute_path(missing_path)?),
},
)]),
}),
..Default::default()
};
let result = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
);
let err = result.expect_err("missing role config file should be rejected");
assert_eq!(err.kind(), std::io::ErrorKind::InvalidInput);
let message = err.to_string();
assert!(message.contains("agents.researcher.config_file"));
assert!(message.contains("must point to an existing file"));
Ok(())
}
#[tokio::test]
async fn agent_role_relative_config_file_resolves_against_config_toml() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let role_config_path = codex_home.path().join("agents").join("researcher.toml");
tokio::fs::create_dir_all(
role_config_path
.parent()
.expect("role config should have a parent directory"),
)
.await?;
tokio::fs::write(&role_config_path, "model = \"gpt-5\"").await?;
tokio::fs::write(
codex_home.path().join(CONFIG_TOML_FILE),
r#"[agents.researcher]
description = "Research role"
config_file = "./agents/researcher.toml"
"#,
)
.await?;
let config = ConfigBuilder::default()
.codex_home(codex_home.path().to_path_buf())
.fallback_cwd(Some(codex_home.path().to_path_buf()))
.build()
.await?;
assert_eq!(
config
.agent_roles
.get("researcher")
.and_then(|role| role.config_file.as_ref()),
Some(&role_config_path)
);
Ok(())
}
fn create_test_fixture() -> std::io::Result<PrecedenceTestFixture> {
let toml = r#"
model = "o3"
@@ -4202,6 +4327,7 @@ model_verbosity = "high"
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
js_repl_node_path: None,
js_repl_node_module_dirs: Vec::new(),
zsh_path: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
@@ -4315,6 +4441,7 @@ model_verbosity = "high"
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
js_repl_node_path: None,
js_repl_node_module_dirs: Vec::new(),
zsh_path: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
@@ -4426,6 +4553,7 @@ model_verbosity = "high"
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
js_repl_node_path: None,
js_repl_node_module_dirs: Vec::new(),
zsh_path: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
@@ -4523,6 +4651,7 @@ model_verbosity = "high"
file_opener: UriBasedFileOpener::VsCode,
codex_linux_sandbox_exe: None,
js_repl_node_path: None,
js_repl_node_module_dirs: Vec::new(),
zsh_path: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,

View File

@@ -30,9 +30,11 @@ pub struct ConfigProfile {
pub chatgpt_base_url: Option<String>,
/// Optional path to a file containing model instructions.
pub model_instructions_file: Option<AbsolutePathBuf>,
pub js_repl_node_path: Option<AbsolutePathBuf>,
/// Ordered list of directories to search for Node modules in `js_repl`.
pub js_repl_node_module_dirs: Option<Vec<AbsolutePathBuf>>,
/// Optional absolute path to patched zsh used by zsh-exec-bridge-backed shell execution.
pub zsh_path: Option<AbsolutePathBuf>,
/// Deprecated: ignored. Use `model_instructions_file`.
#[schemars(skip)]
pub experimental_instructions_file: Option<AbsolutePathBuf>,

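Per the resolution order in `Config::load` (override, then profile, then top-level), a profile-level value shadows the top-level one. Illustrative only, assuming the usual `[profiles.<name>]` table:

```toml
# Top-level fallback, used when the active profile does not set the key.
js_repl_node_module_dirs = ["/fallback/node_modules"]

[profiles.dev]
# Takes precedence over the top-level value when this profile is active.
js_repl_node_module_dirs = ["/dev/node_modules"]
```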
View File

@@ -619,6 +619,8 @@ pub struct Tui {
/// Ordered list of status line item identifiers.
///
/// When set, the TUI renders the selected items as the status line.
/// When unset, the TUI defaults to: `model-with-reasoning`, `context-remaining`, and
/// `current-dir`.
#[serde(default)]
pub status_line: Option<Vec<String>>,
}

View File

@@ -11,6 +11,7 @@ mod start;
mod storage;
#[cfg(test)]
mod tests;
pub(crate) mod usage;
/// Starts the memory startup pipeline for eligible root sessions.
/// This is the single entrypoint that `codex` uses to trigger memory startup.
@@ -26,7 +27,7 @@ mod artifacts {
/// Phase 1 (startup extraction).
mod phase_one {
/// Default model used for phase 1.
pub(super) const MODEL: &str = "gpt-5.3-codex-spark";
pub(super) const MODEL: &str = "gpt-5.1-codex-mini";
/// Prompt used for phase 1.
pub(super) const PROMPT: &str = include_str!("../../templates/memories/stage_one_system.md");
/// Concurrency cap for startup memory extraction and consolidation scheduling.
@@ -67,14 +68,20 @@ mod phase_two {
mod metrics {
/// Number of phase-1 startup jobs grouped by status.
pub(super) const MEMORY_PHASE_ONE_JOBS: &str = "codex.memory.phase1";
/// End-to-end latency for a single phase-1 startup run.
pub(super) const MEMORY_PHASE_ONE_E2E_MS: &str = "codex.memory.phase1.e2e_ms";
/// Number of raw memories produced by phase-1 startup extraction.
pub(super) const MEMORY_PHASE_ONE_OUTPUT: &str = "codex.memory.phase1.output";
/// Histogram for aggregate token usage across one phase-1 startup run.
pub(super) const MEMORY_PHASE_ONE_TOKEN_USAGE: &str = "codex.memory.phase1.token_usage";
/// Number of phase-2 startup jobs grouped by status.
pub(super) const MEMORY_PHASE_TWO_JOBS: &str = "codex.memory.phase2";
/// End-to-end latency for a single phase-2 consolidation run.
pub(super) const MEMORY_PHASE_TWO_E2E_MS: &str = "codex.memory.phase2.e2e_ms";
/// Number of stage-1 memories included in each phase-2 consolidation step.
pub(super) const MEMORY_PHASE_TWO_INPUT: &str = "codex.memory.phase2.input";
/// Histogram for aggregate token usage across one phase-2 consolidation run.
pub(super) const MEMORY_PHASE_TWO_TOKEN_USAGE: &str = "codex.memory.phase2.token_usage";
}
use std::path::Path;

View File

@@ -80,6 +80,12 @@ struct StageOneOutput {
/// 3) run stage-1 extraction jobs in parallel
/// 4) emit metrics and logs
pub(in crate::memories) async fn run(session: &Arc<Session>, config: &Config) {
let _phase_one_e2e_timer = session
.services
.otel_manager
.start_timer(metrics::MEMORY_PHASE_ONE_E2E_MS, &[])
.ok();
// 1. Claim startup job.
let Some(claimed_candidates) = claim_startup_jobs(session, &config.memories).await else {
return;

View File

@@ -14,6 +14,7 @@ use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SubAgentSource;
use codex_protocol::protocol::TokenUsage;
use codex_protocol::user_input::UserInput;
use codex_state::StateRuntime;
use codex_utils_absolute_path::AbsolutePathBuf;
@@ -36,6 +37,12 @@ struct Counters {
/// Runs memory phase 2 (aka consolidation) in strict order. The method represents the linear
/// flow of the consolidation phase.
pub(super) async fn run(session: &Arc<Session>, config: Arc<Config>) {
let phase_two_e2e_timer = session
.services
.otel_manager
.start_timer(metrics::MEMORY_PHASE_TWO_E2E_MS, &[])
.ok();
let Some(db) = session.services.state_db.as_deref() else {
// This should not happen.
return;
@@ -117,7 +124,13 @@ pub(super) async fn run(session: &Arc<Session>, config: Arc<Config>) {
};
// 6. Spawn the agent handler.
agent::handle(
session,
claim,
new_watermark,
thread_id,
phase_two_e2e_timer,
);
// 7. Metrics and logs.
let counters = Counters {
@@ -264,6 +277,7 @@ mod agent {
claim: Claim,
new_watermark: i64,
thread_id: ThreadId,
phase_two_e2e_timer: Option<codex_otel::Timer>,
) {
let Some(db) = session.services.state_db.clone() else {
return;
@@ -271,6 +285,7 @@ mod agent {
let session = session.clone();
tokio::spawn(async move {
let _phase_two_e2e_timer = phase_two_e2e_timer;
let agent_control = session.services.agent_control.clone();
// TODO(jif) we might have a very small race here.
@@ -294,6 +309,9 @@ mod agent {
.await;
if matches!(final_status, AgentStatus::Completed(_)) {
if let Some(token_usage) = agent_control.get_total_token_usage(thread_id).await {
emit_token_usage_metrics(&session, &token_usage);
}
job::succeed(&session, &db, &claim, new_watermark, "succeeded").await;
} else {
job::failed(&session, &db, &claim, "failed_agent").await;
@@ -390,3 +408,32 @@ fn emit_metrics(session: &Arc<Session>, counters: Counters) {
&[("status", "agent_spawned")],
);
}
fn emit_token_usage_metrics(session: &Arc<Session>, token_usage: &TokenUsage) {
let otel = session.services.otel_manager.clone();
otel.histogram(
metrics::MEMORY_PHASE_TWO_TOKEN_USAGE,
token_usage.total_tokens.max(0),
&[("token_type", "total")],
);
otel.histogram(
metrics::MEMORY_PHASE_TWO_TOKEN_USAGE,
token_usage.input_tokens.max(0),
&[("token_type", "input")],
);
otel.histogram(
metrics::MEMORY_PHASE_TWO_TOKEN_USAGE,
token_usage.cached_input(),
&[("token_type", "cached_input")],
);
otel.histogram(
metrics::MEMORY_PHASE_TWO_TOKEN_USAGE,
token_usage.output_tokens.max(0),
&[("token_type", "output")],
);
otel.histogram(
metrics::MEMORY_PHASE_TWO_TOKEN_USAGE,
token_usage.reasoning_output_tokens.max(0),
&[("token_type", "reasoning_output")],
);
}
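Each counter above is clamped to zero and emitted under a `token_type` label. A simplified std-only sketch of that shape (`labeled_counts` is illustrative; the real code writes to an OTel histogram):

```rust
// Std-only sketch: each token counter is clamped to zero and paired with a
// `token_type` label, mirroring emit_token_usage_metrics above.
fn labeled_counts(total: i64, input: i64, output: i64) -> Vec<(&'static str, i64)> {
    vec![
        ("total", total.max(0)),
        ("input", input.max(0)),
        ("output", output.max(0)),
    ]
}

fn main() {
    // A negative count (e.g. from a backend glitch) is clamped to zero.
    let counts = labeled_counts(120, -1, 40);
    assert_eq!(counts[0], ("total", 120));
    assert_eq!(counts[1], ("input", 0));
    assert_eq!(counts[2], ("output", 40));
}
```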

View File

@@ -0,0 +1,122 @@
use crate::is_safe_command::is_known_safe_command;
use crate::parse_command::parse_command;
use crate::tools::context::ToolInvocation;
use crate::tools::context::ToolPayload;
use crate::tools::handlers::unified_exec::ExecCommandArgs;
use codex_protocol::models::ShellCommandToolCallParams;
use codex_protocol::models::ShellToolCallParams;
use codex_protocol::parse_command::ParsedCommand;
use std::path::PathBuf;
const MEMORIES_USAGE_METRIC: &str = "codex.memories.usage";
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum MemoriesUsageKind {
MemoryMd,
MemorySummary,
RawMemories,
RolloutSummaries,
Skills,
}
impl MemoriesUsageKind {
fn as_tag(self) -> &'static str {
match self {
Self::MemoryMd => "memory_md",
Self::MemorySummary => "memory_summary",
Self::RawMemories => "raw_memories",
Self::RolloutSummaries => "rollout_summaries",
Self::Skills => "skills",
}
}
}
pub(crate) async fn emit_metric_for_tool_read(invocation: &ToolInvocation, success: bool) {
let kinds = memories_usage_kinds_from_invocation(invocation).await;
if kinds.is_empty() {
return;
}
let success = if success { "true" } else { "false" };
for kind in kinds {
invocation.turn.otel_manager.counter(
MEMORIES_USAGE_METRIC,
1,
&[
("kind", kind.as_tag()),
("tool", invocation.tool_name.as_str()),
("success", success),
],
);
}
}
async fn memories_usage_kinds_from_invocation(
invocation: &ToolInvocation,
) -> Vec<MemoriesUsageKind> {
let Some((command, _)) = shell_command_for_invocation(invocation) else {
return Vec::new();
};
if !is_known_safe_command(&command) {
return Vec::new();
}
let parsed_commands = parse_command(&command);
parsed_commands
.into_iter()
.filter_map(|command| match command {
ParsedCommand::Read { path, .. } => get_memory_kind(path.display().to_string()),
ParsedCommand::Search { path, .. } => path.and_then(get_memory_kind),
ParsedCommand::ListFiles { .. } | ParsedCommand::Unknown { .. } => None,
})
.collect()
}
fn shell_command_for_invocation(invocation: &ToolInvocation) -> Option<(Vec<String>, PathBuf)> {
let ToolPayload::Function { arguments } = &invocation.payload else {
return None;
};
match invocation.tool_name.as_str() {
"shell" => serde_json::from_str::<ShellToolCallParams>(arguments)
.ok()
.map(|params| (params.command, invocation.turn.resolve_path(params.workdir))),
"shell_command" => serde_json::from_str::<ShellCommandToolCallParams>(arguments)
.ok()
.map(|params| {
let command = invocation
.session
.user_shell()
.derive_exec_args(&params.command, params.login.unwrap_or(true));
(command, invocation.turn.resolve_path(params.workdir))
}),
"exec_command" => serde_json::from_str::<ExecCommandArgs>(arguments)
.ok()
.map(|params| {
(
crate::tools::handlers::unified_exec::get_command(
&params,
invocation.session.user_shell(),
),
invocation.turn.resolve_path(params.workdir),
)
}),
_ => None,
}
}
fn get_memory_kind(path: String) -> Option<MemoriesUsageKind> {
if path.contains("memories/MEMORY.md") {
Some(MemoriesUsageKind::MemoryMd)
} else if path.contains("memories/memory_summary.md") {
Some(MemoriesUsageKind::MemorySummary)
} else if path.contains("memories/raw_memories.md") {
Some(MemoriesUsageKind::RawMemories)
} else if path.contains("memories/rollout_summaries/") {
Some(MemoriesUsageKind::RolloutSummaries)
} else if path.contains("memories/skills/") {
Some(MemoriesUsageKind::Skills)
} else {
None
}
}
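The substring-based classification above can be exercised standalone. This copy returns the tag string directly instead of the enum (`memory_kind_tag` is an illustrative helper, not the real API):

```rust
// Copy of the substring-based path classification above, returning the
// metric tag directly instead of MemoriesUsageKind.
fn memory_kind_tag(path: &str) -> Option<&'static str> {
    if path.contains("memories/MEMORY.md") {
        Some("memory_md")
    } else if path.contains("memories/memory_summary.md") {
        Some("memory_summary")
    } else if path.contains("memories/raw_memories.md") {
        Some("raw_memories")
    } else if path.contains("memories/rollout_summaries/") {
        Some("rollout_summaries")
    } else if path.contains("memories/skills/") {
        Some("skills")
    } else {
        None
    }
}

fn main() {
    assert_eq!(memory_kind_tag("/home/u/.codex/memories/MEMORY.md"), Some("memory_md"));
    assert_eq!(memory_kind_tag("/home/u/.codex/memories/skills/git.md"), Some("skills"));
    // Reads outside the memories tree produce no metric.
    assert_eq!(memory_kind_tag("/home/u/notes.md"), None);
}
```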

View File

@@ -23,6 +23,7 @@ use crate::spawn::spawn_child_async;
const MACOS_SEATBELT_BASE_POLICY: &str = include_str!("seatbelt_base_policy.sbpl");
const MACOS_SEATBELT_NETWORK_POLICY: &str = include_str!("seatbelt_network_policy.sbpl");
const MACOS_SEATBELT_PLATFORM_DEFAULTS: &str = include_str!("seatbelt_platform_defaults.sbpl");
/// When working with `sandbox-exec`, only consider `sandbox-exec` in `/usr/bin`
/// to defend against an attacker trying to inject a malicious version on the
@@ -314,18 +315,23 @@ pub(crate) fn create_seatbelt_command_args_with_extensions(
build_seatbelt_extensions,
);
let include_platform_defaults = sandbox_policy.include_platform_defaults();
let mut policy_sections = vec![
MACOS_SEATBELT_BASE_POLICY.to_string(),
file_read_policy,
file_write_policy,
network_policy,
];
if include_platform_defaults {
policy_sections.push(MACOS_SEATBELT_PLATFORM_DEFAULTS.to_string());
}
if !unix_socket_policy.is_empty() {
policy_sections.push(unix_socket_policy);
}
if !seatbelt_extensions.policy.is_empty() {
policy_sections.push(seatbelt_extensions.policy.clone());
}
let full_policy = policy_sections.join("\n");
let dir_params = [

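The seatbelt change above appends the platform-defaults section only when the policy opts in, then joins all sections into one profile. A simplified std-only sketch of that assembly (`assemble_policy` is illustrative, not the real function):

```rust
// Sketch of the conditional policy assembly: mandatory sections go first,
// the platform defaults only when opted in, and empty extras are skipped.
fn assemble_policy(base: &str, platform_defaults: Option<&str>, extras: &[&str]) -> String {
    let mut sections = vec![base.to_string()];
    if let Some(defaults) = platform_defaults {
        sections.push(defaults.to_string());
    }
    for extra in extras.iter().filter(|s| !s.is_empty()) {
        sections.push((*extra).to_string());
    }
    sections.join("\n")
}

fn main() {
    // Empty extras are skipped, matching the is_empty() guards above.
    assert_eq!(assemble_policy("(base)", None, &[""]), "(base)");
    assert_eq!(
        assemble_policy("(base)", Some("(defaults)"), &["(net)"]),
        "(base)\n(defaults)\n(net)"
    );
}
```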
View File

@@ -0,0 +1,181 @@
; macOS platform defaults included via `ReadOnlyAccess::Restricted::include_platform_defaults`
; Read access to standard system paths
(allow file-read* file-test-existence
(subpath "/Library/Apple")
(subpath "/Library/Filesystems/NetFSPlugins")
(subpath "/Library/Preferences/Logging")
(subpath "/private/var/db/DarwinDirectory/local/recordStore.data")
(subpath "/private/var/db/timezone")
(subpath "/usr/lib")
(subpath "/usr/share")
(subpath "/Library/Preferences")
(subpath "/var/db")
(subpath "/private/var/db"))
; Map system frameworks + dylibs for loader.
(allow file-map-executable
(subpath "/Library/Apple/System/Library/Frameworks")
(subpath "/Library/Apple/System/Library/PrivateFrameworks")
(subpath "/Library/Apple/usr/lib")
(subpath "/System/Library/Extensions")
(subpath "/System/Library/Frameworks")
(subpath "/System/Library/PrivateFrameworks")
(subpath "/System/Library/SubFrameworks")
(subpath "/System/iOSSupport/System/Library/Frameworks")
(subpath "/System/iOSSupport/System/Library/PrivateFrameworks")
(subpath "/System/iOSSupport/System/Library/SubFrameworks")
(subpath "/usr/lib"))
; Allow guarded vnodes.
(allow system-mac-syscall (mac-policy-name "vnguard"))
; Determine whether a container is expected.
(allow system-mac-syscall
(require-all
(mac-policy-name "Sandbox")
(mac-syscall-number 67)))
; Allow resolution of standard system symlinks.
(allow file-read-metadata file-test-existence
(literal "/etc")
(literal "/tmp")
(literal "/var")
(literal "/private/etc/localtime"))
; Allow stat'ing of firmlink parent path components.
(allow file-read-metadata file-test-existence
(path-ancestors "/System/Volumes/Data/private"))
; Allow processes to get their current working directory.
(allow file-read* file-test-existence
(literal "/"))
; Allow FSIOC_CAS_BSDFLAGS as alternate chflags.
(allow system-fsctl (fsctl-command FSIOC_CAS_BSDFLAGS))
; Allow access to standard special files.
(allow file-read* file-test-existence
(literal "/dev/autofs_nowait")
(literal "/dev/random")
(literal "/dev/urandom")
(literal "/private/etc/master.passwd")
(literal "/private/etc/passwd")
(literal "/private/etc/protocols")
(literal "/private/etc/services"))
; Allow null/zero read/write.
(allow file-read* file-test-existence file-write-data
(literal "/dev/null")
(literal "/dev/zero"))
; Allow read/write access to the file descriptors.
(allow file-read-data file-test-existence file-write-data
(subpath "/dev/fd"))
; Provide access to debugger helpers.
(allow file-read* file-test-existence file-write-data file-ioctl
(literal "/dev/dtracehelper"))
; Scratch space so tools can create temp files.
(allow file-read* file-test-existence file-write* (subpath "/tmp"))
(allow file-read* file-write* (subpath "/private/tmp"))
(allow file-read* file-write* (subpath "/var/tmp"))
(allow file-read* file-write* (subpath "/private/var/tmp"))
; Allow reading standard config directories.
(allow file-read* (subpath "/etc"))
(allow file-read* (subpath "/private/etc"))
; Some processes read /var metadata during startup.
(allow file-read-metadata (subpath "/var"))
(allow file-read-metadata (subpath "/private/var"))
; IOKit access for root domain services.
(allow iokit-open
(iokit-registry-entry-class "RootDomainUserClient"))
; macOS Standard library queries opendirectoryd at startup
(allow mach-lookup (global-name "com.apple.system.opendirectoryd.libinfo"))
; Allow IPC to analytics, logging, trust, and other system agents.
(allow mach-lookup
(global-name "com.apple.analyticsd")
(global-name "com.apple.analyticsd.messagetracer")
(global-name "com.apple.appsleep")
(global-name "com.apple.bsd.dirhelper")
(global-name "com.apple.cfprefsd.agent")
(global-name "com.apple.cfprefsd.daemon")
(global-name "com.apple.diagnosticd")
(global-name "com.apple.dt.automationmode.reader")
(global-name "com.apple.espd")
(global-name "com.apple.logd")
(global-name "com.apple.logd.events")
(global-name "com.apple.runningboard")
(global-name "com.apple.secinitd")
(global-name "com.apple.system.DirectoryService.libinfo_v1")
(global-name "com.apple.system.logger")
(global-name "com.apple.system.notification_center")
(global-name "com.apple.system.opendirectoryd.membership")
(global-name "com.apple.trustd")
(global-name "com.apple.trustd.agent")
(global-name "com.apple.xpc.activity.unmanaged")
(local-name "com.apple.cfprefsd.agent"))
; Allow IPC to the syslog socket for logging.
(allow network-outbound (literal "/private/var/run/syslog"))
; macOS Notifications
(allow ipc-posix-shm-read*
(ipc-posix-name "apple.shm.notification_center"))
; Regulatory domain support.
(allow file-read*
(literal "/private/var/db/eligibilityd/eligibility.plist"))
; Audio and power management services.
(allow mach-lookup (global-name "com.apple.audio.audiohald"))
(allow mach-lookup (global-name "com.apple.audio.AudioComponentRegistrar"))
(allow mach-lookup (global-name "com.apple.PowerManagement.control"))
; Allow reading the minimum system runtime so exec works.
(allow file-read-data (subpath "/bin"))
(allow file-read-metadata (subpath "/bin"))
(allow file-read-data (subpath "/sbin"))
(allow file-read-metadata (subpath "/sbin"))
(allow file-read-data (subpath "/usr/bin"))
(allow file-read-metadata (subpath "/usr/bin"))
(allow file-read-data (subpath "/usr/sbin"))
(allow file-read-metadata (subpath "/usr/sbin"))
(allow file-read-data (subpath "/usr/libexec"))
(allow file-read-metadata (subpath "/usr/libexec"))
(allow file-read* (subpath "/Library/Preferences"))
(allow file-read* (subpath "/opt/homebrew/lib"))
(allow file-read* (subpath "/usr/local/lib"))
(allow file-read* (subpath "/Applications"))
; Terminal basics and device handles.
(allow file-read* (regex "^/dev/fd/(0|1|2)$"))
(allow file-write* (regex "^/dev/fd/(1|2)$"))
(allow file-read* file-write* (literal "/dev/null"))
(allow file-read* file-write* (literal "/dev/tty"))
(allow file-read-metadata (literal "/dev"))
(allow file-read-metadata (regex "^/dev/.*$"))
(allow file-read-metadata (literal "/dev/stdin"))
(allow file-read-metadata (literal "/dev/stdout"))
(allow file-read-metadata (literal "/dev/stderr"))
(allow file-read-metadata (regex "^/dev/tty[^/]*$"))
(allow file-read-metadata (regex "^/dev/pty[^/]*$"))
(allow file-read* file-write* (regex "^/dev/ttys[0-9]+$"))
(allow file-read* file-write* (literal "/dev/ptmx"))
(allow file-ioctl (regex "^/dev/ttys[0-9]+$"))
; Allow metadata traversal for firmlink parents.
(allow file-read-metadata (literal "/System/Volumes") (vnode-type DIRECTORY))
(allow file-read-metadata (literal "/System/Volumes/Data") (vnode-type DIRECTORY))
(allow file-read-metadata (literal "/System/Volumes/Data/Users") (vnode-type DIRECTORY))
; App sandbox extensions
(allow file-read* (extension "com.apple.app-sandbox.read"))
(allow file-read* file-write* (extension "com.apple.app-sandbox.read-write"))


@@ -12,7 +12,7 @@ mod request_user_input;
mod search_tool_bm25;
mod shell;
mod test_sync;
mod unified_exec;
pub(crate) mod unified_exec;
mod view_image;
pub use plan::PLAN_TOOL;


@@ -26,10 +26,10 @@ use std::sync::Arc;
pub struct UnifiedExecHandler;
#[derive(Debug, Deserialize)]
struct ExecCommandArgs {
pub(crate) struct ExecCommandArgs {
cmd: String,
#[serde(default)]
workdir: Option<String>,
pub(crate) workdir: Option<String>,
#[serde(default)]
shell: Option<String>,
#[serde(default = "default_login")]
@@ -238,7 +238,7 @@ impl ToolHandler for UnifiedExecHandler {
}
}
fn get_command(args: &ExecCommandArgs, session_shell: Arc<Shell>) -> Vec<String> {
pub(crate) fn get_command(args: &ExecCommandArgs, session_shell: Arc<Shell>) -> Vec<String> {
let model_shell = args.shell.as_ref().map(|shell_str| {
let mut shell = get_shell_by_model_provided_path(&PathBuf::from(shell_str));
shell.shell_snapshot = crate::shell::empty_shell_snapshot_receiver();


@@ -4,7 +4,7 @@
const { Buffer } = require("node:buffer");
const crypto = require("node:crypto");
const { builtinModules } = require("node:module");
const { builtinModules, createRequire } = require("node:module");
const { createInterface } = require("node:readline");
const { performance } = require("node:perf_hooks");
const path = require("node:path");
@@ -13,7 +13,9 @@ const { inspect, TextDecoder, TextEncoder } = require("node:util");
const vm = require("node:vm");
const { SourceTextModule, SyntheticModule } = vm;
const meriyahPromise = import("./meriyah.umd.min.js").then((m) => m.default ?? m);
const meriyahPromise = import("./meriyah.umd.min.js").then(
(m) => m.default ?? m,
);
// vm contexts start with very few globals. Populate common Node/web globals
// so snippets and dependencies behave like a normal modern JS runtime.
@@ -100,8 +102,12 @@ function toNodeBuiltinSpecifier(specifier) {
}
function isDeniedBuiltin(specifier) {
const normalized = specifier.startsWith("node:") ? specifier.slice(5) : specifier;
return deniedBuiltinModules.has(specifier) || deniedBuiltinModules.has(normalized);
const normalized = specifier.startsWith("node:")
? specifier.slice(5)
: specifier;
return (
deniedBuiltinModules.has(specifier) || deniedBuiltinModules.has(normalized)
);
}
/** @type {Map<string, (msg: any) => void>} */
@@ -111,39 +117,146 @@ const tmpDir = process.env.CODEX_JS_TMP_DIR || process.cwd();
// Explicit long-lived mutable store exposed as `codex.state`. This is useful
// when callers want shared state without relying on lexical binding carry-over.
const state = {};
const nodeModuleDirEnv = process.env.CODEX_JS_REPL_NODE_MODULE_DIRS ?? "";
const moduleSearchBases = (() => {
const bases = [];
const seen = new Set();
for (const entry of nodeModuleDirEnv.split(path.delimiter)) {
const trimmed = entry.trim();
if (!trimmed) {
continue;
}
const resolved = path.isAbsolute(trimmed)
? trimmed
: path.resolve(process.cwd(), trimmed);
const base =
path.basename(resolved) === "node_modules"
? path.dirname(resolved)
: resolved;
if (seen.has(base)) {
continue;
}
seen.add(base);
bases.push(base);
}
const cwd = process.cwd();
if (!seen.has(cwd)) {
bases.push(cwd);
}
return bases;
})();
const importResolveConditions = new Set(["node", "import"]);
const requireByBase = new Map();
function getRequireForBase(base) {
let req = requireByBase.get(base);
if (!req) {
req = createRequire(path.join(base, "__codex_js_repl__.cjs"));
requireByBase.set(base, req);
}
return req;
}
function isModuleNotFoundError(err) {
return (
err?.code === "MODULE_NOT_FOUND" || err?.code === "ERR_MODULE_NOT_FOUND"
);
}
function isWithinBaseNodeModules(base, resolvedPath) {
const nodeModulesRoot = path.resolve(base, "node_modules");
const relative = path.relative(nodeModulesRoot, resolvedPath);
return (
relative !== "" && !relative.startsWith("..") && !path.isAbsolute(relative)
);
}
function isBarePackageSpecifier(specifier) {
if (
typeof specifier !== "string" ||
!specifier ||
specifier.trim() !== specifier
) {
return false;
}
if (specifier.startsWith("./") || specifier.startsWith("../")) {
return false;
}
if (specifier.startsWith("/") || specifier.startsWith("\\")) {
return false;
}
if (path.isAbsolute(specifier)) {
return false;
}
if (/^[a-zA-Z][a-zA-Z\d+.-]*:/.test(specifier)) {
return false;
}
if (specifier.includes("\\")) {
return false;
}
return true;
}
function resolveBareSpecifier(specifier) {
let firstResolutionError = null;
for (const base of moduleSearchBases) {
try {
const resolved = getRequireForBase(base).resolve(specifier, {
conditions: importResolveConditions,
});
if (isWithinBaseNodeModules(base, resolved)) {
return resolved;
}
// Ignore resolutions that escape this base via parent node_modules lookup.
} catch (err) {
if (isModuleNotFoundError(err)) {
continue;
}
if (!firstResolutionError) {
firstResolutionError = err;
}
}
}
if (firstResolutionError) {
throw firstResolutionError;
}
return null;
}
function resolveSpecifier(specifier) {
if (specifier.startsWith("node:") || builtinModuleSet.has(specifier)) {
if (isDeniedBuiltin(specifier)) {
throw new Error(`Importing module "${specifier}" is not allowed in js_repl`);
throw new Error(
`Importing module "${specifier}" is not allowed in js_repl`,
);
}
return { kind: "builtin", specifier: toNodeBuiltinSpecifier(specifier) };
}
if (specifier.startsWith("file:")) {
return { kind: "url", url: specifier };
if (!isBarePackageSpecifier(specifier)) {
throw new Error(
`Unsupported import specifier "${specifier}" in js_repl. Use a package name like "lodash" or "@scope/pkg".`,
);
}
if (specifier.startsWith("./") || specifier.startsWith("../") || path.isAbsolute(specifier)) {
return { kind: "path", path: path.resolve(process.cwd(), specifier) };
const resolvedBare = resolveBareSpecifier(specifier);
if (!resolvedBare) {
throw new Error(`Module not found: ${specifier}`);
}
return { kind: "bare", specifier };
return { kind: "path", path: resolvedBare };
}
function importResolved(resolved) {
if (resolved.kind === "builtin") {
return import(resolved.specifier);
}
if (resolved.kind === "url") {
return import(resolved.url);
}
if (resolved.kind === "path") {
return import(pathToFileURL(resolved.path).href);
}
if (resolved.kind === "bare") {
return import(resolved.specifier);
}
throw new Error(`Unsupported module resolution kind: ${resolved.kind}`);
}
@@ -196,13 +309,24 @@ function collectBindings(ast) {
} else if (stmt.type === "ClassDeclaration" && stmt.id) {
map.set(stmt.id.name, "class");
} else if (stmt.type === "ForStatement") {
if (stmt.init && stmt.init.type === "VariableDeclaration" && stmt.init.kind === "var") {
if (
stmt.init &&
stmt.init.type === "VariableDeclaration" &&
stmt.init.kind === "var"
) {
for (const decl of stmt.init.declarations) {
collectPatternNames(decl.id, "var", map);
}
}
} else if (stmt.type === "ForInStatement" || stmt.type === "ForOfStatement") {
if (stmt.left && stmt.left.type === "VariableDeclaration" && stmt.left.kind === "var") {
} else if (
stmt.type === "ForInStatement" ||
stmt.type === "ForOfStatement"
) {
if (
stmt.left &&
stmt.left.type === "VariableDeclaration" &&
stmt.left.kind === "var"
) {
for (const decl of stmt.left.declarations) {
collectPatternNames(decl.id, "var", map);
}
@@ -230,7 +354,8 @@ async function buildModuleSource(code) {
prelude += 'import * as __prev from "@prev";\n';
prelude += priorBindings
.map((b) => {
const keyword = b.kind === "var" ? "var" : b.kind === "const" ? "const" : "let";
const keyword =
b.kind === "var" ? "var" : b.kind === "const" ? "const" : "let";
return `${keyword} ${b.name} = __prev.${b.name};`;
})
.join("\n");
@@ -246,9 +371,14 @@ async function buildModuleSource(code) {
}
// Export the merged binding set so the next cell can import it through @prev.
const exportNames = Array.from(mergedBindings.keys());
const exportStmt = exportNames.length ? `\nexport { ${exportNames.join(", ")} };` : "";
const exportStmt = exportNames.length
? `\nexport { ${exportNames.join(", ")} };`
: "";
const nextBindings = Array.from(mergedBindings, ([name, kind]) => ({ name, kind }));
const nextBindings = Array.from(mergedBindings, ([name, kind]) => ({
name,
kind,
}));
return { source: `${prelude}${code}${exportStmt}`, nextBindings };
}
@@ -259,7 +389,9 @@ function send(message) {
function formatLog(args) {
return args
.map((arg) => (typeof arg === "string" ? arg : inspect(arg, { depth: 4, colors: false })))
.map((arg) =>
typeof arg === "string" ? arg : inspect(arg, { depth: 4, colors: false }),
)
.join(" ");
}
@@ -352,7 +484,10 @@ async function handleExec(message) {
exportNames,
function initSynthetic() {
for (const binding of previousBindings) {
this.setExport(binding.name, previousModule.namespace[binding.name]);
this.setExport(
binding.name,
previousModule.namespace[binding.name],
);
}
},
{ context },


@@ -52,7 +52,7 @@ const JS_REPL_MODEL_DIAG_ERROR_MAX_BYTES: usize = 256;
/// Per-task js_repl handle stored on the turn context.
pub(crate) struct JsReplHandle {
node_path: Option<PathBuf>,
codex_home: PathBuf,
node_module_dirs: Vec<PathBuf>,
cell: OnceCell<Arc<JsReplManager>>,
}
@@ -63,10 +63,13 @@ impl fmt::Debug for JsReplHandle {
}
impl JsReplHandle {
pub(crate) fn with_node_path(node_path: Option<PathBuf>, codex_home: PathBuf) -> Self {
pub(crate) fn with_node_path(
node_path: Option<PathBuf>,
node_module_dirs: Vec<PathBuf>,
) -> Self {
Self {
node_path,
codex_home,
node_module_dirs,
cell: OnceCell::new(),
}
}
@@ -74,7 +77,7 @@ impl JsReplHandle {
pub(crate) async fn manager(&self) -> Result<Arc<JsReplManager>, FunctionCallError> {
self.cell
.get_or_try_init(|| async {
JsReplManager::new(self.node_path.clone(), self.codex_home.clone()).await
JsReplManager::new(self.node_path.clone(), self.node_module_dirs.clone()).await
})
.await
.cloned()
@@ -264,7 +267,7 @@ fn with_model_kernel_failure_message(
pub struct JsReplManager {
node_path: Option<PathBuf>,
codex_home: PathBuf,
node_module_dirs: Vec<PathBuf>,
tmp_dir: tempfile::TempDir,
kernel: Mutex<Option<KernelState>>,
exec_lock: Arc<tokio::sync::Semaphore>,
@@ -274,7 +277,7 @@ pub struct JsReplManager {
impl JsReplManager {
async fn new(
node_path: Option<PathBuf>,
codex_home: PathBuf,
node_module_dirs: Vec<PathBuf>,
) -> Result<Arc<Self>, FunctionCallError> {
let tmp_dir = tempfile::tempdir().map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to create js_repl temp dir: {err}"))
@@ -282,7 +285,7 @@ impl JsReplManager {
let manager = Arc::new(Self {
node_path,
codex_home,
node_module_dirs,
tmp_dir,
kernel: Mutex::new(None),
exec_lock: Arc::new(tokio::sync::Semaphore::new(1)),
@@ -565,13 +568,15 @@ impl JsReplManager {
"CODEX_JS_TMP_DIR".to_string(),
self.tmp_dir.path().to_string_lossy().to_string(),
);
env.insert(
"CODEX_JS_REPL_HOME".to_string(),
self.codex_home
.join("js_repl")
.to_string_lossy()
.to_string(),
);
let node_module_dirs_key = "CODEX_JS_REPL_NODE_MODULE_DIRS";
if !self.node_module_dirs.is_empty() && !env.contains_key(node_module_dirs_key) {
let joined = std::env::join_paths(&self.node_module_dirs)
.map_err(|err| format!("failed to join js_repl_node_module_dirs: {err}"))?;
env.insert(
node_module_dirs_key.to_string(),
joined.to_string_lossy().to_string(),
);
}
let spec = CommandSpec {
program: node_path.to_string_lossy().to_string(),
@@ -1290,6 +1295,9 @@ mod tests {
use codex_protocol::models::ResponseInputItem;
use codex_protocol::openai_models::InputModality;
use pretty_assertions::assert_eq;
use std::fs;
use std::path::Path;
use tempfile::tempdir;
#[test]
fn node_version_parses_v_prefix_and_suffix() {
@@ -1463,7 +1471,7 @@ mod tests {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn reset_waits_for_exec_lock_before_clearing_exec_tool_calls() {
let manager = JsReplManager::new(None, PathBuf::from("."))
let manager = JsReplManager::new(None, Vec::new())
.await
.expect("manager should initialize");
let permit = manager
@@ -1503,7 +1511,7 @@ mod tests {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn reset_clears_inflight_exec_tool_calls_without_waiting() {
let manager = JsReplManager::new(None, std::env::temp_dir())
let manager = JsReplManager::new(None, Vec::new())
.await
.expect("manager should initialize");
let exec_id = Uuid::new_v4().to_string();
@@ -1536,7 +1544,7 @@ mod tests {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn reset_aborts_inflight_exec_tool_tasks() {
let manager = JsReplManager::new(None, std::env::temp_dir())
let manager = JsReplManager::new(None, Vec::new())
.await
.expect("manager should initialize");
let exec_id = Uuid::new_v4().to_string();
@@ -1582,6 +1590,22 @@ mod tests {
found >= required
}
fn write_js_repl_test_package(base: &Path, name: &str, value: &str) -> anyhow::Result<()> {
let pkg_dir = base.join("node_modules").join(name);
fs::create_dir_all(&pkg_dir)?;
fs::write(
pkg_dir.join("package.json"),
format!(
"{{\n \"name\": \"{name}\",\n \"version\": \"1.0.0\",\n \"type\": \"module\",\n \"exports\": {{\n \"import\": \"./index.js\"\n }}\n}}\n"
),
)?;
fs::write(
pkg_dir.join("index.js"),
format!("export const value = \"{value}\";\n"),
)?;
Ok(())
}
#[tokio::test]
async fn js_repl_persists_top_level_bindings_and_supports_tla() -> anyhow::Result<()> {
if !can_run_js_repl_runtime_tests().await {
@@ -2028,4 +2052,205 @@ console.log(out.output?.body?.text ?? "");
);
Ok(())
}
#[tokio::test]
async fn js_repl_prefers_env_node_module_dirs_over_config() -> anyhow::Result<()> {
if !can_run_js_repl_runtime_tests().await {
return Ok(());
}
let env_base = tempdir()?;
write_js_repl_test_package(env_base.path(), "repl_probe", "env")?;
let config_base = tempdir()?;
let cwd_dir = tempdir()?;
let (session, mut turn) = make_session_and_context().await;
turn.shell_environment_policy.r#set.insert(
"CODEX_JS_REPL_NODE_MODULE_DIRS".to_string(),
env_base.path().to_string_lossy().to_string(),
);
turn.cwd = cwd_dir.path().to_path_buf();
turn.js_repl = Arc::new(JsReplHandle::with_node_path(
turn.config.js_repl_node_path.clone(),
vec![config_base.path().to_path_buf()],
));
let session = Arc::new(session);
let turn = Arc::new(turn);
let tracker = Arc::new(tokio::sync::Mutex::new(TurnDiffTracker::default()));
let manager = turn.js_repl.manager().await?;
let result = manager
.execute(
session,
turn,
tracker,
JsReplArgs {
code: "const mod = await import(\"repl_probe\"); console.log(mod.value);"
.to_string(),
timeout_ms: Some(10_000),
},
)
.await?;
assert!(result.output.contains("env"));
Ok(())
}
#[tokio::test]
async fn js_repl_resolves_from_first_config_dir() -> anyhow::Result<()> {
if !can_run_js_repl_runtime_tests().await {
return Ok(());
}
let first_base = tempdir()?;
let second_base = tempdir()?;
write_js_repl_test_package(first_base.path(), "repl_probe", "first")?;
write_js_repl_test_package(second_base.path(), "repl_probe", "second")?;
let cwd_dir = tempdir()?;
let (session, mut turn) = make_session_and_context().await;
turn.shell_environment_policy
.r#set
.remove("CODEX_JS_REPL_NODE_MODULE_DIRS");
turn.cwd = cwd_dir.path().to_path_buf();
turn.js_repl = Arc::new(JsReplHandle::with_node_path(
turn.config.js_repl_node_path.clone(),
vec![
first_base.path().to_path_buf(),
second_base.path().to_path_buf(),
],
));
let session = Arc::new(session);
let turn = Arc::new(turn);
let tracker = Arc::new(tokio::sync::Mutex::new(TurnDiffTracker::default()));
let manager = turn.js_repl.manager().await?;
let result = manager
.execute(
session,
turn,
tracker,
JsReplArgs {
code: "const mod = await import(\"repl_probe\"); console.log(mod.value);"
.to_string(),
timeout_ms: Some(10_000),
},
)
.await?;
assert!(result.output.contains("first"));
Ok(())
}
#[tokio::test]
async fn js_repl_falls_back_to_cwd_node_modules() -> anyhow::Result<()> {
if !can_run_js_repl_runtime_tests().await {
return Ok(());
}
let config_base = tempdir()?;
let cwd_dir = tempdir()?;
write_js_repl_test_package(cwd_dir.path(), "repl_probe", "cwd")?;
let (session, mut turn) = make_session_and_context().await;
turn.shell_environment_policy
.r#set
.remove("CODEX_JS_REPL_NODE_MODULE_DIRS");
turn.cwd = cwd_dir.path().to_path_buf();
turn.js_repl = Arc::new(JsReplHandle::with_node_path(
turn.config.js_repl_node_path.clone(),
vec![config_base.path().to_path_buf()],
));
let session = Arc::new(session);
let turn = Arc::new(turn);
let tracker = Arc::new(tokio::sync::Mutex::new(TurnDiffTracker::default()));
let manager = turn.js_repl.manager().await?;
let result = manager
.execute(
session,
turn,
tracker,
JsReplArgs {
code: "const mod = await import(\"repl_probe\"); console.log(mod.value);"
.to_string(),
timeout_ms: Some(10_000),
},
)
.await?;
assert!(result.output.contains("cwd"));
Ok(())
}
#[tokio::test]
async fn js_repl_accepts_node_modules_dir_entries() -> anyhow::Result<()> {
if !can_run_js_repl_runtime_tests().await {
return Ok(());
}
let base_dir = tempdir()?;
let cwd_dir = tempdir()?;
write_js_repl_test_package(base_dir.path(), "repl_probe", "normalized")?;
let (session, mut turn) = make_session_and_context().await;
turn.shell_environment_policy
.r#set
.remove("CODEX_JS_REPL_NODE_MODULE_DIRS");
turn.cwd = cwd_dir.path().to_path_buf();
turn.js_repl = Arc::new(JsReplHandle::with_node_path(
turn.config.js_repl_node_path.clone(),
vec![base_dir.path().join("node_modules")],
));
let session = Arc::new(session);
let turn = Arc::new(turn);
let tracker = Arc::new(tokio::sync::Mutex::new(TurnDiffTracker::default()));
let manager = turn.js_repl.manager().await?;
let result = manager
.execute(
session,
turn,
tracker,
JsReplArgs {
code: "const mod = await import(\"repl_probe\"); console.log(mod.value);"
.to_string(),
timeout_ms: Some(10_000),
},
)
.await?;
assert!(result.output.contains("normalized"));
Ok(())
}
#[tokio::test]
async fn js_repl_rejects_path_specifiers() -> anyhow::Result<()> {
if !can_run_js_repl_runtime_tests().await {
return Ok(());
}
let (session, turn) = make_session_and_context().await;
let session = Arc::new(session);
let turn = Arc::new(turn);
let tracker = Arc::new(tokio::sync::Mutex::new(TurnDiffTracker::default()));
let manager = turn.js_repl.manager().await?;
let err = manager
.execute(
session,
turn,
tracker,
JsReplArgs {
code: "await import(\"./local.js\");".to_string(),
timeout_ms: Some(10_000),
},
)
.await
.expect_err("expected path specifier to be rejected");
assert!(err.to_string().contains("Unsupported import specifier"));
Ok(())
}
}


@@ -6,6 +6,7 @@ use std::time::Instant;
use crate::client_common::tools::ToolSpec;
use crate::features::Feature;
use crate::function_tool::FunctionCallError;
use crate::memories::usage::emit_metric_for_tool_read;
use crate::protocol::SandboxPolicy;
use crate::sandbox_tags::sandbox_tag;
use crate::tools::context::ToolInvocation;
@@ -173,6 +174,7 @@ impl ToolRegistry {
Ok((preview, success)) => (preview.clone(), *success),
Err(err) => (err.to_string(), false),
};
emit_metric_for_tool_read(&invocation, success).await;
let hook_abort_error = dispatch_after_tool_use_hook(AfterToolUseHookDispatch {
invocation: &invocation,
output_preview,


@@ -85,6 +85,7 @@ mod model_info_overrides;
mod model_overrides;
mod model_switching;
mod model_tools;
mod model_visible_layout;
mod models_cache_ttl;
mod models_etag_responses;
mod otel;


@@ -0,0 +1,443 @@
#![allow(clippy::expect_used)]
use std::fs;
use std::sync::Arc;
use anyhow::Result;
use codex_core::config::types::Personality;
use codex_core::features::Feature;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::user_input::UserInput;
use core_test_support::context_snapshot;
use core_test_support::context_snapshot::ContextSnapshotOptions;
use core_test_support::context_snapshot::ContextSnapshotRenderMode;
use core_test_support::responses::ResponsesRequest;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::mount_sse_once;
use core_test_support::responses::mount_sse_sequence;
use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
const PRETURN_CONTEXT_DIFF_CWD: &str = "PRETURN_CONTEXT_DIFF_CWD";
fn context_snapshot_options() -> ContextSnapshotOptions {
ContextSnapshotOptions::default()
.render_mode(ContextSnapshotRenderMode::KindWithTextPrefix { max_chars: 96 })
}
fn format_labeled_requests_snapshot(
scenario: &str,
sections: &[(&str, &ResponsesRequest)],
) -> String {
context_snapshot::format_labeled_requests_snapshot(
scenario,
sections,
&context_snapshot_options(),
)
}
fn agents_message_count(request: &ResponsesRequest) -> usize {
request
.message_input_texts("user")
.iter()
.filter(|text| text.starts_with("# AGENTS.md instructions for "))
.count()
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn snapshot_model_visible_layout_turn_overrides() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let responses = mount_sse_sequence(
&server,
vec![
sse(vec![
ev_response_created("resp-1"),
ev_assistant_message("msg-1", "turn one complete"),
ev_completed("resp-1"),
]),
sse(vec![
ev_response_created("resp-2"),
ev_assistant_message("msg-2", "turn two complete"),
ev_completed("resp-2"),
]),
],
)
.await;
let mut builder = test_codex()
.with_model("gpt-5.2-codex")
.with_config(|config| {
config.features.enable(Feature::Personality);
config.personality = Some(Personality::Pragmatic);
});
let test = builder.build(&server).await?;
let preturn_context_diff_cwd = test.cwd_path().join(PRETURN_CONTEXT_DIFF_CWD);
fs::create_dir_all(&preturn_context_diff_cwd)?;
test.codex
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "first turn".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: test.cwd_path().to_path_buf(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::new_read_only_policy(),
model: test.session_configured.model.clone(),
effort: test.config.model_reasoning_effort,
summary: ReasoningSummary::Auto,
collaboration_mode: None,
personality: None,
})
.await?;
wait_for_event(&test.codex, |event| {
matches!(event, EventMsg::TurnComplete(_))
})
.await;
test.codex
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "second turn with context updates".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: preturn_context_diff_cwd,
approval_policy: AskForApproval::OnRequest,
sandbox_policy: SandboxPolicy::new_read_only_policy(),
model: test.session_configured.model.clone(),
effort: test.config.model_reasoning_effort,
summary: ReasoningSummary::Auto,
collaboration_mode: None,
personality: Some(Personality::Friendly),
})
.await?;
wait_for_event(&test.codex, |event| {
matches!(event, EventMsg::TurnComplete(_))
})
.await;
let requests = responses.requests();
assert_eq!(requests.len(), 2, "expected two requests");
insta::assert_snapshot!(
"model_visible_layout_turn_overrides",
format_labeled_requests_snapshot(
"Second turn changes cwd, approval policy, and personality while keeping model constant.",
&[
("First Request (Baseline)", &requests[0]),
("Second Request (Turn Overrides)", &requests[1]),
]
)
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
// TODO(ccunningham): Diff `user_instructions` and emit updates when AGENTS.md content changes
// (for example after cwd changes), then update this test to assert refreshed AGENTS content.
async fn snapshot_model_visible_layout_cwd_change_does_not_refresh_agents() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let responses = mount_sse_sequence(
&server,
vec![
sse(vec![
ev_response_created("resp-1"),
ev_assistant_message("msg-1", "turn one complete"),
ev_completed("resp-1"),
]),
sse(vec![
ev_response_created("resp-2"),
ev_assistant_message("msg-2", "turn two complete"),
ev_completed("resp-2"),
]),
],
)
.await;
let mut builder = test_codex().with_model("gpt-5.2-codex");
let test = builder.build(&server).await?;
let cwd_one = test.cwd_path().join("agents_one");
let cwd_two = test.cwd_path().join("agents_two");
    fs::create_dir_all(&cwd_one)?;
    fs::create_dir_all(&cwd_two)?;
    fs::write(
        cwd_one.join("AGENTS.md"),
        "# AGENTS one\n\n<INSTRUCTIONS>\nTurn one agents instructions.\n</INSTRUCTIONS>\n",
    )?;
    fs::write(
        cwd_two.join("AGENTS.md"),
        "# AGENTS two\n\n<INSTRUCTIONS>\nTurn two agents instructions.\n</INSTRUCTIONS>\n",
    )?;

    test.codex
        .submit(Op::UserTurn {
            items: vec![UserInput::Text {
                text: "first turn in agents_one".into(),
                text_elements: Vec::new(),
            }],
            final_output_json_schema: None,
            cwd: cwd_one.clone(),
            approval_policy: AskForApproval::Never,
            sandbox_policy: SandboxPolicy::new_read_only_policy(),
            model: test.session_configured.model.clone(),
            effort: test.config.model_reasoning_effort,
            summary: ReasoningSummary::Auto,
            collaboration_mode: None,
            personality: None,
        })
        .await?;
    wait_for_event(&test.codex, |event| {
        matches!(event, EventMsg::TurnComplete(_))
    })
    .await;

    test.codex
        .submit(Op::UserTurn {
            items: vec![UserInput::Text {
                text: "second turn in agents_two".into(),
                text_elements: Vec::new(),
            }],
            final_output_json_schema: None,
            cwd: cwd_two,
            approval_policy: AskForApproval::Never,
            sandbox_policy: SandboxPolicy::new_read_only_policy(),
            model: test.session_configured.model.clone(),
            effort: test.config.model_reasoning_effort,
            summary: ReasoningSummary::Auto,
            collaboration_mode: None,
            personality: None,
        })
        .await?;
    wait_for_event(&test.codex, |event| {
        matches!(event, EventMsg::TurnComplete(_))
    })
    .await;

    let requests = responses.requests();
    assert_eq!(requests.len(), 2, "expected two requests");
    assert_eq!(
        agents_message_count(&requests[0]),
        1,
        "expected exactly one AGENTS message in first request"
    );
    assert_eq!(
        agents_message_count(&requests[1]),
        1,
        "expected AGENTS to refresh after cwd change, but current behavior only keeps history AGENTS"
    );

    insta::assert_snapshot!(
        "model_visible_layout_cwd_change_does_not_refresh_agents",
        format_labeled_requests_snapshot(
            "Second turn changes cwd to a directory with different AGENTS.md; current behavior does not emit refreshed AGENTS instructions.",
            &[
                ("First Request (agents_one)", &requests[0]),
                ("Second Request (agents_two cwd)", &requests[1]),
            ]
        )
    );

    Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn snapshot_model_visible_layout_resume_with_personality_change() -> Result<()> {
    skip_if_no_network!(Ok(()));
    let server = start_mock_server().await;

    let mut initial_builder = test_codex().with_config(|config| {
        config.model = Some("gpt-5.2".to_string());
    });
    let initial = initial_builder.build(&server).await?;
    let codex = Arc::clone(&initial.codex);
    let home = initial.home.clone();
    let rollout_path = initial
        .session_configured
        .rollout_path
        .clone()
        .expect("rollout path");

    let initial_mock = mount_sse_once(
        &server,
        sse(vec![
            ev_response_created("resp-initial"),
            ev_assistant_message("msg-1", "recorded before resume"),
            ev_completed("resp-initial"),
        ]),
    )
    .await;
    codex
        .submit(Op::UserInput {
            items: vec![UserInput::Text {
                text: "seed resume history".into(),
                text_elements: Vec::new(),
            }],
            final_output_json_schema: None,
        })
        .await?;
    wait_for_event(&codex, |event| matches!(event, EventMsg::TurnComplete(_))).await;
    let initial_request = initial_mock.single_request();

    let resumed_mock = mount_sse_once(
        &server,
        sse(vec![
            ev_response_created("resp-resume"),
            ev_assistant_message("msg-2", "first resumed turn"),
            ev_completed("resp-resume"),
        ]),
    )
    .await;
    let mut resume_builder = test_codex().with_config(|config| {
        config.model = Some("gpt-5.2-codex".to_string());
        config.features.enable(Feature::Personality);
        config.personality = Some(Personality::Pragmatic);
    });
    let resumed = resume_builder.resume(&server, home, rollout_path).await?;
    resumed
        .codex
        .submit(Op::UserTurn {
            items: vec![UserInput::Text {
                text: "resume and change personality".into(),
                text_elements: Vec::new(),
            }],
            final_output_json_schema: None,
            cwd: resumed.cwd_path().to_path_buf(),
            approval_policy: AskForApproval::Never,
            sandbox_policy: SandboxPolicy::new_read_only_policy(),
            model: resumed.session_configured.model.clone(),
            effort: resumed.config.model_reasoning_effort,
            summary: ReasoningSummary::Auto,
            collaboration_mode: None,
            personality: Some(Personality::Friendly),
        })
        .await?;
    wait_for_event(&resumed.codex, |event| {
        matches!(event, EventMsg::TurnComplete(_))
    })
    .await;
    let resumed_request = resumed_mock.single_request();

    insta::assert_snapshot!(
        "model_visible_layout_resume_with_personality_change",
        format_labeled_requests_snapshot(
            "First post-resume turn where resumed config model differs from rollout and personality changes.",
            &[
                ("Last Request Before Resume", &initial_request),
                ("First Request After Resume", &resumed_request),
            ]
        )
    );

    Ok(())
}

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn snapshot_model_visible_layout_resume_override_matches_rollout_model() -> Result<()> {
    skip_if_no_network!(Ok(()));
    let server = start_mock_server().await;

    let mut initial_builder = test_codex().with_config(|config| {
        config.model = Some("gpt-5.2".to_string());
    });
    let initial = initial_builder.build(&server).await?;
    let codex = Arc::clone(&initial.codex);
    let home = initial.home.clone();
    let rollout_path = initial
        .session_configured
        .rollout_path
        .clone()
        .expect("rollout path");

    let initial_mock = mount_sse_once(
        &server,
        sse(vec![
            ev_response_created("resp-initial"),
            ev_assistant_message("msg-1", "recorded before resume"),
            ev_completed("resp-initial"),
        ]),
    )
    .await;
    codex
        .submit(Op::UserInput {
            items: vec![UserInput::Text {
                text: "seed resume history".into(),
                text_elements: Vec::new(),
            }],
            final_output_json_schema: None,
        })
        .await?;
    wait_for_event(&codex, |event| matches!(event, EventMsg::TurnComplete(_))).await;
    let initial_request = initial_mock.single_request();

    let resumed_mock = mount_sse_once(
        &server,
        sse(vec![
            ev_response_created("resp-resume"),
            ev_assistant_message("msg-2", "first resumed turn"),
            ev_completed("resp-resume"),
        ]),
    )
    .await;
    let mut resume_builder = test_codex().with_config(|config| {
        config.model = Some("gpt-5.2-codex".to_string());
    });
    let resumed = resume_builder.resume(&server, home, rollout_path).await?;
    resumed
        .codex
        .submit(Op::OverrideTurnContext {
            cwd: None,
            approval_policy: None,
            sandbox_policy: None,
            windows_sandbox_level: None,
            model: Some("gpt-5.2".to_string()),
            effort: None,
            summary: None,
            collaboration_mode: None,
            personality: None,
        })
        .await?;
    resumed
        .codex
        .submit(Op::UserInput {
            items: vec![UserInput::Text {
                text: "first resumed turn after model override".into(),
                text_elements: Vec::new(),
            }],
            final_output_json_schema: None,
        })
        .await?;
    wait_for_event(&resumed.codex, |event| {
        matches!(event, EventMsg::TurnComplete(_))
    })
    .await;
    let resumed_request = resumed_mock.single_request();

    insta::assert_snapshot!(
        "model_visible_layout_resume_override_matches_rollout_model",
        format_labeled_requests_snapshot(
            "First post-resume turn where pre-turn override sets model to rollout model; no model-switch update should appear.",
            &[
                ("Last Request Before Resume", &initial_request),
                ("First Request After Resume + Override", &resumed_request),
            ]
        )
    );

    Ok(())
}

View File

@@ -0,0 +1,24 @@
---
source: core/tests/suite/model_visible_layout.rs
expression: "format_labeled_requests_snapshot(\"Second turn changes cwd to a directory with different AGENTS.md; current behavior does not emit refreshed AGENTS instructions.\",\n&[(\"First Request (agents_one)\", &requests[0]),\n(\"Second Request (agents_two cwd)\", &requests[1]),])"
---
Scenario: Second turn changes cwd to a directory with different AGENTS.md; current behavior does not emit refreshed AGENTS instructions.
## First Request (agents_one)
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
04:message/developer:<PERMISSIONS_INSTRUCTIONS>
05:message/user:first turn in agents_one
## Second Request (agents_two cwd)
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
04:message/developer:<PERMISSIONS_INSTRUCTIONS>
05:message/user:first turn in agents_one
06:message/assistant:turn one complete
07:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
08:message/user:second turn in agents_two

View File

@@ -0,0 +1,22 @@
---
source: core/tests/suite/model_visible_layout.rs
expression: "format_labeled_requests_snapshot(\"First post-resume turn where pre-turn override sets model to rollout model; no model-switch update should appear.\",\n&[(\"Last Request Before Resume\", &initial_request),\n(\"First Request After Resume + Override\", &resumed_request),])"
---
Scenario: First post-resume turn where pre-turn override sets model to rollout model; no model-switch update should appear.
## Last Request Before Resume
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/user:seed resume history
## First Request After Resume + Override
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/user:seed resume history
04:message/assistant:recorded before resume
05:message/developer:<PERMISSIONS_INSTRUCTIONS>
06:message/user:<AGENTS_MD>
07:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
08:message/user:first resumed turn after model override

View File

@@ -0,0 +1,26 @@
---
source: core/tests/suite/model_visible_layout.rs
expression: "format_labeled_requests_snapshot(\"First post-resume turn where resumed config model differs from rollout and personality changes.\",\n&[(\"Last Request Before Resume\", &initial_request),\n(\"First Request After Resume\", &resumed_request),])"
---
Scenario: First post-resume turn where resumed config model differs from rollout and personality changes.
## Last Request Before Resume
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/user:seed resume history
## First Request After Resume
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/user:seed resume history
04:message/assistant:recorded before resume
05:message/developer:<PERMISSIONS_INSTRUCTIONS>
06:message/developer:<personality_spec> The user has requested a new communication style. Future messages should adhe...
07:message/user:<AGENTS_MD>
08:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
09:message/developer:<PERMISSIONS_INSTRUCTIONS>
10:message/developer:<model_switch>\nThe user was previously using a different model. Please continue the conversatio...
11:message/developer:<personality_spec> The user has requested a new communication style. Future messages should adhe...
12:message/user:resume and change personality

View File

@@ -0,0 +1,24 @@
---
source: core/tests/suite/model_visible_layout.rs
expression: "format_labeled_requests_snapshot(\"Second turn changes cwd, approval policy, and personality while keeping model constant.\",\n&[(\"First Request (Baseline)\", &requests[0]),\n(\"Second Request (Turn Overrides)\", &requests[1]),])"
---
Scenario: Second turn changes cwd, approval policy, and personality while keeping model constant.
## First Request (Baseline)
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/developer:<PERMISSIONS_INSTRUCTIONS>
04:message/user:first turn
## Second Request (Turn Overrides)
00:message/developer:<PERMISSIONS_INSTRUCTIONS>
01:message/user:<AGENTS_MD>
02:message/user:<ENVIRONMENT_CONTEXT:cwd=<CWD>>
03:message/developer:<PERMISSIONS_INSTRUCTIONS>
04:message/user:first turn
05:message/assistant:turn one complete
06:message/user:<ENVIRONMENT_CONTEXT:cwd=PRETURN_CONTEXT_DIFF_CWD>
07:message/developer:<PERMISSIONS_INSTRUCTIONS>
08:message/developer:<personality_spec> The user has requested a new communication style. Future messages should adhe...
09:message/user:second turn with context updates

View File

@@ -252,6 +252,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
         model_provider: model_provider.clone(),
         codex_linux_sandbox_exe,
         js_repl_node_path: None,
+        js_repl_node_module_dirs: None,
         zsh_path: None,
         base_instructions: None,
         developer_instructions: None,

View File

@@ -435,6 +435,17 @@ impl ReadOnlyAccess {
         matches!(self, ReadOnlyAccess::FullAccess)
     }
 
+    /// Returns true if platform defaults should be included for restricted read access.
+    pub fn include_platform_defaults(&self) -> bool {
+        matches!(
+            self,
+            ReadOnlyAccess::Restricted {
+                include_platform_defaults: true,
+                ..
+            }
+        )
+    }
+
     /// Returns the readable roots for restricted read access.
     ///
     /// For [`ReadOnlyAccess::FullAccess`], returns an empty list because
@@ -442,53 +453,12 @@ impl ReadOnlyAccess {
     pub fn get_readable_roots_with_cwd(&self, cwd: &Path) -> Vec<AbsolutePathBuf> {
         let mut roots: Vec<AbsolutePathBuf> = match self {
             ReadOnlyAccess::FullAccess => return Vec::new(),
-            ReadOnlyAccess::Restricted {
-                include_platform_defaults,
-                readable_roots,
-            } => {
+            ReadOnlyAccess::Restricted { readable_roots, .. } => {
                 let mut roots = readable_roots.clone();
-                if *include_platform_defaults {
-                    #[cfg(target_os = "macos")]
-                    for platform_path in [
-                        "/bin", "/dev", "/etc", "/Library", "/private", "/sbin", "/System", "/tmp",
-                        "/usr",
-                    ] {
-                        #[allow(clippy::expect_used)]
-                        roots.push(
-                            AbsolutePathBuf::from_absolute_path(platform_path)
-                                .expect("platform defaults should be absolute"),
-                        );
-                    }
-                    #[cfg(target_os = "linux")]
-                    for platform_path in ["/bin", "/dev", "/etc", "/lib", "/lib64", "/tmp", "/usr"]
-                    {
-                        #[allow(clippy::expect_used)]
-                        roots.push(
-                            AbsolutePathBuf::from_absolute_path(platform_path)
-                                .expect("platform defaults should be absolute"),
-                        );
-                    }
-                    #[cfg(target_os = "windows")]
-                    for platform_path in [
-                        r"C:\Windows",
-                        r"C:\Program Files",
-                        r"C:\Program Files (x86)",
-                        r"C:\ProgramData",
-                    ] {
-                        #[allow(clippy::expect_used)]
-                        roots.push(
-                            AbsolutePathBuf::from_absolute_path(platform_path)
-                                .expect("platform defaults should be absolute"),
-                        );
-                    }
-                    match AbsolutePathBuf::from_absolute_path(cwd) {
-                        Ok(cwd_root) => roots.push(cwd_root),
-                        Err(err) => {
-                            error!("Ignoring invalid cwd {cwd:?} for sandbox readable root: {err}");
-                        }
-                    }
+                match AbsolutePathBuf::from_absolute_path(cwd) {
+                    Ok(cwd_root) => roots.push(cwd_root),
+                    Err(err) => {
+                        error!("Ignoring invalid cwd {cwd:?} for sandbox readable root: {err}");
+                    }
                 }
                 roots
@@ -653,6 +623,20 @@ impl SandboxPolicy {
         }
     }
 
+    /// Returns true if platform defaults should be included for restricted read access.
+    pub fn include_platform_defaults(&self) -> bool {
+        if self.has_full_disk_read_access() {
+            return false;
+        }
+        match self {
+            SandboxPolicy::ReadOnly { access } => access.include_platform_defaults(),
+            SandboxPolicy::WorkspaceWrite {
+                read_only_access, ..
+            } => read_only_access.include_platform_defaults(),
+            SandboxPolicy::DangerFullAccess | SandboxPolicy::ExternalSandbox { .. } => false,
+        }
+    }
+
     /// Returns the list of readable roots (tailored to the current working
     /// directory) when read access is restricted.
     ///

View File

@@ -2516,11 +2516,7 @@ impl App {
                 .await;
             match apply_result {
                 Ok(()) => {
-                    self.config.tui_status_line = if ids.is_empty() {
-                        None
-                    } else {
-                        Some(ids.clone())
-                    };
+                    self.config.tui_status_line = Some(ids.clone());
                     self.chat_widget.setup_status_line(items);
                 }
                 Err(err) => {

View File

@@ -249,6 +249,8 @@ use strum::IntoEnumIterator;
 const USER_SHELL_COMMAND_HELP_TITLE: &str = "Prefix a command with ! to run it locally";
 const USER_SHELL_COMMAND_HELP_HINT: &str = "Example: !ls";
 const DEFAULT_OPENAI_BASE_URL: &str = "https://api.openai.com/v1";
+const DEFAULT_STATUS_LINE_ITEMS: [&str; 3] =
+    ["model-with-reasoning", "context-remaining", "current-dir"];
 
 // Track information about an in-flight exec command.
 struct RunningCommand {
     command: Vec<String>,
@@ -953,12 +955,11 @@ impl ChatWidget {
     /// Applies status-line item selection from the setup view to in-memory config.
     ///
-    /// An empty selection is normalized to `None` so the status line is fully disabled and the
-    /// behavior matches an unset `tui.status_line` config value.
+    /// An empty selection persists as an explicit empty list.
     pub(crate) fn setup_status_line(&mut self, items: Vec<StatusLineItem>) {
         tracing::info!("status line setup confirmed with items: {items:#?}");
         let ids = items.iter().map(ToString::to_string).collect::<Vec<_>>();
-        self.config.tui_status_line = if ids.is_empty() { None } else { Some(ids) };
+        self.config.tui_status_line = Some(ids);
         self.refresh_status_line();
     }
@@ -2692,13 +2693,9 @@ impl ChatWidget {
         widget
             .bottom_pane
             .set_steer_enabled(widget.config.features.enabled(Feature::Steer));
-        widget.bottom_pane.set_status_line_enabled(
-            widget
-                .config
-                .tui_status_line
-                .as_ref()
-                .is_some_and(|items| !items.is_empty()),
-        );
+        widget
+            .bottom_pane
+            .set_status_line_enabled(!widget.configured_status_line_items().is_empty());
         widget.bottom_pane.set_collaboration_modes_enabled(
             widget.config.features.enabled(Feature::CollaborationModes),
         );
@@ -2859,13 +2856,9 @@ impl ChatWidget {
         widget
             .bottom_pane
             .set_steer_enabled(widget.config.features.enabled(Feature::Steer));
-        widget.bottom_pane.set_status_line_enabled(
-            widget
-                .config
-                .tui_status_line
-                .as_ref()
-                .is_some_and(|items| !items.is_empty()),
-        );
+        widget
+            .bottom_pane
+            .set_status_line_enabled(!widget.configured_status_line_items().is_empty());
         widget.bottom_pane.set_collaboration_modes_enabled(
             widget.config.features.enabled(Feature::CollaborationModes),
         );
@@ -3015,13 +3008,9 @@ impl ChatWidget {
         widget
             .bottom_pane
             .set_steer_enabled(widget.config.features.enabled(Feature::Steer));
-        widget.bottom_pane.set_status_line_enabled(
-            widget
-                .config
-                .tui_status_line
-                .as_ref()
-                .is_some_and(|items| !items.is_empty()),
-        );
+        widget
+            .bottom_pane
+            .set_status_line_enabled(!widget.configured_status_line_items().is_empty());
         widget.bottom_pane.set_collaboration_modes_enabled(
             widget.config.features.enabled(Feature::CollaborationModes),
         );
@@ -4349,8 +4338,9 @@ impl ChatWidget {
     }
 
     fn open_status_line_setup(&mut self) {
+        let configured_status_line_items = self.configured_status_line_items();
         let view = StatusLineSetupView::new(
-            self.config.tui_status_line.as_deref(),
+            Some(configured_status_line_items.as_slice()),
             self.app_event_tx.clone(),
         );
         self.bottom_pane.show_view(Box::new(view));
@@ -4363,10 +4353,7 @@ impl ChatWidget {
         let mut invalid = Vec::new();
         let mut invalid_seen = HashSet::new();
         let mut items = Vec::new();
-        let Some(config_items) = self.config.tui_status_line.as_ref() else {
-            return (items, invalid);
-        };
-        for id in config_items {
+        for id in self.configured_status_line_items() {
             match id.parse::<StatusLineItem>() {
                 Ok(item) => items.push(item),
                 Err(_) => {
@@ -4379,6 +4366,15 @@ impl ChatWidget {
         (items, invalid)
     }
 
+    fn configured_status_line_items(&self) -> Vec<String> {
+        self.config.tui_status_line.clone().unwrap_or_else(|| {
+            DEFAULT_STATUS_LINE_ITEMS
+                .iter()
+                .map(ToString::to_string)
+                .collect()
+        })
+    }
+
     fn status_line_cwd(&self) -> &Path {
         self.current_cwd.as_ref().unwrap_or(&self.config.cwd)
     }

View File

@@ -37,6 +37,19 @@ You can configure an explicit runtime path:
js_repl_node_path = "/absolute/path/to/node"
```
## Module resolution

`js_repl` resolves **bare** specifiers (for example `await import("pkg")`) using an ordered
search path. Path-style specifiers (`./`, `../`, absolute paths, `file:` URLs) are rejected.

Module resolution proceeds in the following order:

1. `CODEX_JS_REPL_NODE_MODULE_DIRS` (PATH-delimited list)
2. `js_repl_node_module_dirs` in config/profile (array of absolute paths)
3. Thread working directory (cwd, always included as the last fallback)

For `CODEX_JS_REPL_NODE_MODULE_DIRS` and `js_repl_node_module_dirs`, resolution is attempted in the order provided, with earlier entries taking precedence.
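As a sketch of step 2 above, a config-file entry might look like the following (the directory paths are hypothetical):

```toml
# Checked after CODEX_JS_REPL_NODE_MODULE_DIRS (if that env var is set).
# Earlier entries win when a bare specifier resolves in more than one directory.
js_repl_node_module_dirs = [
    "/srv/shared/node_modules",
    "/home/me/project/node_modules",
]
```

The PATH-delimited form of step 1 would then look like `CODEX_JS_REPL_NODE_MODULE_DIRS=/opt/override/node_modules:/srv/shared/node_modules` on Unix-like systems, and those entries are consulted before the config-file list.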
## Usage
- `js_repl` is a freeform tool: send raw JavaScript source text.