Compare commits

...

17 Commits

Author SHA1 Message Date
starr-openai
6095ec9839 Load project docs through session environment
Co-authored-by: Codex <noreply@openai.com>
2026-03-17 23:53:19 -07:00
starr-openai
2829c35d25 Add remote workspace remap for exec-server mode
Co-authored-by: Codex <noreply@openai.com>
2026-03-17 23:49:28 -07:00
starr-openai
690b9ce1f8 Harden exec-server unified-exec follow-up
- fall back to local when sandboxed exec cannot be modeled remotely
- use server-issued process ids for remote session continuations
- retain symlink fidelity across fs/readDirectory plumbing
- clean up exited exec-server processes after retention
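The fallback rule in the first bullet can be sketched as follows (the types and function here are illustrative, not the actual codex-rs API):

```rust
// Hypothetical sketch of the remote-vs-local fallback described above.
#[derive(Debug, PartialEq)]
enum Backend {
    Remote,
    Local,
}

struct ExecRequest {
    sandboxed: bool,
}

/// Route a request to the remote exec-server only when its sandbox policy
/// can be expressed remotely; otherwise fall back to the local spawn path.
fn choose_backend(req: &ExecRequest, remote_supports_sandbox: bool) -> Backend {
    if req.sandboxed && !remote_supports_sandbox {
        // Sandboxed exec cannot be modeled remotely: run locally instead.
        Backend::Local
    } else {
        Backend::Remote
    }
}

fn main() {
    let sandboxed = ExecRequest { sandboxed: true };
    let plain = ExecRequest { sandboxed: false };
    assert_eq!(choose_backend(&sandboxed, false), Backend::Local);
    assert_eq!(choose_backend(&plain, false), Backend::Remote);
}
```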

Co-authored-by: Codex <noreply@openai.com>
2026-03-17 21:14:08 -07:00
starr-openai
52dd39bc95 Route unified exec and filesystem tools through exec-server
Add exec-server filesystem RPCs and a core-side remote filesystem client,
then route unified-exec and filesystem-backed tools through that backend
when enabled by config. Also add Docker-backed remote exec integration
coverage for the local codex-exec CLI.

Co-authored-by: Codex <noreply@openai.com>
2026-03-18 03:24:20 +00:00
starr-openai
678dbe28af exec-server: add optional sandbox start config
Add a typed optional sandbox field to process/start so callers can omit sandboxing for the existing direct-spawn path while reserving a host-default mode for future remote materialization. Reject hostDefault for now instead of silently running unsandboxed, and cover both omitted and explicit sandbox payloads in tests.
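A minimal sketch of what such a typed optional sandbox field could look like (the enum and struct names are assumptions; only the `process/start` method and the rejected `hostDefault` mode come from the commit message):

```rust
// Hypothetical shape of the optional sandbox field on process/start.
#[derive(Debug, Clone, Copy)]
enum SandboxMode {
    // Reserved for future remote materialization; not yet accepted.
    HostDefault,
}

struct StartParams {
    sandbox: Option<SandboxMode>, // omitted => existing direct-spawn path
}

/// Reject `hostDefault` for now instead of silently running unsandboxed.
fn validate(params: &StartParams) -> Result<(), String> {
    match params.sandbox {
        Some(SandboxMode::HostDefault) => {
            Err("hostDefault sandbox mode is not supported yet".to_string())
        }
        None => Ok(()),
    }
}

fn main() {
    assert!(validate(&StartParams { sandbox: None }).is_ok());
    assert!(validate(&StartParams { sandbox: Some(SandboxMode::HostDefault) }).is_err());
}
```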

Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:54 +00:00
starr-openai
c0b8f2dfe8 exec-server: tighten retained-output reads
Fix read pagination when max_bytes truncates a response, add a chunking regression covering stdout/stderr retention, warn on retained-output eviction, and note init auth as a pre-trust-boundary TODO.
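The pagination fix can be illustrated with a small sketch (the function name and signature are hypothetical; the point is that the returned offset must advance by the truncated length, not the requested one):

```rust
// Sketch of paginated reads over retained output. `max_bytes` caps each
// response, and the returned offset lets the caller resume the read.
fn read_chunk(retained: &[u8], offset: usize, max_bytes: usize) -> (&[u8], usize) {
    let start = offset.min(retained.len());
    let end = (start + max_bytes).min(retained.len());
    // `end` is the next offset, correct even when max_bytes truncated us.
    (&retained[start..end], end)
}

fn main() {
    let retained = b"hello world";
    let (chunk, next) = read_chunk(retained, 0, 5);
    assert_eq!(chunk, b"hello");
    let (chunk, next) = read_chunk(retained, next, 100);
    assert_eq!(chunk, b" world");
    assert_eq!(next, retained.len());
}
```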

Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:54 +00:00
starr-openai
4a3ae786fd exec-server: make in-process client call handler directly
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:54 +00:00
starr-openai
238840fe08 exec-server: add in-process client mode
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:54 +00:00
starr-openai
30e094e34a codex: address PR review feedback (#14862)
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:54 +00:00
starr-openai
25481592c2 Expand exec-server unit test coverage
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:54 +00:00
starr-openai
11f1182870 Document exec-server design flow and add lifecycle tests
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:54 +00:00
starr-openai
7b7046486f refactor(exec-server): split routing from handler
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:53 +00:00
starr-openai
45820f8aa4 refactor(exec-server): tighten client lifecycle and output model
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:53 +00:00
starr-openai
672222e594 test(exec-server): add unit coverage for transport and handshake
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:53 +00:00
starr-openai
6f2c8896dc refactor(exec-server): split transports from client launch
Separate the transport-neutral JSON-RPC connection and server processor from
local process spawning, add websocket support, and document the new API
shape.

Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:53 +00:00
starr-openai
33a4387bd9 docs(exec-server): add protocol README
Document the standalone exec-server crate, its stdio JSON-RPC
transport, and the current request/response and notification
payloads.
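For illustration, a request over that stdio transport might look like the following JSON-RPC frame (only the `process/start` method name is taken from the later commits; the params shape shown here is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "process/start",
  "params": {
    "command": ["echo", "hello"]
  }
}
```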

Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:53 +00:00
starr-openai
0c6d47a0b5 Add codex-exec-server crate
This adds the standalone exec-server stdio JSON-RPC crate and its
smoke tests without wiring it into the CLI or unified-exec yet.

Co-authored-by: Codex <noreply@openai.com>
2026-03-18 00:39:53 +00:00
58 changed files with 8274 additions and 147 deletions

codex-rs/Cargo.lock generated

@@ -1842,6 +1842,7 @@ dependencies = [
"codex-config",
"codex-connectors",
"codex-environment",
"codex-exec-server",
"codex-execpolicy",
"codex-file-search",
"codex-git",
@@ -2001,6 +2002,27 @@ dependencies = [
"wiremock",
]
[[package]]
name = "codex-exec-server"
version = "0.0.0"
dependencies = [
"anyhow",
"base64 0.22.1",
"clap",
"codex-app-server-protocol",
"codex-environment",
"codex-utils-cargo-bin",
"codex-utils-pty",
"futures",
"pretty_assertions",
"serde",
"serde_json",
"thiserror 2.0.18",
"tokio",
"tokio-tungstenite",
"tracing",
]
[[package]]
name = "codex-execpolicy"
version = "0.0.0"


@@ -26,6 +26,7 @@ members = [
"hooks",
"secrets",
"exec",
"exec-server",
"execpolicy",
"execpolicy-legacy",
"keyring-store",


@@ -7466,12 +7466,17 @@
"isFile": {
"description": "Whether this entry resolves to a regular file.",
"type": "boolean"
},
"isSymlink": {
"description": "Whether this entry is a symlink.",
"type": "boolean"
}
},
"required": [
"fileName",
"isDirectory",
"isFile"
"isFile",
"isSymlink"
],
"type": "object"
},


@@ -4110,12 +4110,17 @@
"isFile": {
"description": "Whether this entry resolves to a regular file.",
"type": "boolean"
},
"isSymlink": {
"description": "Whether this entry is a symlink.",
"type": "boolean"
}
},
"required": [
"fileName",
"isDirectory",
"isFile"
"isFile",
"isSymlink"
],
"type": "object"
},


@@ -15,12 +15,17 @@
"isFile": {
"description": "Whether this entry resolves to a regular file.",
"type": "boolean"
},
"isSymlink": {
"description": "Whether this entry is a symlink.",
"type": "boolean"
}
},
"required": [
"fileName",
"isDirectory",
"isFile"
"isFile",
"isSymlink"
],
"type": "object"
}


@@ -17,4 +17,8 @@ isDirectory: boolean,
/**
* Whether this entry resolves to a regular file.
*/
isFile: boolean, };
isFile: boolean,
/**
* Whether this entry is a symlink.
*/
isSymlink: boolean, };


@@ -2210,6 +2210,8 @@ pub struct FsReadDirectoryEntry {
pub is_directory: bool,
/// Whether this entry resolves to a regular file.
pub is_file: bool,
/// Whether this entry is a symlink.
pub is_symlink: bool,
}
/// Directory entries returned by `fs/readDirectory`.


@@ -34,7 +34,7 @@ pub(crate) struct FsApi {
impl Default for FsApi {
fn default() -> Self {
Self {
file_system: Arc::new(Environment.get_filesystem()),
file_system: Environment::default().get_filesystem(),
}
}
}
@@ -119,6 +119,7 @@ impl FsApi {
file_name: entry.file_name,
is_directory: entry.is_directory,
is_file: entry.is_file,
is_symlink: entry.is_symlink,
})
.collect(),
})


@@ -229,11 +229,13 @@ async fn fs_methods_cover_current_fs_utils_surface() -> Result<()> {
file_name: "nested".to_string(),
is_directory: true,
is_file: false,
is_symlink: false,
},
FsReadDirectoryEntry {
file_name: "root.txt".to_string(),
is_directory: false,
is_file: true,
is_symlink: false,
},
]
);


@@ -35,6 +35,7 @@ codex-client = { workspace = true }
codex-connectors = { workspace = true }
codex-config = { workspace = true }
codex-environment = { workspace = true }
codex-exec-server = { path = "../exec-server" }
codex-shell-command = { workspace = true }
codex-skills = { workspace = true }
codex-execpolicy = { workspace = true }


@@ -321,6 +321,21 @@
"experimental_compact_prompt_file": {
"$ref": "#/definitions/AbsolutePathBuf"
},
"experimental_supported_tools": {
"items": {
"type": "string"
},
"type": "array"
},
"experimental_unified_exec_exec_server_websocket_url": {
"type": "string"
},
"experimental_unified_exec_spawn_local_exec_server": {
"type": "boolean"
},
"experimental_unified_exec_use_exec_server": {
"type": "boolean"
},
"experimental_use_freeform_apply_patch": {
"type": "boolean"
},
@@ -1879,6 +1894,25 @@
"description": "Experimental / do not use. Replaces the synthesized realtime startup context appended to websocket session instructions. An empty string disables startup context injection entirely.",
"type": "string"
},
"experimental_supported_tools": {
"description": "Additional experimental tools to expose regardless of the selected model's advertised tool support.",
"items": {
"type": "string"
},
"type": "array"
},
"experimental_unified_exec_exec_server_websocket_url": {
"description": "Optional websocket URL for connecting to an existing `codex-exec-server`.",
"type": "string"
},
"experimental_unified_exec_spawn_local_exec_server": {
"description": "When `true`, start a session-scoped local `codex-exec-server` subprocess during session startup and route unified-exec calls through it.",
"type": "boolean"
},
"experimental_unified_exec_use_exec_server": {
"description": "When `true`, route unified-exec process launches through `codex-exec-server` instead of spawning them directly in-process.",
"type": "boolean"
},
"experimental_use_freeform_apply_patch": {
"type": "boolean"
},
@@ -2479,4 +2513,4 @@
},
"title": "ConfigToml",
"type": "object"
}
}
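Under the `ConfigToml` schema above, enabling the exec-server routing might look like this fragment of `config.toml` (key names come from the schema; the values are examples only):

```toml
# Route unified-exec through a session-scoped, locally spawned exec-server.
experimental_unified_exec_use_exec_server = true
experimental_unified_exec_spawn_local_exec_server = true

# Or connect to an already-running server instead of spawning one:
# experimental_unified_exec_exec_server_websocket_url = "ws://127.0.0.1:8765"
```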


@@ -225,7 +225,7 @@ use crate::network_policy_decision::execpolicy_network_rule_amendment;
use crate::plugins::PluginsManager;
use crate::plugins::build_plugin_injections;
use crate::plugins::render_plugins_section;
use crate::project_doc::get_user_instructions;
use crate::project_doc::get_user_instructions_with_environment;
use crate::protocol::AgentMessageContentDeltaEvent;
use crate::protocol::AgentReasoningSectionBreakEvent;
use crate::protocol::ApplyPatchApprovalRequestEvent;
@@ -308,6 +308,7 @@ use crate::turn_timing::TurnTimingState;
use crate::turn_timing::record_turn_ttfm_metric;
use crate::turn_timing::record_turn_ttft_metric;
use crate::unified_exec::UnifiedExecProcessManager;
use crate::unified_exec::session_execution_backends_for_config;
use crate::util::backoff;
use crate::windows_sandbox::WindowsSandboxLevelExt;
use codex_async_utils::OrCancelExt;
@@ -474,7 +475,7 @@ impl Codex {
config.startup_warnings.push(message);
}
let user_instructions = get_user_instructions(&config).await;
let user_instructions = config.user_instructions.clone();
let exec_policy = if crate::guardian::is_guardian_reviewer_source(&session_source) {
// Guardian review should rely on the built-in shell safety checks,
@@ -1761,6 +1762,11 @@ impl Session {
});
}
let session_execution_backends =
session_execution_backends_for_config(config.as_ref(), None).await?;
session_configuration.user_instructions =
get_user_instructions_with_environment(config.as_ref(), &session_execution_backends.environment)
.await;
let services = SessionServices {
// Initialize the MCP connection manager with an uninitialized
// instance. It will be replaced with one created via
@@ -1773,8 +1779,9 @@ impl Session {
&config.permissions.approval_policy,
))),
mcp_startup_cancellation_token: Mutex::new(CancellationToken::new()),
unified_exec_manager: UnifiedExecProcessManager::new(
unified_exec_manager: UnifiedExecProcessManager::with_session_factory(
config.background_terminal_max_timeout,
session_execution_backends.unified_exec_session_factory,
),
shell_zsh_path: config.zsh_path.clone(),
main_execve_wrapper_exe: config.main_execve_wrapper_exe.clone(),
@@ -1815,7 +1822,7 @@ impl Session {
code_mode_service: crate::tools::code_mode::CodeModeService::new(
config.js_repl_node_path.clone(),
),
environment: Arc::new(Environment),
environment: session_execution_backends.environment,
};
let js_repl = Arc::new(JsReplHandle::with_node_path(
config.js_repl_node_path.clone(),


@@ -2466,7 +2466,7 @@ pub(crate) async fn make_session_and_context() -> (Session, TurnContext) {
true,
));
let network_approval = Arc::new(NetworkApprovalService::default());
let environment = Arc::new(codex_environment::Environment);
let environment = Arc::new(codex_environment::Environment::default());
let file_watcher = Arc::new(FileWatcher::noop());
let services = SessionServices {
@@ -3261,7 +3261,7 @@ pub(crate) async fn make_session_and_context_with_dynamic_tools_and_rx(
true,
));
let network_approval = Arc::new(NetworkApprovalService::default());
let environment = Arc::new(codex_environment::Environment);
let environment = Arc::new(codex_environment::Environment::default());
let file_watcher = Arc::new(FileWatcher::noop());
let services = SessionServices {


@@ -1696,6 +1696,70 @@ fn legacy_toggles_map_to_features() -> std::io::Result<()> {
assert!(config.include_apply_patch_tool);
assert!(config.use_experimental_unified_exec_tool);
assert!(!config.experimental_unified_exec_use_exec_server);
assert!(!config.experimental_unified_exec_spawn_local_exec_server);
assert_eq!(
config.experimental_unified_exec_exec_server_websocket_url,
None
);
assert!(config.experimental_supported_tools.is_empty());
Ok(())
}
#[test]
fn unified_exec_exec_server_flags_load_from_config() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let cfg = ConfigToml {
experimental_unified_exec_use_exec_server: Some(true),
experimental_unified_exec_spawn_local_exec_server: Some(true),
experimental_unified_exec_exec_server_websocket_url: Some(
"ws://127.0.0.1:8765".to_string(),
),
experimental_unified_exec_exec_server_workspace_root: Some(
AbsolutePathBuf::try_from("/home/dev-user/codex").unwrap(),
),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert!(config.experimental_unified_exec_use_exec_server);
assert!(config.experimental_unified_exec_spawn_local_exec_server);
assert_eq!(
config.experimental_unified_exec_exec_server_websocket_url,
Some("ws://127.0.0.1:8765".to_string())
);
assert_eq!(
config.experimental_unified_exec_exec_server_workspace_root,
Some(AbsolutePathBuf::try_from("/home/dev-user/codex").unwrap())
);
Ok(())
}
#[test]
fn experimental_supported_tools_load_from_config() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let cfg = ConfigToml {
experimental_supported_tools: Some(vec!["read_file".to_string(), "list_dir".to_string()]),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert_eq!(
config.experimental_supported_tools,
vec!["read_file".to_string(), "list_dir".to_string()]
);
Ok(())
}
@@ -4265,6 +4329,10 @@ fn test_precedence_fixture_with_o3_profile() -> std::io::Result<()> {
web_search_mode: Constrained::allow_any(WebSearchMode::Cached),
web_search_config: None,
use_experimental_unified_exec_tool: !cfg!(windows),
experimental_unified_exec_use_exec_server: false,
experimental_unified_exec_spawn_local_exec_server: false,
experimental_unified_exec_exec_server_websocket_url: None,
experimental_supported_tools: Vec::new(),
background_terminal_max_timeout: DEFAULT_MAX_BACKGROUND_TERMINAL_TIMEOUT_MS,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults().into(),
@@ -4404,6 +4472,10 @@ fn test_precedence_fixture_with_gpt3_profile() -> std::io::Result<()> {
web_search_mode: Constrained::allow_any(WebSearchMode::Cached),
web_search_config: None,
use_experimental_unified_exec_tool: !cfg!(windows),
experimental_unified_exec_use_exec_server: false,
experimental_unified_exec_spawn_local_exec_server: false,
experimental_unified_exec_exec_server_websocket_url: None,
experimental_supported_tools: Vec::new(),
background_terminal_max_timeout: DEFAULT_MAX_BACKGROUND_TERMINAL_TIMEOUT_MS,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults().into(),
@@ -4541,6 +4613,10 @@ fn test_precedence_fixture_with_zdr_profile() -> std::io::Result<()> {
web_search_mode: Constrained::allow_any(WebSearchMode::Cached),
web_search_config: None,
use_experimental_unified_exec_tool: !cfg!(windows),
experimental_unified_exec_use_exec_server: false,
experimental_unified_exec_spawn_local_exec_server: false,
experimental_unified_exec_exec_server_websocket_url: None,
experimental_supported_tools: Vec::new(),
background_terminal_max_timeout: DEFAULT_MAX_BACKGROUND_TERMINAL_TIMEOUT_MS,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults().into(),
@@ -4664,6 +4740,10 @@ fn test_precedence_fixture_with_gpt5_profile() -> std::io::Result<()> {
web_search_mode: Constrained::allow_any(WebSearchMode::Cached),
web_search_config: None,
use_experimental_unified_exec_tool: !cfg!(windows),
experimental_unified_exec_use_exec_server: false,
experimental_unified_exec_spawn_local_exec_server: false,
experimental_unified_exec_exec_server_websocket_url: None,
experimental_supported_tools: Vec::new(),
background_terminal_max_timeout: DEFAULT_MAX_BACKGROUND_TERMINAL_TIMEOUT_MS,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults().into(),


@@ -533,6 +533,27 @@ pub struct Config {
/// If set to `true`, use only the experimental unified exec tool.
pub use_experimental_unified_exec_tool: bool,
/// When `true`, route unified-exec process launches through `codex-exec-server`
/// instead of spawning them directly in-process.
pub experimental_unified_exec_use_exec_server: bool,
/// When `true`, start a session-scoped local `codex-exec-server` subprocess
/// during session startup and route unified-exec calls through it.
pub experimental_unified_exec_spawn_local_exec_server: bool,
/// When set, connect unified-exec and remote filesystem calls to an existing
/// `codex-exec-server` websocket endpoint instead of using the in-process or
/// spawned-local variants.
pub experimental_unified_exec_exec_server_websocket_url: Option<String>,
/// When set, remap remote exec-server cwd and filesystem paths from the
/// local session cwd root into this executor-visible workspace root.
pub experimental_unified_exec_exec_server_workspace_root: Option<AbsolutePathBuf>,
/// Additional experimental tools to expose regardless of the model catalog's
/// advertised tool support.
pub experimental_supported_tools: Vec<String>,
/// Maximum poll window for background terminal output (`write_stdin`), in milliseconds.
/// Default: `300000` (5 minutes).
pub background_terminal_max_timeout: u64,
@@ -1316,6 +1337,25 @@ pub struct ConfigToml {
/// Default: `300000` (5 minutes).
pub background_terminal_max_timeout: Option<u64>,
/// When `true`, route unified-exec process launches through `codex-exec-server`
/// instead of spawning them directly in-process.
pub experimental_unified_exec_use_exec_server: Option<bool>,
/// When `true`, start a session-scoped local `codex-exec-server` subprocess
/// during session startup and route unified-exec calls through it.
pub experimental_unified_exec_spawn_local_exec_server: Option<bool>,
/// Optional websocket URL for connecting to an existing `codex-exec-server`.
pub experimental_unified_exec_exec_server_websocket_url: Option<String>,
/// Optional absolute path to the executor-visible workspace root that
/// corresponds to the local session cwd.
pub experimental_unified_exec_exec_server_workspace_root: Option<AbsolutePathBuf>,
/// Additional experimental tools to expose regardless of the selected
/// model's advertised tool support.
pub experimental_supported_tools: Option<Vec<String>>,
/// Optional absolute path to the Node runtime used by `js_repl`.
pub js_repl_node_path: Option<AbsolutePathBuf>,
@@ -2439,6 +2479,37 @@ impl Config {
let include_apply_patch_tool_flag = features.enabled(Feature::ApplyPatchFreeform);
let use_experimental_unified_exec_tool = features.enabled(Feature::UnifiedExec);
let experimental_unified_exec_use_exec_server = config_profile
.experimental_unified_exec_use_exec_server
.or(cfg.experimental_unified_exec_use_exec_server)
.unwrap_or(false);
let experimental_unified_exec_spawn_local_exec_server = config_profile
.experimental_unified_exec_spawn_local_exec_server
.or(cfg.experimental_unified_exec_spawn_local_exec_server)
.unwrap_or(false);
let experimental_unified_exec_exec_server_websocket_url = config_profile
.experimental_unified_exec_exec_server_websocket_url
.clone()
.or(cfg
.experimental_unified_exec_exec_server_websocket_url
.clone())
.and_then(|value| {
let trimmed = value.trim();
if trimmed.is_empty() {
None
} else {
Some(trimmed.to_string())
}
});
let experimental_unified_exec_exec_server_workspace_root = config_profile
.experimental_unified_exec_exec_server_workspace_root
.clone()
.or(cfg.experimental_unified_exec_exec_server_workspace_root.clone());
let experimental_supported_tools = config_profile
.experimental_supported_tools
.clone()
.or(cfg.experimental_supported_tools.clone())
.unwrap_or_default();
let forced_chatgpt_workspace_id =
cfg.forced_chatgpt_workspace_id.as_ref().and_then(|value| {
@@ -2730,6 +2801,11 @@ impl Config {
web_search_mode: constrained_web_search_mode.value,
web_search_config,
use_experimental_unified_exec_tool,
experimental_unified_exec_use_exec_server,
experimental_unified_exec_spawn_local_exec_server,
experimental_unified_exec_exec_server_websocket_url,
experimental_unified_exec_exec_server_workspace_root,
experimental_supported_tools,
background_terminal_max_timeout,
ghost_snapshot,
features,


@@ -49,6 +49,11 @@ pub struct ConfigProfile {
pub experimental_compact_prompt_file: Option<AbsolutePathBuf>,
pub include_apply_patch_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_unified_exec_use_exec_server: Option<bool>,
pub experimental_unified_exec_spawn_local_exec_server: Option<bool>,
pub experimental_unified_exec_exec_server_websocket_url: Option<String>,
pub experimental_unified_exec_exec_server_workspace_root: Option<AbsolutePathBuf>,
pub experimental_supported_tools: Option<Vec<String>>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub tools_view_image: Option<bool>,
pub tools: Option<ToolsToml>,


@@ -0,0 +1,167 @@
use async_trait::async_trait;
use base64::Engine as _;
use base64::engine::general_purpose::STANDARD;
use codex_app_server_protocol::FsCopyParams;
use codex_app_server_protocol::FsCreateDirectoryParams;
use codex_app_server_protocol::FsGetMetadataParams;
use codex_app_server_protocol::FsReadDirectoryParams;
use codex_app_server_protocol::FsReadFileParams;
use codex_app_server_protocol::FsRemoveParams;
use codex_app_server_protocol::FsWriteFileParams;
use codex_environment::CopyOptions;
use codex_environment::CreateDirectoryOptions;
use codex_environment::ExecutorFileSystem;
use codex_environment::FileMetadata;
use codex_environment::FileSystemResult;
use codex_environment::ReadDirectoryEntry;
use codex_environment::RemoveOptions;
use codex_exec_server::ExecServerClient;
use codex_utils_absolute_path::AbsolutePathBuf;
use std::io;
use crate::exec_server_path_mapper::RemoteWorkspacePathMapper;
#[derive(Clone)]
pub(crate) struct ExecServerFileSystem {
client: ExecServerClient,
path_mapper: Option<RemoteWorkspacePathMapper>,
}
impl ExecServerFileSystem {
pub(crate) fn new(
client: ExecServerClient,
path_mapper: Option<RemoteWorkspacePathMapper>,
) -> Self {
Self {
client,
path_mapper,
}
}
fn map_path(&self, path: &AbsolutePathBuf) -> AbsolutePathBuf {
self.path_mapper
.as_ref()
.map_or_else(|| path.clone(), |mapper| mapper.map_path(path))
}
}
#[async_trait]
impl ExecutorFileSystem for ExecServerFileSystem {
async fn read_file(&self, path: &AbsolutePathBuf) -> FileSystemResult<Vec<u8>> {
let path = self.map_path(path);
let response = self
.client
.fs_read_file(FsReadFileParams { path: path.clone() })
.await
.map_err(map_exec_server_error)?;
STANDARD
.decode(response.data_base64)
.map_err(|err| io::Error::new(io::ErrorKind::InvalidData, err))
}
async fn write_file(&self, path: &AbsolutePathBuf, contents: Vec<u8>) -> FileSystemResult<()> {
let path = self.map_path(path);
self.client
.fs_write_file(FsWriteFileParams {
path: path.clone(),
data_base64: STANDARD.encode(contents),
})
.await
.map_err(map_exec_server_error)?;
Ok(())
}
async fn create_directory(
&self,
path: &AbsolutePathBuf,
options: CreateDirectoryOptions,
) -> FileSystemResult<()> {
let path = self.map_path(path);
self.client
.fs_create_directory(FsCreateDirectoryParams {
path: path.clone(),
recursive: Some(options.recursive),
})
.await
.map_err(map_exec_server_error)?;
Ok(())
}
async fn get_metadata(&self, path: &AbsolutePathBuf) -> FileSystemResult<FileMetadata> {
let path = self.map_path(path);
let response = self
.client
.fs_get_metadata(FsGetMetadataParams { path: path.clone() })
.await
.map_err(map_exec_server_error)?;
Ok(FileMetadata {
is_directory: response.is_directory,
is_file: response.is_file,
created_at_ms: response.created_at_ms,
modified_at_ms: response.modified_at_ms,
})
}
async fn read_directory(
&self,
path: &AbsolutePathBuf,
) -> FileSystemResult<Vec<ReadDirectoryEntry>> {
let path = self.map_path(path);
let response = self
.client
.fs_read_directory(FsReadDirectoryParams { path: path.clone() })
.await
.map_err(map_exec_server_error)?;
Ok(response
.entries
.into_iter()
.map(|entry| ReadDirectoryEntry {
file_name: entry.file_name,
is_directory: entry.is_directory,
is_file: entry.is_file,
is_symlink: entry.is_symlink,
})
.collect())
}
async fn remove(&self, path: &AbsolutePathBuf, options: RemoveOptions) -> FileSystemResult<()> {
let path = self.map_path(path);
self.client
.fs_remove(FsRemoveParams {
path: path.clone(),
recursive: Some(options.recursive),
force: Some(options.force),
})
.await
.map_err(map_exec_server_error)?;
Ok(())
}
async fn copy(
&self,
source_path: &AbsolutePathBuf,
destination_path: &AbsolutePathBuf,
options: CopyOptions,
) -> FileSystemResult<()> {
let source_path = self.map_path(source_path);
let destination_path = self.map_path(destination_path);
self.client
.fs_copy(FsCopyParams {
source_path: source_path.clone(),
destination_path: destination_path.clone(),
recursive: options.recursive,
})
.await
.map_err(map_exec_server_error)?;
Ok(())
}
}
fn map_exec_server_error(error: codex_exec_server::ExecServerError) -> io::Error {
match error {
codex_exec_server::ExecServerError::Server { code: _, message } => {
io::Error::new(io::ErrorKind::InvalidInput, message)
}
other => io::Error::other(other.to_string()),
}
}


@@ -0,0 +1,54 @@
use codex_utils_absolute_path::AbsolutePathBuf;
#[derive(Clone, Debug)]
pub(crate) struct RemoteWorkspacePathMapper {
local_root: AbsolutePathBuf,
remote_root: AbsolutePathBuf,
}
impl RemoteWorkspacePathMapper {
pub(crate) fn new(local_root: AbsolutePathBuf, remote_root: AbsolutePathBuf) -> Self {
Self {
local_root,
remote_root,
}
}
pub(crate) fn map_path(&self, path: &AbsolutePathBuf) -> AbsolutePathBuf {
let Ok(relative) = path.as_path().strip_prefix(self.local_root.as_path()) else {
return path.clone();
};
AbsolutePathBuf::try_from(self.remote_root.as_path().join(relative))
.expect("workspace remap should preserve an absolute path")
}
}
#[cfg(test)]
mod tests {
use super::RemoteWorkspacePathMapper;
use codex_utils_absolute_path::AbsolutePathBuf;
#[test]
fn remaps_path_inside_workspace_root() {
let mapper = RemoteWorkspacePathMapper::new(
AbsolutePathBuf::try_from("/Users/starr/code/codex").unwrap(),
AbsolutePathBuf::try_from("/home/dev-user/codex").unwrap(),
);
let path = AbsolutePathBuf::try_from("/Users/starr/code/codex/codex-rs/core/src/lib.rs")
.unwrap();
assert_eq!(
mapper.map_path(&path),
AbsolutePathBuf::try_from("/home/dev-user/codex/codex-rs/core/src/lib.rs").unwrap()
);
}
#[test]
fn leaves_path_outside_workspace_root_unchanged() {
let mapper = RemoteWorkspacePathMapper::new(
AbsolutePathBuf::try_from("/Users/starr/code/codex").unwrap(),
AbsolutePathBuf::try_from("/home/dev-user/codex").unwrap(),
);
let path = AbsolutePathBuf::try_from("/tmp/outside.txt").unwrap();
assert_eq!(mapper.map_path(&path), path);
}
}


@@ -38,6 +38,8 @@ pub mod error;
pub mod exec;
pub mod exec_env;
mod exec_policy;
mod exec_server_filesystem;
mod exec_server_path_mapper;
pub mod external_agent_config;
pub mod features;
mod file_watcher;


@@ -54,6 +54,12 @@ pub(crate) fn with_config_overrides(mut model: ModelInfo, config: &Config) -> Mo
model.model_messages = None;
}
for tool_name in &config.experimental_supported_tools {
if !model.experimental_supported_tools.contains(tool_name) {
model.experimental_supported_tools.push(tool_name.clone());
}
}
model
}


@@ -37,3 +37,18 @@ fn reasoning_summaries_override_false_is_noop_when_model_is_false() {
assert_eq!(updated, model);
}
#[test]
fn experimental_supported_tools_are_merged_from_config() {
let mut model = model_info_from_slug("unknown-model");
model.experimental_supported_tools = vec!["grep_files".to_string()];
let mut config = test_config();
config.experimental_supported_tools = vec!["read_file".to_string(), "grep_files".to_string()];
let updated = with_config_overrides(model, &config);
assert_eq!(
updated.experimental_supported_tools,
vec!["grep_files".to_string(), "read_file".to_string()]
);
}


@@ -22,6 +22,8 @@ use crate::config_loader::merge_toml_values;
use crate::config_loader::project_root_markers_from_config;
use crate::features::Feature;
use codex_app_server_protocol::ConfigLayerSource;
use codex_environment::Environment;
use codex_utils_absolute_path::AbsolutePathBuf;
use dunce::canonicalize as normalize_path;
use std::path::PathBuf;
use tokio::io::AsyncReadExt;
@@ -78,7 +80,21 @@ fn render_js_repl_instructions(config: &Config) -> Option<String> {
/// string of instructions.
pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
let project_docs = read_project_docs(config).await;
build_user_instructions(config, project_docs)
}
pub(crate) async fn get_user_instructions_with_environment(
config: &Config,
environment: &Environment,
) -> Option<String> {
let project_docs = read_project_docs_with_environment(config, environment).await;
build_user_instructions(config, project_docs)
}
fn build_user_instructions(
config: &Config,
project_docs: std::io::Result<Option<String>>,
) -> Option<String> {
let mut output = String::new();
if let Some(instructions) = config.user_instructions.clone() {
@@ -119,6 +135,69 @@ pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
}
}
pub async fn read_project_docs_with_environment(
config: &Config,
environment: &Environment,
) -> std::io::Result<Option<String>> {
let max_total = config.project_doc_max_bytes;
if max_total == 0 {
return Ok(None);
}
let paths = discover_project_doc_paths_with_environment(config, environment).await?;
if paths.is_empty() {
return Ok(None);
}
let file_system = environment.get_filesystem();
let mut remaining: u64 = max_total as u64;
let mut parts: Vec<String> = Vec::new();
for p in paths {
if remaining == 0 {
break;
}
let metadata = match file_system.get_metadata(&p).await {
Ok(metadata) => metadata,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
};
let data = match file_system.read_file(&p).await {
Ok(data) => data,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
};
if (data.len() as u64) > remaining {
tracing::warn!(
"Project doc `{}` exceeds remaining budget ({} bytes) - truncating.",
p.display(),
remaining,
);
}
let allowed = remaining.min(data.len() as u64) as usize;
let data = &data[..allowed];
if metadata.is_file {
let text = String::from_utf8_lossy(data).to_string();
if !text.trim().is_empty() {
parts.push(text);
remaining = remaining.saturating_sub(data.len() as u64);
}
}
}
if parts.is_empty() {
Ok(None)
} else {
Ok(Some(parts.join("\n\n")))
}
}
/// Attempt to locate and load the project documentation.
///
/// On success returns `Ok(Some(contents))` where `contents` is the
@@ -270,6 +349,119 @@ pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBu
Ok(found)
}
pub async fn discover_project_doc_paths_with_environment(
config: &Config,
environment: &Environment,
) -> std::io::Result<Vec<AbsolutePathBuf>> {
let dir = AbsolutePathBuf::try_from(config.cwd.clone()).map_err(|err| {
std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!("cwd must be absolute for project-doc discovery: {err}"),
)
})?;
let file_system = environment.get_filesystem();
let mut merged = TomlValue::Table(toml::map::Map::new());
for layer in config.config_layer_stack.get_layers(
ConfigLayerStackOrdering::LowestPrecedenceFirst,
/*include_disabled*/ false,
) {
if matches!(layer.name, ConfigLayerSource::Project { .. }) {
continue;
}
merge_toml_values(&mut merged, &layer.config);
}
let project_root_markers = match project_root_markers_from_config(&merged) {
Ok(Some(markers)) => markers,
Ok(None) => default_project_root_markers(),
Err(err) => {
tracing::warn!("invalid project_root_markers: {err}");
default_project_root_markers()
}
};
let mut project_root = None;
if !project_root_markers.is_empty() {
for ancestor in dir.as_path().ancestors() {
let ancestor = AbsolutePathBuf::try_from(ancestor.to_path_buf()).map_err(|err| {
std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!("ancestor must stay absolute for project-doc discovery: {err}"),
)
})?;
for marker in &project_root_markers {
let marker_path = AbsolutePathBuf::try_from(ancestor.as_path().join(marker))
.map_err(|err| {
std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!("marker path must stay absolute for project-doc discovery: {err}"),
)
})?;
let marker_exists = match file_system.get_metadata(&marker_path).await {
Ok(_) => true,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => false,
Err(e) => return Err(e),
};
if marker_exists {
project_root = Some(ancestor);
break;
}
}
if project_root.is_some() {
break;
}
}
}
let search_dirs: Vec<AbsolutePathBuf> = if let Some(root) = project_root {
let mut dirs = Vec::new();
let mut cursor = dir.as_path();
loop {
dirs.push(AbsolutePathBuf::try_from(cursor.to_path_buf()).map_err(|err| {
std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!("search dir must stay absolute for project-doc discovery: {err}"),
)
})?);
if cursor == root.as_path() {
break;
}
let Some(parent) = cursor.parent() else {
break;
};
cursor = parent;
}
dirs.reverse();
dirs
} else {
vec![dir]
};
let mut found: Vec<AbsolutePathBuf> = Vec::new();
let candidate_filenames = candidate_filenames(config);
for d in search_dirs {
for name in &candidate_filenames {
let candidate = AbsolutePathBuf::try_from(d.as_path().join(name)).map_err(|err| {
std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!("candidate path must stay absolute for project-doc discovery: {err}"),
)
})?;
match file_system.get_metadata(&candidate).await {
Ok(metadata) if metadata.is_file => {
found.push(candidate);
break;
}
Ok(_) => continue,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
}
}
}
Ok(found)
}
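The marker-based root discovery above boils down to walking the cwd's ancestors and stopping at the first directory that contains any marker. A minimal sketch, with a `HashSet` standing in for the filesystem metadata checks (all names here are illustrative):

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

// Walk cwd's ancestors (nearest first) and return the first directory
// containing any of the given marker entries.
fn find_project_root(
    cwd: &Path,
    markers: &[&str],
    existing: &HashSet<PathBuf>,
) -> Option<PathBuf> {
    for ancestor in cwd.ancestors() {
        if markers.iter().any(|m| existing.contains(&ancestor.join(m))) {
            return Some(ancestor.to_path_buf());
        }
    }
    None
}

fn main() {
    let existing: HashSet<PathBuf> = [PathBuf::from("/repo/.git")].into_iter().collect();
    let root = find_project_root(Path::new("/repo/workspace/crate_a"), &[".git"], &existing);
    assert_eq!(root, Some(PathBuf::from("/repo")));
    assert_eq!(find_project_root(Path::new("/tmp/x"), &[".git"], &existing), None);
}
```

With a root found, the real function then collects the directories from root down to cwd, so root-level docs are concatenated before nested ones.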
fn candidate_filenames<'a>(config: &'a Config) -> Vec<&'a str> {
let mut names: Vec<&'a str> =
Vec::with_capacity(2 + config.project_doc_fallback_filenames.len());


@@ -1,8 +1,19 @@
use super::*;
use crate::config::ConfigBuilder;
use crate::features::Feature;
use async_trait::async_trait;
use codex_environment::CopyOptions;
use codex_environment::CreateDirectoryOptions;
use codex_environment::Environment;
use codex_environment::ExecutorFileSystem;
use codex_environment::FileMetadata;
use codex_environment::FileSystemResult;
use codex_environment::ReadDirectoryEntry;
use codex_environment::RemoveOptions;
use codex_utils_absolute_path::AbsolutePathBuf;
use std::fs;
use std::path::PathBuf;
use std::sync::Arc;
use tempfile::TempDir;
/// Helper that returns a `Config` pointing at `root` and using `limit` as
@@ -68,6 +79,117 @@ async fn make_config_with_project_root_markers(
config
}
#[derive(Clone)]
struct RemappedFileSystem {
local_root: AbsolutePathBuf,
remote_root: AbsolutePathBuf,
}
impl RemappedFileSystem {
fn new(local_root: &std::path::Path, remote_root: &std::path::Path) -> Self {
Self {
local_root: AbsolutePathBuf::try_from(local_root.to_path_buf()).unwrap(),
remote_root: AbsolutePathBuf::try_from(remote_root.to_path_buf()).unwrap(),
}
}
fn remap(&self, path: &AbsolutePathBuf) -> AbsolutePathBuf {
let relative = path
.as_path()
.strip_prefix(self.local_root.as_path())
.expect("path should remain within local root during test");
AbsolutePathBuf::try_from(self.remote_root.as_path().join(relative)).unwrap()
}
}
#[async_trait]
impl ExecutorFileSystem for RemappedFileSystem {
async fn read_file(&self, path: &AbsolutePathBuf) -> FileSystemResult<Vec<u8>> {
tokio::fs::read(self.remap(path).as_path()).await
}
async fn write_file(&self, path: &AbsolutePathBuf, contents: Vec<u8>) -> FileSystemResult<()> {
tokio::fs::write(self.remap(path).as_path(), contents).await
}
async fn create_directory(
&self,
path: &AbsolutePathBuf,
options: CreateDirectoryOptions,
) -> FileSystemResult<()> {
if options.recursive {
tokio::fs::create_dir_all(self.remap(path).as_path()).await
} else {
tokio::fs::create_dir(self.remap(path).as_path()).await
}
}
async fn get_metadata(&self, path: &AbsolutePathBuf) -> FileSystemResult<FileMetadata> {
let metadata = tokio::fs::metadata(self.remap(path).as_path()).await?;
Ok(FileMetadata {
is_directory: metadata.is_dir(),
is_file: metadata.is_file(),
created_at_ms: 0,
modified_at_ms: 0,
})
}
async fn read_directory(
&self,
path: &AbsolutePathBuf,
) -> FileSystemResult<Vec<ReadDirectoryEntry>> {
let mut entries = Vec::new();
let mut read_dir = tokio::fs::read_dir(self.remap(path).as_path()).await?;
while let Some(entry) = read_dir.next_entry().await? {
let metadata = tokio::fs::symlink_metadata(entry.path()).await?;
entries.push(ReadDirectoryEntry {
file_name: entry.file_name().to_string_lossy().into_owned(),
is_directory: metadata.is_dir(),
is_file: metadata.is_file(),
is_symlink: metadata.file_type().is_symlink(),
});
}
Ok(entries)
}
async fn remove(
&self,
path: &AbsolutePathBuf,
options: RemoveOptions,
) -> FileSystemResult<()> {
let remapped = self.remap(path);
match tokio::fs::symlink_metadata(remapped.as_path()).await {
Ok(metadata) => {
if metadata.is_dir() {
if options.recursive {
tokio::fs::remove_dir_all(remapped.as_path()).await
} else {
tokio::fs::remove_dir(remapped.as_path()).await
}
} else {
tokio::fs::remove_file(remapped.as_path()).await
}
}
Err(err) if err.kind() == std::io::ErrorKind::NotFound && options.force => Ok(()),
Err(err) => Err(err),
}
}
async fn copy(
&self,
source_path: &AbsolutePathBuf,
destination_path: &AbsolutePathBuf,
_options: CopyOptions,
) -> FileSystemResult<()> {
tokio::fs::copy(
self.remap(source_path).as_path(),
self.remap(destination_path).as_path(),
)
.await
.map(|_| ())
}
}
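The remap performed by the test filesystem above is just `strip_prefix` plus `join`: take the path relative to the local root and re-anchor it under the remote root. A standalone sketch (returning `None` instead of panicking for paths outside the local root):

```rust
use std::path::{Path, PathBuf};

// Translate a path under `local_root` into the equivalent path under
// `remote_root`; paths outside the local root yield None.
fn remap(path: &Path, local_root: &Path, remote_root: &Path) -> Option<PathBuf> {
    let relative = path.strip_prefix(local_root).ok()?;
    Some(remote_root.join(relative))
}

fn main() {
    assert_eq!(
        remap(
            Path::new("/local/ws/AGENTS.md"),
            Path::new("/local/ws"),
            Path::new("/remote/ws"),
        ),
        Some(PathBuf::from("/remote/ws/AGENTS.md"))
    );
    assert_eq!(
        remap(Path::new("/elsewhere/f"), Path::new("/local/ws"), Path::new("/remote/ws")),
        None
    );
}
```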
/// AGENTS.md missing should yield `None`.
#[tokio::test]
async fn no_doc_file_returns_none() {
@@ -239,6 +361,55 @@ async fn keeps_existing_instructions_when_doc_missing() {
assert_eq!(res, Some(INSTRUCTIONS.to_string()));
}
#[tokio::test]
async fn environment_backed_project_doc_prefers_remote_workspace_contents() {
let local = tempfile::tempdir().expect("local tempdir");
let remote = tempfile::tempdir().expect("remote tempdir");
fs::write(local.path().join("AGENTS.md"), "local doc").unwrap();
fs::write(remote.path().join("AGENTS.md"), "remote doc").unwrap();
let cfg = make_config(&local, 4096, None).await;
let environment = Environment::new(Arc::new(RemappedFileSystem::new(
local.path(),
remote.path(),
)));
let res = get_user_instructions_with_environment(&cfg, &environment)
.await
.expect("remote doc expected");
assert_eq!(res, "remote doc");
}
#[tokio::test]
async fn environment_backed_project_doc_discovers_remote_hierarchy() {
let local = tempfile::tempdir().expect("local tempdir");
let remote = tempfile::tempdir().expect("remote tempdir");
fs::write(local.path().join(".git"), "gitdir: /tmp/local\n").unwrap();
fs::create_dir_all(local.path().join("workspace/crate_a")).unwrap();
fs::write(remote.path().join(".git"), "gitdir: /tmp/remote\n").unwrap();
fs::write(remote.path().join("AGENTS.md"), "remote root").unwrap();
fs::create_dir_all(remote.path().join("workspace/crate_a")).unwrap();
fs::write(
remote.path().join("workspace/crate_a/AGENTS.md"),
"remote nested",
)
.unwrap();
let mut cfg = make_config(&local, 4096, None).await;
cfg.cwd = local.path().join("workspace/crate_a");
let environment = Environment::new(Arc::new(RemappedFileSystem::new(
local.path(),
remote.path(),
)));
let res = get_user_instructions_with_environment(&cfg, &environment)
.await
.expect("remote docs expected");
assert_eq!(res, "remote root\n\nremote nested");
}
/// When both the repository root and the working directory contain
/// AGENTS.md files, their contents are concatenated from root to cwd.
#[tokio::test]


@@ -1,13 +1,13 @@
use std::collections::VecDeque;
use std::ffi::OsStr;
use std::fs::FileType;
use std::path::Path;
use std::path::PathBuf;
use async_trait::async_trait;
use codex_environment::ExecutorFileSystem;
use codex_utils_absolute_path::AbsolutePathBuf;
use codex_utils_string::take_bytes_at_char_boundary;
use serde::Deserialize;
use tokio::fs;
use crate::function_tool::FunctionCallError;
use crate::tools::context::FunctionToolOutput;
@@ -54,7 +54,7 @@ impl ToolHandler for ListDirHandler {
}
async fn handle(&self, invocation: ToolInvocation) -> Result<Self::Output, FunctionCallError> {
let ToolInvocation { payload, .. } = invocation;
let ToolInvocation { payload, turn, .. } = invocation;
let arguments = match payload {
ToolPayload::Function { arguments } => arguments,
@@ -98,23 +98,29 @@ impl ToolHandler for ListDirHandler {
"dir_path must be an absolute path".to_string(),
));
}
let abs_path = AbsolutePathBuf::try_from(path).map_err(|error| {
FunctionCallError::RespondToModel(format!("unable to access directory: {error}"))
})?;
let entries = list_dir_slice(&path, offset, limit, depth).await?;
let file_system = turn.environment.get_filesystem();
verify_directory_exists(file_system.as_ref(), &abs_path).await?;
let entries = list_dir_slice(file_system.as_ref(), &abs_path, offset, limit, depth).await?;
let mut output = Vec::with_capacity(entries.len() + 1);
output.push(format!("Absolute path: {}", path.display()));
output.push(format!("Absolute path: {}", abs_path.display()));
output.extend(entries);
Ok(FunctionToolOutput::from_text(output.join("\n"), Some(true)))
}
}
async fn list_dir_slice(
path: &Path,
file_system: &dyn ExecutorFileSystem,
path: &AbsolutePathBuf,
offset: usize,
limit: usize,
depth: usize,
) -> Result<Vec<String>, FunctionCallError> {
let mut entries = Vec::new();
collect_entries(path, Path::new(""), depth, &mut entries).await?;
collect_entries(file_system, path, Path::new(""), depth, &mut entries).await?;
if entries.is_empty() {
return Ok(Vec::new());
@@ -147,41 +153,43 @@ async fn list_dir_slice(
}
async fn collect_entries(
dir_path: &Path,
file_system: &dyn ExecutorFileSystem,
dir_path: &AbsolutePathBuf,
relative_prefix: &Path,
depth: usize,
entries: &mut Vec<DirEntry>,
) -> Result<(), FunctionCallError> {
let mut queue = VecDeque::new();
queue.push_back((dir_path.to_path_buf(), relative_prefix.to_path_buf(), depth));
queue.push_back((dir_path.clone(), relative_prefix.to_path_buf(), depth));
while let Some((current_dir, prefix, remaining_depth)) = queue.pop_front() {
let mut read_dir = fs::read_dir(&current_dir).await.map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to read directory: {err}"))
})?;
let mut dir_entries = Vec::new();
while let Some(entry) = read_dir.next_entry().await.map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to read directory: {err}"))
})? {
let file_type = entry.file_type().await.map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to inspect entry: {err}"))
let read_dir = file_system
.read_directory(&current_dir)
.await
.map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to read directory: {err}"))
})?;
let file_name = entry.file_name();
for entry in read_dir {
let file_name = entry.file_name;
let relative_path = if prefix.as_os_str().is_empty() {
PathBuf::from(&file_name)
} else {
prefix.join(&file_name)
};
let display_name = format_entry_component(&file_name);
let display_name = format_entry_component(OsStr::new(&file_name));
let display_depth = prefix.components().count();
let sort_key = format_entry_name(&relative_path);
let kind = DirEntryKind::from(&file_type);
let kind =
DirEntryKind::from_flags(entry.is_directory, entry.is_file, entry.is_symlink);
dir_entries.push((
entry.path(),
current_dir.join(&file_name).map_err(|error| {
FunctionCallError::RespondToModel(format!(
"failed to resolve directory entry path: {error}",
))
})?,
relative_path,
kind,
DirEntry {
@@ -206,6 +214,22 @@ async fn collect_entries(
Ok(())
}
async fn verify_directory_exists(
file_system: &dyn ExecutorFileSystem,
path: &AbsolutePathBuf,
) -> Result<(), FunctionCallError> {
let metadata = file_system.get_metadata(path).await.map_err(|err| {
FunctionCallError::RespondToModel(format!("unable to access `{}`: {err}", path.display()))
})?;
if !metadata.is_directory {
return Err(FunctionCallError::RespondToModel(format!(
"`{}` is not a directory",
path.display()
)));
}
Ok(())
}
fn format_entry_name(path: &Path) -> String {
let normalized = path.to_string_lossy().replace("\\", "/");
if normalized.len() > MAX_ENTRY_LENGTH {
@@ -252,13 +276,13 @@ enum DirEntryKind {
Other,
}
impl From<&FileType> for DirEntryKind {
fn from(file_type: &FileType) -> Self {
if file_type.is_symlink() {
impl DirEntryKind {
fn from_flags(is_directory: bool, is_file: bool, is_symlink: bool) -> Self {
if is_symlink {
DirEntryKind::Symlink
} else if file_type.is_dir() {
} else if is_directory {
DirEntryKind::Directory
} else if file_type.is_file() {
} else if is_file {
DirEntryKind::File
} else {
DirEntryKind::Other


@@ -1,7 +1,13 @@
use super::*;
use codex_environment::LocalFileSystem;
use codex_utils_absolute_path::AbsolutePathBuf;
use pretty_assertions::assert_eq;
use tempfile::tempdir;
fn abs(path: &std::path::Path) -> AbsolutePathBuf {
AbsolutePathBuf::try_from(path.to_path_buf()).expect("absolute tempdir path")
}
#[tokio::test]
async fn lists_directory_entries() {
let temp = tempdir().expect("create tempdir");
@@ -34,7 +40,7 @@ async fn lists_directory_entries() {
symlink(dir_path.join("entry.txt"), &link_path).expect("create symlink");
}
let entries = list_dir_slice(dir_path, 1, 20, 3)
let entries = list_dir_slice(&LocalFileSystem, &abs(dir_path), 1, 20, 3)
.await
.expect("list directory");
@@ -68,7 +74,7 @@ async fn errors_when_offset_exceeds_entries() {
.await
.expect("create sub dir");
let err = list_dir_slice(dir_path, 10, 1, 2)
let err = list_dir_slice(&LocalFileSystem, &abs(dir_path), 10, 1, 2)
.await
.expect_err("offset exceeds entries");
assert_eq!(
@@ -95,7 +101,7 @@ async fn respects_depth_parameter() {
.await
.expect("write deeper");
let entries_depth_one = list_dir_slice(dir_path, 1, 10, 1)
let entries_depth_one = list_dir_slice(&LocalFileSystem, &abs(dir_path), 1, 10, 1)
.await
.expect("list depth 1");
assert_eq!(
@@ -103,7 +109,7 @@ async fn respects_depth_parameter() {
vec!["nested/".to_string(), "root.txt".to_string(),]
);
let entries_depth_two = list_dir_slice(dir_path, 1, 20, 2)
let entries_depth_two = list_dir_slice(&LocalFileSystem, &abs(dir_path), 1, 20, 2)
.await
.expect("list depth 2");
assert_eq!(
@@ -116,7 +122,7 @@ async fn respects_depth_parameter() {
]
);
let entries_depth_three = list_dir_slice(dir_path, 1, 30, 3)
let entries_depth_three = list_dir_slice(&LocalFileSystem, &abs(dir_path), 1, 30, 3)
.await
.expect("list depth 3");
assert_eq!(
@@ -148,7 +154,7 @@ async fn paginates_in_sorted_order() {
.await
.expect("write b child");
let first_page = list_dir_slice(dir_path, 1, 2, 2)
let first_page = list_dir_slice(&LocalFileSystem, &abs(dir_path), 1, 2, 2)
.await
.expect("list page one");
assert_eq!(
@@ -160,7 +166,7 @@ async fn paginates_in_sorted_order() {
]
);
let second_page = list_dir_slice(dir_path, 3, 2, 2)
let second_page = list_dir_slice(&LocalFileSystem, &abs(dir_path), 3, 2, 2)
.await
.expect("list page two");
assert_eq!(
@@ -183,7 +189,7 @@ async fn handles_large_limit_without_overflow() {
.await
.expect("write gamma");
let entries = list_dir_slice(dir_path, 2, usize::MAX, 1)
let entries = list_dir_slice(&LocalFileSystem, &abs(dir_path), 2, usize::MAX, 1)
.await
.expect("list without overflow");
assert_eq!(
@@ -204,7 +210,7 @@ async fn indicates_truncated_results() {
.expect("write file");
}
let entries = list_dir_slice(dir_path, 1, 25, 1)
let entries = list_dir_slice(&LocalFileSystem, &abs(dir_path), 1, 25, 1)
.await
.expect("list directory");
assert_eq!(entries.len(), 26);
@@ -226,7 +232,7 @@ async fn truncation_respects_sorted_order() -> anyhow::Result<()> {
tokio::fs::write(nested.join("child.txt"), b"child").await?;
tokio::fs::write(deeper.join("grandchild.txt"), b"deep").await?;
let entries_depth_three = list_dir_slice(dir_path, 1, 3, 3).await?;
let entries_depth_three = list_dir_slice(&LocalFileSystem, &abs(dir_path), 1, 3, 3).await?;
assert_eq!(
entries_depth_three,
vec![


@@ -2,6 +2,7 @@ use std::collections::VecDeque;
use std::path::PathBuf;
use async_trait::async_trait;
use codex_utils_absolute_path::AbsolutePathBuf;
use codex_utils_string::take_bytes_at_char_boundary;
use serde::Deserialize;
@@ -100,7 +101,7 @@ impl ToolHandler for ReadFileHandler {
}
async fn handle(&self, invocation: ToolInvocation) -> Result<Self::Output, FunctionCallError> {
let ToolInvocation { payload, .. } = invocation;
let ToolInvocation { payload, turn, .. } = invocation;
let arguments = match payload {
ToolPayload::Function { arguments } => arguments,
@@ -139,12 +140,23 @@ impl ToolHandler for ReadFileHandler {
"file_path must be an absolute path".to_string(),
));
}
let abs_path = AbsolutePathBuf::try_from(path).map_err(|error| {
FunctionCallError::RespondToModel(format!("failed to read file: {error}"))
})?;
let file_bytes = turn
.environment
.get_filesystem()
.read_file(&abs_path)
.await
.map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to read file: {err}"))
})?;
let collected = match mode {
ReadMode::Slice => slice::read(&path, offset, limit).await?,
ReadMode::Slice => slice::read_bytes(&file_bytes, offset, limit)?,
ReadMode::Indentation => {
let indentation = indentation.unwrap_or_default();
indentation::read_block(&path, offset, limit, indentation).await?
indentation::read_block_bytes(&file_bytes, offset, limit, indentation)?
}
};
Ok(FunctionToolOutput::from_text(
@@ -157,11 +169,19 @@ impl ToolHandler for ReadFileHandler {
mod slice {
use crate::function_tool::FunctionCallError;
use crate::tools::handlers::read_file::format_line;
#[cfg(test)]
use crate::tools::handlers::read_file::normalize_line_bytes;
use crate::tools::handlers::read_file::split_lines;
#[cfg(test)]
use std::path::Path;
#[cfg(test)]
use tokio::fs::File;
#[cfg(test)]
use tokio::io::AsyncBufReadExt;
#[cfg(test)]
use tokio::io::BufReader;
#[cfg(test)]
pub async fn read(
path: &Path,
offset: usize,
@@ -170,11 +190,9 @@ mod slice {
let file = File::open(path).await.map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to read file: {err}"))
})?;
let mut reader = BufReader::new(file);
let mut collected = Vec::new();
let mut seen = 0usize;
let mut buffer = Vec::new();
let mut lines = Vec::new();
loop {
buffer.clear();
@@ -186,38 +204,39 @@ mod slice {
break;
}
if buffer.last() == Some(&b'\n') {
buffer.pop();
if buffer.last() == Some(&b'\r') {
buffer.pop();
}
}
seen += 1;
if seen < offset {
continue;
}
if collected.len() == limit {
break;
}
let formatted = format_line(&buffer);
collected.push(format!("L{seen}: {formatted}"));
if collected.len() == limit {
break;
}
lines.push(normalize_line_bytes(&buffer).to_vec());
}
if seen < offset {
read_bytes_from_lines(&lines, offset, limit)
}
pub fn read_bytes(
file_bytes: &[u8],
offset: usize,
limit: usize,
) -> Result<Vec<String>, FunctionCallError> {
let lines = split_lines(file_bytes);
read_bytes_from_lines(&lines, offset, limit)
}
fn read_bytes_from_lines(
lines: &[Vec<u8>],
offset: usize,
limit: usize,
) -> Result<Vec<String>, FunctionCallError> {
if lines.len() < offset {
return Err(FunctionCallError::RespondToModel(
"offset exceeds file length".to_string(),
));
}
Ok(collected)
Ok(lines
.iter()
.enumerate()
.skip(offset - 1)
.take(limit)
.map(|(index, line)| format!("L{}: {}", index + 1, format_line(line)))
.collect())
}
}
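The pagination in `read_bytes_from_lines` is 1-indexed: skip `offset - 1` lines, take `limit`, and label each line with its original number. A minimal sketch over `&str` lines (the real code guards `offset` against the line count first, which this sketch assumes has already happened):

```rust
// 1-indexed pagination: offset picks the first line to emit, limit caps
// the count, and labels preserve the original line numbers.
fn page(lines: &[&str], offset: usize, limit: usize) -> Vec<String> {
    lines
        .iter()
        .enumerate()
        .skip(offset - 1) // offset is assumed to be >= 1 and <= lines.len()
        .take(limit)
        .map(|(index, line)| format!("L{}: {line}", index + 1))
        .collect()
}

fn main() {
    let lines = ["alpha", "beta", "gamma"];
    assert_eq!(page(&lines, 2, 2), vec!["L2: beta", "L3: gamma"]);
    assert_eq!(page(&lines, 1, 1), vec!["L1: alpha"]);
}
```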
@@ -226,14 +245,22 @@ mod indentation {
use crate::tools::handlers::read_file::IndentationArgs;
use crate::tools::handlers::read_file::LineRecord;
use crate::tools::handlers::read_file::TAB_WIDTH;
use crate::tools::handlers::read_file::format_line;
use crate::tools::handlers::read_file::line_record;
#[cfg(test)]
use crate::tools::handlers::read_file::normalize_line_bytes;
use crate::tools::handlers::read_file::split_lines;
use crate::tools::handlers::read_file::trim_empty_lines;
use std::collections::VecDeque;
#[cfg(test)]
use std::path::Path;
#[cfg(test)]
use tokio::fs::File;
#[cfg(test)]
use tokio::io::AsyncBufReadExt;
#[cfg(test)]
use tokio::io::BufReader;
#[cfg(test)]
pub async fn read_block(
path: &Path,
offset: usize,
@@ -255,6 +282,39 @@ mod indentation {
}
let collected = collect_file_lines(path).await?;
read_block_from_records(collected, offset, limit, options)
}
pub fn read_block_bytes(
file_bytes: &[u8],
offset: usize,
limit: usize,
options: IndentationArgs,
) -> Result<Vec<String>, FunctionCallError> {
let collected = collect_file_lines_from_bytes(file_bytes);
read_block_from_records(collected, offset, limit, options)
}
fn read_block_from_records(
collected: Vec<LineRecord>,
offset: usize,
limit: usize,
options: IndentationArgs,
) -> Result<Vec<String>, FunctionCallError> {
let anchor_line = options.anchor_line.unwrap_or(offset);
if anchor_line == 0 {
return Err(FunctionCallError::RespondToModel(
"anchor_line must be a 1-indexed line number".to_string(),
));
}
let guard_limit = options.max_lines.unwrap_or(limit);
if guard_limit == 0 {
return Err(FunctionCallError::RespondToModel(
"max_lines must be greater than zero".to_string(),
));
}
if collected.is_empty() || anchor_line > collected.len() {
return Err(FunctionCallError::RespondToModel(
"anchor_line exceeds file length".to_string(),
@@ -366,6 +426,7 @@ mod indentation {
.collect())
}
#[cfg(test)]
async fn collect_file_lines(path: &Path) -> Result<Vec<LineRecord>, FunctionCallError> {
let file = File::open(path).await.map_err(|err| {
FunctionCallError::RespondToModel(format!("failed to read file: {err}"))
@@ -386,28 +447,21 @@ mod indentation {
break;
}
if buffer.last() == Some(&b'\n') {
buffer.pop();
if buffer.last() == Some(&b'\r') {
buffer.pop();
}
}
number += 1;
let raw = String::from_utf8_lossy(&buffer).into_owned();
let indent = measure_indent(&raw);
let display = format_line(&buffer);
lines.push(LineRecord {
number,
raw,
display,
indent,
});
lines.push(line_record(number, normalize_line_bytes(&buffer)));
}
Ok(lines)
}
fn collect_file_lines_from_bytes(file_bytes: &[u8]) -> Vec<LineRecord> {
split_lines(file_bytes)
.into_iter()
.enumerate()
.map(|(index, line)| line_record(index + 1, &line))
.collect()
}
fn compute_effective_indents(records: &[LineRecord]) -> Vec<usize> {
let mut effective = Vec::with_capacity(records.len());
let mut previous_indent = 0usize;
@@ -422,7 +476,7 @@ mod indentation {
effective
}
fn measure_indent(line: &str) -> usize {
pub(super) fn measure_indent(line: &str) -> usize {
line.chars()
.take_while(|c| matches!(c, ' ' | '\t'))
.map(|c| if c == '\t' { TAB_WIDTH } else { 1 })
@@ -430,6 +484,40 @@ mod indentation {
}
}
fn line_record(number: usize, line_bytes: &[u8]) -> LineRecord {
let raw = String::from_utf8_lossy(line_bytes).into_owned();
let indent = indentation::measure_indent(&raw);
let display = format_line(line_bytes);
LineRecord {
number,
raw,
display,
indent,
}
}
fn split_lines(file_bytes: &[u8]) -> Vec<Vec<u8>> {
let mut lines = Vec::new();
for chunk in file_bytes.split_inclusive(|byte| *byte == b'\n') {
lines.push(normalize_line_bytes(chunk).to_vec());
}
if file_bytes.is_empty() || !file_bytes.ends_with(b"\n") {
return lines;
}
if lines.last().is_some_and(std::vec::Vec::is_empty) {
lines.pop();
}
lines
}
fn normalize_line_bytes(line: &[u8]) -> &[u8] {
let line = line.strip_suffix(b"\n").unwrap_or(line);
line.strip_suffix(b"\r").unwrap_or(line)
}
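The interaction between `split_lines` and `normalize_line_bytes` above is worth seeing end to end: CRLF and LF endings are stripped per line, and a trailing newline does not introduce a phantom empty final line. A standalone copy of the two helpers, slightly condensed:

```rust
// Strip a trailing "\n" then a trailing "\r", so both LF and CRLF endings
// normalize to the bare line content.
fn normalize_line_bytes(line: &[u8]) -> &[u8] {
    let line = line.strip_suffix(b"\n").unwrap_or(line);
    line.strip_suffix(b"\r").unwrap_or(line)
}

// Split on newlines, normalizing each chunk; drop the empty final line
// that a trailing newline would otherwise leave behind.
fn split_lines(file_bytes: &[u8]) -> Vec<Vec<u8>> {
    let mut lines: Vec<Vec<u8>> = file_bytes
        .split_inclusive(|byte| *byte == b'\n')
        .map(|chunk| normalize_line_bytes(chunk).to_vec())
        .collect();
    if file_bytes.ends_with(b"\n") && lines.last().is_some_and(Vec::is_empty) {
        lines.pop();
    }
    lines
}

fn main() {
    // CRLF and LF both normalize; trailing newline adds no extra line.
    assert_eq!(split_lines(b"a\r\nb\n"), vec![b"a".to_vec(), b"b".to_vec()]);
    // A file without a trailing newline keeps its final line.
    assert_eq!(split_lines(b"a\nb"), vec![b"a".to_vec(), b"b".to_vec()]);
    // Empty input yields no lines.
    assert_eq!(split_lines(b""), Vec::<Vec<u8>>::new());
}
```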
fn format_line(bytes: &[u8]) -> String {
let decoded = String::from_utf8_lossy(bytes);
if decoded.len() > MAX_LINE_LENGTH {


@@ -1,5 +1,4 @@
use async_trait::async_trait;
use codex_environment::ExecutorFileSystem;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::models::ImageDetail;


@@ -46,6 +46,7 @@ use std::path::PathBuf;
#[derive(Clone, Debug)]
pub struct UnifiedExecRequest {
pub process_id: i32,
pub command: Vec<String>,
pub cwd: PathBuf,
pub env: HashMap<String, String>,
@@ -239,6 +240,7 @@ impl<'a> ToolRuntime<UnifiedExecRequest, UnifiedExecProcess> for UnifiedExecRunt
return self
.manager
.open_session_with_exec_env(
req.process_id,
&prepared.exec_request,
req.tty,
prepared.spawn_lifecycle,
@@ -275,7 +277,12 @@ impl<'a> ToolRuntime<UnifiedExecRequest, UnifiedExecProcess> for UnifiedExecRunt
.env_for(spec, req.network.as_ref())
.map_err(|err| ToolError::Codex(err.into()))?;
self.manager
.open_session_with_exec_env(&exec_env, req.tty, Box::new(NoopSpawnLifecycle))
.open_session_with_exec_env(
req.process_id,
&exec_env,
req.tty,
Box::new(NoopSpawnLifecycle),
)
.await
.map_err(|err| match err {
UnifiedExecError::SandboxDenied { output, .. } => {


@@ -0,0 +1,278 @@
use std::path::PathBuf;
use std::sync::Arc;
use async_trait::async_trait;
use codex_environment::Environment;
use codex_exec_server::ExecServerClient;
use codex_exec_server::ExecServerClientConnectOptions;
use codex_exec_server::ExecServerLaunchCommand;
use codex_exec_server::RemoteExecServerConnectArgs;
use codex_exec_server::SpawnedExecServer;
use codex_exec_server::spawn_local_exec_server;
use tracing::debug;
use crate::config::Config;
use crate::exec::SandboxType;
use crate::exec_server_filesystem::ExecServerFileSystem;
use crate::exec_server_path_mapper::RemoteWorkspacePathMapper;
use crate::sandboxing::ExecRequest;
use crate::unified_exec::SpawnLifecycleHandle;
use crate::unified_exec::UnifiedExecError;
use crate::unified_exec::UnifiedExecProcess;
pub(crate) type UnifiedExecSessionFactoryHandle = Arc<dyn UnifiedExecSessionFactory>;
pub(crate) struct SessionExecutionBackends {
pub(crate) unified_exec_session_factory: UnifiedExecSessionFactoryHandle,
pub(crate) environment: Arc<Environment>,
}
#[async_trait]
pub(crate) trait UnifiedExecSessionFactory: std::fmt::Debug + Send + Sync {
async fn open_session(
&self,
process_id: i32,
env: &ExecRequest,
tty: bool,
spawn_lifecycle: SpawnLifecycleHandle,
) -> Result<UnifiedExecProcess, UnifiedExecError>;
}
#[derive(Debug, Default)]
pub(crate) struct LocalUnifiedExecSessionFactory;
pub(crate) fn local_unified_exec_session_factory() -> UnifiedExecSessionFactoryHandle {
Arc::new(LocalUnifiedExecSessionFactory)
}
#[async_trait]
impl UnifiedExecSessionFactory for LocalUnifiedExecSessionFactory {
async fn open_session(
&self,
_process_id: i32,
env: &ExecRequest,
tty: bool,
spawn_lifecycle: SpawnLifecycleHandle,
) -> Result<UnifiedExecProcess, UnifiedExecError> {
open_local_session(env, tty, spawn_lifecycle).await
}
}
pub(crate) struct ExecServerUnifiedExecSessionFactory {
client: ExecServerClient,
_spawned_server: Option<Arc<SpawnedExecServer>>,
path_mapper: Option<RemoteWorkspacePathMapper>,
}
impl std::fmt::Debug for ExecServerUnifiedExecSessionFactory {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ExecServerUnifiedExecSessionFactory")
.field("owns_spawned_server", &self._spawned_server.is_some())
.finish_non_exhaustive()
}
}
impl ExecServerUnifiedExecSessionFactory {
pub(crate) fn from_client(
client: ExecServerClient,
path_mapper: Option<RemoteWorkspacePathMapper>,
) -> UnifiedExecSessionFactoryHandle {
Arc::new(Self {
client,
_spawned_server: None,
path_mapper,
})
}
pub(crate) fn from_spawned_server(
spawned_server: Arc<SpawnedExecServer>,
path_mapper: Option<RemoteWorkspacePathMapper>,
) -> UnifiedExecSessionFactoryHandle {
Arc::new(Self {
client: spawned_server.client().clone(),
_spawned_server: Some(spawned_server),
path_mapper,
})
}
}
#[async_trait]
impl UnifiedExecSessionFactory for ExecServerUnifiedExecSessionFactory {
async fn open_session(
&self,
process_id: i32,
env: &ExecRequest,
tty: bool,
spawn_lifecycle: SpawnLifecycleHandle,
) -> Result<UnifiedExecProcess, UnifiedExecError> {
let inherited_fds = spawn_lifecycle.inherited_fds();
if !inherited_fds.is_empty() {
debug!(
process_id,
inherited_fd_count = inherited_fds.len(),
"falling back to local unified-exec backend because exec-server does not support inherited fds",
);
return open_local_session(env, tty, spawn_lifecycle).await;
}
if env.sandbox != SandboxType::None {
debug!(
process_id,
sandbox = ?env.sandbox,
"falling back to local unified-exec backend because sandboxed execution is not modeled by exec-server",
);
return open_local_session(env, tty, spawn_lifecycle).await;
}
UnifiedExecProcess::from_exec_server(
self.client.clone(),
process_id,
env,
tty,
spawn_lifecycle,
self.path_mapper.as_ref(),
)
.await
}
}
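The fallback policy in `open_session` above reduces to a two-condition check: the remote exec-server backend is used only when the spawn carries no inherited fds and requests no sandbox; otherwise execution falls back to the local path. A sketch with booleans standing in for the real `SpawnLifecycleHandle` and `SandboxType` checks (names illustrative):

```rust
// Which backend handles the spawn: remote exec-server only when the
// request can be modeled remotely (no inherited fds, no sandbox).
#[derive(Debug, PartialEq)]
enum Backend {
    Remote,
    Local,
}

fn choose_backend(has_inherited_fds: bool, sandboxed: bool) -> Backend {
    if has_inherited_fds || sandboxed {
        Backend::Local
    } else {
        Backend::Remote
    }
}

fn main() {
    assert_eq!(choose_backend(false, false), Backend::Remote);
    assert_eq!(choose_backend(true, false), Backend::Local);
    assert_eq!(choose_backend(false, true), Backend::Local);
}
```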
pub(crate) async fn session_execution_backends_for_config(
config: &Config,
local_exec_server_command: Option<ExecServerLaunchCommand>,
) -> Result<SessionExecutionBackends, UnifiedExecError> {
let path_mapper = config
.experimental_unified_exec_exec_server_workspace_root
.clone()
.map(|remote_root| {
RemoteWorkspacePathMapper::new(
config
.cwd
.clone()
.try_into()
.expect("config cwd should be absolute"),
remote_root,
)
});
if !config.experimental_unified_exec_use_exec_server {
return Ok(SessionExecutionBackends {
unified_exec_session_factory: local_unified_exec_session_factory(),
environment: Arc::new(Environment::default()),
});
}
if let Some(websocket_url) = config
.experimental_unified_exec_exec_server_websocket_url
.clone()
{
let client = ExecServerClient::connect_websocket(RemoteExecServerConnectArgs::new(
websocket_url,
"codex-core".to_string(),
))
.await
.map_err(|err| UnifiedExecError::create_process(err.to_string()))?;
return Ok(exec_server_backends_from_client(client, path_mapper));
}
if config.experimental_unified_exec_spawn_local_exec_server {
let command = local_exec_server_command.unwrap_or_else(default_local_exec_server_command);
let spawned_server =
spawn_local_exec_server(command, ExecServerClientConnectOptions::default())
.await
.map_err(|err| UnifiedExecError::create_process(err.to_string()))?;
return Ok(exec_server_backends_from_spawned_server(
Arc::new(spawned_server),
path_mapper,
));
}
let client = ExecServerClient::connect_in_process(ExecServerClientConnectOptions::default())
.await
.map_err(|err| UnifiedExecError::create_process(err.to_string()))?;
Ok(exec_server_backends_from_client(client, path_mapper))
}
fn default_local_exec_server_command() -> ExecServerLaunchCommand {
let binary_name = if cfg!(windows) {
"codex-exec-server.exe"
} else {
"codex-exec-server"
};
let program = std::env::current_exe()
.ok()
.map(|current_exe| current_exe.with_file_name(binary_name))
.filter(|candidate| candidate.exists())
.unwrap_or_else(|| PathBuf::from(binary_name));
ExecServerLaunchCommand {
program,
args: Vec::new(),
}
}
fn exec_server_backends_from_client(
client: ExecServerClient,
path_mapper: Option<RemoteWorkspacePathMapper>,
) -> SessionExecutionBackends {
SessionExecutionBackends {
unified_exec_session_factory: ExecServerUnifiedExecSessionFactory::from_client(
client.clone(),
path_mapper.clone(),
),
environment: Arc::new(Environment::new(Arc::new(ExecServerFileSystem::new(
client,
path_mapper,
)))),
}
}
fn exec_server_backends_from_spawned_server(
spawned_server: Arc<SpawnedExecServer>,
path_mapper: Option<RemoteWorkspacePathMapper>,
) -> SessionExecutionBackends {
SessionExecutionBackends {
unified_exec_session_factory: ExecServerUnifiedExecSessionFactory::from_spawned_server(
Arc::clone(&spawned_server),
path_mapper.clone(),
),
environment: Arc::new(Environment::new(Arc::new(ExecServerFileSystem::new(
spawned_server.client().clone(),
path_mapper,
)))),
}
}
async fn open_local_session(
env: &ExecRequest,
tty: bool,
mut spawn_lifecycle: SpawnLifecycleHandle,
) -> Result<UnifiedExecProcess, UnifiedExecError> {
let (program, args) = env
.command
.split_first()
.ok_or(UnifiedExecError::MissingCommandLine)?;
let inherited_fds = spawn_lifecycle.inherited_fds();
let spawn_result = if tty {
codex_utils_pty::pty::spawn_process_with_inherited_fds(
program,
args,
env.cwd.as_path(),
&env.env,
&env.arg0,
codex_utils_pty::TerminalSize::default(),
&inherited_fds,
)
.await
} else {
codex_utils_pty::pipe::spawn_process_no_stdin_with_inherited_fds(
program,
args,
env.cwd.as_path(),
&env.env,
&env.arg0,
&inherited_fds,
)
.await
};
let spawned = spawn_result.map_err(|err| UnifiedExecError::create_process(err.to_string()))?;
spawn_lifecycle.after_spawn();
UnifiedExecProcess::from_spawned(spawned, env.sandbox, spawn_lifecycle).await
}


@@ -38,6 +38,7 @@ use crate::codex::TurnContext;
use crate::sandboxing::SandboxPermissions;
mod async_watcher;
mod backend;
mod errors;
mod head_tail_buffer;
mod process;
@@ -47,6 +48,9 @@ pub(crate) fn set_deterministic_process_ids_for_tests(enabled: bool) {
process_manager::set_deterministic_process_ids_for_tests(enabled);
}
pub(crate) use backend::UnifiedExecSessionFactoryHandle;
pub(crate) use backend::local_unified_exec_session_factory;
pub(crate) use backend::session_execution_backends_for_config;
pub(crate) use errors::UnifiedExecError;
pub(crate) use process::NoopSpawnLifecycle;
#[cfg(unix)]
@@ -123,14 +127,26 @@ impl ProcessStore {
pub(crate) struct UnifiedExecProcessManager {
process_store: Mutex<ProcessStore>,
max_write_stdin_yield_time_ms: u64,
session_factory: UnifiedExecSessionFactoryHandle,
}
impl UnifiedExecProcessManager {
pub(crate) fn new(max_write_stdin_yield_time_ms: u64) -> Self {
Self::with_session_factory(
max_write_stdin_yield_time_ms,
local_unified_exec_session_factory(),
)
}
pub(crate) fn with_session_factory(
max_write_stdin_yield_time_ms: u64,
session_factory: UnifiedExecSessionFactoryHandle,
) -> Self {
Self {
process_store: Mutex::new(ProcessStore::default()),
max_write_stdin_yield_time_ms: max_write_stdin_yield_time_ms
.max(MIN_EMPTY_YIELD_TIME_MS),
session_factory,
}
}
}


@@ -3,14 +3,30 @@ use super::*;
use crate::codex::Session;
use crate::codex::TurnContext;
use crate::codex::make_session_and_context;
use crate::config::ConfigBuilder;
use crate::config::ConfigOverrides;
use crate::exec::ExecExpiration;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use crate::sandboxing::ExecRequest;
use crate::tools::context::ExecCommandToolOutput;
use crate::unified_exec::ExecCommandRequest;
use crate::unified_exec::WriteStdinRequest;
use codex_exec_server::ExecServerLaunchCommand;
use codex_protocol::config_types::WindowsSandboxLevel;
use codex_protocol::permissions::FileSystemSandboxPolicy;
use codex_protocol::permissions::NetworkSandboxPolicy;
use codex_utils_cargo_bin::cargo_bin;
use core_test_support::skip_if_sandbox;
use std::collections::HashMap;
use std::net::TcpListener;
use std::path::PathBuf;
use std::process::Command;
use std::process::Stdio;
use std::sync::Arc;
use tempfile::TempDir;
use tokio::time::Duration;
use toml::Value as TomlValue;
async fn test_session_and_turn() -> (Arc<Session>, Arc<TurnContext>) {
let (session, mut turn) = make_session_and_context().await;
@@ -82,6 +98,28 @@ async fn write_stdin(
.await
}
fn test_exec_request(command: Vec<String>, cwd: &std::path::Path) -> ExecRequest {
let sandbox_policy = SandboxPolicy::DangerFullAccess;
let file_system_sandbox_policy = FileSystemSandboxPolicy::from(&sandbox_policy);
let network_sandbox_policy = NetworkSandboxPolicy::from(&sandbox_policy);
ExecRequest {
command,
cwd: cwd.to_path_buf(),
env: HashMap::new(),
network: None,
expiration: ExecExpiration::Timeout(Duration::from_secs(5)),
sandbox: crate::exec::SandboxType::None,
windows_sandbox_level: WindowsSandboxLevel::default(),
windows_sandbox_private_desktop: false,
sandbox_permissions: SandboxPermissions::UseDefault,
sandbox_policy,
file_system_sandbox_policy,
network_sandbox_policy,
justification: None,
arg0: None,
}
}
#[test]
fn push_chunk_preserves_prefix_and_suffix() {
let mut buffer = HeadTailBuffer::default();
@@ -233,6 +271,166 @@ async fn unified_exec_timeouts() -> anyhow::Result<()> {
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_can_spawn_a_local_exec_server_backend() -> anyhow::Result<()> {
skip_if_sandbox!(Ok(()));
let codex_home = TempDir::new()?;
let cwd = TempDir::new()?;
let config = ConfigBuilder::default()
.codex_home(codex_home.path().to_path_buf())
.cli_overrides(vec![
(
"experimental_unified_exec_use_exec_server".to_string(),
TomlValue::Boolean(true),
),
(
"experimental_unified_exec_spawn_local_exec_server".to_string(),
TomlValue::Boolean(true),
),
])
.harness_overrides(ConfigOverrides {
cwd: Some(cwd.path().to_path_buf()),
..Default::default()
})
.build()
.await?;
let workspace_root = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.parent()
.expect("core crate should be under codex-rs")
.to_path_buf();
let cargo = PathBuf::from(env!("CARGO"));
let build_status = Command::new(&cargo)
.current_dir(&workspace_root)
.args([
"build",
"-p",
"codex-exec-server",
"--bin",
"codex-exec-server",
])
.status()?;
assert!(build_status.success(), "failed to build codex-exec-server");
let target_dir = std::env::var_os("CARGO_TARGET_DIR")
.map(PathBuf::from)
.unwrap_or_else(|| workspace_root.join("target"));
let binary_name = if cfg!(windows) {
"codex-exec-server.exe"
} else {
"codex-exec-server"
};
let session_factory = session_execution_backends_for_config(
&config,
Some(ExecServerLaunchCommand {
program: target_dir.join("debug").join(binary_name),
args: Vec::new(),
}),
)
.await?
.unified_exec_session_factory;
let manager = UnifiedExecProcessManager::with_session_factory(
DEFAULT_MAX_BACKGROUND_TERMINAL_TIMEOUT_MS,
session_factory,
);
let process = manager
.open_session_with_exec_env(
1000,
&test_exec_request(
vec![
"bash".to_string(),
"-c".to_string(),
"printf unified_exec_spawned_exec_server_backend_marker".to_string(),
],
cwd.path(),
),
false,
Box::new(NoopSpawnLifecycle),
)
.await?;
let mut output_rx = process.output_receiver();
let chunk = tokio::time::timeout(Duration::from_secs(5), output_rx.recv()).await??;
assert_eq!(
String::from_utf8_lossy(&chunk),
"unified_exec_spawned_exec_server_backend_marker"
);
process.terminate();
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_can_connect_to_websocket_exec_server_backend() -> anyhow::Result<()> {
skip_if_sandbox!(Ok(()));
let codex_home = TempDir::new()?;
let cwd = TempDir::new()?;
let listener = TcpListener::bind("127.0.0.1:0")?;
let port = listener.local_addr()?.port();
drop(listener);
let websocket_url = format!("ws://127.0.0.1:{port}");
let config = ConfigBuilder::default()
.codex_home(codex_home.path().to_path_buf())
.cli_overrides(vec![
(
"experimental_unified_exec_use_exec_server".to_string(),
TomlValue::Boolean(true),
),
(
"experimental_unified_exec_exec_server_websocket_url".to_string(),
TomlValue::String(websocket_url.clone()),
),
])
.harness_overrides(ConfigOverrides {
cwd: Some(cwd.path().to_path_buf()),
..Default::default()
})
.build()
.await?;
let mut child = tokio::process::Command::new(cargo_bin("codex-exec-server")?)
.arg("--listen")
.arg(&websocket_url)
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::inherit())
.spawn()?;
tokio::time::sleep(Duration::from_millis(250)).await;
let session_factory = session_execution_backends_for_config(&config, None)
.await?
.unified_exec_session_factory;
let manager = UnifiedExecProcessManager::with_session_factory(
DEFAULT_MAX_BACKGROUND_TERMINAL_TIMEOUT_MS,
session_factory,
);
let process = manager
.open_session_with_exec_env(
2000,
&test_exec_request(
vec![
"bash".to_string(),
"-c".to_string(),
"printf unified_exec_websocket_exec_server_backend_marker".to_string(),
],
cwd.path(),
),
false,
Box::new(NoopSpawnLifecycle),
)
.await?;
let mut output_rx = process.output_receiver();
let chunk = tokio::time::timeout(Duration::from_secs(5), output_rx.recv()).await??;
assert_eq!(
String::from_utf8_lossy(&chunk),
"unified_exec_websocket_exec_server_backend_marker"
);
process.terminate();
child.start_kill()?;
let _ = child.wait().await;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_pause_blocks_yield_timeout() -> anyhow::Result<()> {
skip_if_sandbox!(Ok(()));


@@ -1,6 +1,7 @@
#![allow(clippy::module_inception)]
use std::sync::Arc;
use std::sync::Mutex as StdMutex;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use tokio::sync::Mutex;
@@ -16,8 +17,14 @@ use crate::exec::ExecToolCallOutput;
use crate::exec::SandboxType;
use crate::exec::StreamOutput;
use crate::exec::is_likely_sandbox_denied;
use crate::exec_server_path_mapper::RemoteWorkspacePathMapper;
use crate::sandboxing::ExecRequest;
use crate::truncate::TruncationPolicy;
use crate::truncate::formatted_truncate_text;
use codex_exec_server::ExecParams;
use codex_exec_server::ExecServerClient;
use codex_exec_server::ExecServerEvent;
use codex_utils_absolute_path::AbsolutePathBuf;
use codex_utils_pty::ExecCommandSession;
use codex_utils_pty::SpawnedPty;
@@ -56,7 +63,7 @@ pub(crate) struct OutputHandles {
#[derive(Debug)]
pub(crate) struct UnifiedExecProcess {
process_handle: ExecCommandSession,
process_handle: ProcessBackend,
output_rx: broadcast::Receiver<Vec<u8>>,
output_buffer: OutputBuffer,
output_notify: Arc<Notify>,
@@ -69,9 +76,45 @@ pub(crate) struct UnifiedExecProcess {
_spawn_lifecycle: SpawnLifecycleHandle,
}
enum ProcessBackend {
Local(ExecCommandSession),
Remote(RemoteExecSession),
}
impl std::fmt::Debug for ProcessBackend {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Local(process_handle) => f.debug_tuple("Local").field(process_handle).finish(),
Self::Remote(process_handle) => f.debug_tuple("Remote").field(process_handle).finish(),
}
}
}
#[derive(Clone)]
struct RemoteExecSession {
process_key: String,
client: ExecServerClient,
writer_tx: mpsc::Sender<Vec<u8>>,
exited: Arc<AtomicBool>,
exit_code: Arc<StdMutex<Option<i32>>>,
}
impl std::fmt::Debug for RemoteExecSession {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("RemoteExecSession")
.field("process_key", &self.process_key)
.field("exited", &self.exited.load(Ordering::SeqCst))
.field(
"exit_code",
&self.exit_code.lock().ok().and_then(|guard| *guard),
)
.finish_non_exhaustive()
}
}
impl UnifiedExecProcess {
pub(super) fn new(
process_handle: ExecCommandSession,
fn new(
process_handle: ProcessBackend,
initial_output_rx: tokio::sync::broadcast::Receiver<Vec<u8>>,
sandbox_type: SandboxType,
spawn_lifecycle: SpawnLifecycleHandle,
@@ -123,7 +166,10 @@ impl UnifiedExecProcess {
}
pub(super) fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
self.process_handle.writer_sender()
match &self.process_handle {
ProcessBackend::Local(process_handle) => process_handle.writer_sender(),
ProcessBackend::Remote(process_handle) => process_handle.writer_tx.clone(),
}
}
pub(super) fn output_handles(&self) -> OutputHandles {
@@ -149,17 +195,38 @@ impl UnifiedExecProcess {
}
pub(super) fn has_exited(&self) -> bool {
self.process_handle.has_exited()
match &self.process_handle {
ProcessBackend::Local(process_handle) => process_handle.has_exited(),
ProcessBackend::Remote(process_handle) => process_handle.exited.load(Ordering::SeqCst),
}
}
pub(super) fn exit_code(&self) -> Option<i32> {
self.process_handle.exit_code()
match &self.process_handle {
ProcessBackend::Local(process_handle) => process_handle.exit_code(),
ProcessBackend::Remote(process_handle) => process_handle
.exit_code
.lock()
.ok()
.and_then(|guard| *guard),
}
}
pub(super) fn terminate(&self) {
self.output_closed.store(true, Ordering::Release);
self.output_closed_notify.notify_waiters();
self.process_handle.terminate();
match &self.process_handle {
ProcessBackend::Local(process_handle) => process_handle.terminate(),
ProcessBackend::Remote(process_handle) => {
let client = process_handle.client.clone();
let process_key = process_handle.process_key.clone();
if let Ok(handle) = tokio::runtime::Handle::try_current() {
handle.spawn(async move {
let _ = client.terminate(&process_key).await;
});
}
}
}
self.cancellation_token.cancel();
self.output_task.abort();
}
@@ -232,7 +299,12 @@ impl UnifiedExecProcess {
mut exit_rx,
} = spawned;
let output_rx = codex_utils_pty::combine_output_receivers(stdout_rx, stderr_rx);
let managed = Self::new(process_handle, output_rx, sandbox_type, spawn_lifecycle);
let managed = Self::new(
ProcessBackend::Local(process_handle),
output_rx,
sandbox_type,
spawn_lifecycle,
);
let exit_ready = matches!(exit_rx.try_recv(), Ok(_) | Err(TryRecvError::Closed));
@@ -262,6 +334,99 @@ impl UnifiedExecProcess {
Ok(managed)
}
pub(super) async fn from_exec_server(
client: ExecServerClient,
process_id: i32,
env: &ExecRequest,
tty: bool,
spawn_lifecycle: SpawnLifecycleHandle,
path_mapper: Option<&RemoteWorkspacePathMapper>,
) -> Result<Self, UnifiedExecError> {
let process_key = process_id.to_string();
let mut events_rx = client.event_receiver();
let cwd = path_mapper.map_or_else(
|| env.cwd.clone(),
|mapper| match AbsolutePathBuf::try_from(env.cwd.clone()) {
Ok(cwd) => mapper.map_path(&cwd).into(),
Err(_) => env.cwd.clone(),
},
);
let response = client
.exec(ExecParams {
process_id: process_key.clone(),
argv: env.command.clone(),
cwd,
env: env.env.clone(),
tty,
arg0: env.arg0.clone(),
sandbox: None,
})
.await
.map_err(|err| UnifiedExecError::create_process(err.to_string()))?;
let process_key = response.process_id;
let (output_tx, output_rx) = broadcast::channel(256);
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(256);
let exited = Arc::new(AtomicBool::new(false));
let exit_code = Arc::new(StdMutex::new(None));
let managed = Self::new(
ProcessBackend::Remote(RemoteExecSession {
process_key: process_key.clone(),
client: client.clone(),
writer_tx,
exited: Arc::clone(&exited),
exit_code: Arc::clone(&exit_code),
}),
output_rx,
env.sandbox,
spawn_lifecycle,
);
{
let client = client.clone();
let writer_process_key = process_key.clone();
tokio::spawn(async move {
while let Some(chunk) = writer_rx.recv().await {
if client.write(&writer_process_key, chunk).await.is_err() {
break;
}
}
});
}
{
let process_key = process_key.clone();
let exited = Arc::clone(&exited);
let exit_code = Arc::clone(&exit_code);
let cancellation_token = managed.cancellation_token();
tokio::spawn(async move {
while let Ok(event) = events_rx.recv().await {
match event {
ExecServerEvent::OutputDelta(notification)
if notification.process_id == process_key =>
{
let _ = output_tx.send(notification.chunk.into_inner());
}
ExecServerEvent::Exited(notification)
if notification.process_id == process_key =>
{
exited.store(true, Ordering::SeqCst);
if let Ok(mut guard) = exit_code.lock() {
*guard = Some(notification.exit_code);
}
cancellation_token.cancel();
break;
}
ExecServerEvent::OutputDelta(_) | ExecServerEvent::Exited(_) => {}
}
}
});
}
Ok(managed)
}
fn signal_exit(&self) {
self.cancellation_token.cancel();
}


@@ -539,42 +539,14 @@ impl UnifiedExecProcessManager {
pub(crate) async fn open_session_with_exec_env(
&self,
process_id: i32,
env: &ExecRequest,
tty: bool,
mut spawn_lifecycle: SpawnLifecycleHandle,
spawn_lifecycle: SpawnLifecycleHandle,
) -> Result<UnifiedExecProcess, UnifiedExecError> {
let (program, args) = env
.command
.split_first()
.ok_or(UnifiedExecError::MissingCommandLine)?;
let inherited_fds = spawn_lifecycle.inherited_fds();
let spawn_result = if tty {
codex_utils_pty::pty::spawn_process_with_inherited_fds(
program,
args,
env.cwd.as_path(),
&env.env,
&env.arg0,
codex_utils_pty::TerminalSize::default(),
&inherited_fds,
)
self.session_factory
.open_session(process_id, env, tty, spawn_lifecycle)
.await
} else {
codex_utils_pty::pipe::spawn_process_no_stdin_with_inherited_fds(
program,
args,
env.cwd.as_path(),
&env.env,
&env.arg0,
&inherited_fds,
)
.await
};
let spawned =
spawn_result.map_err(|err| UnifiedExecError::create_process(err.to_string()))?;
spawn_lifecycle.after_spawn();
UnifiedExecProcess::from_spawned(spawned, env.sandbox, spawn_lifecycle).await
}
pub(super) async fn open_session_with_sandbox(
@@ -610,6 +582,7 @@ impl UnifiedExecProcessManager {
})
.await;
let req = UnifiedExecToolRequest {
process_id: request.process_id,
command: request.command.clone(),
cwd,
env,


@@ -269,6 +269,78 @@ async fn unified_exec_intercepts_apply_patch_exec_command() -> Result<()> {
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_can_route_through_in_process_exec_server() -> Result<()> {
skip_if_no_network!(Ok(()));
skip_if_sandbox!(Ok(()));
skip_if_windows!(Ok(()));
let builder = test_codex().with_config(|config| {
config.use_experimental_unified_exec_tool = true;
config.experimental_unified_exec_use_exec_server = true;
config
.features
.enable(Feature::UnifiedExec)
.expect("test config should allow feature update");
});
let harness = TestCodexHarness::with_builder(builder).await?;
let call_id = "uexec-exec-server-inprocess";
let marker = "unified_exec_exec_server_inprocess_marker";
let args = json!({
"cmd": format!("printf {marker}"),
"yield_time_ms": 250,
});
let responses = vec![
sse(vec![
ev_response_created("resp-1"),
ev_function_call(call_id, "exec_command", &serde_json::to_string(&args)?),
ev_completed("resp-1"),
]),
sse(vec![
ev_response_created("resp-2"),
ev_assistant_message("msg-1", "done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(harness.server(), responses).await;
let test = harness.test();
let codex = test.codex.clone();
let cwd = test.cwd_path().to_path_buf();
let session_model = test.session_configured.model.clone();
codex
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "route unified exec through the in-process exec-server".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: session_model,
effort: None,
summary: None,
service_tier: None,
collaboration_mode: None,
personality: None,
})
.await?;
wait_for_event(&codex, |event| matches!(event, EventMsg::TurnComplete(_))).await;
let output = harness.function_call_stdout(call_id).await;
assert!(
output.contains(marker),
"expected unified exec output from exec-server backend, got: {output:?}"
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_emits_exec_command_begin_event() -> Result<()> {
skip_if_no_network!(Ok(()));


@@ -38,6 +38,7 @@ pub struct ReadDirectoryEntry {
pub file_name: String,
pub is_directory: bool,
pub is_file: bool,
pub is_symlink: bool,
}
pub type FileSystemResult<T> = io::Result<T>;
@@ -72,7 +73,7 @@ pub trait ExecutorFileSystem: Send + Sync {
}
#[derive(Clone, Default)]
pub(crate) struct LocalFileSystem;
pub struct LocalFileSystem;
#[async_trait]
impl ExecutorFileSystem for LocalFileSystem {
@@ -121,11 +122,13 @@ impl ExecutorFileSystem for LocalFileSystem {
let mut entries = Vec::new();
let mut read_dir = tokio::fs::read_dir(path.as_path()).await?;
while let Some(entry) = read_dir.next_entry().await? {
let metadata = tokio::fs::metadata(entry.path()).await?;
let metadata = tokio::fs::symlink_metadata(entry.path()).await?;
let file_type = metadata.file_type();
entries.push(ReadDirectoryEntry {
file_name: entry.file_name().to_string_lossy().into_owned(),
is_directory: metadata.is_dir(),
is_file: metadata.is_file(),
is_symlink: file_type.is_symlink(),
});
}
Ok(entries)


@@ -5,14 +5,34 @@ pub use fs::CreateDirectoryOptions;
pub use fs::ExecutorFileSystem;
pub use fs::FileMetadata;
pub use fs::FileSystemResult;
pub use fs::LocalFileSystem;
pub use fs::ReadDirectoryEntry;
pub use fs::RemoveOptions;
use std::sync::Arc;
#[derive(Clone, Debug, Default)]
pub struct Environment;
#[derive(Clone)]
pub struct Environment {
file_system: Arc<dyn ExecutorFileSystem>,
}
impl Environment {
pub fn get_filesystem(&self) -> impl ExecutorFileSystem + use<> {
fs::LocalFileSystem
pub fn new(file_system: Arc<dyn ExecutorFileSystem>) -> Self {
Self { file_system }
}
pub fn get_filesystem(&self) -> Arc<dyn ExecutorFileSystem> {
Arc::clone(&self.file_system)
}
}
impl Default for Environment {
fn default() -> Self {
Self::new(Arc::new(fs::LocalFileSystem))
}
}
impl std::fmt::Debug for Environment {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("Environment").finish_non_exhaustive()
}
}


@@ -0,0 +1,40 @@
[package]
name = "codex-exec-server"
version.workspace = true
edition.workspace = true
license.workspace = true
[[bin]]
name = "codex-exec-server"
path = "src/bin/codex-exec-server.rs"
[lints]
workspace = true
[dependencies]
base64 = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-app-server-protocol = { workspace = true }
codex-environment = { workspace = true }
codex-utils-pty = { workspace = true }
futures = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
"io-util",
"macros",
"net",
"process",
"rt-multi-thread",
"sync",
"time",
] }
tokio-tungstenite = { workspace = true }
tracing = { workspace = true }
[dev-dependencies]
anyhow = { workspace = true }
codex-utils-cargo-bin = { workspace = true }
pretty_assertions = { workspace = true }


@@ -0,0 +1,242 @@
# exec-server design notes
This document sketches a likely direction for integrating `codex-exec-server`
with unified exec without baking the full tool-call policy stack into the
server.
The goals are:
- keep exec-server generic and reusable
- keep approval, sandbox, and retry policy in `core`
- preserve the unified-exec event flow the model already depends on
- support retained output caps so polling and snapshot-style APIs do not grow
memory without bound
## Unified exec today
Today the flow for LLM-visible interactive execution is:
1. The model sees the `exec_command` and `write_stdin` tools.
2. `UnifiedExecHandler` parses the tool arguments and allocates a process id.
3. `UnifiedExecProcessManager::exec_command(...)` calls
`open_session_with_sandbox(...)`.
4. `ToolOrchestrator` drives approval, sandbox selection, managed network
approval, and sandbox-denial retry behavior.
5. `UnifiedExecRuntime` builds a `CommandSpec`, asks the current
`SandboxAttempt` to transform it into an `ExecRequest`, and passes that
resolved request back to the process manager.
6. `open_session_with_exec_env(...)` spawns the process from that resolved
`ExecRequest`.
7. Unified exec emits an `ExecCommandBegin` event.
8. Unified exec starts a background output watcher that emits
`ExecCommandOutputDelta` events.
9. The initial tool call collects output until the requested yield deadline and
returns an `ExecCommandToolOutput` snapshot to the model.
10. If the process is still running, unified exec stores it and later emits
`ExecCommandEnd` when the exit watcher fires.
11. A later `write_stdin` tool call writes to the stored process, emits a
`TerminalInteraction` event, collects another bounded snapshot, and returns
that tool response to the model.
Important observation: the 250ms / 10s yield-window behavior is not really a
process-server concern. It is a client-side convenience layer for the LLM tool
API. The server should focus on raw process lifecycle and streaming events.
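That client-side yield window can be sketched synchronously (the real unified-exec code is async; the helper name and channel shape here are illustrative only): drain a channel of output chunks until either the deadline passes or the sender side hangs up.

```rust
use std::sync::mpsc::{self, RecvTimeoutError};
use std::time::{Duration, Instant};

// Collect output chunks until `yield_time` elapses or the sending side
// (standing in for the process watcher) disconnects. Hypothetical helper.
fn collect_until_yield(rx: &mpsc::Receiver<Vec<u8>>, yield_time: Duration) -> Vec<u8> {
    let deadline = Instant::now() + yield_time;
    let mut out = Vec::new();
    loop {
        let remaining = match deadline.checked_duration_since(Instant::now()) {
            Some(d) => d,
            None => break, // yield window expired
        };
        match rx.recv_timeout(remaining) {
            Ok(chunk) => out.extend_from_slice(&chunk),
            Err(RecvTimeoutError::Timeout) | Err(RecvTimeoutError::Disconnected) => break,
        }
    }
    out
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(b"hello ".to_vec()).unwrap();
    tx.send(b"world".to_vec()).unwrap();
    drop(tx); // process "exited": channel closed
    let snapshot = collect_until_yield(&rx, Duration::from_millis(250));
    assert_eq!(snapshot, b"hello world".to_vec());
}
```

Because this loop lives in the client, the server never needs to know about 250ms or 10s windows at all.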
## Proposed boundary
The clean split is:
- exec-server server: process lifecycle, output streaming, retained output caps
- exec-server client: `wait`, `communicate`, yield-window helpers, session
bookkeeping
- unified exec in `core`: tool parsing, event emission, approvals, sandboxing,
managed networking, retry semantics
If exec-server is used by unified exec later, the boundary should sit between
step 5 and step 6 above: after policy has produced a resolved spawn request, but
before the actual PTY or pipe spawn.
## Suggested process API
Start simple and explicit:
- `process/start`
- `process/write`
- `process/closeStdin`
- `process/resize`
- `process/terminate`
- `process/wait`
- `process/snapshot`
Server notifications:
- `process/output`
- `process/exited`
- optionally `process/started`
- optionally `process/failed`
Suggested request shapes:
```rust
enum ProcessStartRequest {
Direct(DirectExecSpec),
Prepared(PreparedExecSpec),
}
struct DirectExecSpec {
process_id: String,
argv: Vec<String>,
cwd: PathBuf,
env: HashMap<String, String>,
arg0: Option<String>,
io: ProcessIo,
}
struct PreparedExecSpec {
process_id: String,
request: PreparedExecRequest,
io: ProcessIo,
}
enum ProcessIo {
Pty { rows: u16, cols: u16 },
Pipe { stdin: StdinMode },
}
enum StdinMode {
Open,
Closed,
}
enum TerminateMode {
Graceful { timeout_ms: u64 },
Force,
}
```
Notes:
- `processId` remains a protocol handle, not an OS pid.
- `wait` is a good generic API because many callers want process completion
without manually wiring notifications.
- `communicate` is also a reasonable API, but it should probably start as a
client helper built on top of `write + closeStdin + wait + snapshot`.
- If an RPC form of `communicate` is added later, it should be a convenience
wrapper rather than the primitive execution model.
## Output capping
Even with event streaming, the server should retain a bounded amount of output
per process so callers can poll, wait, or reconnect without unbounded memory
growth.
Suggested behavior:
- stream every output chunk live via `process/output`
- retain capped output per process in memory
- keep stdout and stderr separately for pipe-backed processes
- for PTY-backed processes, treat retained output as a single terminal stream
- expose truncation metadata on snapshots
Suggested snapshot response:
```rust
struct ProcessSnapshot {
stdout: Vec<u8>,
stderr: Vec<u8>,
terminal: Vec<u8>,
truncated: bool,
exit_code: Option<i32>,
running: bool,
}
```
Implementation-wise, the current `HeadTailBuffer` pattern used by unified exec
is a good fit. The cap should be server config, not request config, so memory
use stays predictable.
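A minimal standalone sketch of that retention pattern (hypothetical simplified type, not the real `HeadTailBuffer`): keep the first `head_cap` bytes and the most recent `tail_cap` bytes, and record whether anything in between was dropped.

```rust
use std::collections::VecDeque;

// Retain the first `head_cap` bytes and the last `tail_cap` bytes of a
// stream, dropping the middle. Simplified illustration of head/tail capping.
struct HeadTail {
    head: Vec<u8>,
    tail: VecDeque<u8>,
    head_cap: usize,
    tail_cap: usize,
    truncated: bool,
}

impl HeadTail {
    fn new(head_cap: usize, tail_cap: usize) -> Self {
        Self { head: Vec::new(), tail: VecDeque::new(), head_cap, tail_cap, truncated: false }
    }

    fn push(&mut self, chunk: &[u8]) {
        for &b in chunk {
            if self.head.len() < self.head_cap {
                self.head.push(b);
            } else {
                if self.tail.len() == self.tail_cap {
                    self.tail.pop_front(); // evict oldest middle byte
                    self.truncated = true;
                }
                self.tail.push_back(b);
            }
        }
    }

    // Returns the retained bytes plus a truncation flag for the snapshot.
    fn snapshot(&self) -> (Vec<u8>, bool) {
        let mut out = self.head.clone();
        out.extend(self.tail.iter().copied());
        (out, self.truncated)
    }
}

fn main() {
    let mut buf = HeadTail::new(4, 4);
    buf.push(b"abcdefghij"); // 10 bytes into an 8-byte cap
    let (bytes, truncated) = buf.snapshot();
    assert_eq!(bytes, b"abcdghij".to_vec()); // "ef" dropped from the middle
    assert!(truncated);
}
```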
## Sandboxing and networking
### How unified exec does it today
Unified exec does not hand raw command args directly to the PTY layer for tool
calls. Instead, it:
1. computes approval requirements
2. chooses a sandbox attempt
3. applies managed-network policy if needed
4. transforms `CommandSpec` into `ExecRequest`
5. spawns from that resolved `ExecRequest`
That split is already valuable and should be preserved.
### Recommended exec-server design
Do not put approval policy into exec-server.
Instead, support two execution modes:
- `Direct`: raw command, intended for orchestrator-side or already-trusted use
- `Prepared`: already-resolved spawn request, intended for tool-call execution
For tool calls from the LLM side:
1. `core` runs the existing approval + sandbox + managed-network flow
2. `core` produces a resolved `ExecRequest`
3. the exec-server client sends `PreparedExecSpec`
4. exec-server spawns exactly that request and streams process events
For orchestrator-side execution:
1. caller sends `DirectExecSpec`
2. exec-server spawns directly without running approval or sandbox policy
This gives one generic process API while keeping the policy-sensitive logic in
the place that already owns it.
### Why not make exec-server own sandbox selection?
That would force exec-server to understand:
- approval policy
- exec policy / prefix rules
- managed-network approval flow
- sandbox retry semantics
- guardian routing
- feature-flag-driven sandbox selection
- platform-specific sandbox helper configuration
That is too opinionated for a reusable process service.
## Optional future server config
If exec-server grows beyond the current prototype, a config object like this
would be enough:
```rust
struct ExecServerConfig {
shutdown_grace_period_ms: u64,
max_processes_per_connection: usize,
retained_output_bytes_per_process: usize,
allow_direct_exec: bool,
allow_prepared_exec: bool,
}
```
That keeps policy surface small:
- lifecycle limits live in the server
- trust and sandbox policy stay with the caller
## Mapping back to LLM-visible events
If unified exec is later backed by exec-server, the `core` client wrapper should
keep owning the translation into the existing event model:
- `process/start` success -> `ExecCommandBegin`
- `process/output` -> `ExecCommandOutputDelta`
- local `process/write` call -> `TerminalInteraction`
- `process/exited` plus retained transcript -> `ExecCommandEnd`
That preserves the current LLM-facing contract while making the process backend
swappable.
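The translation table above can be sketched as a single match on a client-side notification type (the enum variants and event-name strings are hypothetical stand-ins for the real protocol and event types):

```rust
// Illustrative mapping from exec-server notifications to the existing
// unified-exec event names. Names here are stand-ins, not the real types.
enum ServerNotification {
    Started,
    Output(Vec<u8>),
    Exited { exit_code: i32 },
}

fn llm_event_for(notification: &ServerNotification) -> &'static str {
    match notification {
        ServerNotification::Started => "ExecCommandBegin",
        ServerNotification::Output(_) => "ExecCommandOutputDelta",
        ServerNotification::Exited { .. } => "ExecCommandEnd",
    }
}

fn main() {
    assert_eq!(llm_event_for(&ServerNotification::Started), "ExecCommandBegin");
    assert_eq!(llm_event_for(&ServerNotification::Output(b"hi".to_vec())), "ExecCommandOutputDelta");
    assert_eq!(llm_event_for(&ServerNotification::Exited { exit_code: 0 }), "ExecCommandEnd");
}
```

`TerminalInteraction` is the one event with no server-side trigger: it is emitted locally when the client issues `process/write`, which is why it stays out of this mapping.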


@@ -0,0 +1,396 @@
# codex-exec-server
`codex-exec-server` is a small standalone JSON-RPC server for spawning and
controlling subprocesses through `codex-utils-pty`.
It currently provides:
- a standalone binary: `codex-exec-server`
- a transport-agnostic server runtime with stdio and websocket entrypoints
- a Rust client: `ExecServerClient`
- a direct in-process client mode: `ExecServerClient::connect_in_process`
- a separate local launch helper: `spawn_local_exec_server`
- a small protocol module with shared request/response types
This crate is intentionally narrow. It is not wired into the main Codex CLI or
unified-exec in this PR; it is only the standalone transport layer.
The internal shape is intentionally closer to `app-server` than the first cut:
- transport adapters are separate from the per-connection request processor
- JSON-RPC route matching is separate from the stateful exec handler
- the client only speaks the protocol; it does not spawn a server subprocess
- the client can also bypass the JSON-RPC transport/routing layer in local
in-process mode and call the typed handler directly
- local child-process launch is handled by a separate helper/factory layer
That split is meant to leave reusable seams if exec-server and app-server later
share transport or JSON-RPC connection utilities. It also keeps the core
handler testable without the RPC server implementation itself.
Design notes for a likely future integration with unified exec, including
rough call flow, buffering, and sandboxing boundaries, live in
[DESIGN.md](./DESIGN.md).
## Transport
The server speaks the same JSON-RPC message shapes over multiple transports.
The standalone binary supports:
- `stdio://` (default)
- `ws://IP:PORT`
Wire framing:
- stdio: one newline-delimited JSON-RPC message per line on stdin/stdout
- websocket: one JSON-RPC message per websocket text frame
Like the app-server transport, messages on the wire omit the `"jsonrpc":"2.0"`
field and use the shared `codex-app-server-protocol` envelope types.
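For example, an `initialize` exchange over the stdio transport is two newline-delimited lines, one per message (a plausible sketch; the exact envelope field names follow the shared `codex-app-server-protocol` types):

```json
{"id": 1, "method": "initialize", "params": {"clientName": "my-client"}}
{"id": 1, "result": {"protocolVersion": "exec-server.v0"}}
```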
The current protocol version is:
```text
exec-server.v0
```
## Lifecycle
Each connection follows this sequence:
1. Send `initialize`.
2. Wait for the `initialize` response.
3. Send `initialized`.
4. Start and manage processes with `process/start`, `process/read`,
`process/write`, and `process/terminate`.
5. Read streaming notifications from `process/output` and
`process/exited`.
If the client sends exec methods before completing the `initialize` /
`initialized` handshake, the server rejects them.
If a connection closes, the server terminates any remaining managed processes
for that connection.
TODO: add authentication to the `initialize` setup before this is used across a
trust boundary.
## API
### `initialize`
Initial handshake request.
Request params:
```json
{
"clientName": "my-client"
}
```
Response:
```json
{
"protocolVersion": "exec-server.v0"
}
```
### `initialized`
Handshake acknowledgement notification sent by the client after a successful
`initialize` response. Exec methods are rejected until this arrives.
Params are currently ignored. Sending any other client notification method is a
protocol error.
### `process/start`
Starts a new managed process.
Request params:
```json
{
"processId": "proc-1",
"argv": ["bash", "-lc", "printf 'hello\\n'"],
"cwd": "/absolute/working/directory",
"env": {
"PATH": "/usr/bin:/bin"
},
"tty": true,
"arg0": null,
"sandbox": null
}
```
Field definitions:
- `argv`: command vector. It must be non-empty.
- `cwd`: absolute working directory used for the child process.
- `env`: environment variables passed to the child process.
- `tty`: when `true`, spawn a PTY-backed interactive process; when `false`,
spawn a pipe-backed process with closed stdin.
- `arg0`: optional argv0 override forwarded to `codex-utils-pty`.
- `sandbox`: optional sandbox config. Omit it for the current direct-spawn
behavior. Explicit `{"mode":"none"}` is accepted; `{"mode":"hostDefault"}`
is currently rejected until host-local sandbox materialization is wired up.
Response:
```json
{
"processId": "proc-1"
}
```
Behavior notes:
- `processId` is chosen by the client and must be unique for the connection.
- PTY-backed processes accept later writes through `process/write`.
- Pipe-backed processes are launched with stdin closed and reject writes.
- Output is streamed asynchronously via `process/output`.
- Exit is reported asynchronously via `process/exited`.
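The sandbox-field rules can be sketched as a small validation step. This is a hedged illustration of the stated behavior, not the server's actual types: an omitted sandbox and an explicit `none` mode both proceed with the direct-spawn path, while `hostDefault` is rejected rather than silently running unsandboxed.

```rust
// Sketch of process/start sandbox validation (illustrative types).
#[derive(Debug, PartialEq)]
enum SandboxMode {
    None,
    HostDefault,
}

fn validate_sandbox(sandbox: Option<SandboxMode>) -> Result<(), String> {
    match sandbox {
        // Omitted or explicit "none": current direct-spawn behavior.
        None | Some(SandboxMode::None) => Ok(()),
        // Reserved for future host-local sandbox materialization.
        Some(SandboxMode::HostDefault) => {
            Err("invalid params: hostDefault sandbox is not yet supported".to_string())
        }
    }
}

fn main() {
    assert!(validate_sandbox(None).is_ok());
    assert!(validate_sandbox(Some(SandboxMode::None)).is_ok());
    assert!(validate_sandbox(Some(SandboxMode::HostDefault)).is_err());
}
```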
### `process/write`
Writes raw bytes to the stdin of a running PTY-backed process.
Request params:
```json
{
"processId": "proc-1",
"chunk": "aGVsbG8K"
}
```
`chunk` is base64-encoded raw bytes. In the example above it decodes to `hello\n`.
Response:
```json
{
"accepted": true
}
```
Behavior notes:
- Writes to an unknown `processId` are rejected.
- Writes to a non-PTY process are rejected because stdin is already closed.
### `process/read`
Reads retained output from a managed process by sequence number.
Request params:
```json
{
"processId": "proc-1",
"afterSeq": 0,
"maxBytes": 65536,
"waitMs": 250
}
```
Response:
```json
{
"chunks": [
{
"seq": 1,
"stream": "pty",
"chunk": "aGVsbG8K"
}
],
"nextSeq": 2,
"exited": false,
"exitCode": null
}
```
Behavior notes:
- Output is retained in bounded server memory so callers can poll without
relying only on notifications.
- `afterSeq` is exclusive: `0` reads from the beginning of the retained buffer.
- `waitMs` waits briefly for new output or exit if nothing is currently
available.
- Once retained output exceeds the per-process cap, oldest chunks are dropped.
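The retention semantics above can be sketched with a std-only ring buffer. This is a simplified model under the stated rules (monotonic sequence numbers, exclusive `afterSeq`, byte-cap eviction of the oldest chunks), not the server's actual implementation:

```rust
use std::collections::VecDeque;

#[derive(Clone, Debug, PartialEq)]
struct Chunk {
    seq: u64,
    bytes: Vec<u8>,
}

struct RetainedOutput {
    chunks: VecDeque<Chunk>,
    total: usize,
    cap: usize,
    next_seq: u64,
}

impl RetainedOutput {
    fn new(cap: usize) -> Self {
        Self { chunks: VecDeque::new(), total: 0, cap, next_seq: 1 }
    }

    fn push(&mut self, bytes: Vec<u8>) {
        self.total += bytes.len();
        self.chunks.push_back(Chunk { seq: self.next_seq, bytes });
        self.next_seq += 1;
        // Evict oldest chunks once total retained bytes exceed the cap.
        while self.total > self.cap {
            if let Some(old) = self.chunks.pop_front() {
                self.total -= old.bytes.len();
            }
        }
    }

    /// `after_seq` is exclusive: 0 reads from the start of the retained buffer.
    /// Returns the chunks plus the `nextSeq` cursor for the following read.
    fn read_after(&self, after_seq: u64, max_bytes: usize) -> (Vec<Chunk>, u64) {
        let mut out = Vec::new();
        let mut budget = max_bytes;
        let mut next = after_seq + 1;
        for c in self.chunks.iter().filter(|c| c.seq > after_seq) {
            // Always make progress: take at least one chunk even if oversized.
            if c.bytes.len() > budget && !out.is_empty() {
                break;
            }
            budget = budget.saturating_sub(c.bytes.len());
            next = c.seq + 1;
            out.push(c.clone());
            if budget == 0 {
                break;
            }
        }
        (out, next)
    }
}

fn main() {
    let mut buf = RetainedOutput::new(8);
    buf.push(b"hello".to_vec()); // seq 1
    buf.push(b"world".to_vec()); // seq 2; total exceeds cap, seq 1 is dropped
    let (chunks, next) = buf.read_after(0, 65536);
    assert_eq!(chunks.len(), 1);
    assert_eq!(chunks[0].seq, 2);
    assert_eq!(next, 3);
}
```

A caller polling with the returned `nextSeq` as the next `afterSeq` never re-reads a chunk, even when `maxBytes` truncates a response mid-stream.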
### `process/terminate`
Terminates a running managed process.
Request params:
```json
{
"processId": "proc-1"
}
```
Response when the process was running:
```json
{
"running": true
}
```
If the process is unknown or has already been removed, the server responds with:
```json
{
"running": false
}
```
## Notifications
### `process/output`
Streaming output chunk from a running process.
Params:
```json
{
"processId": "proc-1",
"stream": "stdout",
"chunk": "aGVsbG8K"
}
```
Fields:
- `processId`: process identifier
- `stream`: `"stdout"` or `"stderr"` for pipe-backed processes, `"pty"` for
  PTY-backed processes
- `chunk`: base64-encoded output bytes
### `process/exited`
Final process exit notification.
Params:
```json
{
"processId": "proc-1",
"exitCode": 0
}
```
## Errors
The server returns JSON-RPC errors with these codes:
- `-32600`: invalid request
- `-32602`: invalid params
- `-32603`: internal error
Typical error cases:
- unknown method
- malformed params
- empty `argv`
- duplicate `processId`
- writes to unknown processes
- writes to non-PTY processes
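One plausible mapping of those failure cases onto the listed codes can be sketched as follows. The document does not pin individual cases to codes, so this assignment is an assumption for illustration only:

```rust
// Illustrative (assumed) mapping of failure cases to JSON-RPC error codes;
// the real server also attaches messages and optional data.
fn error_code(case: &str) -> i64 {
    match case {
        // Requests the server cannot recognize at all.
        "unknown method" => -32600, // invalid request
        // Requests with recognizable shape but unusable parameters.
        "malformed params"
        | "empty argv"
        | "duplicate processId"
        | "write to unknown process"
        | "write to non-pty process" => -32602, // invalid params
        // Everything else, e.g. spawn failures.
        _ => -32603, // internal error
    }
}

fn main() {
    assert_eq!(error_code("empty argv"), -32602);
    assert_eq!(error_code("spawn failure"), -32603);
}
```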
## Rust surface
The crate exports:
- `ExecServerClient`
- `ExecServerClientConnectOptions`
- `RemoteExecServerConnectArgs`
- `ExecServerLaunchCommand`
- `ExecServerEvent`
- `SpawnedExecServer`
- `ExecServerError`
- `ExecServerTransport`
- `spawn_local_exec_server(...)`
- protocol structs such as `ExecParams`, `ExecResponse`,
`WriteParams`, `TerminateParams`, `ExecOutputDeltaNotification`, and
`ExecExitedNotification`
- `run_main()` and `run_main_with_transport(...)`
### Binary
Run over stdio:
```text
codex-exec-server
```
Run as a websocket server:
```text
codex-exec-server --listen ws://127.0.0.1:8080
```
### Client
Connect the client to an existing server transport:
- `ExecServerClient::connect_stdio(...)`
- `ExecServerClient::connect_websocket(...)`
- `ExecServerClient::connect_in_process(...)` for a local no-transport mode
backed directly by the typed handler
Timeout behavior:
- stdio and websocket clients both enforce an initialize-handshake timeout
- websocket clients also enforce a connect timeout before the handshake begins
Events:
- `ExecServerClient::event_receiver()` yields `ExecServerEvent`
- output events include both `stream` (`stdout`, `stderr`, or `pty`) and raw
bytes
- process lifetime is tracked by server notifications such as
`process/exited`, not by a client-side process registry
Spawning a local exec-server child process is deliberately a separate step:
- `spawn_local_exec_server(...)`
## Example session
Initialize:
```json
{"id":1,"method":"initialize","params":{"clientName":"example-client"}}
{"id":1,"result":{"protocolVersion":"exec-server.v0"}}
{"method":"initialized","params":{}}
```
Start a process:
```json
{"id":2,"method":"process/start","params":{"processId":"proc-1","argv":["bash","-lc","printf 'ready\\n'; while IFS= read -r line; do printf 'echo:%s\\n' \"$line\"; done"],"cwd":"/tmp","env":{"PATH":"/usr/bin:/bin"},"tty":true,"arg0":null}}
{"id":2,"result":{"processId":"proc-1"}}
{"method":"process/output","params":{"processId":"proc-1","stream":"pty","chunk":"cmVhZHkK"}}
```
Write to the process:
```json
{"id":3,"method":"process/write","params":{"processId":"proc-1","chunk":"aGVsbG8K"}}
{"id":3,"result":{"accepted":true}}
{"method":"process/output","params":{"processId":"proc-1","stream":"pty","chunk":"ZWNobzpoZWxsbwo="}}
```
Terminate it:
```json
{"id":4,"method":"process/terminate","params":{"processId":"proc-1"}}
{"id":4,"result":{"running":true}}
{"method":"process/exited","params":{"processId":"proc-1","exitCode":0}}
```


@@ -0,0 +1,23 @@
use clap::Parser;
use codex_exec_server::ExecServerTransport;
#[derive(Debug, Parser)]
struct ExecServerArgs {
/// Transport endpoint URL. Supported values: `stdio://` (default),
/// `ws://IP:PORT`.
#[arg(
long = "listen",
value_name = "URL",
default_value = ExecServerTransport::DEFAULT_LISTEN_URL
)]
listen: ExecServerTransport,
}
#[tokio::main]
async fn main() {
let args = ExecServerArgs::parse();
if let Err(err) = codex_exec_server::run_main_with_transport(args.listen).await {
eprintln!("{err}");
std::process::exit(1);
}
}


@@ -0,0 +1,929 @@
use std::collections::HashMap;
use std::sync::Arc;
#[cfg(test)]
use std::sync::Mutex as StdMutex;
#[cfg(test)]
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicI64;
use std::sync::atomic::Ordering;
use std::time::Duration;
use codex_app_server_protocol::FsCopyParams;
use codex_app_server_protocol::FsCopyResponse;
use codex_app_server_protocol::FsCreateDirectoryParams;
use codex_app_server_protocol::FsCreateDirectoryResponse;
use codex_app_server_protocol::FsGetMetadataParams;
use codex_app_server_protocol::FsGetMetadataResponse;
use codex_app_server_protocol::FsReadDirectoryParams;
use codex_app_server_protocol::FsReadDirectoryResponse;
use codex_app_server_protocol::FsReadFileParams;
use codex_app_server_protocol::FsReadFileResponse;
use codex_app_server_protocol::FsRemoveParams;
use codex_app_server_protocol::FsRemoveResponse;
use codex_app_server_protocol::FsWriteFileParams;
use codex_app_server_protocol::FsWriteFileResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use serde::Serialize;
use serde_json::Value;
use tokio::io::AsyncRead;
use tokio::io::AsyncWrite;
use tokio::sync::Mutex;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
use tokio::task::JoinHandle;
use tokio::time::timeout;
use tokio_tungstenite::connect_async;
use tracing::debug;
use tracing::warn;
use crate::client_api::ExecServerClientConnectOptions;
use crate::client_api::ExecServerEvent;
use crate::client_api::RemoteExecServerConnectArgs;
use crate::connection::JsonRpcConnection;
use crate::connection::JsonRpcConnectionEvent;
use crate::protocol::EXEC_EXITED_METHOD;
use crate::protocol::EXEC_METHOD;
use crate::protocol::EXEC_OUTPUT_DELTA_METHOD;
use crate::protocol::EXEC_READ_METHOD;
use crate::protocol::EXEC_TERMINATE_METHOD;
use crate::protocol::EXEC_WRITE_METHOD;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::FS_COPY_METHOD;
use crate::protocol::FS_CREATE_DIRECTORY_METHOD;
use crate::protocol::FS_GET_METADATA_METHOD;
use crate::protocol::FS_READ_DIRECTORY_METHOD;
use crate::protocol::FS_READ_FILE_METHOD;
use crate::protocol::FS_REMOVE_METHOD;
use crate::protocol::FS_WRITE_FILE_METHOD;
use crate::protocol::INITIALIZE_METHOD;
use crate::protocol::INITIALIZED_METHOD;
use crate::protocol::InitializeParams;
use crate::protocol::InitializeResponse;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateParams;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteParams;
use crate::protocol::WriteResponse;
use crate::server::ExecServerHandler;
use crate::server::ExecServerOutboundMessage;
use crate::server::ExecServerServerNotification;
impl Default for ExecServerClientConnectOptions {
fn default() -> Self {
Self {
client_name: "codex-core".to_string(),
initialize_timeout: INITIALIZE_TIMEOUT,
}
}
}
impl From<RemoteExecServerConnectArgs> for ExecServerClientConnectOptions {
fn from(value: RemoteExecServerConnectArgs) -> Self {
Self {
client_name: value.client_name,
initialize_timeout: value.initialize_timeout,
}
}
}
const CONNECT_TIMEOUT: Duration = Duration::from_secs(10);
const INITIALIZE_TIMEOUT: Duration = Duration::from_secs(10);
impl RemoteExecServerConnectArgs {
pub fn new(websocket_url: String, client_name: String) -> Self {
Self {
websocket_url,
client_name,
connect_timeout: CONNECT_TIMEOUT,
initialize_timeout: INITIALIZE_TIMEOUT,
}
}
}
#[cfg(test)]
#[derive(Debug, Clone, PartialEq, Eq)]
struct ExecServerOutput {
stream: crate::protocol::ExecOutputStream,
chunk: Vec<u8>,
}
#[cfg(test)]
struct ExecServerProcess {
process_id: String,
output_rx: broadcast::Receiver<ExecServerOutput>,
status: Arc<RemoteProcessStatus>,
client: ExecServerClient,
}
#[cfg(test)]
impl ExecServerProcess {
fn output_receiver(&self) -> broadcast::Receiver<ExecServerOutput> {
self.output_rx.resubscribe()
}
fn has_exited(&self) -> bool {
self.status.has_exited()
}
fn exit_code(&self) -> Option<i32> {
self.status.exit_code()
}
fn terminate(&self) {
let client = self.client.clone();
let process_id = self.process_id.clone();
tokio::spawn(async move {
let _ = client.terminate_session(&process_id).await;
});
}
}
#[cfg(test)]
struct RemoteProcessStatus {
exited: AtomicBool,
exit_code: StdMutex<Option<i32>>,
}
#[cfg(test)]
impl RemoteProcessStatus {
fn new() -> Self {
Self {
exited: AtomicBool::new(false),
exit_code: StdMutex::new(None),
}
}
fn has_exited(&self) -> bool {
self.exited.load(Ordering::SeqCst)
}
fn exit_code(&self) -> Option<i32> {
self.exit_code.lock().ok().and_then(|guard| *guard)
}
fn mark_exited(&self, exit_code: Option<i32>) {
self.exited.store(true, Ordering::SeqCst);
if let Ok(mut guard) = self.exit_code.lock() {
*guard = exit_code;
}
}
}
enum PendingRequest {
Initialize(oneshot::Sender<Result<InitializeResponse, JSONRPCErrorError>>),
Exec(oneshot::Sender<Result<ExecResponse, JSONRPCErrorError>>),
Read(oneshot::Sender<Result<ReadResponse, JSONRPCErrorError>>),
Write(oneshot::Sender<Result<WriteResponse, JSONRPCErrorError>>),
Terminate(oneshot::Sender<Result<TerminateResponse, JSONRPCErrorError>>),
FsReadFile(oneshot::Sender<Result<FsReadFileResponse, JSONRPCErrorError>>),
FsWriteFile(oneshot::Sender<Result<FsWriteFileResponse, JSONRPCErrorError>>),
FsCreateDirectory(oneshot::Sender<Result<FsCreateDirectoryResponse, JSONRPCErrorError>>),
FsGetMetadata(oneshot::Sender<Result<FsGetMetadataResponse, JSONRPCErrorError>>),
FsReadDirectory(oneshot::Sender<Result<FsReadDirectoryResponse, JSONRPCErrorError>>),
FsRemove(oneshot::Sender<Result<FsRemoveResponse, JSONRPCErrorError>>),
FsCopy(oneshot::Sender<Result<FsCopyResponse, JSONRPCErrorError>>),
}
impl PendingRequest {
fn resolve_json(self, result: Value) -> Result<(), ExecServerError> {
match self {
PendingRequest::Initialize(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::Exec(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::Read(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::Write(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::Terminate(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::FsReadFile(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::FsWriteFile(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::FsCreateDirectory(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::FsGetMetadata(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::FsReadDirectory(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::FsRemove(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
PendingRequest::FsCopy(tx) => {
let _ = tx.send(Ok(serde_json::from_value(result)?));
}
}
Ok(())
}
fn resolve_error(self, error: JSONRPCErrorError) {
match self {
PendingRequest::Initialize(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::Exec(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::Read(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::Write(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::Terminate(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::FsReadFile(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::FsWriteFile(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::FsCreateDirectory(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::FsGetMetadata(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::FsReadDirectory(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::FsRemove(tx) => {
let _ = tx.send(Err(error));
}
PendingRequest::FsCopy(tx) => {
let _ = tx.send(Err(error));
}
}
}
}
enum ClientBackend {
JsonRpc {
write_tx: mpsc::Sender<JSONRPCMessage>,
},
InProcess {
handler: Arc<Mutex<ExecServerHandler>>,
},
}
struct Inner {
backend: ClientBackend,
pending: Mutex<HashMap<RequestId, PendingRequest>>,
events_tx: broadcast::Sender<ExecServerEvent>,
next_request_id: AtomicI64,
transport_tasks: Vec<JoinHandle<()>>,
reader_task: JoinHandle<()>,
}
impl Drop for Inner {
fn drop(&mut self) {
if let ClientBackend::InProcess { handler } = &self.backend
&& let Ok(handle) = tokio::runtime::Handle::try_current()
{
let handler = Arc::clone(handler);
handle.spawn(async move {
handler.lock().await.shutdown().await;
});
}
for task in &self.transport_tasks {
task.abort();
}
self.reader_task.abort();
}
}
#[derive(Clone)]
pub struct ExecServerClient {
inner: Arc<Inner>,
}
#[derive(Debug, thiserror::Error)]
pub enum ExecServerError {
#[error("failed to spawn exec-server: {0}")]
Spawn(#[source] std::io::Error),
#[error("timed out connecting to exec-server websocket `{url}` after {timeout:?}")]
WebSocketConnectTimeout { url: String, timeout: Duration },
#[error("failed to connect to exec-server websocket `{url}`: {source}")]
WebSocketConnect {
url: String,
#[source]
source: tokio_tungstenite::tungstenite::Error,
},
#[error("timed out waiting for exec-server initialize handshake after {timeout:?}")]
InitializeTimedOut { timeout: Duration },
#[error("exec-server transport closed")]
Closed,
#[error("failed to serialize or deserialize exec-server JSON: {0}")]
Json(#[from] serde_json::Error),
#[error("exec-server protocol error: {0}")]
Protocol(String),
#[error("exec-server rejected request ({code}): {message}")]
Server { code: i64, message: String },
}
impl ExecServerClient {
pub async fn connect_in_process(
options: ExecServerClientConnectOptions,
) -> Result<Self, ExecServerError> {
let (outbound_tx, mut outgoing_rx) = mpsc::channel::<ExecServerOutboundMessage>(256);
let handler = Arc::new(Mutex::new(ExecServerHandler::new(outbound_tx)));
let inner = Arc::new_cyclic(|weak| {
let weak = weak.clone();
let reader_task = tokio::spawn(async move {
while let Some(message) = outgoing_rx.recv().await {
if let Some(inner) = weak.upgrade()
&& let Err(err) = handle_in_process_outbound_message(&inner, message).await
{
warn!(
"in-process exec-server client closing after unexpected response: {err}"
);
handle_transport_shutdown(&inner).await;
return;
}
}
if let Some(inner) = weak.upgrade() {
handle_transport_shutdown(&inner).await;
}
});
Inner {
backend: ClientBackend::InProcess { handler },
pending: Mutex::new(HashMap::new()),
events_tx: broadcast::channel(256).0,
next_request_id: AtomicI64::new(1),
transport_tasks: Vec::new(),
reader_task,
}
});
let client = Self { inner };
client.initialize(options).await?;
Ok(client)
}
pub async fn connect_stdio<R, W>(
stdin: W,
stdout: R,
options: ExecServerClientConnectOptions,
) -> Result<Self, ExecServerError>
where
R: AsyncRead + Unpin + Send + 'static,
W: AsyncWrite + Unpin + Send + 'static,
{
Self::connect(
JsonRpcConnection::from_stdio(stdout, stdin, "exec-server stdio".to_string()),
options,
)
.await
}
pub async fn connect_websocket(
args: RemoteExecServerConnectArgs,
) -> Result<Self, ExecServerError> {
let websocket_url = args.websocket_url.clone();
let connect_timeout = args.connect_timeout;
let (stream, _) = timeout(connect_timeout, connect_async(websocket_url.as_str()))
.await
.map_err(|_| ExecServerError::WebSocketConnectTimeout {
url: websocket_url.clone(),
timeout: connect_timeout,
})?
.map_err(|source| ExecServerError::WebSocketConnect {
url: websocket_url.clone(),
source,
})?;
Self::connect(
JsonRpcConnection::from_websocket(
stream,
format!("exec-server websocket {websocket_url}"),
),
args.into(),
)
.await
}
async fn connect(
connection: JsonRpcConnection,
options: ExecServerClientConnectOptions,
) -> Result<Self, ExecServerError> {
let (write_tx, mut incoming_rx, transport_tasks) = connection.into_parts();
let inner = Arc::new_cyclic(|weak| {
let weak = weak.clone();
let reader_task = tokio::spawn(async move {
while let Some(event) = incoming_rx.recv().await {
match event {
JsonRpcConnectionEvent::Message(message) => {
if let Some(inner) = weak.upgrade()
&& let Err(err) = handle_server_message(&inner, message).await
{
warn!("exec-server client closing after protocol error: {err}");
handle_transport_shutdown(&inner).await;
return;
}
}
JsonRpcConnectionEvent::Disconnected { reason } => {
if let Some(reason) = reason {
warn!("exec-server client transport disconnected: {reason}");
}
if let Some(inner) = weak.upgrade() {
handle_transport_shutdown(&inner).await;
}
return;
}
}
}
if let Some(inner) = weak.upgrade() {
handle_transport_shutdown(&inner).await;
}
});
Inner {
backend: ClientBackend::JsonRpc { write_tx },
pending: Mutex::new(HashMap::new()),
events_tx: broadcast::channel(256).0,
next_request_id: AtomicI64::new(1),
transport_tasks,
reader_task,
}
});
let client = Self { inner };
client.initialize(options).await?;
Ok(client)
}
pub fn event_receiver(&self) -> broadcast::Receiver<ExecServerEvent> {
self.inner.events_tx.subscribe()
}
#[cfg(test)]
async fn start_process(
&self,
params: ExecParams,
) -> Result<ExecServerProcess, ExecServerError> {
let response = self.exec(params).await?;
let process_id = response.process_id;
let status = Arc::new(RemoteProcessStatus::new());
let (output_tx, output_rx) = broadcast::channel(256);
let mut events_rx = self.event_receiver();
let status_watcher = Arc::clone(&status);
let watch_process_id = process_id.clone();
tokio::spawn(async move {
while let Ok(event) = events_rx.recv().await {
match event {
ExecServerEvent::OutputDelta(notification)
if notification.process_id == watch_process_id =>
{
let _ = output_tx.send(ExecServerOutput {
stream: notification.stream,
chunk: notification.chunk.into_inner(),
});
}
ExecServerEvent::Exited(notification)
if notification.process_id == watch_process_id =>
{
status_watcher.mark_exited(Some(notification.exit_code));
break;
}
ExecServerEvent::OutputDelta(_) | ExecServerEvent::Exited(_) => {}
}
}
});
Ok(ExecServerProcess {
process_id,
output_rx,
status,
client: self.clone(),
})
}
pub async fn exec(&self, params: ExecParams) -> Result<ExecResponse, ExecServerError> {
self.request_exec(params).await
}
pub async fn read(&self, params: ReadParams) -> Result<ReadResponse, ExecServerError> {
self.request_read(params).await
}
pub async fn write(
&self,
process_id: &str,
chunk: Vec<u8>,
) -> Result<WriteResponse, ExecServerError> {
self.write_process(WriteParams {
process_id: process_id.to_string(),
chunk: chunk.into(),
})
.await
}
pub async fn terminate(&self, process_id: &str) -> Result<TerminateResponse, ExecServerError> {
self.terminate_session(process_id).await
}
pub async fn fs_read_file(
&self,
params: FsReadFileParams,
) -> Result<FsReadFileResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.fs_read_file(params).await);
}
self.send_pending_request(FS_READ_FILE_METHOD, &params, PendingRequest::FsReadFile)
.await
}
pub async fn fs_write_file(
&self,
params: FsWriteFileParams,
) -> Result<FsWriteFileResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.fs_write_file(params).await);
}
self.send_pending_request(FS_WRITE_FILE_METHOD, &params, PendingRequest::FsWriteFile)
.await
}
pub async fn fs_create_directory(
&self,
params: FsCreateDirectoryParams,
) -> Result<FsCreateDirectoryResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.fs_create_directory(params).await);
}
self.send_pending_request(
FS_CREATE_DIRECTORY_METHOD,
&params,
PendingRequest::FsCreateDirectory,
)
.await
}
pub async fn fs_get_metadata(
&self,
params: FsGetMetadataParams,
) -> Result<FsGetMetadataResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.fs_get_metadata(params).await);
}
self.send_pending_request(
FS_GET_METADATA_METHOD,
&params,
PendingRequest::FsGetMetadata,
)
.await
}
pub async fn fs_read_directory(
&self,
params: FsReadDirectoryParams,
) -> Result<FsReadDirectoryResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.fs_read_directory(params).await);
}
self.send_pending_request(
FS_READ_DIRECTORY_METHOD,
&params,
PendingRequest::FsReadDirectory,
)
.await
}
pub async fn fs_remove(
&self,
params: FsRemoveParams,
) -> Result<FsRemoveResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.fs_remove(params).await);
}
self.send_pending_request(FS_REMOVE_METHOD, &params, PendingRequest::FsRemove)
.await
}
pub async fn fs_copy(&self, params: FsCopyParams) -> Result<FsCopyResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.fs_copy(params).await);
}
self.send_pending_request(FS_COPY_METHOD, &params, PendingRequest::FsCopy)
.await
}
async fn initialize(
&self,
options: ExecServerClientConnectOptions,
) -> Result<(), ExecServerError> {
let ExecServerClientConnectOptions {
client_name,
initialize_timeout,
} = options;
timeout(initialize_timeout, async {
let _: InitializeResponse = self
.request_initialize(InitializeParams { client_name })
.await?;
self.notify(INITIALIZED_METHOD, &serde_json::json!({}))
.await
})
.await
.map_err(|_| ExecServerError::InitializeTimedOut {
timeout: initialize_timeout,
})?
}
async fn request_exec(&self, params: ExecParams) -> Result<ExecResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.exec(params).await);
}
self.send_pending_request(EXEC_METHOD, &params, PendingRequest::Exec)
.await
}
async fn write_process(&self, params: WriteParams) -> Result<WriteResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.write(params).await);
}
self.send_pending_request(EXEC_WRITE_METHOD, &params, PendingRequest::Write)
.await
}
async fn request_read(&self, params: ReadParams) -> Result<ReadResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.read(params).await);
}
self.send_pending_request(EXEC_READ_METHOD, &params, PendingRequest::Read)
.await
}
async fn terminate_session(
&self,
process_id: &str,
) -> Result<TerminateResponse, ExecServerError> {
let params = TerminateParams {
process_id: process_id.to_string(),
};
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.terminate(params).await);
}
self.send_pending_request(EXEC_TERMINATE_METHOD, &params, PendingRequest::Terminate)
.await
}
async fn notify<P: Serialize>(&self, method: &str, params: &P) -> Result<(), ExecServerError> {
match &self.inner.backend {
ClientBackend::JsonRpc { write_tx } => {
let params = serde_json::to_value(params)?;
write_tx
.send(JSONRPCMessage::Notification(JSONRPCNotification {
method: method.to_string(),
params: Some(params),
}))
.await
.map_err(|_| ExecServerError::Closed)
}
ClientBackend::InProcess { handler } => match method {
INITIALIZED_METHOD => handler
.lock()
.await
.initialized()
.map_err(ExecServerError::Protocol),
other => Err(ExecServerError::Protocol(format!(
"unsupported in-process notification method `{other}`"
))),
},
}
}
async fn request_initialize(
&self,
params: InitializeParams,
) -> Result<InitializeResponse, ExecServerError> {
if let ClientBackend::InProcess { handler } = &self.inner.backend {
return server_result_to_client(handler.lock().await.initialize());
}
self.send_pending_request(INITIALIZE_METHOD, &params, PendingRequest::Initialize)
.await
}
fn next_request_id(&self) -> RequestId {
RequestId::Integer(self.inner.next_request_id.fetch_add(1, Ordering::SeqCst))
}
async fn send_pending_request<P, T>(
&self,
method: &str,
params: &P,
build_pending: impl FnOnce(oneshot::Sender<Result<T, JSONRPCErrorError>>) -> PendingRequest,
) -> Result<T, ExecServerError>
where
P: Serialize,
{
let request_id = self.next_request_id();
let (response_tx, response_rx) = oneshot::channel();
self.inner
.pending
.lock()
.await
.insert(request_id.clone(), build_pending(response_tx));
let ClientBackend::JsonRpc { write_tx } = &self.inner.backend else {
unreachable!("in-process requests return before JSON-RPC setup");
};
let send_result = send_jsonrpc_request(write_tx, request_id.clone(), method, params).await;
self.finish_request(request_id, send_result, response_rx)
.await
}
async fn finish_request<T>(
&self,
request_id: RequestId,
send_result: Result<(), ExecServerError>,
response_rx: oneshot::Receiver<Result<T, JSONRPCErrorError>>,
) -> Result<T, ExecServerError> {
if let Err(err) = send_result {
self.inner.pending.lock().await.remove(&request_id);
return Err(err);
}
receive_typed_response(response_rx).await
}
}
async fn receive_typed_response<T>(
response_rx: oneshot::Receiver<Result<T, JSONRPCErrorError>>,
) -> Result<T, ExecServerError> {
let result = response_rx.await.map_err(|_| ExecServerError::Closed)?;
match result {
Ok(response) => Ok(response),
Err(error) => Err(ExecServerError::Server {
code: error.code,
message: error.message,
}),
}
}
fn server_result_to_client<T>(result: Result<T, JSONRPCErrorError>) -> Result<T, ExecServerError> {
match result {
Ok(response) => Ok(response),
Err(error) => Err(ExecServerError::Server {
code: error.code,
message: error.message,
}),
}
}
async fn send_jsonrpc_request<P: Serialize>(
write_tx: &mpsc::Sender<JSONRPCMessage>,
request_id: RequestId,
method: &str,
params: &P,
) -> Result<(), ExecServerError> {
let params = serde_json::to_value(params)?;
write_tx
.send(JSONRPCMessage::Request(JSONRPCRequest {
id: request_id,
method: method.to_string(),
params: Some(params),
trace: None,
}))
.await
.map_err(|_| ExecServerError::Closed)
}
async fn handle_in_process_outbound_message(
inner: &Arc<Inner>,
message: ExecServerOutboundMessage,
) -> Result<(), ExecServerError> {
match message {
ExecServerOutboundMessage::Response { .. } | ExecServerOutboundMessage::Error { .. } => {
return Err(ExecServerError::Protocol(
"unexpected in-process RPC response".to_string(),
));
}
ExecServerOutboundMessage::Notification(notification) => {
handle_in_process_notification(inner, notification).await;
}
}
Ok(())
}
async fn handle_in_process_notification(
inner: &Arc<Inner>,
notification: ExecServerServerNotification,
) {
match notification {
ExecServerServerNotification::OutputDelta(params) => {
let _ = inner.events_tx.send(ExecServerEvent::OutputDelta(params));
}
ExecServerServerNotification::Exited(params) => {
let _ = inner.events_tx.send(ExecServerEvent::Exited(params));
}
}
}
async fn handle_server_message(
inner: &Arc<Inner>,
message: JSONRPCMessage,
) -> Result<(), ExecServerError> {
match message {
JSONRPCMessage::Response(JSONRPCResponse { id, result }) => {
if let Some(pending) = inner.pending.lock().await.remove(&id) {
pending.resolve_json(result)?;
}
}
JSONRPCMessage::Error(JSONRPCError { id, error }) => {
if let Some(pending) = inner.pending.lock().await.remove(&id) {
pending.resolve_error(error);
}
}
JSONRPCMessage::Notification(notification) => {
handle_server_notification(inner, notification).await?;
}
JSONRPCMessage::Request(request) => {
return Err(ExecServerError::Protocol(format!(
"unexpected exec-server request from remote server: {}",
request.method
)));
}
}
Ok(())
}
async fn handle_server_notification(
inner: &Arc<Inner>,
notification: JSONRPCNotification,
) -> Result<(), ExecServerError> {
match notification.method.as_str() {
EXEC_OUTPUT_DELTA_METHOD => {
let params: ExecOutputDeltaNotification =
serde_json::from_value(notification.params.unwrap_or(Value::Null))?;
let _ = inner.events_tx.send(ExecServerEvent::OutputDelta(params));
}
EXEC_EXITED_METHOD => {
let params: ExecExitedNotification =
serde_json::from_value(notification.params.unwrap_or(Value::Null))?;
let _ = inner.events_tx.send(ExecServerEvent::Exited(params));
}
other => {
debug!("ignoring unknown exec-server notification: {other}");
}
}
Ok(())
}
async fn handle_transport_shutdown(inner: &Arc<Inner>) {
let pending = {
let mut pending = inner.pending.lock().await;
pending
.drain()
.map(|(_, pending)| pending)
.collect::<Vec<_>>()
};
for pending in pending {
pending.resolve_error(JSONRPCErrorError {
code: -32000,
data: None,
message: "exec-server transport closed".to_string(),
});
}
}
#[cfg(test)]
mod tests;


@@ -0,0 +1,893 @@
use std::collections::HashMap;
use std::time::Duration;
use pretty_assertions::assert_eq;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
use tokio::time::timeout;
use super::ExecServerClient;
use super::ExecServerClientConnectOptions;
use super::ExecServerError;
use super::ExecServerOutput;
use crate::protocol::EXEC_METHOD;
use crate::protocol::EXEC_OUTPUT_DELTA_METHOD;
use crate::protocol::EXEC_TERMINATE_METHOD;
use crate::protocol::ExecOutputStream;
use crate::protocol::ExecParams;
use crate::protocol::INITIALIZE_METHOD;
use crate::protocol::INITIALIZED_METHOD;
use crate::protocol::PROTOCOL_VERSION;
use crate::protocol::ReadParams;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
fn test_options() -> ExecServerClientConnectOptions {
ExecServerClientConnectOptions {
client_name: "test-client".to_string(),
initialize_timeout: Duration::from_secs(1),
}
}
async fn read_jsonrpc_line<R>(lines: &mut tokio::io::Lines<BufReader<R>>) -> JSONRPCMessage
where
R: tokio::io::AsyncRead + Unpin,
{
let next_line = timeout(Duration::from_secs(1), lines.next_line()).await;
let line_result = match next_line {
Ok(line_result) => line_result,
Err(err) => panic!("timed out waiting for JSON-RPC line: {err}"),
};
let maybe_line = match line_result {
Ok(maybe_line) => maybe_line,
Err(err) => panic!("failed to read JSON-RPC line: {err}"),
};
let line = match maybe_line {
Some(line) => line,
None => panic!("server connection closed before JSON-RPC line arrived"),
};
match serde_json::from_str::<JSONRPCMessage>(&line) {
Ok(message) => message,
Err(err) => panic!("failed to parse JSON-RPC line: {err}"),
}
}
async fn write_jsonrpc_line<W>(writer: &mut W, message: JSONRPCMessage)
where
W: tokio::io::AsyncWrite + Unpin,
{
let encoded = match serde_json::to_string(&message) {
Ok(encoded) => encoded,
Err(err) => panic!("failed to encode JSON-RPC message: {err}"),
};
if let Err(err) = writer.write_all(format!("{encoded}\n").as_bytes()).await {
panic!("failed to write JSON-RPC line: {err}");
}
}
#[tokio::test]
async fn connect_stdio_performs_initialize_handshake() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
let server = tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(request) = initialize else {
panic!("expected initialize request");
};
assert_eq!(request.method, INITIALIZE_METHOD);
assert_eq!(
request.params,
Some(serde_json::json!({ "clientName": "test-client" }))
);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id: request.id,
result: serde_json::json!({ "protocolVersion": PROTOCOL_VERSION }),
}),
)
.await;
let initialized = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Notification(JSONRPCNotification { method, params }) = initialized
else {
panic!("expected initialized notification");
};
assert_eq!(method, INITIALIZED_METHOD);
assert_eq!(params, Some(serde_json::json!({})));
});
let client = ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await;
if let Err(err) = client {
panic!("failed to connect test client: {err}");
}
if let Err(err) = server.await {
panic!("server task failed: {err}");
}
}
#[tokio::test]
async fn connect_in_process_starts_processes_without_jsonrpc_transport() {
let client = match ExecServerClient::connect_in_process(test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect in-process client: {err}"),
};
let process = match client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["printf".to_string(), "hello".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
})
.await
{
Ok(process) => process,
Err(err) => panic!("failed to start in-process child: {err}"),
};
let mut output = process.output_receiver();
let output = timeout(Duration::from_secs(1), output.recv())
.await
.unwrap_or_else(|err| panic!("timed out waiting for process output: {err}"))
.unwrap_or_else(|err| panic!("failed to receive process output: {err}"));
assert_eq!(
output,
ExecServerOutput {
stream: crate::protocol::ExecOutputStream::Stdout,
chunk: b"hello".to_vec(),
}
);
}
#[tokio::test]
async fn connect_in_process_read_returns_retained_output_and_exit_state() {
let client = match ExecServerClient::connect_in_process(test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect in-process client: {err}"),
};
let response = match client
.exec(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["printf".to_string(), "hello".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
})
.await
{
Ok(response) => response,
Err(err) => panic!("failed to start in-process child: {err}"),
};
let process_id = response.process_id.clone();
let read = match client
.read(ReadParams {
process_id: process_id.clone(),
after_seq: None,
max_bytes: None,
wait_ms: Some(1000),
})
.await
{
Ok(read) => read,
Err(err) => panic!("failed to read in-process child output: {err}"),
};
assert_eq!(read.chunks.len(), 1);
assert_eq!(read.chunks[0].seq, 1);
assert_eq!(read.chunks[0].stream, ExecOutputStream::Stdout);
assert_eq!(read.chunks[0].chunk.clone().into_inner(), b"hello".to_vec());
assert_eq!(read.next_seq, 2);
let read = if read.exited {
read
} else {
match client
.read(ReadParams {
process_id,
after_seq: Some(read.next_seq - 1),
max_bytes: None,
wait_ms: Some(1000),
})
.await
{
Ok(read) => read,
Err(err) => panic!("failed to wait for in-process child exit: {err}"),
}
};
assert!(read.exited);
assert_eq!(read.exit_code, Some(0));
}
#[tokio::test]
async fn connect_in_process_rejects_invalid_exec_params_from_handler() {
let client = match ExecServerClient::connect_in_process(test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect in-process client: {err}"),
};
let result = client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: Vec::new(),
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
})
.await;
match result {
Err(ExecServerError::Server { code, message }) => {
assert_eq!(code, -32602);
assert_eq!(message, "argv must not be empty");
}
Err(err) => panic!("unexpected in-process exec failure: {err}"),
Ok(_) => panic!("expected invalid params error"),
}
}
#[tokio::test]
async fn connect_in_process_rejects_writes_to_unknown_processes() {
let client = match ExecServerClient::connect_in_process(test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect in-process client: {err}"),
};
let result = client
.write_process(crate::protocol::WriteParams {
process_id: "missing".to_string(),
chunk: b"input".to_vec().into(),
})
.await;
match result {
Err(ExecServerError::Server { code, message }) => {
assert_eq!(code, -32600);
assert_eq!(message, "unknown process id missing");
}
Err(err) => panic!("unexpected in-process write failure: {err}"),
Ok(_) => panic!("expected unknown process error"),
}
}
#[tokio::test]
async fn connect_in_process_terminate_marks_process_exited() {
let client = match ExecServerClient::connect_in_process(test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect in-process client: {err}"),
};
let process = match client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["sleep".to_string(), "30".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
})
.await
{
Ok(process) => process,
Err(err) => panic!("failed to start in-process child: {err}"),
};
if let Err(err) = client.terminate_session(&process.process_id).await {
panic!("failed to terminate in-process child: {err}");
}
timeout(Duration::from_secs(2), async {
loop {
if process.has_exited() {
break;
}
tokio::time::sleep(Duration::from_millis(10)).await;
}
})
.await
.unwrap_or_else(|err| panic!("timed out waiting for in-process child to exit: {err}"));
assert!(process.has_exited());
}
#[tokio::test]
async fn dropping_in_process_client_terminates_running_processes() {
let marker_path = std::env::temp_dir().join(format!(
"codex-exec-server-inprocess-drop-{}-{}",
std::process::id(),
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.expect("system time")
.as_nanos()
));
let _ = std::fs::remove_file(&marker_path);
{
let client = match ExecServerClient::connect_in_process(test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect in-process client: {err}"),
};
let _ = client
.exec(ExecParams {
process_id: "proc-1".to_string(),
argv: vec![
"/bin/sh".to_string(),
"-c".to_string(),
format!("sleep 2; printf dropped > {}", marker_path.display()),
],
cwd: std::env::current_dir().expect("cwd"),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
})
.await
.unwrap_or_else(|err| panic!("failed to start in-process child: {err}"));
}
tokio::time::sleep(Duration::from_secs(3)).await;
assert!(
!marker_path.exists(),
"dropping the in-process client should terminate managed children"
);
let _ = std::fs::remove_file(&marker_path);
}
#[tokio::test]
async fn connect_stdio_returns_initialize_errors() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(request) = initialize else {
panic!("expected initialize request");
};
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Error(JSONRPCError {
id: request.id,
error: JSONRPCErrorError {
code: -32600,
message: "rejected".to_string(),
data: None,
},
}),
)
.await;
});
let result = ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await;
match result {
Err(ExecServerError::Server { code, message }) => {
assert_eq!(code, -32600);
assert_eq!(message, "rejected");
}
Err(err) => panic!("unexpected initialize failure: {err}"),
Ok(_) => panic!("expected initialize failure"),
}
}
#[tokio::test]
async fn start_process_cleans_up_registered_process_after_request_error() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(initialize_request) = initialize else {
panic!("expected initialize request");
};
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id: initialize_request.id,
result: serde_json::json!({ "protocolVersion": PROTOCOL_VERSION }),
}),
)
.await;
let initialized = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Notification(notification) = initialized else {
panic!("expected initialized notification");
};
assert_eq!(notification.method, INITIALIZED_METHOD);
let exec_request = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(JSONRPCRequest { id, method, .. }) = exec_request else {
panic!("expected exec request");
};
assert_eq!(method, EXEC_METHOD);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Error(JSONRPCError {
id,
error: JSONRPCErrorError {
code: -32600,
message: "duplicate process".to_string(),
data: None,
},
}),
)
.await;
});
let client =
match ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect test client: {err}"),
};
let result = client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
})
.await;
match result {
Err(ExecServerError::Server { code, message }) => {
assert_eq!(code, -32600);
assert_eq!(message, "duplicate process");
}
Err(err) => panic!("unexpected start_process failure: {err}"),
Ok(_) => panic!("expected start_process failure"),
}
assert!(
client.inner.pending.lock().await.is_empty(),
"failed requests should not leave pending request state behind"
);
}
#[tokio::test]
async fn connect_stdio_times_out_during_initialize_handshake() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (_server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let _ = read_jsonrpc_line(&mut lines).await;
tokio::time::sleep(Duration::from_millis(200)).await;
});
let result = ExecServerClient::connect_stdio(
client_stdin,
client_stdout,
ExecServerClientConnectOptions {
client_name: "test-client".to_string(),
initialize_timeout: Duration::from_millis(25),
},
)
.await;
match result {
Err(ExecServerError::InitializeTimedOut { timeout }) => {
assert_eq!(timeout, Duration::from_millis(25));
}
Err(err) => panic!("unexpected initialize timeout failure: {err}"),
Ok(_) => panic!("expected initialize timeout"),
}
}
#[tokio::test]
async fn start_process_preserves_output_stream_metadata() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(initialize_request) = initialize else {
panic!("expected initialize request");
};
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id: initialize_request.id,
result: serde_json::json!({ "protocolVersion": PROTOCOL_VERSION }),
}),
)
.await;
let initialized = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Notification(notification) = initialized else {
panic!("expected initialized notification");
};
assert_eq!(notification.method, INITIALIZED_METHOD);
let exec_request = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(JSONRPCRequest { id, method, .. }) = exec_request else {
panic!("expected exec request");
};
assert_eq!(method, EXEC_METHOD);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id,
result: serde_json::json!({ "processId": "proc-1" }),
}),
)
.await;
tokio::time::sleep(Duration::from_millis(25)).await;
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Notification(JSONRPCNotification {
method: EXEC_OUTPUT_DELTA_METHOD.to_string(),
params: Some(serde_json::json!({
"processId": "proc-1",
"stream": "stderr",
"chunk": "ZXJyb3IK"
})),
}),
)
.await;
tokio::time::sleep(Duration::from_millis(100)).await;
});
let client =
match ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect test client: {err}"),
};
let process = match client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
})
.await
{
Ok(process) => process,
Err(err) => panic!("failed to start process: {err}"),
};
let mut output = process.output_receiver();
let output = timeout(Duration::from_secs(1), output.recv())
.await
.unwrap_or_else(|err| panic!("timed out waiting for process output: {err}"))
.unwrap_or_else(|err| panic!("failed to receive process output: {err}"));
assert_eq!(output.stream, ExecOutputStream::Stderr);
assert_eq!(output.chunk, b"error\n".to_vec());
}
#[tokio::test]
async fn terminate_does_not_mark_process_exited_before_exit_notification() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(initialize_request) = initialize else {
panic!("expected initialize request");
};
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id: initialize_request.id,
result: serde_json::json!({ "protocolVersion": PROTOCOL_VERSION }),
}),
)
.await;
let initialized = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Notification(notification) = initialized else {
panic!("expected initialized notification");
};
assert_eq!(notification.method, INITIALIZED_METHOD);
let exec_request = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(JSONRPCRequest { id, method, .. }) = exec_request else {
panic!("expected exec request");
};
assert_eq!(method, EXEC_METHOD);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id,
result: serde_json::json!({ "processId": "proc-1" }),
}),
)
.await;
let terminate_request = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(JSONRPCRequest { id, method, .. }) = terminate_request else {
panic!("expected terminate request");
};
assert_eq!(method, EXEC_TERMINATE_METHOD);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id,
result: serde_json::json!({ "running": true }),
}),
)
.await;
tokio::time::sleep(Duration::from_millis(100)).await;
});
let client =
match ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect test client: {err}"),
};
let process = match client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
})
.await
{
Ok(process) => process,
Err(err) => panic!("failed to start process: {err}"),
};
process.terminate();
tokio::time::sleep(Duration::from_millis(25)).await;
assert!(!process.has_exited(), "terminate should not imply exit");
assert_eq!(process.exit_code(), None);
}
#[tokio::test]
async fn start_process_uses_protocol_process_ids() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(initialize_request) = initialize else {
panic!("expected initialize request");
};
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id: initialize_request.id,
result: serde_json::json!({ "protocolVersion": PROTOCOL_VERSION }),
}),
)
.await;
let initialized = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Notification(notification) = initialized else {
panic!("expected initialized notification");
};
assert_eq!(notification.method, INITIALIZED_METHOD);
let exec_request = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(JSONRPCRequest { id, method, .. }) = exec_request else {
panic!("expected exec request");
};
assert_eq!(method, EXEC_METHOD);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id,
result: serde_json::json!({ "processId": "other-proc" }),
}),
)
.await;
});
let client =
match ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect test client: {err}"),
};
let process = match client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
})
.await
{
Ok(process) => process,
Err(err) => panic!("failed to start process: {err}"),
};
assert_eq!(process.process_id, "other-proc");
}
#[tokio::test]
async fn start_process_routes_output_for_protocol_process_ids() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(initialize_request) = initialize else {
panic!("expected initialize request");
};
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id: initialize_request.id,
result: serde_json::json!({ "protocolVersion": PROTOCOL_VERSION }),
}),
)
.await;
let initialized = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Notification(notification) = initialized else {
panic!("expected initialized notification");
};
assert_eq!(notification.method, INITIALIZED_METHOD);
let exec_request = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(JSONRPCRequest { id, method, .. }) = exec_request else {
panic!("expected exec request");
};
assert_eq!(method, EXEC_METHOD);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id,
result: serde_json::json!({ "processId": "proc-1" }),
}),
)
.await;
tokio::time::sleep(Duration::from_millis(25)).await;
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Notification(JSONRPCNotification {
method: EXEC_OUTPUT_DELTA_METHOD.to_string(),
params: Some(serde_json::json!({
"processId": "proc-1",
"stream": "stdout",
"chunk": "YWxpdmUK"
})),
}),
)
.await;
});
let client =
match ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect test client: {err}"),
};
let first_process = match client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
})
.await
{
Ok(process) => process,
Err(err) => panic!("failed to start first process: {err}"),
};
let mut output = first_process.output_receiver();
let output = timeout(Duration::from_secs(1), output.recv())
.await
.unwrap_or_else(|err| panic!("timed out waiting for process output: {err}"))
.unwrap_or_else(|err| panic!("failed to receive process output: {err}"));
assert_eq!(output.stream, ExecOutputStream::Stdout);
assert_eq!(output.chunk, b"alive\n".to_vec());
}
#[tokio::test]
async fn transport_shutdown_marks_processes_exited_without_exit_codes() {
let (client_stdin, server_reader) = tokio::io::duplex(4096);
let (mut server_writer, client_stdout) = tokio::io::duplex(4096);
tokio::spawn(async move {
let mut lines = BufReader::new(server_reader).lines();
let initialize = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(initialize_request) = initialize else {
panic!("expected initialize request");
};
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id: initialize_request.id,
result: serde_json::json!({ "protocolVersion": PROTOCOL_VERSION }),
}),
)
.await;
let initialized = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Notification(notification) = initialized else {
panic!("expected initialized notification");
};
assert_eq!(notification.method, INITIALIZED_METHOD);
let exec_request = read_jsonrpc_line(&mut lines).await;
let JSONRPCMessage::Request(JSONRPCRequest { id, method, .. }) = exec_request else {
panic!("expected exec request");
};
assert_eq!(method, EXEC_METHOD);
write_jsonrpc_line(
&mut server_writer,
JSONRPCMessage::Response(JSONRPCResponse {
id,
result: serde_json::json!({ "processId": "proc-1" }),
}),
)
.await;
drop(server_writer);
});
let client =
match ExecServerClient::connect_stdio(client_stdin, client_stdout, test_options()).await {
Ok(client) => client,
Err(err) => panic!("failed to connect test client: {err}"),
};
let process = match client
.start_process(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().unwrap_or_else(|err| panic!("missing cwd: {err}")),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
})
.await
{
Ok(process) => process,
Err(err) => panic!("failed to start process: {err}"),
};
let _ = process;
}

View File

@@ -0,0 +1,27 @@
use std::time::Duration;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
/// Connection options for any exec-server client transport.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ExecServerClientConnectOptions {
pub client_name: String,
pub initialize_timeout: Duration,
}
/// WebSocket connection arguments for a remote exec-server.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct RemoteExecServerConnectArgs {
pub websocket_url: String,
pub client_name: String,
pub connect_timeout: Duration,
pub initialize_timeout: Duration,
}
/// Connection-level server events.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ExecServerEvent {
OutputDelta(ExecOutputDeltaNotification),
Exited(ExecExitedNotification),
}
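Consumers of `ExecServerEvent` typically drain the event channel and dispatch on the variant. A sketch with a hypothetical stand-in enum, using plain payload fields so the dispatch shape is visible without the protocol crate (the real variants wrap `ExecOutputDeltaNotification` and `ExecExitedNotification`):

```rust
// Hypothetical stand-in for ExecServerEvent; field names are illustrative.
#[derive(Debug)]
enum Event {
    OutputDelta { process_id: String, chunk: Vec<u8> },
    Exited { process_id: String, exit_code: Option<i32> },
}

// Dispatch on the variant, as a caller draining the events channel would.
fn describe(event: &Event) -> String {
    match event {
        Event::OutputDelta { process_id, chunk } => {
            format!("{process_id}: {} bytes", chunk.len())
        }
        Event::Exited { process_id, exit_code } => {
            format!("{process_id}: exited with {exit_code:?}")
        }
    }
}

fn main() {
    let delta = Event::OutputDelta {
        process_id: "proc-1".to_string(),
        chunk: b"hello".to_vec(),
    };
    println!("{}", describe(&delta));
}
```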

View File

@@ -0,0 +1,421 @@
use codex_app_server_protocol::JSONRPCMessage;
use futures::SinkExt;
use futures::StreamExt;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncRead;
use tokio::io::AsyncWrite;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
use tokio::io::BufWriter;
use tokio::sync::mpsc;
use tokio_tungstenite::WebSocketStream;
use tokio_tungstenite::tungstenite::Message;
pub(crate) const CHANNEL_CAPACITY: usize = 128;
#[derive(Debug)]
pub(crate) enum JsonRpcConnectionEvent {
Message(JSONRPCMessage),
Disconnected { reason: Option<String> },
}
pub(crate) struct JsonRpcConnection {
outgoing_tx: mpsc::Sender<JSONRPCMessage>,
incoming_rx: mpsc::Receiver<JsonRpcConnectionEvent>,
task_handles: Vec<tokio::task::JoinHandle<()>>,
}
impl JsonRpcConnection {
pub(crate) fn from_stdio<R, W>(reader: R, writer: W, connection_label: String) -> Self
where
R: AsyncRead + Unpin + Send + 'static,
W: AsyncWrite + Unpin + Send + 'static,
{
let (outgoing_tx, mut outgoing_rx) = mpsc::channel(CHANNEL_CAPACITY);
let (incoming_tx, incoming_rx) = mpsc::channel(CHANNEL_CAPACITY);
let reader_label = connection_label.clone();
let incoming_tx_for_reader = incoming_tx.clone();
let reader_task = tokio::spawn(async move {
let mut lines = BufReader::new(reader).lines();
loop {
match lines.next_line().await {
Ok(Some(line)) => {
if line.trim().is_empty() {
continue;
}
match serde_json::from_str::<JSONRPCMessage>(&line) {
Ok(message) => {
if incoming_tx_for_reader
.send(JsonRpcConnectionEvent::Message(message))
.await
.is_err()
{
break;
}
}
Err(err) => {
send_disconnected(
&incoming_tx_for_reader,
Some(format!(
"failed to parse JSON-RPC message from {reader_label}: {err}"
)),
)
.await;
break;
}
}
}
Ok(None) => {
send_disconnected(&incoming_tx_for_reader, /*reason*/ None).await;
break;
}
Err(err) => {
send_disconnected(
&incoming_tx_for_reader,
Some(format!(
"failed to read JSON-RPC message from {reader_label}: {err}"
)),
)
.await;
break;
}
}
}
});
let writer_task = tokio::spawn(async move {
let mut writer = BufWriter::new(writer);
while let Some(message) = outgoing_rx.recv().await {
if let Err(err) = write_jsonrpc_line_message(&mut writer, &message).await {
send_disconnected(
&incoming_tx,
Some(format!(
"failed to write JSON-RPC message to {connection_label}: {err}"
)),
)
.await;
break;
}
}
});
Self {
outgoing_tx,
incoming_rx,
task_handles: vec![reader_task, writer_task],
}
}
pub(crate) fn from_websocket<S>(stream: WebSocketStream<S>, connection_label: String) -> Self
where
S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
let (outgoing_tx, mut outgoing_rx) = mpsc::channel(CHANNEL_CAPACITY);
let (incoming_tx, incoming_rx) = mpsc::channel(CHANNEL_CAPACITY);
let (mut websocket_writer, mut websocket_reader) = stream.split();
let reader_label = connection_label.clone();
let incoming_tx_for_reader = incoming_tx.clone();
let reader_task = tokio::spawn(async move {
loop {
match websocket_reader.next().await {
Some(Ok(Message::Text(text))) => {
match serde_json::from_str::<JSONRPCMessage>(text.as_ref()) {
Ok(message) => {
if incoming_tx_for_reader
.send(JsonRpcConnectionEvent::Message(message))
.await
.is_err()
{
break;
}
}
Err(err) => {
send_disconnected(
&incoming_tx_for_reader,
Some(format!(
"failed to parse websocket JSON-RPC message from {reader_label}: {err}"
)),
)
.await;
break;
}
}
}
Some(Ok(Message::Binary(bytes))) => {
match serde_json::from_slice::<JSONRPCMessage>(bytes.as_ref()) {
Ok(message) => {
if incoming_tx_for_reader
.send(JsonRpcConnectionEvent::Message(message))
.await
.is_err()
{
break;
}
}
Err(err) => {
send_disconnected(
&incoming_tx_for_reader,
Some(format!(
"failed to parse websocket JSON-RPC message from {reader_label}: {err}"
)),
)
.await;
break;
}
}
}
Some(Ok(Message::Close(_))) => {
send_disconnected(&incoming_tx_for_reader, /*reason*/ None).await;
break;
}
Some(Ok(Message::Ping(_))) | Some(Ok(Message::Pong(_))) => {}
Some(Ok(_)) => {}
Some(Err(err)) => {
send_disconnected(
&incoming_tx_for_reader,
Some(format!(
"failed to read websocket JSON-RPC message from {reader_label}: {err}"
)),
)
.await;
break;
}
None => {
send_disconnected(&incoming_tx_for_reader, /*reason*/ None).await;
break;
}
}
}
});
let writer_task = tokio::spawn(async move {
while let Some(message) = outgoing_rx.recv().await {
match serialize_jsonrpc_message(&message) {
Ok(encoded) => {
if let Err(err) = websocket_writer.send(Message::Text(encoded.into())).await
{
send_disconnected(
&incoming_tx,
Some(format!(
"failed to write websocket JSON-RPC message to {connection_label}: {err}"
)),
)
.await;
break;
}
}
Err(err) => {
send_disconnected(
&incoming_tx,
Some(format!(
"failed to serialize JSON-RPC message for {connection_label}: {err}"
)),
)
.await;
break;
}
}
}
});
Self {
outgoing_tx,
incoming_rx,
task_handles: vec![reader_task, writer_task],
}
}
pub(crate) fn into_parts(
self,
) -> (
mpsc::Sender<JSONRPCMessage>,
mpsc::Receiver<JsonRpcConnectionEvent>,
Vec<tokio::task::JoinHandle<()>>,
) {
(self.outgoing_tx, self.incoming_rx, self.task_handles)
}
}
async fn send_disconnected(
incoming_tx: &mpsc::Sender<JsonRpcConnectionEvent>,
reason: Option<String>,
) {
let _ = incoming_tx
.send(JsonRpcConnectionEvent::Disconnected { reason })
.await;
}
async fn write_jsonrpc_line_message<W>(
writer: &mut BufWriter<W>,
message: &JSONRPCMessage,
) -> std::io::Result<()>
where
W: AsyncWrite + Unpin,
{
let encoded =
serialize_jsonrpc_message(message).map_err(|err| std::io::Error::other(err.to_string()))?;
writer.write_all(encoded.as_bytes()).await?;
writer.write_all(b"\n").await?;
writer.flush().await
}
fn serialize_jsonrpc_message(message: &JSONRPCMessage) -> Result<String, serde_json::Error> {
serde_json::to_string(message)
}
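The stdio transport above frames each JSON-RPC message as one `\n`-terminated line, flushing after every write, and the reader skips blank lines before parsing. A std-only round-trip sketch of that framing (names `frame`/`deframe` are illustrative, not part of this crate):

```rust
use std::io::{BufRead, BufReader, Write};

// Encode messages as newline-delimited lines, as write_jsonrpc_line_message does.
fn frame(messages: &[&str]) -> Vec<u8> {
    let mut wire = Vec::new();
    for message in messages {
        wire.write_all(message.as_bytes()).expect("write message");
        wire.write_all(b"\n").expect("write newline");
    }
    wire
}

// Decode line-by-line, skipping blank lines as the from_stdio reader task does.
fn deframe(wire: &[u8]) -> Vec<String> {
    BufReader::new(wire)
        .lines()
        .map(|line| line.expect("read line"))
        .filter(|line| !line.trim().is_empty())
        .collect()
}

fn main() {
    let wire = frame(&[r#"{"id":1,"method":"exec"}"#, r#"{"id":1,"result":{}}"#]);
    let decoded = deframe(&wire);
    println!("{}", decoded.len());
}
```

Flushing after each line matters here: a buffered writer that only flushes on drop would stall the peer, which is why `write_jsonrpc_line_message` flushes explicitly.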
#[cfg(test)]
mod tests {
use std::time::Duration;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
use tokio::sync::mpsc;
use tokio::time::timeout;
use super::JsonRpcConnection;
use super::JsonRpcConnectionEvent;
use super::serialize_jsonrpc_message;
async fn recv_event(
incoming_rx: &mut mpsc::Receiver<JsonRpcConnectionEvent>,
) -> JsonRpcConnectionEvent {
let recv_result = timeout(Duration::from_secs(1), incoming_rx.recv()).await;
let maybe_event = match recv_result {
Ok(maybe_event) => maybe_event,
Err(err) => panic!("timed out waiting for connection event: {err}"),
};
match maybe_event {
Some(event) => event,
None => panic!("connection event stream ended unexpectedly"),
}
}
async fn read_jsonrpc_line<R>(lines: &mut tokio::io::Lines<BufReader<R>>) -> JSONRPCMessage
where
R: tokio::io::AsyncRead + Unpin,
{
let next_line = timeout(Duration::from_secs(1), lines.next_line()).await;
let line_result = match next_line {
Ok(line_result) => line_result,
Err(err) => panic!("timed out waiting for JSON-RPC line: {err}"),
};
let maybe_line = match line_result {
Ok(maybe_line) => maybe_line,
Err(err) => panic!("failed to read JSON-RPC line: {err}"),
};
let line = match maybe_line {
Some(line) => line,
None => panic!("connection closed before JSON-RPC line arrived"),
};
match serde_json::from_str::<JSONRPCMessage>(&line) {
Ok(message) => message,
Err(err) => panic!("failed to parse JSON-RPC line: {err}"),
}
}
#[tokio::test]
async fn stdio_connection_reads_and_writes_jsonrpc_messages() {
let (mut writer_to_connection, connection_reader) = tokio::io::duplex(1024);
let (connection_writer, reader_from_connection) = tokio::io::duplex(1024);
let connection =
JsonRpcConnection::from_stdio(connection_reader, connection_writer, "test".to_string());
let (outgoing_tx, mut incoming_rx, _task_handles) = connection.into_parts();
let incoming_message = JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(7),
method: "initialize".to_string(),
params: Some(serde_json::json!({ "clientName": "test-client" })),
trace: None,
});
let encoded = match serialize_jsonrpc_message(&incoming_message) {
Ok(encoded) => encoded,
Err(err) => panic!("failed to serialize incoming message: {err}"),
};
if let Err(err) = writer_to_connection
.write_all(format!("{encoded}\n").as_bytes())
.await
{
panic!("failed to write to connection: {err}");
}
let event = recv_event(&mut incoming_rx).await;
match event {
JsonRpcConnectionEvent::Message(message) => {
assert_eq!(message, incoming_message);
}
JsonRpcConnectionEvent::Disconnected { reason } => {
panic!("unexpected disconnect event: {reason:?}");
}
}
let outgoing_message = JSONRPCMessage::Response(JSONRPCResponse {
id: RequestId::Integer(7),
result: serde_json::json!({ "protocolVersion": "exec-server.v0" }),
});
if let Err(err) = outgoing_tx.send(outgoing_message.clone()).await {
panic!("failed to queue outgoing message: {err}");
}
let mut lines = BufReader::new(reader_from_connection).lines();
let message = read_jsonrpc_line(&mut lines).await;
assert_eq!(message, outgoing_message);
}
#[tokio::test]
async fn stdio_connection_reports_parse_errors() {
let (mut writer_to_connection, connection_reader) = tokio::io::duplex(1024);
let (connection_writer, _reader_from_connection) = tokio::io::duplex(1024);
let connection =
JsonRpcConnection::from_stdio(connection_reader, connection_writer, "test".to_string());
let (_outgoing_tx, mut incoming_rx, _task_handles) = connection.into_parts();
if let Err(err) = writer_to_connection.write_all(b"not-json\n").await {
panic!("failed to write invalid JSON: {err}");
}
let event = recv_event(&mut incoming_rx).await;
match event {
JsonRpcConnectionEvent::Disconnected { reason } => {
let reason = match reason {
Some(reason) => reason,
None => panic!("expected a parse error reason"),
};
assert!(
reason.contains("failed to parse JSON-RPC message from test"),
"unexpected disconnect reason: {reason}"
);
}
JsonRpcConnectionEvent::Message(message) => {
panic!("unexpected JSON-RPC message: {message:?}");
}
}
}
#[tokio::test]
async fn stdio_connection_reports_clean_disconnect() {
let (writer_to_connection, connection_reader) = tokio::io::duplex(1024);
let (connection_writer, _reader_from_connection) = tokio::io::duplex(1024);
let connection =
JsonRpcConnection::from_stdio(connection_reader, connection_writer, "test".to_string());
let (_outgoing_tx, mut incoming_rx, _task_handles) = connection.into_parts();
drop(writer_to_connection);
let event = recv_event(&mut incoming_rx).await;
match event {
JsonRpcConnectionEvent::Disconnected { reason } => {
assert_eq!(reason, None);
}
JsonRpcConnectionEvent::Message(message) => {
panic!("unexpected JSON-RPC message: {message:?}");
}
}
}
}

@@ -0,0 +1,30 @@
mod client;
mod client_api;
mod connection;
mod local;
mod protocol;
mod server;
pub use client::ExecServerClient;
pub use client::ExecServerError;
pub use client_api::ExecServerClientConnectOptions;
pub use client_api::ExecServerEvent;
pub use client_api::RemoteExecServerConnectArgs;
pub use local::ExecServerLaunchCommand;
pub use local::SpawnedExecServer;
pub use local::spawn_local_exec_server;
pub use protocol::ExecExitedNotification;
pub use protocol::ExecOutputDeltaNotification;
pub use protocol::ExecOutputStream;
pub use protocol::ExecParams;
pub use protocol::ExecResponse;
pub use protocol::InitializeParams;
pub use protocol::InitializeResponse;
pub use protocol::TerminateParams;
pub use protocol::TerminateResponse;
pub use protocol::WriteParams;
pub use protocol::WriteResponse;
pub use server::ExecServerTransport;
pub use server::ExecServerTransportParseError;
pub use server::run_main;
pub use server::run_main_with_transport;

@@ -0,0 +1,70 @@
use std::path::PathBuf;
use std::process::Stdio;
use std::sync::Mutex as StdMutex;
use tokio::process::Child;
use tokio::process::Command;
use crate::client::ExecServerClient;
use crate::client::ExecServerError;
use crate::client_api::ExecServerClientConnectOptions;
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ExecServerLaunchCommand {
pub program: PathBuf,
pub args: Vec<String>,
}
pub struct SpawnedExecServer {
client: ExecServerClient,
child: StdMutex<Option<Child>>,
}
impl SpawnedExecServer {
pub fn client(&self) -> &ExecServerClient {
&self.client
}
}
impl Drop for SpawnedExecServer {
fn drop(&mut self) {
if let Ok(mut child_guard) = self.child.lock()
&& let Some(child) = child_guard.as_mut()
{
let _ = child.start_kill();
}
}
}
pub async fn spawn_local_exec_server(
command: ExecServerLaunchCommand,
options: ExecServerClientConnectOptions,
) -> Result<SpawnedExecServer, ExecServerError> {
let mut child = Command::new(&command.program);
child.args(&command.args);
child.stdin(Stdio::piped());
child.stdout(Stdio::piped());
child.stderr(Stdio::inherit());
child.kill_on_drop(true);
let mut child = child.spawn().map_err(ExecServerError::Spawn)?;
let stdin = child.stdin.take().ok_or_else(|| {
ExecServerError::Protocol("exec-server stdin was not captured".to_string())
})?;
let stdout = child.stdout.take().ok_or_else(|| {
ExecServerError::Protocol("exec-server stdout was not captured".to_string())
})?;
let client = match ExecServerClient::connect_stdio(stdin, stdout, options).await {
Ok(client) => client,
Err(err) => {
let _ = child.start_kill();
return Err(err);
}
};
Ok(SpawnedExecServer {
client,
child: StdMutex::new(Some(child)),
})
}
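
The `spawn_local_exec_server` wiring above (piped stdin/stdout, inherited stderr, erroring out when a pipe was not captured) can be sketched with only the standard library. This is a simplified stand-in, not the crate's code: `cat` substitutes for the exec-server binary (assuming a POSIX environment), and `echo_line` is a hypothetical helper that round-trips one newline-delimited message, like the JSON-RPC framing.

```rust
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

// Mirrors the spawn wiring: pipe stdin/stdout, inherit stderr, and fail
// fast if either pipe was not captured. `cat` stands in for the
// exec-server binary (assumption: a POSIX environment).
fn echo_line(message: &str) -> std::io::Result<String> {
    let mut child = Command::new("cat")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .stderr(Stdio::inherit())
        .spawn()?;
    let mut stdin = child
        .stdin
        .take()
        .ok_or_else(|| std::io::Error::other("stdin was not captured"))?;
    let stdout = child
        .stdout
        .take()
        .ok_or_else(|| std::io::Error::other("stdout was not captured"))?;
    // One newline-delimited message each way, like the JSON-RPC framing.
    stdin.write_all(message.as_bytes())?;
    drop(stdin); // close the pipe so `cat` exits
    let mut line = String::new();
    BufReader::new(stdout).read_line(&mut line)?;
    child.wait()?;
    Ok(line)
}

fn main() {
    let line = echo_line("{\"jsonrpc\":\"2.0\"}\n").expect("spawn failed");
    print!("{line}");
}
```

Note the same failure-cleanup shape as the real function: if the handshake (here, the read) fails, the child is reaped rather than leaked.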

@@ -0,0 +1,184 @@
use std::collections::HashMap;
use std::path::PathBuf;
use base64::engine::general_purpose::STANDARD as BASE64_STANDARD;
use serde::Deserialize;
use serde::Serialize;
pub const INITIALIZE_METHOD: &str = "initialize";
pub const INITIALIZED_METHOD: &str = "initialized";
pub const EXEC_METHOD: &str = "process/start";
pub const EXEC_READ_METHOD: &str = "process/read";
pub const EXEC_WRITE_METHOD: &str = "process/write";
pub const EXEC_TERMINATE_METHOD: &str = "process/terminate";
pub const EXEC_OUTPUT_DELTA_METHOD: &str = "process/output";
pub const EXEC_EXITED_METHOD: &str = "process/exited";
pub const FS_READ_FILE_METHOD: &str = "fs/readFile";
pub const FS_WRITE_FILE_METHOD: &str = "fs/writeFile";
pub const FS_CREATE_DIRECTORY_METHOD: &str = "fs/createDirectory";
pub const FS_GET_METADATA_METHOD: &str = "fs/getMetadata";
pub const FS_READ_DIRECTORY_METHOD: &str = "fs/readDirectory";
pub const FS_REMOVE_METHOD: &str = "fs/remove";
pub const FS_COPY_METHOD: &str = "fs/copy";
pub const PROTOCOL_VERSION: &str = "exec-server.v0";
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(transparent)]
pub struct ByteChunk(#[serde(with = "base64_bytes")] pub Vec<u8>);
impl ByteChunk {
pub fn into_inner(self) -> Vec<u8> {
self.0
}
}
impl From<Vec<u8>> for ByteChunk {
fn from(value: Vec<u8>) -> Self {
Self(value)
}
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InitializeParams {
pub client_name: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InitializeResponse {
pub protocol_version: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ExecParams {
/// Client-chosen logical process handle scoped to this connection/session.
/// This is a protocol key, not an OS pid.
pub process_id: String,
pub argv: Vec<String>,
pub cwd: PathBuf,
pub env: HashMap<String, String>,
pub tty: bool,
pub arg0: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub sandbox: Option<ExecSandboxConfig>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ExecSandboxConfig {
pub mode: ExecSandboxMode,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum ExecSandboxMode {
None,
HostDefault,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ExecResponse {
pub process_id: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ReadParams {
pub process_id: String,
pub after_seq: Option<u64>,
pub max_bytes: Option<usize>,
pub wait_ms: Option<u64>,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ProcessOutputChunk {
pub seq: u64,
pub stream: ExecOutputStream,
pub chunk: ByteChunk,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ReadResponse {
pub chunks: Vec<ProcessOutputChunk>,
pub next_seq: u64,
pub exited: bool,
pub exit_code: Option<i32>,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct WriteParams {
pub process_id: String,
pub chunk: ByteChunk,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct WriteResponse {
pub accepted: bool,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct TerminateParams {
pub process_id: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct TerminateResponse {
pub running: bool,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum ExecOutputStream {
Stdout,
Stderr,
Pty,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ExecOutputDeltaNotification {
pub process_id: String,
pub stream: ExecOutputStream,
pub chunk: ByteChunk,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ExecExitedNotification {
pub process_id: String,
pub exit_code: i32,
}
mod base64_bytes {
use super::BASE64_STANDARD;
use base64::Engine as _;
use serde::Deserialize;
use serde::Deserializer;
use serde::Serializer;
pub fn serialize<S>(bytes: &[u8], serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(&BASE64_STANDARD.encode(bytes))
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Vec<u8>, D::Error>
where
D: Deserializer<'de>,
{
let encoded = String::deserialize(deserializer)?;
BASE64_STANDARD
.decode(encoded)
.map_err(serde::de::Error::custom)
}
}
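
The seq-based pagination contract implied by `ReadParams` and `ReadResponse` (return chunks with `seq > after_seq`, stop at `max_bytes`, and report a `next_seq` cursor) can be sketched with std-only types. `Chunk` and `page` here are hypothetical simplifications, not the crate's API; the guard that always admits at least one chunk keeps a single oversized chunk from stalling the reader.

```rust
use std::collections::VecDeque;

// Hypothetical simplified mirror of a retained-output store; real chunks
// also carry a stream tag and payload bytes.
struct Chunk {
    seq: u64,
    bytes: Vec<u8>,
}

// Select chunks with seq > after_seq, stopping once max_bytes is reached,
// but always returning at least one chunk so a single oversized chunk
// cannot stall the reader. Returns the selected seqs and the next cursor.
fn page(output: &VecDeque<Chunk>, after_seq: u64, max_bytes: usize, next_seq: u64) -> (Vec<u64>, u64) {
    let mut seqs = Vec::new();
    let mut total = 0usize;
    let mut next = next_seq;
    for c in output.iter().filter(|c| c.seq > after_seq) {
        if !seqs.is_empty() && total + c.bytes.len() > max_bytes {
            break;
        }
        total += c.bytes.len();
        seqs.push(c.seq);
        next = c.seq + 1;
        if total >= max_bytes {
            break;
        }
    }
    (seqs, next)
}

fn main() {
    let output: VecDeque<Chunk> = VecDeque::from(vec![
        Chunk { seq: 1, bytes: vec![0; 4] },
        Chunk { seq: 2, bytes: vec![0; 4] },
        Chunk { seq: 3, bytes: vec![0; 4] },
    ]);
    // max_bytes = 6: the first 4-byte chunk fits, the second would exceed
    // the budget, so the page holds seq 1 only and next_seq = 2 lets the
    // client resume exactly where truncation happened.
    let (seqs, next) = page(&output, 0, 6, 4);
    println!("{seqs:?} {next}"); // prints "[1] 2"
}
```

A client polls by passing the returned cursor back as `after_seq` (well, `next_seq - 1` in this sketch's filtering terms), which is what makes truncated reads resumable.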

@@ -0,0 +1,21 @@
mod filesystem;
mod handler;
mod processor;
mod routing;
mod transport;
pub(crate) use handler::ExecServerHandler;
pub(crate) use routing::ExecServerOutboundMessage;
pub(crate) use routing::ExecServerServerNotification;
pub use transport::ExecServerTransport;
pub use transport::ExecServerTransportParseError;
pub async fn run_main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
run_main_with_transport(ExecServerTransport::Stdio).await
}
pub async fn run_main_with_transport(
transport: ExecServerTransport,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
transport::run_transport(transport).await
}

@@ -0,0 +1,171 @@
use std::io;
use std::sync::Arc;
use base64::Engine as _;
use base64::engine::general_purpose::STANDARD;
use codex_app_server_protocol::FsCopyParams;
use codex_app_server_protocol::FsCopyResponse;
use codex_app_server_protocol::FsCreateDirectoryParams;
use codex_app_server_protocol::FsCreateDirectoryResponse;
use codex_app_server_protocol::FsGetMetadataParams;
use codex_app_server_protocol::FsGetMetadataResponse;
use codex_app_server_protocol::FsReadDirectoryEntry;
use codex_app_server_protocol::FsReadDirectoryParams;
use codex_app_server_protocol::FsReadDirectoryResponse;
use codex_app_server_protocol::FsReadFileParams;
use codex_app_server_protocol::FsReadFileResponse;
use codex_app_server_protocol::FsRemoveParams;
use codex_app_server_protocol::FsRemoveResponse;
use codex_app_server_protocol::FsWriteFileParams;
use codex_app_server_protocol::FsWriteFileResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_environment::CopyOptions;
use codex_environment::CreateDirectoryOptions;
use codex_environment::Environment;
use codex_environment::ExecutorFileSystem;
use codex_environment::RemoveOptions;
use crate::server::routing::internal_error;
use crate::server::routing::invalid_request;
#[derive(Clone)]
pub(crate) struct ExecServerFileSystem {
file_system: Arc<dyn ExecutorFileSystem>,
}
impl Default for ExecServerFileSystem {
fn default() -> Self {
Self {
file_system: Environment::default().get_filesystem(),
}
}
}
impl ExecServerFileSystem {
pub(crate) async fn read_file(
&self,
params: FsReadFileParams,
) -> Result<FsReadFileResponse, JSONRPCErrorError> {
let bytes = self
.file_system
.read_file(&params.path)
.await
.map_err(map_fs_error)?;
Ok(FsReadFileResponse {
data_base64: STANDARD.encode(bytes),
})
}
pub(crate) async fn write_file(
&self,
params: FsWriteFileParams,
) -> Result<FsWriteFileResponse, JSONRPCErrorError> {
let bytes = STANDARD.decode(params.data_base64).map_err(|err| {
invalid_request(format!(
"fs/writeFile requires valid base64 dataBase64: {err}"
))
})?;
self.file_system
.write_file(&params.path, bytes)
.await
.map_err(map_fs_error)?;
Ok(FsWriteFileResponse {})
}
pub(crate) async fn create_directory(
&self,
params: FsCreateDirectoryParams,
) -> Result<FsCreateDirectoryResponse, JSONRPCErrorError> {
self.file_system
.create_directory(
&params.path,
CreateDirectoryOptions {
recursive: params.recursive.unwrap_or(true),
},
)
.await
.map_err(map_fs_error)?;
Ok(FsCreateDirectoryResponse {})
}
pub(crate) async fn get_metadata(
&self,
params: FsGetMetadataParams,
) -> Result<FsGetMetadataResponse, JSONRPCErrorError> {
let metadata = self
.file_system
.get_metadata(&params.path)
.await
.map_err(map_fs_error)?;
Ok(FsGetMetadataResponse {
is_directory: metadata.is_directory,
is_file: metadata.is_file,
created_at_ms: metadata.created_at_ms,
modified_at_ms: metadata.modified_at_ms,
})
}
pub(crate) async fn read_directory(
&self,
params: FsReadDirectoryParams,
) -> Result<FsReadDirectoryResponse, JSONRPCErrorError> {
let entries = self
.file_system
.read_directory(&params.path)
.await
.map_err(map_fs_error)?;
Ok(FsReadDirectoryResponse {
entries: entries
.into_iter()
.map(|entry| FsReadDirectoryEntry {
file_name: entry.file_name,
is_directory: entry.is_directory,
is_file: entry.is_file,
is_symlink: entry.is_symlink,
})
.collect(),
})
}
pub(crate) async fn remove(
&self,
params: FsRemoveParams,
) -> Result<FsRemoveResponse, JSONRPCErrorError> {
self.file_system
.remove(
&params.path,
RemoveOptions {
recursive: params.recursive.unwrap_or(true),
force: params.force.unwrap_or(true),
},
)
.await
.map_err(map_fs_error)?;
Ok(FsRemoveResponse {})
}
pub(crate) async fn copy(
&self,
params: FsCopyParams,
) -> Result<FsCopyResponse, JSONRPCErrorError> {
self.file_system
.copy(
&params.source_path,
&params.destination_path,
CopyOptions {
recursive: params.recursive,
},
)
.await
.map_err(map_fs_error)?;
Ok(FsCopyResponse {})
}
}
fn map_fs_error(err: io::Error) -> JSONRPCErrorError {
if err.kind() == io::ErrorKind::InvalidInput {
invalid_request(err.to_string())
} else {
internal_error(err.to_string())
}
}

@@ -0,0 +1,633 @@
use std::collections::HashMap;
use std::collections::VecDeque;
use std::sync::Arc;
use std::time::Duration;
use codex_app_server_protocol::FsCopyParams;
use codex_app_server_protocol::FsCopyResponse;
use codex_app_server_protocol::FsCreateDirectoryParams;
use codex_app_server_protocol::FsCreateDirectoryResponse;
use codex_app_server_protocol::FsGetMetadataParams;
use codex_app_server_protocol::FsGetMetadataResponse;
use codex_app_server_protocol::FsReadDirectoryParams;
use codex_app_server_protocol::FsReadDirectoryResponse;
use codex_app_server_protocol::FsReadFileParams;
use codex_app_server_protocol::FsReadFileResponse;
use codex_app_server_protocol::FsRemoveParams;
use codex_app_server_protocol::FsRemoveResponse;
use codex_app_server_protocol::FsWriteFileParams;
use codex_app_server_protocol::FsWriteFileResponse;
use codex_utils_pty::ExecCommandSession;
use codex_utils_pty::TerminalSize;
use tokio::sync::Mutex;
use tokio::sync::Notify;
use tokio::sync::mpsc;
use tracing::warn;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
use crate::protocol::ExecOutputStream;
use crate::protocol::ExecResponse;
use crate::protocol::ExecSandboxMode;
use crate::protocol::InitializeResponse;
use crate::protocol::PROTOCOL_VERSION;
use crate::protocol::ProcessOutputChunk;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteResponse;
use crate::server::filesystem::ExecServerFileSystem;
use crate::server::routing::ExecServerOutboundMessage;
use crate::server::routing::ExecServerServerNotification;
use crate::server::routing::internal_error;
use crate::server::routing::invalid_params;
use crate::server::routing::invalid_request;
const RETAINED_OUTPUT_BYTES_PER_PROCESS: usize = 1024 * 1024;
#[cfg(test)]
const EXITED_PROCESS_RETENTION: Duration = Duration::from_millis(25);
#[cfg(not(test))]
const EXITED_PROCESS_RETENTION: Duration = Duration::from_secs(30);
#[derive(Clone)]
struct RetainedOutputChunk {
seq: u64,
stream: ExecOutputStream,
chunk: Vec<u8>,
}
struct RunningProcess {
session: ExecCommandSession,
tty: bool,
output: VecDeque<RetainedOutputChunk>,
retained_bytes: usize,
next_seq: u64,
exit_code: Option<i32>,
output_notify: Arc<Notify>,
}
pub(crate) struct ExecServerHandler {
outbound_tx: mpsc::Sender<ExecServerOutboundMessage>,
file_system: ExecServerFileSystem,
// Keyed by client-chosen logical `processId` scoped to this connection.
// This is a protocol handle, not an OS pid.
processes: Arc<Mutex<HashMap<String, RunningProcess>>>,
initialize_requested: bool,
initialized: bool,
}
impl ExecServerHandler {
pub(crate) fn new(outbound_tx: mpsc::Sender<ExecServerOutboundMessage>) -> Self {
Self {
outbound_tx,
file_system: ExecServerFileSystem::default(),
processes: Arc::new(Mutex::new(HashMap::new())),
initialize_requested: false,
initialized: false,
}
}
pub(crate) async fn shutdown(&self) {
let remaining = {
let mut processes = self.processes.lock().await;
processes
.drain()
.map(|(_, process)| process)
.collect::<Vec<_>>()
};
for process in remaining {
process.session.terminate();
}
}
pub(crate) fn initialized(&mut self) -> Result<(), String> {
if !self.initialize_requested {
return Err("received `initialized` notification before `initialize`".into());
}
self.initialized = true;
Ok(())
}
pub(crate) fn initialize(
&mut self,
) -> Result<InitializeResponse, codex_app_server_protocol::JSONRPCErrorError> {
if self.initialize_requested {
return Err(invalid_request(
"initialize may only be sent once per connection".to_string(),
));
}
self.initialize_requested = true;
Ok(InitializeResponse {
protocol_version: PROTOCOL_VERSION.to_string(),
})
}
fn require_initialized(&self) -> Result<(), codex_app_server_protocol::JSONRPCErrorError> {
if !self.initialize_requested {
return Err(invalid_request(
"client must call initialize before using exec methods".to_string(),
));
}
if !self.initialized {
return Err(invalid_request(
"client must send initialized before using exec methods".to_string(),
));
}
Ok(())
}
pub(crate) async fn exec(
&self,
params: crate::protocol::ExecParams,
) -> Result<ExecResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
let process_id = params.process_id.clone();
// Same-connection requests are serialized by the RPC processor, and the
// in-process client holds the handler mutex across this full call. That
// makes this pre-spawn duplicate check safe for the current entrypoints.
{
let process_map = self.processes.lock().await;
if process_map.contains_key(&process_id) {
return Err(invalid_request(format!(
"process {process_id} already exists"
)));
}
}
if matches!(
params.sandbox.as_ref().map(|sandbox| sandbox.mode),
Some(ExecSandboxMode::HostDefault)
) {
return Err(invalid_request(
"sandbox mode `hostDefault` is not supported by exec-server yet".to_string(),
));
}
let (program, args) = params
.argv
.split_first()
.ok_or_else(|| invalid_params("argv must not be empty".to_string()))?;
let spawned = if params.tty {
codex_utils_pty::spawn_pty_process(
program,
args,
params.cwd.as_path(),
&params.env,
&params.arg0,
TerminalSize::default(),
)
.await
} else {
codex_utils_pty::spawn_pipe_process_no_stdin(
program,
args,
params.cwd.as_path(),
&params.env,
&params.arg0,
)
.await
}
.map_err(|err| internal_error(err.to_string()))?;
let output_notify = Arc::new(Notify::new());
{
let mut process_map = self.processes.lock().await;
process_map.insert(
process_id.clone(),
RunningProcess {
session: spawned.session,
tty: params.tty,
output: VecDeque::new(),
retained_bytes: 0,
next_seq: 1,
exit_code: None,
output_notify: Arc::clone(&output_notify),
},
);
}
tokio::spawn(stream_output(
process_id.clone(),
if params.tty {
ExecOutputStream::Pty
} else {
ExecOutputStream::Stdout
},
spawned.stdout_rx,
self.outbound_tx.clone(),
Arc::clone(&self.processes),
Arc::clone(&output_notify),
));
tokio::spawn(stream_output(
process_id.clone(),
if params.tty {
ExecOutputStream::Pty
} else {
ExecOutputStream::Stderr
},
spawned.stderr_rx,
self.outbound_tx.clone(),
Arc::clone(&self.processes),
Arc::clone(&output_notify),
));
tokio::spawn(watch_exit(
process_id.clone(),
spawned.exit_rx,
self.outbound_tx.clone(),
Arc::clone(&self.processes),
output_notify,
));
Ok(ExecResponse { process_id })
}
pub(crate) async fn fs_read_file(
&self,
params: FsReadFileParams,
) -> Result<FsReadFileResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
self.file_system.read_file(params).await
}
pub(crate) async fn fs_write_file(
&self,
params: FsWriteFileParams,
) -> Result<FsWriteFileResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
self.file_system.write_file(params).await
}
pub(crate) async fn fs_create_directory(
&self,
params: FsCreateDirectoryParams,
) -> Result<FsCreateDirectoryResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
self.file_system.create_directory(params).await
}
pub(crate) async fn fs_get_metadata(
&self,
params: FsGetMetadataParams,
) -> Result<FsGetMetadataResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
self.file_system.get_metadata(params).await
}
pub(crate) async fn fs_read_directory(
&self,
params: FsReadDirectoryParams,
) -> Result<FsReadDirectoryResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
self.file_system.read_directory(params).await
}
pub(crate) async fn fs_remove(
&self,
params: FsRemoveParams,
) -> Result<FsRemoveResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
self.file_system.remove(params).await
}
pub(crate) async fn fs_copy(
&self,
params: FsCopyParams,
) -> Result<FsCopyResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
self.file_system.copy(params).await
}
pub(crate) async fn read(
&self,
params: crate::protocol::ReadParams,
) -> Result<ReadResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
let after_seq = params.after_seq.unwrap_or(0);
let max_bytes = params.max_bytes.unwrap_or(usize::MAX);
let wait = Duration::from_millis(params.wait_ms.unwrap_or(0));
let deadline = tokio::time::Instant::now() + wait;
loop {
let (response, output_notify) = {
let process_map = self.processes.lock().await;
let process = process_map.get(&params.process_id).ok_or_else(|| {
invalid_request(format!("unknown process id {}", params.process_id))
})?;
let mut chunks = Vec::new();
let mut total_bytes = 0;
let mut next_seq = process.next_seq;
for retained in process.output.iter().filter(|chunk| chunk.seq > after_seq) {
let chunk_len = retained.chunk.len();
if !chunks.is_empty() && total_bytes + chunk_len > max_bytes {
break;
}
total_bytes += chunk_len;
chunks.push(ProcessOutputChunk {
seq: retained.seq,
stream: retained.stream,
chunk: retained.chunk.clone().into(),
});
next_seq = retained.seq + 1;
if total_bytes >= max_bytes {
break;
}
}
(
ReadResponse {
chunks,
next_seq,
exited: process.exit_code.is_some(),
exit_code: process.exit_code,
},
Arc::clone(&process.output_notify),
)
};
if !response.chunks.is_empty()
|| response.exited
|| tokio::time::Instant::now() >= deadline
{
return Ok(response);
}
let remaining = deadline.saturating_duration_since(tokio::time::Instant::now());
if remaining.is_zero() {
return Ok(response);
}
let _ = tokio::time::timeout(remaining, output_notify.notified()).await;
}
}
pub(crate) async fn write(
&self,
params: crate::protocol::WriteParams,
) -> Result<WriteResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
let writer_tx = {
let process_map = self.processes.lock().await;
let process = process_map.get(&params.process_id).ok_or_else(|| {
invalid_request(format!("unknown process id {}", params.process_id))
})?;
if !process.tty {
return Err(invalid_request(format!(
"stdin is closed for process {}",
params.process_id
)));
}
process.session.writer_sender()
};
writer_tx
.send(params.chunk.into_inner())
.await
.map_err(|_| internal_error("failed to write to process stdin".to_string()))?;
Ok(WriteResponse { accepted: true })
}
pub(crate) async fn terminate(
&self,
params: crate::protocol::TerminateParams,
) -> Result<TerminateResponse, codex_app_server_protocol::JSONRPCErrorError> {
self.require_initialized()?;
let running = {
let process_map = self.processes.lock().await;
if let Some(process) = process_map.get(&params.process_id) {
process.session.terminate();
true
} else {
false
}
};
Ok(TerminateResponse { running })
}
}
#[cfg(test)]
impl ExecServerHandler {
async fn handle_message(
&mut self,
message: crate::server::routing::ExecServerInboundMessage,
) -> Result<(), String> {
match message {
crate::server::routing::ExecServerInboundMessage::Request(request) => {
self.handle_request(request).await
}
crate::server::routing::ExecServerInboundMessage::Notification(
crate::server::routing::ExecServerClientNotification::Initialized,
) => self.initialized(),
}
}
async fn handle_request(
&mut self,
request: crate::server::routing::ExecServerRequest,
) -> Result<(), String> {
let outbound = match request {
crate::server::routing::ExecServerRequest::Initialize { request_id, .. } => {
Self::request_outbound(
request_id,
self.initialize()
.map(crate::server::routing::ExecServerResponseMessage::Initialize),
)
}
crate::server::routing::ExecServerRequest::Exec { request_id, params } => {
Self::request_outbound(
request_id,
self.exec(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::Exec),
)
}
crate::server::routing::ExecServerRequest::Read { request_id, params } => {
Self::request_outbound(
request_id,
self.read(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::Read),
)
}
crate::server::routing::ExecServerRequest::Write { request_id, params } => {
Self::request_outbound(
request_id,
self.write(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::Write),
)
}
crate::server::routing::ExecServerRequest::Terminate { request_id, params } => {
Self::request_outbound(
request_id,
self.terminate(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::Terminate),
)
}
crate::server::routing::ExecServerRequest::FsReadFile { request_id, params } => {
Self::request_outbound(
request_id,
self.fs_read_file(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::FsReadFile),
)
}
crate::server::routing::ExecServerRequest::FsWriteFile { request_id, params } => {
Self::request_outbound(
request_id,
self.fs_write_file(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::FsWriteFile),
)
}
crate::server::routing::ExecServerRequest::FsCreateDirectory { request_id, params } => {
Self::request_outbound(
request_id,
self.fs_create_directory(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::FsCreateDirectory),
)
}
crate::server::routing::ExecServerRequest::FsGetMetadata { request_id, params } => {
Self::request_outbound(
request_id,
self.fs_get_metadata(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::FsGetMetadata),
)
}
crate::server::routing::ExecServerRequest::FsReadDirectory { request_id, params } => {
Self::request_outbound(
request_id,
self.fs_read_directory(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::FsReadDirectory),
)
}
crate::server::routing::ExecServerRequest::FsRemove { request_id, params } => {
Self::request_outbound(
request_id,
self.fs_remove(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::FsRemove),
)
}
crate::server::routing::ExecServerRequest::FsCopy { request_id, params } => {
Self::request_outbound(
request_id,
self.fs_copy(params)
.await
.map(crate::server::routing::ExecServerResponseMessage::FsCopy),
)
}
};
self.outbound_tx
.send(outbound)
.await
.map_err(|_| "outbound channel closed".to_string())
}
fn request_outbound(
request_id: codex_app_server_protocol::RequestId,
result: Result<
crate::server::routing::ExecServerResponseMessage,
codex_app_server_protocol::JSONRPCErrorError,
>,
) -> crate::server::routing::ExecServerOutboundMessage {
match result {
Ok(response) => crate::server::routing::ExecServerOutboundMessage::Response {
request_id,
response,
},
Err(error) => {
crate::server::routing::ExecServerOutboundMessage::Error { request_id, error }
}
}
}
}
async fn stream_output(
process_id: String,
stream: ExecOutputStream,
mut receiver: tokio::sync::mpsc::Receiver<Vec<u8>>,
outbound_tx: mpsc::Sender<ExecServerOutboundMessage>,
processes: Arc<Mutex<HashMap<String, RunningProcess>>>,
output_notify: Arc<Notify>,
) {
while let Some(chunk) = receiver.recv().await {
let notification = {
let mut processes = processes.lock().await;
let Some(process) = processes.get_mut(&process_id) else {
break;
};
let seq = process.next_seq;
process.next_seq += 1;
process.retained_bytes += chunk.len();
process.output.push_back(RetainedOutputChunk {
seq,
stream,
chunk: chunk.clone(),
});
while process.retained_bytes > RETAINED_OUTPUT_BYTES_PER_PROCESS {
let Some(evicted) = process.output.pop_front() else {
break;
};
process.retained_bytes = process.retained_bytes.saturating_sub(evicted.chunk.len());
warn!(
"retained output cap exceeded for process {process_id}; dropping oldest output"
);
}
ExecOutputDeltaNotification {
process_id: process_id.clone(),
stream,
chunk: chunk.into(),
}
};
output_notify.notify_waiters();
if outbound_tx
.send(ExecServerOutboundMessage::Notification(
ExecServerServerNotification::OutputDelta(notification),
))
.await
.is_err()
{
break;
}
}
}
async fn watch_exit(
process_id: String,
exit_rx: tokio::sync::oneshot::Receiver<i32>,
outbound_tx: mpsc::Sender<ExecServerOutboundMessage>,
processes: Arc<Mutex<HashMap<String, RunningProcess>>>,
output_notify: Arc<Notify>,
) {
let exit_code = exit_rx.await.unwrap_or(-1);
{
let mut processes = processes.lock().await;
if let Some(process) = processes.get_mut(&process_id) {
process.exit_code = Some(exit_code);
}
}
output_notify.notify_waiters();
let _ = outbound_tx
.send(ExecServerOutboundMessage::Notification(
ExecServerServerNotification::Exited(ExecExitedNotification {
process_id: process_id.clone(),
exit_code,
}),
))
.await;
tokio::spawn(async move {
tokio::time::sleep(EXITED_PROCESS_RETENTION).await;
let mut processes = processes.lock().await;
processes.remove(&process_id);
});
}
#[cfg(test)]
mod tests;


@@ -0,0 +1,734 @@
use std::collections::HashMap;
use std::collections::VecDeque;
use std::sync::Arc;
use std::time::Duration;
use pretty_assertions::assert_eq;
use tokio::sync::Notify;
use tokio::time::timeout;
use super::ExecServerHandler;
use super::RetainedOutputChunk;
use super::RunningProcess;
use crate::protocol::ExecOutputStream;
use crate::protocol::ExecSandboxConfig;
use crate::protocol::ExecSandboxMode;
use crate::protocol::InitializeParams;
use crate::protocol::InitializeResponse;
use crate::protocol::PROTOCOL_VERSION;
use crate::protocol::ReadParams;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteParams;
use crate::server::routing::ExecServerClientNotification;
use crate::server::routing::ExecServerInboundMessage;
use crate::server::routing::ExecServerOutboundMessage;
use crate::server::routing::ExecServerRequest;
use crate::server::routing::ExecServerResponseMessage;
use codex_app_server_protocol::RequestId;
async fn recv_outbound(
outgoing_rx: &mut tokio::sync::mpsc::Receiver<ExecServerOutboundMessage>,
) -> ExecServerOutboundMessage {
let recv_result = timeout(Duration::from_secs(1), outgoing_rx.recv()).await;
let maybe_message = match recv_result {
Ok(maybe_message) => maybe_message,
Err(err) => panic!("timed out waiting for handler output: {err}"),
};
match maybe_message {
Some(message) => message,
None => panic!("handler output channel closed unexpectedly"),
}
}
#[tokio::test]
async fn initialize_response_reports_protocol_version() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(1);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
assert_eq!(
recv_outbound(&mut outgoing_rx).await,
ExecServerOutboundMessage::Response {
request_id: RequestId::Integer(1),
response: ExecServerResponseMessage::Initialize(InitializeResponse {
protocol_version: PROTOCOL_VERSION.to_string(),
}),
}
);
}
#[tokio::test]
async fn exec_methods_require_initialize() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(1);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(ExecServerRequest::Exec {
request_id: RequestId::Integer(7),
params: crate::protocol::ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().expect("cwd"),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
},
}))
.await
{
panic!("request handling should not fail the handler: {err}");
}
let ExecServerOutboundMessage::Error { request_id, error } =
recv_outbound(&mut outgoing_rx).await
else {
panic!("expected invalid-request error");
};
assert_eq!(request_id, RequestId::Integer(7));
assert_eq!(error.code, -32600);
assert_eq!(
error.message,
"client must call initialize before using exec methods"
);
}
#[tokio::test]
async fn exec_methods_require_initialized_notification_after_initialize() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(2);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(ExecServerRequest::Exec {
request_id: RequestId::Integer(2),
params: crate::protocol::ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().expect("cwd"),
env: HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
},
}))
.await
{
panic!("request handling should not fail the handler: {err}");
}
let ExecServerOutboundMessage::Error { request_id, error } =
recv_outbound(&mut outgoing_rx).await
else {
panic!("expected invalid-request error");
};
assert_eq!(request_id, RequestId::Integer(2));
assert_eq!(error.code, -32600);
assert_eq!(
error.message,
"client must send initialized before using exec methods"
);
}
#[tokio::test]
async fn initialized_before_initialize_is_a_protocol_error() {
let (outgoing_tx, _outgoing_rx) = tokio::sync::mpsc::channel(1);
let mut handler = ExecServerHandler::new(outgoing_tx);
let result = handler
.handle_message(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
.await;
match result {
Err(err) => {
assert_eq!(
err,
"received `initialized` notification before `initialize`"
);
}
Ok(()) => panic!("expected protocol error for early initialized notification"),
}
}
#[tokio::test]
async fn initialize_may_only_be_sent_once_per_connection() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(2);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(2),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("duplicate initialize should not fail the handler: {err}");
}
let ExecServerOutboundMessage::Error { request_id, error } =
recv_outbound(&mut outgoing_rx).await
else {
panic!("expected invalid-request error");
};
assert_eq!(request_id, RequestId::Integer(2));
assert_eq!(error.code, -32600);
assert_eq!(
error.message,
"initialize may only be sent once per connection"
);
}
#[tokio::test]
async fn host_default_sandbox_requests_are_rejected_until_supported() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(3);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
.await
{
panic!("initialized should succeed: {err}");
}
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(ExecServerRequest::Exec {
request_id: RequestId::Integer(2),
params: crate::protocol::ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().expect("cwd"),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: Some(ExecSandboxConfig {
mode: ExecSandboxMode::HostDefault,
}),
},
}))
.await
{
panic!("request handling should not fail the handler: {err}");
}
let ExecServerOutboundMessage::Error { request_id, error } =
recv_outbound(&mut outgoing_rx).await
else {
panic!("expected unsupported sandbox error");
};
assert_eq!(request_id, RequestId::Integer(2));
assert_eq!(error.code, -32600);
assert_eq!(
error.message,
"sandbox mode `hostDefault` is not supported by exec-server yet"
);
}
#[tokio::test]
async fn exec_echoes_client_process_ids() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(4);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
.await
{
panic!("initialized should succeed: {err}");
}
let params = crate::protocol::ExecParams {
process_id: "proc-1".to_string(),
argv: vec![
"bash".to_string(),
"-lc".to_string(),
"sleep 30".to_string(),
],
cwd: std::env::current_dir().expect("cwd"),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
};
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(ExecServerRequest::Exec {
request_id: RequestId::Integer(2),
params: params.clone(),
}))
.await
{
panic!("first exec should succeed: {err}");
}
let ExecServerOutboundMessage::Response {
request_id,
response: ExecServerResponseMessage::Exec(first_exec),
} = recv_outbound(&mut outgoing_rx).await
else {
panic!("expected first exec response");
};
assert_eq!(request_id, RequestId::Integer(2));
assert_eq!(first_exec.process_id, "proc-1");
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(ExecServerRequest::Exec {
request_id: RequestId::Integer(3),
params: crate::protocol::ExecParams {
process_id: "proc-2".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
..params
},
}))
.await
{
panic!("second exec should succeed: {err}");
}
let ExecServerOutboundMessage::Response {
request_id,
response: ExecServerResponseMessage::Exec(second_exec),
} = recv_outbound(&mut outgoing_rx).await
else {
panic!("expected second exec response");
};
assert_eq!(request_id, RequestId::Integer(3));
assert_eq!(second_exec.process_id, "proc-2");
handler.shutdown().await;
}
#[tokio::test]
async fn writes_to_pipe_backed_processes_are_rejected() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(4);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
.await
{
panic!("initialized should succeed: {err}");
}
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(ExecServerRequest::Exec {
request_id: RequestId::Integer(2),
params: crate::protocol::ExecParams {
process_id: "proc-1".to_string(),
argv: vec![
"bash".to_string(),
"-lc".to_string(),
"sleep 30".to_string(),
],
cwd: std::env::current_dir().expect("cwd"),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
},
}))
.await
{
panic!("exec should succeed: {err}");
}
let ExecServerOutboundMessage::Response {
response: ExecServerResponseMessage::Exec(exec_response),
..
} = recv_outbound(&mut outgoing_rx).await
else {
panic!("expected exec response");
};
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Write {
request_id: RequestId::Integer(3),
params: WriteParams {
process_id: exec_response.process_id,
chunk: b"hello\n".to_vec().into(),
},
},
))
.await
{
panic!("write should not fail the handler: {err}");
}
let ExecServerOutboundMessage::Error { request_id, error } =
recv_outbound(&mut outgoing_rx).await
else {
panic!("expected stdin-closed error");
};
assert_eq!(request_id, RequestId::Integer(3));
assert_eq!(error.code, -32600);
assert_eq!(error.message, "stdin is closed for process proc-1");
handler.shutdown().await;
}
#[tokio::test]
async fn writes_to_unknown_processes_are_rejected() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(2);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
.await
{
panic!("initialized should succeed: {err}");
}
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Write {
request_id: RequestId::Integer(2),
params: WriteParams {
process_id: "missing".to_string(),
chunk: b"hello\n".to_vec().into(),
},
},
))
.await
{
panic!("write should not fail the handler: {err}");
}
let ExecServerOutboundMessage::Error { request_id, error } =
recv_outbound(&mut outgoing_rx).await
else {
panic!("expected unknown-process error");
};
assert_eq!(request_id, RequestId::Integer(2));
assert_eq!(error.code, -32600);
assert_eq!(error.message, "unknown process id missing");
}
#[tokio::test]
async fn terminate_unknown_processes_report_running_false() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(2);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
.await
{
panic!("initialized should succeed: {err}");
}
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Terminate {
request_id: RequestId::Integer(2),
params: crate::protocol::TerminateParams {
process_id: "missing".to_string(),
},
},
))
.await
{
panic!("terminate should not fail the handler: {err}");
}
assert_eq!(
recv_outbound(&mut outgoing_rx).await,
ExecServerOutboundMessage::Response {
request_id: RequestId::Integer(2),
response: ExecServerResponseMessage::Terminate(TerminateResponse { running: false }),
}
);
}
#[tokio::test]
async fn terminate_keeps_process_ids_reserved() {
let (outgoing_tx, mut outgoing_rx) = tokio::sync::mpsc::channel(2);
let mut handler = ExecServerHandler::new(outgoing_tx);
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test".to_string(),
},
},
))
.await
{
panic!("initialize should succeed: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
.await
{
panic!("initialized should succeed: {err}");
}
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(ExecServerRequest::Exec {
request_id: RequestId::Integer(2),
params: crate::protocol::ExecParams {
process_id: "proc-1".to_string(),
argv: vec![
"bash".to_string(),
"-lc".to_string(),
"sleep 30".to_string(),
],
cwd: std::env::current_dir().expect("cwd"),
env: HashMap::new(),
tty: false,
arg0: None,
sandbox: None,
},
}))
.await
{
panic!("exec should not fail the handler: {err}");
}
let _ = recv_outbound(&mut outgoing_rx).await;
if let Err(err) = handler
.handle_message(ExecServerInboundMessage::Request(
ExecServerRequest::Terminate {
request_id: RequestId::Integer(3),
params: crate::protocol::TerminateParams {
process_id: "proc-1".to_string(),
},
},
))
.await
{
panic!("terminate should not fail the handler: {err}");
}
assert_eq!(
recv_outbound(&mut outgoing_rx).await,
ExecServerOutboundMessage::Response {
request_id: RequestId::Integer(3),
response: ExecServerResponseMessage::Terminate(TerminateResponse { running: true }),
}
);
assert!(
handler.processes.lock().await.contains_key("proc-1"),
"terminated ids should stay reserved until exit cleanup removes them"
);
let deadline = tokio::time::Instant::now() + Duration::from_secs(1);
loop {
if !handler.processes.lock().await.contains_key("proc-1") {
break;
}
assert!(
tokio::time::Instant::now() < deadline,
"terminated ids should be removed after the exit retention window"
);
tokio::time::sleep(Duration::from_millis(25)).await;
}
handler.shutdown().await;
}
#[tokio::test]
async fn read_paginates_retained_output_without_skipping_omitted_chunks() {
let (outgoing_tx, _outgoing_rx) = tokio::sync::mpsc::channel(1);
let mut handler = ExecServerHandler::new(outgoing_tx);
let _ = handler.initialize().expect("initialize should succeed");
handler.initialized().expect("initialized should succeed");
let spawned = codex_utils_pty::spawn_pipe_process_no_stdin(
"bash",
&["-lc".to_string(), "true".to_string()],
std::env::current_dir().expect("cwd").as_path(),
&HashMap::new(),
&None,
)
.await
.expect("spawn test process");
{
let mut process_map = handler.processes.lock().await;
process_map.insert(
"proc-1".to_string(),
RunningProcess {
session: spawned.session,
tty: false,
output: VecDeque::from([
RetainedOutputChunk {
seq: 1,
stream: ExecOutputStream::Stdout,
chunk: b"abc".to_vec(),
},
RetainedOutputChunk {
seq: 2,
stream: ExecOutputStream::Stderr,
chunk: b"def".to_vec(),
},
]),
retained_bytes: 6,
next_seq: 3,
exit_code: None,
output_notify: Arc::new(Notify::new()),
},
);
}
let first = handler
.read(ReadParams {
process_id: "proc-1".to_string(),
after_seq: Some(0),
max_bytes: Some(3),
wait_ms: Some(0),
})
.await
.expect("first read should succeed");
assert_eq!(first.chunks.len(), 1);
assert_eq!(first.chunks[0].seq, 1);
assert_eq!(first.chunks[0].stream, ExecOutputStream::Stdout);
assert_eq!(first.chunks[0].chunk.clone().into_inner(), b"abc".to_vec());
assert_eq!(first.next_seq, 2);
let second = handler
.read(ReadParams {
process_id: "proc-1".to_string(),
after_seq: Some(first.next_seq - 1),
max_bytes: Some(3),
wait_ms: Some(0),
})
.await
.expect("second read should succeed");
assert_eq!(second.chunks.len(), 1);
assert_eq!(second.chunks[0].seq, 2);
assert_eq!(second.chunks[0].stream, ExecOutputStream::Stderr);
assert_eq!(second.chunks[0].chunk.clone().into_inner(), b"def".to_vec());
assert_eq!(second.next_seq, 3);
handler.shutdown().await;
}


@@ -0,0 +1,188 @@
use tokio::sync::mpsc;
use tracing::debug;
use tracing::warn;
use crate::connection::CHANNEL_CAPACITY;
use crate::connection::JsonRpcConnection;
use crate::connection::JsonRpcConnectionEvent;
use crate::server::handler::ExecServerHandler;
use crate::server::routing::ExecServerClientNotification;
use crate::server::routing::ExecServerInboundMessage;
use crate::server::routing::ExecServerOutboundMessage;
use crate::server::routing::ExecServerRequest;
use crate::server::routing::ExecServerResponseMessage;
use crate::server::routing::RoutedExecServerMessage;
use crate::server::routing::encode_outbound_message;
use crate::server::routing::route_jsonrpc_message;
pub(crate) async fn run_connection(connection: JsonRpcConnection) {
let (json_outgoing_tx, mut incoming_rx, _connection_tasks) = connection.into_parts();
let (outgoing_tx, mut outgoing_rx) =
mpsc::channel::<ExecServerOutboundMessage>(CHANNEL_CAPACITY);
let mut handler = ExecServerHandler::new(outgoing_tx.clone());
let outbound_task = tokio::spawn(async move {
while let Some(message) = outgoing_rx.recv().await {
let json_message = match encode_outbound_message(message) {
Ok(json_message) => json_message,
Err(err) => {
warn!("failed to serialize exec-server outbound message: {err}");
break;
}
};
if json_outgoing_tx.send(json_message).await.is_err() {
break;
}
}
});
while let Some(event) = incoming_rx.recv().await {
match event {
JsonRpcConnectionEvent::Message(message) => match route_jsonrpc_message(message) {
Ok(RoutedExecServerMessage::Inbound(message)) => {
if let Err(err) = dispatch_to_handler(&mut handler, message, &outgoing_tx).await
{
warn!("closing exec-server connection after protocol error: {err}");
break;
}
}
Ok(RoutedExecServerMessage::ImmediateOutbound(message)) => {
if outgoing_tx.send(message).await.is_err() {
break;
}
}
Err(err) => {
warn!("closing exec-server connection after protocol error: {err}");
break;
}
},
JsonRpcConnectionEvent::Disconnected { reason } => {
if let Some(reason) = reason {
debug!("exec-server connection disconnected: {reason}");
}
break;
}
}
}
handler.shutdown().await;
drop(handler);
drop(outgoing_tx);
let _ = outbound_task.await;
}
async fn dispatch_to_handler(
handler: &mut ExecServerHandler,
message: ExecServerInboundMessage,
outgoing_tx: &mpsc::Sender<ExecServerOutboundMessage>,
) -> Result<(), String> {
match message {
ExecServerInboundMessage::Request(request) => {
let outbound = match request {
ExecServerRequest::Initialize { request_id, .. } => request_outbound(
request_id,
handler
.initialize()
.map(ExecServerResponseMessage::Initialize),
),
ExecServerRequest::Exec { request_id, params } => request_outbound(
request_id,
handler
.exec(params)
.await
.map(ExecServerResponseMessage::Exec),
),
ExecServerRequest::Read { request_id, params } => request_outbound(
request_id,
handler
.read(params)
.await
.map(ExecServerResponseMessage::Read),
),
ExecServerRequest::Write { request_id, params } => request_outbound(
request_id,
handler
.write(params)
.await
.map(ExecServerResponseMessage::Write),
),
ExecServerRequest::Terminate { request_id, params } => request_outbound(
request_id,
handler
.terminate(params)
.await
.map(ExecServerResponseMessage::Terminate),
),
ExecServerRequest::FsReadFile { request_id, params } => request_outbound(
request_id,
handler
.fs_read_file(params)
.await
.map(ExecServerResponseMessage::FsReadFile),
),
ExecServerRequest::FsWriteFile { request_id, params } => request_outbound(
request_id,
handler
.fs_write_file(params)
.await
.map(ExecServerResponseMessage::FsWriteFile),
),
ExecServerRequest::FsCreateDirectory { request_id, params } => request_outbound(
request_id,
handler
.fs_create_directory(params)
.await
.map(ExecServerResponseMessage::FsCreateDirectory),
),
ExecServerRequest::FsGetMetadata { request_id, params } => request_outbound(
request_id,
handler
.fs_get_metadata(params)
.await
.map(ExecServerResponseMessage::FsGetMetadata),
),
ExecServerRequest::FsReadDirectory { request_id, params } => request_outbound(
request_id,
handler
.fs_read_directory(params)
.await
.map(ExecServerResponseMessage::FsReadDirectory),
),
ExecServerRequest::FsRemove { request_id, params } => request_outbound(
request_id,
handler
.fs_remove(params)
.await
.map(ExecServerResponseMessage::FsRemove),
),
ExecServerRequest::FsCopy { request_id, params } => request_outbound(
request_id,
handler
.fs_copy(params)
.await
.map(ExecServerResponseMessage::FsCopy),
),
};
outgoing_tx
.send(outbound)
.await
.map_err(|_| "outbound channel closed".to_string())
}
ExecServerInboundMessage::Notification(ExecServerClientNotification::Initialized) => {
handler.initialized()
}
}
}
fn request_outbound(
request_id: codex_app_server_protocol::RequestId,
result: Result<ExecServerResponseMessage, codex_app_server_protocol::JSONRPCErrorError>,
) -> ExecServerOutboundMessage {
match result {
Ok(response) => ExecServerOutboundMessage::Response {
request_id,
response,
},
Err(error) => ExecServerOutboundMessage::Error { request_id, error },
}
}


@@ -0,0 +1,585 @@
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use serde::de::DeserializeOwned;
use crate::protocol::EXEC_EXITED_METHOD;
use crate::protocol::EXEC_METHOD;
use crate::protocol::EXEC_OUTPUT_DELTA_METHOD;
use crate::protocol::EXEC_READ_METHOD;
use crate::protocol::EXEC_TERMINATE_METHOD;
use crate::protocol::EXEC_WRITE_METHOD;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::FS_COPY_METHOD;
use crate::protocol::FS_CREATE_DIRECTORY_METHOD;
use crate::protocol::FS_GET_METADATA_METHOD;
use crate::protocol::FS_READ_DIRECTORY_METHOD;
use crate::protocol::FS_READ_FILE_METHOD;
use crate::protocol::FS_REMOVE_METHOD;
use crate::protocol::FS_WRITE_FILE_METHOD;
use crate::protocol::INITIALIZE_METHOD;
use crate::protocol::INITIALIZED_METHOD;
use crate::protocol::InitializeParams;
use crate::protocol::InitializeResponse;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateParams;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteParams;
use crate::protocol::WriteResponse;
use codex_app_server_protocol::FsCopyParams;
use codex_app_server_protocol::FsCopyResponse;
use codex_app_server_protocol::FsCreateDirectoryParams;
use codex_app_server_protocol::FsCreateDirectoryResponse;
use codex_app_server_protocol::FsGetMetadataParams;
use codex_app_server_protocol::FsGetMetadataResponse;
use codex_app_server_protocol::FsReadDirectoryParams;
use codex_app_server_protocol::FsReadDirectoryResponse;
use codex_app_server_protocol::FsReadFileParams;
use codex_app_server_protocol::FsReadFileResponse;
use codex_app_server_protocol::FsRemoveParams;
use codex_app_server_protocol::FsRemoveResponse;
use codex_app_server_protocol::FsWriteFileParams;
use codex_app_server_protocol::FsWriteFileResponse;
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) enum ExecServerInboundMessage {
Request(ExecServerRequest),
Notification(ExecServerClientNotification),
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) enum ExecServerRequest {
Initialize {
request_id: RequestId,
params: InitializeParams,
},
Exec {
request_id: RequestId,
params: ExecParams,
},
Read {
request_id: RequestId,
params: ReadParams,
},
Write {
request_id: RequestId,
params: WriteParams,
},
Terminate {
request_id: RequestId,
params: TerminateParams,
},
FsReadFile {
request_id: RequestId,
params: FsReadFileParams,
},
FsWriteFile {
request_id: RequestId,
params: FsWriteFileParams,
},
FsCreateDirectory {
request_id: RequestId,
params: FsCreateDirectoryParams,
},
FsGetMetadata {
request_id: RequestId,
params: FsGetMetadataParams,
},
FsReadDirectory {
request_id: RequestId,
params: FsReadDirectoryParams,
},
FsRemove {
request_id: RequestId,
params: FsRemoveParams,
},
FsCopy {
request_id: RequestId,
params: FsCopyParams,
},
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) enum ExecServerClientNotification {
Initialized,
}
#[derive(Debug, Clone, PartialEq)]
pub(crate) enum ExecServerOutboundMessage {
Response {
request_id: RequestId,
response: ExecServerResponseMessage,
},
Error {
request_id: RequestId,
error: JSONRPCErrorError,
},
Notification(ExecServerServerNotification),
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) enum ExecServerResponseMessage {
Initialize(InitializeResponse),
Exec(ExecResponse),
Read(ReadResponse),
Write(WriteResponse),
Terminate(TerminateResponse),
FsReadFile(FsReadFileResponse),
FsWriteFile(FsWriteFileResponse),
FsCreateDirectory(FsCreateDirectoryResponse),
FsGetMetadata(FsGetMetadataResponse),
FsReadDirectory(FsReadDirectoryResponse),
FsRemove(FsRemoveResponse),
FsCopy(FsCopyResponse),
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) enum ExecServerServerNotification {
OutputDelta(ExecOutputDeltaNotification),
Exited(ExecExitedNotification),
}
#[derive(Debug, Clone, PartialEq)]
pub(crate) enum RoutedExecServerMessage {
Inbound(ExecServerInboundMessage),
ImmediateOutbound(ExecServerOutboundMessage),
}
pub(crate) fn route_jsonrpc_message(
message: JSONRPCMessage,
) -> Result<RoutedExecServerMessage, String> {
match message {
JSONRPCMessage::Request(request) => route_request(request),
JSONRPCMessage::Notification(notification) => route_notification(notification),
JSONRPCMessage::Response(response) => Err(format!(
"unexpected client response for request id {:?}",
response.id
)),
JSONRPCMessage::Error(error) => Err(format!(
"unexpected client error for request id {:?}",
error.id
)),
}
}
pub(crate) fn encode_outbound_message(
message: ExecServerOutboundMessage,
) -> Result<JSONRPCMessage, serde_json::Error> {
match message {
ExecServerOutboundMessage::Response {
request_id,
response,
} => Ok(JSONRPCMessage::Response(JSONRPCResponse {
id: request_id,
result: serialize_response(response)?,
})),
ExecServerOutboundMessage::Error { request_id, error } => {
Ok(JSONRPCMessage::Error(JSONRPCError {
id: request_id,
error,
}))
}
ExecServerOutboundMessage::Notification(notification) => Ok(JSONRPCMessage::Notification(
serialize_notification(notification)?,
)),
}
}
pub(crate) fn invalid_request(message: String) -> JSONRPCErrorError {
JSONRPCErrorError {
code: -32600,
data: None,
message,
}
}
pub(crate) fn invalid_params(message: String) -> JSONRPCErrorError {
JSONRPCErrorError {
code: -32602,
data: None,
message,
}
}
pub(crate) fn internal_error(message: String) -> JSONRPCErrorError {
JSONRPCErrorError {
code: -32603,
data: None,
message,
}
}
fn route_request(request: JSONRPCRequest) -> Result<RoutedExecServerMessage, String> {
match request.method.as_str() {
INITIALIZE_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::Initialize { request_id, params }
})),
EXEC_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::Exec { request_id, params }
})),
EXEC_READ_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::Read { request_id, params }
})),
EXEC_WRITE_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::Write { request_id, params }
})),
EXEC_TERMINATE_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::Terminate { request_id, params }
})),
FS_READ_FILE_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::FsReadFile { request_id, params }
})),
FS_WRITE_FILE_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::FsWriteFile { request_id, params }
})),
FS_CREATE_DIRECTORY_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::FsCreateDirectory { request_id, params }
})),
FS_GET_METADATA_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::FsGetMetadata { request_id, params }
})),
FS_READ_DIRECTORY_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::FsReadDirectory { request_id, params }
})),
FS_REMOVE_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::FsRemove { request_id, params }
})),
FS_COPY_METHOD => Ok(parse_request_params(request, |request_id, params| {
ExecServerRequest::FsCopy { request_id, params }
})),
other => Ok(RoutedExecServerMessage::ImmediateOutbound(
ExecServerOutboundMessage::Error {
request_id: request.id,
error: invalid_request(format!("unknown method: {other}")),
},
)),
}
}
fn route_notification(
notification: JSONRPCNotification,
) -> Result<RoutedExecServerMessage, String> {
match notification.method.as_str() {
INITIALIZED_METHOD => Ok(RoutedExecServerMessage::Inbound(
ExecServerInboundMessage::Notification(ExecServerClientNotification::Initialized),
)),
other => Err(format!("unexpected notification method: {other}")),
}
}
fn parse_request_params<P, F>(request: JSONRPCRequest, build: F) -> RoutedExecServerMessage
where
P: DeserializeOwned,
F: FnOnce(RequestId, P) -> ExecServerRequest,
{
let request_id = request.id;
match serde_json::from_value::<P>(request.params.unwrap_or(serde_json::Value::Null)) {
Ok(params) => RoutedExecServerMessage::Inbound(ExecServerInboundMessage::Request(build(
request_id, params,
))),
Err(err) => RoutedExecServerMessage::ImmediateOutbound(ExecServerOutboundMessage::Error {
request_id,
error: invalid_params(err.to_string()),
}),
}
}
fn serialize_response(
response: ExecServerResponseMessage,
) -> Result<serde_json::Value, serde_json::Error> {
match response {
ExecServerResponseMessage::Initialize(response) => serde_json::to_value(response),
ExecServerResponseMessage::Exec(response) => serde_json::to_value(response),
ExecServerResponseMessage::Read(response) => serde_json::to_value(response),
ExecServerResponseMessage::Write(response) => serde_json::to_value(response),
ExecServerResponseMessage::Terminate(response) => serde_json::to_value(response),
ExecServerResponseMessage::FsReadFile(response) => serde_json::to_value(response),
ExecServerResponseMessage::FsWriteFile(response) => serde_json::to_value(response),
ExecServerResponseMessage::FsCreateDirectory(response) => serde_json::to_value(response),
ExecServerResponseMessage::FsGetMetadata(response) => serde_json::to_value(response),
ExecServerResponseMessage::FsReadDirectory(response) => serde_json::to_value(response),
ExecServerResponseMessage::FsRemove(response) => serde_json::to_value(response),
ExecServerResponseMessage::FsCopy(response) => serde_json::to_value(response),
}
}
fn serialize_notification(
notification: ExecServerServerNotification,
) -> Result<JSONRPCNotification, serde_json::Error> {
match notification {
ExecServerServerNotification::OutputDelta(params) => Ok(JSONRPCNotification {
method: EXEC_OUTPUT_DELTA_METHOD.to_string(),
params: Some(serde_json::to_value(params)?),
}),
ExecServerServerNotification::Exited(params) => Ok(JSONRPCNotification {
method: EXEC_EXITED_METHOD.to_string(),
params: Some(serde_json::to_value(params)?),
}),
}
}
#[cfg(test)]
mod tests {
use pretty_assertions::assert_eq;
use serde_json::json;
use super::ExecServerClientNotification;
use super::ExecServerInboundMessage;
use super::ExecServerOutboundMessage;
use super::ExecServerRequest;
use super::ExecServerResponseMessage;
use super::ExecServerServerNotification;
use super::RoutedExecServerMessage;
use super::encode_outbound_message;
use super::route_jsonrpc_message;
use crate::protocol::EXEC_EXITED_METHOD;
use crate::protocol::EXEC_METHOD;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::ExecSandboxConfig;
use crate::protocol::ExecSandboxMode;
use crate::protocol::INITIALIZE_METHOD;
use crate::protocol::INITIALIZED_METHOD;
use crate::protocol::InitializeParams;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
#[test]
fn routes_initialize_requests_to_typed_variants() {
let routed = route_jsonrpc_message(JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(1),
method: INITIALIZE_METHOD.to_string(),
params: Some(json!({ "clientName": "test-client" })),
trace: None,
}))
.expect("initialize request should route");
assert_eq!(
routed,
RoutedExecServerMessage::Inbound(ExecServerInboundMessage::Request(
ExecServerRequest::Initialize {
request_id: RequestId::Integer(1),
params: InitializeParams {
client_name: "test-client".to_string(),
},
},
))
);
}
#[test]
fn malformed_exec_params_return_immediate_error_outbound() {
let routed = route_jsonrpc_message(JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(2),
method: EXEC_METHOD.to_string(),
params: Some(json!({ "processId": "proc-1" })),
trace: None,
}))
.expect("exec request should route");
let RoutedExecServerMessage::ImmediateOutbound(ExecServerOutboundMessage::Error {
request_id,
error,
}) = routed
else {
panic!("expected invalid-params error outbound");
};
assert_eq!(request_id, RequestId::Integer(2));
assert_eq!(error.code, -32602);
}
#[test]
fn routes_initialized_notifications_to_typed_variants() {
let routed = route_jsonrpc_message(JSONRPCMessage::Notification(JSONRPCNotification {
method: INITIALIZED_METHOD.to_string(),
params: Some(json!({})),
}))
.expect("initialized notification should route");
assert_eq!(
routed,
RoutedExecServerMessage::Inbound(ExecServerInboundMessage::Notification(
ExecServerClientNotification::Initialized,
))
);
}
#[test]
fn serializes_typed_notifications_back_to_jsonrpc() {
let message = encode_outbound_message(ExecServerOutboundMessage::Notification(
ExecServerServerNotification::Exited(ExecExitedNotification {
process_id: "proc-1".to_string(),
exit_code: 0,
}),
))
.expect("notification should serialize");
assert_eq!(
message,
JSONRPCMessage::Notification(JSONRPCNotification {
method: EXEC_EXITED_METHOD.to_string(),
params: Some(json!({
"processId": "proc-1",
"exitCode": 0,
})),
})
);
}
#[test]
fn serializes_typed_responses_back_to_jsonrpc() {
let message = encode_outbound_message(ExecServerOutboundMessage::Response {
request_id: RequestId::Integer(3),
response: ExecServerResponseMessage::Exec(ExecResponse {
process_id: "proc-1".to_string(),
}),
})
.expect("response should serialize");
assert_eq!(
message,
JSONRPCMessage::Response(JSONRPCResponse {
id: RequestId::Integer(3),
result: json!({
"processId": "proc-1",
}),
})
);
}
#[test]
fn routes_exec_requests_with_typed_params() {
let cwd = std::env::current_dir().expect("cwd");
let routed = route_jsonrpc_message(JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(4),
method: EXEC_METHOD.to_string(),
params: Some(json!({
"processId": "proc-1",
"argv": ["bash", "-lc", "true"],
"cwd": cwd,
"env": {},
"tty": true,
"arg0": null,
})),
trace: None,
}))
.expect("exec request should route");
let RoutedExecServerMessage::Inbound(ExecServerInboundMessage::Request(
ExecServerRequest::Exec { request_id, params },
)) = routed
else {
panic!("expected typed exec request");
};
assert_eq!(request_id, RequestId::Integer(4));
assert_eq!(
params,
ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().expect("cwd"),
env: std::collections::HashMap::new(),
tty: true,
arg0: None,
sandbox: None,
}
);
}
#[test]
fn routes_exec_requests_with_optional_sandbox_config() {
let cwd = std::env::current_dir().expect("cwd");
let routed = route_jsonrpc_message(JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(4),
method: EXEC_METHOD.to_string(),
params: Some(json!({
"processId": "proc-1",
"argv": ["bash", "-lc", "true"],
"cwd": cwd,
"env": {},
"tty": true,
"arg0": null,
"sandbox": {
"mode": "none",
},
})),
trace: None,
}))
.expect("exec request with sandbox should route");
let RoutedExecServerMessage::Inbound(ExecServerInboundMessage::Request(
ExecServerRequest::Exec { request_id, params },
)) = routed
else {
panic!("expected typed exec request");
};
assert_eq!(request_id, RequestId::Integer(4));
assert_eq!(
params,
ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["bash".to_string(), "-lc".to_string(), "true".to_string()],
cwd: std::env::current_dir().expect("cwd"),
env: std::collections::HashMap::new(),
tty: true,
arg0: None,
sandbox: Some(ExecSandboxConfig {
mode: ExecSandboxMode::None,
}),
}
);
}
#[test]
fn unknown_request_methods_return_immediate_invalid_request_errors() {
let routed = route_jsonrpc_message(JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(5),
method: "process/unknown".to_string(),
params: Some(json!({})),
trace: None,
}))
.expect("unknown request should still route");
assert_eq!(
routed,
RoutedExecServerMessage::ImmediateOutbound(ExecServerOutboundMessage::Error {
request_id: RequestId::Integer(5),
error: super::invalid_request("unknown method: process/unknown".to_string()),
})
);
}
#[test]
fn unexpected_client_notifications_are_rejected() {
let err = route_jsonrpc_message(JSONRPCMessage::Notification(JSONRPCNotification {
method: "process/output".to_string(),
params: Some(json!({})),
}))
.expect_err("unexpected client notification should fail");
assert_eq!(err, "unexpected notification method: process/output");
}
#[test]
fn unexpected_client_responses_are_rejected() {
let err = route_jsonrpc_message(JSONRPCMessage::Response(JSONRPCResponse {
id: RequestId::Integer(6),
result: json!({}),
}))
.expect_err("unexpected client response should fail");
assert_eq!(err, "unexpected client response for request id Integer(6)");
}
}

View File

@@ -0,0 +1,166 @@
use std::net::SocketAddr;
use std::str::FromStr;
use tokio::net::TcpListener;
use tokio_tungstenite::accept_async;
use tracing::warn;
use crate::connection::JsonRpcConnection;
use crate::server::processor::run_connection;
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum ExecServerTransport {
Stdio,
WebSocket { bind_address: SocketAddr },
}
#[derive(Debug, Clone, Eq, PartialEq)]
pub enum ExecServerTransportParseError {
UnsupportedListenUrl(String),
InvalidWebSocketListenUrl(String),
}
impl std::fmt::Display for ExecServerTransportParseError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
ExecServerTransportParseError::UnsupportedListenUrl(listen_url) => write!(
f,
"unsupported --listen URL `{listen_url}`; expected `stdio://` or `ws://IP:PORT`"
),
ExecServerTransportParseError::InvalidWebSocketListenUrl(listen_url) => write!(
f,
"invalid websocket --listen URL `{listen_url}`; expected `ws://IP:PORT`"
),
}
}
}
impl std::error::Error for ExecServerTransportParseError {}
impl ExecServerTransport {
pub const DEFAULT_LISTEN_URL: &str = "stdio://";
pub fn from_listen_url(listen_url: &str) -> Result<Self, ExecServerTransportParseError> {
if listen_url == Self::DEFAULT_LISTEN_URL {
return Ok(Self::Stdio);
}
if let Some(socket_addr) = listen_url.strip_prefix("ws://") {
let bind_address = socket_addr.parse::<SocketAddr>().map_err(|_| {
ExecServerTransportParseError::InvalidWebSocketListenUrl(listen_url.to_string())
})?;
return Ok(Self::WebSocket { bind_address });
}
Err(ExecServerTransportParseError::UnsupportedListenUrl(
listen_url.to_string(),
))
}
}
impl FromStr for ExecServerTransport {
type Err = ExecServerTransportParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::from_listen_url(s)
}
}
pub(crate) async fn run_transport(
transport: ExecServerTransport,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
match transport {
ExecServerTransport::Stdio => {
run_connection(JsonRpcConnection::from_stdio(
tokio::io::stdin(),
tokio::io::stdout(),
"exec-server stdio".to_string(),
))
.await;
Ok(())
}
ExecServerTransport::WebSocket { bind_address } => {
run_websocket_listener(bind_address).await
}
}
}
async fn run_websocket_listener(
bind_address: SocketAddr,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let listener = TcpListener::bind(bind_address).await?;
let local_addr = listener.local_addr()?;
print_websocket_startup_banner(local_addr);
loop {
let (stream, peer_addr) = listener.accept().await?;
tokio::spawn(async move {
match accept_async(stream).await {
Ok(websocket) => {
run_connection(JsonRpcConnection::from_websocket(
websocket,
format!("exec-server websocket {peer_addr}"),
))
.await;
}
Err(err) => {
warn!(
"failed to accept exec-server websocket connection from {peer_addr}: {err}"
);
}
}
});
}
}
#[allow(clippy::print_stderr)]
fn print_websocket_startup_banner(addr: SocketAddr) {
eprintln!("codex-exec-server listening on ws://{addr}");
}
#[cfg(test)]
mod tests {
use pretty_assertions::assert_eq;
use super::ExecServerTransport;
#[test]
fn exec_server_transport_parses_stdio_listen_url() {
let transport =
ExecServerTransport::from_listen_url(ExecServerTransport::DEFAULT_LISTEN_URL)
.expect("stdio listen URL should parse");
assert_eq!(transport, ExecServerTransport::Stdio);
}
#[test]
fn exec_server_transport_parses_websocket_listen_url() {
let transport = ExecServerTransport::from_listen_url("ws://127.0.0.1:1234")
.expect("websocket listen URL should parse");
assert_eq!(
transport,
ExecServerTransport::WebSocket {
bind_address: "127.0.0.1:1234".parse().expect("valid socket address"),
}
);
}
#[test]
fn exec_server_transport_rejects_invalid_websocket_listen_url() {
let err = ExecServerTransport::from_listen_url("ws://localhost:1234")
.expect_err("hostname bind address should be rejected");
assert_eq!(
err.to_string(),
"invalid websocket --listen URL `ws://localhost:1234`; expected `ws://IP:PORT`"
);
}
#[test]
fn exec_server_transport_rejects_unsupported_listen_url() {
let err = ExecServerTransport::from_listen_url("http://127.0.0.1:1234")
.expect_err("unsupported scheme should fail");
assert_eq!(
err.to_string(),
"unsupported --listen URL `http://127.0.0.1:1234`; expected `stdio://` or `ws://IP:PORT`"
);
}
}

View File

@@ -0,0 +1,462 @@
#![cfg(unix)]
use std::process::Stdio;
use std::time::Duration;
use anyhow::Context;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_exec_server::ExecOutputStream;
use codex_exec_server::ExecParams;
use codex_exec_server::ExecServerClient;
use codex_exec_server::ExecServerClientConnectOptions;
use codex_exec_server::ExecServerEvent;
use codex_exec_server::ExecServerLaunchCommand;
use codex_exec_server::InitializeParams;
use codex_exec_server::InitializeResponse;
use codex_exec_server::RemoteExecServerConnectArgs;
use codex_exec_server::spawn_local_exec_server;
use codex_utils_cargo_bin::cargo_bin;
use pretty_assertions::assert_eq;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
use tokio::process::Command;
use tokio::sync::broadcast;
use tokio::time::timeout;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_server_accepts_initialize_over_stdio() -> anyhow::Result<()> {
let binary = cargo_bin("codex-exec-server")?;
let mut child = Command::new(binary);
child.stdin(Stdio::piped());
child.stdout(Stdio::piped());
child.stderr(Stdio::inherit());
let mut child = child.spawn()?;
let mut stdin = child.stdin.take().expect("stdin");
let stdout = child.stdout.take().expect("stdout");
let mut stdout = BufReader::new(stdout).lines();
let initialize = JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(1),
method: "initialize".to_string(),
params: Some(serde_json::to_value(InitializeParams {
client_name: "exec-server-test".to_string(),
})?),
trace: None,
});
stdin
.write_all(format!("{}\n", serde_json::to_string(&initialize)?).as_bytes())
.await?;
let response_line = timeout(Duration::from_secs(5), stdout.next_line()).await??;
let response_line = response_line.expect("response line");
let response: JSONRPCMessage = serde_json::from_str(&response_line)?;
let JSONRPCMessage::Response(JSONRPCResponse { id, result }) = response else {
panic!("expected initialize response");
};
assert_eq!(id, RequestId::Integer(1));
let initialize_response: InitializeResponse = serde_json::from_value(result)?;
assert_eq!(initialize_response.protocol_version, "exec-server.v0");
let initialized = JSONRPCMessage::Notification(JSONRPCNotification {
method: "initialized".to_string(),
params: Some(serde_json::json!({})),
});
stdin
.write_all(format!("{}\n", serde_json::to_string(&initialized)?).as_bytes())
.await?;
child.start_kill()?;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_server_accepts_explicit_none_sandbox_over_stdio() -> anyhow::Result<()> {
let binary = cargo_bin("codex-exec-server")?;
let mut child = Command::new(binary);
child.stdin(Stdio::piped());
child.stdout(Stdio::piped());
child.stderr(Stdio::inherit());
let mut child = child.spawn()?;
let mut stdin = child.stdin.take().expect("stdin");
let stdout = child.stdout.take().expect("stdout");
let mut stdout = BufReader::new(stdout).lines();
send_initialize_over_stdio(&mut stdin, &mut stdout).await?;
let exec = JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(2),
method: "process/start".to_string(),
params: Some(serde_json::json!({
"processId": "proc-1",
"argv": ["printf", "sandbox-none"],
"cwd": std::env::current_dir()?,
"env": {},
"tty": false,
"arg0": null,
"sandbox": {
"mode": "none"
}
})),
trace: None,
});
stdin
.write_all(format!("{}\n", serde_json::to_string(&exec)?).as_bytes())
.await?;
let response_line = timeout(Duration::from_secs(5), stdout.next_line()).await??;
let response_line = response_line.expect("exec response line");
let response: JSONRPCMessage = serde_json::from_str(&response_line)?;
let JSONRPCMessage::Response(JSONRPCResponse { id, result }) = response else {
panic!("expected process/start response");
};
assert_eq!(id, RequestId::Integer(2));
assert_eq!(result, serde_json::json!({ "processId": "proc-1" }));
let deadline = tokio::time::Instant::now() + Duration::from_secs(5);
let mut saw_output = false;
while !saw_output {
let remaining = deadline.saturating_duration_since(tokio::time::Instant::now());
let line = timeout(remaining, stdout.next_line()).await??;
let line = line.context("missing process notification")?;
let message: JSONRPCMessage = serde_json::from_str(&line)?;
if let JSONRPCMessage::Notification(JSONRPCNotification { method, params }) = message
&& method == "process/output"
{
let params = params.context("missing process/output params")?;
assert_eq!(params["processId"], "proc-1");
assert_eq!(params["stream"], "stdout");
assert_eq!(params["chunk"], "c2FuZGJveC1ub25l");
saw_output = true;
}
}
child.start_kill()?;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_server_rejects_host_default_sandbox_over_stdio() -> anyhow::Result<()> {
let binary = cargo_bin("codex-exec-server")?;
let mut child = Command::new(binary);
child.stdin(Stdio::piped());
child.stdout(Stdio::piped());
child.stderr(Stdio::inherit());
let mut child = child.spawn()?;
let mut stdin = child.stdin.take().expect("stdin");
let stdout = child.stdout.take().expect("stdout");
let mut stdout = BufReader::new(stdout).lines();
send_initialize_over_stdio(&mut stdin, &mut stdout).await?;
let exec = JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(2),
method: "process/start".to_string(),
params: Some(serde_json::json!({
"processId": "proc-1",
"argv": ["bash", "-lc", "true"],
"cwd": std::env::current_dir()?,
"env": {},
"tty": false,
"arg0": null,
"sandbox": {
"mode": "hostDefault"
}
})),
trace: None,
});
stdin
.write_all(format!("{}\n", serde_json::to_string(&exec)?).as_bytes())
.await?;
let response_line = timeout(Duration::from_secs(5), stdout.next_line()).await??;
let response_line = response_line.expect("exec error line");
let response: JSONRPCMessage = serde_json::from_str(&response_line)?;
let JSONRPCMessage::Error(codex_app_server_protocol::JSONRPCError { id, error }) = response
else {
panic!("expected process/start error");
};
assert_eq!(id, RequestId::Integer(2));
assert_eq!(error.code, -32600);
assert_eq!(
error.message,
"sandbox mode `hostDefault` is not supported by exec-server yet"
);
child.start_kill()?;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_server_client_streams_output_and_accepts_writes() -> anyhow::Result<()> {
let mut env = std::collections::HashMap::new();
if let Some(path) = std::env::var_os("PATH") {
env.insert("PATH".to_string(), path.to_string_lossy().into_owned());
}
let server = spawn_local_exec_server(
ExecServerLaunchCommand {
program: cargo_bin("codex-exec-server")?,
args: Vec::new(),
},
ExecServerClientConnectOptions {
client_name: "exec-server-test".to_string(),
initialize_timeout: Duration::from_secs(5),
},
)
.await?;
let client = server.client();
let mut events = client.event_receiver();
let response = client
.exec(ExecParams {
process_id: "proc-1".to_string(),
argv: vec![
"bash".to_string(),
"-lc".to_string(),
"printf 'ready\\n'; while IFS= read -r line; do printf 'echo:%s\\n' \"$line\"; done"
.to_string(),
],
cwd: std::env::current_dir()?,
env,
tty: true,
arg0: None,
sandbox: None,
})
.await?;
let process_id = response.process_id;
let (stream, ready_output) = recv_until_contains(&mut events, &process_id, "ready").await?;
assert_eq!(stream, ExecOutputStream::Pty);
assert!(
ready_output.contains("ready"),
"expected initial ready output"
);
client.write(&process_id, b"hello\n".to_vec()).await?;
let (stream, echoed_output) =
recv_until_contains(&mut events, &process_id, "echo:hello").await?;
assert_eq!(stream, ExecOutputStream::Pty);
assert!(
echoed_output.contains("echo:hello"),
"expected echoed output"
);
client.terminate(&process_id).await?;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_server_client_connects_over_websocket() -> anyhow::Result<()> {
let mut env = std::collections::HashMap::new();
if let Some(path) = std::env::var_os("PATH") {
env.insert("PATH".to_string(), path.to_string_lossy().into_owned());
}
let binary = cargo_bin("codex-exec-server")?;
let mut child = Command::new(binary);
child.args(["--listen", "ws://127.0.0.1:0"]);
child.stdin(Stdio::null());
child.stdout(Stdio::null());
child.stderr(Stdio::piped());
let mut child = child.spawn()?;
let stderr = child.stderr.take().expect("stderr");
let mut stderr_lines = BufReader::new(stderr).lines();
let websocket_url = read_websocket_url(&mut stderr_lines).await?;
let client = ExecServerClient::connect_websocket(RemoteExecServerConnectArgs {
websocket_url,
client_name: "exec-server-test".to_string(),
connect_timeout: Duration::from_secs(5),
initialize_timeout: Duration::from_secs(5),
})
.await?;
let mut events = client.event_receiver();
let response = client
.exec(ExecParams {
process_id: "proc-1".to_string(),
argv: vec![
"bash".to_string(),
"-lc".to_string(),
"printf 'ready\\n'; while IFS= read -r line; do printf 'echo:%s\\n' \"$line\"; done"
.to_string(),
],
cwd: std::env::current_dir()?,
env,
tty: true,
arg0: None,
sandbox: None,
})
.await?;
let process_id = response.process_id;
let (stream, ready_output) = recv_until_contains(&mut events, &process_id, "ready").await?;
assert_eq!(stream, ExecOutputStream::Pty);
assert!(
ready_output.contains("ready"),
"expected initial ready output"
);
client.write(&process_id, b"hello\n".to_vec()).await?;
let (stream, echoed_output) =
recv_until_contains(&mut events, &process_id, "echo:hello").await?;
assert_eq!(stream, ExecOutputStream::Pty);
assert!(
echoed_output.contains("echo:hello"),
"expected echoed output"
);
client.terminate(&process_id).await?;
child.start_kill()?;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn websocket_disconnect_terminates_processes_for_that_connection() -> anyhow::Result<()> {
let mut env = std::collections::HashMap::new();
if let Some(path) = std::env::var_os("PATH") {
env.insert("PATH".to_string(), path.to_string_lossy().into_owned());
}
let marker_path = std::env::temp_dir().join(format!(
"codex-exec-server-disconnect-{}-{}",
std::process::id(),
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)?
.as_nanos()
));
let _ = std::fs::remove_file(&marker_path);
let binary = cargo_bin("codex-exec-server")?;
let mut child = Command::new(binary);
child.args(["--listen", "ws://127.0.0.1:0"]);
child.stdin(Stdio::null());
child.stdout(Stdio::null());
child.stderr(Stdio::piped());
let mut child = child.spawn()?;
let stderr = child.stderr.take().expect("stderr");
let mut stderr_lines = BufReader::new(stderr).lines();
let websocket_url = read_websocket_url(&mut stderr_lines).await?;
{
let client = ExecServerClient::connect_websocket(RemoteExecServerConnectArgs {
websocket_url,
client_name: "exec-server-test".to_string(),
connect_timeout: Duration::from_secs(5),
initialize_timeout: Duration::from_secs(5),
})
.await?;
let _response = client
.exec(ExecParams {
process_id: "proc-1".to_string(),
argv: vec![
"bash".to_string(),
"-lc".to_string(),
format!("sleep 2; printf disconnected > {}", marker_path.display()),
],
cwd: std::env::current_dir()?,
env,
tty: false,
arg0: None,
sandbox: None,
})
.await?;
}
tokio::time::sleep(Duration::from_secs(3)).await;
assert!(
!marker_path.exists(),
"managed process should be terminated when the websocket client disconnects"
);
child.start_kill()?;
let _ = std::fs::remove_file(&marker_path);
Ok(())
}
async fn read_websocket_url<R>(lines: &mut tokio::io::Lines<BufReader<R>>) -> anyhow::Result<String>
where
R: tokio::io::AsyncRead + Unpin,
{
let line = timeout(Duration::from_secs(5), lines.next_line()).await??;
let line = line.context("missing websocket startup banner")?;
let websocket_url = line
.split_whitespace()
.find(|part| part.starts_with("ws://"))
.context("missing websocket URL in startup banner")?;
Ok(websocket_url.to_string())
}
async fn send_initialize_over_stdio<W, R>(
stdin: &mut W,
stdout: &mut tokio::io::Lines<BufReader<R>>,
) -> anyhow::Result<()>
where
W: tokio::io::AsyncWrite + Unpin,
R: tokio::io::AsyncRead + Unpin,
{
let initialize = JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(1),
method: "initialize".to_string(),
params: Some(serde_json::to_value(InitializeParams {
client_name: "exec-server-test".to_string(),
})?),
trace: None,
});
stdin
.write_all(format!("{}\n", serde_json::to_string(&initialize)?).as_bytes())
.await?;
let response_line = timeout(Duration::from_secs(5), stdout.next_line()).await??;
let response_line = response_line
.ok_or_else(|| anyhow::anyhow!("missing initialize response line from stdio server"))?;
let response: JSONRPCMessage = serde_json::from_str(&response_line)?;
let JSONRPCMessage::Response(JSONRPCResponse { id, result }) = response else {
panic!("expected initialize response");
};
assert_eq!(id, RequestId::Integer(1));
let initialize_response: InitializeResponse = serde_json::from_value(result)?;
assert_eq!(initialize_response.protocol_version, "exec-server.v0");
let initialized = JSONRPCMessage::Notification(JSONRPCNotification {
method: "initialized".to_string(),
params: Some(serde_json::json!({})),
});
stdin
.write_all(format!("{}\n", serde_json::to_string(&initialized)?).as_bytes())
.await?;
Ok(())
}
async fn recv_until_contains(
events: &mut broadcast::Receiver<ExecServerEvent>,
process_id: &str,
needle: &str,
) -> anyhow::Result<(ExecOutputStream, String)> {
let deadline = tokio::time::Instant::now() + Duration::from_secs(5);
let mut collected = String::new();
loop {
let remaining = deadline.saturating_duration_since(tokio::time::Instant::now());
let event = timeout(remaining, events.recv()).await??;
if let ExecServerEvent::OutputDelta(output_event) = event
&& output_event.process_id == process_id
{
collected.push_str(&String::from_utf8_lossy(&output_event.chunk.into_inner()));
if collected.contains(needle) {
return Ok((output_event.stream, collected));
}
}
}
}

View File

@@ -6,6 +6,7 @@ mod ephemeral;
mod mcp_required_exit;
mod originator;
mod output_schema;
mod remote_exec_server;
mod resume;
mod sandbox;
mod server_error_exit;

View File

@@ -0,0 +1,180 @@
#![cfg(not(target_os = "windows"))]
#![allow(clippy::expect_used, clippy::unwrap_used)]
use std::net::TcpListener;
use std::path::PathBuf;
use std::process::Stdio;
use std::time::Duration;
use core_test_support::responses;
use core_test_support::test_codex_exec::test_codex_exec;
use pretty_assertions::assert_eq;
use serde_json::json;
use tokio::process::Command;
fn extract_output_text(item: &serde_json::Value) -> String {
item.get("output")
.and_then(|value| match value {
serde_json::Value::String(text) => Some(text.clone()),
serde_json::Value::Object(obj) => obj
.get("content")
.and_then(serde_json::Value::as_str)
.map(str::to_string),
_ => None,
})
.expect("function call output should include text content")
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn exec_cli_can_route_remote_exec_and_read_file_through_exec_server() -> anyhow::Result<()> {
let test = test_codex_exec();
let external_websocket_url = std::env::var("CODEX_EXEC_SERVER_TEST_WS_URL")
.ok()
.filter(|value| !value.trim().is_empty());
let external_remote_root = std::env::var("CODEX_EXEC_SERVER_TEST_REMOTE_ROOT")
.ok()
.filter(|value| !value.trim().is_empty())
.map(PathBuf::from);
let websocket_url = if let Some(websocket_url) = external_websocket_url {
websocket_url
} else {
let websocket_listener = TcpListener::bind("127.0.0.1:0")?;
let websocket_port = websocket_listener.local_addr()?.port();
drop(websocket_listener);
format!("ws://127.0.0.1:{websocket_port}")
};
let mut exec_server = if std::env::var("CODEX_EXEC_SERVER_TEST_WS_URL")
.is_ok_and(|value| !value.trim().is_empty())
{
None
} else {
let child = Command::new(codex_utils_cargo_bin::cargo_bin("codex-exec-server")?)
.arg("--listen")
.arg(&websocket_url)
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::inherit())
.spawn()?;
tokio::time::sleep(Duration::from_millis(250)).await;
Some(child)
};
let local_workspace_root = test.cwd_path().to_path_buf();
let remote_workspace_root = external_remote_root
.clone()
.unwrap_or_else(|| local_workspace_root.clone());
let seed_path = local_workspace_root.join("remote_exec_seed.txt");
if external_remote_root.is_none() {
std::fs::write(&seed_path, "remote-fs-seed\n")?;
}
let server = responses::start_mock_server().await;
let response_mock = responses::mount_sse_sequence(
&server,
vec![
responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_function_call(
"call-exec",
"exec_command",
&serde_json::to_string(&json!({
"cmd": if external_remote_root.is_some() {
"printf remote-fs-seed > remote_exec_seed.txt && printf from-remote > remote_exec_generated.txt"
} else {
"printf from-remote > remote_exec_generated.txt"
},
"yield_time_ms": 500,
}))?,
),
responses::ev_completed("resp-1"),
]),
responses::sse(vec![
responses::ev_response_created("resp-2"),
responses::ev_function_call(
"call-read",
"read_file",
&serde_json::to_string(&json!({
"file_path": seed_path,
}))?,
),
responses::ev_completed("resp-2"),
]),
responses::sse(vec![
responses::ev_response_created("resp-3"),
responses::ev_function_call(
"call-list",
"list_dir",
&serde_json::to_string(&json!({
"dir_path": local_workspace_root,
"offset": 1,
"limit": 20,
"depth": 1,
}))?,
),
responses::ev_completed("resp-3"),
]),
responses::sse(vec![
responses::ev_response_created("resp-4"),
responses::ev_assistant_message("msg-1", "done"),
responses::ev_completed("resp-4"),
]),
],
)
.await;
test.cmd_with_server(&server)
.arg("--skip-git-repo-check")
.arg("-s")
.arg("danger-full-access")
.arg("-c")
.arg("experimental_use_unified_exec_tool=true")
.arg("-c")
.arg("zsh_path=\"/usr/bin/zsh\"")
.arg("-c")
.arg("experimental_unified_exec_use_exec_server=true")
.arg("-c")
.arg(format!(
"experimental_unified_exec_exec_server_websocket_url={}",
serde_json::to_string(&websocket_url)?
))
.arg("-c")
.arg(format!(
"experimental_unified_exec_exec_server_workspace_root={}",
serde_json::to_string(&remote_workspace_root)?
))
.arg("-c")
.arg("experimental_supported_tools=[\"read_file\",\"list_dir\"]")
.arg("run remote exec-server tools")
.assert()
.success();
if external_remote_root.is_none() {
let generated_path = test.cwd_path().join("remote_exec_generated.txt");
let deadline = tokio::time::Instant::now() + Duration::from_secs(5);
while tokio::time::Instant::now() < deadline && !generated_path.exists() {
tokio::time::sleep(Duration::from_millis(50)).await;
}
assert_eq!(std::fs::read_to_string(&generated_path)?, "from-remote");
}
let requests = response_mock.requests();
let read_output = extract_output_text(&requests[2].function_call_output("call-read"));
assert!(
read_output.contains("remote-fs-seed"),
"expected read_file tool output to include remote file contents, got {read_output:?}"
);
let list_output = extract_output_text(&requests[3].function_call_output("call-list"));
assert!(
list_output.contains("remote_exec_seed.txt"),
"expected list_dir output to include remote_exec_seed.txt, got {list_output:?}"
);
assert!(
list_output.contains("remote_exec_generated.txt"),
"expected list_dir output to include remote_exec_generated.txt, got {list_output:?}"
);
if let Some(exec_server) = exec_server.as_mut() {
exec_server.start_kill()?;
let _ = exec_server.wait().await;
}
Ok(())
}

View File

@@ -78,4 +78,29 @@ developer message Codex inserts when realtime becomes active. It only affects
the realtime start message in prompt history and does not change websocket
backend prompt settings or the realtime end/inactive message.
## Unified exec over exec-server
`experimental_unified_exec_use_exec_server` routes unified-exec process
launches and filesystem-backed tools through `codex-exec-server` instead of
using only the local in-process implementations.
When `experimental_unified_exec_exec_server_websocket_url` is set, Codex
connects to that existing websocket endpoint and uses it for both unified-exec
processes and remote filesystem operations such as `read_file`, `list_dir`, and
`view_image`.
When `experimental_unified_exec_exec_server_workspace_root` is also set, Codex
remaps remote exec `cwd` values and remote filesystem tool paths from the local
session `cwd` root into that executor-visible workspace root. Use this when the
executor is running on another host and cannot see the laptop's absolute paths.
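The remapping can be sketched roughly as follows. This is an illustrative standalone sketch, not the actual Codex implementation; `remap_workspace_path` is a hypothetical helper name:

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch of local-to-remote workspace remapping: paths under the
// local session root are re-rooted onto the executor-visible workspace root.
// Paths outside the local root yield None, leaving the caller to decide how
// (or whether) to forward them.
fn remap_workspace_path(path: &Path, local_root: &Path, remote_root: &Path) -> Option<PathBuf> {
    path.strip_prefix(local_root)
        .ok()
        .map(|relative| remote_root.join(relative))
}

fn main() {
    let local_root = Path::new("/Users/me/project");
    let remote_root = Path::new("/workspace");
    let remapped = remap_workspace_path(
        Path::new("/Users/me/project/src/lib.rs"),
        local_root,
        remote_root,
    );
    assert_eq!(remapped, Some(PathBuf::from("/workspace/src/lib.rs")));
}
```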
When `experimental_unified_exec_spawn_local_exec_server` is also enabled, Codex
starts a session-scoped local `codex-exec-server` subprocess on startup and
uses that connection for the same process and filesystem calls.
`experimental_supported_tools` can be used to opt specific experimental tools
into the tool list even when the selected model catalog entry does not advertise
them. This is useful when testing remote filesystem-backed tools such as
`read_file` and `list_dir` against an exec-server-backed environment.
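Taken together, a minimal `config.toml` exercising these options might look like the following sketch (the websocket URL and workspace root are placeholder values):

```toml
# Illustrative example only; values are placeholders.
experimental_use_unified_exec_tool = true
experimental_unified_exec_use_exec_server = true
# Connect to an already-running exec-server over websocket...
experimental_unified_exec_exec_server_websocket_url = "ws://127.0.0.1:9200"
# ...and remap local session paths onto the executor's workspace root.
experimental_unified_exec_exec_server_workspace_root = "/workspace"
# Opt experimental filesystem-backed tools into the tool list.
experimental_supported_tools = ["read_file", "list_dir"]
```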
Ctrl+C/Ctrl+D quitting uses a ~1 second double-press hint (`ctrl + c again to quit`).