Compare commits

...

13 Commits

Author SHA1 Message Date
starr-openai
fc135afd78 Thread environment ids through app-server routing slice
Derive stable environment bindings for standalone fs and command/exec paths while keeping the local executor path intact for the default environment.

Co-authored-by: Codex <noreply@openai.com>
2026-03-20 17:24:49 -07:00
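This commit keys command/exec sessions by an environment id derived from the request cwd. The body of `environment_id_from_cwd` is not shown in this diff, so the following is only a hypothetical sketch of what a stable, deterministic derivation could look like:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::Path;

// Hypothetical sketch only: derive a stable environment id from the working
// directory of the request. The real environment_id_from_cwd implementation
// is not part of this diff.
fn environment_id_from_cwd(cwd: &Path) -> String {
    let mut hasher = DefaultHasher::new();
    cwd.hash(&mut hasher);
    format!("env-{:016x}", hasher.finish())
}

fn main() {
    // The same cwd must always map to the same environment id.
    let a = environment_id_from_cwd(Path::new("/workspace/repo"));
    let b = environment_id_from_cwd(Path::new("/workspace/repo"));
    assert_eq!(a, b);
    assert_ne!(a, environment_id_from_cwd(Path::new("/other")));
    println!("{a}");
}
```

The "stable" part of the commit message is the key property: any derivation must be deterministic so that all requests from the same cwd route to the same environment binding.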
starr-openai
b0b16a2693 Align exec-server handler naming and fs defaults
Restore the app-server fs default onto Environment::default(), bring back a sync local Environment default for that path, and rename the process RPC adapter to match the filesystem handler style.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 16:31:44 -07:00
starr-openai
1734a1fef0 Revert exec-server filesystem default wiring
Restore codex-rs/exec-server/src/server/filesystem.rs to the main-branch version as requested.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 16:24:24 -07:00
starr-openai
fa160b1699 Split exec process into local and remote implementations
Match the filesystem structure from #15232 for exec process: expose the trait on Environment, keep LocalProcess as the real implementation, use RemoteProcess as the network proxy, and make ProcessHandler a thin RPC adapter over LocalProcess.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 16:22:03 -07:00
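The layering this commit describes can be sketched with synchronous stand-ins (the real trait is async and speaks the exec-server protocol types; `run` and the stub bodies here are illustrative, not the actual API):

```rust
use std::sync::Arc;

// Illustrative, synchronous stand-in for the async ExecProcess trait.
trait ExecProcess: Send + Sync {
    fn run(&self, command: &str) -> Result<String, String>;
}

// LocalProcess is the real implementation (stubbed here).
#[derive(Default, Clone)]
struct LocalProcess;

impl ExecProcess for LocalProcess {
    fn run(&self, command: &str) -> Result<String, String> {
        Ok(format!("local: {command}"))
    }
}

// RemoteProcess is the network proxy over the websocket client (stubbed).
struct RemoteProcess {
    websocket_url: String,
}

impl ExecProcess for RemoteProcess {
    fn run(&self, command: &str) -> Result<String, String> {
        Ok(format!("remote via {}: {command}", self.websocket_url))
    }
}

// ProcessHandler stays a thin RPC adapter over LocalProcess.
struct ProcessHandler {
    inner: Arc<LocalProcess>,
}

impl ProcessHandler {
    fn handle_exec(&self, command: &str) -> Result<String, String> {
        self.inner.run(command)
    }
}

fn main() {
    let local: Arc<dyn ExecProcess> = Arc::new(LocalProcess);
    let remote: Arc<dyn ExecProcess> = Arc::new(RemoteProcess {
        websocket_url: "ws://example".to_string(),
    });
    let handler = ProcessHandler { inner: Arc::new(LocalProcess) };
    assert!(local.run("ls").unwrap().starts_with("local"));
    assert!(remote.run("ls").unwrap().starts_with("remote"));
    assert!(handler.handle_exec("ls").is_ok());
    println!("ok");
}
```

The point of the split is that callers only ever see `dyn ExecProcess`, so local and remote execution are interchangeable behind the same capability.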
starr-openai
3001e1e7b4 Fix exec-server local filesystem compile path
Apply the pending formatting reorderings and switch the server-side
filesystem default to LocalFileSystem so the current PR compiles again.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 15:21:27 -07:00
starr-openai
76978d5a83 Use LocalFileSystem directly in FsApi default
Remove the temporary Environment::local_filesystem() helper and export
LocalFileSystem so fs-only defaults can construct the local filesystem
directly without going through Environment.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 15:06:06 -07:00
starr-openai
048438a44a Rename environment process accessor to get_executor
Drop the stale remote_exec_server_client() accessor now that Environment
exposes the process capability directly, and rename the capability getter
to get_executor() on the environment trait and concrete type.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 15:03:44 -07:00
starr-openai
e60eeec4bf Make Environment construction async-only for process setup
Remove sync Default from Environment, construct the local process client
with connect_in_process() in create(None), and switch the fs-only app
server default to a local filesystem constructor.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 14:56:20 -07:00
starr-openai
bdfd5abcca Reuse in-process ExecServerClient for local ExecProcess
Remove the duplicate direct-handler local process implementation and make
the local ExecProcess path delegate to the existing in-process
ExecServerClient. Keep lazy initialization only to support sync
Environment::default().

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 14:44:29 -07:00
starr-openai
7bbc0a4e15 Use a direct local ExecProcess implementation
Replace the in-process client wrapper with a local ExecProcess that owns
ExecServerHandler directly and forwards process notifications to the
trait event stream. This keeps the default Environment process path
entirely local instead of routing through ExecServerClient.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 14:36:47 -07:00
starr-openai
2d52d35c09 Replace unavailable exec stub with local ExecProcess
Give Environment a real local ExecProcess implementation by default and
keep remote mode selecting the websocket-backed client when configured.
The local implementation lazily initializes an in-process exec-server
client on first use.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 14:31:47 -07:00
starr-openai
7eebdbd3a8 Use local exec process by default in Environment::create
Fall back to an in-process exec-server client when no remote URL is
configured, so constructed Environment values have a real process
capability in both local and remote modes.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 14:23:58 -07:00
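The selection logic this commit describes reduces to a branch on the optional remote URL. A simplified, synchronous sketch (in the real code both arms produce a process capability asynchronously, connecting over websocket or in-process):

```rust
// Simplified sketch of selecting the process backend in Environment::create.
enum ExecutorBackend {
    Remote { websocket_url: String },
    InProcess,
}

fn select_backend(experimental_exec_server_url: Option<String>) -> ExecutorBackend {
    match experimental_exec_server_url {
        Some(websocket_url) => ExecutorBackend::Remote { websocket_url },
        // No remote URL configured: fall back to the in-process client so the
        // environment still has a real process capability.
        None => ExecutorBackend::InProcess,
    }
}

fn main() {
    assert!(matches!(select_backend(None), ExecutorBackend::InProcess));
    assert!(matches!(
        select_backend(Some("ws://exec".to_string())),
        ExecutorBackend::Remote { .. }
    ));
    println!("ok");
}
```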
starr-openai
952b7212b3 Introduce exec process capability traits
Add a narrow ExecProcess trait for process lifecycle RPCs and expose it
from Environment behind an ExecutorEnvironment trait. Keep the first cut
behavior-preserving by delegating remote mode to the existing
ExecServerClient and returning an unavailable process stub for default
local Environment values.

Co-authored-by: Codex <noreply@openai.com>
2026-03-19 14:18:10 -07:00
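The first-cut shape described above can be sketched as a narrow process trait exposed through an environment trait, with an always-failing stub for default local values (synchronous stand-ins for the async trait; the names follow the commit message, the bodies are illustrative):

```rust
use std::sync::Arc;

// Synchronous stand-in for the async ExecProcess lifecycle trait.
trait ExecProcess: Send + Sync {
    fn exec(&self, command: &str) -> Result<String, String>;
}

// Environment exposes the process capability behind this trait.
trait ExecutorEnvironment: Send + Sync {
    fn get_executor(&self) -> Arc<dyn ExecProcess>;
}

// First-cut stub: default local environments report the capability as
// unavailable rather than executing anything.
struct UnavailableProcess;

impl ExecProcess for UnavailableProcess {
    fn exec(&self, _command: &str) -> Result<String, String> {
        Err("exec process capability unavailable in local mode".to_string())
    }
}

struct Environment {
    executor: Arc<dyn ExecProcess>,
}

impl ExecutorEnvironment for Environment {
    fn get_executor(&self) -> Arc<dyn ExecProcess> {
        Arc::clone(&self.executor)
    }
}

fn main() {
    let env = Environment {
        executor: Arc::new(UnavailableProcess),
    };
    assert!(env.get_executor().exec("ls").is_err());
    println!("ok");
}
```

Later commits in this comparison replace the unavailable stub with a real local implementation, but the trait boundary sketched here stays the same.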
18 changed files with 1025 additions and 880 deletions

View File

@@ -67,9 +67,12 @@ struct ConnectionProcessId {
#[derive(Clone)]
enum CommandExecSession {
Active {
environment_id: String,
control_tx: mpsc::Sender<CommandControlRequest>,
},
UnsupportedWindowsSandbox,
UnsupportedWindowsSandbox {
environment_id: String,
},
}
enum CommandControl {
@@ -155,6 +158,7 @@ impl CommandExecManager {
output_bytes_cap,
size,
} = params;
let environment_id = environment_id_from_cwd(exec_request.cwd.as_path());
if process_id.is_none() && (tty || stream_stdin || stream_stdout_stderr) {
return Err(invalid_request(
"command/exec tty or streaming requires a client-supplied processId".to_string(),
@@ -195,7 +199,9 @@ impl CommandExecManager {
}
sessions.insert(
process_key.clone(),
CommandExecSession::UnsupportedWindowsSandbox,
CommandExecSession::UnsupportedWindowsSandbox {
environment_id: environment_id.clone(),
},
);
}
let sessions = Arc::clone(&self.sessions);
@@ -259,9 +265,17 @@ impl CommandExecManager {
}
sessions.insert(
process_key.clone(),
CommandExecSession::Active { control_tx },
CommandExecSession::Active {
environment_id: environment_id.clone(),
control_tx,
},
);
}
tracing::debug!(
environment_id = %environment_id,
process_id = %process_key.process_id.error_repr(),
"command/exec start"
);
let spawned = if tty {
codex_utils_pty::spawn_pty_process(
program,
@@ -389,13 +403,28 @@ impl CommandExecManager {
};
for control in controls {
if let CommandExecSession::Active { control_tx } = control {
let _ = control_tx
.send(CommandControlRequest {
control: CommandControl::Terminate,
response_tx: None,
})
.await;
match control {
CommandExecSession::Active {
environment_id,
control_tx,
} => {
tracing::debug!(
environment_id = %environment_id,
"command/exec connection closed"
);
let _ = control_tx
.send(CommandControlRequest {
control: CommandControl::Terminate,
response_tx: None,
})
.await;
}
CommandExecSession::UnsupportedWindowsSandbox { environment_id } => {
tracing::debug!(
environment_id = %environment_id,
"command/exec connection closed for windows sandbox"
);
}
}
}
}
@@ -418,10 +447,26 @@ impl CommandExecManager {
))
})?
};
let CommandExecSession::Active { control_tx } = session else {
return Err(invalid_request(
"command/exec/write, command/exec/terminate, and command/exec/resize are not supported for windows sandbox processes".to_string(),
));
let control_tx = match session {
CommandExecSession::Active {
environment_id,
control_tx,
} => {
tracing::debug!(
environment_id = %environment_id,
"command/exec control"
);
control_tx
}
CommandExecSession::UnsupportedWindowsSandbox { environment_id } => {
tracing::debug!(
environment_id = %environment_id,
"command/exec control rejected for windows sandbox"
);
return Err(invalid_request(
"command/exec/write, command/exec/terminate, and command/exec/resize are not supported for windows sandbox processes".to_string(),
));
}
};
let (response_tx, response_rx) = oneshot::channel();
let request = CommandControlRequest {
@@ -917,11 +962,12 @@ mod tests {
connection_id: request_id.connection_id,
process_id: InternalProcessId::Client("proc-11".to_string()),
};
manager
.sessions
.lock()
.await
.insert(process_id, CommandExecSession::UnsupportedWindowsSandbox);
manager.sessions.lock().await.insert(
process_id,
CommandExecSession::UnsupportedWindowsSandbox {
environment_id: "test-env".to_string(),
},
);
let err = manager
.write(
@@ -953,11 +999,12 @@ mod tests {
connection_id: request_id.connection_id,
process_id: InternalProcessId::Client("proc-12".to_string()),
};
manager
.sessions
.lock()
.await
.insert(process_id, CommandExecSession::UnsupportedWindowsSandbox);
manager.sessions.lock().await.insert(
process_id,
CommandExecSession::UnsupportedWindowsSandbox {
environment_id: "test-env".to_string(),
},
);
let err = manager
.terminate(
@@ -990,7 +1037,10 @@ mod tests {
connection_id: request_id.connection_id,
process_id: process_id.clone(),
},
CommandExecSession::Active { control_tx },
CommandExecSession::Active {
environment_id: "test-env".to_string(),
control_tx,
},
);
tokio::spawn(async move {

View File

@@ -28,12 +28,20 @@ use std::sync::Arc;
#[derive(Clone)]
pub(crate) struct FsApi {
environment_id: String,
file_system: Arc<dyn ExecutorFileSystem>,
}
impl Default for FsApi {
fn default() -> Self {
Self::new(Environment::default_environment_id(None))
}
}
impl FsApi {
pub(crate) fn new(environment_id: String) -> Self {
Self {
environment_id,
file_system: Arc::new(Environment::default().get_filesystem()),
}
}
@@ -44,6 +52,11 @@ impl FsApi {
&self,
params: FsReadFileParams,
) -> Result<FsReadFileResponse, JSONRPCErrorError> {
tracing::debug!(
environment_id = %self.environment_id,
path = %params.path,
"fs/readFile"
);
let bytes = self
.file_system
.read_file(&params.path)
@@ -58,6 +71,11 @@ impl FsApi {
&self,
params: FsWriteFileParams,
) -> Result<FsWriteFileResponse, JSONRPCErrorError> {
tracing::debug!(
environment_id = %self.environment_id,
path = %params.path,
"fs/writeFile"
);
let bytes = STANDARD.decode(params.data_base64).map_err(|err| {
invalid_request(format!(
"fs/writeFile requires valid base64 dataBase64: {err}"
@@ -74,6 +92,11 @@ impl FsApi {
&self,
params: FsCreateDirectoryParams,
) -> Result<FsCreateDirectoryResponse, JSONRPCErrorError> {
tracing::debug!(
environment_id = %self.environment_id,
path = %params.path,
"fs/createDirectory"
);
self.file_system
.create_directory(
&params.path,
@@ -90,6 +113,11 @@ impl FsApi {
&self,
params: FsGetMetadataParams,
) -> Result<FsGetMetadataResponse, JSONRPCErrorError> {
tracing::debug!(
environment_id = %self.environment_id,
path = %params.path,
"fs/getMetadata"
);
let metadata = self
.file_system
.get_metadata(&params.path)
@@ -107,6 +135,11 @@ impl FsApi {
&self,
params: FsReadDirectoryParams,
) -> Result<FsReadDirectoryResponse, JSONRPCErrorError> {
tracing::debug!(
environment_id = %self.environment_id,
path = %params.path,
"fs/readDirectory"
);
let entries = self
.file_system
.read_directory(&params.path)
@@ -128,6 +161,11 @@ impl FsApi {
&self,
params: FsRemoveParams,
) -> Result<FsRemoveResponse, JSONRPCErrorError> {
tracing::debug!(
environment_id = %self.environment_id,
path = %params.path,
"fs/remove"
);
self.file_system
.remove(
&params.path,
@@ -145,6 +183,12 @@ impl FsApi {
&self,
params: FsCopyParams,
) -> Result<FsCopyResponse, JSONRPCErrorError> {
tracing::debug!(
environment_id = %self.environment_id,
source_path = %params.source_path,
destination_path = %params.destination_path,
"fs/copy"
);
self.file_system
.copy(
&params.source_path,

View File

@@ -64,6 +64,7 @@ use codex_core::default_client::set_default_client_residency_requirement;
use codex_core::default_client::set_default_originator;
use codex_core::models_manager::collaboration_mode_presets::CollaborationModesConfig;
use codex_feedback::CodexFeedback;
use codex_exec_server::Environment;
use codex_protocol::ThreadId;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::W3cTraceContext;
@@ -255,7 +256,9 @@ impl MessageProcessor {
analytics_events_client,
);
let external_agent_config_api = ExternalAgentConfigApi::new(config.codex_home.clone());
let fs_api = FsApi::default();
let fs_api = FsApi::new(Environment::default_environment_id(
config.experimental_exec_server_url.as_deref(),
));
Self {
outgoing,

View File

@@ -18,16 +18,13 @@ use codex_app_server_protocol::FsWriteFileResponse;
use codex_app_server_protocol::JSONRPCNotification;
use serde_json::Value;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio::time::timeout;
use tokio_tungstenite::connect_async;
use tracing::debug;
use tracing::warn;
use crate::client_api::ExecServerClientConnectOptions;
use crate::client_api::ExecServerEvent;
use crate::client_api::RemoteExecServerConnectArgs;
use crate::connection::JsonRpcConnection;
use crate::process::ExecServerEvent;
use crate::protocol::EXEC_EXITED_METHOD;
use crate::protocol::EXEC_METHOD;
use crate::protocol::EXEC_OUTPUT_DELTA_METHOD;
@@ -58,15 +55,24 @@ use crate::protocol::WriteResponse;
use crate::rpc::RpcCallError;
use crate::rpc::RpcClient;
use crate::rpc::RpcClientEvent;
use crate::rpc::RpcNotificationSender;
use crate::rpc::RpcServerOutboundMessage;
mod local_backend;
use local_backend::LocalBackend;
const CONNECT_TIMEOUT: Duration = Duration::from_secs(10);
const INITIALIZE_TIMEOUT: Duration = Duration::from_secs(10);
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ExecServerClientConnectOptions {
pub client_name: String,
pub initialize_timeout: Duration,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct RemoteExecServerConnectArgs {
pub websocket_url: String,
pub client_name: String,
pub connect_timeout: Duration,
pub initialize_timeout: Duration,
}
impl Default for ExecServerClientConnectOptions {
fn default() -> Self {
Self {
@@ -96,43 +102,14 @@ impl RemoteExecServerConnectArgs {
}
}
enum ClientBackend {
Remote(RpcClient),
InProcess(LocalBackend),
}
impl ClientBackend {
fn as_local(&self) -> Option<&LocalBackend> {
match self {
ClientBackend::Remote(_) => None,
ClientBackend::InProcess(backend) => Some(backend),
}
}
fn as_remote(&self) -> Option<&RpcClient> {
match self {
ClientBackend::Remote(client) => Some(client),
ClientBackend::InProcess(_) => None,
}
}
}
struct Inner {
backend: ClientBackend,
client: RpcClient,
events_tx: broadcast::Sender<ExecServerEvent>,
reader_task: tokio::task::JoinHandle<()>,
}
impl Drop for Inner {
fn drop(&mut self) {
if let Some(backend) = self.backend.as_local()
&& let Ok(handle) = tokio::runtime::Handle::try_current()
{
let backend = backend.clone();
handle.spawn(async move {
backend.shutdown().await;
});
}
self.reader_task.abort();
}
}
@@ -167,40 +144,6 @@ pub enum ExecServerError {
}
impl ExecServerClient {
pub async fn connect_in_process(
options: ExecServerClientConnectOptions,
) -> Result<Self, ExecServerError> {
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<RpcServerOutboundMessage>(256);
let backend = LocalBackend::new(crate::server::ExecServerHandler::new(
RpcNotificationSender::new(outgoing_tx),
));
let inner = Arc::new_cyclic(|weak| {
let weak = weak.clone();
let reader_task = tokio::spawn(async move {
while let Some(message) = outgoing_rx.recv().await {
if let Some(inner) = weak.upgrade()
&& let Err(err) = handle_in_process_outbound_message(&inner, message).await
{
warn!(
"in-process exec-server client closing after unexpected response: {err}"
);
return;
}
}
});
Inner {
backend: ClientBackend::InProcess(backend),
events_tx: broadcast::channel(256).0,
reader_task,
}
});
let client = Self { inner };
client.initialize(options).await?;
Ok(client)
}
pub async fn connect_websocket(
args: RemoteExecServerConnectArgs,
) -> Result<Self, ExecServerError> {
@@ -241,17 +184,11 @@ impl ExecServerClient {
} = options;
timeout(initialize_timeout, async {
let response = if let Some(backend) = self.inner.backend.as_local() {
backend.initialize().await?
} else {
let params = InitializeParams { client_name };
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during initialize".to_string(),
));
};
remote.call(INITIALIZE_METHOD, &params).await?
};
let response = self
.inner
.client
.call(INITIALIZE_METHOD, &InitializeParams { client_name })
.await?;
self.notify_initialized().await?;
Ok(response)
})
@@ -262,27 +199,16 @@ impl ExecServerClient {
}
pub async fn exec(&self, params: ExecParams) -> Result<ExecResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.exec(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during exec".to_string(),
));
};
remote.call(EXEC_METHOD, &params).await.map_err(Into::into)
self.inner
.client
.call(EXEC_METHOD, &params)
.await
.map_err(Into::into)
}
pub async fn read(&self, params: ReadParams) -> Result<ReadResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.exec_read(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during read".to_string(),
));
};
remote
self.inner
.client
.call(EXEC_READ_METHOD, &params)
.await
.map_err(Into::into)
@@ -293,38 +219,28 @@ impl ExecServerClient {
process_id: &str,
chunk: Vec<u8>,
) -> Result<WriteResponse, ExecServerError> {
let params = WriteParams {
process_id: process_id.to_string(),
chunk: chunk.into(),
};
if let Some(backend) = self.inner.backend.as_local() {
return backend.exec_write(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during write".to_string(),
));
};
remote
.call(EXEC_WRITE_METHOD, &params)
self.inner
.client
.call(
EXEC_WRITE_METHOD,
&WriteParams {
process_id: process_id.to_string(),
chunk: chunk.into(),
},
)
.await
.map_err(Into::into)
}
pub async fn terminate(&self, process_id: &str) -> Result<TerminateResponse, ExecServerError> {
let params = TerminateParams {
process_id: process_id.to_string(),
};
if let Some(backend) = self.inner.backend.as_local() {
return backend.terminate(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during terminate".to_string(),
));
};
remote
.call(EXEC_TERMINATE_METHOD, &params)
self.inner
.client
.call(
EXEC_TERMINATE_METHOD,
&TerminateParams {
process_id: process_id.to_string(),
},
)
.await
.map_err(Into::into)
}
@@ -333,15 +249,8 @@ impl ExecServerClient {
&self,
params: FsReadFileParams,
) -> Result<FsReadFileResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.fs_read_file(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during fs/readFile".to_string(),
));
};
remote
self.inner
.client
.call(FS_READ_FILE_METHOD, &params)
.await
.map_err(Into::into)
@@ -351,15 +260,8 @@ impl ExecServerClient {
&self,
params: FsWriteFileParams,
) -> Result<FsWriteFileResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.fs_write_file(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during fs/writeFile".to_string(),
));
};
remote
self.inner
.client
.call(FS_WRITE_FILE_METHOD, &params)
.await
.map_err(Into::into)
@@ -369,15 +271,8 @@ impl ExecServerClient {
&self,
params: FsCreateDirectoryParams,
) -> Result<FsCreateDirectoryResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.fs_create_directory(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during fs/createDirectory".to_string(),
));
};
remote
self.inner
.client
.call(FS_CREATE_DIRECTORY_METHOD, &params)
.await
.map_err(Into::into)
@@ -387,15 +282,8 @@ impl ExecServerClient {
&self,
params: FsGetMetadataParams,
) -> Result<FsGetMetadataResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.fs_get_metadata(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during fs/getMetadata".to_string(),
));
};
remote
self.inner
.client
.call(FS_GET_METADATA_METHOD, &params)
.await
.map_err(Into::into)
@@ -405,15 +293,8 @@ impl ExecServerClient {
&self,
params: FsReadDirectoryParams,
) -> Result<FsReadDirectoryResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.fs_read_directory(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during fs/readDirectory".to_string(),
));
};
remote
self.inner
.client
.call(FS_READ_DIRECTORY_METHOD, &params)
.await
.map_err(Into::into)
@@ -423,30 +304,16 @@ impl ExecServerClient {
&self,
params: FsRemoveParams,
) -> Result<FsRemoveResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.fs_remove(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during fs/remove".to_string(),
));
};
remote
self.inner
.client
.call(FS_REMOVE_METHOD, &params)
.await
.map_err(Into::into)
}
pub async fn fs_copy(&self, params: FsCopyParams) -> Result<FsCopyResponse, ExecServerError> {
if let Some(backend) = self.inner.backend.as_local() {
return backend.fs_copy(params).await;
}
let Some(remote) = self.inner.backend.as_remote() else {
return Err(ExecServerError::Protocol(
"remote backend missing during fs/copy".to_string(),
));
};
remote
self.inner
.client
.call(FS_COPY_METHOD, &params)
.await
.map_err(Into::into)
@@ -482,7 +349,7 @@ impl ExecServerClient {
});
Inner {
backend: ClientBackend::Remote(rpc_client),
client: rpc_client,
events_tx: broadcast::channel(256).0,
reader_task,
}
@@ -494,13 +361,11 @@ impl ExecServerClient {
}
async fn notify_initialized(&self) -> Result<(), ExecServerError> {
match &self.inner.backend {
ClientBackend::Remote(client) => client
.notify(INITIALIZED_METHOD, &serde_json::json!({}))
.await
.map_err(ExecServerError::Json),
ClientBackend::InProcess(backend) => backend.initialized().await,
}
self.inner
.client
.notify(INITIALIZED_METHOD, &serde_json::json!({}))
.await
.map_err(ExecServerError::Json)
}
}
@@ -517,20 +382,6 @@ impl From<RpcCallError> for ExecServerError {
}
}
async fn handle_in_process_outbound_message(
inner: &Arc<Inner>,
message: RpcServerOutboundMessage,
) -> Result<(), ExecServerError> {
match message {
RpcServerOutboundMessage::Response { .. } | RpcServerOutboundMessage::Error { .. } => Err(
ExecServerError::Protocol("unexpected in-process RPC response".to_string()),
),
RpcServerOutboundMessage::Notification(notification) => {
handle_server_notification(inner, notification).await
}
}
}
async fn handle_server_notification(
inner: &Arc<Inner>,
notification: JSONRPCNotification,

View File

@@ -1,200 +0,0 @@
use std::sync::Arc;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::InitializeResponse;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateParams;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteParams;
use crate::protocol::WriteResponse;
use crate::server::ExecServerHandler;
use codex_app_server_protocol::FsCopyParams;
use codex_app_server_protocol::FsCopyResponse;
use codex_app_server_protocol::FsCreateDirectoryParams;
use codex_app_server_protocol::FsCreateDirectoryResponse;
use codex_app_server_protocol::FsGetMetadataParams;
use codex_app_server_protocol::FsGetMetadataResponse;
use codex_app_server_protocol::FsReadDirectoryParams;
use codex_app_server_protocol::FsReadDirectoryResponse;
use codex_app_server_protocol::FsReadFileParams;
use codex_app_server_protocol::FsReadFileResponse;
use codex_app_server_protocol::FsRemoveParams;
use codex_app_server_protocol::FsRemoveResponse;
use codex_app_server_protocol::FsWriteFileParams;
use codex_app_server_protocol::FsWriteFileResponse;
use super::ExecServerError;
#[derive(Clone)]
pub(super) struct LocalBackend {
handler: Arc<ExecServerHandler>,
}
impl LocalBackend {
pub(super) fn new(handler: ExecServerHandler) -> Self {
Self {
handler: Arc::new(handler),
}
}
pub(super) async fn shutdown(&self) {
self.handler.shutdown().await;
}
pub(super) async fn initialize(&self) -> Result<InitializeResponse, ExecServerError> {
self.handler
.initialize()
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn initialized(&self) -> Result<(), ExecServerError> {
self.handler
.initialized()
.map_err(ExecServerError::Protocol)
}
pub(super) async fn exec(&self, params: ExecParams) -> Result<ExecResponse, ExecServerError> {
self.handler
.exec(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn exec_read(
&self,
params: ReadParams,
) -> Result<ReadResponse, ExecServerError> {
self.handler
.exec_read(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn exec_write(
&self,
params: WriteParams,
) -> Result<WriteResponse, ExecServerError> {
self.handler
.exec_write(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn terminate(
&self,
params: TerminateParams,
) -> Result<TerminateResponse, ExecServerError> {
self.handler
.terminate(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn fs_read_file(
&self,
params: FsReadFileParams,
) -> Result<FsReadFileResponse, ExecServerError> {
self.handler
.fs_read_file(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn fs_write_file(
&self,
params: FsWriteFileParams,
) -> Result<FsWriteFileResponse, ExecServerError> {
self.handler
.fs_write_file(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn fs_create_directory(
&self,
params: FsCreateDirectoryParams,
) -> Result<FsCreateDirectoryResponse, ExecServerError> {
self.handler
.fs_create_directory(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn fs_get_metadata(
&self,
params: FsGetMetadataParams,
) -> Result<FsGetMetadataResponse, ExecServerError> {
self.handler
.fs_get_metadata(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn fs_read_directory(
&self,
params: FsReadDirectoryParams,
) -> Result<FsReadDirectoryResponse, ExecServerError> {
self.handler
.fs_read_directory(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn fs_remove(
&self,
params: FsRemoveParams,
) -> Result<FsRemoveResponse, ExecServerError> {
self.handler
.fs_remove(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
pub(super) async fn fs_copy(
&self,
params: FsCopyParams,
) -> Result<FsCopyResponse, ExecServerError> {
self.handler
.fs_copy(params)
.await
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})
}
}

View File

@@ -1,27 +0,0 @@
use std::time::Duration;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
/// Connection options for any exec-server client transport.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ExecServerClientConnectOptions {
pub client_name: String,
pub initialize_timeout: Duration,
}
/// WebSocket connection arguments for a remote exec-server.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct RemoteExecServerConnectArgs {
pub websocket_url: String,
pub client_name: String,
pub connect_timeout: Duration,
pub initialize_timeout: Duration,
}
/// Connection-level server events.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ExecServerEvent {
OutputDelta(ExecOutputDeltaNotification),
Exited(ExecExitedNotification),
}

View File

@@ -1,13 +1,31 @@
use std::sync::Arc;
use crate::ExecServerClient;
use crate::ExecServerError;
use crate::RemoteExecServerConnectArgs;
use crate::fs;
use crate::fs::ExecutorFileSystem;
use crate::local_process::LocalProcess;
use crate::process::ExecProcess;
use crate::remote_process::RemoteProcess;
#[derive(Clone, Default)]
pub trait ExecutorEnvironment: Send + Sync {
fn get_executor(&self) -> Arc<dyn ExecProcess>;
}
#[derive(Clone)]
pub struct Environment {
experimental_exec_server_url: Option<String>,
remote_exec_server_client: Option<ExecServerClient>,
executor: Arc<dyn ExecProcess>,
}
impl Default for Environment {
fn default() -> Self {
Self {
experimental_exec_server_url: None,
executor: Arc::new(LocalProcess::default()),
}
}
}
impl std::fmt::Debug for Environment {
@@ -19,7 +37,7 @@ impl std::fmt::Debug for Environment {
)
.field(
"has_remote_exec_server_client",
&self.remote_exec_server_client.is_some(),
&self.experimental_exec_server_url.is_some(),
)
.finish()
}
@@ -29,22 +47,30 @@ impl Environment {
pub async fn create(
experimental_exec_server_url: Option<String>,
) -> Result<Self, ExecServerError> {
let remote_exec_server_client =
let executor: Arc<dyn ExecProcess> =
if let Some(websocket_url) = experimental_exec_server_url.as_deref() {
Some(
Arc::new(RemoteProcess::new(
ExecServerClient::connect_websocket(RemoteExecServerConnectArgs::new(
websocket_url.to_string(),
"codex-core".to_string(),
))
.await?,
)
))
} else {
None
let process = LocalProcess::default();
process
.initialize()
.map_err(|error| ExecServerError::Server {
code: error.code,
message: error.message,
})?;
process.initialized().map_err(ExecServerError::Protocol)?;
Arc::new(process)
};
Ok(Self {
experimental_exec_server_url,
remote_exec_server_client,
executor,
})
}
@@ -52,8 +78,8 @@ impl Environment {
self.experimental_exec_server_url.as_deref()
}
pub fn remote_exec_server_client(&self) -> Option<&ExecServerClient> {
self.remote_exec_server_client.as_ref()
pub fn get_executor(&self) -> Arc<dyn ExecProcess> {
Arc::clone(&self.executor)
}
pub fn get_filesystem(&self) -> impl ExecutorFileSystem + use<> {
@@ -61,6 +87,12 @@ impl Environment {
}
}
impl ExecutorEnvironment for Environment {
fn get_executor(&self) -> Arc<dyn ExecProcess> {
self.get_executor()
}
}
#[cfg(test)]
mod tests {
use super::Environment;
@@ -71,6 +103,5 @@ mod tests {
let environment = Environment::create(None).await.expect("create environment");
assert_eq!(environment.experimental_exec_server_url(), None);
assert!(environment.remote_exec_server_client().is_none());
}
}

View File

@@ -72,7 +72,7 @@ pub trait ExecutorFileSystem: Send + Sync {
}
#[derive(Clone, Default)]
pub(crate) struct LocalFileSystem;
pub struct LocalFileSystem;
#[async_trait]
impl ExecutorFileSystem for LocalFileSystem {

View File

@@ -1,17 +1,18 @@
mod client;
mod client_api;
mod connection;
mod environment;
mod fs;
mod local_process;
mod process;
mod protocol;
mod remote_process;
mod rpc;
mod server;
pub use client::ExecServerClient;
pub use client::ExecServerClientConnectOptions;
pub use client::ExecServerError;
pub use client_api::ExecServerClientConnectOptions;
pub use client_api::ExecServerEvent;
pub use client_api::RemoteExecServerConnectArgs;
pub use client::RemoteExecServerConnectArgs;
pub use codex_app_server_protocol::FsCopyParams;
pub use codex_app_server_protocol::FsCopyResponse;
pub use codex_app_server_protocol::FsCreateDirectoryParams;
@@ -28,13 +29,17 @@ pub use codex_app_server_protocol::FsRemoveResponse;
pub use codex_app_server_protocol::FsWriteFileParams;
pub use codex_app_server_protocol::FsWriteFileResponse;
pub use environment::Environment;
pub use environment::ExecutorEnvironment;
pub use fs::CopyOptions;
pub use fs::CreateDirectoryOptions;
pub use fs::ExecutorFileSystem;
pub use fs::FileMetadata;
pub use fs::FileSystemResult;
pub use fs::LocalFileSystem;
pub use fs::ReadDirectoryEntry;
pub use fs::RemoveOptions;
pub use process::ExecProcess;
pub use process::ExecServerEvent;
pub use protocol::ExecExitedNotification;
pub use protocol::ExecOutputDeltaNotification;
pub use protocol::ExecOutputStream;


@@ -0,0 +1,515 @@
use std::collections::HashMap;
use std::collections::VecDeque;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use std::time::Duration;
use async_trait::async_trait;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_utils_pty::ExecCommandSession;
use codex_utils_pty::TerminalSize;
use tokio::sync::Mutex;
use tokio::sync::Notify;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tracing::warn;
use crate::ExecProcess;
use crate::ExecServerError;
use crate::ExecServerEvent;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
use crate::protocol::ExecOutputStream;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::InitializeResponse;
use crate::protocol::ProcessOutputChunk;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateParams;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteParams;
use crate::protocol::WriteResponse;
use crate::rpc::RpcNotificationSender;
use crate::rpc::RpcServerOutboundMessage;
use crate::rpc::internal_error;
use crate::rpc::invalid_params;
use crate::rpc::invalid_request;
const RETAINED_OUTPUT_BYTES_PER_PROCESS: usize = 1024 * 1024;
const EVENT_CHANNEL_CAPACITY: usize = 256;
const NOTIFICATION_CHANNEL_CAPACITY: usize = 256;
#[cfg(test)]
const EXITED_PROCESS_RETENTION: Duration = Duration::from_millis(25);
#[cfg(not(test))]
const EXITED_PROCESS_RETENTION: Duration = Duration::from_secs(30);
#[derive(Clone)]
struct RetainedOutputChunk {
seq: u64,
stream: ExecOutputStream,
chunk: Vec<u8>,
}
struct RunningProcess {
session: ExecCommandSession,
tty: bool,
output: VecDeque<RetainedOutputChunk>,
retained_bytes: usize,
next_seq: u64,
exit_code: Option<i32>,
output_notify: Arc<Notify>,
}
enum ProcessEntry {
Starting,
Running(Box<RunningProcess>),
}
struct Inner {
notifications: RpcNotificationSender,
events_tx: broadcast::Sender<ExecServerEvent>,
processes: Arc<Mutex<HashMap<String, ProcessEntry>>>,
initialize_requested: AtomicBool,
initialized: AtomicBool,
}
#[derive(Clone)]
pub(crate) struct LocalProcess {
inner: Arc<Inner>,
}
impl Default for LocalProcess {
fn default() -> Self {
let (outgoing_tx, mut outgoing_rx) =
mpsc::channel::<RpcServerOutboundMessage>(NOTIFICATION_CHANNEL_CAPACITY);
tokio::spawn(async move { while outgoing_rx.recv().await.is_some() {} });
Self::new(RpcNotificationSender::new(outgoing_tx))
}
}
impl LocalProcess {
pub(crate) fn new(notifications: RpcNotificationSender) -> Self {
Self {
inner: Arc::new(Inner {
notifications,
events_tx: broadcast::channel(EVENT_CHANNEL_CAPACITY).0,
processes: Arc::new(Mutex::new(HashMap::new())),
initialize_requested: AtomicBool::new(false),
initialized: AtomicBool::new(false),
}),
}
}
pub(crate) async fn shutdown(&self) {
let remaining = {
let mut processes = self.inner.processes.lock().await;
processes
.drain()
.filter_map(|(_, process)| match process {
ProcessEntry::Starting => None,
ProcessEntry::Running(process) => Some(process),
})
.collect::<Vec<_>>()
};
for process in remaining {
process.session.terminate();
}
}
pub(crate) fn initialize(&self) -> Result<InitializeResponse, JSONRPCErrorError> {
if self.inner.initialize_requested.swap(true, Ordering::SeqCst) {
return Err(invalid_request(
"initialize may only be sent once per connection".to_string(),
));
}
Ok(InitializeResponse {})
}
pub(crate) fn initialized(&self) -> Result<(), String> {
if !self.inner.initialize_requested.load(Ordering::SeqCst) {
return Err("received `initialized` notification before `initialize`".into());
}
self.inner.initialized.store(true, Ordering::SeqCst);
Ok(())
}
pub(crate) fn require_initialized_for(
&self,
method_family: &str,
) -> Result<(), JSONRPCErrorError> {
if !self.inner.initialize_requested.load(Ordering::SeqCst) {
return Err(invalid_request(format!(
"client must call initialize before using {method_family} methods"
)));
}
if !self.inner.initialized.load(Ordering::SeqCst) {
return Err(invalid_request(format!(
"client must send initialized before using {method_family} methods"
)));
}
Ok(())
}
pub(crate) async fn exec(&self, params: ExecParams) -> Result<ExecResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let process_id = params.process_id.clone();
let (program, args) = params
.argv
.split_first()
.ok_or_else(|| invalid_params("argv must not be empty".to_string()))?;
{
let mut process_map = self.inner.processes.lock().await;
if process_map.contains_key(&process_id) {
return Err(invalid_request(format!(
"process {process_id} already exists"
)));
}
process_map.insert(process_id.clone(), ProcessEntry::Starting);
}
let spawned_result = if params.tty {
codex_utils_pty::spawn_pty_process(
program,
args,
params.cwd.as_path(),
&params.env,
&params.arg0,
TerminalSize::default(),
)
.await
} else {
codex_utils_pty::spawn_pipe_process_no_stdin(
program,
args,
params.cwd.as_path(),
&params.env,
&params.arg0,
)
.await
};
let spawned = match spawned_result {
Ok(spawned) => spawned,
Err(err) => {
let mut process_map = self.inner.processes.lock().await;
if matches!(process_map.get(&process_id), Some(ProcessEntry::Starting)) {
process_map.remove(&process_id);
}
return Err(internal_error(err.to_string()));
}
};
let output_notify = Arc::new(Notify::new());
{
let mut process_map = self.inner.processes.lock().await;
process_map.insert(
process_id.clone(),
ProcessEntry::Running(Box::new(RunningProcess {
session: spawned.session,
tty: params.tty,
output: VecDeque::new(),
retained_bytes: 0,
next_seq: 1,
exit_code: None,
output_notify: Arc::clone(&output_notify),
})),
);
}
tokio::spawn(stream_output(
process_id.clone(),
if params.tty {
ExecOutputStream::Pty
} else {
ExecOutputStream::Stdout
},
spawned.stdout_rx,
Arc::clone(&self.inner),
Arc::clone(&output_notify),
));
tokio::spawn(stream_output(
process_id.clone(),
if params.tty {
ExecOutputStream::Pty
} else {
ExecOutputStream::Stderr
},
spawned.stderr_rx,
Arc::clone(&self.inner),
Arc::clone(&output_notify),
));
tokio::spawn(watch_exit(
process_id.clone(),
spawned.exit_rx,
Arc::clone(&self.inner),
output_notify,
));
Ok(ExecResponse { process_id })
}
pub(crate) async fn exec_read(
&self,
params: ReadParams,
) -> Result<ReadResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let after_seq = params.after_seq.unwrap_or(0);
let max_bytes = params.max_bytes.unwrap_or(usize::MAX);
let wait = Duration::from_millis(params.wait_ms.unwrap_or(0));
let deadline = tokio::time::Instant::now() + wait;
loop {
let (response, output_notify) = {
let process_map = self.inner.processes.lock().await;
let process = process_map.get(&params.process_id).ok_or_else(|| {
invalid_request(format!("unknown process id {}", params.process_id))
})?;
let ProcessEntry::Running(process) = process else {
return Err(invalid_request(format!(
"process id {} is starting",
params.process_id
)));
};
let mut chunks = Vec::new();
let mut total_bytes = 0;
let mut next_seq = process.next_seq;
for retained in process.output.iter().filter(|chunk| chunk.seq > after_seq) {
let chunk_len = retained.chunk.len();
if !chunks.is_empty() && total_bytes + chunk_len > max_bytes {
break;
}
total_bytes += chunk_len;
chunks.push(ProcessOutputChunk {
seq: retained.seq,
stream: retained.stream,
chunk: retained.chunk.clone().into(),
});
next_seq = retained.seq + 1;
if total_bytes >= max_bytes {
break;
}
}
(
ReadResponse {
chunks,
next_seq,
exited: process.exit_code.is_some(),
exit_code: process.exit_code,
},
Arc::clone(&process.output_notify),
)
};
if !response.chunks.is_empty()
|| response.exited
|| tokio::time::Instant::now() >= deadline
{
return Ok(response);
}
let remaining = deadline.saturating_duration_since(tokio::time::Instant::now());
if remaining.is_zero() {
return Ok(response);
}
let _ = tokio::time::timeout(remaining, output_notify.notified()).await;
}
}
pub(crate) async fn exec_write(
&self,
params: WriteParams,
) -> Result<WriteResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let writer_tx = {
let process_map = self.inner.processes.lock().await;
let process = process_map.get(&params.process_id).ok_or_else(|| {
invalid_request(format!("unknown process id {}", params.process_id))
})?;
let ProcessEntry::Running(process) = process else {
return Err(invalid_request(format!(
"process id {} is starting",
params.process_id
)));
};
if !process.tty {
return Err(invalid_request(format!(
"stdin is closed for process {}",
params.process_id
)));
}
process.session.writer_sender()
};
writer_tx
.send(params.chunk.into_inner())
.await
.map_err(|_| internal_error("failed to write to process stdin".to_string()))?;
Ok(WriteResponse { accepted: true })
}
pub(crate) async fn terminate_process(
&self,
params: TerminateParams,
) -> Result<TerminateResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let running = {
let process_map = self.inner.processes.lock().await;
match process_map.get(&params.process_id) {
Some(ProcessEntry::Running(process)) => {
if process.exit_code.is_some() {
return Ok(TerminateResponse { running: false });
}
process.session.terminate();
true
}
Some(ProcessEntry::Starting) | None => false,
}
};
Ok(TerminateResponse { running })
}
}
#[async_trait]
impl ExecProcess for LocalProcess {
async fn start(&self, params: ExecParams) -> Result<ExecResponse, ExecServerError> {
self.exec(params).await.map_err(map_handler_error)
}
async fn read(&self, params: ReadParams) -> Result<ReadResponse, ExecServerError> {
self.exec_read(params).await.map_err(map_handler_error)
}
async fn write(
&self,
process_id: &str,
chunk: Vec<u8>,
) -> Result<WriteResponse, ExecServerError> {
self.exec_write(WriteParams {
process_id: process_id.to_string(),
chunk: chunk.into(),
})
.await
.map_err(map_handler_error)
}
async fn terminate(&self, process_id: &str) -> Result<TerminateResponse, ExecServerError> {
self.terminate_process(TerminateParams {
process_id: process_id.to_string(),
})
.await
.map_err(map_handler_error)
}
fn subscribe_events(&self) -> broadcast::Receiver<ExecServerEvent> {
self.inner.events_tx.subscribe()
}
}
fn map_handler_error(error: JSONRPCErrorError) -> ExecServerError {
ExecServerError::Server {
code: error.code,
message: error.message,
}
}
async fn stream_output(
process_id: String,
stream: ExecOutputStream,
mut receiver: tokio::sync::mpsc::Receiver<Vec<u8>>,
inner: Arc<Inner>,
output_notify: Arc<Notify>,
) {
while let Some(chunk) = receiver.recv().await {
let notification = {
let mut processes = inner.processes.lock().await;
let Some(entry) = processes.get_mut(&process_id) else {
break;
};
let ProcessEntry::Running(process) = entry else {
break;
};
let seq = process.next_seq;
process.next_seq += 1;
process.retained_bytes += chunk.len();
process.output.push_back(RetainedOutputChunk {
seq,
stream,
chunk: chunk.clone(),
});
while process.retained_bytes > RETAINED_OUTPUT_BYTES_PER_PROCESS {
let Some(evicted) = process.output.pop_front() else {
break;
};
process.retained_bytes = process.retained_bytes.saturating_sub(evicted.chunk.len());
warn!(
"retained output cap exceeded for process {process_id}; dropping oldest output"
);
}
ExecOutputDeltaNotification {
process_id: process_id.clone(),
stream,
chunk: chunk.into(),
}
};
output_notify.notify_waiters();
let _ = inner
.events_tx
.send(ExecServerEvent::OutputDelta(notification.clone()));
if inner
.notifications
.notify(crate::protocol::EXEC_OUTPUT_DELTA_METHOD, &notification)
.await
.is_err()
{
break;
}
}
}
async fn watch_exit(
process_id: String,
exit_rx: tokio::sync::oneshot::Receiver<i32>,
inner: Arc<Inner>,
output_notify: Arc<Notify>,
) {
let exit_code = exit_rx.await.unwrap_or(-1);
{
let mut processes = inner.processes.lock().await;
if let Some(ProcessEntry::Running(process)) = processes.get_mut(&process_id) {
process.exit_code = Some(exit_code);
}
}
output_notify.notify_waiters();
let notification = ExecExitedNotification {
process_id: process_id.clone(),
exit_code,
};
let _ = inner
.events_tx
.send(ExecServerEvent::Exited(notification.clone()));
if inner
.notifications
.notify(crate::protocol::EXEC_EXITED_METHOD, &notification)
.await
.is_err()
{
return;
}
tokio::time::sleep(EXITED_PROCESS_RETENTION).await;
let mut processes = inner.processes.lock().await;
if matches!(
processes.get(&process_id),
Some(ProcessEntry::Running(process)) if process.exit_code == Some(exit_code)
) {
processes.remove(&process_id);
}
}
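`stream_output` above appends every chunk to a per-process `VecDeque` and evicts from the front once `retained_bytes` passes `RETAINED_OUTPUT_BYTES_PER_PROCESS`. A simplified std-only sketch of that eviction loop (the type is illustrative and the cap is shrunk so the behavior is easy to check):

```rust
use std::collections::VecDeque;

struct RetainedChunk {
    seq: u64,
    chunk: Vec<u8>,
}

struct OutputBuffer {
    output: VecDeque<RetainedChunk>,
    retained_bytes: usize,
    next_seq: u64,
    cap: usize,
}

impl OutputBuffer {
    fn new(cap: usize) -> Self {
        OutputBuffer {
            output: VecDeque::new(),
            retained_bytes: 0,
            next_seq: 1,
            cap,
        }
    }

    // Mirrors stream_output: tag the chunk with the next seq, append it,
    // then drop from the front until the retained total is back under the cap.
    fn push(&mut self, chunk: Vec<u8>) -> u64 {
        let seq = self.next_seq;
        self.next_seq += 1;
        self.retained_bytes += chunk.len();
        self.output.push_back(RetainedChunk { seq, chunk });
        while self.retained_bytes > self.cap {
            let Some(evicted) = self.output.pop_front() else { break };
            self.retained_bytes = self.retained_bytes.saturating_sub(evicted.chunk.len());
        }
        seq
    }
}
```

Because eviction only ever removes the oldest entries, surviving chunks keep contiguous, increasing `seq` values, which is what lets `exec_read`'s `after_seq` filter stay correct after a drop.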


@@ -0,0 +1,35 @@
use async_trait::async_trait;
use tokio::sync::broadcast;
use crate::ExecServerError;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteResponse;
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ExecServerEvent {
OutputDelta(ExecOutputDeltaNotification),
Exited(ExecExitedNotification),
}
#[async_trait]
pub trait ExecProcess: Send + Sync {
async fn start(&self, params: ExecParams) -> Result<ExecResponse, ExecServerError>;
async fn read(&self, params: ReadParams) -> Result<ReadResponse, ExecServerError>;
async fn write(
&self,
process_id: &str,
chunk: Vec<u8>,
) -> Result<WriteResponse, ExecServerError>;
async fn terminate(&self, process_id: &str) -> Result<TerminateResponse, ExecServerError>;
fn subscribe_events(&self) -> broadcast::Receiver<ExecServerEvent>;
}
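`subscribe_events` hands each caller its own `broadcast::Receiver`, so every subscriber observes every `OutputDelta` and `Exited` event independently. A std-only approximation of that fan-out (a real tokio broadcast channel also bounds capacity and reports lagging receivers; the `EventBus` type here is purely illustrative):

```rust
use std::sync::Mutex;
use std::sync::mpsc::{Receiver, Sender, channel};

#[derive(Clone, Debug, PartialEq)]
enum Event {
    OutputDelta(String),
    Exited(i32),
}

#[derive(Default)]
struct EventBus {
    subscribers: Mutex<Vec<Sender<Event>>>,
}

impl EventBus {
    // Each subscriber gets a private channel, like broadcast::Sender::subscribe.
    fn subscribe(&self) -> Receiver<Event> {
        let (tx, rx) = channel();
        self.subscribers.lock().unwrap().push(tx);
        rx
    }

    // Clone the event to every live subscriber; drop senders whose
    // receiver has gone away.
    fn send(&self, event: Event) {
        self.subscribers
            .lock()
            .unwrap()
            .retain(|tx| tx.send(event.clone()).is_ok());
    }
}
```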


@@ -0,0 +1,51 @@
use async_trait::async_trait;
use tokio::sync::broadcast;
use crate::ExecProcess;
use crate::ExecServerClient;
use crate::ExecServerError;
use crate::ExecServerEvent;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteResponse;
#[derive(Clone)]
pub(crate) struct RemoteProcess {
client: ExecServerClient,
}
impl RemoteProcess {
pub(crate) fn new(client: ExecServerClient) -> Self {
Self { client }
}
}
#[async_trait]
impl ExecProcess for RemoteProcess {
async fn start(&self, params: ExecParams) -> Result<ExecResponse, ExecServerError> {
self.client.exec(params).await
}
async fn read(&self, params: ReadParams) -> Result<ReadResponse, ExecServerError> {
self.client.read(params).await
}
async fn write(
&self,
process_id: &str,
chunk: Vec<u8>,
) -> Result<WriteResponse, ExecServerError> {
self.client.write(process_id, chunk).await
}
async fn terminate(&self, process_id: &str) -> Result<TerminateResponse, ExecServerError> {
self.client.terminate(process_id).await
}
fn subscribe_events(&self) -> broadcast::Receiver<ExecServerEvent> {
self.client.event_receiver()
}
}


@@ -1,5 +1,6 @@
mod filesystem;
mod handler;
mod process_handler;
mod processor;
mod registry;
mod transport;


@@ -36,7 +36,7 @@ pub(crate) struct ExecServerFileSystem {
impl Default for ExecServerFileSystem {
fn default() -> Self {
Self {
file_system: Arc::new(Environment.get_filesystem()),
file_system: Arc::new(Environment::default().get_filesystem()),
}
}
}


@@ -1,10 +1,3 @@
use std::collections::HashMap;
use std::collections::VecDeque;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use std::time::Duration;
use codex_app_server_protocol::FsCopyParams;
use codex_app_server_protocol::FsCopyResponse;
use codex_app_server_protocol::FsCreateDirectoryParams;
@@ -20,19 +13,10 @@ use codex_app_server_protocol::FsRemoveResponse;
use codex_app_server_protocol::FsWriteFileParams;
use codex_app_server_protocol::FsWriteFileResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_utils_pty::ExecCommandSession;
use codex_utils_pty::TerminalSize;
use tokio::sync::Mutex;
use tokio::sync::Notify;
use tracing::warn;
use crate::protocol::ExecExitedNotification;
use crate::protocol::ExecOutputDeltaNotification;
use crate::protocol::ExecOutputStream;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::InitializeResponse;
use crate::protocol::ProcessOutputChunk;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateParams;
@@ -40,336 +24,64 @@ use crate::protocol::TerminateResponse;
use crate::protocol::WriteParams;
use crate::protocol::WriteResponse;
use crate::rpc::RpcNotificationSender;
use crate::rpc::internal_error;
use crate::rpc::invalid_params;
use crate::rpc::invalid_request;
use crate::server::filesystem::ExecServerFileSystem;
const RETAINED_OUTPUT_BYTES_PER_PROCESS: usize = 1024 * 1024;
#[cfg(test)]
const EXITED_PROCESS_RETENTION: Duration = Duration::from_millis(25);
#[cfg(not(test))]
const EXITED_PROCESS_RETENTION: Duration = Duration::from_secs(30);
#[derive(Clone)]
struct RetainedOutputChunk {
seq: u64,
stream: ExecOutputStream,
chunk: Vec<u8>,
}
struct RunningProcess {
session: ExecCommandSession,
tty: bool,
output: VecDeque<RetainedOutputChunk>,
retained_bytes: usize,
next_seq: u64,
exit_code: Option<i32>,
output_notify: Arc<Notify>,
}
enum ProcessEntry {
Starting,
Running(Box<RunningProcess>),
}
use crate::server::process_handler::ExecServerProcess;
pub(crate) struct ExecServerHandler {
notifications: RpcNotificationSender,
process: ExecServerProcess,
file_system: ExecServerFileSystem,
processes: Arc<Mutex<HashMap<String, ProcessEntry>>>,
initialize_requested: AtomicBool,
initialized: AtomicBool,
}
impl ExecServerHandler {
pub(crate) fn new(notifications: RpcNotificationSender) -> Self {
Self {
notifications,
process: ExecServerProcess::new(notifications),
file_system: ExecServerFileSystem::default(),
processes: Arc::new(Mutex::new(HashMap::new())),
initialize_requested: AtomicBool::new(false),
initialized: AtomicBool::new(false),
}
}
pub(crate) async fn shutdown(&self) {
let remaining = {
let mut processes = self.processes.lock().await;
processes
.drain()
.filter_map(|(_, process)| match process {
ProcessEntry::Starting => None,
ProcessEntry::Running(process) => Some(process),
})
.collect::<Vec<_>>()
};
for process in remaining {
process.session.terminate();
}
self.process.shutdown().await;
}
pub(crate) fn initialize(&self) -> Result<InitializeResponse, JSONRPCErrorError> {
if self.initialize_requested.swap(true, Ordering::SeqCst) {
return Err(invalid_request(
"initialize may only be sent once per connection".to_string(),
));
}
Ok(InitializeResponse {})
self.process.initialize()
}
pub(crate) fn initialized(&self) -> Result<(), String> {
if !self.initialize_requested.load(Ordering::SeqCst) {
return Err("received `initialized` notification before `initialize`".into());
}
self.initialized.store(true, Ordering::SeqCst);
Ok(())
}
fn require_initialized_for(&self, method_family: &str) -> Result<(), JSONRPCErrorError> {
if !self.initialize_requested.load(Ordering::SeqCst) {
return Err(invalid_request(format!(
"client must call initialize before using {method_family} methods"
)));
}
if !self.initialized.load(Ordering::SeqCst) {
return Err(invalid_request(format!(
"client must send initialized before using {method_family} methods"
)));
}
Ok(())
self.process.initialized()
}
pub(crate) async fn exec(&self, params: ExecParams) -> Result<ExecResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let process_id = params.process_id.clone();
let (program, args) = params
.argv
.split_first()
.ok_or_else(|| invalid_params("argv must not be empty".to_string()))?;
{
let mut process_map = self.processes.lock().await;
if process_map.contains_key(&process_id) {
return Err(invalid_request(format!(
"process {process_id} already exists"
)));
}
process_map.insert(process_id.clone(), ProcessEntry::Starting);
}
let spawned_result = if params.tty {
codex_utils_pty::spawn_pty_process(
program,
args,
params.cwd.as_path(),
&params.env,
&params.arg0,
TerminalSize::default(),
)
.await
} else {
codex_utils_pty::spawn_pipe_process_no_stdin(
program,
args,
params.cwd.as_path(),
&params.env,
&params.arg0,
)
.await
};
let spawned = match spawned_result {
Ok(spawned) => spawned,
Err(err) => {
let mut process_map = self.processes.lock().await;
if matches!(process_map.get(&process_id), Some(ProcessEntry::Starting)) {
process_map.remove(&process_id);
}
return Err(internal_error(err.to_string()));
}
};
let output_notify = Arc::new(Notify::new());
{
let mut process_map = self.processes.lock().await;
process_map.insert(
process_id.clone(),
ProcessEntry::Running(Box::new(RunningProcess {
session: spawned.session,
tty: params.tty,
output: VecDeque::new(),
retained_bytes: 0,
next_seq: 1,
exit_code: None,
output_notify: Arc::clone(&output_notify),
})),
);
}
tokio::spawn(stream_output(
process_id.clone(),
if params.tty {
ExecOutputStream::Pty
} else {
ExecOutputStream::Stdout
},
spawned.stdout_rx,
self.notifications.clone(),
Arc::clone(&self.processes),
Arc::clone(&output_notify),
));
tokio::spawn(stream_output(
process_id.clone(),
if params.tty {
ExecOutputStream::Pty
} else {
ExecOutputStream::Stderr
},
spawned.stderr_rx,
self.notifications.clone(),
Arc::clone(&self.processes),
Arc::clone(&output_notify),
));
tokio::spawn(watch_exit(
process_id.clone(),
spawned.exit_rx,
self.notifications.clone(),
Arc::clone(&self.processes),
output_notify,
));
Ok(ExecResponse { process_id })
self.process.exec(params).await
}
pub(crate) async fn exec_read(
&self,
params: ReadParams,
) -> Result<ReadResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let after_seq = params.after_seq.unwrap_or(0);
let max_bytes = params.max_bytes.unwrap_or(usize::MAX);
let wait = Duration::from_millis(params.wait_ms.unwrap_or(0));
let deadline = tokio::time::Instant::now() + wait;
loop {
let (response, output_notify) = {
let process_map = self.processes.lock().await;
let process = process_map.get(&params.process_id).ok_or_else(|| {
invalid_request(format!("unknown process id {}", params.process_id))
})?;
let ProcessEntry::Running(process) = process else {
return Err(invalid_request(format!(
"process id {} is starting",
params.process_id
)));
};
let mut chunks = Vec::new();
let mut total_bytes = 0;
let mut next_seq = process.next_seq;
for retained in process.output.iter().filter(|chunk| chunk.seq > after_seq) {
let chunk_len = retained.chunk.len();
if !chunks.is_empty() && total_bytes + chunk_len > max_bytes {
break;
}
total_bytes += chunk_len;
chunks.push(ProcessOutputChunk {
seq: retained.seq,
stream: retained.stream,
chunk: retained.chunk.clone().into(),
});
next_seq = retained.seq + 1;
if total_bytes >= max_bytes {
break;
}
}
(
ReadResponse {
chunks,
next_seq,
exited: process.exit_code.is_some(),
exit_code: process.exit_code,
},
Arc::clone(&process.output_notify),
)
};
if !response.chunks.is_empty()
|| response.exited
|| tokio::time::Instant::now() >= deadline
{
return Ok(response);
}
let remaining = deadline.saturating_duration_since(tokio::time::Instant::now());
if remaining.is_zero() {
return Ok(response);
}
let _ = tokio::time::timeout(remaining, output_notify.notified()).await;
}
self.process.exec_read(params).await
}
pub(crate) async fn exec_write(
&self,
params: WriteParams,
) -> Result<WriteResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let writer_tx = {
let process_map = self.processes.lock().await;
let process = process_map.get(&params.process_id).ok_or_else(|| {
invalid_request(format!("unknown process id {}", params.process_id))
})?;
let ProcessEntry::Running(process) = process else {
return Err(invalid_request(format!(
"process id {} is starting",
params.process_id
)));
};
if !process.tty {
return Err(invalid_request(format!(
"stdin is closed for process {}",
params.process_id
)));
}
process.session.writer_sender()
};
writer_tx
.send(params.chunk.into_inner())
.await
.map_err(|_| internal_error("failed to write to process stdin".to_string()))?;
Ok(WriteResponse { accepted: true })
self.process.exec_write(params).await
}
pub(crate) async fn terminate(
&self,
params: TerminateParams,
) -> Result<TerminateResponse, JSONRPCErrorError> {
self.require_initialized_for("exec")?;
let running = {
let process_map = self.processes.lock().await;
match process_map.get(&params.process_id) {
Some(ProcessEntry::Running(process)) => {
if process.exit_code.is_some() {
return Ok(TerminateResponse { running: false });
}
process.session.terminate();
true
}
Some(ProcessEntry::Starting) | None => false,
}
};
Ok(TerminateResponse { running })
self.process.terminate(params).await
}
pub(crate) async fn fs_read_file(
&self,
params: FsReadFileParams,
) -> Result<FsReadFileResponse, JSONRPCErrorError> {
self.require_initialized_for("filesystem")?;
self.process.require_initialized_for("filesystem")?;
self.file_system.read_file(params).await
}
@@ -377,7 +89,7 @@ impl ExecServerHandler {
&self,
params: FsWriteFileParams,
) -> Result<FsWriteFileResponse, JSONRPCErrorError> {
self.require_initialized_for("filesystem")?;
self.process.require_initialized_for("filesystem")?;
self.file_system.write_file(params).await
}
@@ -385,7 +97,7 @@ impl ExecServerHandler {
&self,
params: FsCreateDirectoryParams,
) -> Result<FsCreateDirectoryResponse, JSONRPCErrorError> {
self.require_initialized_for("filesystem")?;
self.process.require_initialized_for("filesystem")?;
self.file_system.create_directory(params).await
}
@@ -393,7 +105,7 @@ impl ExecServerHandler {
&self,
params: FsGetMetadataParams,
) -> Result<FsGetMetadataResponse, JSONRPCErrorError> {
self.require_initialized_for("filesystem")?;
self.process.require_initialized_for("filesystem")?;
self.file_system.get_metadata(params).await
}
@@ -401,7 +113,7 @@ impl ExecServerHandler {
&self,
params: FsReadDirectoryParams,
) -> Result<FsReadDirectoryResponse, JSONRPCErrorError> {
self.require_initialized_for("filesystem")?;
self.process.require_initialized_for("filesystem")?;
self.file_system.read_directory(params).await
}
@@ -409,7 +121,7 @@ impl ExecServerHandler {
&self,
params: FsRemoveParams,
) -> Result<FsRemoveResponse, JSONRPCErrorError> {
self.require_initialized_for("filesystem")?;
self.process.require_initialized_for("filesystem")?;
self.file_system.remove(params).await
}
@@ -417,101 +129,10 @@ impl ExecServerHandler {
&self,
params: FsCopyParams,
) -> Result<FsCopyResponse, JSONRPCErrorError> {
self.require_initialized_for("filesystem")?;
self.process.require_initialized_for("filesystem")?;
self.file_system.copy(params).await
}
}
async fn stream_output(
process_id: String,
stream: ExecOutputStream,
mut receiver: tokio::sync::mpsc::Receiver<Vec<u8>>,
notifications: RpcNotificationSender,
processes: Arc<Mutex<HashMap<String, ProcessEntry>>>,
output_notify: Arc<Notify>,
) {
while let Some(chunk) = receiver.recv().await {
let notification = {
let mut processes = processes.lock().await;
let Some(entry) = processes.get_mut(&process_id) else {
break;
};
let ProcessEntry::Running(process) = entry else {
break;
};
let seq = process.next_seq;
process.next_seq += 1;
process.retained_bytes += chunk.len();
process.output.push_back(RetainedOutputChunk {
seq,
stream,
chunk: chunk.clone(),
});
while process.retained_bytes > RETAINED_OUTPUT_BYTES_PER_PROCESS {
let Some(evicted) = process.output.pop_front() else {
break;
};
process.retained_bytes = process.retained_bytes.saturating_sub(evicted.chunk.len());
warn!(
"retained output cap exceeded for process {process_id}; dropping oldest output"
);
}
ExecOutputDeltaNotification {
process_id: process_id.clone(),
stream,
chunk: chunk.into(),
}
};
output_notify.notify_waiters();
if notifications
.notify(crate::protocol::EXEC_OUTPUT_DELTA_METHOD, &notification)
.await
.is_err()
{
break;
}
}
}
async fn watch_exit(
process_id: String,
exit_rx: tokio::sync::oneshot::Receiver<i32>,
notifications: RpcNotificationSender,
processes: Arc<Mutex<HashMap<String, ProcessEntry>>>,
output_notify: Arc<Notify>,
) {
let exit_code = exit_rx.await.unwrap_or(-1);
{
let mut processes = processes.lock().await;
if let Some(ProcessEntry::Running(process)) = processes.get_mut(&process_id) {
process.exit_code = Some(exit_code);
}
}
output_notify.notify_waiters();
if notifications
.notify(
crate::protocol::EXEC_EXITED_METHOD,
&ExecExitedNotification {
process_id: process_id.clone(),
exit_code,
},
)
.await
.is_err()
{
return;
}
tokio::time::sleep(EXITED_PROCESS_RETENTION).await;
let mut processes = processes.lock().await;
if matches!(
processes.get(&process_id),
Some(ProcessEntry::Running(process)) if process.exit_code == Some(exit_code)
) {
processes.remove(&process_id);
}
}
#[cfg(test)]
mod tests;


@@ -0,0 +1,70 @@
use codex_app_server_protocol::JSONRPCErrorError;
use crate::local_process::LocalProcess;
use crate::protocol::ExecParams;
use crate::protocol::ExecResponse;
use crate::protocol::InitializeResponse;
use crate::protocol::ReadParams;
use crate::protocol::ReadResponse;
use crate::protocol::TerminateParams;
use crate::protocol::TerminateResponse;
use crate::protocol::WriteParams;
use crate::protocol::WriteResponse;
use crate::rpc::RpcNotificationSender;
#[derive(Clone)]
pub(crate) struct ExecServerProcess {
process: LocalProcess,
}
impl ExecServerProcess {
pub(crate) fn new(notifications: RpcNotificationSender) -> Self {
Self {
process: LocalProcess::new(notifications),
}
}
pub(crate) async fn shutdown(&self) {
self.process.shutdown().await;
}
pub(crate) fn initialize(&self) -> Result<InitializeResponse, JSONRPCErrorError> {
self.process.initialize()
}
pub(crate) fn initialized(&self) -> Result<(), String> {
self.process.initialized()
}
pub(crate) fn require_initialized_for(
&self,
method_family: &str,
) -> Result<(), JSONRPCErrorError> {
self.process.require_initialized_for(method_family)
}
pub(crate) async fn exec(&self, params: ExecParams) -> Result<ExecResponse, JSONRPCErrorError> {
self.process.exec(params).await
}
pub(crate) async fn exec_read(
&self,
params: ReadParams,
) -> Result<ReadResponse, JSONRPCErrorError> {
self.process.exec_read(params).await
}
pub(crate) async fn exec_write(
&self,
params: WriteParams,
) -> Result<WriteResponse, JSONRPCErrorError> {
self.process.exec_write(params).await
}
pub(crate) async fn terminate(
&self,
params: TerminateParams,
) -> Result<TerminateResponse, JSONRPCErrorError> {
self.process.terminate_process(params).await
}
}


@@ -25,6 +25,7 @@ const EVENT_TIMEOUT: Duration = Duration::from_secs(5);
pub(crate) struct ExecServerHarness {
child: Child,
websocket_url: String,
websocket: tokio_tungstenite::WebSocketStream<
tokio_tungstenite::MaybeTlsStream<tokio::net::TcpStream>,
>,
@@ -50,12 +51,17 @@ pub(crate) async fn exec_server() -> anyhow::Result<ExecServerHarness> {
let (websocket, _) = connect_websocket_when_ready(&websocket_url).await?;
Ok(ExecServerHarness {
child,
websocket_url,
websocket,
next_request_id: 1,
})
}
impl ExecServerHarness {
pub(crate) fn websocket_url(&self) -> &str {
&self.websocket_url
}
pub(crate) async fn send_request(
&mut self,
method: &str,


@@ -0,0 +1,89 @@
#![cfg(unix)]
mod common;
use std::sync::Arc;
use anyhow::Result;
use codex_exec_server::Environment;
use codex_exec_server::ExecParams;
use codex_exec_server::ExecProcess;
use codex_exec_server::ExecResponse;
use codex_exec_server::ReadParams;
use pretty_assertions::assert_eq;
use common::exec_server::ExecServerHarness;
use common::exec_server::exec_server;
struct ProcessContext {
process: Arc<dyn ExecProcess>,
_server: Option<ExecServerHarness>,
}
async fn create_process_context(use_remote: bool) -> Result<ProcessContext> {
if use_remote {
let server = exec_server().await?;
let environment = Environment::create(Some(server.websocket_url().to_string())).await?;
Ok(ProcessContext {
process: environment.get_executor(),
_server: Some(server),
})
} else {
let environment = Environment::create(None).await?;
Ok(ProcessContext {
process: environment.get_executor(),
_server: None,
})
}
}
async fn assert_exec_process_starts_and_exits(use_remote: bool) -> Result<()> {
let context = create_process_context(use_remote).await?;
let response = context
.process
.start(ExecParams {
process_id: "proc-1".to_string(),
argv: vec!["true".to_string()],
cwd: std::env::current_dir()?,
env: Default::default(),
tty: false,
arg0: None,
})
.await?;
assert_eq!(
response,
ExecResponse {
process_id: "proc-1".to_string(),
}
);
let mut next_seq = 0;
loop {
let read = context
.process
.read(ReadParams {
process_id: "proc-1".to_string(),
after_seq: Some(next_seq),
max_bytes: None,
wait_ms: Some(100),
})
.await?;
next_seq = read.next_seq;
if read.exited {
assert_eq!(read.exit_code, Some(0));
break;
}
}
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_process_starts_and_exits_locally() -> Result<()> {
assert_exec_process_starts_and_exits(false).await
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_process_starts_and_exits_remotely() -> Result<()> {
assert_exec_process_starts_and_exits(true).await
}
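The integration test above drives `read` with an `after_seq` cursor until `exited` is set, advancing the cursor from each response so chunks are never re-delivered. A std-only sketch of that cursor contract (types are illustrative; the real loop parks on `wait_ms` via the output `Notify` rather than spinning):

```rust
// Illustrative producer: each chunk carries a strictly increasing seq.
struct Producer {
    chunks: Vec<(u64, &'static str)>,
    exited: bool,
}

impl Producer {
    // Mirrors exec_read's filter: only chunks newer than the cursor.
    fn read_after(&self, after_seq: u64) -> (Vec<(u64, &'static str)>, bool) {
        let out: Vec<_> = self
            .chunks
            .iter()
            .copied()
            .filter(|(seq, _)| *seq > after_seq)
            .collect();
        (out, self.exited)
    }
}

// Drain everything exactly once, stopping when the producer has exited.
fn drain(p: &Producer) -> String {
    let mut cursor = 0u64;
    let mut collected = String::new();
    loop {
        let (chunks, exited) = p.read_after(cursor);
        for (seq, text) in chunks {
            collected.push_str(text);
            cursor = seq; // advance past everything we have consumed
        }
        if exited {
            break;
        }
    }
    collected
}
```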