## Why

App-server clients sometimes need argv-based local process execution while sandbox policy is controlled outside Codex. Those environments can reject sandbox-disabling paths before a command ever starts, even when the caller intentionally wants unsandboxed execution. This PR adds a distinct `process/*` API for that use case instead of extending `command/exec` with another sandbox-disabling shape. Keeping the new surface separate also makes the future removal of `command/exec` simpler: clients that need explicit process lifecycle control can move to the newer handle-based API without depending on `command/exec` business logic.

## What changed

- Added v2 process lifecycle methods: `process/spawn`, `process/writeStdin`, `process/resizePty`, and `process/kill` (a rough client-side sketch follows at the end of this description).
- Added process notifications: `process/outputDelta` for streamed stdout/stderr chunks and `process/exited` for final exit status and buffered output.
- Made `process/spawn` intentionally unsandboxed and omitted sandbox-selection fields such as `sandboxPolicy` and `permissionProfile`.
- Added client-supplied, connection-scoped `processHandle` values for follow-up control requests and notification routing.
- Supported cwd, environment overrides, PTY mode and size, stdin streaming, stdout/stderr streaming, per-stream output caps, and timeout controls.
- Killed active process sessions when the originating app-server connection closes.
- Wired the implementation through the modular `request_processors/` app-server layout, with process-handle request serialization for follow-up control calls.
- Updated generated JSON/TypeScript schema fixtures and documented the new API in `codex-rs/app-server/README.md`.
- Added v2 app-server integration coverage in `codex-rs/app-server/tests/suite/v2/process_exec.rs` for spawn acknowledgement before exit, buffered output caps, and process termination.

## Verification

- `cargo test -p codex-app-server-protocol`
- `cargo test -p codex-app-server`

---------

Co-authored-by: Owen Lin <owen@openai.com>
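
## Illustrative client-side sketch

For orientation, here is a minimal sketch of how a client might drive the new surface. The method and notification names are the ones listed above; everything else (field names such as `argv`, `cwd`, `env`, `pty`, the PTY shape, and the JSON-RPC-style framing) is an illustrative assumption, not the generated schema.

```typescript
// Hedged sketch of the v2 process/* surface from a client's point of view.
// Only the method/notification names below are from this PR; the field
// names and shapes are assumptions for illustration.

type ProcessHandle = string; // client-supplied, scoped to the connection

interface ProcessSpawnParams {
  processHandle: ProcessHandle;          // chosen by the client, reused for follow-up calls
  argv: string[];                        // argv-based, intentionally unsandboxed
  cwd?: string;
  env?: Record<string, string>;          // environment overrides
  pty?: { cols: number; rows: number };  // enable PTY mode with an initial size
  maxOutputBytesPerStream?: number;      // per-stream output cap (assumed name)
  timeoutMs?: number;                    // timeout control (assumed name)
}

interface ProcessOutputDeltaNotification {
  processHandle: ProcessHandle;
  stream: "stdout" | "stderr";
  chunk: string;                         // streamed output chunk
}

interface ProcessExitedNotification {
  processHandle: ProcessHandle;
  exitCode: number | null;               // null if terminated by a signal (assumption)
  stdout: string;                        // buffered output, subject to the caps
  stderr: string;
}

// A typical exchange, assuming a JSON-RPC style transport:
//   -> process/spawn        { processHandle: "p1", argv: ["bash", "-lc", "ls"] }
//   <- process/outputDelta  { processHandle: "p1", stream: "stdout", chunk: "..." }
//   -> process/writeStdin   { processHandle: "p1", data: "y\n" }
//   -> process/resizePty    { processHandle: "p1", cols: 120, rows: 40 }
//   -> process/kill         { processHandle: "p1" }
//   <- process/exited       { processHandle: "p1", exitCode: 0, ... }
```

The handle-based flow is the key design point: because the client picks the `processHandle`, it can issue `process/writeStdin` or `process/kill` immediately after `process/spawn` without waiting for a server-assigned identifier, and notifications are routed back on the same handle for the life of the connection.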