# codex-responses-api-proxy

tl;dr:
```shell
# Launch the proxy, dump request/response pairs to /tmp/proxy
cd path/to/codex/codex-rs
cargo build
echo $OPENAI_API_KEY | ./target/debug/codex-responses-api-proxy \
  --port 60001 \
  --dump-dir /tmp/proxy
```

Add this to `~/.codex/config.toml`:

```toml
[model_providers.codex-responses-api-proxy]
name = 'codex-responses-api-proxy'
base_url = 'http://127.0.0.1:60001/v1'
wire_api = 'responses'

[profiles.proxy]
model_provider = "codex-responses-api-proxy"
```

Use it:

```shell
codex -p proxy
```
## Detailed docs
A strict HTTP proxy that forwards only `POST /v1/responses` requests to the OpenAI API (`https://api.openai.com`), injecting the `Authorization: Bearer $OPENAI_API_KEY` header. Everything else is rejected with `403 Forbidden`.
## Expected Usage
IMPORTANT: `codex-responses-api-proxy` is designed to be run by a privileged user with access to `OPENAI_API_KEY` so that an unprivileged user cannot inspect or tamper with the process. However, if `--http-shutdown` is specified, an unprivileged user can make a `GET` request to `/shutdown` to shut down the server, since an unprivileged user cannot send `SIGTERM` to kill the process.
A privileged user (i.e., `root` or a user with `sudo`) who has access to `OPENAI_API_KEY` would run the following to start the server, as `codex-responses-api-proxy` reads the auth token from stdin:

```shell
printenv OPENAI_API_KEY | env -u OPENAI_API_KEY codex-responses-api-proxy --http-shutdown --server-info /tmp/server-info.json
```
A non-privileged user would then run Codex as follows, specifying the `model_provider` dynamically:

```shell
PROXY_PORT=$(jq .port /tmp/server-info.json)
PROXY_BASE_URL="http://127.0.0.1:${PROXY_PORT}"
codex exec -c "model_providers.openai-proxy={ name = 'OpenAI Proxy', base_url = '${PROXY_BASE_URL}/v1', wire_api='responses' }" \
  -c model_provider="openai-proxy" \
  'Your prompt here'
```
When the unprivileged user is finished, they can shut down the server using `curl` (since `kill -SIGTERM` is not an option):

```shell
curl --fail --silent --show-error "${PROXY_BASE_URL}/shutdown"
```
## Behavior
- Reads the API key from stdin. All callers should pipe the key in (for example, `printenv OPENAI_API_KEY | codex-responses-api-proxy`).
- Formats the header value as `Bearer <key>` and attempts to `mlock(2)` the memory holding that header so it is not swapped to disk.
- Listens on the provided port, or an ephemeral port if `--port` is not specified.
- Accepts exactly `POST /v1/responses` (no query string). The request body is forwarded to `https://api.openai.com/v1/responses` with `Authorization: Bearer <key>` set. All original request headers (except any incoming `Authorization`) are forwarded upstream, with `Host` overridden to `api.openai.com`. For other requests, it responds with `403`.
- Optionally writes a single-line JSON file with server info, currently `{ "port": <u16>, "pid": <u32> }`.
- Optionally writes request/response JSON dumps to a directory. Each accepted request gets a pair of files that share a sequence/timestamp prefix, for example `000001-1846179912345-request.json` and `000001-1846179912345-response.json`. Header values are dumped in full except `Authorization` and any header whose name includes `cookie`, which are redacted. Bodies are written as parsed JSON when possible, otherwise as UTF-8 text.
- Optional `--http-shutdown` enables `GET /shutdown` to terminate the process with exit code `0`. This allows one user (e.g., `root`) to start the proxy and another unprivileged user on the host to shut it down.
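The accept/reject rule above is small enough to sketch. `is_allowed` below is a hypothetical helper written for illustration, not the proxy's actual source:

```rust
// Hypothetical helper illustrating the routing rule: only POST /v1/responses
// with no query string is forwarded; everything else gets 403 Forbidden.
fn is_allowed(method: &str, path_and_query: &str) -> bool {
    method == "POST" && path_and_query == "/v1/responses"
}

fn main() {
    assert!(is_allowed("POST", "/v1/responses"));
    assert!(!is_allowed("GET", "/v1/responses")); // wrong method -> 403
    assert!(!is_allowed("POST", "/v1/responses?stream=true")); // query string -> 403
    assert!(!is_allowed("POST", "/v1/models")); // other path -> 403
    println!("routing rule ok");
}
```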
## CLI

```shell
codex-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown] [--upstream-url <URL>] [--dump-dir <DIR>]
```
- `--port <PORT>`: Port to bind on `127.0.0.1`. If omitted, an ephemeral port is chosen.
- `--server-info <FILE>`: If set, the proxy writes a single line of JSON with `{ "port": <PORT>, "pid": <PID> }` once listening.
- `--http-shutdown`: If set, enables `GET /shutdown` to exit the process with code `0`.
- `--upstream-url <URL>`: Absolute URL to forward requests to. Defaults to `https://api.openai.com/v1/responses`.
- `--dump-dir <DIR>`: If set, writes one request JSON file and one response JSON file per accepted proxy call under this directory. Filenames use a shared sequence/timestamp prefix so each pair is easy to correlate.
- Authentication is fixed to `Authorization: Bearer <key>` to match the Codex CLI expectations.
For Azure, for example (ensure your deployment accepts `Authorization: Bearer <key>`):

```shell
printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY codex-responses-api-proxy \
  --http-shutdown \
  --server-info /tmp/server-info.json \
  --upstream-url "https://YOUR_PROJECT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/responses?api-version=2025-04-01-preview"
```
## Notes

- Only `POST /v1/responses` is permitted. No query strings are allowed.
- All request headers are forwarded to the upstream call (aside from overriding `Authorization` and `Host`). Response status and content-type are mirrored from upstream.
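The header-forwarding rule in the notes above can be sketched as follows. `forward_headers` is an illustrative stand-in, not the proxy's actual code:

```rust
// Illustrative sketch of the forwarding rule: strip any incoming
// Authorization, override Host to api.openai.com, and pass every other
// header through unchanged. (The real proxy then injects its own
// Authorization: Bearer <key> header.)
fn forward_headers(incoming: &[(&str, &str)]) -> Vec<(String, String)> {
    let mut out: Vec<(String, String)> = incoming
        .iter()
        .filter(|(name, _)| {
            !name.eq_ignore_ascii_case("authorization") && !name.eq_ignore_ascii_case("host")
        })
        .map(|(name, value)| (name.to_string(), value.to_string()))
        .collect();
    out.push(("Host".to_string(), "api.openai.com".to_string()));
    out
}

fn main() {
    let incoming = [
        ("Content-Type", "application/json"),
        ("Authorization", "Bearer client-supplied"), // dropped, never forwarded
        ("Host", "127.0.0.1:60001"),                 // replaced with upstream host
    ];
    let out = forward_headers(&incoming);
    assert_eq!(out.len(), 2);
    assert_eq!(out[0].0, "Content-Type");
    assert_eq!(out[1], ("Host".to_string(), "api.openai.com".to_string()));
    println!("header rule ok");
}
```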
## Hardening Details
Care is taken to restrict access/copying to the value of `OPENAI_API_KEY` retained in memory:
- We leverage `codex_process_hardening` so `codex-responses-api-proxy` is run with standard process-hardening techniques.
- At startup, we allocate a 1024-byte buffer on the stack and copy `"Bearer "` into the start of the buffer.
- We then read from stdin, copying the contents into the buffer after `"Bearer "`.
- After verifying the key matches `/^[a-zA-Z0-9_-]+$/` (and does not exceed the buffer), we create a `String` from that buffer (so the data is now on the heap).
- We zero out the stack-allocated buffer using https://crates.io/crates/zeroize so the zeroing is not optimized away by the compiler.
- We invoke `.leak()` on the `String` so we can treat its contents as a `&'static str`, as it will live for the rest of the process.
- On UNIX, we `mlock(2)` the memory backing the `&'static str`.
- When using the `&'static str` to build an HTTP request, we use `HeaderValue::from_static()` to avoid copying the `&str`.
- We also invoke `.set_sensitive(true)` on the `HeaderValue`, which in theory indicates to other parts of the HTTP stack that the header should be treated with "special care" to avoid leakage.
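The steps above can be sketched with a std-only toy. The real code uses the `zeroize` crate, `mlock(2)`, and `http::HeaderValue`, which are omitted here; `build_header` is a hypothetical name, not the proxy's actual API:

```rust
// Std-only sketch of the key-handling steps described above.
fn build_header(key: &str) -> &'static str {
    const PREFIX: &[u8] = b"Bearer ";

    // 1024-byte stack buffer with "Bearer " copied to the front.
    let mut buf = [0u8; 1024];
    buf[..PREFIX.len()].copy_from_slice(PREFIX);

    // Verify the key matches /^[a-zA-Z0-9_-]+$/ and fits the buffer.
    assert!(!key.is_empty());
    assert!(key
        .bytes()
        .all(|b| b.is_ascii_alphanumeric() || b == b'_' || b == b'-'));
    assert!(PREFIX.len() + key.len() <= buf.len());
    buf[PREFIX.len()..PREFIX.len() + key.len()].copy_from_slice(key.as_bytes());

    // Move the header onto the heap...
    let header = String::from_utf8(buf[..PREFIX.len() + key.len()].to_vec()).unwrap();

    // ...then wipe the stack copy. zeroize does this with volatile writes so
    // the compiler cannot optimize the zeroing away; we mimic that here.
    for b in buf.iter_mut() {
        unsafe { std::ptr::write_volatile(b, 0) };
    }

    // Leak the String so the header lives as a &'static str for the rest of
    // the process (the real code then mlock(2)s this memory on UNIX and wraps
    // it in a HeaderValue marked sensitive).
    header.leak()
}

fn main() {
    let header = build_header("sk-example_123");
    assert_eq!(header, "Bearer sk-example_123");
    println!("{header}");
}
```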