Mirror of https://github.com/openai/codex.git (synced 2026-05-17 01:32:32 +00:00)
dh--request-user-input-tool
17 Commits
a175ddacc0
[codex-analytics] emit terminal review events (#18748)
## Why

Review telemetry should describe reviews as first-class events, not only as counters denormalized onto terminal tool-item events. That lets us analyze guardian and user reviews consistently across command execution, file changes, permissions, and network access, while still preserving the terminal item summaries that existing tool analytics need. To make those review events accurate, analytics also needs the observed completion time for each review and enough command metadata to distinguish `shell` from `unified_exec` reviews.

## What changed

- emit generic `codex_review_event` rows for completed user and guardian reviews, with review subjects, reviewer, trigger, terminal status, resolution, and observed duration
- reduce approval request / response / abort facts into review events for command execution, file change, and permissions flows
- keep denormalized review counts, final approval outcome, and permission-request flags on terminal tool-item events for item-associated reviews
- plumb review completion timing so user-review responses and aborts use app-server-observed completion times, while guardian analytics reuse the same terminal timestamps emitted on guardian assessment events
- carry command approval `source` through the protocol and app-server layers so review analytics can distinguish `shell` from `unified_exec`
- add analytics coverage for user-review emission, guardian-review emission, permission reviews that should not denormalize onto tool items, item-summary isolation across threads, and the serialized review-event shape

## Verification

- `cargo test -p codex-analytics`

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/18748).
* __->__ #18748
* #21434
* #18747
* #17090
* #17089
* #20514
bbb6bf0a37
Emit accepted line fingerprint analytics (#21601)
## Why

Codex assisted-code attribution needs a client-side accepted-code source that does not upload raw code. This adds a hash-only analytics event derived from the turn diff so downstream attribution can compare accepted Codex lines against commit or PR diffs.

## What Changed

- Parse accepted/effective added lines from the final turn diff and emit `codex_accepted_line_fingerprints` analytics.
- Hash repo, path, and normalized line content before upload; raw code and raw diffs are not included in the event.
- Chunk large fingerprint payloads and send accepted-line fingerprint events in isolated requests while preserving normal batching for other analytics events.
- Canonicalize Git remote URLs before repo hashing so SSH/HTTPS GitHub remotes join to the same repo hash.
- Add parser coverage for unified diff hunk lines that look like `+++` or `---` file headers.

## Verification

- `cargo test -p codex-analytics`
- `cargo test -p codex-git-utils canonicalize_git_remote_url`
- `just fix -p codex-analytics`
- `just bazel-lock-check`
- `git diff --check`
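The remote-URL canonicalization described above can be sketched roughly as follows. This is an illustrative approximation, not the actual `codex-git-utils` implementation; the function name matches the test filter above, but the exact normalization rules are assumptions.

```rust
// Sketch: reduce SSH and HTTPS GitHub remotes to one canonical form so
// both join to the same repo hash. Real normalization rules may differ.
fn canonicalize_git_remote_url(url: &str) -> String {
    let trimmed = url.trim().trim_end_matches(".git");
    // SSH scp-like form: git@github.com:org/repo
    if let Some(rest) = trimmed.strip_prefix("git@github.com:") {
        return format!("github.com/{rest}");
    }
    // HTTPS form: https://github.com/org/repo
    if let Some(rest) = trimmed.strip_prefix("https://github.com/") {
        return format!("github.com/{rest}");
    }
    // Unrecognized remotes pass through unchanged.
    trimmed.to_string()
}
```

Only the canonical string would then be hashed, so the event never carries the raw remote URL.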
99016ec732
[codex-analytics] plumb protocol-native review timing (#21434)
## Why

We want terminal tool review analytics, but the reducer should not stamp review timing from its own wall clock. This PR plumbs review timing through the real protocol and app-server seams so downstream analytics can consume the emitter's timestamps directly. Guardian reviews keep their enriched `started_at` / `completed_at` analytics fields by deriving those legacy second-based values from the same protocol-native millisecond lifecycle timestamps, rather than sampling a separate analytics clock.

## What changed

- add `started_at_ms` to user approval request payloads
- add `started_at_ms` / `completed_at_ms` to guardian review notifications
- preserve Guardian review `started_at` / `completed_at` enrichment from the protocol-native timing source
- stamp typed `ServerResponse` analytics facts with app-server-observed `completed_at_ms`
- thread the new timing fields through core, protocol, app-server, TUI, and analytics fixtures

## Verification

- `cargo test -p codex-app-server outgoing_message --manifest-path codex-rs/Cargo.toml`
- `cargo test -p codex-app-server-protocol guardian --manifest-path codex-rs/Cargo.toml`
- `cargo test -p codex-tui guardian --manifest-path codex-rs/Cargo.toml`
- `cargo test -p codex-analytics analytics_client_tests --manifest-path codex-rs/Cargo.toml`

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/21434).
* #18748
* __->__ #21434
* #18747
* #17090
* #17089
* #20514
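The legacy-field derivation above amounts to a simple unit conversion from the protocol-native millisecond timestamps. A minimal sketch, with hypothetical names:

```rust
// Illustrative only: legacy second-granularity analytics fields derived
// from the protocol-native millisecond lifecycle timestamps, instead of
// sampling a separate analytics clock.
struct GuardianReviewTiming {
    started_at: u64,   // legacy field, seconds since epoch
    completed_at: u64, // legacy field, seconds since epoch
}

fn derive_timing(started_at_ms: u64, completed_at_ms: u64) -> GuardianReviewTiming {
    GuardianReviewTiming {
        started_at: started_at_ms / 1000,
        completed_at: completed_at_ms / 1000,
    }
}
```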
fbdbc6b2fe
[codex-analytics] emit tool item events from item lifecycle (#17090)
## Why

After the tool-item schemas are in place, analytics needs to emit them from the app-server item lifecycle rather than requiring bespoke tracking at each callsite. The reducer should also reuse the shared thread analytics context introduced below it in the stack so later event families do not repeat the same reducer joins or missing-state ladder.

## What changed

- Tracks tool-item completion notifications and emits the matching tool analytics event when a terminal item arrives.
- Derives event-specific payload details for command execution, file changes, MCP calls, dynamic tools, collaboration tools, web search, and image generation.
- Denormalizes thread, app-server client, runtime, and subagent provenance metadata through the shared thread analytics context.
- Adds reducer coverage for item lifecycle emission and subagent metadata inheritance.

## Duration semantics

`duration_ms` is computed from the app-server item lifecycle timestamps: `completed_at_ms - started_at_ms`. That makes it the duration of the lifecycle Codex observed locally, not necessarily the upstream provider's full execution time.

- Web search usually has a meaningful observed lifecycle because Responses can send `response.output_item.added` before `response.output_item.done`; in that case `started_at_ms` comes from the added event and `completed_at_ms` comes from the done event.
- Image generation can be much less precise. In the current observed stream, image generation often arrives only as a completed `response.output_item.done`; when there is no earlier added event, Codex synthesizes the started item immediately before completion, so `duration_ms` can be `0` even though upstream image generation took longer.
- Standalone web search and standalone image generation work is expected to land after this stack. Those paths may introduce more direct lifecycle events or timing points, so the current web-search/image-generation duration semantics should be treated as the best available item-lifecycle approximation, not the final latency contract for those tool families.
- `execution_duration_ms` is populated only where the completed item already carries a native execution duration; otherwise it remains `null` while `duration_ms` still reflects the local lifecycle interval.

## Currently placeholder / partial fields

Some fields are included in the schema for the intended steady-state contract, but this PR does not yet populate them from real approval/review state:

- `review_count`, `guardian_review_count`, and `user_review_count` currently default to `0`.
- `final_approval_outcome` currently defaults to `unknown`.
- `requested_additional_permissions` and `requested_network_access` currently default to `false`.

## Verification

- `cargo test -p codex-analytics`

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/17090).
* #18748
* #18747
* __->__ #17090
* #17089
* #20514
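The duration semantics above can be sketched as a small helper (names are illustrative, not the actual reducer code): the duration is the locally observed lifecycle interval, and a synthesized started item collapses it to zero.

```rust
// duration_ms = completed_at_ms - started_at_ms, over the lifecycle
// Codex observed locally. When no earlier `added` event was seen, the
// started item is synthesized at completion time, so the interval is 0.
fn lifecycle_duration_ms(started_at_ms: Option<u64>, completed_at_ms: u64) -> u64 {
    let started = started_at_ms.unwrap_or(completed_at_ms);
    completed_at_ms.saturating_sub(started)
}
```

This is why an image-generation item that arrives only as a completed `response.output_item.done` reports `duration_ms = 0` even when upstream generation took longer.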
72a39e3a96
[app-server] centralize client response analytics (#20059)
## Why

The precursor PR keeps successful client responses typed until app-server's outgoing response seam. This follow-up uses that seam to move successful client-response analytics out of individual handlers and into the shared sender path, while keeping filtering decisions inside `codex-analytics`.

## What changed

- Emit successful client-response analytics centrally from `OutgoingMessageSender::send_response`.
- Remove duplicate handler-local response tracking for the current thread/turn lifecycle responses.
- Keep analytics ingestion selective inside `AnalyticsEventsClient`, so unrelated client traffic is ignored before cloning or boxing.
- Collapse client-response analytics facts onto one typed path and normalize payloads in the reducer.
- Add direct client-filter coverage plus sender-level coverage for the centralized forwarding path.

## Verification

- `cargo test -p codex-analytics`
- `cargo test -p codex-app-server outgoing_message::tests --lib`
973c5c823e
[app-server] type client response payloads (#20050)
## Why

`pr17088` adds typed server-originated request/response plumbing, but successful client responses are still erased into bare JSON-RPC `result` values before app-server can make any typed decision about them. This precursor PR keeps successful client responses typed until the outgoing response seam. It is intentionally limited to protocol/app-server plumbing so the analytics behavior change can be reviewed separately on top.

## What changed

- Add `ClientResponsePayload` as the pre-serialization client response body type.
- Route app-server successful response paths through the typed payload seam while preserving existing handler-local analytics behavior.
- Keep `InterruptConversation` JSON-RPC-only because it has no `ClientResponse` variant.
- Move the new payload conversion tests into a dedicated protocol test module.

## Verification

- `cargo check -p codex-app-server`
- `cargo test -p codex-app-server-protocol`
0690ab0842
[codex-analytics] ingest server requests and responses (#17088)
## Why

Codex analytics needs a typed seam for app-server-originated request/response traffic so future tool-approval analytics can consume those facts without adding bespoke callsite tracking each time. Server responses arrive as JSON-RPC `id + result` payloads, so analytics has to reconstruct the matching typed response from the original typed request while that request context still exists in app-server.

This also puts analytics on the app-server outbound path, which needs to avoid keeping the runtime alive during shutdown. The final ownership fix keeps the normal strong auth-manager retention in analytics and makes the external-auth refresh bridge hold a weak back-reference to `OutgoingMessageSender`, breaking the runtime cycle at the bridge boundary instead of exposing retention policy through the analytics client API.

## What changed

- Adds typed `ServerRequest` and `ServerResponse` analytics facts, plus `AnalyticsEventsClient::track_server_request` and `track_server_response`.
- Renames the existing client-side facts to `ClientRequest` and `ClientResponse` so reducers can distinguish client-to-server traffic from server-to-client traffic.
- Adds `ServerRequest::response_from_result`, allowing a stored typed request to decode the matching typed server response from a raw JSON-RPC result payload.
- Threads `AnalyticsEventsClient` through `OutgoingMessageSender` and records targeted server requests, replayed targeted requests, and matching targeted responses with the responding connection id needed for correlation.
- Intentionally leaves broadcast server requests/responses out of analytics for now because the current model is per connection, while broadcasts fan one logical request out across multiple connections.
- Breaks the app-server shutdown cycle by storing `Weak<OutgoingMessageSender>` in `ExternalAuthRefreshBridge` and upgrading it only when an external-auth refresh is actually requested.
- Keeps reducer ingestion of the new server-side facts as no-ops for now; this PR is plumbing for later tool-approval analytics work.

## Verification

- `cargo test -p codex-analytics`
- `cargo test -p codex-app-server outgoing_message::tests::`
- Covers typed-response reconstruction plus the targeted, replayed, broadcast-exclusion, and response-attribution analytics paths.

## Follow-up

This PR intentionally stops at ingestion plumbing, so `ServerRequest` and `ServerResponse` facts are still reducer no-ops. Once a follow-up PR adds real downstream analytics output for those facts:

- replace the temporary pre-reducer observation seam with reducer tests for the emitted event shape;
- add end-to-end coverage in `app-server/tests/suite/v2/analytics.rs` for the real app-server workflow and captured analytics payload;
- remove the temporary sender-level observer tests added here in favor of the real-output coverage above.

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/17088).
* #18748
* #18747
* #17090
* #17089
* #20241
* #20239
* __->__ #17088
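The `Weak` back-reference fix described above follows a standard Rust ownership pattern. A minimal self-contained sketch (the struct bodies and method names here are illustrative, not the actual app-server types):

```rust
use std::sync::{Arc, Weak};

struct OutgoingMessageSender;

impl OutgoingMessageSender {
    fn send_refresh_request(&self) { /* send the server request */ }
}

// The bridge holds only a Weak back-reference, so it cannot keep the
// runtime alive during shutdown; it upgrades to a strong Arc only when
// an external-auth refresh is actually requested.
struct ExternalAuthRefreshBridge {
    sender: Weak<OutgoingMessageSender>,
}

impl ExternalAuthRefreshBridge {
    fn request_refresh(&self) -> bool {
        match self.sender.upgrade() {
            Some(sender) => {
                sender.send_refresh_request();
                true
            }
            // Sender already dropped (shutdown in progress): no-op.
            None => false,
        }
    }
}
```

Breaking the cycle at the bridge boundary keeps the analytics client's normal strong auth-manager retention intact, so retention policy never leaks into the analytics client API.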
5882f3f95e
refactor: route Codex auth through AuthProvider (#18811)
## Summary

This PR moves Codex backend request authentication from direct bearer-token handling to `AuthProvider`. The new `codex-auth-provider` crate defines the shared request-auth trait. `CodexAuth::provider()` returns a provider that can apply all headers needed for the selected auth mode. This lets ChatGPT token auth and AgentIdentity auth share the same callsite path:

- ChatGPT token auth applies bearer auth plus account/FedRAMP headers where needed.
- AgentIdentity auth applies AgentAssertion plus account/FedRAMP headers where needed.

Reference old stack: https://github.com/openai/codex/pull/17387/changes

## Callsite Migration

| Area | Change |
| --- | --- |
| backend-client | accepts an `AuthProvider` instead of a raw token/header |
| chatgpt client/connectors | applies auth through `CodexAuth::provider()` |
| cloud tasks | keeps Codex-backend gating, applies auth through provider |
| cloud requirements | uses Codex-backend auth checks and provider headers |
| app-server remote control | applies provider headers for backend calls |
| MCP Apps/connectors | gates on `uses_codex_backend()` and keys caches from generic account getters |
| model refresh | treats AgentIdentity as Codex-backend auth |
| OpenAI file upload path | rejects non-Codex-backend auth before applying headers |
| core client setup | keeps model-provider auth flow and allows AgentIdentity through provider-backed OpenAI auth |

## Stack

1. https://github.com/openai/codex/pull/18757: full revert
2. https://github.com/openai/codex/pull/18871: isolated Agent Identity crate
3. https://github.com/openai/codex/pull/18785: explicit AgentIdentity auth mode and startup task allocation
4. This PR: migrate Codex backend auth callsites through AuthProvider
5. https://github.com/openai/codex/pull/18904: accept AgentIdentity JWTs and load `CODEX_AGENT_IDENTITY`

## Testing

Tests: targeted Rust checks, cargo-shear, Bazel lock check, and CI.
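The shared request-auth seam described above can be sketched as a trait that applies whatever headers the selected auth mode needs. This is a hedged approximation of the idea, not the actual `codex-auth-provider` trait surface; the method name and header plumbing are assumptions.

```rust
// Sketch: both auth modes share one callsite path by implementing a
// common header-applying trait.
trait AuthProvider {
    fn apply_headers(&self, headers: &mut Vec<(String, String)>);
}

// ChatGPT token auth: bearer header (plus account/FedRAMP headers in
// the real implementation, omitted here).
struct BearerAuth {
    token: String,
}

impl AuthProvider for BearerAuth {
    fn apply_headers(&self, headers: &mut Vec<(String, String)>) {
        headers.push(("Authorization".into(), format!("Bearer {}", self.token)));
    }
}

// AgentIdentity auth would implement the same trait with an
// AgentAssertion header instead, so callsites never branch on mode.
```

The payoff is in the migration table: `backend-client` can accept any `AuthProvider` instead of a raw token or header.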
4e7399c6b9
[codex-analytics] guardian review analytics events emission (#17693)
## Why

Guardian approvals now run as review sessions, but Codex analytics did not have a terminal event for those reviews. That made it hard to measure approval outcomes, failure modes, Guardian session reuse, model metadata, token usage, and timing separately from the parent turn.

## What changed

Adds `codex_guardian_review` analytics emission for Guardian approval reviews. The event is emitted from the Guardian review path with review identity, target item id, approval request source, a PII-minimized reviewed-action shape, terminal decision/status, failure reason, Guardian assessment fields, Guardian session metadata, token usage, and timing metadata.

The reviewed-action payload intentionally omits high-risk fields such as shell commands, working directories, argv, file paths, network targets/hosts, rationale, retry reason, and permission justifications. It also classifies prompt-build failures separately from Guardian session/runtime failures so fail-closed cases are distinguishable in analytics.

## Verification

- Guardian review analytics tests cover terminal success, timeout/cancel/fail-closed paths, session metadata, and token usage plumbing.
- `cargo clippy -p codex-core --lib --tests -- -D warnings`

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/17693).
* #17696
* #17695
* __->__ #17693
be75785504
fix: fully revert agent identity runtime wiring (#18757)
## Summary

This PR fully reverts the previously merged Agent Identity runtime integration from the old stack: https://github.com/openai/codex/pull/17387/changes

It removes the Codex-side task lifecycle wiring, rollout/session persistence, feature flag plumbing, lazy `auth.json` mutation, background task auth paths, and request callsite changes introduced by that stack. This leaves the repo in a clean pre-AgentIdentity integration state so the follow-up PRs can reintroduce the pieces in smaller reviewable layers.

## Stack

1. This PR: full revert
2. https://github.com/openai/codex/pull/18871: move Agent Identity business logic into a crate
3. https://github.com/openai/codex/pull/18785: add explicit AgentIdentity auth mode and startup task allocation
4. https://github.com/openai/codex/pull/18811: migrate auth callsites through AuthProvider

## Testing

Tests: targeted Rust checks, cargo-shear, Bazel lock check, and CI.
904c751a40
[codex] Use background agent task auth for backend calls (#18094)
## Summary

Introduces a single background/control-plane agent task for ChatGPT backend requests that do not have a thread-scoped task, with `AuthManager` owning the default ChatGPT backend authorization decision.

Callers now ask `AuthManager` for the default ChatGPT backend authorization header. `AuthManager` decides whether that is bearer or background AgentAssertion based on config/internal state, while low-level bootstrap paths can explicitly request bearer-only auth.

This PR is stacked on PR4 and focuses on the shared background task auth plumbing plus the first tranche of backend/control-plane consumers. The remaining callsite wiring is split into PR4.2 to keep review size down.

## Stack

- PR1: https://github.com/openai/codex/pull/17385 - add `features.use_agent_identity`
- PR2: https://github.com/openai/codex/pull/17386 - register agent identities when enabled
- PR3: https://github.com/openai/codex/pull/17387 - register agent tasks when enabled
- PR3.1: https://github.com/openai/codex/pull/17978 - persist and prewarm registered tasks per thread
- PR4: https://github.com/openai/codex/pull/17980 - use task-scoped `AgentAssertion` for downstream calls
- PR4.1: this PR - introduce AuthManager-owned background/control-plane `AgentAssertion` auth
- PR4.2: https://github.com/openai/codex/pull/18260 - use background task auth for additional backend/control-plane calls

## What Changed

- add background task registration and assertion minting inside `codex-login`
- persist `agent_identity.background_task_id` separately from per-session task state
- make `BackgroundAgentTaskManager` private to `codex-login`; call sites do not instantiate or pass it around
- teach `AuthManager` the ChatGPT backend base URL and feature-derived background auth mode from resolved config
- expose bearer-only helpers for bootstrap/registration/refresh-style paths that must not use AgentAssertion
- wire `AuthManager` default ChatGPT authorization through app listing, connector directory listing, remote plugins, MCP status/listing, analytics, and core-skills remote calls
- preserve bearer fallback when the feature is disabled, the backend host is unsupported, or background task registration is not available

## Validation

- `just fmt`
- `cargo check -p codex-core -p codex-login -p codex-analytics -p codex-app-server -p codex-cloud-requirements -p codex-cloud-tasks -p codex-models-manager -p codex-chatgpt -p codex-model-provider -p codex-mcp -p codex-core-skills`
- `cargo test -p codex-login agent_identity`
- `cargo test -p codex-model-provider bearer_auth_provider`
- `cargo test -p codex-core agent_assertion`
- `cargo test -p codex-app-server remote_control`
- `cargo test -p codex-cloud-requirements fetch_cloud_requirements`
- `cargo test -p codex-models-manager manager::tests`
- `cargo test -p codex-chatgpt`
- `cargo test -p codex-cloud-tasks`
- `just fix -p codex-core -p codex-login -p codex-analytics -p codex-app-server -p codex-cloud-requirements -p codex-cloud-tasks -p codex-models-manager -p codex-chatgpt -p codex-model-provider -p codex-mcp -p codex-core-skills`
- `just fix -p codex-app-server`
- `git diff --check`
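The bearer-fallback rule in the last bullet above reduces to a small decision function. A hedged sketch, with hypothetical enum and parameter names standing in for `AuthManager`'s real config/internal state:

```rust
#[derive(Debug, PartialEq)]
enum BackendAuth {
    Bearer,
    BackgroundAgentAssertion,
}

// Background AgentAssertion auth is chosen only when every precondition
// holds; otherwise the caller falls back to bearer auth.
fn default_backend_auth(
    feature_enabled: bool,
    backend_host_supported: bool,
    background_task_registered: bool,
) -> BackendAuth {
    if feature_enabled && backend_host_supported && background_task_registered {
        BackendAuth::BackgroundAgentAssertion
    } else {
        BackendAuth::Bearer
    }
}
```

Bootstrap/registration/refresh-style paths bypass this decision entirely via the bearer-only helpers, since they must never use AgentAssertion.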
8720b7bdce
Add codex_hook_run analytics event (#17996)
# Why
Add product analytics for hook handler executions so we can understand
which hooks are running, where they came from, and whether they
completed, failed, stopped, or blocked work.
# What
- add the new `codex_hook_run` analytics event and payload plumbing in
`codex-rs/analytics`
- emit hook-run analytics from the shared hook completion path in
`codex-rs/core`
- classify hook source from the loaded hook path as `system`, `user`,
`project`, or `unknown`
```
{
  "event_type": "codex_hook_run",
  "event_params": {
    "thread_id": "string",
    "turn_id": "string",
    "model_slug": "string",
    "hook_name": "string", // any HookEventName
    "hook_source": "system | user | project | unknown",
    "status": "completed | failed | stopped | blocked"
  }
}
```
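The `hook_source` classification from the loaded hook path might look roughly like this. The function and directory conventions here are assumptions for illustration; the actual rules live in `codex-rs/core` and may differ.

```rust
use std::path::Path;

// Sketch: map a loaded hook path to its source bucket. The directory
// conventions ("/etc" for system hooks, etc.) are illustrative.
fn classify_hook_source(
    hook_path: &Path,
    user_config_dir: &Path,
    project_root: &Path,
) -> &'static str {
    if hook_path.starts_with(user_config_dir) {
        "user"
    } else if hook_path.starts_with(project_root) {
        "project"
    } else if hook_path.starts_with("/etc") {
        "system"
    } else {
        "unknown"
    }
}
```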
---------
Co-authored-by: Codex <noreply@openai.com>
b704df85b8
[codex-analytics] feature plumbing and emittance (#16640)
---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16640).
* #16870
* #16706
* #16641
* __->__ #16640
58933237cd
feat(analytics): add guardian review event schema (#17055)
Just the analytics schema definition for guardian evaluations. No wiring done yet.
5779be314a
[codex-analytics] add compaction analytics event (#17155)
- event for compaction analytics
- introduces thread-connection and thread metadata caches for data denormalization, expected to be useful for denormalization onto core emitted events in general
- threads analytics event client into core (mirrors approved implementation in #16640)
- denormalizes key thread metadata: thread_source, subagent_source, parent_thread_id, as well as app-server client and runtime metadata
- compaction strategy defaults to memento, forward compatible with expected prefill_compaction strategy

1. Manual standalone compact, local: `INFO | 2026-04-09 17:35:50 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:526 | Tracked codex_compaction_event event params={'thread_id': '019d74d0-5cfb-70c0-bef9-165c3bf9b2df', 'turn_id': '019d74d0-d7f6-7c81-acc6-aae2030243d6', 'product_surface': 'codex', 'app_server_client': {'product_client_id': 'CODEX_CLI', 'client_name': 'codex-tui', 'client_version': '0.0.0', 'rpc_transport': 'in_process', 'experimental_api_enabled': True}, 'runtime': {'codex_rs_version': '0.0.0', 'runtime_os': 'macos', 'runtime_os_version': '26.4.0', 'runtime_arch': 'aarch64'}, 'trigger': 'manual', 'reason': 'user_requested', 'implementation': 'responses', 'phase': 'standalone_turn', 'strategy': 'memento', 'status': 'completed', 'active_context_tokens_before': 20170, 'active_context_tokens_after': 4830, 'started_at': 1775781337, 'completed_at': 1775781350, 'thread_source': 'user', 'subagent_source': None, 'parent_thread_id': None, 'error': None, 'duration_ms': 13524} |`
2. Auto pre-turn compact, local: `INFO | 2026-04-09 17:37:30 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:526 | Tracked codex_compaction_event event params={'thread_id': '019d74d2-45ef-71d1-9c93-23cc0c13d988', 'turn_id': '019d74d2-7b42-7372-9f0e-c0da3f352328', 'product_surface': 'codex', 'app_server_client': {'product_client_id': 'CODEX_CLI', 'client_name': 'codex-tui', 'client_version': '0.0.0', 'rpc_transport': 'in_process', 'experimental_api_enabled': True}, 'runtime': {'codex_rs_version': '0.0.0', 'runtime_os': 'macos', 'runtime_os_version': '26.4.0', 'runtime_arch': 'aarch64'}, 'trigger': 'auto', 'reason': 'context_limit', 'implementation': 'responses', 'phase': 'pre_turn', 'strategy': 'memento', 'status': 'completed', 'active_context_tokens_before': 20063, 'active_context_tokens_after': 4822, 'started_at': 1775781444, 'completed_at': 1775781449, 'thread_source': 'user', 'subagent_source': None, 'parent_thread_id': None, 'error': None, 'duration_ms': 5497} |`
3. Auto mid-turn compact, local: `INFO | 2026-04-09 17:38:28 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:526 | Tracked codex_compaction_event event params={'thread_id': '019d74d3-212f-7a20-8c0a-4816a978675e', 'turn_id': '019d74d3-3ee1-7462-89f6-2ffbeefcd5e3', 'product_surface': 'codex', 'app_server_client': {'product_client_id': 'CODEX_CLI', 'client_name': 'codex-tui', 'client_version': '0.0.0', 'rpc_transport': 'in_process', 'experimental_api_enabled': True}, 'runtime': {'codex_rs_version': '0.0.0', 'runtime_os': 'macos', 'runtime_os_version': '26.4.0', 'runtime_arch': 'aarch64'}, 'trigger': 'auto', 'reason': 'context_limit', 'implementation': 'responses', 'phase': 'mid_turn', 'strategy': 'memento', 'status': 'completed', 'active_context_tokens_before': 20325, 'active_context_tokens_after': 14641, 'started_at': 1775781500, 'completed_at': 1775781508, 'thread_source': 'user', 'subagent_source': None, 'parent_thread_id': None, 'error': None, 'duration_ms': 7507} |`
4. Remote /responses/compact, manual standalone: `INFO | 2026-04-09 17:40:20 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:526 | Tracked codex_compaction_event event params={'thread_id': '019d74d4-7a11-78a1-89f7-0535a1149416', 'turn_id': '019d74d4-e087-7183-9c20-b1e40b7578c0', 'product_surface': 'codex', 'app_server_client': {'product_client_id': 'CODEX_CLI', 'client_name': 'codex-tui', 'client_version': '0.0.0', 'rpc_transport': 'in_process', 'experimental_api_enabled': True}, 'runtime': {'codex_rs_version': '0.0.0', 'runtime_os': 'macos', 'runtime_os_version': '26.4.0', 'runtime_arch': 'aarch64'}, 'trigger': 'manual', 'reason': 'user_requested', 'implementation': 'responses_compact', 'phase': 'standalone_turn', 'strategy': 'memento', 'status': 'completed', 'active_context_tokens_before': 23461, 'active_context_tokens_after': 6171, 'started_at': 1775781601, 'completed_at': 1775781620, 'thread_source': 'user', 'subagent_source': None, 'parent_thread_id': None, 'error': None, 'duration_ms': 18971} |`
4fd5c35c4f
[codex-analytics] subagent analytics (#15915)
- creates custom event that emits subagent thread analytics from core
- wires client metadata (`product_client_id`, `client_name`, `client_version`) through from app-server
- creates `created_at` timestamp in core
- subagent analytics are behind `FeatureFlag::GeneralAnalytics`

PR stack

- [[telemetry] thread events #15690](https://github.com/openai/codex/pull/15690)
- --> [[telemetry] subagent events #15915](https://github.com/openai/codex/pull/15915)
- [[telemetry] turn events #15591](https://github.com/openai/codex/pull/15591)
- [[telemetry] steer events #15697](https://github.com/openai/codex/pull/15697)
- [[telemetry] queued prompt data #15804](https://github.com/openai/codex/pull/15804)

Notes:

- core does not spawn a subagent thread for compact, but it is represented in the mapping for consistency

`INFO | 2026-04-01 13:08:12 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:399 | Tracked codex_thread_initialized event params={'thread_id': '019d4aa9-233b-70f2-a958-c3dbae1e30fa', 'product_surface': 'codex', 'app_server_client': {'product_client_id': 'CODEX_CLI', 'client_name': 'codex-tui', 'client_version': '0.0.0', 'rpc_transport': 'in_process', 'experimental_api_enabled': None}, 'runtime': {'codex_rs_version': '0.0.0', 'runtime_os': 'macos', 'runtime_os_version': '26.4.0', 'runtime_arch': 'aarch64'}, 'model': 'gpt-5.3-codex', 'ephemeral': False, 'initialization_mode': 'new', 'created_at': 1775074091, 'thread_source': 'subagent', 'subagent_source': 'thread_spawn', 'parent_thread_id': '019d4aa8-51ec-77e3-bafb-2c1b8e29e385'} |`

`INFO | 2026-04-01 13:08:41 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:399 | Tracked codex_thread_initialized event params={'thread_id': '019d4aa9-94e3-75f1-8864-ff8ad0e55e1e', 'product_surface': 'codex', 'app_server_client': {'product_client_id': 'CODEX_CLI', 'client_name': 'codex-tui', 'client_version': '0.0.0', 'rpc_transport': 'in_process', 'experimental_api_enabled': None}, 'runtime': {'codex_rs_version': '0.0.0', 'runtime_os': 'macos', 'runtime_os_version': '26.4.0', 'runtime_arch': 'aarch64'}, 'model': 'gpt-5.3-codex', 'ephemeral': False, 'initialization_mode': 'new', 'created_at': 1775074120, 'thread_source': 'subagent', 'subagent_source': 'review', 'parent_thread_id': None} |`

---------

Co-authored-by: jif-oai <jif@openai.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
e8de4ea953
[codex-analytics] thread events (#15690)
- add event for thread initialization - thread/start, thread/fork, thread/resume
- feature flagged behind `FeatureFlag::GeneralAnalytics`
- does not yet support threads started by subagents

PR stack:

- --> [[telemetry] thread events #15690](https://github.com/openai/codex/pull/15690)
- [[telemetry] subagent events #15915](https://github.com/openai/codex/pull/15915)
- [[telemetry] turn events #15591](https://github.com/openai/codex/pull/15591)
- [[telemetry] steer events #15697](https://github.com/openai/codex/pull/15697)
- [[telemetry] queued prompt data #15804](https://github.com/openai/codex/pull/15804)

Sample extracted logs in Codex-backend

```
INFO | 2026-03-29 16:39:37 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3bf7-9f5f-7f82-9877-6d48d1052531 product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=new subagent_source=None parent_thread_id=None created_at=1774827577 |
INFO | 2026-03-29 16:45:46 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3b84-5731-79d0-9b3b-9c6efe5f5066 product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=resumed subagent_source=None parent_thread_id=None created_at=1774820022 |
INFO | 2026-03-29 16:45:49 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3bfd-4cd6-7c12-a13e-48cef02e8c4d product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=forked subagent_source=None parent_thread_id=None created_at=1774827949 |
INFO | 2026-03-29 17:20:29 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3c1d-0412-7ed2-ad24-c9c0881a36b0 product_surface=codex product_client_id=CODEX_SERVICE_EXEC client_name=codex_exec client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=new subagent_source=None parent_thread_id=None created_at=1774830027 |
```

Notes

- `product_client_id` gets canonicalized in codex-backend
- subagent threads are addressed in a following PR