efrazer-oai
5882f3f95e
refactor: route Codex auth through AuthProvider (#18811)
## Summary
This PR moves Codex backend request authentication from direct
bearer-token handling to `AuthProvider`.
The new `codex-auth-provider` crate defines the shared request-auth
trait. `CodexAuth::provider()` returns a provider that can apply all
headers needed for the selected auth mode.
This lets ChatGPT token auth and AgentIdentity auth share the same
callsite path:
- ChatGPT token auth applies bearer auth plus account/FedRAMP headers
where needed.
- AgentIdentity auth applies AgentAssertion plus account/FedRAMP headers
where needed.
Reference old stack: https://github.com/openai/codex/pull/17387/changes
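The shared-callsite idea above can be sketched as a small trait. This is a hypothetical illustration, not the crate's actual API: the names `AuthProvider`, `apply_headers`, and the header keys are assumptions.

```rust
use std::collections::HashMap;

/// Hypothetical sketch of a shared request-auth trait: each auth mode
/// applies all headers it needs for the selected mode.
pub trait AuthProvider {
    fn apply_headers(&self, headers: &mut HashMap<String, String>);
}

/// ChatGPT token auth: bearer token plus an account header where needed.
pub struct ChatGptTokenAuth {
    pub token: String,
    pub account_id: Option<String>,
}

impl AuthProvider for ChatGptTokenAuth {
    fn apply_headers(&self, headers: &mut HashMap<String, String>) {
        headers.insert("Authorization".into(), format!("Bearer {}", self.token));
        if let Some(account) = &self.account_id {
            // Header name is an assumption for illustration.
            headers.insert("ChatGPT-Account-Id".into(), account.clone());
        }
    }
}

/// AgentIdentity auth: an agent assertion instead of a bearer token.
pub struct AgentIdentityAuth {
    pub assertion: String,
}

impl AuthProvider for AgentIdentityAuth {
    fn apply_headers(&self, headers: &mut HashMap<String, String>) {
        headers.insert("Agent-Assertion".into(), self.assertion.clone());
    }
}

/// A callsite takes `&dyn AuthProvider`, so both auth modes share one path.
pub fn build_request_headers(auth: &dyn AuthProvider) -> HashMap<String, String> {
    let mut headers = HashMap::new();
    auth.apply_headers(&mut headers);
    headers
}
```

With this shape, callers like backend-client accept any provider and never touch raw tokens directly.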
## Callsite Migration
| Area | Change |
| --- | --- |
| backend-client | accepts an `AuthProvider` instead of a raw token/header |
| chatgpt client/connectors | applies auth through `CodexAuth::provider()` |
| cloud tasks | keeps Codex-backend gating, applies auth through provider |
| cloud requirements | uses Codex-backend auth checks and provider headers |
| app-server remote control | applies provider headers for backend calls |
| MCP Apps/connectors | gates on `uses_codex_backend()` and keys caches from generic account getters |
| model refresh | treats AgentIdentity as Codex-backend auth |
| OpenAI file upload path | rejects non-Codex-backend auth before applying headers |
| core client setup | keeps model-provider auth flow and allows AgentIdentity through provider-backed OpenAI auth |
## Stack
1. https://github.com/openai/codex/pull/18757 : full revert
2. https://github.com/openai/codex/pull/18871 : isolated Agent Identity crate
3. https://github.com/openai/codex/pull/18785 : explicit AgentIdentity auth mode and startup task allocation
4. This PR: migrate Codex backend auth callsites through AuthProvider
5. https://github.com/openai/codex/pull/18904 : accept AgentIdentity JWTs and load `CODEX_AGENT_IDENTITY`
## Testing
Tests: targeted Rust checks, cargo-shear, Bazel lock check, and CI.
2026-04-23 17:14:02 -07:00
pakrym-oai
4c2e730488
Organize context fragments (#18794)
Organize context fragments under `core/context` and implement the same trait on all of them.
2026-04-20 22:39:17 -07:00
xl-openai
3f7222ec76
feat: Budget skill metadata and surface trimming as a warning (#18298)
Cap the model-visible skills section to a small share of the context window, with a fallback character budget, and keep only as many implicit skills as fit within that budget.
Emit a non-fatal warning when enabled skills are omitted, and add a new app-server warning notification.
Record thread-start skill metrics for total enabled skills, kept skills, and whether truncation happened.
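The budgeting behavior described above can be sketched as follows. This is a hypothetical illustration: the function name, the 5% share of the context window, and character-based accounting are assumptions, not the PR's actual values.

```rust
/// Result of budgeting the model-visible skills section.
pub struct SkillBudgetResult {
    pub kept: Vec<String>,
    pub truncated: bool,
}

/// Keep only as many skills as fit within a character budget derived from a
/// small share of the context window, falling back to a fixed character
/// budget when the window size is unknown.
pub fn budget_skills(
    skills: Vec<(String, usize)>, // (skill text, char length)
    context_window_chars: Option<usize>,
    fallback_budget_chars: usize,
) -> SkillBudgetResult {
    // Cap the skills section at 5% of the context window (assumed share).
    let budget = context_window_chars
        .map(|w| w / 20)
        .unwrap_or(fallback_budget_chars);

    let mut used = 0;
    let mut kept = Vec::new();
    let mut truncated = false;
    for (text, len) in skills {
        if used + len <= budget {
            used += len;
            kept.push(text);
        } else {
            // An omitted skill is a non-fatal warning, not an error.
            truncated = true;
        }
    }
    SkillBudgetResult { kept, truncated }
}
```

A caller would surface `truncated` as the warning notification and record `kept.len()` against the total for the thread-start metrics.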
---------
Co-authored-by: Matthew Zeng <mzeng@openai.com>
Co-authored-by: Codex <noreply@openai.com>
2026-04-17 18:11:47 -07:00
pakrym-oai
96254a763a
Make skill loading filesystem-aware (#17720)
Migrates skill loading to support reading repo skills from the remote
environment.
2026-04-14 15:40:40 -07:00
Ahmed Ibrahim
9dbe098349
Extract codex-core-skills crate (#15749)
## Summary
- move skill loading and management into codex-core-skills
- leave codex-core with the thin integration layer and shared wiring
## Testing
- CI
---------
Co-authored-by: Codex <noreply@openai.com>
2026-03-25 12:57:42 -07:00