mirror of
https://github.com/openai/codex.git
synced 2026-05-17 17:53:06 +00:00
Add a Cargo build profile for benchmarking (#21574)
A clean release build takes ~18m and an incremental build takes ~12m. This is far too slow to iterate on performance-related changes, and the build time is dominated by LTO.

This pull request adds a `profiling` profile for Cargo which takes ~13m clean and ~6m incremental; the primary change is that LTO is disabled. This matches a profile used in uv and follows the great work at https://github.com/astral-sh/uv/pull/5955 — there's a bit of commentary there about the trade-offs this implies. We've found that this does not inhibit the ability to benchmark accurately, as measurements with LTO disabled are generally consistent with the results with LTO enabled, and it makes it much faster (~2x) to rebuild after making a change.

This is motivated by my interest in improving Codex TUI performance, which is blocked by the tragically slow builds right now. I tested incremental build times by making a no-op change to the `codex-cli` crate.
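As a usage sketch: custom Cargo profiles are selected with the standard `--profile` flag, and their artifacts land in a directory named after the profile. The exact benchmark command below is illustrative, not part of this change.

```shell
# Build with the new profile (release settings, but LTO off and full debug info).
cargo build --profile profiling

# Artifacts go to target/profiling/ instead of target/release/,
# so point benchmarks or profilers at that path, e.g.:
#   samply record ./target/profiling/codex
```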
This commit is contained in:
@@ -493,6 +493,12 @@ strip = "symbols"
 # See https://github.com/openai/codex/issues/1411 for details.
 codegen-units = 1
 
+[profile.profiling]
+inherits = "release"
+debug = "full"
+lto = false
+strip = false
+
 [profile.ci-test]
 # Reduce binary size to reduce disk pressure.
 debug = "limited"
||||