A clean release build takes ~18m and an incremental build takes ~12m. This is far too slow to iterate on performance-related changes, and the build time is dominated by LTO. This pull request adds a `profiling` profile for Cargo which takes ~13m clean and ~6m incremental; the primary change is that LTO is disabled. This matches a profile used in uv and follows the great work at https://github.com/astral-sh/uv/pull/5955, which includes some commentary on the trade-offs this implies. We've found that this does not inhibit accurate benchmarking: measurements with LTO disabled are generally consistent with those with LTO enabled, and rebuilds after a change are roughly 2x faster.

This is motivated by my interest in improving Codex TUI performance, which is blocked by the tragically slow builds right now. I tested incremental build times by making a no-op change to the `codex-cli` crate.
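For reference, a minimal sketch of what such a profile might look like in the workspace `Cargo.toml`. Only the disabled LTO is stated above; inheriting from `release` and keeping debug symbols (`debug = true`, useful for profilers) are assumptions borrowed from the uv profile linked above:

```toml
# Sketch of a `profiling` Cargo profile (settings beyond `lto = false`
# are assumptions modeled on astral-sh/uv#5955).
[profile.profiling]
inherits = "release"   # start from release optimization settings
lto = false            # the key change: skip LTO to cut build times
debug = true           # retain debug symbols so profilers get usable stacks
```

A build with this profile is then produced with `cargo build --profile profiling`.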