Warm page cache before save to eliminate cold-NVMe IOPS bottleneck #4
joshfriend merged 1 commit into main
Conversation
Parallel readers fault all cache files into the OS page cache before tar starts, so tar reads at memory speed (~1300 MB/s) instead of cold NVMe IOPS-limited ~80 MB/s. 2-5x archive speedup depending on instance generation.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4283e47d7c
```go
if !cfg.SkipWarm {
	log.Debug("warming page cache")
	warmStart := time.Now()
	warmPageCache(sources)
	log.Debug("page cache warm", "duration", time.Since(warmStart).Round(time.Millisecond))
}
```
Honor context cancellation before warming files
Save now performs a full warmPageCache(sources) pass before starting CreateTarZstd, but this warm step does not observe ctx.Done(). In timeout/cancel scenarios (common in CI), Save will continue scanning and reading the entire cache tree before returning, which can add minutes of uninterruptible work after cancellation and delay job teardown. Please gate or short-circuit warming when the context is canceled.
```go
_ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
	if err != nil || !d.Type().IsRegular() {
		return nil
```
Skip warming files excluded from the archive
warmPageCache reads every regular file under each source, but CreateTarZstd later excludes many paths (including wrapper/dists/*/*/*.zip and CacheExclusions). This means we spend I/O warming data that will never be archived, and large excluded files can evict useful cache pages and reduce the intended speedup. The warm walk should apply the same exclusion rules as tar input selection.