Add experimental block cache bypass via GRAPH_STORE_IGNORE_BLOCK_CACHE #6457

Merged
lutter merged 6 commits into master from lutter/block-sim on Mar 27, 2026
Conversation

@lutter (Collaborator) commented Mar 26, 2026

We want to reduce the size of the block cache by not keeping old entries. This PR makes it possible to simulate a pruned block cache without actually deleting the cached data yet. It's an intermediate step to automatically pruning the block cache.

For each chain, there is now a setting cache_size that determines how many blocks behind chain head to cache. It defaults to 500 and must be larger than the reorg threshold. For now, this setting only has an effect when GRAPH_STORE_IGNORE_BLOCK_CACHE=true. In that case, the system behaves as if the block data had been purged but headers are still present.
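As a concrete illustration of the setting described above, here is a sketch of what a per-chain `cache_size` might look like in a graph-node `config.toml`. The chain name, shard, and provider details are placeholders, not taken from this PR:

```toml
[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
# Keep block data for the most recent 1000 blocks behind chain head.
# Defaults to 500 and must be larger than the reorg threshold. For now
# it only takes effect when GRAPH_STORE_IGNORE_BLOCK_CACHE=true.
cache_size = 1000
provider = [ { label = "mainnet", url = "http://localhost:8545", features = [] } ]
```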

@lutter lutter requested a review from incrypto32 March 26, 2026 01:15
@incrypto32 (Member) left a comment

Just a few minor nitpicks; feel free to ignore and merge as is.

```rust
let (shard, cache_size) = chains
    .get(&name)
    .cloned()
    .unwrap_or_else(|| (PRIMARY_SHARD.clone(), 500));
```
minor: This should use default_cache_size()

```diff
-shards: Vec<(String, Shard)>,
+shards: Vec<(String, Shard, BlockNumber)>,
+// Per-chain cache_size settings
+cache_sizes: HashMap<String, BlockNumber>,
```
very minor nit: `cache_sizes` duplicates data already in `shards`. It could be looked up with `.find()` or folded into a struct, but it's not a big deal; feel free to ignore.

```rust
);
let ident = chain.network_identifier()?;
let logger = self.logger.new(o!("network" => chain.name.clone()));
let cache_size = self.cache_sizes.get(&chain.name).copied().unwrap_or(500);
```
The magic number 500 is duplicated in a few places; better to make it a constant somewhere?
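A minimal sketch of the reviewer's suggestion: hoist the repeated literal into one shared constant that both fallback sites use. The constant name and the simplified settings map are illustrative, not taken from the PR:

```rust
use std::collections::HashMap;

type BlockNumber = i32;

// Hypothetical shared constant replacing the duplicated literal 500.
const DEFAULT_CACHE_SIZE: BlockNumber = 500;

fn main() {
    // Stand-in for the per-chain settings map from the PR.
    let cache_sizes: HashMap<String, BlockNumber> = HashMap::new();

    // Every fallback site now references the constant instead of
    // repeating the magic number.
    let cache_size = cache_sizes
        .get("mainnet")
        .copied()
        .unwrap_or(DEFAULT_CACHE_SIZE);

    assert_eq!(cache_size, 500);
    println!("cache_size = {cache_size}");
}
```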

Add a boolean env var that, when set, will cause block reads for blocks
outside the per-chain cache_size window to behave as if the data field
is null. This is the first step toward experimenting with reduced block
caching before the full block cache revamp.
@lutter lutter force-pushed the lutter/block-sim branch from 9bc9d48 to 3aaba10 on March 26, 2026 19:34
lutter added 3 commits March 26, 2026 16:04
Add a per-chain `cache_size` configuration parameter that controls
how many blocks from chain head to keep in the block cache. Defaults
to 500. Validated to be greater than reorg_threshold.
Pass the per-chain cache_size configuration from TOML config through
StoreBuilder and BlockStore to ChainStore, where it will be used
to determine which blocks should be treated as uncached.
When GRAPH_STORE_IGNORE_BLOCK_CACHE is set, block reads for blocks
that are more than cache_size blocks behind the chain head now behave
as if the block doesn't exist in the cache. This allows experimenting
with the effects of reduced block caching before the full block cache
revamp.
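The read-path behavior described in this commit can be sketched as a single predicate: a block is treated as uncached only when the env var is set and the block falls outside the `cache_size` window behind chain head. Function and parameter names here are illustrative; the real logic lives in `ChainStore`:

```rust
type BlockNumber = i32;

// `ignore_block_cache` stands in for GRAPH_STORE_IGNORE_BLOCK_CACHE;
// in graph-node it would come from the environment, not a parameter.
fn treat_as_uncached(
    ignore_block_cache: bool,
    chain_head: BlockNumber,
    cache_size: BlockNumber,
    block: BlockNumber,
) -> bool {
    ignore_block_cache && block < chain_head - cache_size
}

fn main() {
    let head = 10_000;
    let cache_size = 500;
    // Inside the window: served from the cache as before.
    assert!(!treat_as_uncached(true, head, cache_size, 9_800));
    // Outside the window: behaves as if the block data were purged.
    assert!(treat_as_uncached(true, head, cache_size, 9_000));
    // Env var unset: nothing changes.
    assert!(!treat_as_uncached(false, head, cache_size, 9_000));
    println!("ok");
}
```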
@lutter lutter force-pushed the lutter/block-sim branch from 3aaba10 to b438d8d on March 26, 2026 23:04
lutter added 2 commits March 26, 2026 16:37
Make ChainSection.cache_size the default for chains that don't set
cache_size explicitly. Chains deserialized without cache_size get 0 as
a sentinel; ChainSection::validate() then fills in the section-level
default before the reorg_threshold check.
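The defaulting scheme in this commit can be sketched as follows, under assumed struct and field names (the real types live in graph-node's config module): chains deserialized without `cache_size` carry the sentinel 0, and `validate()` substitutes the section-level default before enforcing the reorg-threshold check:

```rust
type BlockNumber = i32;

struct Chain {
    cache_size: BlockNumber, // 0 means "not set explicitly"
}

struct ChainSection {
    cache_size: BlockNumber, // section-level default
    chains: Vec<Chain>,
}

impl ChainSection {
    fn validate(&mut self, reorg_threshold: BlockNumber) -> Result<(), String> {
        for chain in &mut self.chains {
            if chain.cache_size == 0 {
                // Fill in the section-level default for the sentinel.
                chain.cache_size = self.cache_size;
            }
            // cache_size must be larger than the reorg threshold.
            if chain.cache_size <= reorg_threshold {
                return Err(format!(
                    "cache_size {} must be larger than reorg_threshold {}",
                    chain.cache_size, reorg_threshold
                ));
            }
        }
        Ok(())
    }
}

fn main() {
    let mut section = ChainSection {
        cache_size: 500,
        chains: vec![Chain { cache_size: 0 }, Chain { cache_size: 1000 }],
    };
    assert!(section.validate(400).is_ok());
    // The sentinel was replaced by the section default.
    assert_eq!(section.chains[0].cache_size, 500);
    println!("validated");
}
```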
@lutter lutter merged commit 192869f into master Mar 27, 2026
6 checks passed
@lutter lutter deleted the lutter/block-sim branch March 27, 2026 00:24
2 participants