
perf: use autoresearch to reduce allocations #3225

Open
tac0turtle wants to merge 17 commits into main from autoresearch/block-perf-2026-04-04

Conversation

@tac0turtle
Contributor

@tac0turtle tac0turtle commented Apr 4, 2026

Overview

This PR used autoresearch to run an iterative optimization loop aimed at reducing allocations; each iteration's benchmark result and keep/discard decision is recorded below.

| Iter | Commit | allocs/op | B/op | ns/op | Decision | Change |
|---|---|---|---|---|---|---|
| 3 | a8b6352 | 78 | 22,996 | 33,091 | keep | Unsafe reinterpret cast of Txs to [][]byte in ApplyBlock — eliminates make([][]byte, n) allocation |
| 4 | 823aa62 | 77 | 20,276 | 32,480 | keep | Direct pb.Data serialization in DACommitment — avoids pruned Data wrapper and txsToByteSlices allocations |
| 5 | 0720b44 | 74 | 12,192 | 31,624 | keep | unsafe.Slice in Data.ToProto() — eliminates txsToByteSlices [][]byte allocation |
| 6 | — | 74 | 12,187 | 31,217 | discard | Reverted hand-written HashSlim wire encoder — produced different hashes than MarshalBinary |
| 7 | ccbc2e4 | 64 | 11,130 | 31,570 | keep | sync.Pool for protobuf message structs in MarshalBinary — eliminates 10 allocs per block |
| 8 | 805672e | 56 | 10,217 | 31,037 | keep | Pooled SignedHeader.MarshalBinary — reuse pb.SignedHeader/pb.Header/pb.Signer/pb.Version structs |

Summary by CodeRabbit

  • Tests

    • Added benchmark tests for header hashing performance and unit tests for data/state serialization semantics; updated a concurrency test for clarity.
  • Performance

    • Reduced memory allocations in transaction processing and data hashing via pooling and slice conversion optimizations.
  • Bug Fixes

    • Simplified state serialization to use direct binary marshalling and improved related error reporting.

Result: {"status":"keep","allocs_per_op":81,"bytes_per_op":25934,"ns_per_op":34001}
…allocations per block

Result: {"status":"keep","allocs_per_op":79,"bytes_per_op":25697,"ns_per_op":34147}
… make([][]byte, n) allocation

Result: {"status":"keep","allocs_per_op":78,"bytes_per_op":22996,"ns_per_op":33091}
…pper and txsToByteSlices allocations

Result: {"status":"keep","allocs_per_op":77,"bytes_per_op":20276,"ns_per_op":32480}
…allocation

Result: {"status":"keep","allocs_per_op":74,"bytes_per_op":12192,"ns_per_op":31624}
…10 allocs per block

Replace per-call allocation of pb.Header/pb.Version/pb.Data/pb.Metadata with
sync.Pool reuse in the hot MarshalBinary path. ToProto() API is unchanged —
only MarshalBinary is affected since it consumes the result immediately.

Metrics (100_txs benchmark):
- 74 → 64 allocs/op (-13.5%)
- ~12.1 → ~11.1 KB (-8.3%)
- ns/op roughly flat (~31.5 µs)
…Signer/pb.Version

Eliminates 4 struct allocations per signed header marshal: pb.SignedHeader,
pb.Header, pb.Version, pb.Signer. These are now borrowed from sync.Pool and
returned after proto.Marshal completes.

Metrics (100_txs benchmark):
- 64 → 56 allocs/op
- ~11KB → ~10.2KB
…er block

State.ToProto allocated pb.State + pb.Version + timestamppb.Timestamp per block.
MarshalBinary now pools those structs and returns the marshaled bytes directly.
pkg/store/batch.UpdateState switched from ToProto+proto.Marshal to MarshalBinary.
@coderabbitai
Contributor

coderabbitai bot commented Apr 4, 2026

Warning

Rate limit exceeded

@tac0turtle has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 22 minutes and 43 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 22 minutes and 43 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b3b1b7df-9a4b-4c7f-ac21-ab2c50f3958b

📥 Commits

Reviewing files that changed from the base of the PR and between f0bd024 and 64fd706.

📒 Files selected for processing (2)
  • types/hashing.go
  • types/serialization.go
📝 Walkthrough

Walkthrough

Adds pooled protobuf and sha256 objects, replaces protobuf marshalling with direct binary marshalling for State, introduces unsafe slice reinterpreting to avoid allocations in hashing and executor code, and adds tests/benchmarks and compile-time size guards.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Serialization & Pools**<br>`types/serialization.go`, `types/hashing.go` | Introduces sync.Pool instances for protobuf messages and sha256 hashers; rewrites many MarshalBinary/serialization paths to use pooled protobuf objects and pooled hashers. |
| **Executor unsafe conversion**<br>`block/internal/executing/executor.go` | Replaces per-tx []byte(tx) allocation loop with unsafe.Slice reinterpretation to produce [][]byte for executeTxsWithRetry. |
| **Store state marshalling**<br>`pkg/store/batch.go` | Removes proto dependency in this file and switches DefaultBatch.UpdateState to use state.MarshalBinary() instead of state.ToProto() + proto.Marshal. |
| **Tests & Benchmarks**<br>`types/serialization_test.go`, `types/hash_memo_bench_test.go` | Adds tests: TestDataToProtoCopiesOuterTxSlice, TestStateMarshalBinaryRoundTrip; adds three header hashing benchmarks exercising memoization behavior. |
| **Type guards**<br>`types/tx.go` | Adds compile-time size-equality guards using unsafe.Sizeof to assert Tx/[]byte size compatibility. |
| **Sync test minor change**<br>`pkg/sync/syncer_status_test.go` | Refactors concurrency test to use wg.Go(...) instead of manual Add(1)/Done() bookkeeping. |

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • chatton
  • randygrok

Poem

🐰 I pooled a hash, I pooled a proto bright,
I nudged some slices into shape just right,
Tests hop by, benchmarks hum in tune,
Fewer allocs beneath the moon,
A rabbit cheers for code that's light 🧡

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ⚠️ Warning | The PR description lacks a proper Overview section with context and rationale. It provides benchmark data but is missing structured explanation and proper formatting. | Add a structured Overview section explaining what autoresearch is, why these changes reduce allocations, and summarize the key optimization techniques used. |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title accurately reflects the main purpose of the PR: using automated research results to reduce memory allocations across multiple components. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@github-actions
Contributor

github-actions bot commented Apr 4, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
|---|---|---|---|---|
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Apr 13, 2026, 9:58 AM |

@claude
Contributor

claude bot commented Apr 4, 2026

Claude finished @tac0turtle's task in 3m 33s.


Code Review

This PR applies good allocation-reduction techniques. Below are findings organized by severity, followed by notes on what's well-done.


Critical

pkg/sync/syncer_status_test.go:36 — require.NoError inside a goroutine is unsafe

require.NoError calls t.FailNow(), which internally calls runtime.Goexit(). When called from a non-test goroutine this only exits that goroutine — the test itself continues and passes even if errors occurred. This was introduced by the refactor from go func() { defer wg.Done() }() to wg.Go(func(){}).

// Current (broken):
wg.Go(func() {
    _, err := status.startOnce(...)
    require.NoError(t, err)  // ← FailNow() here only kills this goroutine
})

// Fix: collect errors and assert on main goroutine
errs := make(chan error, 8)
for range 8 {
    wg.Go(func() {
        _, err := status.startOnce(...)
        errs <- err
    })
}
wg.Wait()
close(errs)
for err := range errs {
    require.NoError(t, err)
}



Major

block/internal/executing/executor.go:800-801 — rawTxs aliases data.Txs, mutation risk

rawTxs = unsafe.Slice((*[]byte)(unsafe.SliceData(data.Txs)), n)

rawTxs and data.Txs share the same backing array. The array elements are slice headers (pointer, len, cap). Assigning rawTxs[i] = someNewSlice inside the execution client would silently mutate data.Txs[i].

The old code:

rawTxs = make([][]byte, n)
for i, tx := range data.Txs { rawTxs[i] = []byte(tx) }

created an independent outer slice — inner byte data was still aliased (same as now), but outer-slice mutations were isolated.

The ExecuteTxs interface contract probably forbids modifying passed transactions, but this is an invisible contract made more fragile by unsafe. A defensive comment in the coreexecutor.Executor interface doc would at minimum capture this assumption; ideally the slice should still be independent.



types/hashing.go:134 — Silently ignored marshal error in consensus-critical DACommitment()

dBytes, _ := proto.Marshal(&pbData)

If marshaling fails, dBytes is nil/empty and DACommitment() returns a hash of empty bytes. Since this is used to set header.DataHash, which validators check for consensus, a silent failure here would cause the node to produce an incorrect block without any error surfacing. The function should at minimum return (Hash, error) or log+panic on failure.



Moderate

types/serialization.go:143-144 — pd.Txs unsafe alias is Reset()-safe but fragile

if d.Txs != nil {
    pd.Txs = unsafe.Slice((*[]byte)(unsafe.SliceData(d.Txs)), len(d.Txs))
}
bz, err := proto.Marshal(pd)
pd.Reset()          // sets pd.Txs = nil — does NOT touch d.Txs ✓
pbDataPool.Put(pd)

The sequence is safe today: proto.Marshal is synchronous, pd.Reset() only nils the pointer without touching d.Txs. However this is invisible to anyone reading the code. A comment explaining that pd.Txs is a read-only alias during the marshal, and that pd.Reset() safely breaks the alias, would prevent future maintainers from introducing a bug here (e.g., reordering statements).


types/serialization.go:273 — Redundant double-assignment of psh.Signer

psh.Signer = psi is set at line 250 (before the nil-pubkey check), then again at line 273 (inside the pubkey branch). The second assignment is a no-op. Minor but adds noise.


Minor

types/hash_memo_bench_test.go:35-42 — BenchmarkHeaderHash_Single doesn't invalidate between iterations

Since Hash() never writes to cachedHash (only MemoizeHash() does), each iteration already computes from scratch — the benchmark is correct. But the comment // cold cache is misleading because it implies caching matters here. Worth clarifying the comment or adding h.InvalidateHash() defensively so the benchmark stays correct if Hash() ever starts memoizing.


What's well done

types/tx.go compile-time layout guard — excellent pattern:

var (
    _ [unsafe.Sizeof(Tx{}) - unsafe.Sizeof([]byte{})]struct{}
    _ [unsafe.Sizeof([]byte{}) - unsafe.Sizeof(Tx{})]struct{}
)

This will fail to compile with a negative-size-array error if the slice-header sizes ever diverge. This is the right way to protect an unsafe reinterpretation.

SignedHeader.MarshalBinary() defer pattern (after the second CodeRabbit review) — single defer cleanly handles all return paths for the four pooled proto objects. Correct.

sha256Pool with explicit Reset() — in leafHashOpt, Reset() is called before the write, so a pooled hasher is never reused dirty and the pool is never handed back a dirty hasher. ✓

pkg/store/batch.go — collapsing two error paths (ToProto + proto.Marshal) into a single state.MarshalBinary() call is a clean simplification with improved error message.

TestDataToProtoCopiesOuterTxSlice — good regression test confirming ToProto() defensive copy semantics. Pair this with a test that verifies MarshalBinary()'s output is not affected by modifying d.Txs after the call returns (it won't be — Marshal is synchronous — but the test would serve as documentation).


@github-actions
Contributor

github-actions bot commented Apr 4, 2026

PR Preview Action v1.8.1

QR code for preview link

🚀 View preview at
https://evstack.github.io/docs-preview/pr-3225/

Built to branch main at 2026-04-04 10:13 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

@tac0turtle tac0turtle force-pushed the autoresearch/block-perf-2026-04-04 branch from 9148ccc to 2438dc0 Compare April 4, 2026 10:10
@codecov

codecov bot commented Apr 4, 2026

Codecov Report

❌ Patch coverage is 97.95918% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 62.20%. Comparing base (3abebca) to head (64fd706).
⚠️ Report is 2 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| types/serialization.go | 98.47% | 1 Missing and 1 partial ⚠️ |
| pkg/store/batch.go | 50.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3225      +/-   ##
==========================================
+ Coverage   61.67%   62.20%   +0.53%     
==========================================
  Files         120      120              
  Lines       12635    12799     +164     
==========================================
+ Hits         7793     7962     +169     
+ Misses       3968     3961       -7     
- Partials      874      876       +2     
| Flag | Coverage | Δ |
|---|---|---|
| combined | 62.20% <97.95%> | +0.53% ⬆️ |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@tac0turtle tac0turtle marked this pull request as ready for review April 9, 2026 15:08
@tac0turtle tac0turtle requested a review from julienrbrt April 9, 2026 15:08
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
types/hash_memo_bench_test.go (1)

35-42: Consider invalidating cache between iterations for consistent baseline.

In BenchmarkHeaderHash_Single, the header is created once outside the loop. While Hash() doesn't set cachedHash, if future refactoring changes this behavior, the benchmark would measure cached hits after the first iteration. Consider adding h.InvalidateHash() inside the loop for robustness:

♻️ Suggested improvement for robustness
 func BenchmarkHeaderHash_Single(b *testing.B) {
 	h := GetRandomHeader("bench-chain", GetRandomBytes(32))
 	b.ReportAllocs()
 	b.ResetTimer()
 	for b.Loop() {
+		h.InvalidateHash()
 		_ = h.Hash()
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/hash_memo_bench_test.go` around lines 35 - 42,
BenchmarkHeaderHash_Single currently constructs h once and calls h.Hash()
repeatedly; to ensure each iteration measures an uncached baseline, call
h.InvalidateHash() before each h.Hash() invocation (or recreate h inside the
loop using GetRandomHeader/GetRandomBytes) so any cachedHash is cleared; update
the loop in BenchmarkHeaderHash_Single to invoke h.InvalidateHash() (or
reinitialize h) on every iteration before calling h.Hash().
types/hashing.go (1)

128-135: Ignored marshal error could silently produce incorrect commitment.

Line 131 ignores the error from proto.Marshal. While this is consistent with the existing pattern in Hash() (line 120-121), a marshal failure would produce a hash of nil/empty bytes, potentially causing consensus issues. Consider at least logging failures in debug builds or returning an error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/hashing.go` around lines 128 - 135, DACommitment currently ignores the
error from proto.Marshal which can yield an incorrect commitment; change
DACommitment to surface marshal failures instead of silently hashing nil by
returning (Hash, error) and propagate this change to callers (mirror the new
signature across any places invoking DACommitment), or if changing signature is
impractical, check the error and log it via the package logger in a debug build
and return a zero-value Hash; locate DACommitment and the related Hash() usage
to implement proper error handling and ensure any callers handle the returned
error or the logged failure.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 12528a06-2158-4aad-8475-8990e96b4bcb

📥 Commits

Reviewing files that changed from the base of the PR and between 3abebca and 45e1951.

📒 Files selected for processing (7)
  • block/internal/executing/executor.go
  • pkg/store/batch.go
  • types/hash_memo_bench_test.go
  • types/hashing.go
  • types/serialization.go
  • types/serialization_test.go
  • types/tx.go

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/sync/syncer_status_test.go (1)

28-37: ⚠️ Potential issue | 🟠 Major

Move require.NoError out of worker goroutines.

require.NoError(t, err) inside wg.Go is unsafe because require calls t.FailNow(), which must only be called from the test goroutine—calling it from a spawned goroutine will only exit that goroutine, not the test. Collect errors in a channel and assert after wg.Wait() on the main test goroutine.

Suggested fix
 				started := make(chan struct{})
 				release := make(chan struct{})
 				var wg sync.WaitGroup
+				errs := make(chan error, 8)

 				for range 8 {
 					wg.Go(func() {
 						_, err := status.startOnce(func() error {
 							if calls.Add(1) == 1 {
 								close(started)
 							}
 							<-release
 							return nil
 						})
-						require.NoError(t, err)
+						errs <- err
 					})
 				}

 				<-started
 				close(release)
 				wg.Wait()
+				close(errs)
+
+				for err := range errs {
+					require.NoError(t, err)
+				}

 				require.Equal(t, int32(1), calls.Load())
 				require.True(t, status.isStarted())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/sync/syncer_status_test.go` around lines 28 - 37, Move the test assertion
out of the worker goroutine: replace the inline require.NoError(t, err) inside
the wg.Go anonymous function (the block that calls status.startOnce with the
calls/started/release coordination) by sending err to a buffered errors channel
(or storing it in a slice protected by a mutex), and after wg.Wait() (on the
main test goroutine) iterate the collected errors and call require.NoError for
each; keep the existing status.startOnce, calls, started, release and wg.Go
logic but ensure no t.FailNow()-style assertions run inside the spawned
goroutine.
🧹 Nitpick comments (2)
types/serialization.go (2)

252-291: Consider defer-based cleanup for maintainability.

The three exit paths each duplicate cleanup logic for four pooled objects. While the current implementation is correct and avoids defer overhead in this hot path, the verbosity increases maintenance risk if the function is modified later.

If future changes require additional exit paths, consider a helper pattern:

♻️ Optional refactor using defer
func (sh *SignedHeader) MarshalBinary() ([]byte, error) {
	psh := pbSignedHeaderPool.Get().(*pb.SignedHeader)
	ph := pbHeaderPool.Get().(*pb.Header)
	pv := pbVersionPool.Get().(*pb.Version)
	psi := pbSignerPool.Get().(*pb.Signer)
	
	defer func() {
		pv.Reset(); pbVersionPool.Put(pv)
		ph.Reset(); pbHeaderPool.Put(ph)
		psi.Reset(); pbSignerPool.Put(psi)
		psh.Reset(); pbSignedHeaderPool.Put(psh)
	}()
	
	// ... rest of logic with simple returns ...
}

Given this is a hot path where the defer overhead may matter, the current explicit approach is reasonable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/serialization.go` around lines 252 - 291, The cleanup for pooled
protobuf objects (ph, pv, psi, psh from pbHeaderPool, pbVersionPool,
pbSignerPool, pbSignedHeaderPool) is duplicated across multiple return paths in
SignedHeader.MarshalBinary; replace the repeated reset/put blocks with a single
deferred cleanup closure at the top of the function that calls ph.Reset();
pbHeaderPool.Put(ph), pv.Reset(); pbVersionPool.Put(pv), psi.Reset();
pbSignerPool.Put(psi), and psh.Reset(); pbSignedHeaderPool.Put(psh) so all
return paths can simply return bz, err without repeating cleanup logic.

142-144: Inconsistency with DACommitment nil handling.

This code defensively checks d.Txs != nil before the unsafe.Slice reinterpretation, which is good practice. However, DACommitment() in types/hashing.go:128-137 uses the same pattern without a nil check:

pbData := pb.Data{Txs: unsafe.Slice((*[]byte)(unsafe.SliceData(d.Txs)), len(d.Txs))}

While both are technically safe (when d.Txs is nil, unsafe.Slice(nil, 0) returns nil), the inconsistency could cause confusion during maintenance. Consider aligning the nil-check approach across both usages for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/serialization.go` around lines 142 - 144, DACommitment in
types/hashing.go currently builds pb.Data using unsafe.Slice without a nil check
for d.Txs; make it consistent with the pattern in types/serialization.go by
checking if d.Txs != nil before calling unsafe.Slice, and only set pbData.Txs
when non-nil (use the same conditional assignment used around pd.Txs in the
serialization code) so both places handle nil slices the same way.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 31a32c62-ce85-43f4-9ada-8db1607e112f

📥 Commits

Reviewing files that changed from the base of the PR and between 45e1951 and f0bd024.

📒 Files selected for processing (3)
  • pkg/sync/syncer_status_test.go
  • types/hashing.go
  • types/serialization.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • types/hashing.go

Member

@julienrbrt julienrbrt left a comment


I think Claude's comments make sense.

