HYPERFLEET-995 - refactor: move logic from handlers to service#124

Open
kuudori wants to merge 3 commits into openshift-hyperfleet:main from kuudori:HYPERFLEET-995

Conversation

Contributor

@kuudori kuudori commented May 4, 2026

Move the Replace→Patch refactoring and the cluster-nodepool ownership checks from the handler layer into the service layer for better separation of concerns.

  • Replace Replace() with Patch() on ClusterService and NodePoolService, accepting patch request structs instead of pre-mutated models
  • Move generation increment logic (spec/labels change detection) from DAO Replace into service Patch methods
  • Add GetForCluster() and ListByCluster() to NodePoolService, moving ownership verification and filtered listing from handler
  • Remove Replace() from all DAO and service interfaces (Cluster, NodePool, AdapterStatus) and their mocks
  • Inject ClusterDao and GenericService into NodePoolService constructor
  • Remove GenericService dependency from ClusterNodePoolsHandler
  • Add unit tests for Patch, GetForCluster, and ListByCluster
  • Fix stale comment referencing removed Replace method
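The generation-increment rule the bullets above move out of the DAO can be sketched in isolation. The `NodePool` struct and `applyPatch` name here are illustrative assumptions based on the PR description, not the actual code:

```go
package main

import (
	"bytes"
	"fmt"
)

// Illustrative stand-in for the persisted model; field names are
// assumptions based on the PR description, not the real struct.
type NodePool struct {
	Spec       []byte // raw JSON spec
	Labels     []byte // raw JSON labels
	Generation int32
}

// applyPatch mirrors the rule the PR moves into the service layer:
// bump Generation only when Spec or Labels actually change.
func applyPatch(np *NodePool, newSpec, newLabels []byte) {
	oldSpec, oldLabels := np.Spec, np.Labels
	if newSpec != nil {
		np.Spec = newSpec
	}
	if newLabels != nil {
		np.Labels = newLabels
	}
	if !bytes.Equal(oldSpec, np.Spec) || !bytes.Equal(oldLabels, np.Labels) {
		np.Generation++
	}
}

func main() {
	np := &NodePool{Spec: []byte(`{"size":3}`), Generation: 1}
	applyPatch(np, []byte(`{"size":3}`), nil) // semantic no-op
	fmt.Println(np.Generation)                // 1
	applyPatch(np, []byte(`{"size":5}`), nil) // real change
	fmt.Println(np.Generation)                // 2
}
```

Keeping this comparison in the service, rather than baking `generation + 1` into DAO SQL, is what makes it possible for a semantic no-op PATCH to skip the save entirely.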

Summary

  • HYPERFLEET-995

Test Plan

  • Unit tests added/updated
  • make test-all passes
  • make lint passes
  • Helm chart changes validated with make test-helm (if applicable)
  • Deployed to a development cluster and verified (if Helm/config changes)
  • E2E tests passed (if cross-component or major changes)

Summary by CodeRabbit

  • Refactor

    • Reworked replace→patch flows: clusters and nodepools now use Patch endpoints; soft-delete cascades to nodepools and updates generations/statuses.
  • New Features / API

    • NodePool and Cluster services expose Patch, owner-scoped Get, and cluster-scoped listing; generic list no longer accepts a username.
  • Validation

    • Added UUID path ID validation.
  • Documentation

    • Clarified generation increment responsibility and persistence guidance.

@openshift-ci openshift-ci Bot requested review from Mischulee and pnguyen44 May 4, 2026 18:41

openshift-ci Bot commented May 4, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign jsell-rh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


coderabbitai Bot commented May 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: db6fcc5a-4c15-43c6-a8cb-deffea072b59

📥 Commits

Reviewing files that changed from the base of the PR and between 48aeed4 and 8dc77ea.

📒 Files selected for processing (34)
  • pkg/api/adapter_status_types.go
  • pkg/api/cluster_types.go
  • pkg/api/node_pool_types.go
  • pkg/api/presenters/cluster.go
  • pkg/api/presenters/cluster_test.go
  • pkg/api/presenters/node_pool.go
  • pkg/api/presenters/node_pool_test.go
  • pkg/dao/CLAUDE.md
  • pkg/dao/adapter_status.go
  • pkg/dao/cluster.go
  • pkg/dao/generic.go
  • pkg/dao/mocks/cluster.go
  • pkg/dao/mocks/node_pool.go
  • pkg/dao/node_pool.go
  • pkg/handlers/cluster.go
  • pkg/handlers/cluster_nodepools.go
  • pkg/handlers/cluster_nodepools_test.go
  • pkg/handlers/cluster_test.go
  • pkg/handlers/node_pool.go
  • pkg/handlers/validation.go
  • pkg/services/adapter_status.go
  • pkg/services/cluster.go
  • pkg/services/cluster_test.go
  • pkg/services/generic.go
  • pkg/services/generic_test.go
  • pkg/services/node_pool.go
  • pkg/services/node_pool_test.go
  • pkg/services/status_helpers.go
  • pkg/services/util.go
  • pkg/services/util_test.go
  • plugins/adapterStatus/plugin.go
  • plugins/clusters/plugin.go
  • plugins/generic/plugin.go
  • plugins/nodePools/plugin.go
💤 Files with no reviewable changes (2)
  • pkg/services/adapter_status.go
  • pkg/dao/mocks/cluster.go
✅ Files skipped from review due to trivial changes (2)
  • pkg/dao/CLAUDE.md
  • pkg/handlers/node_pool.go
🚧 Files skipped from review as they are similar to previous changes (6)
  • pkg/api/adapter_status_types.go
  • pkg/handlers/validation.go
  • pkg/services/status_helpers.go
  • pkg/dao/cluster.go
  • pkg/dao/mocks/node_pool.go
  • pkg/services/node_pool.go

Walkthrough

The diff removes Replace-style update methods from multiple DAOs and services and replaces inline replacement flows with patch-based APIs. ClusterService and NodePoolService gain Patch (NodePoolService also adds GetByIDAndOwner and ListByCluster). DAOs were refactored to take db.SessionFactory by value and use d.sessionFactory.New(ctx). Presenters no longer set CreatedBy/UpdatedBy. Handlers delegate ownership checks and patching to services and stop manual JSON marshaling and username threading. GenericService.List drops the username parameter. Status recomputation now uses JSON-equality and persists via Save/SaveAll. Tests and plugin wiring were updated to match new constructors and call sites.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Handler
    participant Service
    participant DAO
    participant DB
    Client->>Handler: PATCH /clusters/{id} with patch
    Handler->>Service: Patch(ctx, id, patch)
    Service->>DAO: GetForUpdate(ctx, id)
    DAO->>DB: SELECT ... FOR UPDATE
    DB-->>DAO: row
    DAO-->>Service: cluster
    Service->>Service: applyClusterPatch (marshal fields)
    alt no Spec/Labels change
        Service-->>Handler: cluster (no DB save)
    else changed
        Service->>DAO: Save(ctx, cluster)
        DAO->>DB: UPDATE cluster
        DB-->>DAO: updated row
        DAO-->>Service: cluster
        Service->>Service: update status from adapters
        Service-->>Handler: updated cluster
    end
    Handler-->>Client: 200 / error
sequenceDiagram
    participant Client
    participant NPHandler as Handler
    participant NPService as Service
    participant NPDAO as DAO
    participant DB
    Client->>NPHandler: PATCH /clusters/{clusterID}/nodepools/{id}
    NPHandler->>NPService: GetByIDAndOwner(ctx, id, clusterID)
    NPService->>NPDAO: GetByIDAndOwner(ctx, id, clusterID)
    NPDAO->>DB: SELECT nodepool WHERE id=? AND owner_id=?
    DB-->>NPDAO: nodepool / not found
    NPDAO-->>NPService: nodepool / not found
    alt not found
        NPService-->>NPHandler: NotFound
    else found
        NPHandler->>NPService: Patch(ctx, id, patch)
        NPService->>NPDAO: GetForUpdate(ctx, id)
        NPDAO->>DB: SELECT ... FOR UPDATE
        DB-->>NPDAO: nodepool row
        NPDAO-->>NPService: nodepool
        NPService->>NPService: applyNodePoolPatch (marshal fields)
        alt no Spec/Labels change
            NPService-->>NPHandler: nodepool (no DB save)
        else changed
            NPService->>NPDAO: Save(ctx, nodepool)
            NPDAO->>DB: UPDATE nodepool
            DB-->>NPDAO: updated row
            NPDAO-->>NPService: nodepool
            NPService->>NPService: update status from adapters
            NPService-->>NPHandler: updated nodepool
        end
    end
    NPHandler-->>Client: 200 / 404 / error

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 46.58%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title clearly describes the main refactoring effort: moving logic from handlers to the service layer, specifically the Replace-to-Patch pattern.
  • Linked Issues Check (✅ Passed): check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check (✅ Passed): check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
pkg/services/cluster.go (1)

98-110: ⚡ Quick win

Short-circuit semantic no-op patches.

When spec and labels are unchanged, this still does a full Save plus status recomputation. Returning early keeps PATCH idempotent and avoids unnecessary DB churn; sqlNodePoolService.Patch has the same pattern.

Proposed change
-	if !bytes.Equal(oldSpec, cluster.Spec) || !bytes.Equal(oldLabels, cluster.Labels) {
-		cluster.Generation++
-	}
-
-	if saveErr := s.clusterDao.Save(ctx, cluster); saveErr != nil {
+	if bytes.Equal(oldSpec, cluster.Spec) && bytes.Equal(oldLabels, cluster.Labels) {
+		return cluster, nil
+	}
+
+	cluster.Generation++
+
+	if saveErr := s.clusterDao.Save(ctx, cluster); saveErr != nil {
 		return nil, handleUpdateError("Cluster", saveErr)
 	}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@pkg/services/cluster.go` around lines 98 - 110, Detect the no-op patch by
checking bytes.Equal(oldSpec, cluster.Spec) and bytes.Equal(oldLabels,
cluster.Labels) and short-circuit before mutating Generation, calling
s.clusterDao.Save, or recomputing status; when both spec and labels are
unchanged return the current cluster and nil immediately (mirroring
sqlNodePoolService.Patch behavior). Modify the logic around the existing if that
increments cluster.Generation and the subsequent s.clusterDao.Save /
s.UpdateClusterStatusFromAdapters calls so the equality case returns early and
only when there are changes proceed to increment Generation, call
s.clusterDao.Save(ctx, cluster), and then s.UpdateClusterStatusFromAdapters(ctx,
cluster.ID).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@pkg/handlers/cluster_nodepools.go`:
- Line 90: The handler calls to h.nodePoolService.GetForCluster (used at lines
where nodePool is fetched) must be preceded or accompanied by an explicit
cluster existence check so we don't operate on nodepools whose OwnerID
references a deleted cluster; add a call to the cluster lookup/validation (e.g.,
h.clusterService.Get or h.clusterService.Exists) using the same clusterID used
for scoped operations (the same pattern as ListByCluster), return the
appropriate not-found error if the cluster is missing, then proceed to call
GetForCluster; do this for all occurrences (the nodePool fetch at GetForCluster
invocations referenced in this file).

In `@pkg/services/node_pool.go`:
- Around line 145-149: The search merge in node_pool.go incorrectly appends the
cluster filter directly to args.Search (using args.Search and clusterID), which
breaks operator precedence; update the logic so when args.Search is non-empty
you set args.Search = "(" + args.Search + ") AND owner_id = ?" (or similar) and
pass clusterID as a parameter instead of embedding it inline, and when empty set
args.Search = "owner_id = ?" with clusterID parameterized; ensure any code that
executes the query uses the parameter placeholder and supplies clusterID to
avoid raw string interpolation and to correctly scope the cluster filter across
the entire expression.
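The parenthesize-and-parameterize fix described there can be sketched as a small pure function. `scopeToCluster` and its signature are hypothetical; only the pattern is the point:

```go
package main

import "fmt"

// scopeToCluster merges an optional user search expression with the
// cluster-ownership filter. Wrapping the existing expression in
// parentheses preserves operator precedence, and clusterID stays a
// bound parameter instead of being interpolated into the SQL string.
func scopeToCluster(search string, args []interface{}, clusterID string) (string, []interface{}) {
	if search == "" {
		return "owner_id = ?", append(args, clusterID)
	}
	// Without the parentheses, "a = ? OR b = ?" would bind as
	// "a = ? OR (b = ? AND owner_id = ?)" and leak rows from
	// other clusters.
	return "(" + search + ") AND owner_id = ?", append(args, clusterID)
}

func main() {
	q, params := scopeToCluster("name = ? OR region = ?",
		[]interface{}{"np-1", "us-east"}, "cluster-42")
	fmt.Println(q)           // (name = ? OR region = ?) AND owner_id = ?
	fmt.Println(len(params)) // 3
}
```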

---

Nitpick comments:
In `@pkg/services/cluster.go`:
- Around line 98-110: Detect the no-op patch by checking bytes.Equal(oldSpec,
cluster.Spec) and bytes.Equal(oldLabels, cluster.Labels) and short-circuit
before mutating Generation, calling s.clusterDao.Save, or recomputing status;
when both spec and labels are unchanged return the current cluster and nil
immediately (mirroring sqlNodePoolService.Patch behavior). Modify the logic
around the existing if that increments cluster.Generation and the subsequent
s.clusterDao.Save / s.UpdateClusterStatusFromAdapters calls so the equality case
returns early and only when there are changes proceed to increment Generation,
call s.clusterDao.Save(ctx, cluster), and then
s.UpdateClusterStatusFromAdapters(ctx, cluster.ID).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: 6beb0281-8dbe-4d84-acbd-4f6ca3763817

📥 Commits

Reviewing files that changed from the base of the PR and between 9798ec3 and 637ed1a.

📒 Files selected for processing (17)
  • pkg/dao/adapter_status.go
  • pkg/dao/cluster.go
  • pkg/dao/mocks/cluster.go
  • pkg/dao/mocks/node_pool.go
  • pkg/dao/node_pool.go
  • pkg/handlers/cluster.go
  • pkg/handlers/cluster_nodepools.go
  • pkg/handlers/cluster_nodepools_test.go
  • pkg/handlers/node_pool.go
  • pkg/services/adapter_status.go
  • pkg/services/cluster.go
  • pkg/services/cluster_test.go
  • pkg/services/node_pool.go
  • pkg/services/node_pool_test.go
  • pkg/services/status_helpers.go
  • plugins/clusters/plugin.go
  • plugins/nodePools/plugin.go
💤 Files with no reviewable changes (7)
  • pkg/dao/node_pool.go
  • pkg/dao/adapter_status.go
  • pkg/dao/cluster.go
  • pkg/dao/mocks/node_pool.go
  • plugins/clusters/plugin.go
  • pkg/services/adapter_status.go
  • pkg/dao/mocks/cluster.go

Comment thread pkg/handlers/cluster_nodepools.go Outdated
Comment thread pkg/services/node_pool.go
@rh-amarin
Contributor

pkg/dao/node_pool.go:102 — JIRA AC miss: SoftDeleteByOwner still increments generation in the DAO

SoftDeleteByOwner still contains:

"generation": gorm.Expr("generation + 1"),

HYPERFLEET-995's acceptance criteria explicitly state: "Generation increment logic is removed from pkg/dao/cluster.go and pkg/dao/node_pool.go", and the description singles out this function by name: "SoftDeleteByOwner() bakes generation + 1 directly into SQL. These are business rules that belong in the service layer."

The Replace paths in both DAOs were correctly cleaned up, but SoftDeleteByOwner was missed.

Suggested fix: accept a pre-computed newGeneration int32 parameter from the caller and use it directly, consistent with how Patch now handles generation:

// DAO interface
SoftDeleteByOwner(ctx context.Context, ownerID string, t time.Time, deletedBy string, newGeneration int32) error

// implementation
Updates(map[string]interface{}{
    "deleted_time": t,
    "deleted_by":   deletedBy,
    "generation":   newGeneration,
})

The service layer (wherever SoftDeleteByOwner is called) computes currentGeneration + 1 before passing it down.


Posted by review-pr skill

Comment thread pkg/services/node_pool.go
Comment thread pkg/handlers/cluster.go
Comment thread pkg/handlers/cluster_nodepools.go Outdated
Comment thread pkg/dao/adapter_status.go
Where("resource_type = ? AND resource_id = ? AND adapter = ?",
adapterStatus.ResourceType, adapterStatus.ResourceID, adapterStatus.Adapter).
Where(
"observed_generation < ? OR (observed_generation = ? AND last_report_time < ?)",
Contributor


Should we consider this a business rule that belongs in the service layer?


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
pkg/services/cluster_test.go (1)

1639-1731: ⚡ Quick win

Add a service-level test for patching a deleted cluster.

ClusterService.Patch now owns the soft-delete conflict check, but this suite only covers changed/unchanged/not-found. The handler test mocks that behavior, so a regression in the real service path would slip through here.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@pkg/services/cluster_test.go` around lines 1639 - 1731, Add a new subtest
inside TestClusterPatch that verifies ClusterService.Patch returns a conflict
for a soft-deleted cluster: create the service (NewClusterService) with test
DAOs, insert a cluster into clusterDao.clusters with Meta.ID "c1" and a non-nil
DeletedAt (or Deleted flag used by your model), call service.Patch(ctx, "c1",
&api.ClusterPatchRequest{Spec: &someSpec}), and assert svcErr is non-nil and
svcErr.HTTPCode equals 409; this ensures ClusterService.Patch's soft-delete
conflict check is covered at the service level.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@pkg/services/cluster.go`:
- Around line 91-107: The Patch method currently forwards the incoming patch
pointer into applyClusterPatch without checking for nil, which can cause a
panic; update sqlClusterService.Patch to explicitly validate that the patch
argument is non-nil at the start (before calling applyClusterPatch or using
patch fields) and return a service error (e.g., errors.Validation("patch must be
provided") or similar) when nil is passed so callers receive a proper error
instead of panicking; touch the Patch function and any callers if needed to
ensure the nil-check happens at the service boundary before invoking
applyClusterPatch.

In `@pkg/services/node_pool.go`:
- Around line 104-127: The row lock from nodePoolDao.GetForUpdate is lost
because Save uses a new DB session via sessionFactory.New(ctx); to fix, wrap the
entire method (the code using nodePoolDao.GetForUpdate, mutating nodePool,
IncrementGeneration, and calling nodePoolDao.Save) in a single DB transaction or
add a composite DAO method that performs GetForUpdate + update/save within the
same session/transaction so the SELECT ... FOR UPDATE lock persists; update the
NodePool update method (the block using nodePoolDao.GetForUpdate and
nodePoolDao.Save) and likewise adjust SoftDelete paths (the sections around the
calls at the same pattern) to use the same transactional/session context instead
of separate sessionFactory.New(ctx) calls.

---

Nitpick comments:
In `@pkg/services/cluster_test.go`:
- Around line 1639-1731: Add a new subtest inside TestClusterPatch that verifies
ClusterService.Patch returns a conflict for a soft-deleted cluster: create the
service (NewClusterService) with test DAOs, insert a cluster into
clusterDao.clusters with Meta.ID "c1" and a non-nil DeletedAt (or Deleted flag
used by your model), call service.Patch(ctx, "c1",
&api.ClusterPatchRequest{Spec: &someSpec}), and assert svcErr is non-nil and
svcErr.HTTPCode equals 409; this ensures ClusterService.Patch's soft-delete
conflict check is covered at the service level.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: a6083079-1a3a-443e-8790-fdd853bffc6f

📥 Commits

Reviewing files that changed from the base of the PR and between db88e17 and 48aeed4.

📒 Files selected for processing (28)
  • pkg/api/cluster_types.go
  • pkg/api/node_pool_types.go
  • pkg/api/presenters/cluster.go
  • pkg/api/presenters/cluster_test.go
  • pkg/api/presenters/node_pool.go
  • pkg/api/presenters/node_pool_test.go
  • pkg/dao/CLAUDE.md
  • pkg/dao/adapter_status.go
  • pkg/dao/cluster.go
  • pkg/dao/mocks/cluster.go
  • pkg/dao/mocks/node_pool.go
  • pkg/dao/node_pool.go
  • pkg/handlers/cluster.go
  • pkg/handlers/cluster_nodepools.go
  • pkg/handlers/cluster_nodepools_test.go
  • pkg/handlers/cluster_test.go
  • pkg/handlers/node_pool.go
  • pkg/handlers/validation.go
  • pkg/services/adapter_status.go
  • pkg/services/cluster.go
  • pkg/services/cluster_test.go
  • pkg/services/generic.go
  • pkg/services/generic_test.go
  • pkg/services/node_pool.go
  • pkg/services/node_pool_test.go
  • pkg/services/status_helpers.go
  • plugins/clusters/plugin.go
  • plugins/nodePools/plugin.go
💤 Files with no reviewable changes (5)
  • plugins/clusters/plugin.go
  • pkg/dao/mocks/cluster.go
  • pkg/dao/adapter_status.go
  • pkg/services/adapter_status.go
  • pkg/dao/cluster.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/handlers/validation.go

Comment thread pkg/services/cluster.go
Comment thread pkg/services/node_pool.go
Comment thread pkg/services/cluster.go Outdated
Comment on lines +160 to +174
for _, np := range allNodePools {
    if np.DeletedTime == nil {
        np.MarkDeleted(deletedBy, deletedTime)
        np.IncrementGeneration()
    }
}

if svcErr := recomputeNodePoolConditions(ctx, allNodePools, s.adapterStatusDao, s.adapterConfig); svcErr != nil {
    return nil, svcErr
}

if err := s.nodePoolDao.SaveAll(ctx, allNodePools); err != nil {
    return nil, handleSoftDeleteError("NodePool", err)
}

Contributor


Should this logic live in node_pool.go?
In the previous code, specifics like marking for deletion or incrementing the generation were hidden from the cluster logic, being internal to node_pools.

Even the recomputation of the NodePool conditions should ideally be internal to node_pools.

Is it done here for performance reasons, since we have all the nodepools here instead of doing Save operations one by one?

Contributor Author


You're right, extracting this logic into some kind of BulkDeleteNodePools looks and reads better. Really didn't think about it at this point, great, you have an eagle eye 🦅

kuudori added 3 commits May 7, 2026 08:42
Go interfaces are already reference types. Storing *db.SessionFactory
and dereferencing with (*d.sessionFactory).New(ctx) adds unnecessary
indirection. Change all DAOs to store db.SessionFactory directly.
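The commit's point can be shown with a toy example; `SessionFactory` here is a stand-in interface, not the project's real `db.SessionFactory`:

```go
package main

import "fmt"

// A Go interface value already holds a pointer to its concrete
// implementation, so there is no need to store *SessionFactory.
type SessionFactory interface {
	New(name string) string
}

type pgFactory struct{ dsn string }

func (f *pgFactory) New(name string) string { return f.dsn + "/" + name }

// The DAO stores the interface by value: calls read as
// d.sessionFactory.New(...), with no (*d.sessionFactory) dereference.
type dao struct {
	sessionFactory SessionFactory
}

func main() {
	d := dao{sessionFactory: &pgFactory{dsn: "postgres://db"}}
	fmt.Println(d.sessionFactory.New("clusters")) // postgres://db/clusters
}
```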
Contributor Author

kuudori commented May 7, 2026

@coderabbitai review


coderabbitai Bot commented May 7, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

rafabene commented May 8, 2026

@CodeRabbit review


coderabbitai Bot commented May 8, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.
