HYPERFLEET-995 - refactor: move logic from handlers to service #124

kuudori wants to merge 3 commits into openshift-hyperfleet:main from …
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing …
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the … setting. Use the following commands to manage reviews, or the checkboxes below for quick actions.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID:
📒 Files selected for processing (34)
💤 Files with no reviewable changes (2)
✅ Files skipped from review due to trivial changes (2)
🚧 Files skipped from review as they are similar to previous changes (6)
Walkthrough

The diff removes Replace-style update methods from multiple DAOs and services and replaces inline replacement flows with patch-based APIs. ClusterService and NodePoolService gain Patch (NodePoolService also adds GetByIDAndOwner and ListByCluster). DAOs were refactored to take db.SessionFactory by value and use d.sessionFactory.New(ctx). Presenters no longer set CreatedBy/UpdatedBy. Handlers delegate ownership checks and patching to services and stop manual JSON marshaling and username threading. GenericService.List drops the username parameter. Status recomputation now uses JSON-equality and persists via Save/SaveAll. Tests and plugin wiring were updated to match new constructors and call sites.

Sequence Diagram(s)

sequenceDiagram
participant Client
participant Handler
participant Service
participant DAO
participant DB
Client->>Handler: PATCH /clusters/{id} with patch
Handler->>Service: Patch(ctx, id, patch)
Service->>DAO: GetForUpdate(ctx, id)
DAO->>DB: SELECT ... FOR UPDATE
DB-->>DAO: row
DAO-->>Service: cluster
Service->>Service: applyClusterPatch (marshal fields)
alt no Spec/Labels change
Service-->>Handler: cluster (no DB save)
else changed
Service->>DAO: Save(ctx, cluster)
DAO->>DB: UPDATE cluster
DB-->>DAO: updated row
DAO-->>Service: cluster
Service->>Service: update status from adapters
Service-->>Handler: updated cluster
end
Handler-->>Client: 200 / error
sequenceDiagram
participant Client
participant NPHandler as Handler
participant NPService as Service
participant NPDAO as DAO
participant DB
Client->>NPHandler: PATCH /clusters/{clusterID}/nodepools/{id}
NPHandler->>NPService: GetByIDAndOwner(ctx, id, clusterID)
NPService->>NPDAO: GetByIDAndOwner(ctx, id, clusterID)
NPDAO->>DB: SELECT nodepool WHERE id=? AND owner_id=?
DB-->>NPDAO: nodepool / not found
NPDAO-->>NPService: nodepool / not found
alt not found
NPService-->>NPHandler: NotFound
else found
NPHandler->>NPService: Patch(ctx, id, patch)
NPService->>NPDAO: GetForUpdate(ctx, id)
NPDAO->>DB: SELECT ... FOR UPDATE
DB-->>NPDAO: nodepool row
NPDAO-->>NPService: nodepool
NPService->>NPService: applyNodePoolPatch (marshal fields)
alt no Spec/Labels change
NPService-->>NPHandler: nodepool (no DB save)
else changed
NPService->>NPDAO: Save(ctx, nodepool)
NPDAO->>DB: UPDATE nodepool
DB-->>NPDAO: updated row
NPDAO-->>NPService: nodepool
NPService->>NPService: update status from adapters
NPService-->>NPHandler: updated nodepool
end
end
NPHandler-->>Client: 200 / 404 / error
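For orientation, below is a minimal Go sketch of the service-side Patch flow the diagrams above describe. The helper names (applyClusterPatch, handleUpdateError, UpdateClusterStatusFromAdapters) come from the review comments further down; the exact signatures, error helpers, and types are assumptions, not the actual implementation.

```go
// Sketch only: approximate shape of ClusterService.Patch as described above.
func (s *sqlClusterService) Patch(ctx context.Context, id string, patch *api.ClusterPatchRequest) (*api.Cluster, *errors.ServiceError) {
	cluster, err := s.clusterDao.GetForUpdate(ctx, id) // SELECT ... FOR UPDATE
	if err != nil {
		return nil, errors.NotFound("Cluster with id %s not found", id) // assumed error mapping
	}

	oldSpec, oldLabels := cluster.Spec, cluster.Labels
	if svcErr := applyClusterPatch(cluster, patch); svcErr != nil { // marshals patched fields onto the model
		return nil, svcErr
	}

	// Semantic no-op: skip the save and the status recomputation.
	if bytes.Equal(oldSpec, cluster.Spec) && bytes.Equal(oldLabels, cluster.Labels) {
		return cluster, nil
	}

	cluster.Generation++
	if saveErr := s.clusterDao.Save(ctx, cluster); saveErr != nil {
		return nil, handleUpdateError("Cluster", saveErr)
	}

	// Recompute status from adapter reports after the spec/labels change.
	if svcErr := s.UpdateClusterStatusFromAdapters(ctx, cluster.ID); svcErr != nil {
		return nil, svcErr
	}
	return cluster, nil
}
```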
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 2
🧹 Nitpick comments (1)
pkg/services/cluster.go (1)
98-110: ⚡ Quick win: Short-circuit semantic no-op patches.
When `spec` and `labels` are unchanged, this still does a full `Save` plus status recomputation. Returning early keeps PATCH idempotent and avoids unnecessary DB churn; `sqlNodePoolService.Patch` has the same pattern.

Proposed change:
```diff
-	if !bytes.Equal(oldSpec, cluster.Spec) || !bytes.Equal(oldLabels, cluster.Labels) {
-		cluster.Generation++
-	}
-
-	if saveErr := s.clusterDao.Save(ctx, cluster); saveErr != nil {
+	if bytes.Equal(oldSpec, cluster.Spec) && bytes.Equal(oldLabels, cluster.Labels) {
+		return cluster, nil
+	}
+
+	cluster.Generation++
+
+	if saveErr := s.clusterDao.Save(ctx, cluster); saveErr != nil {
 		return nil, handleUpdateError("Cluster", saveErr)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/services/cluster.go` around lines 98 - 110, Detect the no-op patch by checking bytes.Equal(oldSpec, cluster.Spec) and bytes.Equal(oldLabels, cluster.Labels) and short-circuit before mutating Generation, calling s.clusterDao.Save, or recomputing status; when both spec and labels are unchanged return the current cluster and nil immediately (mirroring sqlNodePoolService.Patch behavior). Modify the logic around the existing if that increments cluster.Generation and the subsequent s.clusterDao.Save / s.UpdateClusterStatusFromAdapters calls so the equality case returns early and only when there are changes proceed to increment Generation, call s.clusterDao.Save(ctx, cluster), and then s.UpdateClusterStatusFromAdapters(ctx, cluster.ID).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@pkg/handlers/cluster_nodepools.go`:
- Line 90: The handler calls to h.nodePoolService.GetForCluster (used at lines
where nodePool is fetched) must be preceded or accompanied by an explicit
cluster existence check so we don't operate on nodepools whose OwnerID
references a deleted cluster; add a call to the cluster lookup/validation (e.g.,
h.clusterService.Get or h.clusterService.Exists) using the same clusterID used
for scoped operations (the same pattern as ListByCluster), return the
appropriate not-found error if the cluster is missing, then proceed to call
GetForCluster; do this for all occurrences (the nodePool fetch at GetForCluster
invocations referenced in this file).
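A hedged sketch of the ordering that comment asks for, assuming a gorilla/mux-style handler and a generic error writer (handleError, the route variable names, and the exact service signatures are illustrative assumptions, not the repository's actual API):

```go
// Sketch: resolve the cluster first, then fetch the node pool scoped to it.
clusterID := mux.Vars(r)["cluster_id"]
nodePoolID := mux.Vars(r)["node_pool_id"]

// Fail with NotFound if the owning cluster is missing or deleted, so we never
// operate on node pools whose OwnerID points at a dead cluster.
if _, svcErr := h.clusterService.Get(ctx, clusterID); svcErr != nil {
	handleError(ctx, w, svcErr)
	return
}

nodePool, svcErr := h.nodePoolService.GetForCluster(ctx, nodePoolID, clusterID)
if svcErr != nil {
	handleError(ctx, w, svcErr)
	return
}
// ... continue with the existing handler flow using nodePool ...
```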
In `@pkg/services/node_pool.go`:
- Around line 145-149: The search merge in node_pool.go incorrectly appends the
cluster filter directly to args.Search (using args.Search and clusterID), which
breaks operator precedence; update the logic so when args.Search is non-empty
you set args.Search = "(" + args.Search + ") AND owner_id = ?" (or similar) and
pass clusterID as a parameter instead of embedding it inline, and when empty set
args.Search = "owner_id = ?" with clusterID parameterized; ensure any code that
executes the query uses the parameter placeholder and supplies clusterID to
avoid raw string interpolation and to correctly scope the cluster filter across
the entire expression.
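A small sketch of the parenthesized, parameterized merge described above; the ListArguments-style args struct and the way parameters are appended downstream are assumptions about the surrounding code:

```go
// Wrap the caller's search in parentheses so the owner filter applies to the
// whole expression, and bind clusterID as a parameter rather than inlining it.
const ownerFilter = "owner_id = ?"
if args.Search != "" {
	args.Search = "(" + args.Search + ") AND " + ownerFilter
} else {
	args.Search = ownerFilter
}
// clusterID must then be supplied as a bound query parameter alongside
// args.Search, e.g. appended to whatever argument slice the list query consumes.
```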
---
Nitpick comments:
In `@pkg/services/cluster.go`:
- Around line 98-110: Detect the no-op patch by checking bytes.Equal(oldSpec,
cluster.Spec) and bytes.Equal(oldLabels, cluster.Labels) and short-circuit
before mutating Generation, calling s.clusterDao.Save, or recomputing status;
when both spec and labels are unchanged return the current cluster and nil
immediately (mirroring sqlNodePoolService.Patch behavior). Modify the logic
around the existing if that increments cluster.Generation and the subsequent
s.clusterDao.Save / s.UpdateClusterStatusFromAdapters calls so the equality case
returns early and only when there are changes proceed to increment Generation,
call s.clusterDao.Save(ctx, cluster), and then
s.UpdateClusterStatusFromAdapters(ctx, cluster.ID).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 6beb0281-8dbe-4d84-acbd-4f6ca3763817
📒 Files selected for processing (17)
- pkg/dao/adapter_status.go
- pkg/dao/cluster.go
- pkg/dao/mocks/cluster.go
- pkg/dao/mocks/node_pool.go
- pkg/dao/node_pool.go
- pkg/handlers/cluster.go
- pkg/handlers/cluster_nodepools.go
- pkg/handlers/cluster_nodepools_test.go
- pkg/handlers/node_pool.go
- pkg/services/adapter_status.go
- pkg/services/cluster.go
- pkg/services/cluster_test.go
- pkg/services/node_pool.go
- pkg/services/node_pool_test.go
- pkg/services/status_helpers.go
- plugins/clusters/plugin.go
- plugins/nodePools/plugin.go
💤 Files with no reviewable changes (7)
- pkg/dao/node_pool.go
- pkg/dao/adapter_status.go
- pkg/dao/cluster.go
- pkg/dao/mocks/node_pool.go
- plugins/clusters/plugin.go
- pkg/services/adapter_status.go
- pkg/dao/mocks/cluster.go
"generation": gorm.Expr("generation + 1"),JIRA-995's acceptance criteria explicitly states: "Generation increment logic is removed from The Suggested fix: accept a pre-computed // DAO interface
SoftDeleteByOwner(ctx context.Context, ownerID string, t time.Time, deletedBy string, newGeneration int32) error
// implementation
Updates(map[string]interface{}{
"deleted_time": t,
"deleted_by": deletedBy,
"generation": newGeneration,
})The service layer (wherever Posted by review-pr skill |
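A hedged sketch of the service side of that split; the surrounding fields and variables are assumptions, only SoftDeleteByOwner's shape comes from the suggestion above:

```go
// The service owns the business rule (how the generation advances); the DAO
// merely persists the values it is given.
deletedTime := time.Now()
newGeneration := nodePool.Generation + 1 // increment decided here, not via gorm.Expr in the DAO

if err := s.nodePoolDao.SoftDeleteByOwner(ctx, nodePool.OwnerID, deletedTime, deletedBy, newGeneration); err != nil {
	return handleSoftDeleteError("NodePool", err)
}
```

Note that with a bulk owner-scoped UPDATE a single pre-computed value applies to every matching row; the later revision quoted further down sidesteps this by loading the node pools and incrementing each one in the service before SaveAll.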
| Where("resource_type = ? AND resource_id = ? AND adapter = ?", | ||
| adapterStatus.ResourceType, adapterStatus.ResourceID, adapterStatus.Adapter). | ||
| Where( | ||
| "observed_generation < ? OR (observed_generation = ? AND last_report_time < ?)", |
Should we consider this a business rule that belongs in the service layer?
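If the answer is yes, one hedged shape (names hypothetical) would be a pure predicate in the service layer, leaving the DAO as a plain upsert:

```go
// isNewerReport encodes the staleness rule currently baked into the DAO's WHERE
// clause: accept a report only if it advances the observed generation, or
// refreshes the same generation with a later report time.
func isNewerReport(existing, incoming *api.AdapterStatus) bool {
	if incoming.ObservedGeneration != existing.ObservedGeneration {
		return incoming.ObservedGeneration > existing.ObservedGeneration
	}
	return incoming.LastReportTime.After(existing.LastReportTime)
}
```

The trade-off is that the SQL version performs the check and the write atomically in one statement; a service-level check would need a row lock or transaction to stay race-free.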
Actionable comments posted: 2
🧹 Nitpick comments (1)
pkg/services/cluster_test.go (1)
1639-1731: ⚡ Quick win: Add a service-level test for patching a deleted cluster.

`ClusterService.Patch` now owns the soft-delete conflict check, but this suite only covers changed/unchanged/not-found. The handler test mocks that behavior, so a regression in the real service path would slip through here.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/services/cluster_test.go` around lines 1639 - 1731, Add a new subtest inside TestClusterPatch that verifies ClusterService.Patch returns a conflict for a soft-deleted cluster: create the service (NewClusterService) with test DAOs, insert a cluster into clusterDao.clusters with Meta.ID "c1" and a non-nil DeletedAt (or Deleted flag used by your model), call service.Patch(ctx, "c1", &api.ClusterPatchRequest{Spec: &someSpec}), and assert svcErr is non-nil and svcErr.HTTPCode equals 409; this ensures ClusterService.Patch's soft-delete conflict check is covered at the service level.
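A sketch of that subtest, following the steps in the prompt above; the mock DAO type, its clusters map, and the NewClusterService arguments are assumptions about the existing test fixtures:

```go
t.Run("patching a soft-deleted cluster returns 409", func(t *testing.T) {
	deletedAt := time.Now()
	clusterDao := &mockClusterDao{clusters: map[string]*api.Cluster{
		"c1": {Meta: api.Meta{ID: "c1", DeletedAt: &deletedAt}}, // soft-deleted row
	}}
	svc := NewClusterService(clusterDao /*, other test DAOs as in the sibling subtests */)

	_, svcErr := svc.Patch(context.Background(), "c1", &api.ClusterPatchRequest{Spec: &someSpec})

	if svcErr == nil || svcErr.HTTPCode != http.StatusConflict {
		t.Fatalf("expected 409 conflict for deleted cluster, got %v", svcErr)
	}
})
```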
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@pkg/services/cluster.go`:
- Around line 91-107: The Patch method currently forwards the incoming patch
pointer into applyClusterPatch without checking for nil, which can cause a
panic; update sqlClusterService.Patch to explicitly validate that the patch
argument is non-nil at the start (before calling applyClusterPatch or using
patch fields) and return a service error (e.g., errors.Validation("patch must be
provided") or similar) when nil is passed so callers receive a proper error
instead of panicking; touch the Patch function and any callers if needed to
ensure the nil-check happens at the service boundary before invoking
applyClusterPatch.
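The guard itself is a couple of lines; a sketch of the boundary check, using the errors.Validation helper the comment suggests (treat the helper as an assumption about the errors package):

```go
// At the top of sqlClusterService.Patch, before applyClusterPatch is invoked:
if patch == nil {
	return nil, errors.Validation("patch must be provided") // assumed helper, per the comment above
}
```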
In `@pkg/services/node_pool.go`:
- Around line 104-127: The row lock from nodePoolDao.GetForUpdate is lost
because Save uses a new DB session via sessionFactory.New(ctx); to fix, wrap the
entire method (the code using nodePoolDao.GetForUpdate, mutating nodePool,
IncrementGeneration, and calling nodePoolDao.Save) in a single DB transaction or
add a composite DAO method that performs GetForUpdate + update/save within the
same session/transaction so the SELECT ... FOR UPDATE lock persists; update the
NodePool update method (the block using nodePoolDao.GetForUpdate and
nodePoolDao.Save) and likewise adjust SoftDelete paths (the sections around the
calls at the same pattern) to use the same transactional/session context instead
of separate sessionFactory.New(ctx) calls.
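One hedged way to keep the lock, assuming d.sessionFactory.New(ctx) returns a *gorm.DB (uses gorm.io/gorm and gorm.io/gorm/clause; the composite method name PatchInTx and the mutate callback are illustrative, not existing APIs):

```go
// PatchInTx runs the locked read and the write on the same transaction, so the
// SELECT ... FOR UPDATE lock is held until the UPDATE commits.
func (d *sqlNodePoolDao) PatchInTx(ctx context.Context, id string, mutate func(*api.NodePool) error) (*api.NodePool, error) {
	var nodePool api.NodePool
	err := d.sessionFactory.New(ctx).Transaction(func(tx *gorm.DB) error {
		if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
			First(&nodePool, "id = ?", id).Error; err != nil {
			return err
		}
		if err := mutate(&nodePool); err != nil { // service-supplied patch/generation logic
			return err
		}
		return tx.Save(&nodePool).Error
	})
	if err != nil {
		return nil, err
	}
	return &nodePool, nil
}
```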
---
Nitpick comments:
In `@pkg/services/cluster_test.go`:
- Around line 1639-1731: Add a new subtest inside TestClusterPatch that verifies
ClusterService.Patch returns a conflict for a soft-deleted cluster: create the
service (NewClusterService) with test DAOs, insert a cluster into
clusterDao.clusters with Meta.ID "c1" and a non-nil DeletedAt (or Deleted flag
used by your model), call service.Patch(ctx, "c1",
&api.ClusterPatchRequest{Spec: &someSpec}), and assert svcErr is non-nil and
svcErr.HTTPCode equals 409; this ensures ClusterService.Patch's soft-delete
conflict check is covered at the service level.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: a6083079-1a3a-443e-8790-fdd853bffc6f
📒 Files selected for processing (28)
- pkg/api/cluster_types.go
- pkg/api/node_pool_types.go
- pkg/api/presenters/cluster.go
- pkg/api/presenters/cluster_test.go
- pkg/api/presenters/node_pool.go
- pkg/api/presenters/node_pool_test.go
- pkg/dao/CLAUDE.md
- pkg/dao/adapter_status.go
- pkg/dao/cluster.go
- pkg/dao/mocks/cluster.go
- pkg/dao/mocks/node_pool.go
- pkg/dao/node_pool.go
- pkg/handlers/cluster.go
- pkg/handlers/cluster_nodepools.go
- pkg/handlers/cluster_nodepools_test.go
- pkg/handlers/cluster_test.go
- pkg/handlers/node_pool.go
- pkg/handlers/validation.go
- pkg/services/adapter_status.go
- pkg/services/cluster.go
- pkg/services/cluster_test.go
- pkg/services/generic.go
- pkg/services/generic_test.go
- pkg/services/node_pool.go
- pkg/services/node_pool_test.go
- pkg/services/status_helpers.go
- plugins/clusters/plugin.go
- plugins/nodePools/plugin.go
💤 Files with no reviewable changes (5)
- plugins/clusters/plugin.go
- pkg/dao/mocks/cluster.go
- pkg/dao/adapter_status.go
- pkg/services/adapter_status.go
- pkg/dao/cluster.go
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/handlers/validation.go
```go
for _, np := range allNodePools {
	if np.DeletedTime == nil {
		np.MarkDeleted(deletedBy, deletedTime)
		np.IncrementGeneration()
	}
}

if svcErr := recomputeNodePoolConditions(ctx, allNodePools, s.adapterStatusDao, s.adapterConfig); svcErr != nil {
	return nil, svcErr
}

if err := s.nodePoolDao.SaveAll(ctx, allNodePools); err != nil {
	return nil, handleSoftDeleteError("NodePool", err)
}
```
Should this logic live in node_pool.go?
In the previous code, specifics like marking node pools for deletion or incrementing their generation were hidden from the cluster logic and kept internal to node pools; even the recomputation of the NodePool conditions should ideally stay internal to node pools.
Is it done here for performance reasons, since we have all the node pools available and can avoid doing Save operations one by one?
You're right, extracting this logic into some kind of BulkDeleteNodePools looks and reads better. I really didn't think about it at this point. Great catch, you have an eagle eye 🦅
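A hedged sketch of that extraction, lifting the quoted loop into the node pool service (the method name comes from the comment above; the receiver and parameters are assumptions):

```go
// BulkDeleteNodePools soft-deletes every node pool of a cluster in one pass,
// keeping MarkDeleted/IncrementGeneration/condition recompute internal to node pools.
func (s *sqlNodePoolService) BulkDeleteNodePools(ctx context.Context, nodePools []*api.NodePool, deletedBy string, deletedTime time.Time) *errors.ServiceError {
	for _, np := range nodePools {
		if np.DeletedTime == nil {
			np.MarkDeleted(deletedBy, deletedTime)
			np.IncrementGeneration()
		}
	}

	if svcErr := recomputeNodePoolConditions(ctx, nodePools, s.adapterStatusDao, s.adapterConfig); svcErr != nil {
		return svcErr
	}

	// Single SaveAll keeps the batched write that motivated doing this in cluster code.
	if err := s.nodePoolDao.SaveAll(ctx, nodePools); err != nil {
		return handleSoftDeleteError("NodePool", err)
	}
	return nil
}
```

The cluster service would then call this one method instead of reaching into node pool internals.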
Go interface values already behave like references. Storing `*db.SessionFactory` and dereferencing with `(*d.sessionFactory).New(ctx)` adds unnecessary indirection. Change all DAOs to store `db.SessionFactory` directly.
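A small sketch of the suggested change, assuming db.SessionFactory is an interface type (the DAO struct name and Get method are illustrative):

```go
// Before: pointer to an interface forces an extra dereference.
// type sqlClusterDao struct {
// 	sessionFactory *db.SessionFactory
// }
// g := (*d.sessionFactory).New(ctx)

// After: store the interface value directly.
type sqlClusterDao struct {
	sessionFactory db.SessionFactory
}

func (d *sqlClusterDao) Get(ctx context.Context, id string) (*api.Cluster, error) {
	g := d.sessionFactory.New(ctx) // the interface value already carries the concrete factory
	var cluster api.Cluster
	if err := g.First(&cluster, "id = ?", id).Error; err != nil {
		return nil, err
	}
	return &cluster, nil
}
```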
@coderabbitai review

✅ Actions performed: review triggered.
@CodeRabbit review

✅ Actions performed: review triggered.
Move Replace→Patch refactoring and cluster-nodepool ownership checks from handler layer into service layer for better separation of concerns.
Summary
Test Plan
- `make test-all` passes
- `make lint` passes
- `make test-helm` (if applicable)

Summary by CodeRabbit
Refactor
New Features / API
Validation
Documentation