This guide explains how to write E2E tests for HyperFleet.
Tests are organized by resource type:
e2e/
├── e2e.go # Test suite registration
├── cluster/
│ └── creation.go # Cluster lifecycle tests
└── nodepool/
└── creation.go # NodePool lifecycle tests
Start by reading existing tests to understand the patterns:
- `e2e/cluster/creation.go` - Cluster creation example
- `e2e/nodepool/creation.go` - NodePool creation example
Test payloads are stored in testdata/payloads/:
testdata/payloads/
├── clusters/
│ └── cluster-request.json # resource cluster payload
└── nodepools/
└── nodepool-request.json # resource nodepool payload
Payload files support Go template syntax for dynamic values. This prevents naming conflicts when running tests multiple times in long-running environments.
Example (testdata/payloads/clusters/cluster-request.json):
{
"kind": "Cluster",
"name": "hp-cluster-{{.Random}}",
"labels": {
"environment": "production",
"created-at": "{{.Timestamp}}"
},
"spec": { ... }
}

Each time the payload is loaded, template variables are replaced with fresh values, ensuring unique resource names. See `pkg/client/payload.go` for the available template variables.
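To illustrate how such a payload might be rendered, here is a minimal stdlib sketch. Note that `templateVars`, `renderPayload`, and the exact formats of `Random` and `Timestamp` are assumptions for illustration, not the actual `pkg/client/payload.go` API:

```go
package main

import (
	"bytes"
	"fmt"
	"math/rand"
	"text/template"
	"time"
)

// templateVars is a hypothetical stand-in for the variables exposed
// by pkg/client/payload.go to payload templates.
type templateVars struct {
	Random    string
	Timestamp string
}

// renderPayload parses the raw payload as a Go template and fills in
// fresh values, so each load yields a unique resource name.
func renderPayload(raw string) (string, error) {
	tmpl, err := template.New("payload").Parse(raw)
	if err != nil {
		return "", err
	}
	vars := templateVars{
		Random:    fmt.Sprintf("%06d", rand.Intn(1000000)), // fresh per load
		Timestamp: time.Now().UTC().Format(time.RFC3339),
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, vars); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderPayload(`{"name": "hp-cluster-{{.Random}}", "created-at": "{{.Timestamp}}"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```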
- File extension: Use `.go` (NOT `_test.go`)
- File name: Descriptive, e.g., `creation.go`, `lifecycle.go`
- Location: Under `e2e/{resource-type}/`
package cluster
import (
"context"
"github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/api/openapi"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/client"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/helper"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/labels"
)
var testName = "[Suite: cluster] Create Cluster via API"
var _ = ginkgo.Describe(testName,
ginkgo.Label(labels.Tier0),
func() {
var h *helper.Helper
var clusterID string
ginkgo.BeforeEach(func() {
h = helper.New()
})
ginkgo.It("should create cluster successfully", func(ctx context.Context) {
ginkgo.By("submitting cluster creation request")
cluster, err := h.Client.CreateClusterFromPayload(ctx, "testdata/payloads/clusters/cluster-request.json")
Expect(err).NotTo(HaveOccurred())
clusterID = *cluster.Id
ginkgo.By("waiting for cluster to become Reconciled")
err = h.WaitForClusterCondition(ctx, clusterID, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue, h.Cfg.Timeouts.Cluster.Reconciled)
Expect(err).NotTo(HaveOccurred())
})
ginkgo.AfterEach(func(ctx context.Context) {
if h == nil || clusterID == "" {
return
}
if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err)
}
})
},
)

var lifecycleTestName = "[Suite: cluster] Full Cluster Creation Flow"

- Format: `[Suite: component] Description`
- Suite represents the HyperFleet component being tested (cluster, nodepool, api, adapter, etc.)
- Use clear, descriptive names
All tests must use labels for categorization. See pkg/labels/labels.go for complete definitions.
Required labels (1):
- Severity: `Tier0` | `Tier1` | `Tier2`

Optional labels:
- Scenario: `Negative` | `Performance`
- Functionality: `Upgrade`
- Constraint: `Disruptive` | `Slow`
Example:
import "github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/labels"
var testName = "[Suite: cluster] Full Cluster Creation Flow"
var _ = ginkgo.Describe(testName,
ginkgo.Label(labels.Tier0),
func() { ... }
)

Example with optional labels:
// Negative test case with slow execution
var _ = ginkgo.Describe(testName,
ginkgo.Label(labels.Tier1, labels.Negative, labels.Slow),
func() { ... }
)

ginkgo.BeforeEach(func() {
    h = helper.New()
})

- Create Helper instance (automatically loads configuration)
- Initialize test context
ginkgo.By("submitting cluster creation request")
// ... perform action
ginkgo.By("waiting for cluster to become Reconciled")
// ... wait for condition
ginkgo.By("verifying adapter conditions")
// ... verify conditions

- Use `ginkgo.By()` to mark major test steps
- Makes test output readable
- DO NOT use `ginkgo.By()` inside `Eventually` closures
ginkgo.AfterEach(func(ctx context.Context) {
if h == nil || clusterID == "" {
return
}
if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err)
}
})

- Clean up resources after the test
- Skip cleanup if the helper was not initialized or no cluster was created
- Log cleanup failures as warnings
// Basic assertions
Expect(err).NotTo(HaveOccurred())
Expect(*cluster.Id).NotTo(BeEmpty())
Expect(h.HasResourceCondition(cluster.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)).To(BeTrue())
// Eventually for async operations
Eventually(func(g Gomega) {
cluster, err := h.Client.GetCluster(ctx, clusterID)
g.Expect(err).NotTo(HaveOccurred())
g.Expect(h.HasResourceCondition(cluster.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)).To(BeTrue())
}, h.Cfg.Timeouts.Cluster.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())

Important: Inside `Eventually` closures, use `g.Expect()` instead of `Expect()`.
err = h.WaitForClusterCondition(ctx, clusterID, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue, h.Cfg.Timeouts.Cluster.Reconciled)
Expect(err).NotTo(HaveOccurred())

statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
Expect(err).NotTo(HaveOccurred())
for _, adapter := range statuses.Items {
hasApplied := h.HasCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.True)
Expect(hasApplied).To(BeTrue())
}

- Use descriptive test names and labels
- Mark major steps with `ginkgo.By()`
- Use `Eventually` for async operations
- Clean up resources in `AfterEach`
- Use timeout values from config
- Store resource IDs for cleanup
- Use helper functions when available

- Don't use the `_test.go` suffix (use `.go`)
- Don't use `ginkgo.By()` inside `Eventually` closures
- Don't hardcode timeouts (use config values)
- Don't skip cleanup (unless debugging)
- Don't ignore errors
# For cluster tests
touch e2e/cluster/my-new-test.go
# For nodepool tests
touch e2e/nodepool/my-new-test.go

Copy from existing tests and modify:
- Change test name and ID
- Update labels
- Implement test logic
- Add cleanup
Tests are automatically registered via the package import in e2e/e2e.go:
package e2e
import (
_ "github.com/openshift-hyperfleet/hyperfleet-e2e/e2e/cluster"
_ "github.com/openshift-hyperfleet/hyperfleet-e2e/e2e/nodepool"
)

No need to manually register tests.
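The blank imports work because `ginkgo.Describe` runs during package initialization and adds the spec to a global suite, so merely linking the package is enough. A stripped-down stdlib illustration of the same registration pattern (`registry` and `describe` are hypothetical stand-ins, not the real Ginkgo internals):

```go
package main

import "fmt"

// registry plays the role of Ginkgo's global spec registry.
var registry []string

// describe appends a spec at package-init time, the way ginkgo.Describe
// registers a test when its package is initialized.
func describe(name string) bool {
	registry = append(registry, name)
	return true
}

// Package-level vars are evaluated before main runs, so merely linking
// the package (e.g. via a blank import) registers its tests.
var _ = describe("[Suite: cluster] Create Cluster via API")
var _ = describe("[Suite: nodepool] Create NodePool via API")

func main() {
	fmt.Printf("%d specs registered\n", len(registry))
}
```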
# Run all cluster tests
make build
./bin/hyperfleet-e2e test --focus "\[Suite: cluster\]"
# Run specific test by description
./bin/hyperfleet-e2e test --focus "Create Cluster via API"
# Or run by label
./bin/hyperfleet-e2e test --label-filter "Tier0 && !Slow"

cluster, err := h.Client.CreateClusterFromPayload(ctx, "testdata/payloads/clusters/cluster-request.json")
Expect(err).NotTo(HaveOccurred())

Eventually(func(g Gomega) {
cluster, err := h.Client.GetCluster(ctx, clusterID)
g.Expect(err).NotTo(HaveOccurred())
g.Expect(h.HasResourceCondition(cluster.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)).To(BeTrue())
}, timeout, pollInterval).Should(Succeed())

statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
Expect(err).NotTo(HaveOccurred())
for _, adapter := range statuses.Items {
adapterName := adapter.Adapter
ginkgo.By(fmt.Sprintf("verifying adapter %s conditions", adapterName))
hasApplied := h.HasCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.True)
Expect(hasApplied).To(BeTrue(), "adapter %s should have Applied=True", adapterName)
hasAvailable := h.HasCondition(adapter.Conditions, client.ConditionTypeAvailable, openapi.True)
Expect(hasAvailable).To(BeTrue(), "adapter %s should have Available=True", adapterName)
}

While in development, it is common to use custom images for components (api, sentinel, adapters) instead of the CI images.
It is also convenient to use RabbitMQ, to avoid dealing with GCP credentials for Pub/Sub.
RabbitMQ has to be installed beforehand; you can use the hyperfleet-infra repository and execute:
make install-rabbitmq NAMESPACE=rabbitmq
Then you can deploy the e2e test components with RabbitMQ support and custom images by executing:
SENTINEL_BROKER_RABBITMQ_URL="amqp://guest:guest@rabbitmq.rabbitmq:5672" \
ADAPTER_BROKER_RABBITMQ_URL="amqp://guest:guest@rabbitmq.rabbitmq:5672" \
SENTINEL_BROKER_TYPE=rabbitmq \
ADAPTER_BROKER_TYPE=rabbitmq \
./deploy-scripts/deploy-clm.sh --action install \
--namespace <your-namespace> \
--image-registry quay.io/<your-user> \
--api-image-repo hyperfleet-api \
--api-image-tag <dev-xxx> \
--sentinel-image-repo hyperfleet-sentinel \
--sentinel-image-tag <dev-yyy> \
--adapter-image-repo hyperfleet-adapter \
--adapter-image-tag <dev-zzz> \
--api-base-url http://hyperfleet-api:8000 \
--api-adapters-cluster cl-namespace,cl-maestro,cl-deployment,cl-job \
--api-adapters-nodepool np-configmap \
--cluster-tier0-adapters cl-namespace,cl-maestro,cl-deployment,cl-job,cl-invalid-resource,cl-precondition-error \
--nodepool-tier0-adapters np-configmap
- Architecture: Understand the framework design in Architecture
- Configuration: Customize behavior in Configuration Reference
- Debug Tests: Learn debugging techniques in Troubleshooting Guide
- CLI Reference: Full command documentation in CLI Reference