
Update cloud metadata query to use host.id #17981

Merged
jmmcorreia merged 2 commits into elastic:main from jmmcorreia:ec2_issue
Apr 14, 2026

Conversation

Contributor

@jmmcorreia jmmcorreia commented Mar 23, 2026

Proposed commit message

Fixes #15013

The cloud metadata dashboard was designed to use the attribute cloud.instance.id. In the kube-stack samples, this attribute was added manually by copying the host.id value.
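The manual mapping described above can be sketched roughly as follows. This is illustrative only: it assumes the OpenTelemetry transform processor, and the actual kube-stack sample configuration may differ.

```yaml
processors:
  # Hypothetical workaround: copy host.id into cloud.instance.id so the
  # old dashboard query finds a value. Made unnecessary by this PR.
  transform/cloud_id:
    metric_statements:
      - context: resource
        statements:
          - set(attributes["cloud.instance.id"], attributes["host.id"])
```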

However, when following the content package docs or using any of the other OTLP samples in the repo, this attribute was not added by a processor, so the dashboard failed due to the missing value.

Since all cloud provider detectors in the resourcedetectionprocessor emit the host.id value defined in the semantic conventions, the proposal is to rely on that value instead, simplifying the configuration and ensuring compatibility in most scenarios.
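The difference between the two query keys can be illustrated with a small sketch. The sample documents below are hypothetical; the attribute names (host.id, cloud.instance.id) are the ones discussed in this PR.

```python
# Hypothetical resource-attribute sets for two ingest paths.
kube_stack_sample = {  # cloud.instance.id added manually by a processor
    "host.id": "i-0abc123",
    "cloud.instance.id": "i-0abc123",
    "cloud.provider": "aws",
}
plain_otlp_sample = {  # no processor adds cloud.instance.id
    "host.id": "i-0abc123",
    "cloud.provider": "aws",
}

def old_query(doc):
    # Old dashboard behavior: requires cloud.instance.id to be present.
    return "cloud.instance.id" in doc

def new_query(doc):
    # New dashboard behavior: relies on host.id, which every cloud
    # provider detector in resourcedetection emits.
    return "host.id" in doc

print(old_query(plain_otlp_sample))  # False -> old dashboard breaks
print(new_query(plain_otlp_sample))  # True  -> new dashboard works
```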

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file.
  • I have verified that Kibana version constraints are current according to guidelines.
  • I have verified that any added dashboard complies with Kibana's Dashboard good practices.

How to test this PR locally

To validate the PR, the AWS EC2 metadata mock was used: https://github.com/aws/amazon-ec2-metadata-mock

Docker command: docker run -it --rm -p 1338:1338 public.ecr.aws/aws-ec2/amazon-ec2-metadata-mock:v1.13.0

Then, the OTel Collector was run with the following config:

receivers:
  hostmetrics/system:
    collection_interval: 60s
    scrapers:
      disk:
      filesystem:
      cpu:
        metrics:
          system.cpu.utilization: { enabled: true }
          system.cpu.logical.count: { enabled: true }
      memory:
        metrics:
          system.memory.utilization: { enabled: true }
      network:
      processes:
      load:


processors:
  resourcedetection:
    detectors: ["system", "ec2"]
    ec2:
    system:
      hostname_sources: ["os"]
      resource_attributes:
        host.name: { enabled: true }
        host.arch: { enabled: true }
        host.ip: { enabled: true }
        host.mac: { enabled: true }
        os.description: { enabled: true }
        os.type: { enabled: true }

exporters:
  elasticsearch:
    endpoints: ["https://127.0.0.1:9200"]
    user: "elastic"
    password: "changeme"
    tls:
      insecure_skip_verify: true

service:
  pipelines:
    metrics/hostmetrics:
      receivers: [hostmetrics/system]
      processors: [resourcedetection]
      exporters: [elasticsearch]

The following environment variable was also set, pointing the EC2 detector at the mock:

export AWS_EC2_METADATA_SERVICE_ENDPOINT=http://127.0.0.1:1338
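The effect of this variable can be sketched as follows. This is a simplified illustration, not the detector's actual code; the default below is the well-known EC2 IMDS address, and the override variable is the one set above.

```python
import os

DEFAULT_IMDS = "http://169.254.169.254"  # real-instance IMDS address

def imds_endpoint(env=None):
    # Resolve the metadata endpoint: the override wins when set,
    # otherwise fall back to the standard IMDS address.
    env = os.environ if env is None else env
    return env.get("AWS_EC2_METADATA_SERVICE_ENDPOINT", DEFAULT_IMDS)

print(imds_endpoint({"AWS_EC2_METADATA_SERVICE_ENDPOINT": "http://127.0.0.1:1338"}))
# -> http://127.0.0.1:1338, so the detector queries the local mock
```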

Related issues

@jmmcorreia jmmcorreia requested a review from a team as a code owner March 23, 2026 16:17
@jmmcorreia jmmcorreia added the enhancement New feature or request label Mar 23, 2026
@andrewkroh andrewkroh added the dashboard (Relates to a Kibana dashboard bug, enhancement, or modification) and Integration:system_otel (System OpenTelemetry Assets) labels Mar 23, 2026
@jmmcorreia jmmcorreia requested a review from rogercoll March 25, 2026 15:45
@elasticmachine

💚 Build Succeeded

@jmmcorreia jmmcorreia merged commit cb9546d into elastic:main Apr 14, 2026
9 checks passed


Development

Successfully merging this pull request may close these issues.

[system_otel]: Host cloud metadata fails even with cloud resource detectors

4 participants