
[spanmetricsprocessor] latency_bucket histogram +Inf label inconsistency #2838

Closed
yeya24 opened this issue Mar 24, 2021 · 9 comments
Labels: bug, processor/spanmetrics, Stale, closed as inactive

Comments

@yeya24
Contributor

yeya24 commented Mar 24, 2021

Describe the bug

When using the spanmetrics processor, if the exporter is prometheus, the latency_bucket metric ends up with buckets for both +Inf and Math.MaxFloat64.

(Screenshot from 2021-03-23 23-16-14: prometheus exporter output showing both the Math.MaxFloat64 and +Inf buckets.)

If the exporter is prometheusremotewrite, the latency_bucket metric doesn't contain a +Inf bucket at all.

(Screenshot from 2021-03-23 23-14-33: prometheusremotewrite output with no +Inf bucket.)
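In the OTLP data model, an explicit-bounds histogram with N bounds carries N+1 bucket counts; the last count is the implicit overflow, which Prometheus normally renders as le="+Inf". The rough Go sketch below is illustrative only (not the exporters' actual code) and shows how an extra math.MaxFloat64 bound produces the duplicate-looking bucket shown above; the bucket values mirror latency_histogram_buckets from the config in the reproduction steps below.

package main

import (
    "fmt"
    "math"
)

// toPromBuckets renders cumulative Prometheus-style le labels from an
// OTLP-style explicit-bounds histogram. Illustrative sketch only.
func toPromBuckets(bounds []float64, counts []uint64) {
    // OTLP invariant: len(counts) == len(bounds)+1; the final count is the
    // overflow bucket, which Prometheus exposes as le="+Inf".
    var cum uint64
    for i, b := range bounds {
        cum += counts[i]
        fmt.Printf("latency_bucket{le=%q} %d\n", fmt.Sprint(b), cum)
    }
    cum += counts[len(bounds)]
    fmt.Printf("latency_bucket{le=\"+Inf\"} %d\n", cum)
}

func main() {
    // Bounds from the reproduction config (assumed here to be exported in
    // milliseconds), with math.MaxFloat64 appended as a catch-all bound --
    // the pattern this issue describes. The counts are made up for illustration.
    bounds := []float64{2, 6, 10, 100, 250, math.MaxFloat64}
    counts := []uint64{1, 2, 3, 4, 5, 6, 0}
    toPromBuckets(bounds, counts)
    // The output ends with both le="1.7976931348623157e+308" and le="+Inf".
}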

Steps to reproduce

Use the config below

receivers:
  # Dummy receiver that's never used, because a pipeline is required to have one.
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: "localhost:12345"

  otlp:
    protocols:
      grpc:
      http:

processors:
  spanmetrics:
    metrics_exporter: prometheusremotewrite
    latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms]
    dimensions:
      - name: namespace
      - name: reason
#      - name: http.method
#        default: GET

exporters:
  otlp/spanmetrics:
    endpoint: "localhost:55680"
    insecure: true

  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample

  prometheusremotewrite:
    endpoint: "http://localhost:9090/api/v1/write"

  logging:
    loglevel: debug

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics]
      exporters: [logging]

    # This pipeline is required by the spanmetrics processor. The receiver is
    # just a dummy and never used; it only exists to pass validation, which
    # requires at least one receiver in a pipeline. The exporter list must
    # contain the exporter named in metrics_exporter.
    metrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheusremotewrite, logging]

What did you expect to see?

Ideally, the bucket for Math.MaxFloat64 shouldn't exist; I just want to see +Inf as the upper bound.


What version did you use?
Version: 0.23.0

What config did you use?
Config: see the yaml under "Steps to reproduce" above.



yeya24 added the bug label Mar 24, 2021
@bogdansandu

I can reproduce this as well with spanmetricsprocessor v0.21.0.

@bogdansandu

Any updates on this issue?

@yeya24
Contributor Author

yeya24 commented Apr 6, 2021

@bogdansandu I'm not sure, but I don't think anyone is working on it right now. Feel free to take it, or we can discuss ways to fix this problem.

@albertteoh
Contributor

@bogdandrutu should the spanmetrics processor even create a Math.MaxFloat64 bucket as an overflow? Or can we assume everything else will correctly fall into the +Inf bucket for both prometheus and prometheusremotewrite exporter?
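A rough Go sketch of the two options being weighed here, assuming, as the question implies, that the processor currently appends math.MaxFloat64 to the configured bounds; this is an illustration, not the spanmetrics processor's actual code:

package spanmetricsdemo

import "math"

// buildBounds is a hypothetical helper contrasting the two behaviours
// discussed in this thread; it is not the processor's real implementation.
func buildBounds(configuredMs []float64, appendCatchAll bool) []float64 {
    bounds := append([]float64(nil), configuredMs...)
    if appendCatchAll {
        // Behaviour described in this issue: an explicit math.MaxFloat64
        // bound, which the prometheus exporter then renders as its own
        // le bucket next to the implicit le="+Inf".
        return append(bounds, math.MaxFloat64)
    }
    // Alternative: keep only the configured bounds and let the OTLP overflow
    // count (bucket_counts has len(bounds)+1 entries) carry everything above
    // the last bound, which maps cleanly onto le="+Inf".
    return bounds
}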

alolita added the processor/spanmetrics label Sep 30, 2021
@ankitnayan

> @bogdandrutu should the spanmetrics processor even create a Math.MaxFloat64 bucket as an overflow? Or can we assume everything else will correctly fall into the +Inf bucket for both prometheus and prometheusremotewrite exporter?

I tried this, and it doesn't work either: the last configured bucket value just takes the place of Math.MaxFloat64. Did you find a workaround?
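One plausible reading of "takes the place of Math.MaxFloat64" (an assumption, not something stated above): OTLP explicit-bounds histograms require bucket_counts to have exactly len(explicit_bounds)+1 entries, so removing only the MaxFloat64 bound without keeping the trailing overflow count aligned shifts which count each bound describes. A minimal Go check of that invariant, with hypothetical names:

package spanmetricsdemo

import "fmt"

// checkHistogramShape enforces the OTLP explicit-bounds invariant:
// bucket_counts must have exactly one more entry than explicit_bounds,
// the extra entry being the overflow (+Inf) bucket.
func checkHistogramShape(bounds []float64, counts []uint64) error {
    if len(counts) != len(bounds)+1 {
        return fmt.Errorf("got %d bucket counts for %d bounds; want %d",
            len(counts), len(bounds), len(bounds)+1)
    }
    return nil
}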

@Ashmita152

Hello everyone, we're facing the same issue. Were you able to find a workaround for this problem?

@luistilingue

I think this is related to #4975 as well.

@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Dec 19, 2022
@github-actions
Contributor

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned Mar 17, 2023