query: different results for rate function when not dedup or using implicit step interval #7341

Open
tizki opened this issue May 9, 2024 · 10 comments


@tizki
Contributor

tizki commented May 9, 2024

Thanos, Prometheus and Golang version used:
thanos query 0.35.0
thanos sidecar 0.35.0
prometheus 2.45.0

What happened:
When calculating rate for a metric, there are cases where the result is different depending on whether dedup is enabled or disabled.
In addition, the same difference appears when using a subquery [5m:1m] instead of a plain range [5m], although the default step is the same.

What you expected to happen:
I expect the rate results to be the same (or at least much closer).

How to reproduce it (as minimally and precisely as possible):
My environment has 2 Prometheus servers which are replicas of each other (scraping the same targets).
Sidecar external labels: fqdn=, monitor="master", tenant_id="default-tenant"

I start Thanos with those 2 servers as stores and replica-label=fqdn.

Then, when I run the following rate query in Thanos at a specific time:
rate(my_metric{_label_0="service",_label_1="requests",_label_2="timer", server="server00102"}[5m])
the results with dedup and without dedup are completely different.

The results are also different when querying with an explicit step (same results as without dedup):
rate(my_metric{_label_0="service",_label_1="requests",_label_2="timer", server="server00102"}[5m:1m])

[screenshot of the differing query results]

@GiedriusS
Member

Is it the same if you specify the timestamp? How could we reproduce this?

@MichaHoffmann
Contributor

FWIW: since subqueries (even with the same step) are a very different construct, it is not unimaginable that they return different results. As for the dedup difference, I don't know what's happening here.

@tizki
Contributor Author

tizki commented May 12, 2024

@GiedriusS I'm using a specific timestamp.
For me it reproduces every time in my environment. It's a counter metric that comes from 2 Prometheus replicas.

@MichaHoffmann I tried to debug it a bit; I suspect it's something in how dedup chooses which timestamps to use from the samples, but I got a little lost there.
If you can point me to where to look, I can continue debugging (or gather some relevant info to post here).

@tizki
Contributor Author

tizki commented May 12, 2024

I found something interesting. If I run

my_metric{_label_0="service",_label_1="requests",_label_2="timer", server="server00102"}[5m]

without deduplication at a specific timestamp, I get the following:
server1:

2583033 @1714665427.092
2588869 @1714665487.092
2594666 @1714665547.092
2600461 @1714665607.092
2606298 @1714665667.092

server2:

2583033 @1714665450.205
2588869 @1714665510.205
2594666 @1714665570.205
2600461 @1714665630.205
2606298 @1714665690.205

But when I do the same with deduplication, I get one more sample: the interval between the first and second samples is much smaller (23 seconds) than the rest (60 seconds), and the difference in values is 0, since the first and second samples are the same.

2583033 @1714665427.092
2583033 @1714665450.205 (23 seconds interval from previous, same value as previous)
2588869 @1714665510.205
2594666 @1714665570.205
2600461 @1714665630.205
2606298 @1714665690.205

It seems it takes the first sample from server1 and then the rest from server2.
Now it's clear why adding a step solves it (it removes the duplicated sample).
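
To illustrate the effect on rate, here is a rough back-of-the-envelope calculation (a plain increase/elapsed ratio, ignoring the extrapolation that Prometheus' rate() applies at the window edges; simpleRate is just an illustrative helper, not Thanos code):

package main

import "fmt"

// simpleRate approximates rate() as increase/elapsed; it ignores the
// extrapolation Prometheus performs at the window boundaries.
func simpleRate(ts, vs []float64) float64 {
	n := len(vs)
	return (vs[n-1] - vs[0]) / (ts[n-1] - ts[0])
}

func main() {
	// Samples from a single replica (server2), 60s apart.
	single := simpleRate(
		[]float64{1714665450.205, 1714665510.205, 1714665570.205, 1714665630.205, 1714665690.205},
		[]float64{2583033, 2588869, 2594666, 2600461, 2606298},
	)

	// Deduplicated samples: the extra leading sample from server1 stretches
	// the window by ~23s while adding zero increase in value.
	deduped := simpleRate(
		[]float64{1714665427.092, 1714665450.205, 1714665510.205, 1714665570.205, 1714665630.205, 1714665690.205},
		[]float64{2583033, 2583033, 2588869, 2594666, 2600461, 2606298},
	)

	fmt.Printf("single replica: %.1f/s, deduplicated: %.1f/s\n", single, deduped)
	// Prints roughly 96.9/s vs 88.4/s: same increase, longer elapsed time.
}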

any idea?

@tizki
Contributor Author

tizki commented May 15, 2024

I managed to reproduce this use case in a unit test.
I added the following test case to TestDedupSeriesSet in dedup/iter_test.go:

{
	name:      "My Regression test",
	isCounter: true,
	input: []series{
		{
			// Samples from the first replica (server1).
			lset: labels.Labels{{Name: "a", Value: "1"}},
			samples: []sample{
				{1714665427092, 2583033},
				{1714665487092, 2588869},
				{1714665547092, 2594666},
				{1714665607092, 2600461},
				{1714665667092, 2606298},
			},
		}, {
			// Samples from the second replica (server2), offset by ~23s.
			lset: labels.Labels{{Name: "a", Value: "1"}},
			samples: []sample{
				{1714665450205, 2583033},
				{1714665510205, 2588869},
				{1714665570205, 2594666},
				{1714665630205, 2600461},
				{1714665690205, 2606298},
			},
		},
	},
	exp: []series{
		{
			// Expected: the deduplicated series should follow the first replica only.
			lset:    labels.Labels{{Name: "a", Value: "1"}},
			samples: []sample{{1714665427092, 2583033}, {1714665487092, 2588869}, {1714665547092, 2594666}, {1714665607092, 2600461}, {1714665667092, 2606298}},
		},
	},
},

The first sample doesn't get deduplicated; the result takes the first sample from both sets.
I noticed that in dedup/iter.go, Next() applies an initial penalty to the first sample. The value of that penalty is 5000. If I increase it to 23113, my test passes; anything lower and the test fails.

I'm not sure if this is the expected behavior.
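
To make the failure mode concrete, here is a heavily simplified sketch of penalty-based deduplication. It is not the actual dedup/iter.go code (the real algorithm also adapts the penalty and prefers to stick with one iterator), but it shows why 23113 is exactly the threshold for the 23.113s replica offset above:

// sample mirrors the timestamp (ms) / value pairs used in the test above.
type sample struct {
	t int64
	v float64
}

// dedupMerge is an illustrative merge of two replica series: after emitting a
// sample, any sample from either replica that falls within `penalty`
// milliseconds of the last emitted timestamp is treated as a duplicate and
// dropped.
func dedupMerge(a, b []sample, penalty int64) []sample {
	out := []sample{}
	var lastT int64
	i, j := 0, 0
	for i < len(a) || j < len(b) {
		// Take the earlier of the two pending samples.
		var next sample
		if j >= len(b) || (i < len(a) && a[i].t <= b[j].t) {
			next, i = a[i], i+1
		} else {
			next, j = b[j], j+1
		}
		// Within the penalty window of the last emitted sample -> duplicate.
		if len(out) > 0 && next.t <= lastT+penalty {
			continue
		}
		out = append(out, next)
		lastT = next.t
	}
	return out
}

With penalty=5000, server2's first sample at 1714665450205 is more than 5s after server1's at 1714665427092, so it survives as the extra sample; with penalty=23113 (exactly the offset between the replicas) it is treated as a duplicate, which matches the threshold found with the test.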

@MichaHoffmann
Contributor

Thank you for the test and the debugging work! I'll look into this over the weekend.

@MichaHoffmann
Contributor

So, yeah; the scrape interval is large enough that the dedup algorithm thinks the second sample of the first iterator is actually missing and that we need to fill from the second iterator from then on. This might be a bit hard to solve, since we (right now) don't know the proper scrape interval a priori. I think we could maybe add a configurable flag like --deduplication.penalty-scrape-interval-hint=30s to prime the deduplication algorithm so that an initial gap of 30s does not constitute a missing sample and we can stick with the first iterator.

@GiedriusS @fpetkovski @yeya24 how does that sound?
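
For illustration, a minimal sketch of how such a hint could feed the initial penalty (assuming the behavior described above; neither this helper nor the flag exists in Thanos today, and the real wiring would look different):

// initialPenalty returns the starting dedup penalty in milliseconds, given the
// proposed scrape-interval hint; zero means "no hint", keep today's default.
func initialPenalty(scrapeIntervalHint time.Duration) int64 {
	if scrapeIntervalHint > 0 {
		return scrapeIntervalHint.Milliseconds()
	}
	return 5000 // the current hard-coded initial penalty mentioned above
}

With the proposed default of 30s the initial penalty would become 30000ms, which comfortably covers the 23.113s replica offset seen in this issue.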

@tizki
Contributor Author

tizki commented May 21, 2024

Perhaps I'm mixing things up, but shouldn't --query.default-step affect that penalty?
Also, I might be able to help with a PR once there is a design for the fix.

@rami-prilutsky

Hi.
This issue is causing us to return wrong values, which is very dangerous in our case. It puts our entire Thanos-based infrastructure at risk, as we can't rely on the results and act accordingly (we have automatic mechanisms that react to them).
The only workaround we're aware of is always passing the step in range functions, but that's not optimal for range queries. Can someone advise?

@tizki
Contributor Author

tizki commented Jul 4, 2024

Hi @MichaHoffmann @GiedriusS @fpetkovski @yeya24,
I added a PR with a suggested fix. Can you please take a look and let me know what you think?
Thanks!
