Question: is it ok to get the same x-b3-traceid for different requests without a root context? #52
Comments
I was looking through the zipkin tracer, and it looks like it's using a 32-bit random number to seed the 64-bit random number generator for IDs, which would probably explain why it clashes sometimes. I'll put in a fix so that this problem will be much less likely.
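The mechanism described above can be sketched in a few lines (a Python illustration only; the real tracer is C++, and `make_id_generator` is a hypothetical stand-in): with only 2**32 possible seeds, the birthday bound says roughly 2**16 independently started processes give an even chance that two of them share a seed, and generators that share a seed emit identical 64-bit id streams.

```python
import random

def make_id_generator(seed32):
    # Hypothetical sketch of the flawed scheme: a 64-bit id generator
    # seeded from a value truncated to 32 bits.
    rng = random.Random(seed32 & 0xFFFFFFFF)
    return lambda: rng.getrandbits(64)

# These two seeds differ only above bit 31, so truncation makes them equal...
gen_a = make_id_generator(0x1_DEADBEEF)
gen_b = make_id_generator(0x2_DEADBEEF)

# ...and the two generators now produce identical "random" trace ids.
assert gen_a() == gen_b()
```

Widening the seed to the generator's full state size removes this failure mode, which is what the fix below does.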
Brilliant, thank you @rnburn
@rnburn any update on this? (apologies for asking, but I was planning to release a new version of the ingress controller)
@aledbf - I'll try to get a patch in today
I released a version of the zipkin tracer with a fix: https://github.com/rnburn/zipkin-cpp-opentracing/releases/tag/v0.5.1
Building... @rnburn thank you for the quick fix!
@rnburn I get this error
I just opened #53 because I had to update opentracing-cpp to 1.5; zipkin-cpp doesn't compile with 1.4. Any idea?
It looks like a mismatch between something built with 1.4 and something else built with 1.5. Btw - for Jaeger, you have to use this commit. They haven't released a version for OT 1.5 yet.
@rnburn I still see the issue
NGINX_OPENTRACING_VERSION=0.6.0
@aledbf - are you sure there's no caching going on in that test? I'm trying something similar: I generated over 5k IDs but I'm not seeing any collisions.
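A duplicate check of the kind described above can be sketched as follows (a hypothetical harness, not @rnburn's actual test; `find_collisions` is a name introduced here):

```python
import random

def find_collisions(ids):
    # Return every id that appears more than once, in order of detection.
    seen, dups = set(), []
    for i in ids:
        if i in seen:
            dups.append(i)
        seen.add(i)
    return dups

# With a properly seeded 64-bit generator, 5k draws from a 2**64 space
# should essentially never collide (birthday bound: ~5000**2 / 2**65).
rng = random.Random()
ids = [rng.getrandbits(64) for _ in range(5000)]
assert find_collisions(ids) == []
```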
I am creating a new k8s cluster and compiling a new image just to be sure.
I mean caching such that the same response is delivered. I took this example https://github.com/opentracing-contrib/nginx-opentracing/tree/master/example/trivial/zipkin, then modified https://github.com/opentracing-contrib/nginx-opentracing/blob/master/example/trivial/zipkin/go/server.go with:

```diff
@@ -30,6 +30,11 @@ func handler(w http.ResponseWriter, r *http.Request) {
 		opentracing.ChildOf(wireContext))
 	defer span.Finish()
 	tm := time.Now().Format(time.RFC1123)
+	for name, headers := range r.Header {
+		for _, h := range headers {
+			w.Write([]byte(fmt.Sprintf("%v: %v\n", name, h)))
+		}
+	}
 	w.Write([]byte("The time is " + tm))
 }
```

After starting with docker-compose and running it, I don't see any clashes.
@rnburn this is strange, I still see duplicates. This is the config file https://gist.github.com/aledbf/d65841c037e29725872c5b4e9dd2cae5#file-nginx-conf-L392
Ok, this is an example of two requests where nginx returns different
Edit: I cannot find the referenced
This is a nginx trace (not sure if this is useful)
@aledbf - By default, nginx-opentracing creates a span representing the processing of the entire request, plus a span for each location block associated with the request. That's useful when a request moves through multiple location blocks, but redundant when there's only one. I think the parent span id is the id of the request-level span. You can turn off location block spans with
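If the directive referred to above is the one I believe it is (treat the exact name as an assumption and check the module's README), disabling the per-location spans would look like:

```nginx
http {
    # Keep the request-level span, drop the per-location-block spans.
    opentracing_trace_locations off;
}
```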
@rnburn any idea how I can track down the issue with the duplicated ids?
@aledbf - do the requests with the same trace-ids have the same x-request-id header?
No. There's no repetition in the x-request-id header.
Can you reproduce it with a smaller docker-compose setup?
@rnburn any ideas what may be causing this? I really miss having tracing enabled, but I've had to disable it for now due to the collisions.
@Stono - I'm not sure; I wasn't able to reproduce anything similar. I set up a docker-compose example that uses ingress-nginx's image: https://github.com/rnburn/example/tree/master/nginx-duplicate/zipkin. You can run it with the included docker-compose file; if you can modify it to reproduce the duplicates, that would help.
@aledbf can you pass over the config with which you were able to reproduce the duplicates?
@aledbf @rnburn some more discovery... With 1 worker:
With 12 workers:
@rnburn using https://github.com/rnburn/zipkin-cpp-opentracing/compare/master...aledbf:seed?expand=1 the issue is gone (though I am not sure of the performance impact of this change).
@aledbf - yeah, it looks like the thread_local storage is getting initialized in the master process but not reinitialized after it forks the workers. Unfortunately, changing it like that is going to be costly, since it will reseed the random number generator on every call to the function. But I'll do some research to see if there's a better way.
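The fork behaviour described above can be reproduced without nginx. This is a Python sketch that simulates fork by copying generator state (which is exactly what `fork()` does to process memory); the real bug lives in C++ thread_local storage, so the names here are illustrative only:

```python
import random

# The "master process" seeds its RNG exactly once at startup...
master = random.Random(12345)
inherited_state = master.getstate()

# ...then fork() copies that state byte-for-byte into every worker.
# Since no worker reseeds after the fork, they all replay the same sequence.
worker_a, worker_b = random.Random(), random.Random()
worker_a.setstate(inherited_state)
worker_b.setstate(inherited_state)

# Both workers hand out the same "random" trace id.
assert worker_a.getrandbits(64) == worker_b.getrandbits(64)
```

The cheap fix is to reseed once per worker after the fork (e.g. lazily on first use in the new process), rather than reseeding on every call.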
@aledbf - Can you try the version from this PR: rnburn/zipkin-cpp-opentracing#27?
@rnburn it works as expected. Thanks for the quick fix!
The 0.19.0 upgrade contains a fix for an issue where multiple requests were grouped into the same trace. See https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md, kubernetes/ingress-nginx#2955 and opentracing-contrib/nginx-opentracing#52.