Evaluation on COCO dataset #33
Hello! Thank you for your test. This started as a fork of https://github.com/jaehyunnn/ViTPose_pytorch just to improve the inference pipeline; could you check whether you obtain similar results with that implementation? Also, if you don't mind sharing the code you use for eval, I won't have time in the next couple of weeks but I could do some tests. Can you also check the mAP you get with the detector, or try running with ground-truth bboxes? They report "Using detection results from a detector that obtains 56 mAP on person". Thanks
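The ground-truth-bbox check suggested above can be sketched as follows: extract GT person boxes from a COCO annotation file so they can be fed to the pose model in place of the yolov8 detections. This is a minimal sketch, not the repo's actual pipeline; the function names are hypothetical, and only the COCO `[x, y, width, height]` box layout and `category_id` 1 = person come from the annotation format itself.

```python
import json

def coco_xywh_to_xyxy(bbox):
    """COCO annotations store boxes as [x, y, width, height];
    convert to the [x1, y1, x2, y2] form most pipelines expect."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def person_boxes_from_coco(coco):
    """Collect ground-truth person bboxes per image_id from a loaded
    COCO annotation dict (category_id 1 is 'person'; crowd regions
    are skipped, matching the usual keypoint-eval convention)."""
    boxes = {}
    for ann in coco["annotations"]:
        if ann["category_id"] == 1 and not ann.get("iscrowd", 0):
            boxes.setdefault(ann["image_id"], []).append(
                coco_xywh_to_xyxy(ann["bbox"]))
    return boxes

def groundtruth_person_boxes(ann_file):
    """Same, but reading an annotation file such as
    person_keypoints_val2017.json from disk."""
    with open(ann_file) as f:
        return person_boxes_from_coco(json.load(f))

# Demo on an inline annotation fragment instead of the real val file:
sample = {"annotations": [
    {"image_id": 42, "category_id": 1, "iscrowd": 0,
     "bbox": [10.0, 20.0, 30.0, 40.0]},
    {"image_id": 42, "category_id": 18, "iscrowd": 0,
     "bbox": [0.0, 0.0, 5.0, 5.0]},  # non-person, ignored
]}
print(person_boxes_from_coco(sample))  # {42: [[10.0, 20.0, 40.0, 60.0]]}
```

Running the pose model on these boxes instead of detector output isolates keypoint-head quality from detector quality, which is the comparison the paper's "56 mAP on person" note is about.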
Hi, I did some checks but I cannot give you an answer yet. I found that yolov8 has problems on MPS, in case you are running the evaluation on a Mac. Updating the Ultralytics package solves the problem (I updated the requirements).
@JunkyByte Thank you for your response!
@JunkyByte I evaluated mAP@0.5:0.95, using a detector confidence threshold of 0.5 to filter out low-confidence detection bboxes. The bbox detections produced by yolov8 could therefore be the main reason behind the low scores in this pipeline.
@omkaar718 Thank you very much for inspecting this. I'm busy these days, but I checked your PR and will merge it in the next few days, so thanks again. Applying the models to videos I see qualitatively good results, so it may indeed be that yolo does not work well on the COCO val images. I will get back to you :) Have a nice day!
The results of using this implementation on the COCO val dataset seem to be considerably lower than those reported in the paper.