mAP Computation in test.py #5

Closed · glenn-jocher opened this issue on Sep 5, 2018 · 0 comments
Labels: duplicate (this issue or pull request already exists)

@glenn-jocher (Member) commented:
COCO2014 mAP computed with the official YOLOv3 weights matches the expected value of 0.58 (the same as darknet), but mAP computed on trained checkpoints appears higher than it should be. In particular, large numbers of false positives do not seem to penalize mAP.

For example, validation image 2 should contain 5 objects (4 people and 1 baseball bat), yet at epoch 37 I see ~140 detections. Precision and recall look like this:

[figure_1: precision and recall]

The precision-recall curve looks like this:

[figure_1: precision-recall curve]

AP for this image is then calculated as 0.786, which is strangely high for 4 TPs and ~140 FPs.

```
AP = compute_ap(recall, precision)
Out[66]: 0.78596
```
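For context, here is a minimal sketch of the standard all-point interpolated AP (the py-faster-rcnn-style computation that `compute_ap` appears to follow, judging by the value above). The ranking in the usage lines is hypothetical: it assumes the 4 TPs outscore all 140 FPs on this image.

```python
import numpy as np

def compute_ap(recall, precision):
    """All-point interpolated AP over a confidence-ranked detection list."""
    # Sentinels so the envelope and the integration cover [0, 1]
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Upper envelope: precision at recall r becomes max precision at recall >= r
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    # Sum the area only where recall actually increases
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# Hypothetical ranking: 5 ground-truth objects, the 4 TPs scored above
# all 140 FPs. Recall plateaus at 0.8 and never increases again.
tp = np.array([1, 1, 1, 1] + [0] * 140, dtype=float)
recall = np.cumsum(tp) / 5
precision = np.cumsum(tp) / np.arange(1, tp.size + 1)
print(compute_ap(recall, precision))  # 0.8: the 140 trailing FPs cost nothing
```

Under the interpolation, detections ranked after the last true positive add no recall and therefore no area, so the trailing false positives leave AP untouched; that is consistent with the 0.786 observed above.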

Lastly, I believe AP is supposed to be computed separately for each class and then averaged to obtain mAP, but here all classes appear to be pooled into a single ranked list.
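A sketch of the per-class accumulation, reusing `compute_ap` from the sketch above; the `preds` and `n_gt_per_class` structures are hypothetical containers for illustration, not this repo's actual ones. AP is computed independently per class over the whole validation set, then averaged at the end:

```python
import numpy as np

def mean_average_precision(preds, n_gt_per_class):
    """preds: iterable of (class_id, confidence, is_tp) tuples collected
    over the whole validation set; n_gt_per_class: dict mapping class_id
    to its ground-truth count. Both structures are hypothetical."""
    aps = []
    for c, n_gt in n_gt_per_class.items():
        if n_gt == 0:
            continue  # classes absent from the ground truth are skipped
        # Rank this class's detections by descending confidence
        ranked = sorted((p for p in preds if p[0] == c), key=lambda p: -p[1])
        tp = np.array([p[2] for p in ranked], dtype=float)
        if tp.size == 0:
            aps.append(0.0)  # class present in GT but never detected
            continue
        recall = np.cumsum(tp) / n_gt
        precision = np.cumsum(tp) / np.arange(1, tp.size + 1)
        aps.append(compute_ap(recall, precision))  # from the sketch above
    return float(np.mean(aps))
```

Pooling all classes into one list inflates the result, because confident detections of easy classes pad the high-precision head of the ranking for every class at once.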
