COCO2014 mAP computed with the official YOLOv3 weights matches the expected value of 0.58 (same as darknet), but mAP computed on trained checkpoints seems higher than it should be. In particular, a large number of false positives does not seem to penalize mAP.
For example, validation image 2 should contain 4 people and 1 baseball bat, yet at epoch 37 I see ~140 detected objects. Precision and recall look like this: [precision and recall plots not included]
The precision-recall curve looks like this: [PR curve plot not included]
AP for this image then comes out to 0.78, which is strangely high for 4 TPs and ~140 FPs.
```python
AP = compute_ap(recall, precision)
# Out[66]: 0.78596
```
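This behavior is consistent with all-point interpolated AP. Below is a minimal sketch (my own reconstruction, assuming `compute_ap` follows the usual VOC-style precision-envelope interpolation, and that the 4 TPs rank above the ~140 FPs by confidence; the 5 ground-truth objects are the 4 people + 1 bat):

```python
import numpy as np

def compute_ap(recall, precision):
    # All-point interpolated AP: append sentinel values, take the
    # precision envelope from the right, then integrate precision
    # over the recall steps.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
    i = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])

# Detections sorted by confidence: 4 TPs first, then ~140 FPs.
tp = np.array([1.0] * 4 + [0.0] * 140)
recall = np.cumsum(tp) / 5                       # 5 ground-truth objects
precision = np.cumsum(tp) / np.arange(1, 145)    # running precision
print(compute_ap(recall, precision))             # ≈ 0.8
```

Because all the FPs fall after the last TP in the confidence ranking, the precision envelope over the achieved recall range stays at 1.0, and AP comes out near 0.8 no matter how many low-confidence FPs follow. AP only drops when FPs outrank TPs, which would explain why ~140 FPs barely move the number.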
Lastly, I believe mAP is supposed to be calculated per class, but here all the classes seem to be combined into a single ranking.
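For comparison, here is a hedged sketch of per-class mAP (my own illustration, not the repo's code; the `(class_id, confidence, is_tp)` detection format is a hypothetical one): detections are grouped by class across the validation set, AP is computed per class, and the per-class APs are averaged, rather than pooling all classes into one ranking.

```python
import numpy as np
from collections import defaultdict

def compute_ap(recall, precision):
    # Standard all-point interpolated AP (precision envelope).
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
    i = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])

def mean_ap(detections, n_gt_per_class):
    # detections: iterable of (class_id, confidence, is_tp) tuples
    # over the whole validation set (hypothetical format).
    by_class = defaultdict(list)
    for cls, conf, is_tp in detections:
        by_class[cls].append((conf, is_tp))
    aps = []
    for cls, dets in by_class.items():
        dets.sort(key=lambda d: -d[0])           # rank by confidence
        tp = np.array([d[1] for d in dets], dtype=float)
        recall = np.cumsum(tp) / n_gt_per_class[cls]
        precision = np.cumsum(tp) / np.arange(1, len(tp) + 1)
        aps.append(compute_ap(recall, precision))
    return float(np.mean(aps))

# Toy check: one perfect class, one all-FP class -> mAP = 0.5.
print(mean_ap([(0, 0.9, 1), (1, 0.8, 0)], {0: 1, 1: 1}))  # 0.5
```

Pooling classes instead would let confident detections of one class mask misses in another, which may be part of the discrepancy reported above.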