mAP different between YOLO and COCO #1191

Closed
tangzhe1995 opened this issue Oct 22, 2020 · 6 comments
Labels
question Further information is requested

Comments

@tangzhe1995

❔ Question

Hi, I am confused that the mAP results differ between YOLOv5's test.py and COCO's pycocotools. Can anyone help with this question?

Namespace(augment=False, batch_size=32, conf_thres=0.001, data='./data/coco.yaml', device='', img_size=640, iou_thres=0.65, save_json=True, save_txt=False, single_cls=False, task='val', verbose=False, weights=['yolov5x.pt'])
Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB)

Fusing layers... Model Summary: 284 layers, 8.89222e+07 parameters, 0 gradients
Scanning labels ../coco/labels/val2017.cache (4952 found, 0 missing, 48 empty, 0 duplicate, for 5000 images): 5000it [00:00, 17761.74it/s]
               Class      Images     Targets           P           R      mAP@.5  mAP@.5:.95: 100% 157/157 [02:34<00:00,  1.02it/s]
                 all       5e+03    3.63e+04       0.409       0.754       0.669       0.476
Speed: 23.6/1.6/25.2 ms inference/NMS/total per 640x640 image at batch-size 32

COCO mAP with pycocotools... saving detections_val2017__results.json...
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.492
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.676
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.534
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.318
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.541
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.633
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.376
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.616
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.670
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.493
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.723
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.812
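
For reference, the Namespace above corresponds roughly to the following test.py invocation (flag names mirror the argparse options shown in the log and may differ slightly between YOLOv5 versions):

python test.py --data ./data/coco.yaml --weights yolov5x.pt --img-size 640 --batch-size 32 --conf-thres 0.001 --iou-thres 0.65 --task val --save-json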

Thank you.


tangzhe1995 added the question (Further information is requested) label Oct 22, 2020
@github-actions
Contributor

github-actions bot commented Oct 22, 2020

Hello @tangzhe1995, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in real time.
  • Edge AI integrated into custom iOS and Android apps for real-time 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model export to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher
Member

@tangzhe1995 pycocotools is the official metric of the COCO dataset, i.e. the 0.492 above.

For reasons that are not entirely clear, our in-house mAP calculation typically comes out about 1-2% below the pycocotools result, i.e. the 0.476 above.

@tangzhe1995
Author

@glenn-jocher Thanks for the quick reply. Got it.

@glenn-jocher
Member

@tangzhe1995 BTW, a recent PR closed this gap a bit, see #1206

@Mary14-design

Can you share the code you used to compute the COCO mAP reported on your official page? @glenn-jocher

@glenn-jocher
Member

@Mary14-design, we're glad you're interested in the details of our evaluation! The mAP reported on the official page was obtained by running the test.py script provided in the YOLOv5 repository, then evaluating the generated results file with the COCO API. Here's a quick overview:

  1. Use test.py to evaluate your dataset and save the results:
python test.py --data coco.yaml --weights yolov5x.pt --save-json
  2. This script will save a .json file in the yolov5/runs/test/ directory.
  3. The saved .json file is then evaluated using the COCO API for the official mAP scores (see the sketch below).

This process ensures adherence to COCO's evaluation standards. Hopefully, this clarifies your query! Let me know if you have further questions. 😊
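
For step 3, here is a minimal sketch of the pycocotools evaluation, assuming pycocotools is installed; the annotation path is an assumption, and the results filename is taken from the log earlier in this thread:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Assumed paths: ground-truth COCO annotations and the JSON written by
# test.py --save-json (filename as printed in the log above).
anno_json = '../coco/annotations/instances_val2017.json'
pred_json = 'detections_val2017__results.json'

coco_gt = COCO(anno_json)                       # load ground-truth annotations
coco_dt = coco_gt.loadRes(pred_json)            # load detections as a COCO results object
coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')  # bounding-box evaluation
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                           # prints the AP/AR table shown above

The first AP line printed by summarize() (IoU=0.50:0.95) is the value compared against test.py's in-house mAP@.5:.95, i.e. 0.492 vs 0.476 in this thread.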
