
mAP when testing is very different with pycocotools #1281

Closed
itruonghai opened this issue Nov 4, 2020 · 15 comments
Labels: question (Further information is requested), Stale

@itruonghai

I tried to test my model with
!python test.py --weights /content/drive/My\ Drive/0TruongHai/TA_YOLOV5/exp0/weights/best.pt --data mydata.yaml --img 800 --task test --verbose --save-json
but the mAP@.5:.95 results from the default evaluator and from pycocotools are very different.

Namespace(augment=False, batch_size=32, conf_thres=0.005, data='./data/coco128.yaml', device='', img_size=800, iou_thres=0.65, save_conf=False, save_dir='runs/test', save_json=True, save_txt=False, single_cls=False, task='test', verbose=True, weights=['/content/drive/My Drive/0TruongHai/TA_YOLOV5/exp0/weights/best.pt'])
Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB)

Fusing layers... 
Model Summary: 284 layers, 8.8431e+07 parameters, 0 gradients
Scanning labels /content/dataset/labels/test.cache (450 found, 0 missing, 0 empty, 10 duplicate, for 450 images): 450it [00:00, 10969.01it/s]
               Class      Images     Targets           P           R      mAP@.5  mAP@.5:.95: 100% 15/15 [00:15<00:00,  1.03s/it]
                 all         450    1.11e+03        0.54       0.689       0.722       0.476 <---
                   1         450         153       0.502        0.68       0.731       0.459
                   2         450         203       0.587       0.818       0.842       0.581
                   3         450          57       0.763       0.684       0.745       0.502
                   4         450         106       0.421       0.726       0.701       0.505
                   5         450         185       0.434       0.573        0.59       0.382
                   6         450         308       0.642       0.662        0.73       0.451
                   7         450         102       0.431       0.676       0.718       0.454
Speed: 21.0/1.9/22.9 ms inference/NMS/total per 800x800 image at batch-size 32

COCO mAP with pycocotools... saving runs/test/detections_val2017_best_results.json...
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.02s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.94s).
Accumulating evaluation results...
DONE (t=0.21s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.309 <---
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.622
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.267
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.244
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.627
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.606
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.295
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.369
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.371
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.312
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.675
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.610
Results saved to runs/test

For comparison: 0.309 from pycocotools vs 0.476 from the default evaluator.

itruonghai added the question label on Nov 4, 2020
@github-actions
Contributor

github-actions bot commented Nov 4, 2020

Hello @itruonghai, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@ZwNSW

ZwNSW commented Nov 4, 2020

@itruonghai Hello, how did you get the AR values printed to the terminal? I tried for a long time without figuring it out. Looking forward to your reply. Thank you!

@itruonghai
Author

itruonghai commented Nov 4, 2020

> @itruonghai Hello, how did you get the AR values printed to the terminal? I tried for a long time without figuring it out. Looking forward to your reply. Thank you!

Just pass --save-json and test.py will run pycocotools.
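
If you want to reproduce the same AP/AR printout outside of test.py, a minimal standalone pycocotools sketch looks roughly like this (the annotation and detection JSON paths below are placeholders for your own files):

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Placeholder paths: point these at your own ground-truth annotation file
    # and at the detections JSON that test.py writes with --save-json
    anno_json = 'annotations/instances_val2017.json'
    pred_json = 'runs/test/detections_val2017_best_results.json'

    anno = COCO(anno_json)                    # ground-truth annotations
    pred = anno.loadRes(pred_json)            # detection results
    coco_eval = COCOeval(anno, pred, 'bbox')  # bbox evaluation
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()                     # prints the AP/AR table shown above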

@ZwNSW

ZwNSW commented Nov 4, 2020

@itruonghai But I ran into this problem:

    COCO mAP with pycocotools... saving detections_val2017__results.json...
    ERROR: pycocotools unable to run: invalid literal for int() with base 10: 'Image_20200930140952222'

Can you help me?

@itruonghai
Author

> @itruonghai But I ran into this problem:
> COCO mAP with pycocotools... saving detections_val2017__results.json...
> ERROR: pycocotools unable to run: invalid literal for int() with base 10: 'Image_20200930140952222'
> Can you help me?

Maybe try changing the image file names?

@ZwNSW

ZwNSW commented Nov 4, 2020

@itruonghai Changing the image names didn't help. Are you using the COCO dataset?

@itruonghai
Author

> @itruonghai Changing the image names didn't help. Are you using the COCO dataset?

The image names should be numeric IDs only, with no string part.
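
For context, the detections JSON built by test.py uses an integer COCO image_id derived from the image filename stem, which is roughly why a non-numeric name fails (a simplified sketch, not the exact source):

    from pathlib import Path

    def image_id_from_path(path):
        # A purely numeric stem such as '000000397133' converts cleanly;
        # a stem like 'Image_20200930140952222' raises
        # ValueError: invalid literal for int() with base 10
        return int(Path(path).stem)

    print(image_id_from_path('images/000000397133.jpg'))  # -> 397133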

@ZwNSW

ZwNSW commented Nov 4, 2020

@itruonghai Now I encountered this problem:

    COCO mAP with pycocotools... saving runs\test\detections_val2017_best_results.json...
    ERROR: pycocotools unable to run: list index out of range

@itruonghai
Author

Try changing the path to the annotation JSON file in test.py.
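
For reference, the "list index out of range" most likely comes from the hard-coded annotation lookup in test.py, which does roughly the following (the exact path varies by version); if the glob matches nothing, indexing [0] raises that error, so the path must point at your own annotation JSON:

    import glob
    from pycocotools.coco import COCO

    # Roughly what test.py does: look for the COCO ground-truth annotations
    # relative to the repo. If no file matches, [0] raises
    # IndexError: list index out of range
    anno_files = glob.glob('../coco/annotations/instances_val*.json')
    anno = COCO(anno_files[0])

    # For a custom dataset, point directly at your own annotation file instead
    # (hypothetical path):
    # anno = COCO('/content/dataset/annotations/instances_test.json')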

@ZwNSW

ZwNSW commented Nov 4, 2020

@itruonghai Sorry, I don't quite understand what you mean. Can you explain in more detail?

@glenn-jocher
Member

@itruonghai pycocotools is only intended for COCO dataset mAP. With 7 classes you don't appear to meet that constraint.

@glenn-jocher
Member

@itruonghai upon review, I think this may be related to a recent update to our mAP code in #1206

Can you try to update these two lines in utils/general.py to see if the pycocotools discrepancy remains?

yolov5/utils/general.py

Lines 331 to 334 in 15a1060

    # Append sentinel values to beginning and end
    mrec = recall  # np.concatenate(([0.], recall, [recall[-1] + 1E-3]))
    mpre = precision  # np.concatenate(([0.], precision, [0.]))

You should update them to this:

    # Append sentinel values to beginning and end
    mrec = np.concatenate(([0.], recall, [recall[-1] + 1E-3]))
    mpre = np.concatenate(([0.], precision, [0.]))

Let me know the results after this test please!
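
For anyone following along, here is a minimal sketch of how those sentinel values feed into the AP calculation (illustrative only, simplified from the actual utils/general.py code):

    import numpy as np

    def compute_ap_sketch(recall, precision):
        # Sentinel values so the curve starts at recall 0 and extends just
        # past the last recall point
        mrec = np.concatenate(([0.], recall, [recall[-1] + 1e-3]))
        mpre = np.concatenate(([0.], precision, [0.]))

        # Precision envelope: make precision monotonically non-increasing
        mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))

        # COCO-style 101-point interpolation, integrated with the trapezoidal rule
        x = np.linspace(0, 1, 101)
        return np.trapz(np.interp(x, mrec, mpre), x)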

@summit1993

summit1993 commented Nov 6, 2020

> (quoting @itruonghai's original report and logs from the issue description above)

There may be a bug in test.py: line 171 shifts the box xy center to the top-left corner, which I think differs from the COCO box format; deleting that line may fix it.
(screenshot of the relevant lines in test.py)

@itruonghai
Author

@summit1993 The COCO format in the repo is correct. If I change it the way you suggest, the boxes no longer match the COCO format and the result drops to 0.
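
For clarity, the COCO detections JSON expects boxes as [x_min, y_min, width, height] in pixels, so shifting the xy center to the top-left corner is what makes the saved boxes match that format. A minimal sketch of the conversion (simplified, not the exact test.py code):

    import numpy as np

    def xywh_center_to_coco(boxes):
        # Convert [x_center, y_center, w, h] to COCO [x_min, y_min, w, h]:
        # this is the "xy center to top-left corner" shift discussed above
        b = np.asarray(boxes, dtype=float).copy()
        b[:, :2] -= b[:, 2:] / 2
        return b

    print(xywh_center_to_coco([[50, 50, 20, 10]]))  # -> [[40. 45. 20. 10.]]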

@github-actions
Contributor

github-actions bot commented Dec 8, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
