
Obtain the images where there is a TP, FP or FN during the test. #3678

Closed
dariogonle opened this issue Jun 18, 2021 · 4 comments · Fixed by #5727
Labels
question Further information is requested

Comments

dariogonle commented Jun 18, 2021

❔Question

Is it possible to show the images where I have a TP, FP or FN during the test? I mean, I'd like to know whether in image 001.png I have a TP, FP or FN.

For example, if I test with 001.png, 002.png and 003.png, I'd like to get something similar to:
TP: 001.png
FP: 002.png
FN: 003.png

Notice that it is possible to have a TP and a FP in the same image.

Additional context

@dariogonle added the question (Further information is requested) label Jun 18, 2021
@glenn-jocher
Member

@dariogonle during normal testing there may be hundreds or thousands of FPs per image (on every image).

@dariogonle
Author

dariogonle commented Jun 18, 2021

OK, you are right. Is it possible to get the images where I have at least one TP or one FN?

The most interesting thing for me would be to know whether there is an object in the GT that the model is not detecting.

@glenn-jocher
Member

@dariogonle you'd have to insert custom code into test.py for this.

Typically, if TP and FN stats are reported, they are reported on a per-class basis like the rest of the metrics. At this time, no metrics are reported on a per-image basis.
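For readers who do want to insert such custom code, the kind of per-image TP/FN counting being asked about can be sketched as below. This is not YOLOv5's actual test.py logic; the box format (x1, y1, x2, y2), the greedy matching, and the 0.5 IoU threshold are all assumptions for illustration.

```python
# Hedged sketch: count TPs and FNs for a single image by greedily
# matching each ground-truth box to at most one prediction by IoU.
# NOT the official YOLOv5 implementation; format/threshold are assumed.

def box_iou(a, b):
    """IoU between two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def image_tp_fn(gt_boxes, pred_boxes, iou_thres=0.5):
    """Return (tp, fn) for one image; each prediction matches at most one GT."""
    matched = set()  # indices of predictions already used
    tp = 0
    for g in gt_boxes:
        for i, p in enumerate(pred_boxes):
            if i not in matched and box_iou(g, p) >= iou_thres:
                matched.add(i)
                tp += 1
                break
    fn = len(gt_boxes) - tp  # unmatched GT boxes are missed detections
    return tp, fn
```

An image with fn > 0 is one where the GT contains an object the model did not detect, which is the case the question is about.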

@glenn-jocher
Member

@dariogonle good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

TP = Recall * Labels
FP = TP / Precision - TP
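
The two formulas follow from the definitions recall = TP / labels and precision = TP / (TP + FP). A minimal numeric sketch of that arithmetic, with invented per-class values (the labels, precision, and recall numbers here are example assumptions, not real YOLOv5 output):

```python
# Hedged sketch of the PR's arithmetic: recover per-class TP/FP counts
# from labels (GT count), precision, and recall. All values are invented.
import numpy as np

labels = np.array([100, 50])   # GT objects per class (assumed example)
r = np.array([0.8, 0.6])       # per-class recall (assumed example)
p = np.array([0.9, 0.75])      # per-class precision (assumed example)

tp = (r * labels).round()      # TP = Recall * Labels
fp = (tp / p - tp).round()     # FP = TP / Precision - TP
```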

These TP and FP per-class vectors are left in val.py for users to access if they want:

yolov5/val.py, line 240 at 36d12a5:

tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks on Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher glenn-jocher linked a pull request Nov 20, 2021 that will close this issue