
Is there a way to generate the number of TP/TN/FP/FN for each test image using the detect.py script? #5713

Closed
1 task done
ib124 opened this issue Nov 18, 2021 · 8 comments · Fixed by #5727
Labels
question Further information is requested

Comments

@ib124

ib124 commented Nov 18, 2021

Search before asking

Question

I am doing some detection accuracy analysis, and I am looking to model how different confidence settings (e.g. --conf 0.6) affect the number of true positive/false positive detections for my data. Is there any way the detect.py script can be modified to list the TP/FP/TN/FN values for each class of each image? I have a custom model trained on multiple classes, but I only want these values for one class.

Note: I know that the val.py script graphs metrics such as the F1, Precision-Recall curve, etc., but I'm just trying to get some of the raw values for my individual calculations.

Additional

No response

@ib124 ib124 added the question Further information is requested label Nov 18, 2021
@glenn-jocher
Member

@ib124 no of course not. Where do you expect TP values to be produced in detect.py exactly? Where are the labels used in your imaginary detect.py coming from?

@ib124
Author

ib124 commented Nov 19, 2021

@glenn-jocher In that case, is there a way to print out these values using the val.py script?

@glenn-jocher
Member

@ib124 yes, that's a possibility!

Several users have asked for this, but it isn't enabled by default. You can access these values directly in the code here; there is one FP/TP vector per IoU threshold in 0.5:0.05:0.95:

yolov5/utils/metrics.py

Lines 54 to 56 in eb51ffd

# Accumulate FPs and TPs
fpc = (1 - tp[i]).cumsum(0)
tpc = tp[i].cumsum(0)
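As a sketch of what those two lines compute (assuming, as in the YOLOv5 code, that tp is a boolean vector of detections sorted by descending confidence), the cumulative sums give running TP/FP counts at each confidence cutoff:

```python
import numpy as np

# Detections sorted by descending confidence; True = matched a ground-truth box
tp = np.array([True, True, False, True, False])

# Accumulate FPs and TPs: at each cutoff, how many FPs/TPs so far
fpc = (1 - tp).cumsum(0)  # [0, 0, 1, 1, 2]
tpc = tp.cumsum(0)        # [1, 2, 2, 3, 3]
```

Dividing tpc by the total label count and tpc / (tpc + fpc) then yields the recall and precision curves used elsewhere in metrics.py.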

@ib124
Author

ib124 commented Nov 19, 2021

That is what I needed. Thank you!

@glenn-jocher
Member

glenn-jocher commented Nov 19, 2021

@ib124 another reason we don't display TP and FP is that the displayed information is a sufficient statistic to reconstruct them, so displaying them would be redundant. Anyone can reconstruct TP and FP from the provided metrics, same with F1. See https://en.wikipedia.org/wiki/Precision_and_recall

               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95
                 all        128        929      0.577      0.414       0.46      0.279
              person        128        254      0.723      0.531      0.601       0.35

For person class:

TP = Recall * Labels = 135
FP = TP / Precision - TP = 52
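That arithmetic can be checked directly in a couple of lines (a sketch using the person row from the table above):

```python
# Values from the 'person' row: Labels, P, R
labels, precision, recall = 254, 0.723, 0.531

tp = round(recall * labels)      # 0.531 * 254 ≈ 135
fp = round(tp / precision - tp)  # 135 / 0.723 - 135 ≈ 52

print(tp, fp)  # 135 52
```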

@glenn-jocher glenn-jocher removed the TODO label Nov 19, 2021
@glenn-jocher glenn-jocher linked a pull request Nov 20, 2021 that will close this issue
@glenn-jocher
Member

@ib124 good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

TP = Recall * Labels
FP = TP / Precision - TP

These TP and FP per-class vectors are left in val.py for users to access if they want:

yolov5/val.py

Line 240 in 36d12a5

tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
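For users who want the per-class numbers printed, a minimal sketch of what could go right after that line (the sample values and the formatting are assumptions; in val.py, tp and fp are per-class arrays aligned with ap_class, and names maps class indices to class names):

```python
# Hypothetical stand-ins for the ap_per_class() return values:
tp, fp = [135, 40], [52, 10]       # per-class TP/FP at max-F1 confidence
ap_class = [0, 5]                  # indices of classes present in the stats
names = {0: 'person', 5: 'bus'}    # class-index -> name mapping

for i, c in enumerate(ap_class):
    print(f'{names[c]:>10}: TP={tp[i]} FP={fp[i]}')
```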

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks on Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@ib124
Author

ib124 commented Nov 22, 2021

@glenn-jocher This is awesome, thank you! I greatly appreciate this.

@glenn-jocher
Member

@ib124 you're welcome! One thing to note is that these TP and FP values are computed at max-F1 confidence (same as P and R results):

yolov5/utils/metrics.py

Lines 82 to 87 in 7a39803

i = f1.mean(0).argmax() # max F1 index
p, r, f1 = p[:, i], r[:, i], f1[:, i]
tp = (r * nt).round() # true positives
fp = (tp / (p + eps) - tp).round() # false positives
return tp, fp, p, r, f1, ap, unique_classes.astype('int32')
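The max-F1 selection can be illustrated in isolation (a sketch with made-up numbers; as in metrics.py, p, r, and f1 have one row per class and one column per confidence point):

```python
import numpy as np

eps = 1e-16
nt = np.array([254.0])                    # ground-truth label count per class
p = np.array([[0.9, 0.8, 0.723, 0.4]])    # precision at each confidence point
r = np.array([[0.3, 0.45, 0.531, 0.7]])   # recall at each confidence point
f1 = 2 * p * r / (p + r + eps)

i = f1.mean(0).argmax()                   # max F1 index (here: column 2)
p, r, f1 = p[:, i], r[:, i], f1[:, i]
tp = (r * nt).round()                     # true positives at that confidence
fp = (tp / (p + eps) - tp).round()        # false positives at that confidence
print(int(tp[0]), int(fp[0]))             # 135 52
```

So the reported P and R (and hence the reconstructed TP and FP) all correspond to the single confidence threshold that maximizes mean F1, not to whatever --conf value was passed on the command line.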
