
Confidence Threshold Effect on Results #7

Closed
glenn-jocher opened this issue Nov 22, 2020 · 8 comments

Comments

@glenn-jocher

Hi, I have this confusion matrix implementation integrated into our YOLOv5 PR here:
ultralytics/yolov5#1474

I noticed during testing that the results depend significantly on the confidence threshold used. I ran an experiment across three common confidence thresholds, but I'm not sure what conclusion to draw from the results.

conf 0.001: [confusion matrix image]

conf 0.25: [confusion matrix image]

conf 0.90: [confusion matrix image]
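For context, the threshold gates which detections ever reach the matrix. A minimal sketch of that filtering step (the values are made up; the [x1, y1, x2, y2, conf, cls] layout matches YOLOv5's NMS output):

```python
import numpy as np

# Hypothetical detections in [x1, y1, x2, y2, conf, cls] layout
# (values invented purely for illustration).
dets = np.array([
    [10, 10, 50, 50, 0.95, 0],  # confident detection
    [60, 60, 90, 90, 0.30, 0],  # borderline detection
    [ 5,  5, 15, 15, 0.02, 1],  # near-noise detection
])

for conf_thres in (0.001, 0.25, 0.90):
    kept = dets[dets[:, 4] >= conf_thres]
    print(f"conf {conf_thres}: {len(kept)} detections reach the matrix")

# conf 0.001 keeps all 3, conf 0.25 keeps 2, conf 0.90 keeps 1,
# which is why the three matrices above look so different.
```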

@glenn-jocher
Author

I suppose the way to read this is that at confidence 0.90 there is very little confusion between classes. At confidence 0.25 there is greater confusion, though not necessarily between classes; it is more between detections and background. At confidence 0.001 the vast majority of detections are FPs (and actually background).

Oddly, the person-background FN cell stays about the same throughout, around 0.40. I'm not sure what that indicates.

@kaanakan
Owner

kaanakan commented Dec 5, 2020

Hi,

Firstly, sorry for the late response.

The confidence threshold effects are expected, I think. At conf 0.001 there should be a lot of false alarms, and at conf 0.90 there should be few to no false alarms.

For the second question, I currently do not have an answer. It may be caused by the number of objects in the person class being significantly higher than in the other classes of the Pascal VOC dataset.

Please feel free to ask any further questions.

@glenn-jocher
Author

@kaanakan thanks! The confusion matrix is integrated now and automatically produced at the end of YOLOv5 training. It seems to be working well.

@arroobamaqsood

I have implemented YOLO on my own dataset and when I plot the confusion matrix, it displays an additional class named 'background'. I didn't include this class while training. Can someone please explain this?

@TheArmbreaker

TheArmbreaker commented Oct 30, 2022

@arroobamaqsood

> I have implemented YOLO on my own dataset and when I plot the confusion matrix, it displays an additional class named 'background'. I didn't include this class while training. Can someone please explain this?

My two cents while learning ML:
The objectness loss is basically a binary cross-entropy that differentiates between an object and the background. This helps with localising and counting the objects in an image.
So if the model had no notion of background, running detection on an image would collapse into plain image classification, which is not what YOLO is doing.
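A minimal sketch of that idea (illustrative only, not YOLO's exact loss code):

```python
import torch
import torch.nn as nn

# Objectness as binary cross-entropy: object (1) vs background (0).
bce = nn.BCEWithLogitsLoss()

# Hypothetical raw objectness scores for five candidate boxes.
obj_logits = torch.tensor([2.1, -1.3, 0.4, -2.8, 1.7])

# Targets: 1.0 where a box matches a ground-truth object, 0.0 for background.
obj_targets = torch.tensor([1.0, 0.0, 1.0, 0.0, 1.0])

loss = bce(obj_logits, obj_targets)
print(loss)  # scalar loss pushing object boxes up and background boxes down
```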

@RaselAmin

For the matrices above, how do you calculate the accuracy of the model from this confusion matrix?
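A minimal sketch of the standard formula (correct predictions on the diagonal divided by all predictions). Note it needs raw counts, not the normalized values shown in the plots above, and the extra background row/column means "accuracy" is not a standard detection metric, which is why mAP is usually reported instead:

```python
import numpy as np

def accuracy_from_cm(cm: np.ndarray) -> float:
    """Overall accuracy from a raw (unnormalized) confusion matrix."""
    return np.trace(cm) / cm.sum()

# Hypothetical 3x3 count matrix (rows = predicted, cols = true).
cm = np.array([
    [50,  2,  3],
    [ 4, 40,  1],
    [ 6,  8, 30],
])
print(accuracy_from_cm(cm))  # 120 / 144 ≈ 0.83
```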

@RyanTNN

RyanTNN commented Nov 8, 2023


Hi @kaanakan, I'm confused about the person-background FN staying around 0.40. How can the background FN confusion in the person class be reduced? Does it affect the model results? Could you explain more clearly?

@JerickoDG

JerickoDG commented Feb 2, 2024

Hi @glenn-jocher. May I confirm whether you used val.py to generate those confusion matrices and tweaked the --conf-thres parameter to 0.001, 0.25, and 0.90? I had high background false positives (FP) for my Class 2 despite having pretty high true negatives for it, so I thought something might be wrong, and I am planning to replicate what you did on my end. Here is the confusion matrix that was generated after training; I have two (2) classes.

[confusion matrix image]

I hope for your kind response. Thank you.
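For reference, a minimal sketch of such a sweep, assuming a checkout of the ultralytics/yolov5 repo (its val.py exposes a run() entry point; the data/weights paths below are placeholders):

```python
# Run from the YOLOv5 repository root.
import val

for conf in (0.001, 0.25, 0.90):
    # Each run saves its plots, including confusion_matrix.png,
    # under runs/val/exp*.
    val.run(
        data="data.yaml",    # placeholder dataset yaml
        weights="best.pt",   # placeholder trained weights
        conf_thres=conf,
    )
```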
