Object detection model evaluation (precision) is always zero #1621
Comments
This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!
I solved the issue. For future reference: while creating the evaluation tfrecords, I set the difficulty to 1 for all records, since my own dataset has no difficulty labels. It turns out eval.py does not count records with difficulty 1.
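The effect described above can be sketched in plain Python. This is a hypothetical simplification of the evaluator's filtering, not the actual eval.py code: boxes flagged as "difficult" are excluded from the ground-truth set, so if every box is flagged, precision has nothing to match against and stays at zero.

```python
def count_ground_truth(difficult_flags):
    """Count boxes an evaluator would treat as ground truth.

    Simplified model of the behavior reported in this thread: a box
    only counts toward ground truth when its difficult flag is 0.
    """
    return sum(1 for flag in difficult_flags if not flag)


# Every box written with difficulty 1 -> zero ground truth,
# so precision is always reported as 0.
print(count_ground_truth([1, 1, 1]))  # 0

# Writing the boxes with difficulty 0 restores the counts.
print(count_ground_truth([0, 0, 0]))  # 3
```

The practical fix, per the comment above, is to write 0 (not 1) into the difficulty field of the evaluation tfrecords when your dataset has no real difficulty annotations.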
@ahmetkucuk I have the same issue, as described in #1696
@ahmetkucuk, I have the same problem. Did you solve it?
I have the same problem. Were any of you able to solve it?
I am using the Object Detection API. I am fine-tuning Faster R-CNN with Resnet-101 using the config file provided in the samples folder.
I start the train and eval scripts at the same time. Tensorboard shows that the loss is decreasing, and in Tensorboard's image section I can see detected objects on the images.
However, Precision and PerformanceByCategory are always zero. It is probably because of the following warning in the output of the eval.py script:
WARNING:root:The following classes have no ground truth examples: [0 1 2]
I checked the tfrecords converter code a couple of times and it looks correct. What might be causing this issue?
The label map looks like this:
System info: