
Object detection model evaluation (precision) is always zero #1621

Closed
ahmetkucuk opened this issue Jun 20, 2017 · 6 comments

Comments

ahmetkucuk commented Jun 20, 2017

I am using the Object Detection API. I am fine-tuning Faster R-CNN with ResNet-101 using the config file provided in the samples folder.

I start the train and eval scripts at the same time. TensorBoard shows that the loss is decreasing, and in TensorBoard's image section I can see detected objects on the images.

However, Precision and PerformanceByCategory are always zero. This is probably because of the following warning, which eval.py outputs:

WARNING:root:The following classes have no ground truth examples: [0 1 2]

I checked the tfrecords converter code a couple of times and it looks correct. What might be causing this issue?
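One way to narrow this down is to inspect the evaluation TFRecord directly and confirm that ground-truth labels are actually present. A minimal sketch, assuming a TF 1.x environment and an illustrative file name eval.record:

import tensorflow as tf

# Read the first example from the eval TFRecord and print the ground-truth
# class labels and difficult flags (file name is illustrative).
for record in tf.python_io.tf_record_iterator('eval.record'):
    example = tf.train.Example.FromString(record)
    feature = example.features.feature
    print('labels:   ', list(feature['image/object/class/label'].int64_list.value))
    print('difficult:', list(feature['image/object/difficult'].int64_list.value))
    break  # only look at the first example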

Label map looks like this:

item {
  id: 0
  name: 'none_of_the_above'
}
item {
  id: 1
  name: 'ar'
}
item {
  id: 2
  name: 'ch'
}
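As an aside, the Object Detection API conventionally reserves id 0 for the implicit background class, so label maps normally start real classes at id 1. A minimal sketch for loading and inspecting a label map with the API's own utility (the path is illustrative):

from object_detection.utils import label_map_util

# Load the label map proto and print its entries. The API treats id 0 as
# background, so an explicit id 0 item such as 'none_of_the_above' is unusual.
label_map = label_map_util.load_labelmap('data/label_map.pbtxt')
for item in label_map.item:
    print(item.id, item.name)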

System info:

== cat /etc/issue ===============================================
Linux 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
VERSION="16.04.2 LTS (Xenial Xerus)"
VERSION_ID="16.04"
VERSION_CODENAME=xenial

== are we in docker =============================================
Yes

== compiler =====================================================
c++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


== uname -a =====================================================
Linux 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

== check pips ===================================================
numpy (1.12.1)
protobuf (3.2.0)
tensorflow-gpu (1.1.0)

== check for virtualenv =========================================
False

== tensorflow import ============================================
tf.VERSION = 1.1.0
tf.GIT_VERSION = v1.1.0-rc0-61-g1ec6ed5
tf.COMPILER_VERSION = v1.1.0-rc0-61-g1ec6ed5
Sanity check: array([1], dtype=int32)

== env ==========================================================
LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
DYLD_LIBRARY_PATH is unset

== nvidia-smi ===================================================
Tue Jun 20 03:01:21 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 0000:02:00.0     Off |                    0 |
| N/A   46C    P0    55W / 250W |  15601MiB / 16276MiB |     89%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P100-PCIE...  Off  | 0000:82:00.0     Off |                    0 |
| N/A   42C    P0    34W / 250W |  15599MiB / 16276MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

== cuda libs  ===================================================
/usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudart.so.8.0.61
/usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudart_static.a
@tatatodd

This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!

@ahmetkucuk (Author)

I solved the issue. For future reference: while creating the evaluation tfrecords, I set the difficulty to 1 for all records, since my own dataset has no difficulty annotations. It turns out eval.py does not count records with difficulty 1 as ground truth.
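For anyone hitting this: ground-truth boxes marked difficult are excluded from evaluation, so if every box carries difficult=1, precision has no ground truth to score against. A minimal sketch of writing an eval example with the flag cleared, assuming the API's usual tf.Example feature keys (image file, box coordinates, and record name are all illustrative):

import tensorflow as tf

def int64_list(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=v))
def float_list(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
def bytes_list(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=v))

encoded_jpeg = open('example.jpg', 'rb').read()  # illustrative image file

example = tf.train.Example(features=tf.train.Features(feature={
    'image/height':             int64_list([480]),
    'image/width':              int64_list([640]),
    'image/encoded':            bytes_list([encoded_jpeg]),
    'image/format':             bytes_list([b'jpeg']),
    'image/object/bbox/xmin':   float_list([0.1]),
    'image/object/bbox/xmax':   float_list([0.5]),
    'image/object/bbox/ymin':   float_list([0.2]),
    'image/object/bbox/ymax':   float_list([0.6]),
    'image/object/class/text':  bytes_list([b'ar']),
    'image/object/class/label': int64_list([1]),
    # 0 = not difficult; boxes with difficult=1 are dropped from the
    # evaluation ground truth, which makes precision read as zero.
    'image/object/difficult':   int64_list([0]),
}))

with tf.python_io.TFRecordWriter('eval.record') as writer:
    writer.write(example.SerializeToString())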

@YanLiang0813

@ahmetkucuk I have the same issue, as described in #1696,
but the dataset I used is pascal_voc_2012, and the difficult flag in the annotation XML files is 0. Could you give me a suggestion? Thanks.
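If the XML already says difficult 0, it may help to double-check what the converter actually read from the annotation. A minimal sketch that parses one VOC-style XML file and prints each object's difficult flag (file path is illustrative):

import xml.etree.ElementTree as ET

# Print each object's class name and difficult flag from a VOC annotation.
root = ET.parse('Annotations/2007_000027.xml').getroot()
for obj in root.findall('object'):
    print(obj.find('name').text, 'difficult =', obj.find('difficult').text)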

PythonImageDeveloper commented Apr 12, 2018

@ahmetkucuk, I have the same problem. Did you solve it?

@Karthik-Suresh93

I have the same problem; were any of you able to solve it?

psdas commented Jun 14, 2018

I have the same problem as well. I used the ssd_mobilenet_v1_coco_11_06_2017 model for transfer learning. I have only 3 classes, and my label file is:

item {
  id: 1
  name: 'tree'
}
item {
  id: 2
  name: 'water body'
}
item {
  id: 3
  name: 'building'
}

My eval.py run gets stuck after this. What do I do?
[screenshot of eval.py output]
