Lower accuracy than expected on VOC 2007 model & data #5

Closed
mbuckler opened this issue Feb 10, 2017 · 7 comments

Comments

@mbuckler (Contributor)

Thank you for providing your code. I have installed and run the provided test, but unfortunately I am seeing lower accuracy on the VOC 2007 benchmark than I expected.

The README says the model achieves 71.2, but when I run ./experiments/scripts/test_vgg16.sh 0 pascal_voc with the VOC 2007 data and your model, I get Mean AP = 0.4955, which I take to mean an mAP of 49.55. Should I be using a different testing script, or a different model than the one downloaded by ./data/scripts/fetch_faster_rcnn_models.sh? Here are the full results:

AP for aeroplane = 0.5898
AP for bicycle = 0.5308
AP for bird = 0.4317
AP for boat = 0.3876
AP for bottle = 0.2347
AP for bus = 0.6052
AP for car = 0.5414
AP for cat = 0.6908
AP for chair = 0.2789
AP for cow = 0.5222
AP for diningtable = 0.5555
AP for dog = 0.6149
AP for horse = 0.7065
AP for motorbike = 0.5160
AP for person = 0.4421
AP for pottedplant = 0.2304
AP for sheep = 0.4441
AP for sofa = 0.5538
AP for train = 0.6770
AP for tvmonitor = 0.3559
Mean AP = 0.4955

@mbuckler (Contributor, Author)

I should also note that after the testing script completes I get a notification about a segfault, which I am unsure how to trace. Perhaps this is part of the problem?

./experiments/scripts/test_vgg16.sh: line 60: 338 Segmentation fault (core dumped) CUDA_VISIBLE_DEVICES=${GPU_ID} python ./tools/test_vgg16_net.py --imdb ${TEST_IMDB} --weight data/imagenet_weights/vgg16.weights --model ${NET_FINAL} --cfg experiments/cfgs/vgg16.yml --set ${EXTRA_ARGS}
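
For what it's worth, one generic way to trace a segfault like this is Python's faulthandler module (built into Python 3; available as a backported pip package on Python 2). A minimal sketch, added at the top of tools/test_vgg16_net.py:

    # Dump the Python-level traceback if the process receives SIGSEGV.
    # faulthandler is in the stdlib on Python 3; on Python 2 it is a
    # separate pip-installable package.
    import faulthandler
    faulthandler.enable()

If the crash happens during interpreter teardown (as it seems to here), the traceback may be short, but it at least shows whether a particular extension module is involved.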

@endernewton (Owner)

Not sure what happened in your case. Can you provide your hardware and software setup? I don't get a segfault here.

@mbuckler (Contributor, Author)

Hey! Thank you very much for replying. It turns out the incorrect accuracy was caused by the wrong GPU architecture being specified in the extra_compile_args in the setup.py script: I needed -arch=sm_52 because I am using a Titan X. Interestingly, the segfault still appears at the end of testing, but since testing now works I have everything I need to continue.
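
For anyone else hitting this, the relevant block is the gpu_nms extension in lib/setup.py. A sketch of roughly what it looks like (the CUDA dict and numpy_include are defined earlier in that file; the exact nvcc options here are from memory and may differ slightly from the repo):

    from distutils.extension import Extension

    gpu_nms = Extension(
        'nms.gpu_nms',
        ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
        library_dirs=[CUDA['lib64']],
        libraries=['cudart'],
        language='c++',
        runtime_library_dirs=[CUDA['lib64']],
        extra_compile_args={
            'gcc': ['-Wno-unused-function'],
            # -arch must match your GPU: sm_52 for a (Maxwell) Titan X;
            # other cards need other values.
            'nvcc': ['-arch=sm_52',
                     '--ptxas-options=-v',
                     '-c',
                     '--compiler-options', "'-fPIC'"],
        },
        include_dirs=[numpy_include, CUDA['include']],
    )

After changing the flag, remember to rebuild the extensions (e.g. make clean && make in lib/) so the new architecture actually takes effect.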

I'll close this issue since everything is working, but I should also mention that a few things in the code might want changing to help Linux users like myself. I've made a pull request that you might be interested in. Thanks again!

@caunion commented Apr 28, 2017

I suffered from a similar problem! I have a somewhat old GPU, a GTX 780, which is the third-generation (Kepler) architecture. With the default sm_52 the accuracy is incorrect; changing it to sm_35 gives the reported mAP...

It is so tricky. I wonder why the compilation of nms still succeeds with an incorrect -arch parameter...
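
A likely explanation, for what it's worth: nvcc cross-compiles for whatever target -arch names without checking the installed hardware, so the build succeeds; the mismatch only surfaces at runtime, when the kernel fails to load on an incompatible device, and if those CUDA error codes go unchecked the NMS output buffer contains garbage rather than raising a hard error, which would account for a wrong mAP instead of a crash. For reference, the cards mentioned in this thread map to the following flags (taken from NVIDIA's published compute-capability tables; double-check your exact model):

    # Architecture flags for the GPUs discussed in this thread.
    ARCH_FLAG = {
        'GTX 780 (Kepler)':    'sm_35',
        'Titan X (Maxwell)':   'sm_52',
        'Tesla P100 (Pascal)': 'sm_60',
    }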

@caunion commented Apr 29, 2017

@mbuckler After correcting the sm_xx, did you get exactly 71.2 mAP? I corrected the sm_xx for my GTX 780, but testing with the pretrained VOC 2007 model from the author's page I got 70.89 mAP, while with the model pretrained on VOC 2007+2012 I got 74.95 mAP. Are these the correct numbers? @endernewton

@endernewton (Owner) commented Apr 29, 2017 via email

@cjcchen commented Aug 27, 2018

Hi, I also got 0.5206 mAP after running the test script. I am using a Tesla P100 and I have set the arch to sm_60 according to http://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/. Is there anything wrong?
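
A quick way to double-check which architecture the driver actually reports for your device (a sketch assuming pycuda is installed; it is not a dependency of this repo, just a convenient way to query the driver — a P100 should print sm_60):

    import pycuda.driver as cuda

    cuda.init()
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        major, minor = dev.compute_capability()
        # e.g. "GPU 0: Tesla P100-PCIE-16GB -> sm_60"
        print('GPU %d: %s -> sm_%d%d' % (i, dev.name(), major, minor))

If this prints sm_60 but the mAP is still low, the problem is probably elsewhere (for example, the extensions were not rebuilt after changing the flag).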
