
Viewing class probabilities assigned to images associated with test protobuf file, using either python wrappers or test_net.cpp #391

Closed
dannygoldstein opened this issue May 6, 2014 · 11 comments

@dannygoldstein

Hi all,

I've successfully trained a slightly tweaked version of LeNet on my own dataset, which consists of ~80,000 1-channel 21x21-pixel astronomical images produced by the Palomar Transient Factory. I followed the standard training procedure: I split my data into two leveldbs, train and test, linked each to an appropriate protobuf file (train.protobuf and test.protobuf), then linked each protobuf file to a higher-level protobuf file specifying the structure of my net.

I am able to run both train_net.bin and test_net.bin and produce overall accuracy scores for the classifier. However, what I need are individual class probabilities for each image in the leveldb associated with my test protobuf file.

I have tried to produce these in a variety of ways. First, I followed the suggestions that @shelhamer made in #281. I attempted to modify test_net.cpp by defining this variable:

const vector<shared_ptr<Blob<float> > >& blobs = caffe_net->blobs();

after the execution of the Forward pass, then accessing its contents to try to get the probabilities. However, I get a compile error just from adding that line after the forward pass:

tools/test_net.cpp: In function 'int main(int, char**)':
tools/test_net.cpp:55:68: error: base operand of '->' has non-pointer type  'caffe::Net<float>'

Next, I tried using the python wrappers to access the probabilities via net.blobs['prob'], but I got an array with all entries equal to 0.

Finally, I tried loading my trained net as a caffe.ImagenetClassifier to expose the predict method. However, when I called predict on one of the raw images from my testing protobuf file, I got the following error:

F0505 23:21:01.731472 32559 _caffe.cpp:155] Check failed: dims[0] == blob->num() (10 vs. 64)

I would really appreciate a clear explanation of how to access the probabilities.

Thanks,
Danny

@elife33 commented May 12, 2014

Hi danig,

In test_net.cpp I see the code below:

Net<float> caffe_test_net(argv[1]);
caffe_test_net.CopyTrainedLayersFrom(argv[2]);

int total_iter = atoi(argv[3]);
LOG(ERROR) << "Running " << total_iter << " iterations.";

double test_accuracy = 0;
for (int i = 0; i < total_iter; ++i) {
  const vector<Blob<float>*>& result = caffe_test_net.ForwardPrefilled();
  test_accuracy += result[0]->cpu_data()[0];
  LOG(ERROR) << "Batch " << i << ", accuracy: " << result[0]->cpu_data()[0];
}
test_accuracy /= total_iter;

I think you should use const vector<shared_ptr<Blob<float> > >& blobs = caffe_test_net.blobs(); instead of ->blobs(), since caffe_test_net is an object, not a pointer; that is what your compile error is saying.

@shelhamer (Member)

Hi Danny,

Happy to see Caffe applied to astronomy! Sorry for the frustration trying to extract the predictions. The latest release includes an overhauled python interface that's much easier to use. It can yield predictions on your training and test sets plus any new inputs in deployment.

Since you are interested in the predictions on your test set, you should load your test net and then run the forward pass without any arguments to do a "prefilled" pass populated by data from your test leveldb. The next call will predict the next batch of inputs, and so forth. It'll look something like:

import caffe

net = caffe.Net('/path/to/model_def.prototxt', '/path/to/model/weights')
out = net.forward()

`out` is now a dictionary of {output name: ndarray of outputs}. So, for instance, if your softmax classifier output layer is called "prob", `out['prob'][0]` is the prediction vector for your first test input.

You might want to take a look at our classifier example too.
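
If you want the probabilities for your whole test set, here is a minimal sketch of collecting them across batches (assuming your softmax output is named "prob"; num_batches is hypothetical and should be your test set size divided by your batch size):

import numpy as np
import caffe

net = caffe.Net('/path/to/model_def.prototxt', '/path/to/model/weights')

num_batches = 100  # hypothetical: ceil(test set size / batch size)
probs = []
for _ in range(num_batches):
    out = net.forward()               # each prefilled pass pulls the next batch from the test leveldb
    probs.append(out['prob'].copy())  # copy, since the array may be reused on the next pass
probs = np.concatenate(probs)         # one row of class probabilities per test image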

Hope this helps!

@dannygoldstein (Author)

Hi Evan,

Thanks very much for the helpful response! The new python wrapper is very intuitive and easy to use. However, after pulling the most recent version of caffe from master, recompiling the code, and retraining my net, I'm still having a strange issue accessing the softmax probabilities for my test set.

I ran

import caffe

net = caffe.Net(net_def_proto, weights)
net.set_mode_cpu()
net.set_phase_test()
out = net.forward()

Now, when I iterate through out['prob'] (a 64-element array, corresponding to my minibatch size of 64), all of the class probabilities displayed are the same for each image in my test set! This is strange because each image in my test set is unique. Here is an example of what I mean.

In [5]: out = net.forward()

In [6]: out['prob'][0]
Out[6]:
array([[[ 0.98499644]],
       [[ 0.01500351]]], dtype=float32)

In [7]: out['prob'][1]
Out[7]:
array([[[ 0.98499644]],
       [[ 0.01500351]]], dtype=float32)

In [8]: out['prob'][3]
Out[8]:
array([[[ 0.98499644]],
       [[ 0.01500351]]], dtype=float32)

In [9]: out['prob'][6]
Out[9]:
array([[[ 0.98499644]],
       [[ 0.01500351]]], dtype=float32)

The weirder thing is that when I call out = net.forward() again and then iterate through out['prob'] once more, the probabilities for each image are the same as they were in the previous pass (i.e. all array([[[ 0.98499644]], [[ 0.01500351]]], dtype=float32)).

Any idea what's going on here?

Apologies if I'm missing something really obvious, and thanks in advance for your help.

Danny

@shelhamer (Member)

Did you upgrade the model prototxt and binaryproto according to the release notes (https://github.com/BVLC/caffe/releases/tag/v0.999)?

Do you get good results when running the net on the test set with the test_net.bin tool? Uniform output probabilities are usually a sign of (1) an input bug or (2) failed training (i.e. it could be predicting the class proportions).

If your training set is not class-balanced (i.e. each class showing up a roughly equal number of times), you should try re-training with a balanced data set.
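
One quick way to rule out (1) is to check that the images in a batch actually differ after a forward pass. A minimal sketch, reusing the net object from your snippet and assuming your input blob is named "data":

import numpy as np

out = net.forward()
data = net.blobs['data'].data            # shape: (batch_size, channels, height, width)
print(np.abs(data[0] - data[1]).sum())   # ~0 between different images would point to an input bug
print(data.min(), data.max())            # all-zero data would point to an input bug, too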


@dannygoldstein (Author)

Yes and yes.

The results of the test_net.bin tool are sensible: they converge to a reasonable value (95% accuracy) that persists across cross-validation. The training set is split roughly 20% / 80% between its two classes, which, although not perfectly balanced, is too mixed to explain the uniform probabilities that the python wrapper is producing.

@shelhamer (Member)

@DaniG it's been a while, but it sounds like you might have been caught in an input preprocessing trap in pycaffe. See #525 and #816.
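
The trap, in short: pycaffe leaves preprocessing to the caller, so hand-fed inputs must be transformed the same way the training data was. A minimal sketch using caffe.io.Transformer, assuming a hypothetical deploy-style prototxt whose input is a blob named "data" rather than a leveldb data layer (for 1-channel images like these, the RGB channel swap from the classification example is omitted):

import caffe

net = caffe.Net('/path/to/deploy.prototxt', '/path/to/model/weights')

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))  # HxWxC -> CxHxW
transformer.set_raw_scale('data', 255)        # only if the net was trained on [0, 255] pixel values

image = caffe.io.load_image('/path/to/image.png', color=False)  # grayscale input
net.blobs['data'].data[...] = transformer.preprocess('data', image)
out = net.forward()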

@shelhamer (Member)

Closing since this should be fixed in the latest release, but give a shout on the caffe-users mailing list when you have a chance to try it!

@RoroKA commented Dec 14, 2016

Why does out['prob'] have 2 values?
Which one is the softmax output?

@williford (Contributor)

@RoroKA Softmax outputs have to sum to 1, so it doesn't make sense for Softmax to return just one value. If p is the first probability, the second probability is 1 - p. Ask on Stack Overflow or the Caffe Users mailing list for more info.
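
For example, with the (2, 1, 1) outputs shown earlier in this thread, a small sketch:

import numpy as np

p = out['prob'][0].flatten()  # e.g. array([ 0.98499644, 0.01500351], dtype=float32)
print(p.sum())                # ~1.0, as a softmax output must be
print(p.argmax())             # index of the predicted class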

@RoroKA commented Dec 14, 2016 via email

@zhong-xin commented May 27, 2018

I got the following error. Who can help me? Thanks.

Traceback (most recent call last):
File "/home/xin/TrailNet/trailnet/models/nets/ResNet/feature_map.py", line 56, in
output_prob = output['prop'][0]
KeyError: 'prop'
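
The KeyError means the net has no output blob named 'prop'; the name must match the top name of the final layer in your prototxt (most likely 'prob', as elsewhere in this thread). A quick way to check which names exist, as a small sketch:

out = net.forward()
print(list(out.keys()))        # names of the net's output blobs
print(list(net.blobs.keys()))  # all blob names, in case the softmax isn't a net output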
