
predict scores is lower than evaluate score #1055

Closed · kolyadin opened this issue Jul 3, 2019 · 8 comments

kolyadin commented Jul 3, 2019

Hi!

I have successfully trained and evaluated a model.
My evaluate stats:

Running network: 100% (257 of 257) |######################################################################################################| Elapsed Time: 0:17:19 Time: 0:17:19
Parsing annotations: 100% (257 of 257) |##################################################################################################| Elapsed Time: 0:00:00 Time: 0:00:00
0 instances of class nothing with average precision: 0.0000
257 instances of class cashier_check with average precision: 1.0000
mAP using the weighted average of precisions among classes: 1.0000

So precision is 1.0000 (I am evaluating on the same train data).

But when I run prediction following this example:
https://github.com/fizyr/keras-retinanet/blob/master/examples/ResNet50RetinaNet.ipynb

I receive lower scores (e.g. "score": 0.905562162399292) on the same images.

Why do the predict scores differ from the evaluate scores?
Thanks.

jsemric (Contributor) commented Jul 6, 2019

Hi.

Maybe you filtered out the predictions with a low score (below 0.5). In the evaluation, the score threshold is 0.05 by default. Also, make sure you feed the images in the BGR format to the model.
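
For reference, the prediction path in the example notebook looks roughly like this sketch (the model path, image path and backbone name are placeholders), which is where both the BGR input and the 0.5 score cut-off come in:

import numpy as np
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image

# Load an inference model (path and backbone name are placeholders).
model = models.load_model('snapshots/inference_model.h5', backbone_name='resnet50')

image = read_image_bgr('example.jpg')   # the network expects BGR images
image = preprocess_image(image)         # same preprocessing as during training
image, scale = resize_image(image)

boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
boxes /= scale                          # map boxes back to the original image size

for box, score, label in zip(boxes[0], scores[0], labels[0]):
    if score < 0.5:   # the notebook filters at 0.5; evaluation defaults to 0.05
        break         # detections are sorted by score, so we can stop early
    print(label, score, box)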

hgaiser (Contributor) commented Jul 9, 2019

@jsemric has a good point: the default score threshold for evaluation is 0.05, while in the example notebook it is set to 0.5. This could explain the difference you're seeing.
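
If you want the evaluation to use the same cut-off as the notebook, the threshold can be raised explicitly. A minimal sketch, assuming the evaluate() helper in keras_retinanet.utils.eval and an already-built validation generator and inference model (the exact signature and return value may differ between versions):

from keras_retinanet.utils.eval import evaluate

# score_threshold defaults to 0.05 here; raising it to 0.5 makes the evaluation
# filter detections the same way the example notebook does.
average_precisions = evaluate(generator, model, score_threshold=0.5)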

hgaiser (Contributor) commented Jul 30, 2019

Any update here?

ikerodl96 commented

Hello,
Today I experienced the same issue. In my case the problem was that the data generator in evaluate.py was not created with the specific preprocessing method associated with the selected model backbone; instead, the default caffe method was used. To solve this, I relied on the code that creates the train and validation data generators in train.py, since there all the additional parameters that influence the generators are taken into account: the backbone-specific preprocessing method is passed as a parameter to the create_generators() function, and other arguments such as the batch size or the image min side are stored in the common_args dict.
Hope it helps and that you can solve the problem.
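
For reference, a condensed sketch of that wiring in train.py (the backbone name, CSV paths and sizes are placeholders; the real create_generators() handles more dataset types and options):

from keras_retinanet import models
from keras_retinanet.preprocessing.csv_generator import CSVGenerator

backbone = models.backbone('resnet50')        # placeholder backbone name

common_args = {
    'batch_size'       : 1,
    'image_min_side'   : 800,
    'image_max_side'   : 1333,
    'preprocess_image' : backbone.preprocess_image,   # backbone-specific preprocessing
}

train_generator = CSVGenerator(
    'dataset/train.csv',       # placeholder annotation and class files
    'dataset/classes.csv',
    **common_args
)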

hgaiser (Contributor) commented Aug 20, 2019

Thanks for letting us know @ikerodl96, this could be related to #647. I'm assuming the original issue is resolved though.

hgaiser closed this as completed Aug 20, 2019
mariaculman18 commented Aug 22, 2019

Hi @ikerodl96, @hgaiser,

So I was comparing the two scripts, train.py and evaluate.py, and I can see what you indicated: when creating the generator in evaluate.py there is no preprocess_image argument. I modified evaluate.py to include it. BUT, now I don't know how to make keras-retinanet pick up the changes I made. I tried reinstalling and rebuilding but it is not working. I know this because I included a print("Hello world") in the modified evaluate.py and I don't see it when running retinanet-evaluate in my environment. Please bear with me, I am a beginner :( I know this is basic but I am not into computer science.

This is what I see when running python setup.py build_py.

Thanks :)
[screenshot: output of python setup.py build_py]

hgaiser (Contributor) commented Aug 22, 2019

@mariaculman18 instead of running retinanet-evaluate, try running via python keras_retinanet/bin/evaluate.py (assuming you're in the keras-retinanet repository).

mariaculman18 commented Aug 22, 2019

@hgaiser @ikerodl96 it worked! Thank you :)

What I did in evaluate.py (the edits are consolidated in the sketch after this list):

  • Included in the imports:
    from ..utils.anchors import make_shapes_callback

  • Modified the create generator function:
    def create_generator(args) -> def create_generator(args, preprocess_image)

  • Included in the create generator function:
    common_args = {
        'preprocess_image': preprocess_image,
    }

  • Included in the create generator function when the dataset type is csv:
    **common_args

  • Included in the main function before creating the generator:
    backbone = models.backbone(args.backbone)

  • Modified the generator creation in the main function:
    generator = create_generator(args) -> generator = create_generator(args, backbone.preprocess_image)

  • Included in the main function after loading the model:
    generator.compute_shapes = make_shapes_callback(model)
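
Put together, the relevant parts of the modified evaluate.py look roughly like this (only the csv branch is shown; evaluate.py's existing imports of models and CSVGenerator, plus the added make_shapes_callback import, are assumed):

def create_generator(args, preprocess_image):
    # pass the backbone-specific preprocessing on to the generator
    common_args = {
        'preprocess_image': preprocess_image,
    }

    if args.dataset_type == 'csv':
        validation_generator = CSVGenerator(
            args.annotations,
            args.classes,
            image_min_side=args.image_min_side,
            image_max_side=args.image_max_side,
            **common_args
        )
    else:
        raise ValueError('Invalid data type received: {}'.format(args.dataset_type))

    return validation_generator

# in main(), around model loading:
backbone  = models.backbone(args.backbone)
generator = create_generator(args, backbone.preprocess_image)
model     = models.load_model(args.model, backbone_name=args.backbone)
generator.compute_shapes = make_shapes_callback(model)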

Then I ran from my folder:
python ~/keras-retinanet/keras_retinanet/bin/evaluate.py --backbone=densenet121 csv dataset/train.csv dataset/classes.csv models/5/output_01.h5

I am attaching the modified script.
evaluate.txt
