
Error for loading vgg16_reducedfc for exporting to onnx model #33

Closed
YaraAlnaggar opened this issue Feb 25, 2019 · 5 comments


@YaraAlnaggar

I'm trying to export the onnx model for vgg16_reducedfc using this command:

```
python3 convert_to_caffe2_models.py vgg16-ssd models/vgg16_reducedfc.pth models/voc-model-labels.txt
```

but I get the following error:

```
Traceback (most recent call last):
  File "convert_to_caffe2_models.py", line 51, in <module>
    net.load(model_path)
  File "/media/pc/sdb1/pytorch-ssd/vision/ssd/ssd.py", line 135, in load
    self.load_state_dict(torch.load(model, map_location=lambda storage, loc: storage))
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 769, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SSD:
    Missing key(s) in state_dict:
      "base_net.0.bias", "base_net.0.weight", "base_net.2.bias", "base_net.2.weight",
      "base_net.5.bias", "base_net.5.weight", "base_net.7.bias", "base_net.7.weight",
      "base_net.10.bias", "base_net.10.weight", "base_net.12.bias", "base_net.12.weight",
      "base_net.14.bias", "base_net.14.weight", "base_net.17.bias", "base_net.17.weight",
      "base_net.19.bias", "base_net.19.weight", "base_net.21.bias", "base_net.21.weight",
      "base_net.24.bias", "base_net.24.weight", "base_net.26.bias", "base_net.26.weight",
      "base_net.28.bias", "base_net.28.weight", "base_net.31.bias", "base_net.31.weight",
      "base_net.33.bias", "base_net.33.weight",
      "extras.0.0.bias", "extras.0.0.weight", "extras.0.2.bias", "extras.0.2.weight",
      "extras.1.0.bias", "extras.1.0.weight", "extras.1.2.bias", "extras.1.2.weight",
      "extras.2.0.bias", "extras.2.0.weight", "extras.2.2.bias", "extras.2.2.weight",
      "extras.3.0.bias", "extras.3.0.weight", "extras.3.2.bias", "extras.3.2.weight",
      "classification_headers.0.bias", "classification_headers.0.weight",
      "classification_headers.1.bias", "classification_headers.1.weight",
      "classification_headers.2.bias", "classification_headers.2.weight",
      "classification_headers.3.bias", "classification_headers.3.weight",
      "classification_headers.4.bias", "classification_headers.4.weight",
      "classification_headers.5.bias", "classification_headers.5.weight",
      "regression_headers.0.bias", "regression_headers.0.weight",
      "regression_headers.1.bias", "regression_headers.1.weight",
      "regression_headers.2.bias", "regression_headers.2.weight",
      "regression_headers.3.bias", "regression_headers.3.weight",
      "regression_headers.4.bias", "regression_headers.4.weight",
      "regression_headers.5.bias", "regression_headers.5.weight",
      "source_layer_add_ons.0.bias", "source_layer_add_ons.0.running_var",
      "source_layer_add_ons.0.running_mean", "source_layer_add_ons.0.weight".
    Unexpected key(s) in state_dict:
      "0.weight", "0.bias", "2.weight", "2.bias", "5.weight", "5.bias",
      "7.weight", "7.bias", "10.weight", "10.bias", "12.weight", "12.bias",
      "14.weight", "14.bias", "17.weight", "17.bias", "19.weight", "19.bias",
      "21.weight", "21.bias", "24.weight", "24.bias", "26.weight", "26.bias",
      "28.weight", "28.bias", "31.weight", "31.bias", "33.weight", "33.bias".
```

Is it because the vgg16-ssd network definition is different from the reduced one?
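The key names in the traceback point to the cause: the checkpoint's keys ("0.weight", "0.bias", ...) are exactly the full model's base_net keys minus the "base_net." prefix, which is the signature of a backbone-only checkpoint. A minimal sketch of the mismatch (the values here are placeholder strings, not real weights):

```python
# Illustration only: a backbone-only checkpoint carries raw VGG layer keys,
# while the assembled SSD model expects them under the "base_net." prefix.
base_ckpt = {"0.weight": "w0", "0.bias": "b0", "2.weight": "w2"}

# Prefixing the keys would resolve the "unexpected key" half of the error...
remapped = {f"base_net.{k}": v for k, v in base_ckpt.items()}
assert "base_net.0.weight" in remapped
assert "0.weight" not in remapped
```

...but the extras, classification_headers, and regression_headers keys would still be missing, because a base network has never trained them. So remapping is not a real fix; the script needs a fully trained SSD checkpoint.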

@YaraAlnaggar YaraAlnaggar changed the title Error for exporting onnx model for vgg16_reducedfc Error for loading vgg16_reducedfc for exporting to onnx model Feb 25, 2019

drcdr commented Feb 26, 2019

I've had success in exporting two of the ONNX models provided on the project page; here is a summary:

| pre-trained model | net-type | Converts? |
|---|---|---|
| mobilenet-v1-ssd-mp-0_675.pth | mb1-ssd | Y |
| mb2-ssd-lite-mp-0_686.pth | mb2-ssd-lite | Y |
| vgg16-ssd-mp-0_7726.pth | vgg16-ssd | N (MaxPool2d ceil issue) |
| mobilenet_v1_with_relu_69_5.pth | mb1-ssd | N (base model) |
| vgg16_reducedfc | vgg16-ssd | N (base model) |

To get to this point, I tweaked each create_xxx_ssd call in convert_to_caffe2_models.py to pass device='cpu', and then passed device=device in the call to SSD (otherwise there is a GPU/CPU storage mismatch).

The MaxPool ceil issue is described here: onnx/onnx#549, and is due to this line in vgg.py:

```python
layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]
```

I'm not sure whether there is a simple way around this that maintains the accuracy of these pretrained weights.
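For context on why that flag matters here: with a 300×300 input, VGG's third pool sees a 75×75 feature map, and SSD's prior boxes assume the conv4_3 map is 38×38, which only ceil_mode=True produces. A quick sketch of the output-size arithmetic (plain Python, no torch; the 75×75 figure is my reading of the SSD300/VGG16 layout):

```python
import math

def pool_out(n, kernel=2, stride=2, ceil_mode=False):
    # Output length of an unpadded max pool along one dimension.
    frac = (n - kernel) / stride
    return (math.ceil(frac) if ceil_mode else math.floor(frac)) + 1

# Even dimensions are unaffected, but an odd 75 diverges:
print(pool_out(75))                  # 37 (floor: loses a row/column)
print(pool_out(75, ceil_mode=True))  # 38 (what the priors expect)
```

So simply flipping the flag off would shift every downstream feature-map size and break the pretrained weights' alignment with the priors.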

qfgaohao (Owner) commented Mar 4, 2019

Hi @YaraAlnaggar, with the script in this project you can only convert SSD models, not pre-trained ImageNet base models.

qfgaohao (Owner) commented Mar 4, 2019

@drcdr thanks for the nice summary and pointing out the issue related to MaxPool2d.


ishang3 commented Jun 11, 2020

@drcdr Do you have resources that show how to parse the ONNX output after inference?
I can run inference successfully, but I don't understand the output format.


drcdr commented Jun 11, 2020

It's been a long time since I looked at this... but if you can be more specific about what you're looking for (what command you're executing, which output you're looking at, etc.), I might be able to help.
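For what it's worth, here is a minimal sketch of how the two outputs are typically consumed, assuming the export follows this repo's is_test path: scores of shape [1, num_priors, num_classes] (already softmaxed, class 0 = background) and boxes of shape [1, num_priors, 4] in normalized (x1, y1, x2, y2) corner form. The function names and thresholds below are illustrative, not from the repo:

```python
import numpy as np

def box_area(b):
    return np.clip(b[..., 2] - b[..., 0], 0, None) * np.clip(b[..., 3] - b[..., 1], 0, None)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy hard NMS over corner-form boxes; returns kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        union = box_area(boxes[i]) + box_area(boxes[rest]) - inter
        order = rest[inter / (union + 1e-9) <= iou_thresh]
    return keep

def parse_ssd_output(scores, boxes, prob_thresh=0.5):
    """scores: [1, N, C], boxes: [1, N, 4] -> list of (class_id, prob, box)."""
    scores, boxes = scores[0], boxes[0]
    detections = []
    for c in range(1, scores.shape[1]):  # class 0 is background, skip it
        probs = scores[:, c]
        mask = probs > prob_thresh
        if not mask.any():
            continue
        cboxes, cprobs = boxes[mask], probs[mask]
        for i in nms(cboxes, cprobs):
            detections.append((c, float(cprobs[i]), cboxes[i]))
    return detections
```

Multiply the kept boxes by the original image width/height to get pixel coordinates, and index the class id into the labels file from the convert command.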
