F.conv onnx export better support #656

Closed
lucasjinreal opened this issue Mar 19, 2021 · 5 comments
Labels
duplicate (This issue or pull request already exists) · triaged (Issue has been triaged by maintainers)

Comments

@lucasjinreal
Contributor

Please test this simple model export:

import torch
import torch.nn as nn
import torch.nn.functional as F


class MG(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, b):
        # Convolution whose weight tensor arrives as a runtime input
        preds = F.conv2d(x, b, stride=1)
        return preds


torch_model = MG()
x = torch.randn([1, 4, 24, 24])   # input: N=1, C=4, H=W=24
b = torch.randn([8, 4, 3, 3])     # weight: 8 out channels, 4 in channels, 3x3 kernel
torch_out = torch_model(x, b)

# Export the model
torch.onnx.export(torch_model,               # model being run
                  (x, b),                    # model inputs
                  "a.onnx",                  # output file
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=12,          # the ONNX opset version to export the model to
                  do_constant_folding=True,  # fold constant expressions at export time
                  verbose=True)
print('Done!')

This is a dead simple model, but PyTorch cannot export it in a form that is convertible to TRT.

When I convert it to TRT, I get:

❯ onnx2trt a.onnx 
----------------------------------------------------------------
Input filename:   a.onnx
ONNX IR version:  0.0.6
Opset version:    12
Producer name:    pytorch
Producer version: 1.7
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
Parsing model
While parsing node number 0 [Conv -> "2"]:
ERROR: /home/onnx-tensorrt/builtin_op_importers.cpp:512 In function importConv:
[8] Assertion failed: ctx->network()->hasExplicitPrecision() && "TensorRT only supports multi-input conv for explicit precision QAT networks!"

I'm not sure whether this is a problem on the PyTorch side or the onnx-tensorrt side, but I cannot convert any model that contains a self-defined F.conv op, for example SOLOv2.

Please help if anyone knows how to solve this.
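
For context, one quick way to see what TRT is complaining about is to inspect the exported graph: the conv weight shows up as a second graph input rather than a stored initializer. A minimal check, assuming the a.onnx file produced by the snippet above:

import onnx

# Load the exported model and compare graph inputs against stored initializers.
model = onnx.load("a.onnx")
print("inputs:      ", [i.name for i in model.graph.input])
print("initializers:", [t.name for t in model.graph.initializer])

If the weight tensor is listed under inputs and not under initializers, the onnx-tensorrt importer will hit the multi-input conv assertion shown above.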

@lucasjinreal
Contributor Author

I guess this issue is too niche and no one has really hit this step? @kevinch-nv Please have a look; the PyTorch repro code is included above.

@lucasjinreal
Contributor Author

lucasjinreal commented Mar 25, 2021

@mk-nvidia Please have a look.

pytorch/pytorch#54314

@KellenSunderland @kevinch-nv Please have a look.

@lucasjinreal
Contributor Author

@kevinch-nv @KellenSunderland Daily ping.

@kevinch-nv kevinch-nv added enhancement New feature or request performance Slower performance than a different framework triaged Issue has been triaged by maintainers labels Jun 28, 2021
@kevinch-nv
Collaborator

TRT requires the weights for the conv to be initializers, unless they are being overwritten by an INT8 -> Float dequantize layer in QDQ networks. General support for tensor conv weights is unavailable at the moment.
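
In practice, this means the weight has to be baked into the module at export time. A minimal sketch of that restructuring (not from this thread; MGFixedWeight is a hypothetical name, and it assumes the weight is fixed at export time):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical workaround sketch: register the weight on the module so
# torch.onnx.export serializes it as an initializer, not a graph input.
class MGFixedWeight(nn.Module):
    def __init__(self, weight):
        super().__init__()
        self.weight = nn.Parameter(weight, requires_grad=False)

    def forward(self, x):
        return F.conv2d(x, self.weight, stride=1)

x = torch.randn([1, 4, 24, 24])
b = torch.randn([8, 4, 3, 3])
torch.onnx.export(MGFixedWeight(b), (x,), "a_fixed.onnx",
                  export_params=True, opset_version=12,
                  do_constant_folding=True)

With export_params=True the registered weight is stored as an initializer, which satisfies the TRT importer's requirement. This of course does not help when the weight genuinely changes at runtime, as in dynamic-conv heads like SOLOv2's.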

@kevinch-nv
Collaborator

Closing as duplicate of #609

@kevinch-nv kevinch-nv added duplicate This issue or pull request already exists and removed enhancement New feature or request performance Slower performance than a different framework labels Mar 21, 2022