
Could I convert lightning module to onnx? Thanks! #2271

Closed
stevenluzheng123456 opened this issue Jun 19, 2020 · 15 comments · Fixed by #2596
Labels: feature, let's do it!

Comments

@stevenluzheng123456

stevenluzheng123456 commented Jun 19, 2020

🚀 Feature

PyTorch Lightning works very well, but I cannot find any documentation or examples showing how to convert a pretrained Lightning model to ONNX. Is the LightningModule intended only for research purposes, without support for cross-platform deployment via ONNX?

@stevenluzheng123456 added the feature and help wanted labels Jun 19, 2020
@github-actions
Contributor

Hi! Thanks for your contribution, great first issue!

@Borda
Member

Borda commented Jun 19, 2020

good point, @PyTorchLightning/core-contributors thoughts?

@SkafteNicki
Member

From the docs https://pytorch.org/docs/stable/onnx.html it seems easy enough. We would require users to define the self.example_array attribute, but that's it.

Do we already have a method for saving the model that can be extended?
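For illustration, a minimal sketch of this idea; it assumes the attribute is named example_input_array (as it is later in the thread), and the model itself is hypothetical:

import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)
        # sample input for the exporter to trace with; its shape must match forward()
        self.example_input_array = torch.randn(1, 28 * 28)

    def forward(self, x):
        return self.layer(x)

model = LitModel()
# a LightningModule is an nn.Module, so plain torch.onnx.export already works
torch.onnx.export(model, model.example_input_array, "model.onnx")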

@williamFalcon
Contributor

A LightningModule is just an nn.Module.

Wouldn't this just work?

# Input to the model
x = torch.randn(batch_size, 1, 224, 224, requires_grad=True)
torch_out = torch_model(x)

# Export the model
torch.onnx.export(torch_model,               # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  "super_resolution.onnx")   # where to save the model (can be a file or file-like object)

But we should just make this automatic...
@stevenluzheng123456 want to submit a PR for:

model = LitModel(...)
model.to_onnx(x)

@williamFalcon added the priority: 0 label Jun 19, 2020
@lezwon
Contributor

lezwon commented Jul 12, 2020

@williamFalcon I could take up this issue.
Was wondering if we could automatically infer the input via the dataloader, instead of passing it to to_onnx. Something along the lines of:

# grab a single batch from the module's own train dataloader
batch = next(iter(model.train_dataloader()))
input_data = batch[0]  # assuming the first element of the batch is the model input
torch.onnx.export(model, input_data, file_path)

@Borda added the Important and let's do it! labels and removed the priority: 0 and help wanted labels Jul 12, 2020
@Borda
Member

Borda commented Jul 12, 2020

@lezwon since which PyTorch version has this option been available?

@lezwon
Contributor

lezwon commented Jul 12, 2020

@Borda Going by the docs, I think ONNX export has been supported since PyTorch 0.3.0.

@awaelchli
Contributor

Could one also use the LightningModule's example_input_array for this?

@lezwon
Contributor

lezwon commented Jul 12, 2020

@awaelchli Didn't know about example_input_array. We could use that if it's available; otherwise we could take the dataloader approach. We could also provide an optional parameter for passing in an input tensor.

@awaelchli
Contributor

awaelchli commented Jul 12, 2020

Yes, so far example_input_array is used by the model summary to print input and output shapes. It is optional, but the user needs to define it if they want the full summary. The reasons the user has to define it manually are:

  • the module's forward may have different input types/shapes than training_step (which receives what the dataloader serves);
  • defining dataloaders is optional in a LightningModule, since they can also be passed to fit/test.

To me it looks like exporting to onnx is very similar to the model summary in this regard.

@lezwon
Contributor

lezwon commented Jul 13, 2020

It does seem very similar. The way I see it, the example_input_array approach might be better, as the forward method might take an input with a different shape. Maybe we could also allow an optional parameter in case example_input_array is not present?
So the approach would be:

  1. Check if an optional input tensor was provided.
  2. If not, look for example_input_array.
  3. If neither is present, throw an error.

model.to_onnx(file_path: str, input: Optional[Tensor])
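A minimal sketch of that fallback logic, following the proposed signature (names are illustrative and the eventual implementation may differ):

from typing import Optional

import torch
from torch import Tensor

def to_onnx(self, file_path: str, input: Optional[Tensor] = None):
    # 1. prefer an explicitly provided input tensor
    if input is not None:
        input_data = input
    # 2. otherwise fall back to the user-defined example_input_array
    elif getattr(self, "example_input_array", None) is not None:
        input_data = self.example_input_array
    # 3. with neither available there is nothing to trace with
    else:
        raise ValueError("Provide an input tensor or set `self.example_input_array` to export to ONNX.")
    torch.onnx.export(self, input_data, file_path)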

@awaelchli
Contributor

I like it! 👍

@lezwon
Contributor

lezwon commented Jul 13, 2020

Cool. I'll get to work on it 😊

@justusschock
Member

@lezwon Can you please make sure to allow additional kwargs that are passed through to the ONNX export? These would come in handy for customisation :)
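For illustration, if the sketched signature above were extended with **kwargs forwarded to torch.onnx.export, a call might look like this (hypothetical usage):

# extra keyword arguments pass straight through to torch.onnx.export,
# e.g. to pin the opset version or control parameter export
model.to_onnx("model.onnx", opset_version=11, export_params=True)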

@lezwon
Contributor

lezwon commented Jul 13, 2020

Will do :)
