
RuntimeError: Exporting the operator hardswish to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator. #831

Closed
lucasjinreal opened this issue Aug 24, 2020 · 7 comments
Labels
bug Something isn't working Stale

Comments

@lucasjinreal

I changed Hardswish to an exportable form:

    import torch.nn as nn
    import torch.nn.functional as F

    class Hardswish(nn.Module):  # export-friendly alternative to nn.Hardswish()
        @staticmethod
        def forward(x):
            # return x * F.hardsigmoid(x)
            return x * F.hardtanh(x + 3, 0., 6.) / 6.

but this error still occurs. (I have already changed Conv in common.py.)
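
For reference, the hardtanh form above is mathematically identical to the canonical hardswish definition x * relu6(x + 3) / 6; a quick scalar sanity check (plain Python, no PyTorch required, helper names are illustrative only):

```python
def hardtanh(x, lo=0.0, hi=6.0):
    # clamp x into [lo, hi], mirroring F.hardtanh(x, lo, hi) for a scalar
    return max(lo, min(hi, x))

def hardswish_export(x):
    # the ONNX-friendly form used above: x * hardtanh(x + 3, 0, 6) / 6
    return x * hardtanh(x + 3.0) / 6.0

def hardswish_reference(x):
    # canonical definition: x * relu6(x + 3) / 6
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

# both forms agree across the saturation regions and the linear middle
for v in (-4.0, -3.0, -1.5, 0.0, 1.0, 2.9, 3.0, 5.0):
    assert abs(hardswish_export(v) - hardswish_reference(v)) < 1e-12
```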

@lucasjinreal lucasjinreal added the bug Something isn't working label Aug 24, 2020
@glenn-jocher
Member

glenn-jocher commented Aug 25, 2020

@jinfagang hey there. I've been experimenting with export recently. This custom Hardswish() class provides an export-friendly alternative to the PyTorch nn.Hardswish() class.

The first return line (hardsigmoid) is best for CoreML export; the second (hardtanh) is best for ONNX export. In either case you need to replace the existing nn.Hardswish() modules with this custom version, with something like this:

    import models
    import utils

    # Update model
    for k, m in model.named_modules():
        m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
        if isinstance(m, models.common.Conv):
            m.act = utils.activations.Hardswish()  # assign activation
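
The same replacement pattern can be seen in a self-contained sketch; the simplified Conv below is a stand-in for models.common.Conv, not the real yolov5 class:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Hardswish(nn.Module):  # export-friendly alternative to nn.Hardswish()
    @staticmethod
    def forward(x):
        return x * F.hardtanh(x + 3, 0., 6.) / 6.

class Conv(nn.Module):  # simplified stand-in for models.common.Conv
    def __init__(self, c1, c2):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, 1, bias=False)
        self.act = nn.Hardswish()

    def forward(self, x):
        return self.act(self.conv(x))

model = nn.Sequential(Conv(3, 8), Conv(8, 8))

# walk all submodules and swap in the export-friendly activation
for k, m in model.named_modules():
    if isinstance(m, Conv):
        m.act = Hardswish()

y = model(torch.zeros(1, 3, 4, 4))  # forward pass still works after the swap
```

The swap is safe because both activations are parameter-free, so no state_dict surgery is needed.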

If you are training a model from scratch, however (intended for export), I would recommend simply using nn.LeakyReLU(0.1) here to avoid all this hassle:

yolov5/models/common.py

Lines 20 to 27 in a8751e5

    class Conv(nn.Module):
        # Standard convolution
        def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
            super(Conv, self).__init__()
            self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
            self.bn = nn.BatchNorm2d(c2)
            self.act = nn.Hardswish() if act else nn.Identity()
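
A minimal runnable sketch of that recommendation, with nn.LeakyReLU(0.1) baked in at construction time (the autopad helper here is a simplified version of the one in models/common.py):

```python
import torch
import torch.nn as nn

def autopad(k, p=None):
    # simplified 'same'-padding helper, assuming an int kernel size
    return k // 2 if p is None else p

class Conv(nn.Module):
    # Standard convolution, with an export-friendly default activation
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        # nn.LeakyReLU(0.1) exports cleanly, so no activation swap is needed later
        self.act = nn.LeakyReLU(0.1) if act else nn.Identity()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

y = Conv(3, 16, k=3, s=2)(torch.zeros(1, 3, 8, 8))  # stride-2 conv halves spatial dims
```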

@lucasjinreal
Author

@glenn-jocher I got better performance with the v3.0 architecture; I'm not sure whether the benefit comes from the new activation or the new FPN connection behaviour. I am training a new model with the custom Hardswish; can it be exported this way?

@glenn-jocher
Member

@jinfagang yes, you can definitely export a v3.0 model this way, including all Hardswish activations. This exports correctly to all 3 formats (TorchScript, ONNX and CoreML). I'm thinking of updating export.py to handle this automatically BTW; I might do that today to fully address this.

I should warn you though that some export destinations are not hardswish-optimized, since the op is very new. PyTorch speed is unaffected, but CoreML speeds are much slower for now, until Apple does the work on their side to optimize these ops.

BTW, export.py has been updated now for automatic model fusion at the beginning, see #827. Previously I only had this working for ONNX, but now it's also applied to TorchScript and CoreML. This should result in a significant layer-count reduction in exported models.
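
Conv+BatchNorm fusion folds the BN statistics into the convolution weights and bias, removing the BN layer at inference time. A minimal sketch using PyTorch's built-in fusion helper (not the yolov5 fuse() code itself):

```python
import torch
from torch.nn.utils.fusion import fuse_conv_bn_eval

# a conv followed by batchnorm, both in eval mode (fusion is inference-only)
conv = torch.nn.Conv2d(3, 8, 3, bias=False).eval()
bn = torch.nn.BatchNorm2d(8).eval()

# returns a single Conv2d whose weights and bias absorb the BN transform
fused = fuse_conv_bn_eval(conv, bn)

x = torch.randn(1, 3, 8, 8)
# the fused conv reproduces the original conv -> bn output
assert torch.allclose(bn(conv(x)), fused(x), atol=1e-6)
```

This is why fused exports show fewer layers: each Conv+BN pair collapses into one op.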

@glenn-jocher
Member

TODO: Automatic nn.Hardswish() replacement with utils.activations.Hardswish() for v3.0 exports.

@glenn-jocher
Member

glenn-jocher commented Aug 25, 2020

Commit 4d7f222 updates export.py for full v3.0 hardswish compatibility. Please git pull and try again, and let me know if you run into any more problems.

Current v3.0 export command:

$ export PYTHONPATH="$PWD" && python models/export.py --weights ./yolov5s.pt --img 640 --batch 1

Output:

Namespace(batch_size=1, img_size=[640, 640], weights='./yolov5s.pt')
Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5s.pt to ./yolov5s.pt...

Fusing layers... 
Model Summary: 140 layers, 7.45958e+06 parameters, 6.61683e+06 gradients, 17.5 GFLOPS

Starting TorchScript export with torch 1.6.0...
...
TorchScript export success, saved as ./yolov5s.torchscript.pt

Starting ONNX export with onnx 1.7.0...
ONNX export success, saved as ./yolov5s.onnx

Starting CoreML export with coremltools 4.0b3...
Running MIL optimization passes: 100%|██████████| 16/16 [00:00<00:00, 16.12 passes/s]
...
Translating MIL ==> MLModel Ops: 100%|██████████| 1077/1077 [00:00<00:00, 1092.24 ops/s]
CoreML export success, saved as ./yolov5s.mlmodel

Export complete. Visualize with https://github.com/lutzroeder/netron.

Process finished with exit code 0

@glenn-jocher
Member

Update: 4fb8cb3 adds backwards compatibility and robustness for earlier (or custom) models with alternate activation strategies.

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
