RuntimeError: Exporting the operator hardswish to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator. #831
@jinfagang hey there. I've been experimenting with export recently. This custom Hardswish() class provides alternatives to the PyTorch nn.Hardswish() class. The first line is best for CoreML export; the second is best for ONNX export. In both cases you need to replace the existing nn.Hardswish() modules with the custom version, with something like this:

```python
import models
import utils

# Update model
for k, m in model.named_modules():
    m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
    if isinstance(m, models.common.Conv):
        m.act = utils.activations.Hardswish()  # assign activation
```

If you are training a model from scratch (intended for export), however, I would recommend simply using nn.LeakyReLU(0.1) here to avoid all this hassle: Lines 20 to 27 in a8751e5
@glenn-jocher I got better performance with the v3.0 architecture. I'm not sure whether the benefit comes from the new activation or the new FPN connection behaviour. I am training a new model with the custom Hardswish; can it be exported this way?
@jinfagang yes, you can definitely export a v3.0 model this way, including all Hardswish activations. This exports correctly to all 3 formats (TorchScript, ONNX and CoreML). I'm thinking of updating export.py to handle this automatically, BTW; I might do that today to fully address this. I should warn you, though, that some export destinations are not hardswish-optimized, since the op is very new. PyTorch speed is unaffected, but CoreML speeds are much slower for now, until Apple does the work on their side to optimize these ops. BTW, export.py has been updated for automatic model fusion at the beginning, see #827. Previously I only had this working for ONNX, but now it's also applied to TorchScript and CoreML. This should result in a significant layer-count reduction in exported models.
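To make the replacement step concrete, here is a self-contained sketch using a toy stand-in for models.common.Conv (the Conv class, channel arguments, and ExportHardswish name here are illustrative, not the repo's actual definitions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv(nn.Module):
    """Toy stand-in for models.common.Conv (hypothetical)."""
    def __init__(self, c1, c2):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, 1)
        self.act = nn.Hardswish()

    def forward(self, x):
        return self.act(self.conv(x))

class ExportHardswish(nn.Module):
    """ONNX-friendly hardswish built from widely supported ops."""
    @staticmethod
    def forward(x):
        return x * F.hardtanh(x + 3.0, 0.0, 6.0) / 6.0

model = nn.Sequential(Conv(3, 8), Conv(8, 8))

# Swap every Conv's activation for the export-friendly version
for m in model.modules():
    if isinstance(m, Conv):
        m.act = ExportHardswish()
```

The same pattern (iterate modules, test with isinstance, reassign the `.act` attribute) is what the snippet earlier in the thread applies to a real YOLOv5 model.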
TODO: Automatic nn.Hardswish() replacement with utils.activations.Hardswish() for v3.0 exports.
Commit 4d7f222 updates export.py for full v3.0 Hardswish compatibility. Please git pull and try again, and let me know if you run into any more problems. Current v3.0 export command:

```shell
$ export PYTHONPATH="$PWD" && python models/export.py --weights ./yolov5s.pt --img 640 --batch 1
```

Output:

```
Namespace(batch_size=1, img_size=[640, 640], weights='./yolov5s.pt')
Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5s.pt to ./yolov5s.pt...

Fusing layers...
Model Summary: 140 layers, 7.45958e+06 parameters, 6.61683e+06 gradients, 17.5 GFLOPS

Starting TorchScript export with torch 1.6.0...
...
TorchScript export success, saved as ./yolov5s.torchscript.pt

Starting ONNX export with onnx 1.7.0...
ONNX export success, saved as ./yolov5s.onnx

Starting CoreML export with coremltools 4.0b3...
Running MIL optimization passes: 100%|██████████| 16/16 [00:00<00:00, 16.12 passes/s]
...
Translating MIL ==> MLModel Ops: 100%|██████████| 1077/1077 [00:00<00:00, 1092.24 ops/s]
CoreML export success, saved as ./yolov5s.mlmodel

Export complete. Visualize with https://github.com/lutzroeder/netron.
Process finished with exit code 0
```
Update: 4fb8cb3 adds backwards compatibility and robustness for earlier (or custom) models with alternate activation strategies.
I changed Hardswish to the exportable version, but this error still occurred. (I have changed Conv in common.py.)