TorchScript export fails when applying model.fuse() (how can model inference be accelerated when using TorchScript via libtorch in C++?) #827
Comments
Hello @silicon2006, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you. If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:
For more information please visit https://www.ultralytics.com.
@silicon2006 thank you for raising this issue. Commit a8751e5 should fix this. Model fusing during export is now enabled and takes effect for all export destinations: TorchScript, ONNX, and CoreML. Please `git pull` and try again.
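For readers wondering why fusing helps inference speed: `fuse()` folds each BatchNorm layer's affine transform into the weights and bias of the preceding convolution, so a single layer replaces two at inference time. A minimal NumPy sketch of the fusion arithmetic (the function name `fuse_conv_bn` and the tiny 1x1-conv example are illustrative, not the Ultralytics implementation):

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(gamma, beta, mean, var) into a preceding conv's
    weights w (out_ch, ...) and bias b (out_ch,)."""
    scale = gamma / np.sqrt(var + eps)  # per-output-channel scale
    # Broadcast the per-channel scale across the remaining weight dims.
    w_fused = w * scale.reshape(-1, *([1] * (w.ndim - 1)))
    b_fused = beta + (b - mean) * scale
    return w_fused, b_fused

# Tiny example: 2 output channels, a 1x1 "conv" acting on 3 inputs.
rng = np.random.default_rng(0)
w = rng.standard_normal((2, 3))
b = rng.standard_normal(2)
gamma, beta = np.array([1.5, 0.5]), np.array([0.1, -0.2])
mean, var = np.array([0.3, -0.1]), np.array([1.2, 0.8])
x = rng.standard_normal(3)

# Unfused: conv followed by batchnorm.
y_ref = gamma * ((w @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
# Fused: one conv with rewritten weights, identical output.
wf, bf = fuse_conv_bn(w, b, gamma, beta, mean, var)
y_fused = wf @ x + bf
assert np.allclose(y_ref, y_fused)
```

Because the fused weights are precomputed once, every inference step saves the BatchNorm normalization work, which is also why the fused graph should be what gets serialized to TorchScript/ONNX/CoreML.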
@silicon2006 also FYI, you can use the Netron viewer to verify that the exported models are fused.
That's great. I solved this problem yesterday, but your solution is much better. Thanks!