
When applying model.fuse(), TorchScript export fails. (How can model inference be accelerated when using TorchScript with libtorch in C++?) #827

Closed
silicon2006 opened this issue Aug 24, 2020 · 4 comments
Labels
question Further information is requested

Comments

@silicon2006

❔Question

Additional context
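For background on what `model.fuse()` does: it folds each BatchNorm layer into the preceding convolution, so the fused network computes the same outputs with fewer modules. A minimal pure-Python sketch of that folding for a single scalar channel (illustrative values, not taken from YOLOv5):

```python
import math

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into a conv weight/bias (one scalar channel)."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Illustrative values for one channel.
w, b = 0.5, 0.1
gamma, beta, mean, var = 1.2, -0.3, 0.05, 0.9

w_f, b_f = fuse_conv_bn(w, b, gamma, beta, mean, var)

# The fused layer matches conv followed by batch norm.
x = 2.0
conv_then_bn = (x * w + b - mean) * gamma / math.sqrt(var + 1e-5) + beta
fused = x * w_f + b_f
assert abs(conv_then_bn - fused) < 1e-9
```

Because fusion replaces Conv+BN pairs with a single conv, it reduces layer count and speeds up inference, which is why it is desirable to fuse before exporting to TorchScript.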

@silicon2006 silicon2006 added the question Further information is requested label Aug 24, 2020
@github-actions
Contributor

github-actions bot commented Aug 24, 2020

Hello @silicon2006, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher
Member

@silicon2006 thank you for raising this issue. Commit a8751e5 should fix it. Model fusing during export is now enabled and takes effect for all export destinations: TorchScript, ONNX, and CoreML.

Please git pull and try again.
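The fused export flow can be sketched as below. This is a minimal illustration using a toy module as a stand-in for YOLOv5 (the class name and file name here are assumptions, not from the repository); the key step is calling `torch.jit.trace` on the already-fused model in eval mode:

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Toy stand-in for a fused detection model."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

model = TinyConvNet().eval()  # a real model would call model.fuse() first
dummy = torch.zeros(1, 3, 64, 64)

# Trace to TorchScript and save; the .pt file can then be loaded
# from C++ with torch::jit::load in libtorch.
with torch.no_grad():
    ts = torch.jit.trace(model, dummy)
ts.save("model.torchscript.pt")

# Reload and check the traced module matches eager outputs.
loaded = torch.jit.load("model.torchscript.pt")
assert torch.allclose(loaded(dummy), model(dummy))
```

On the C++ side, the saved file is loaded with `torch::jit::load("model.torchscript.pt")` and run through the module's `forward` method; since fusion happens before tracing, the fused graph is what libtorch executes.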

@glenn-jocher
Member

@silicon2006 also FYI, you can use the Netron viewer to verify that the exported models are fused.

@silicon2006
Author

That's great. I solved this problem yesterday, but your solution is much better. Thanks!
