TensorRT --dynamic fix #9691

Merged: 3 commits into master from glenn-jocher-patch-2 on Oct 4, 2022
Conversation

glenn-jocher (Member) commented on Oct 4, 2022

May resolve #9688

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

πŸ› οΈ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Improved ONNX export compatibility with TensorRT versions 7 and 8.

📊 Key Changes

  • Removed the hardcoded False argument in the call to export_onnx; the dynamic flag is now passed through directly (see the sketch below).
  • Fixed indentation and formatting in the dynamic-flag handling block.
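
Since the summary describes the change only in prose, here is a minimal, hedged sketch of its effect. The names export_onnx and export_engine mirror functions in YOLOv5's export.py, but the signatures and bodies below are simplified stand-ins for illustration, not the actual repository code:

```python
# Self-contained sketch (illustrative stand-ins, not the YOLOv5 source).

def export_onnx(model, im, file, opset, dynamic, simplify):
    """Stand-in for the ONNX export step; records whether dynamic axes were requested."""
    print(f"ONNX export: opset={opset}, dynamic={dynamic}, simplify={simplify}")
    return str(file).replace(".pt", ".onnx")


def export_engine(model, im, file, dynamic, simplify):
    """Stand-in for the TensorRT export step, which first exports an ONNX graph."""
    # Before the fix (illustrative): export_onnx(model, im, file, 12, False, dynamic, simplify)
    # The hardcoded False meant the intermediate ONNX graph always had fixed
    # input shapes, so the user's --dynamic request never reached the engine build.

    # After the fix: the dynamic flag is forwarded, so the ONNX graph keeps
    # dynamic axes and TensorRT can build an engine with a dynamic batch dimension.
    onnx_file = export_onnx(model, im, file, 12, dynamic, simplify)
    return onnx_file


if __name__ == "__main__":
    export_engine(model=None, im=None, file="yolov5s.pt", dynamic=True, simplify=False)
```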

🎯 Purpose & Impact

  • 🛠️ The update improves compatibility with different TensorRT versions (7 and 8) when exporting YOLOv5 models to ONNX format.
  • ⚙️ Removing the hardcoded False argument lets the dynamic flag be passed through correctly, enabling dynamic batching in the exported engine.
  • 📈 This could make model deployment more efficient for users who run inference with TensorRT, potentially improving performance and resource utilization.
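
As a usage note (assuming the standard YOLOv5 export interface; exact flags may vary by version), a dynamic TensorRT export would be invoked along the lines of `python export.py --weights yolov5s.pt --include engine --dynamic --device 0`, and the resulting engine can then typically serve varying batch sizes up to the maximum shape profiled at build time.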

glenn-jocher changed the title to TensorRT --dynamic fix on Oct 4, 2022
glenn-jocher merged commit e4398cf into master on Oct 4, 2022
glenn-jocher deleted the glenn-jocher-patch-2 branch on Oct 4, 2022 at 14:32
Development

Successfully merging this pull request may close these issues.

Exporting TensorRT (engine) with dynamic batch size failing (#9688)