
Properly expose batch_size from OpenVINO similarly to TensorRT #8514

Merged · 3 commits · Jul 7, 2022

Conversation

democat3457
Contributor

@democat3457 democat3457 commented Jul 7, 2022

This is the fixed version of #8437.

This PR exposes OpenVINO's network batch_size in DetectMultiBackend, mirroring how TensorRT's batch_size is already exposed. This gives a consistent, cross-backend way to check whether a model has a fixed, non-one batch size and what that value is.

Use case:
Current way to get a model's fixed batch size

```python
model = DetectMultiBackend(path, device=torch.device(device))
batch_size = model.batch_size if hasattr(model, "batch_size") else None
if model.xml:  # explicit handling of OpenVINO
    ...  # a lot of OpenVINO-specific logic
```

New way to get a model's fixed batch size

```python
model = DetectMultiBackend(path, device=torch.device(device))
batch_size = model.batch_size if hasattr(model, "batch_size") else None
# no explicit handling of OpenVINO needed; the code is much cleaner
```
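The pattern above can be exercised without any inference backend installed. The sketch below uses a stand-in class (`DummyModel` and `get_fixed_batch_size` are hypothetical names, not part of YOLOv5) to show how the `hasattr` fallback behaves for backends that do and do not expose a fixed batch size:

```python
class DummyModel:
    """Stand-in for DetectMultiBackend: batch_size is set only when fixed."""

    def __init__(self, fixed_bs=None):
        if fixed_bs is not None:
            self.batch_size = fixed_bs


def get_fixed_batch_size(model):
    # Works uniformly for TensorRT, OpenVINO, or any backend
    # that exposes a batch_size attribute when the batch is static.
    return model.batch_size if hasattr(model, "batch_size") else None


print(get_fixed_batch_size(DummyModel(8)))  # 8
print(get_fixed_batch_size(DummyModel()))   # None
```

Because the lookup is attribute-based, backends with dynamic batch dimensions simply never set the attribute and fall through to `None`.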

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhancements for OpenVINO model loading in YOLOv5.

📊 Key Changes

  • Added OpenVINO `Layout` and `get_batch` imports to model loading.
  • Set the parameter tensor's layout to "NCHW" when its layout is empty.
  • Read the model's batch dimension and, when it is static, expose its value as `batch_size`.
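The changes listed above can be sketched roughly as follows. This is a minimal illustration assuming the `openvino.runtime` API (`Core`, `Layout`, `get_batch`); the function name `openvino_batch_size` is hypothetical and the import is guarded so the sketch is inspectable without OpenVINO installed:

```python
# Hedged sketch of the described change, not the exact merged code.
try:
    from openvino.runtime import Core, Layout, get_batch
    HAVE_OPENVINO = True
except ImportError:
    HAVE_OPENVINO = False


def openvino_batch_size(xml_path):
    """Return the model's static batch size, or None if unavailable/dynamic."""
    if not HAVE_OPENVINO:
        return None
    core = Core()
    network = core.read_model(model=xml_path)
    # Default the input layout to NCHW when the parameter carries no layout,
    # so that get_batch() knows which dimension is the batch.
    if network.get_parameters()[0].get_layout().empty:
        network.get_parameters()[0].set_layout(Layout("NCHW"))
    batch = get_batch(network)
    return batch.get_length() if batch.is_static else None
```

A dynamic batch dimension reports `is_static == False`, in which case no fixed `batch_size` is exposed, matching the TensorRT behavior.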

🎯 Purpose & Impact

  • 🛠 Ensures compatibility of YOLOv5 with the latest OpenVINO API changes.
  • 🚀 Facilitates a smooth YOLOv5 model deployment using OpenVINO backend, potentially improving inference performance on Intel hardware.
  • 👥 Users deploying YOLOv5 models on OpenVINO will experience improved ease of use and robustness.

@democat3457 democat3457 changed the title Properly expose batch_size from OpenVINO Properly expose batch_size from OpenVINO similarly to TensorRT Jul 7, 2022
@glenn-jocher glenn-jocher merged commit be42a24 into ultralytics:master Jul 7, 2022
@democat3457 democat3457 deleted the patch-4 branch July 7, 2022 21:53
@glenn-jocher
Member

@democat3457 PR is merged. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐

Shivvrat pushed a commit to Shivvrat/epic-yolov5 that referenced this pull request Jul 12, 2022
…ralytics#8514)

Properly expose `batch_size` from OpenVINO

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this pull request Sep 8, 2022
…ralytics#8514)

Properly expose `batch_size` from OpenVINO

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>