Achieve Faster Inference Speeds with Ultralytics YOLOv8 & Intel’s OpenVINO #8152
pderrenger
started this conversation in
Show and tell
-
If you are running on Intel hardware, OpenVINO is unbeatable in terms of speed for YOLOv8 models; it's the best choice for Intel CPU inference.
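For anyone wanting to try this, the export-and-run workflow can be sketched with the Ultralytics Python API as below. This is a minimal sketch, assuming the `ultralytics` and `openvino` packages are installed; the model name (`yolov8n.pt`) and image path (`bus.jpg`) are illustrative.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano model (downloads weights on first use).
model = YOLO("yolov8n.pt")

# Export to OpenVINO IR; writes a "yolov8n_openvino_model/" directory.
model.export(format="openvino")

# Ultralytics can load the exported IR directory directly, so inference
# from here on runs through the OpenVINO runtime on the Intel CPU.
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model("bus.jpg")
```

The same workflow is available via the `yolo export` / `yolo predict` CLI if you prefer not to write Python.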
-
Hello everyone! Check out my new GitHub repository for running YOLOv8 object detection and segmentation inference using only OpenVINO and NumPy. This implementation is faster than the Torch version, offering improved performance and efficiency. Visit the repository here: Faster Inference YOLOv8. Feedback and contributions are welcome. Thanks!
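A Torch-free pipeline like the one described needs NumPy replacements for the usual postprocessing steps. As an illustration (not code from the linked repository), here is a hypothetical pure-NumPy non-max suppression over `xyxy` boxes, the kind of helper such an implementation would rely on:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Pure-NumPy non-max suppression.

    boxes:  (N, 4) array of [x1, y1, x2, y2] corners
    scores: (N,) confidence scores
    Returns the indices of the boxes to keep, best score first.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of box i with each remaining box.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap box i too heavily; keep the rest.
        order = order[1:][iou <= iou_thresh]
    return keep
```

Vectorizing the IoU computation this way keeps the inner loop in NumPy, which is where most of the speed over a naive Python loop comes from.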
-
Ultralytics YOLOv8 and Intel's OpenVINO™ come together to triple inference speeds across CPUs and GPUs. Ideal for enhancing video analytics, smart cities, and retail, our latest blog provides a deep dive into leveraging this integration for optimal AI model performance.
🔍 Key Highlights:
Dive into our latest blog post for a step-by-step on harnessing the full potential of YOLOv8 with OpenVINO™!
Learn More 👉 https://www.ultralytics.com/blog/achieve-faster-inference-speeds-ultralytics-yolov8-openvino