YoloV5x-high prediction time #5703
Unanswered
revathib01
asked this question in
Q&A
Replies: 1 comment
-
@revathib01 👋 Hello! Thanks for asking about inference speed issues. YOLOv5 🚀 can be run on CPU or, when available, on GPU.

**detect.py inference**

```bash
python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
```

**YOLOv5 PyTorch Hub inference**

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images

# Inference
results = model(imgs)
results.print()  # or .show(), .save()
# Speed: 631.5ms pre-process, 19.2ms inference, 1.6ms NMS per image at shape (2, 3, 640, 640)
```

**Increase Speeds**

If you would like to increase your inference speed, some options are:
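Before changing anything, it helps to measure per-image latency on your own hardware so you can compare options fairly. Below is a minimal timing sketch; the `dummy` callable is a hypothetical stand-in for illustration, to be replaced with the real `model(imgs)` call:

```python
import time

def benchmark(infer, imgs, warmup=3, runs=10):
    """Time an inference callable and return mean milliseconds per image."""
    for _ in range(warmup):      # warm-up iterations (caches, lazy init)
        infer(imgs)
    t0 = time.perf_counter()
    for _ in range(runs):
        infer(imgs)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    return elapsed_ms / (runs * len(imgs))

# Hypothetical stand-in; swap in the real YOLOv5 call, e.g. lambda x: model(x)
dummy = lambda imgs: [len(i) for i in imgs]
per_image_ms = benchmark(dummy, ['zidane.jpg', 'bus.jpg'])
print(f'{per_image_ms:.3f} ms/image')
```

Averaging over several runs after a warm-up phase avoids counting one-time startup costs (model loading, first-call initialization) against steady-state inference speed.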
-
Hi,
I have trained on my custom dataset of 300 images using all four variants of YOLOv5 at image size 640 in Google Colab, and selected YOLOv5x based on its initial prediction accuracy. We tested the YOLOv5x model's accuracy in both Google Colab and on a local PC running Ubuntu, at image sizes 640 and 320. Prediction time on the local PC is high. Below is the comparison between Google Colab and the local PC.
I want to deploy my final model on a single-board computer, but as you can see from the table above, prediction time is high. Based on model accuracy we would like to go with YOLOv5x. From the table we found we may need specific hardware to run the model so that prediction takes between 100 and 200 ms.
Can you please let us know the hardware requirements that would meet this target with the YOLOv5x model (image size 640)?
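When weighing the 100–200 ms budget against image size, a rough heuristic (an assumption for back-of-envelope planning, not a measured result) is that inference cost scales with pixel count, i.e. with the square of the image side length:

```python
def scaled_latency_ms(base_ms: float, base_img: int, target_img: int) -> float:
    """Rough estimate: latency scales with pixel count (side length squared).
    Heuristic only; real speedups depend on hardware and model."""
    return base_ms * (target_img / base_img) ** 2

# e.g. if 640px inference takes 400 ms, 320px should take roughly:
print(scaled_latency_ms(400.0, 640, 320))  # -> 100.0
```

So halving `--img` from 640 to 320 can cut compute by roughly 4x, which is why image size is usually the first knob to try before changing hardware.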