@DSA101 👋 Hello! Thanks for asking about inference speed issues. YOLOv5 🚀 can be run on CPU.

**detect.py inference**

```shell
python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
```

**YOLOv5 PyTorch Hub inference**

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images

# Inference
results = model(imgs)
results.print()  # or .show(), .save()
# Speed: 631.5ms pre-process, 19.2ms inference, 1.6ms NMS per image at shape (2, 3, 640, 640)
```

**Increase Speeds**

If you would like to increase your inference speed, some options are:
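When comparing inference speeds before and after a change, it helps to time the call yourself rather than rely on a single run. This is a general-purpose sketch (not YOLOv5 code): `benchmark` is a hypothetical helper, and the stand-in workload would be replaced by your own model call, e.g. `lambda: model(imgs)`.

```python
import time
import statistics

def benchmark(fn, n_warmup=3, n_runs=10):
    """Return (mean_ms, stdev_ms) for calling fn, after warm-up runs."""
    for _ in range(n_warmup):  # warm-up: caches, lazy init, autotuning
        fn()
    times_ms = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()
        times_ms.append((time.perf_counter() - t0) * 1000)
    return statistics.mean(times_ms), statistics.stdev(times_ms)

# Stand-in CPU workload; swap in your model call to measure real inference
mean_ms, stdev_ms = benchmark(lambda: sum(i * i for i in range(100_000)))
print(f'{mean_ms:.1f} ms ± {stdev_ms:.1f} ms per run')
```

Warm-up runs matter because the first call often pays one-time costs (model loading, memory allocation) that would otherwise skew the average.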
Good luck 🍀 and let us know if you have any other questions!
I produced several versions of a model custom-trained from the yolov5m checkpoint, all at the same 640 image size. The model trained on 1K images seems to be about twice as fast as the model trained on 3K images. Is this expected, or am I missing something? I would imagine the inference speed should be the same, since the model architecture remains the same (I confirmed this from the logs). I noticed, however, that the faster model is 55MB on disk, while the slower one is 42MB.
Sorry for the noob questions, and thanks for any ideas.
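One possible (unconfirmed here) explanation for the size gap: training checkpoints can carry training-only state (e.g. optimizer or EMA data) alongside the weights until it is stripped, so two files with identical inference weights can differ on disk without any speed implication. A stdlib-only sketch of the idea, with made-up array sizes standing in for real tensors:

```python
import io
import pickle
import random

# Stand-in "checkpoint" contents (hypothetical, not YOLOv5 internals)
weights = [random.random() for _ in range(50_000)]          # inference weights
optimizer_state = [random.random() for _ in range(30_000)]  # training-only state

def serialized_size(obj):
    """Size in bytes of the pickled object, like a file on disk."""
    buf = io.BytesIO()
    pickle.dump(obj, buf)
    return buf.getbuffer().nbytes

full = serialized_size({'model': weights, 'optimizer': optimizer_state})
stripped = serialized_size({'model': weights, 'optimizer': None})
print(full > stripped)  # True: same weights either way, smaller file when stripped
```

The point of the sketch: file size reflects everything serialized, while inference speed depends only on the architecture actually executed.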