yolov5 FPS boost #10496

Closed · 1 task done
ozicmoi opened this issue Dec 14, 2022 · 3 comments
Labels: question (Further information is requested), Stale

Comments
ozicmoi commented Dec 14, 2022

Search before asking

Question

Hello there. I trained my own custom dataset with yolov5s and built a license plate recognition system, served through Python's Flask web framework. I get 5 FPS from a USB camera, but only about 1 FPS when I run the project on an IP webcam stream or on MP4 files. Can you help me increase the FPS?

Thank you @jkocherhans @adrianholovaty @cgerum @farleylai @glenn-jocher @Nioolek

Video of the project running: https://vimeo.com/781019379

Additional

No response

ozicmoi added the question (Further information is requested) label on Dec 14, 2022
@JustasBart

Hi, I'm not sure what your setup/requirements are, etc.

But I always run my inference in C++ using OpenCV's dnn module built with CUDA and cuDNN.
In my case I can easily outrun a 50 FPS 1080p camera even with YOLOv5l6 at 1280x1280, so it's plenty fast.

I'm using an NVIDIA Quadro RTX 4000.

Perhaps one option for you might be an NCS2? Again, it's hard to make a recommendation without knowing the full context.

Hope any of this helps you at all, good luck! 🚀
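
For reference, here is a minimal Python sketch of the same idea (OpenCV's dnn module with the CUDA backend), not JustasBart's exact pipeline; it assumes a yolov5s.onnx exported with YOLOv5's export.py and an OpenCV build compiled with CUDA/cuDNN, and it omits the decoding/NMS step a full pipeline needs:

import cv2

# Load a YOLOv5 model exported to ONNX (e.g. python export.py --weights yolov5s.pt --include onnx)
net = cv2.dnn.readNetFromONNX('yolov5s.onnx')

# Ask OpenCV to run the network on the GPU (requires an OpenCV build with CUDA/cuDNN)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)  # or DNN_TARGET_CUDA for FP32

frame = cv2.imread('bus.jpg')
# Resize to the export resolution (640x640 assumed here) and scale pixels to [0, 1]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (640, 640), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward()  # raw predictions; confidence filtering and NMS still required
print(outputs.shape)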

glenn-jocher commented Dec 17, 2022

@ozicmoi 👋 Hello! Thanks for asking about inference speed issues. PyTorch Hub speeds will vary by hardware, software, model, inference settings, etc. Our default example in Colab with a V100 looks like this:

[screenshot: example PyTorch Hub inference speeds on a Colab V100]

YOLOv5 🚀 can be run on CPU (i.e. --device cpu, slow) or GPU if available (i.e. --device 0, faster). You can determine your inference device by viewing the YOLOv5 console output:

detect.py inference

python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/

[screenshot: detect.py console output showing the inference device and per-image speeds]

YOLOv5 PyTorch Hub inference

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images

# Inference
results = model(imgs)
results.print()  # or .show(), .save()
# Speed: 631.5ms pre-process, 19.2ms inference, 1.6ms NMS per image at shape (2, 3, 640, 640)
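
If the console output above shows the model running on CPU, here is a hedged sketch of moving the Hub model onto a GPU (it assumes CUDA is available on your machine; the device keyword should be accepted by the YOLOv5 hubconf, and calling model.to('cuda') after loading is an alternative):

import torch

# Pick the GPU if one is visible, otherwise fall back to CPU
device = 0 if torch.cuda.is_available() else 'cpu'

# The device keyword is passed through to YOLOv5's hubconf (an assumption about
# the Hub interface; model.to('cuda') after loading also works)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=device)

results = model('https://ultralytics.com/images/zidane.jpg')
results.print()  # the reported inference time should drop sharply on GPU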

Increase Speeds

If you would like to increase your inference speed, some options are (a hedged sketch combining several of these appears after the list):

  • Use batched inference with YOLOv5 PyTorch Hub
  • Reduce --img-size, i.e. 1280 -> 640 -> 320
  • Reduce model size, i.e. YOLOv5x -> YOLOv5l -> YOLOv5m -> YOLOv5s -> YOLOv5n
  • Use half precision FP16 inference with python detect.py --half and python val.py --half
  • Use a faster GPU, i.e. P100 -> V100 -> A100
  • Export to ONNX or OpenVINO for up to 3x CPU speedup (CPU Benchmarks)
  • Export to TensorRT for up to 5x GPU speedup (GPU Benchmarks)
  • Use a free GPU backend with up to 16GB of CUDA memory (Google Colab or Kaggle notebooks)
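
As referenced above, a hedged sketch combining a few of these options with the same PyTorch Hub model: batched inference, a reduced inference size, and FP16 on GPU. Exact speeds will depend on your hardware, and the export command in the comment is only an illustration of YOLOv5's export.py usage:

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
if torch.cuda.is_available():
    model = model.to('cuda').half()  # FP16 is only worthwhile on GPU

dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batched inference

results = model(imgs, size=320)  # smaller inference size, i.e. 640 -> 320
results.print()

# For the ONNX/OpenVINO/TensorRT options, YOLOv5 ships an export script, e.g.
#   python export.py --weights yolov5s.pt --include onnx engine
# (available formats and flags depend on your YOLOv5 version)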

Good luck 🍀 and let us know if you have any other questions!


github-actions bot commented Jan 17, 2023

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull requests (PRs) are also always welcome!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label on Jan 17, 2023
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 27, 2023