How to get more FPS when using PyTorch Hub. #4004

Answered by glenn-jocher
shershunov asked this question in Q&A

@EkelviNistars 👋 Hello! Thanks for asking about improving inference speed. YOLOv5 🚀 can be run on CPU or GPU. When running inference on GPU, the steps you can take to speed it up are:

  • Increase the batch size by passing multiple images per call to PyTorch Hub models (see the sketch after this list)
  • Reduce --img-size
  • Reduce model size, i.e. from YOLOv5x -> YOLOv5l -> YOLOv5m -> YOLOv5s
  • Upgrade your hardware to a faster GPU
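
As a minimal sketch of the first three points, assuming the standard YOLOv5 PyTorch Hub API (the image paths below are placeholders):

```python
import torch

# Load a smaller, faster model variant (YOLOv5s instead of YOLOv5x)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Batched inference: pass a list of images so several frames are processed
# in a single forward pass ('img1.jpg' and 'img2.jpg' are placeholder paths)
imgs = ['img1.jpg', 'img2.jpg']

# Reduce the inference image size (default 640) to trade some accuracy for speed
results = model(imgs, size=320)
results.print()
```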

Answer selected by shershunov