
GPU RAM requirements for training #5777

Answered by glenn-jocher
DSA101 asked this question in Q&A

@DSA101 👋 Hello! Thanks for asking about CUDA memory issues. YOLOv5 🚀 can be trained on CPU, single-GPU, or multi-GPU. When training on GPU it is important to keep your batch size small enough that you do not use all of your GPU memory; otherwise you will see a CUDA Out Of Memory (OOM) error and your training will crash. You can observe your CUDA memory utilization with the nvidia-smi command or in your console output.
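
For example, a minimal way to watch GPU memory from a terminal while training runs. The --query-gpu fields and the one-second refresh interval below are illustrative choices, not part of the original answer:

```bash
# Report used vs. total GPU memory once per second (Ctrl+C to stop).
# The query fields and loop interval are illustrative choices.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
```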

CUDA Out of Memory Solutions

If you encounter a CUDA OOM error, the steps you can take to reduce your memory usage are (an example command follows the list):

  • Reduce --batch-size
  • Reduce --img-size
  • Reduce model size, e.g. from YOLOv5x -> YOLOv5l -> YOLOv5m -> YOLOv5s
  • Train with multi-GPU at the …
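
As a hedged illustration (not quoted from the answer above), here is a train.py invocation that applies the first three suggestions: a smaller model (YOLOv5s), a smaller image size, and a smaller batch size. The dataset yaml and epoch count are placeholders, and flag spellings can vary between YOLOv5 versions, so confirm them with python train.py --help:

```bash
# Illustrative only: smaller model, smaller image size, smaller batch size.
# coco128.yaml and --epochs 100 are placeholder choices; flag names may
# differ by YOLOv5 version (check `python train.py --help`).
python train.py --data coco128.yaml --weights yolov5s.pt --img-size 416 --batch-size 8 --epochs 100
```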

Answer selected by DSA101