Replies: 1 comment
@jmj23 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results. Most of the time, good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results there are steps you might take to improve, but we always recommend users first train with all default settings before considering any changes. This establishes a performance baseline and helps spot areas for improvement.

If you have questions about your training results, we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results, and dataset statistics images such as labels.png. All of these are located in your training run directory.

We've put together a full guide below for users looking to get the best results from their YOLOv5 trainings.

**Dataset**
**Model Selection**

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but they have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m; for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.
To train from pretrained weights:

```shell
python train.py --data custom.yaml --weights yolov5s.pt
# or: yolov5m.pt, yolov5l.pt, yolov5x.pt, custom_pretrained.pt
```
To train from scratch with a model architecture YAML:

```shell
python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
# or: yolov5m.yaml, yolov5l.yaml, yolov5x.yaml
```

**Training Settings**

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
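For intuition, the settings above are plain argparse flags. The toy parser below mirrors the flag names used in the commands in this guide; the defaults shown are illustrative assumptions, not a copy of YOLOv5's actual argparser:

```python
# Minimal sketch of a train.py-style argument parser.
# Flag names mirror the example commands above; defaults are illustrative.
import argparse

def make_parser():
    parser = argparse.ArgumentParser(description="toy train.py argument parser")
    parser.add_argument("--data", type=str, default="data.yaml",
                        help="dataset config YAML")
    parser.add_argument("--weights", type=str, default="yolov5s.pt",
                        help="initial weights path, or '' to train from scratch")
    parser.add_argument("--cfg", type=str, default="",
                        help="model architecture YAML (used with --weights '')")
    parser.add_argument("--epochs", type=int, default=300,
                        help="number of training epochs")
    return parser

if __name__ == "__main__":
    # Parse the same flags as the from-scratch example above.
    opt = make_parser().parse_args(
        ["--data", "custom.yaml", "--weights", "", "--cfg", "yolov5s.yaml"]
    )
    print(opt.data, opt.cfg)
```

Running `python train.py --help` against the real repository prints the authoritative list of flags and defaults.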
**Further Reading**

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains.
Hello,
I'm wondering if I could get some feedback on model training. I am training YOLOv5 models for object detection on medical images.

- Single class
- Dataset size: 300k images, 55% with >= 1 box
- Hyperparameters: hyp.scratch with HSV augmentation off (since the images are grayscale)
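For context, disabling HSV augmentation amounts to zeroing the `hsv_h`/`hsv_s`/`hsv_v` gains in the hyp.scratch YAML. A minimal sketch, where the dictionary stands in for the loaded YAML and the non-HSV values are illustrative:

```python
# Sketch: disable HSV colour augmentation by zeroing the hsv_* gains.
# This dict stands in for the loaded hyp.scratch YAML; values are illustrative.
hyp = {
    "lr0": 0.01,      # initial learning rate (illustrative)
    "hsv_h": 0.015,   # image HSV-Hue augmentation gain
    "hsv_s": 0.7,     # image HSV-Saturation augmentation gain
    "hsv_v": 0.4,     # image HSV-Value augmentation gain
}

def disable_hsv(hyp):
    """Return a copy of the hyperparameters with HSV augmentation off."""
    out = dict(hyp)
    for key in ("hsv_h", "hsv_s", "hsv_v"):
        out[key] = 0.0  # zero gain -> no HSV jitter, sensible for grayscale
    return out

print(disable_hsv(hyp))
```

A zero gain means no hue/saturation/value jitter is applied, which is a reasonable choice when every channel carries the same grayscale signal.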
I am getting solid performance from the small size model. However, when I go up to the medium or large size models, the training curves get chaotic and seem to underperform.
Below are some example training curves captured in W&B, labeled 's', 'm', or 'l' according to the model size. Some (in fact all) of the training runs crashed early, and I realize YOLOv5 should typically be trained for >100 epochs, but even at these early stages the behavior seems strange.
I was hoping to see if anyone has thoughts or quick fixes, short of running a full hyperparameter evolution, since that is quite an undertaking.
Happy to provide more detailed info as needed.
Thanks in advance!