@stefp 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results. The red curve is overfitting.

Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.
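For orientation, here is a sketch of what a typical run directory contains; the exact filenames can vary between YOLOv5 versions, so treat this listing as illustrative rather than definitive:

```bash
# Inspect the artifacts of the most recent training run
# (default experiment directory assumed)
ls runs/train/exp
# results.png            train/val losses and P, R, mAP curves
# results.csv            the same metrics in tabular form
# confusion_matrix.png   per-class confusion matrix
# PR_curve.png           precision-recall curve
# labels.jpg             dataset label statistics
# train_batch0.jpg       training mosaics
# val_batch0_pred.jpg    validation predictions
# weights/               best.pt and last.pt checkpoints
```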
We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

## Dataset

Good results start with a good dataset: we recommend ≥ 1500 images per class and ≥ 10,000 instances (labelled objects) per class, images representative of your deployment environment, consistent and accurate labels on every instance in every image, and about 0-10% background images (images with no objects) to help reduce false positives.

## Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m; for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.
Start from pretrained weights (recommended for small to medium sized datasets). Pass the name of the model to the `--weights` argument:

```bash
# Pretrained weights download automatically from the latest YOLOv5 release
python train.py --data custom.yaml --weights yolov5s.pt
# other options: yolov5m.pt, yolov5l.pt, yolov5x.pt, custom_pretrained.pt
```
Start from scratch (recommended for very large datasets). Pass the model architecture yaml you are interested in, along with an empty `--weights ''` argument:

```bash
# Train from random initialization: empty --weights plus an architecture yaml
python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
# other options: yolov5m.yaml, yolov5l.yaml, yolov5x.yaml
```

## Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
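As a starting point, here is a hedged sketch of a baseline run with common settings made explicit; the values (300 epochs, 640-pixel images, batch size 16) are illustrative assumptions rather than prescriptions, and should be adjusted to your dataset and GPU memory:

```bash
# Baseline run with commonly used settings spelled out (all values illustrative)
python train.py --data custom.yaml --weights yolov5s.pt \
                --epochs 300 \
                --img 640 \
                --batch-size 16
```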
## Further Reading

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains.
---
Hi,
I am training a YOLOv5x to detect tree crowns from drone images and classify them into different health classes (see an example of the predictions below).
I have to say I am fairly satisfied with the obtained P (0.8), R (0.7) and mAP@.5 (0.76) against my validation set. The model also transfers well to new drone images, accounting for most sources of variation in image quality such as season, time of day (different illumination), or weather (clouds vs. sun).
The issue I am wondering about is why the val/obj_loss curve behaves as shown in the plot below. Could this be related to the fact that most of the images are of rather crowded scenes (i.e. many trees in the forest :) )? How else could this be explained? Any thoughts?