
Sudden performance decrease in training #5721

Closed · 1 task done
gboeer opened this issue Nov 19, 2021 · 20 comments
Labels: question (Further information is requested), Stale

Comments

gboeer commented Nov 19, 2021

Search before asking

Question

Hi,
first, thanks for the great YOLO implementation you provide.

In my recent training I noticed some behavior I haven't seen before. The loss was decreasing nicely for many epochs and the performance metrics were increasing accordingly. Then the performance suddenly dropped by a large margin.
I suspected an issue with the adaptive learning rate; however, it is decreasing as expected.
I'm pretty satisfied with the performance of the best model, but I'm curious whether somebody can suggest other aspects of the training I could look into to debug this behavior.

I'm using the yolov5l6.pt model with pretrained weights and train on a custom dataset.

Additional

[attached image: grafik]

gboeer added the "question" label Nov 19, 2021
github-actions bot commented Nov 19, 2021

👋 Hello @legor, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

gboeer changed the title from "Sudden performance decrease in trainig" to "Sudden performance decrease in training" Nov 19, 2021
@glenn-jocher (Member)

@legor that's an odd result. This may be due to a sudden spike in a loss component, perhaps caused by something in your dataset combining in an odd way during a particular augmentation. I suspect the reproducibility is near zero, however (if you retrain, do you get the same drop?), so this would be difficult to debug.

@Zengyf-CVer (Contributor)

@legor
I have encountered your problem many times, so I can share some of my experience.
First of all, this sudden drop in AP is usually a problem with custom datasets. The following is my solution:

  1. Check for images whose objects do not participate in training but which still have labels. Such images need to be converted into negative samples, i.e. their label .txt files should be empty.
  2. Confirm that every instance is accounted for; instances that do not participate in training should be treated as negative samples.

gboeer (Author) commented Nov 22, 2021

Hi, thanks for your comments.
@glenn-jocher I can't confirm yet whether this happens again in an identical training run; I will start a new run in time. In fact, I have trained several YOLO models on the same data before and never encountered this behavior until now.

@Zengyf-CVer I do have several negative samples in the dataset as well. To my understanding, those samples simply do not need an annotation file supplied with them. Hence, for negative samples I just put the respective image in the image folder, with no (empty) text file for the annotations. I couldn't quite understand your second point. What do you mean by the instances having to be allocated? Do you mean loaded into memory, and if so, why would unallocated images be handled as negatives?

Edit: It seems to me that maybe you meant "annotated" rather than "allocated"? That would make more sense to me ;)

@glenn-jocher (Member)

@legor yes, for background images you can simply place the images in your images directories; no label files are necessary.

@Zengyf-CVer (Contributor)

@legor
For a simple example: suppose your dataset has 30 categories but you only train on 20 of them. Instances of the remaining 10 categories will still appear in some images. If an image contains only those unused categories and none of the 20 classes you train on, you should set that image as a negative sample, i.e. give it an empty label file.
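
A minimal sketch of that cleanup step, assuming YOLO-format label files (one "class x_center y_center width height" row per object); the KEEP_CLASSES set and the labels directory are placeholders you would adapt to your dataset:

from pathlib import Path

# Hypothetical set of class indices that actually participate in training.
KEEP_CLASSES = {0, 1, 2}
labels_dir = Path("dataset/labels/train")  # placeholder path to YOLO-format labels

for label_file in labels_dir.glob("*.txt"):
    rows = [r for r in label_file.read_text().splitlines() if r.strip()]
    kept = [r for r in rows if int(r.split()[0]) in KEEP_CLASSES]
    if kept:
        label_file.write_text("\n".join(kept) + "\n")
    else:
        # Image contains only unused classes: turn it into a negative sample
        # by leaving the label file empty.
        label_file.write_text("")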

github-actions bot commented Dec 23, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Access additional YOLOv5 🚀 resources:

Access additional Ultralytics ⚡ resources:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

realgump commented Jul 8, 2022

Hi, so how did you finally fix this issue? I have run into the same annoying problem during training. The mAP reaches 0.9666 at epoch 96 but then suddenly drops to 0. I use single-class training, so I should not need to consider categories that don't participate in the training. I have used the same method to generate my custom dataset many times, and everything has gone well except this time.
[attached image]

@glenn-jocher (Member)

@realgump is this reproducible? If you train again does the same thing happen?

I suspect it may be a dataset issue, as I've not seen this on any of the official datasets.

@sonovice

Just encountered a similar behaviour. I am training with custom data (15k images) that contains many tiny objects. For unknown reasons, all metrics drop significantly around epoch 25 and do not fully recover, even after training for 100 more epochs.

[attached image]

This is my training command utilizing GPUs (4x RTX 2080 Ti):

python -m torch.distributed.launch --nproc_per_node 4 train.py \
    --data dataset/dataset.yaml \
    --cfg yolov5l6.yaml \
    --weights "" \
    --img 1280 \
    --hyp datasets/hyper.yaml \
    --save-period 10 \
    --epochs 500 \
    --batch-size 12 \
    --device 0,1,2,3 \
    --name yolo_2022-07-15_DDP_seed_10 \
    --seed 10

The first two training runs had identical configurations, so I changed the random seed for the third (blue) run to exclude the possibility of very unfortunate combinations of images and augmentations at the same step in training. It helped slightly, but the performance is still worse than before.

Augmentation in general is limited to only scale (0.4) and translation (0.3), but due to heavy class imbalance I opted for fl_gamma = 1.0.

Since these runs seem to be more reproducible than the ones above, they might give a hint as to where to start looking for the reasons for such a performance drop? (Unfortunately, I cannot share the dataset for legal reasons, but I will happily reproduce the runs with any helpful suggestions.)

@glenn-jocher (Member)

@sonovice can you try training with the gradient clipping PR here?
#8598
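
For context, a minimal self-contained sketch of what gradient clipping looks like in a standard PyTorch AMP-style training step (this is not the PR's actual code; the toy model, optimizer and max_norm value are placeholders):

import torch
from torch import nn

# Toy stand-ins; in YOLOv5 these would be the detection model, its optimizer and AMP scaler.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=False)  # set enabled=True when training on GPU

x, y = torch.randn(8, 10), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)

scaler.scale(loss).backward()
scaler.unscale_(optimizer)  # unscale so the clip threshold applies to the true gradient magnitudes
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)  # assumed threshold
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()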

sonovice commented Jul 18, 2022

@glenn-jocher Unfortunately it did not help:
[attached image: 20220718_064605.jpg]

@glenn-jocher (Member)

@sonovice hmm, really strange. There might be something wrong with your dataset in that case, especially since we don't see anything similar on other datasets, e.g. COCO, VOC, Objects365 etc.

@sonovice

@glenn-jocher I don't want to rule that out, but it's a bit surprising that it works for many epochs up to the point of failure. Could the implementation of Focal Loss and the imbalanced dataset play a role in this? Or the high count of tiny objects?

I have checked examples of all classes visually with fiftyone and did not notice any errors. The dataset is generated artificially, so the possibility of erroneous annotations is rather slim.

Are there any internals that would be worth logging to get a better understanding of this problem?

@glenn-jocher (Member)

@sonovice focal loss is not recommended. And of course I can't speak to performance with anything other than the default hyperparameters. Once you start playing with those you are on your own.

@sonovice

@glenn-jocher Thank you. I will try configurations without focal loss and also one with no augmentation at all. Will post again when results arrive.

If focal loss is not recommended, are there any other ways to fix class imbalance other than smarter image sampling?
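
For reference, one generic way to address class imbalance other than sampling is to reweight the classification loss per class. A minimal PyTorch sketch with made-up counts (this is not YOLOv5's loss setup; YOLOv5 only exposes a scalar cls_pw positive-weight gain in its hyperparameters):

import torch
from torch import nn

# Made-up per-class instance counts for a 3-class dataset.
class_counts = torch.tensor([5000.0, 300.0, 50.0])

# Inverse-frequency positive weights: the rarest class gets the largest weight.
pos_weight = class_counts.max() / class_counts

# BCEWithLogitsLoss accepts a per-class pos_weight vector, so rare classes
# contribute more to the classification loss.
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 3)                      # dummy predictions
targets = torch.randint(0, 2, (8, 3)).float()   # dummy multi-label targets
loss = criterion(logits, targets)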

glenn-jocher (Member) commented Jul 19, 2022

@sonovice class imbalance is present in every dataset, and default training already performs well on those datasets despite it. I would simply review the Tips for Best Training Results tutorial below and ensure your dataset statistics are in alignment with its recommendations.

Tips for Best Training Results

Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

  • Images per class. ≥ 1500 images per class recommended
  • Instances per class. ≥ 10000 instances (labeled objects) per class recommended
  • Image variety. Must be representative of deployed environment. For real-world use cases we recommend images from different times of day, different seasons, different weather, different lighting, different angles, different sources (scraped online, collected locally, different cameras) etc.
  • Label consistency. All instances of all classes in all images must be labelled. Partial labelling will not work.
  • Label accuracy. Labels must closely enclose each object. No space should exist between an object and its bounding box. No objects should be missing a label.
  • Label verification. View train_batch*.jpg on train start to verify your labels appear correct, i.e. see example mosaic.
  • Background images. Background images are images with no objects that are added to a dataset to reduce False Positives (FP). We recommend about 0-10% background images to help reduce FPs (COCO has 1000 background images for reference, 1% of the total). No labels are required for background images.
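
A small sketch for auditing those dataset statistics (per-class instance counts and background-image fraction) on a YOLO-format dataset; the directory paths are placeholders:

from collections import Counter
from pathlib import Path

images_dir = Path("dataset/images/train")  # placeholder paths
labels_dir = Path("dataset/labels/train")

images = [p for p in images_dir.iterdir() if p.suffix.lower() in {".jpg", ".jpeg", ".png"}]
instances = Counter()
background = 0

for img in images:
    label = labels_dir / f"{img.stem}.txt"
    rows = [r for r in label.read_text().splitlines() if r.strip()] if label.exists() else []
    if not rows:
        background += 1  # missing or empty label file -> background image
    for r in rows:
        instances[int(r.split()[0])] += 1

print(f"background images: {background}/{len(images)}")
for cls, count in sorted(instances.items()):
    print(f"class {cls}: {count} instances")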

COCO Analysis

Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.

YOLOv5 Models

  • Start from Pretrained weights. Recommended for small to medium sized datasets (i.e. VOC, VisDrone, GlobalWheat). Pass the name of the model to the --weights argument. Models download automatically from the latest YOLOv5 release.
python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt
                                             custom_pretrained.pt
  • Start from Scratch. Recommended for large datasets (i.e. COCO, Objects365, OIv6). Pass the model architecture yaml you are interested in, along with an empty --weights '' argument:
python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
                                                      yolov5m.yaml
                                                      yolov5l.yaml
                                                      yolov5x.yaml

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.

  • Epochs. Start with 300 epochs. If this overfits early then you can reduce epochs. If overfitting does not occur after 300 epochs, train longer, i.e. 600, 1200 etc epochs.
  • Image size. COCO trains at native resolution of --img 640, though due to the high amount of small objects in the dataset it can benefit from training at higher resolutions such as --img 1280. If there are many small objects then custom datasets will benefit from training at native or higher resolution. Best inference results are obtained at the same --img as the training was run at, i.e. if you train at --img 1280 you should also test and detect at --img 1280.
  • Batch size. Use the largest --batch-size that your hardware allows for. Small batch sizes produce poor batchnorm statistics and should be avoided.
  • Hyperparameters. Default hyperparameters are in hyp.scratch-low.yaml. We recommend you train with default hyperparameters first before thinking of modifying any. In general, increasing augmentation hyperparameters will reduce and delay overfitting, allowing for longer trainings and higher final mAP. Reduction in loss component gain hyperparameters like hyp['obj'] will help reduce overfitting in those specific loss components. For an automated method of optimizing these hyperparameters, see our Hyperparameter Evolution Tutorial.

Further Reading

If you'd like to know more a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/

Good luck 🍀 and let us know if you have any other questions!

@sonovice

@glenn-jocher Thanks for the tutorial. The dataset was in fact assembled following these recommendations.

It turns out the actual cause of the performance drop is indeed the many small (rather tiny) objects combined with too-strong scaling augmentation. At some point the model effectively starts picking random pixels in the images. Increasing the model input resolution or slicing the images has helped to overcome this, at the expense of increased training/inference time.
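
For anyone wanting to test the same hypothesis on their own data, one low-effort experiment is to weaken the scale/translate augmentation in a copy of the hyperparameter file before retraining; a minimal sketch (file paths and the reduced values are assumptions, not recommendations from this thread):

import yaml

# Load an existing hyperparameter file (placeholder path).
with open("data/hyps/hyp.custom.yaml") as f:
    hyp = yaml.safe_load(f)

# Weaken the geometric augmentation that shrinks tiny objects even further.
hyp["scale"] = 0.2      # assumed value
hyp["translate"] = 0.1  # assumed value

with open("data/hyps/hyp.custom-smallobj.yaml", "w") as f:
    yaml.safe_dump(hyp, f, sort_keys=False)

# Then retrain with: python train.py --hyp data/hyps/hyp.custom-smallobj.yaml ...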

gboeer (Author) commented Jul 21, 2022

Hi @sonovice, I'm curious how you debugged this, since my dataset also contains several very small objects. Also, could you briefly explain what you mean by slicing the images?

Greetings

@sonovice

@legor I simply split my images into 2x3 slices, run object detection on all 6 slices individually, and merge the outputs with https://github.com/obss/sahi/
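
For anyone curious, sliced inference with SAHI looks roughly like the sketch below; the checkpoint path, slice sizes and thresholds are placeholders, and the API may differ between SAHI versions, so check the SAHI docs:

from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap a trained YOLOv5 checkpoint (placeholder path) for SAHI.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="runs/train/exp/weights/best.pt",
    confidence_threshold=0.4,
    device="cuda:0",  # or "cpu"
)

# Run detection on overlapping slices, then merge the slice predictions
# back into full-image coordinates.
result = get_sliced_prediction(
    "example.jpg",
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
result.export_visuals(export_dir="runs/sahi/")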
