
How to check overfitting #5061

Closed · aseprohman opened this issue Oct 6, 2021 · 10 comments
Labels: question (Further information is requested)

aseprohman commented Oct 6, 2021

❔ Question

Hello @glenn-jocher,
Which plot shows whether my training run is overfitting (val/box_loss, val/obj_loss, etc.), or is there another way to check? Could you also briefly explain the difference between val/box_loss and val/obj_loss?
Many thanks.

Additional context

[results plot attached]

glenn-jocher (Member) commented:

@aseprohman validation losses should eventually overfit; if they never do, you haven't trained long enough.

For loss descriptions, see the original YOLO publications:
https://pjreddie.com/publications/

aseprohman (Author) commented:

Which chart should I check to determine overfitting, box_loss or obj_loss?

glenn-jocher (Member) commented:

@aseprohman overfitting can occur in any val loss component.

aseprohman (Author) commented Oct 6, 2021

Is there no particular chart to prioritize when checking for overfitting? And if any one of them starts to overfit, does that mean training should be stopped?

suryasid09 commented:
Hello @aseprohman, did you work out how to tell whether the model is overfitting? Do both val plots need to be checked, or if one val loss is increasing while the training loss decreases, can we say it is overfitting? Please let me know.

glenn-jocher (Member) commented:

Hi @suryasid09, to determine if the model is overfitting, it's best to check all the validation loss plots including val loss, val/box_loss, and val/obj_loss. If any of these validation losses start to increase while the training loss is still decreasing, then it could be an indication that the model is starting to overfit.

However, it's also possible that some validation losses may reach their minimum value and stay there while the training continues to improve, which is known as convergence. In this case, it's not a sign of overfitting, and you should continue training the model until you achieve satisfactory results.

Overall, you should use your best judgement to decide whether the model is overfitting or not based on all the validation loss plots in conjunction with your specific use case and performance goals.
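For reference, YOLOv5 logs per-epoch losses to results.csv in the run directory: box_loss is the bounding-box regression (CIoU) loss, obj_loss is the objectness/confidence loss, and cls_loss is the classification loss. Below is a minimal sketch for flagging the pattern described above, assuming the default runs/train/exp layout and YOLOv5's results.csv column names (the 2% tolerance is an arbitrary choice for noise, not a YOLOv5 convention):

import pandas as pd

# Load the per-epoch log YOLOv5 writes into the run directory
# (default path shown; adjust "exp" to your run name).
df = pd.read_csv("runs/train/exp/results.csv")
df.columns = [c.strip() for c in df.columns]  # YOLOv5 pads column names with spaces

tail = max(len(df) // 4, 2)  # inspect the last quarter of training
for component in ("box_loss", "obj_loss", "cls_loss"):
    train, val = df[f"train/{component}"], df[f"val/{component}"]
    # Overfitting signature: val loss has climbed back above its best (minimum)
    # value while the train loss is still falling.
    val_rising = val.iloc[-1] > val.min() * 1.02  # 2% tolerance for noise
    train_falling = train.iloc[-1] < train.iloc[-tail]
    if val_rising and train_falling:
        print(f"possible overfitting in val/{component}")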

wtjasmine commented:

Hi, if I have already followed all these recommendations on my dataset:

  • Images per class. ≥ 1500 images per class recommended

  • Instances per class. ≥ 10000 instances (labeled objects) per class recommended

  • Image variety. Must be representative of deployed environment. For real-world use cases we recommend images from different times of day, different seasons, different weather, different lighting, different angles, different sources (scraped online, collected locally, different cameras) etc.

  • Label consistency. All instances of all classes in all images must be labelled. Partial labelling will not work.

  • Label accuracy. Labels must closely enclose each object. No space should exist between an object and its bounding box. No objects should be missing a label.

  • Label verification. View train_batch*.jpg on train start to verify your labels appear correct, i.e. see example mosaic.

  • Background images. Background images are images with no objects that are added to a dataset to reduce False Positives (FP). We recommend about 0-10% background images to help reduce FPs (COCO has 1000 background images for reference, 1% of the total). No labels are required for background images.

I trained for 300 epochs without changing any of the model's default settings, but training stopped about halfway through because the metrics showed no improvement for 100 epochs. Although both the validation and training losses were still decreasing, metrics like precision, recall, and mAP seem stuck around 0.53. Does this mean training has reached its final results and should stop? Since I am trying to detect small objects in images, could you please suggest some tips to improve the results? Thank you.

glenn-jocher (Member) commented:

@wtjasmine hi,

Based on what you've described, the training seems to have reached a plateau in terms of metrics such as precision, recall, and mAP. This could indicate that the model has converged and further training may not result in significant improvement.

If you are specifically trying to detect small objects, there are a few things you can consider to potentially improve the results:

  1. Data augmentation: Apply augmentation techniques such as random scaling, flipping, rotation, and color jittering to increase the diversity of the training data and improve the model's ability to generalize.

  2. Model architecture: Explore different model architectures that are designed to handle small objects, such as YOLOv4-416 or YOLOv5x. These models have more parameters and may be better suited for detecting small objects.

  3. Model initialization: Check if the model is properly initialized with pre-trained weights. You can start training from pre-trained weights on a similar dataset to help the model converge faster.

  4. Hyperparameter tuning: Experiment with adjusting hyperparameters such as learning rate, batch size, and optimizer to find the optimal configuration for your specific task and dataset.

  5. Data filtering: If your dataset contains a significant number of false positives or irrelevant images, consider filtering out these instances to improve the model's performance on small objects.

Remember to test these modifications incrementally and monitor the changes in metrics to determine their effectiveness.
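As a concrete sketch combining points 1 and 4: train.py exposes a --hyp flag for a custom hyperparameter YAML (which controls augmentations such as scale, flip, and mosaic) and an --img flag for the input resolution, a common lever for small objects since it preserves more pixels per instance. The file names here are placeholders for your own files:

python train.py --img 1280 --data your_data.yaml --weights yolov5s.pt --hyp your_hyp.yaml

Note that larger --img sizes cost GPU memory, so you may need to lower --batch-size to compensate.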

Hope these suggestions help! Let me know if you have any further questions.

Asshok1 commented Apr 15, 2024

hi @glenn-jocher, I trained my YOLO model for 100 epochs (I initially set 100 epochs) and it saved a best.pt file. Since mAP was not increasing much, I decided to train for another 100 epochs instead of starting from 0. How can I continue from epoch 100?

glenn-jocher (Member) commented:

@Asshok1 hi there! 😊 To continue training from your 100th epoch, simply use the --resume flag with your training command. If you've kept the default runs/train directory without starting new training sessions, the command would look like this:

python train.py --resume

This will automatically pick up where you left off. If you've moved your training checkpoint or have multiple sessions, you may need to point to the specific last.pt file using the --weights flag like so:

python train.py --weights path/to/your/last.pt

Happy training! Let me know if you need more help.
