I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I am training YOLOv5 on the coco_15k dataset with both the standard and the GA-optimized hyperparameters, and I am not sure I understand the results.
When training with standard hyperparameters I get this:
It looks fairly reasonable, except that the metrics are surprisingly high at the very beginning of training.
Then I optimize the hyperparameters for 300 generations, and with the optimized hyperparameters I get this:
mAP is highest after the first epoch and drops quickly. All metrics are clearly better with the optimized hyperparameters, but the longer I train, the worse the results get. It looks like training the net for just one epoch gives the best results.
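For context, the evolution runs above were launched with YOLOv5's built-in `--evolve` flag; a sketch of the command (the dataset yaml name `coco_15k.yaml`, image size, and batch size here are placeholders, not the exact values used):

```shell
# Sketch: YOLOv5 hyperparameter evolution (names/sizes are placeholders).
# --evolve <N> runs N generations of the built-in genetic algorithm;
# the best hyperparameters found are written under runs/evolve/.
python train.py \
    --img 640 \
    --batch 16 \
    --epochs 10 \
    --data coco_15k.yaml \
    --weights yolov5n.pt \
    --evolve 300
```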
Now, when I compare the metrics for the original pretrained yolov5n.pt I get this:
```
Class  Images  Instances      P      R  mAP50  mAP50-95
  all    3000      22397  0.567   0.43  0.454     0.278
```
For training with standard hyperparameters:
```
Class  Images  Instances      P      R  mAP50  mAP50-95
  all    3000      22397  0.541  0.375  0.392     0.231
```
For best.pt with GA:
```
Class  Images  Instances      P      R  mAP50  mAP50-95
  all    3000      22397  0.582  0.413  0.446     0.268
```
Questions/observations:
It is clear that the default weights are the best and retraining the net does not improve the mAP on coco15k.
Since the best results for the GA-optimized hyperparameters are obtained after the very first epoch, does it mean that I actually worsen the pretrained net with each new training iteration?
And since the optimized learning rate (lr0: 0.00267, lrf: 0.01098, momentum: 0.62582) is much smaller than the default one (lr0: 0.01, lrf: 0.01, momentum: 0.937), does it mean that the net actually deteriorates more slowly than when the default parameters are used?
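To make the learning-rate comparison concrete, here is a small sketch of a linear LR decay of the kind YOLOv5 uses (the formula, lr(e) = lr0 * ((1 - e/E) * (1 - lrf) + lrf), is an assumption based on its linear scheduler) applied to both sets of values over a 100-epoch run:

```python
# Sketch (assumption): linear LR decay in the style of YOLOv5's scheduler,
# lr(e) = lr0 * ((1 - e/E) * (1 - lrf) + lrf), decaying from lr0 to lr0*lrf.
def lr_at(epoch, epochs, lr0, lrf):
    return lr0 * ((1 - epoch / epochs) * (1.0 - lrf) + lrf)

default = dict(lr0=0.01, lrf=0.01)          # default hyperparameters
evolved = dict(lr0=0.00267, lrf=0.01098)    # GA-optimized hyperparameters

for e in (0, 50, 100):
    print(f"epoch {e:3d}: default {lr_at(e, 100, **default):.6f}  "
          f"evolved {lr_at(e, 100, **evolved):.6f}")
```

With the evolved values every step is roughly 4x smaller from the start, which is consistent with the net drifting away from the pretrained weights more slowly.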
Are the default hyperparameters the result of some optimization over the entire COCO set, so that trying to find better ones on coco_15k makes little sense?
If I take another dataset (not based on COCO), would GA optimization make sense and actually improve performance?
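For background on what the GA actually does each generation, here is a minimal sketch of YOLOv5-style mutation: a parent hyperparameter set is perturbed multiplicatively, with each value changed with some probability and the change clipped to a bounded factor. The constants (80% mutation probability, sigma 0.2, 0.3-3.0 clip) approximate those in train.py's evolve loop; treat them as illustrative:

```python
import random

# Sketch: YOLOv5-style hyperparameter mutation. Each value is multiplied
# by a random factor near 1.0 with probability `prob`; the factor is
# clipped so no value changes by more than ~3x in a single generation.
def mutate(parent, sigma=0.2, prob=0.8, rng=random.Random(0)):
    child = {}
    for key, value in parent.items():
        factor = 1.0
        if rng.random() < prob:
            factor = min(max(1.0 + rng.gauss(0, sigma), 0.3), 3.0)
        child[key] = value * factor
    return child

parent = {"lr0": 0.01, "lrf": 0.01, "momentum": 0.937}
child = mutate(parent)
print(child)
```

Because mutations compound from the best parent found so far, 300 generations can wander far from the defaults, which is how values like momentum 0.62582 arise.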
Additional
No response
@Witek- it seems like you have put a lot of time and effort into understanding the results of training YOLOv5 on the coco_15k dataset with different hyperparameters. Here are some answers to your questions/observations:
It is possible that the default weights are the best and retraining the net may not improve the mAP on coco15k. However, it also depends on the specifics of your use case and the dataset you are working with.
It's hard to say for sure without knowing more about the specifics of your training runs, but it is possible that the net could be worsening with each new training iteration.
The learning rate and momentum definitely affect training: with a smaller lr0 and momentum, each update moves the weights less, so if training is degrading the pretrained weights, that degradation would accumulate more slowly. Still, it's hard to say for sure without looking more closely at your training runs.
The default hyperparameters are a result of extensive experimentation and optimization on the COCO dataset, so it is possible that trying to find better ones specifically for coco_15k may not make as much sense. However, it's always worth experimenting and seeing if you can achieve better results with different hyperparameters.
GA optimization could potentially improve the performance on another dataset, but it ultimately depends on the specifics of that dataset and the hyperparameters being used.
I hope this helps! Let me know if you have any further questions or concerns.