Unstable training #974

Open
OctopusRice opened this issue Oct 21, 2022 · 3 comments

@OctopusRice

Greetings Contributors,

I trained yolov7 on my custom dataset and got weird results.
After 140 epochs, one run drops to 0 mAP and the other one oscillates strangely.

[Image 1: mAP curves for the two training runs]

The only thing I do differently is augmentation: copy & paste, which copies objects from other images and pastes them in.
I know that copy & paste is meant for segmentation, because for detection it has to copy the object's background as well.
So there might be some flaws. However, the same setup trains well on yolov5, and it was stable there.
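To make the setup concrete, here is roughly what the augmentation does (a minimal sketch, not my exact code; it assumes numpy images and pixel-space `[class, x1, y1, x2, y2]` labels, and it pastes the raw rectangular crop, background included):

```python
import numpy as np

def copy_paste(dst_img, dst_boxes, src_img, src_boxes, rng=None):
    """Paste object crops from src_img into dst_img at random positions.

    Boxes are [class, x1, y1, x2, y2] in pixels. Pasting the raw crop
    carries the source background along inside the box, which is the
    flaw mentioned above (box-level, not mask-level, copy & paste).
    """
    rng = rng or np.random.default_rng()
    dst_img = dst_img.copy()
    out_boxes = list(dst_boxes)
    h, w = dst_img.shape[:2]
    for c, x1, y1, x2, y2 in src_boxes:
        crop = src_img[int(y1):int(y2), int(x1):int(x2)]
        ch, cw = crop.shape[:2]
        if ch == 0 or cw == 0 or ch >= h or cw >= w:
            continue  # skip empty or oversized crops
        # random top-left corner that keeps the crop fully inside dst_img
        ny1 = int(rng.integers(0, h - ch))
        nx1 = int(rng.integers(0, w - cw))
        dst_img[ny1:ny1 + ch, nx1:nx1 + cw] = crop
        out_boxes.append([c, nx1, ny1, nx1 + cw, ny1 + ch])
    return dst_img, np.array(out_boxes)
```

(A real implementation would also check that pasted crops do not occlude existing labeled objects.)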

Even stranger, on the next try it trains well, with a powerful ability to recover: it came back from 0 mAP.
Is this common in yolov7? Could you let me know why it happens?
The learning rate schedule is shown below.

[Image 2: learning rate schedule]

@lavenderlove52

I ran into the same problem; my dataset has a lot of small targets.
I had bad results on yolov7 but good results on yolov5.
It may be because of the small targets: at an input size of 224, the objects are only about 3 pixels, and the scaling inside the data augmentation makes them even smaller.
I disabled the data augmentation because it shrank the small targets so much that they couldn't be trained effectively. The result was better, much better than at the beginning, but still not as good as yolov5.
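For a rough sense of the numbers (a sketch; it assumes YOLO-style scale jitter drawn from uniform(1 - s, 1 + s), and the hyp key name `scale` is taken from the yolov5/yolov7 hyp yaml files, so verify it against your own config):

```python
# Worst-case size of a tiny object after scale jitter.
obj_px = 3   # object size at the 224 px network input, in pixels
s = 0.9      # "scale" hyp; yolov7's default hyps use a large value
             # like this (assumption -- check your hyp file)
worst_case = obj_px * (1 - s)
print(f"worst-case object size: {worst_case:.1f} px")  # 0.3 px
```

Anything well below a pixel is effectively invisible to the loss, which would explain why turning the augmentation off (or lowering `scale`) helped.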

@gboeer

gboeer commented Oct 25, 2022

This is something I noticed in Yolov5 as well, some time ago. There was a longer discussion about it here: ultralytics/yolov5#5721

There, too, one suspicion was that it was due to very small objects in the custom training data. I have to say I didn't do anything to tackle the problem; however, with the latest version of yolov5, training on the same data runs much more smoothly, without any sudden drops in performance.

Since this Yolov7 repository is based heavily on the Yolov5 implementation, I can only assume that it has the same issues, which no longer seem to be present in the current Yolov5 version but still persist in Yolov7.

@lavenderlove52

lavenderlove52 commented Oct 25, 2022 via email
