# Adding a warmup period to `EarlyStopping` and `ModelCheckpoint` #2644

## Comments
Your code is working for me. Also make it
@rohitgr7 Thanks, I was just editing my post as you said that. I've got it working now.
for early stopping you can set for checkpoints, if you set
yeah
@rohitgr7 I know this is 3 years late, but you can use
## 🚀 Feature

Add an optional `warmup` period for the `EarlyStopping` and `ModelCheckpoint` callbacks.

## Motivation

Sometimes the metric you want to `monitor` can take a number of epochs to stabilize and become meaningful. For example, with GANs you might want to monitor and minimize G's loss, but it usually starts out unreasonably low because it is based on the output of D, which hasn't yet learned anything about discriminating.
## Pitch

I'd like to have this result in the callbacks having no effect for the first 10 epochs:
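The original snippet was not preserved in this page. As a rough illustration of the requested behaviour, here is a minimal, framework-free sketch: the class name and the `warmup` argument are hypothetical, not part of pytorch-lightning's actual API.

```python
# Hypothetical sketch of the proposed `warmup` option: the callback is a
# complete no-op for the first `warmup` epochs, so a misleadingly low early
# metric (e.g. G's loss before D has learned anything) cannot lock in a
# bad baseline. Assumes a "min" mode metric.
class EarlyStoppingWithWarmup:
    def __init__(self, patience=3, warmup=10):
        self.patience = patience
        self.warmup = warmup          # epochs during which checks are skipped
        self.best = float("inf")
        self.wait = 0
        self.should_stop = False

    def on_epoch_end(self, epoch, metric):
        if epoch < self.warmup:
            return                    # ignore the metric while it is still noisy
        if metric < self.best:
            self.best = metric
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.should_stop = True
```

During the warmup period the callback neither updates its best value nor counts patience, so monitoring only begins once the metric is meaningful.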
## Alternatives

I added this option through inheritance:
## Additional context

Edit: couldn't get this working at first, but figured it out after upgrading my `pl` version. I would be happy to PR something like this if anyone can provide guidance on where to add it.