Log pretrained model validation before fine-tuning #744

To make sure fine-tuning actually improves performance, it would be nice to run (and log) a full validation loop before training.

Related: Lightning-AI/pytorch-lightning#1715
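A minimal sketch of the proposed workflow in plain PyTorch Lightning (the `TinyModel` module and random-tensor loaders below are stand-ins, not pyannote.audio code): calling `trainer.validate` before `trainer.fit` logs the pretrained model's validation metrics as a baseline.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    """Stand-in for a pretrained model about to be fine-tuned."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


def make_loader():
    # Random data, purely illustrative.
    x, y = torch.randn(64, 8), torch.randn(64, 1)
    return DataLoader(TensorDataset(x, y), batch_size=16)


model = TinyModel()
trainer = pl.Trainer(max_epochs=1)

# Log a "before fine-tuning" baseline on the validation set ...
trainer.validate(model, dataloaders=make_loader())

# ... then fine-tune; the post-training val_loss can be compared to it.
trainer.fit(model, train_dataloaders=make_loader(), val_dataloaders=make_loader())
```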
Comments
I haven't tried this yet, but the linked issue above contains a "solution" that advises to run …
Yes, it seems like in v1.5.1, calling …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Not perfect, but it does the trick...
When running trainer.validate(model) before trainer.fit(model), the val dataloader is initialized twice (workers are initialized twice). In my use case, worker initialization is a costly process, so it would be beneficial if the val dataloader workers created when running trainer.validate could be reused when running trainer.fit. Is there a way to set this up while using a DataModule as the data source for trainer.validate and trainer.fit?
@odedbd have you tried setting persistent_workers=True in your val DataLoader?
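For reference, a sketch of that suggestion (the `MyDataModule` name, dataset, and shapes are illustrative, not from this thread): `persistent_workers=True` asks PyTorch to keep DataLoader worker processes alive across iterations over the loader instead of tearing them down after each epoch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class MyDataModule(pl.LightningDataModule):
    """Illustrative DataModule whose val loader keeps its workers alive."""

    def val_dataloader(self):
        dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
        return DataLoader(
            dataset,
            batch_size=16,
            num_workers=4,            # persistent_workers requires num_workers > 0
            persistent_workers=True,  # keep workers alive between epochs
        )
```

Note that this keeps workers alive within a single loop; if Lightning requests a fresh val dataloader for validate() and again for fit(), workers can still be created twice, which would match the behavior reported in the reply below.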
@juanmc2005 Thank you for the suggestion. I am already using persistent_workers=True, and it doesn't seem to affect the creation of workers when using a DataModule and running trainer.validate before trainer.fit. I just realized I posted my comment on an issue of the pyannote-audio repository rather than the related PyTorch Lightning issue, where it probably belongs. Sorry for the confusion.