Verify that best (not latest) model is used for testing after early stopping #63

Closed
djdameln opened this issue Jan 7, 2022 · 1 comment · Fixed by #195
Labels: Bug, Pipeline

Comments

djdameln commented Jan 7, 2022

In `train.py`, we call `trainer.test` immediately after the model has been trained with `trainer.fit`. It is unclear whether this leads to correct behaviour when early stopping is applied. Since we do not explicitly load the best checkpoint before calling `trainer.test`, we may simply be evaluating the latest model. This should be investigated.
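For illustration, a minimal sketch of the pattern in question (plain PyTorch Lightning; `model` and `datamodule` are placeholders, not the exact anomalib code):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint

# Placeholders: any LightningModule / LightningDataModule pair.
model = ...
datamodule = ...

trainer = Trainer(
    max_epochs=100,
    callbacks=[
        # Early stopping typically fires several epochs *after* the best
        # validation score (patience), so the in-memory weights at the end
        # of fit() belong to the stopping epoch, not the best epoch.
        EarlyStopping(monitor="val_loss", patience=3),
        ModelCheckpoint(monitor="val_loss", mode="min"),
    ],
)
trainer.fit(model, datamodule=datamodule)

# Passing the model object directly evaluates whatever weights it currently
# holds -- i.e. the latest model, not the checkpointed best one.
trainer.test(model, datamodule=datamodule)
```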

djdameln commented Mar 9, 2022

Can confirm that the latest model is used. This needs to be fixed.
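For reference, two ways the test call could be pointed at the best checkpoint instead, assuming a `ModelCheckpoint` callback is attached to the `Trainer` (a sketch only, not necessarily how #195 implements the fix):

```python
# Option 1: let Lightning restore the best checkpoint for the test run.
# With no model argument, the Trainer reuses the fitted model and loads
# the weights referenced by ckpt_path.
trainer.test(datamodule=datamodule, ckpt_path="best")

# Option 2: load the best weights explicitly before testing.
best_path = trainer.checkpoint_callback.best_model_path
model = type(model).load_from_checkpoint(best_path)
trainer.test(model, datamodule=datamodule)
```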
