
Turn off validation if val_percent_check=0 #646

Closed
kuynzereb opened this issue Dec 21, 2019 · 6 comments · Fixed by #649
Labels
bug Something isn't working

Comments

@kuynzereb
Contributor

As suggested by @williamFalcon in #536 (comment), val_percent_check=0 should turn off the validation loop. Currently, however, it does not work because of

 self.num_val_batches = max(1, self.num_val_batches)

So I suggest fixing it. Moreover, I suggest more thorough handling of train_percent_check and val_check_interval:

  1. We should require all *_percent_check values and val_check_interval to be in the range [0.0, 1.0].
  2. The final num_val_batches may be 0, which will effectively disable validation.
  3. The final num_train_batches and num_test_batches should be at least 1. (See also #631: num_training_batches rounds down, causing a batch count of 0.)
  4. The final val_check_interval should be at least 1.
  5. The user may try to turn off validation by setting val_check_interval to a large value. In that case we could print a helpful message that validation can be turned off by setting val_percent_check=0.

Any thoughts?
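To make points 1–3 concrete, here is a minimal sketch of the proposed handling. The function name, signature, and `is_val` flag are illustrative only, not actual Lightning code:

```python
def compute_num_batches(total_batches, percent_check, is_val=False):
    """Derive a batch count from a fraction in [0.0, 1.0].

    Validation may end up with 0 batches, which disables the val loop;
    train/test counts are floored at 1 so they always run.
    """
    if not 0.0 <= percent_check <= 1.0:
        raise ValueError("percent_check must be in the range [0.0, 1.0]")
    num_batches = int(total_batches * percent_check)
    if not is_val:
        # train/test always process at least one batch (see #631)
        num_batches = max(1, num_batches)
    return num_batches
```

With this, `compute_num_batches(100, 0.0, is_val=True)` returns 0 (validation off), while the same fraction for train/test still yields 1 batch.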

@kuynzereb added the bug label Dec 21, 2019
@awaelchli
Contributor

awaelchli commented Dec 23, 2019

One question: why should *_percent_check be in the range [0, 1]? Do you mean [0, 100]? As the name suggests, it should be a percentage.

@kuynzereb
Contributor Author

Yeah, you are right, the naming is not correct. These are not really percentages but fractions, so right now 1.0 is assumed to mean using the whole dataset.

@cmpute
Contributor

cmpute commented Mar 18, 2020

This behavior seems to be absent from the documentation?

@Borda
Member

Borda commented Mar 18, 2020

@cmpute mind sending a PR with docs?

cmpute added a commit to cmpute/pytorch-lightning that referenced this issue Apr 29, 2020
@cmpute cmpute mentioned this issue Apr 29, 2020
5 tasks
mergify bot pushed a commit that referenced this issue Apr 29, 2020
* edit doc

mentioned in #646

* edit doc

* underline

* class reference

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
@miccio-dk

Sorry to exhume this old issue, but this feature doesn't seem to work any longer. These are my trainer arguments:

min_epochs: 1
max_epochs: 1
max_steps: 1000
num_sanity_val_steps: 0
overfit_batches: 1
log_every_n_steps: 1
val_percent_check: 0

I'm using these settings to avoid the validation dataset entirely and just overfit a single batch of data over 1000 training steps.

@ananthsub
Contributor

@miccio-dk this is a very old issue - I'd recommend creating a new one, as the project has changed significantly since this was filed.

you may also be interested in this: #10888
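Editor's note for readers landing here later: in recent PyTorch Lightning releases the fraction-style flags were renamed, with val_percent_check replaced by limit_val_batches. The equivalent of the config above would therefore look roughly like this sketch (a config fragment only; check the Trainer docs for your installed version):

```python
from pytorch_lightning import Trainer

# Sketch assuming a recent pytorch-lightning where `limit_val_batches`
# replaced `val_percent_check`; 0 disables the validation loop entirely.
trainer = Trainer(
    min_epochs=1,
    max_epochs=1,
    max_steps=1000,
    num_sanity_val_steps=0,
    overfit_batches=1,
    log_every_n_steps=1,
    limit_val_batches=0,  # replaces val_percent_check=0
)
```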

6 participants