
Runtime Error if validation_step is defined, but valid_loader isn't provided to Trainer #3052

Closed · teddykoker opened this issue Aug 19, 2020 · 7 comments
Labels: bug (Something isn't working), help wanted (Open to be worked on), priority: 0 (High priority task)

@teddykoker (Contributor)

🐛 Bug

If validation_step is defined in your LightningModule, the model will not train unless you also provide a validation dataloader to the Trainer.

You get this warning (as expected):

UserWarning: you defined a validation_step but have no val_dataloader. Skipping validation loop

But then this error, which prevents training:

/usr/local/lib/python3.6/dist-packages/pytorch_lightning/callbacks/progress.py in on_sanity_check_start(self, trainer, pl_module)
    294         super().on_sanity_check_start(trainer, pl_module)
    295         self.val_progress_bar = self.init_sanity_tqdm()
--> 296         self.val_progress_bar.total = convert_inf(trainer.num_sanity_val_steps * len(trainer.val_dataloaders))
    297         self.main_progress_bar = tqdm(disable=True)  # dummy progress bar
    298 

TypeError: object of type 'NoneType' has no len()
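The crash is just len() being called on None. A defensive guard in the callback (a sketch of one possible fix, not the actual patch) would avoid it:

    # Hypothetical guard: treat missing val_dataloaders as zero loaders,
    # so len() is never called on None.
    num_loaders = len(trainer.val_dataloaders or [])
    self.val_progress_bar.total = convert_inf(trainer.num_sanity_val_steps * num_loaders)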

To Reproduce

  1. Define a LightningModule with a validation_step.
  2. Call Trainer.fit() with only a training loader (a minimal sketch follows below).
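A minimal repro sketch, assuming the 0.9-era API; the module and random data are placeholders:

    import torch
    import pytorch_lightning as pl
    from torch.utils.data import DataLoader, TensorDataset

    class Model(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 1)

        def forward(self, x):
            return self.layer(x)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return {"loss": torch.nn.functional.mse_loss(self(x), y)}

        # Defining validation_step without passing a val loader to fit()
        # is what triggers the TypeError above.
        def validation_step(self, batch, batch_idx):
            x, y = batch
            return {"val_loss": torch.nn.functional.mse_loss(self(x), y)}

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)

    train_loader = DataLoader(
        TensorDataset(torch.randn(64, 32), torch.randn(64, 1)), batch_size=8
    )
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(Model(), train_dataloader=train_loader)  # no val loader -> crash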

Code sample

https://colab.research.google.com/drive/1-pyGmHMAJaIg86T7s4y2PKxOX79dZq91?usp=sharing

Expected behavior

The warning should still be shown, but training should proceed, skipping the validation loop.

@teddykoker teddykoker added bug Something isn't working help wanted Open to be worked on labels Aug 19, 2020
@github-actions (Contributor)

Hi! Thanks for your contribution, great first issue!

@awaelchli (Member)

Pretty sure this will be solved automatically by PR #2892. But we should remember to add a test for this warning.
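A hedged sketch of such a test (assumed names, not the actual test in the repo), reusing Model and train_loader from the repro sketch above:

    import pytest
    import pytorch_lightning as pl

    def test_warns_but_trains_without_val_loader(tmpdir):
        # Expect the UserWarning and no TypeError; Model and train_loader
        # are the hypothetical objects from the repro sketch above.
        trainer = pl.Trainer(default_root_dir=tmpdir, max_epochs=1)
        with pytest.warns(UserWarning, match="you defined a validation_step"):
            trainer.fit(Model(), train_dataloader=train_loader)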

@awaelchli awaelchli self-assigned this Aug 19, 2020
@williamFalcon (Contributor)

I don't think #2892 will make it into 0.9 because it has a lot going on...

Can we get this into 0.9.0? This will require a new PR.

@williamFalcon williamFalcon added the priority: 0 High priority task label Aug 19, 2020
@manipopopo (Contributor) commented Aug 22, 2020

The issue corresponding to #2892 has been fixed by #2917, but the code sample still raises errors. It seems the problem can be fixed by changing the initialization of test_dataloaders and val_dataloaders from
https://github.com/PyTorchLightning/pytorch-lightning/blob/7cca3859a7b97a9ab4a6c6fb5f36ff94bff7f218/pytorch_lightning/trainer/trainer.py#L383-L384
to

self.test_dataloaders = []
self.val_dataloaders = []

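Why the change works: len() is defined for an empty list but not for None, which is exactly what the progress bar trips over:

    >>> len([])    # empty list: sanity-check total becomes 0, training proceeds
    0
    >>> len(None)  # current behavior
    TypeError: object of type 'NoneType' has no len()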
Should a new issue be created?

@edenlightning edenlightning added this to the 0.9.x milestone Sep 1, 2020
@awaelchli (Member)

> But the code sample still raises errors. It seems the problem can be fixed by changing the initialization of test_dataloaders and val_dataloaders from …

@manipopopo I cannot reproduce it on master. What exactly is the remaining issue, how do I reproduce it?

@awaelchli (Member) commented Sep 14, 2020

The code in your Google Colab link now runs without the reported error if I install from the master branch. Closing this.
If there is something else that needs to be fixed, please open a new issue so I can take a look.

@manipopopo (Contributor)

Hi @awaelchli, the issue has been fixed by #3197.
