🐛 Bug
The documentation states that you can use limit_val_batches=100 (an integer value) to limit the number of validation batches. However, when using an IterableDataset I get the following error:
pytorch_lightning.utilities.exceptions.MisconfigurationException: When using an infinite DataLoader (e.g. with an IterableDataset or when DataLoader does not implement `__len__`) for `limit_val_batches`, `Trainer(limit_val_batches)` must be `0.0` or `1.0`.
To Reproduce
Just use:
pl.Trainer(val_check_interval=10, limit_val_batches=100)
with an infinite DataLoader.
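For context, here is a minimal sketch of the kind of check that triggers the error (a hypothetical simplification, not the actual library code): a DataLoader backed by an IterableDataset has no `__len__`, so an integer limit_val_batches is rejected while the float sentinels 0.0 and 1.0 are accepted.

```python
class InfiniteLoader:
    """Stand-in for a DataLoader over an IterableDataset: defines no __len__."""
    def __iter__(self):
        i = 0
        while True:
            yield i
            i += 1

def check_limit_val_batches(loader, limit_val_batches):
    """Hypothetical simplification of the validation Lightning performs."""
    has_len = hasattr(loader, "__len__")
    if not has_len and limit_val_batches not in (0.0, 1.0):
        raise ValueError(
            "When using an infinite DataLoader, "
            "limit_val_batches must be 0.0 or 1.0."
        )

loader = InfiniteLoader()
check_limit_val_batches(loader, 1.0)      # accepted
try:
    check_limit_val_batches(loader, 100)  # rejected, mirroring the reported error
except ValueError as e:
    print(e)
```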
Expected behavior
Run only N validation batches (100 in this case).
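The expected behavior could be sketched with plain Python (an assumption about a possible workaround, not something the issue proposes): cap an infinite iterator at N batches with itertools.islice.

```python
import itertools

def infinite_batches():
    """Stand-in for an infinite validation DataLoader."""
    i = 0
    while True:
        yield i
        i += 1

N = 100  # the desired limit_val_batches
limited = list(itertools.islice(infinite_batches(), N))
print(len(limited))  # → 100
```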
Environment
- GPU:
- GeForce GTX 1080 Ti
- available: True
- version: 10.0.130
- numpy: 1.17.4
- pyTorch_debug: False
- pyTorch_version: 1.3.1
- pytorch-lightning: 0.8.5
- tensorboard: 2.2.2
- tqdm: 4.46.1
- OS: Linux
- architecture:
- 64bit
- processor: x86_64
- python: 3.7.4
- version: #1 SMP Wed Mar 7 19:03:37 UTC 2018