
possible bug in test_wake.py: 'int' object is not iterable #45

Closed
jeff-regier opened this issue Jun 23, 2020 · 6 comments · Fixed by #30
Labels
bug Something isn't working

Comments

@jeff-regier
Contributor

tests/test_wake.py:157: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:921: in fit
    self.single_gpu_train(model)
../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py:171: in single_gpu_train
    self.run_pretrain_routine(model)
../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1091: in run_pretrain_routine
    self.train()
../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py:374: in train
    self.run_training_epoch()
../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py:424: in run_training_epoch
    self.on_epoch_start()
../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py:57: in on_epoch_start
    callback.on_epoch_start(self, self.get_model())
../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/callbacks/progress.py:314: in on_epoch_start
    total_val_batches = self.total_val_batches
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning.callbacks.progress.ProgressBar object at 0x7f049c112450>

    @property
    def total_val_batches(self) -> int:
        """
        The total number of training batches during validation, which may change from epoch to epoch.
        Use this to set the total number of iterations in the progress bar. Can return ``inf`` if the
        validation dataloader is of infinite size.
        """
        trainer = self.trainer
        total_val_batches = 0
        if trainer.fast_dev_run and trainer.val_dataloaders is not None:
            total_val_batches = len(trainer.val_dataloaders)
        elif not self.trainer.disable_validation:
            is_val_epoch = trainer.current_epoch % trainer.check_val_every_n_epoch == 0
            total_val_batches = trainer.num_val_batches if is_val_epoch else 0
>           total_val_batches = sum(total_val_batches)
E           TypeError: 'int' object is not iterable

../../miniconda3/envs/stats507/lib/python3.7/site-packages/pytorch_lightning/callbacks/progress.py:101: TypeError
@ismael-mendoza
Collaborator

I'm running into the same error. @zzhaozhe-profolio, do you know how to fix this?

@zhezhaozz
Contributor

zhezhaozz commented Jun 23, 2020

I'm running into the same error. @zzhaozhe-profolio, do you know how to fix this?

Ohh, it's a bug in pytorch-lightning: if the current epoch is not a validation epoch, total_val_batches is the int 0, so sum(total_val_batches) becomes sum(0), which throws the TypeError shown in the error message. Lightning-AI/pytorch-lightning#2213

I think that while we wait for a new version of Lightning, we can bypass this bug by leaving check_val_every_n_epoch at its default, which should solve the problem. @ismael2395
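To make the failure mode concrete, here is a minimal sketch of the upstream bug. The function names are hypothetical stand-ins, not the actual pytorch-lightning source: the library assigns an int to total_val_batches on non-validation epochs, then calls sum() on it, which only accepts iterables.

```python
def buggy_total_val_batches(num_val_batches, is_val_epoch):
    # num_val_batches is a list of per-dataloader batch counts,
    # but the non-validation branch substitutes a bare int 0.
    total_val_batches = num_val_batches if is_val_epoch else 0
    return sum(total_val_batches)  # TypeError when total_val_batches is an int

def fixed_total_val_batches(num_val_batches, is_val_epoch):
    # Keeping both branches as lists makes sum() valid either way.
    total_val_batches = num_val_batches if is_val_epoch else [0]
    return sum(total_val_batches)

try:
    buggy_total_val_batches([32], is_val_epoch=False)
except TypeError as e:
    print(e)  # 'int' object is not iterable

print(fixed_total_val_batches([32], is_val_epoch=True))   # 32
print(fixed_total_val_batches([32], is_val_epoch=False))  # 0
```

With the default check_val_every_n_epoch, every epoch is a validation epoch, so the buggy else-branch is never taken, which is why the workaround below avoids the crash.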

@ismael-mendoza
Collaborator

Thanks @zzhaozhe-profolio, will try that.

@ismael-mendoza ismael-mendoza added the bug Something isn't working label Jun 23, 2020
@ismael-mendoza
Collaborator

It works for me once I leave it at the default. Is there a reason we want to set check_val_every_n_epoch to 10 in the test, @zzhaozhe-profolio? Otherwise we can just remove it and close this issue.

@zhezhaozz
Contributor

It works for me once I leave it at the default. Is there a reason we want to set check_val_every_n_epoch to 10 in the test, @zzhaozhe-profolio? Otherwise we can just remove it and close this issue.

Nothing particularly important. I was strictly following the old wake.py, which checks the encoder on the test image (setting run_map = True) every 10 epochs. So it's fine, we can remove it along with val_dataloader and validation_step.
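For context on why check_val_every_n_epoch=10 triggers the crash while the default does not: the traceback above shows Lightning computing is_val_epoch via a modulus check. A minimal sketch (hits_buggy_branch is a hypothetical helper name, not Lightning code):

```python
def hits_buggy_branch(current_epoch, check_val_every_n_epoch):
    # Mirrors the library's check in the traceback: validation runs
    # only when the epoch index is a multiple of check_val_every_n_epoch.
    is_val_epoch = current_epoch % check_val_every_n_epoch == 0
    # The sum(0) TypeError path is taken only on non-validation epochs.
    return not is_val_epoch

# Default (1): every epoch validates, so the bad branch is never hit.
print(any(hits_buggy_branch(e, 1) for e in range(10)))   # False
# With check_val_every_n_epoch=10, epochs 1 through 9 take the sum(0) path.
print(any(hits_buggy_branch(e, 10) for e in range(10)))  # True
```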

@zhezhaozz zhezhaozz linked a pull request Jun 23, 2020 that will close this issue
@ismael-mendoza
Collaborator

ismael-mendoza commented Jun 23, 2020

Great, the PR I'm about to merge solves this by just getting rid of it.

@zhezhaozz zhezhaozz removed a link to a pull request Jun 23, 2020