
element 0 of tensors does not require grad and does not have a grad_fn #2

Open
yuefeng21 opened this issue Jun 9, 2021 · 0 comments

@yuefeng21

Thank you for sharing your code.
When I run it, the following error comes up:
"element 0 of tensors does not require grad and does not have a grad_fn"
Do you have any idea why this happens?

I am attaching the full error message below.

I appreciate your help.

> | Name               | Type              | Params
> ---------------------------------------------------------
> 0 | G                  | Generator         | 95.7 K
> 1 | D                  | Discriminator     | 439 K 
> 2 | discriminator_loss | BCEWithLogitsLoss | 0     
> 3 | generator_loss     | BCEWithLogitsLoss | 0     
> ---------------------------------------------------------
> 534 K     Trainable params
> 0         Non-trainable params
> 534 K     Total params
> 2.139     Total estimated model params size (MB)
> Epoch 0:   0%|          | 0/999 [00:00<?, ?it/s] 
> ---------------------------------------------------------------------------
> RuntimeError                              Traceback (most recent call last)
> <ipython-input-13-7b7399b812e0> in <module>
> ----> 1 trainer.fit(
>       2     model=pi_GAN,
>       3     train_dataloader=image_loader
>       4 )
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
>     456         )
>     457 
> --> 458         self._run(model)
>     459 
>     460         assert self.state.stopped
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model)
>     754 
>     755         # dispatch `start_training` or `start_evaluating` or `start_predicting`
> --> 756         self.dispatch()
>     757 
>     758         # plugin will finalized fitting (e.g. ddp_spawn will load trained model)
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in dispatch(self)
>     795             self.accelerator.start_predicting(self)
>     796         else:
> --> 797             self.accelerator.start_training(self)
>     798 
>     799     def run_stage(self):
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py in start_training(self, trainer)
>      94 
>      95     def start_training(self, trainer: 'pl.Trainer') -> None:
> ---> 96         self.training_type_plugin.start_training(trainer)
>      97 
>      98     def start_evaluating(self, trainer: 'pl.Trainer') -> None:
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in start_training(self, trainer)
>     142     def start_training(self, trainer: 'pl.Trainer') -> None:
>     143         # double dispatch to initiate the training loop
> --> 144         self._results = trainer.run_stage()
>     145 
>     146     def start_evaluating(self, trainer: 'pl.Trainer') -> None:
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in run_stage(self)
>     805         if self.predicting:
>     806             return self.run_predict()
> --> 807         return self.run_train()
>     808 
>     809     def _pre_training_routine(self):
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in run_train(self)
>     867                 with self.profiler.profile("run_training_epoch"):
>     868                     # run train epoch
> --> 869                     self.train_loop.run_training_epoch()
>     870 
>     871                 if self.max_steps and self.max_steps <= self.global_step:
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py in run_training_epoch(self)
>     487             # ------------------------------------
>     488             with self.trainer.profiler.profile("run_training_batch"):
> --> 489                 batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
>     490 
>     491             # when returning -1 from train_step, we end epoch early
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py in run_training_batch(self, batch, batch_idx, dataloader_idx)
>     702                     # automatic_optimization=False: don't block synchronization here
>     703                     with self.block_ddp_sync_behaviour():
> --> 704                         self.training_step_and_backward(
>     705                             split_batch, batch_idx, opt_idx, optimizer, self.trainer.hiddens
>     706                         )
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py in training_step_and_backward(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
>     824                 if result is not None:
>     825                     with self.trainer.profiler.profile("backward"):
> --> 826                         self.backward(result, optimizer, opt_idx)
>     827 
>     828                     # hook - call this hook only
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py in backward(self, result, optimizer, opt_idx, *args, **kwargs)
>     857             self.trainer.accelerator.backward(result, optimizer, opt_idx, should_accumulate, *args, **kwargs)
>     858         else:
> --> 859             result.closure_loss = self.trainer.accelerator.backward(
>     860                 result.closure_loss, optimizer, opt_idx, should_accumulate, *args, **kwargs
>     861             )
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py in backward(self, closure_loss, optimizer, optimizer_idx, should_accumulate, *args, **kwargs)
>     306         self.training_type_plugin.pre_backward(closure_loss, should_accumulate, optimizer, optimizer_idx)
>     307 
> --> 308         output = self.precision_plugin.backward(
>     309             self.lightning_module, closure_loss, optimizer, optimizer_idx, should_accumulate, *args, **kwargs
>     310         )
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py in backward(self, model, closure_loss, optimizer, opt_idx, should_accumulate, *args, **kwargs)
>      77         # do backward pass
>      78         if automatic_optimization:
> ---> 79             model.backward(closure_loss, optimizer, opt_idx)
>      80         else:
>      81             closure_loss.backward(*args, **kwargs)
> 
> ~/.local/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py in backward(self, loss, optimizer, optimizer_idx, *args, **kwargs)
>    1273         """
>    1274         if self.automatic_optimization or self._running_manual_backward:
> -> 1275             loss.backward(*args, **kwargs)
>    1276 
>    1277     def toggle_optimizer(self, optimizer: Optimizer, optimizer_idx: int):
> 
> ~/.local/lib/python3.8/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
>     219                 retain_graph=retain_graph,
>     220                 create_graph=create_graph)
> --> 221         torch.autograd.backward(self, gradient, retain_graph, create_graph)
>     222 
>     223     def register_hook(self, hook):
> 
> ~/.local/lib/python3.8/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
>     128         retain_graph = create_graph
>     129 
> --> 130     Variable._execution_engine.run_backward(
>     131         tensors, grad_tensors_, retain_graph, create_graph,
>     132         allow_unreachable=True)  # allow_unreachable flag
> 
> RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
> 
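
From what I understand, this error usually means the loss tensor passed to `backward()` is not connected to the autograd graph, e.g. because the outputs it was computed from were produced under `torch.no_grad()` or were explicitly detached before the loss was built. Below is a minimal, hypothetical sketch (not the pi-GAN code from this repository) that reproduces the message and shows the working case for comparison:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; any module with parameters would do.
model = nn.Linear(4, 1)
x = torch.randn(8, 4)
target = torch.ones(8, 1)
criterion = nn.BCEWithLogitsLoss()

# 1) Broken: the forward pass runs under torch.no_grad(), so the loss
#    has requires_grad=False and no grad_fn.
with torch.no_grad():
    logits = model(x)
loss = criterion(logits, target)
try:
    loss.backward()
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad and does not have a grad_fn

# 2) Working: the forward pass is tracked by autograd, so backward() succeeds.
logits = model(x)
loss = criterion(logits, target)
loss.backward()
print(loss.grad_fn)  # a grad_fn is now attached to the loss
```

If something like that is happening here, it would presumably be in `training_step`, where the generator/discriminator loss returned to Lightning ends up built from detached outputs.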
