fix retruning returns (Lightning-AI#1431)
* returns

* changelog
Borda authored and tullie committed May 6, 2020
1 parent 6307342 · commit 74caa13
Showing 3 changed files with 4 additions and 3 deletions.
CHANGELOG.md (1 addition, 0 deletions)
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Fixed default `DistributedSampler` for DDP training ([#1425](https://github.com/PyTorchLightning/pytorch-lightning/pull/1425))
- Fixed workers warning not on windows ([#1430](https://github.com/PyTorchLightning/pytorch-lightning/pull/1430))
+- Fixed returning tuple from `run_training_batch` ([#1431](https://github.com/PyTorchLightning/pytorch-lightning/pull/1431))

## [0.7.2] - 2020-04-07

pytorch_lightning/trainer/training_io.py (1 addition, 1 deletion)
@@ -322,7 +322,7 @@ def dump_checkpoint(self):
            checkpoint['hparams_type'] = 'namespace' if is_namespace else 'dict'
        else:
            rank_zero_warn(
-               "Did not find hyperparameters at model.hparams. Saving checkpoint without hyperparameters."
+               "Did not find hyperparameters at model hparams. Saving checkpoint without hyperparameters."
            )

        # give the model a chance to add a few things
pytorch_lightning/trainer/training_loop.py (2 additions, 2 deletions)
@@ -532,7 +532,7 @@ def run_training_batch(self, batch, batch_idx):
        all_log_metrics = []

        if batch is None:
-           return 0, grad_norm_dic, {}
+           return 0, grad_norm_dic, {}, {}

        # Batch start events
        with self.profiler.profile('on_batch_start'):
@@ -542,7 +542,7 @@ def run_training_batch(self, batch, batch_idx):
            if self.is_function_implemented('on_batch_start'):
                response = self.get_model().on_batch_start(batch)
                if response == -1:
-                   return -1, grad_norm_dic, {}
+                   return -1, grad_norm_dic, {}, {}

        splits = [batch]
        if self.truncated_bptt_steps is not None:
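The two early exits above previously returned 3-tuples, while the rest of the training loop expects four values from `run_training_batch`, so unpacking the result could fail whenever a batch was skipped or a hook aborted it. A minimal sketch of that failure mode follows; the function and variable names are illustrative stand-ins, not Lightning's actual call site, and the fourth element is shown only as a placeholder dict, mirroring the `{}` added in the diff.

# Illustrative sketch (not Lightning's actual code): why mismatched tuple
# lengths from run_training_batch break the caller's unpacking.

def run_training_batch_old(batch):
    """Old behaviour: early exit returns 3 values, normal path returns 4."""
    if batch is None:
        return 0, {}, {}
    return 0, {}, {}, {}

def run_training_batch_fixed(batch):
    """Fixed behaviour: every exit returns the same 4-element tuple."""
    if batch is None:
        return 0, {}, {}, {}
    return 0, {}, {}, {}

# The surrounding loop unpacks four values, so a skipped batch used to raise:
try:
    result, grad_norm_dic, log_metrics, extra = run_training_batch_old(None)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 4, got 3)

# With the fix, a skipped (None) batch unpacks cleanly:
result, grad_norm_dic, log_metrics, extra = run_training_batch_fixed(None)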
