Don't copy the batch when training on a single gpu (#1576)
* fix

* whitespace

Co-authored-by: Josh Karlin <karlinjf@gmail.com>
karlinjf and jkarlin authored Apr 23, 2020
1 parent 0b22b64 commit 41b6cbb
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion pytorch_lightning/trainer/training_loop.py
@@ -754,7 +754,11 @@ def training_forward(self, batch, batch_idx, opt_idx, hiddens):
             gpu_id = 0
             if isinstance(self.data_parallel_device_ids, list):
                 gpu_id = self.data_parallel_device_ids[0]
-            batch = self.transfer_batch_to_gpu(copy.copy(batch), gpu_id)
+
+            # Don't copy the batch since there is a single gpu that the batch could
+            # be referenced from and if there are multiple optimizers the batch will
+            # wind up copying it to the same device repeatedly.
+            batch = self.transfer_batch_to_gpu(batch, gpu_id)
             args[0] = batch
             output = self.model.training_step(*args)
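
The comment added above captures the whole rationale: training_forward runs once per optimizer, so any host-to-device copy made per call is paid again for the same batch. Below is a minimal sketch of that effect, assuming (as the fix implies) that the transfer helper rewrites the batch container's entries in place; move_to_device is a hypothetical stand-in for transfer_batch_to_gpu, not Lightning's actual implementation.

import copy
import torch

def move_to_device(batch, device):
    # Hypothetical stand-in for transfer_batch_to_gpu: moves the tensors in a
    # dict batch and (by assumption) rewrites the dict entries in place.
    for key, value in batch.items():
        if isinstance(value, torch.Tensor):
            # .to() returns the very same tensor when it already lives on
            # `device`, so calls after the first transfer are effectively free.
            batch[key] = value.to(device)
    return batch

if torch.cuda.is_available():
    device = torch.device("cuda:0")
    batch = {"x": torch.randn(4, 3), "y": torch.randint(0, 2, (4,))}

    # Old behaviour: each optimizer transfers a shallow copy, so the original
    # batch never sees the moved tensors and is re-uploaded every time.
    for opt_idx in range(2):
        gpu_batch = move_to_device(copy.copy(batch), device)
    print(batch["x"].device)   # cpu

    # New behaviour: transfer the batch itself; after the first optimizer the
    # tensors already sit on the GPU and later transfers are no-ops.
    for opt_idx in range(2):
        batch = move_to_device(batch, device)
    print(batch["x"].device)   # cuda:0

With the copy.copy, the original dict keeps its CPU tensors, so every optimizer pass re-uploads the batch; without it, the second pass finds the tensors already on cuda:0 and .to() simply hands them back.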
