
Enable non-blocking for gpu device transfer (#1843)
* Update distrib_parts.py

* Update CHANGELOG.md
justusschock committed May 14, 2020
1 parent bee0392 commit c05077f
Showing 2 changed files with 8 additions and 2 deletions.
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -28,6 +28,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Changed
 
+- Enable `non-blocking` for device transfers to GPU ([#1843](https://github.com/PyTorchLightning/pytorch-lightning/pull/1843))
+
 - Replace meta_tags.csv with hparams.yaml ([#1271](https://github.com/PyTorchLightning/pytorch-lightning/pull/1271))
 
 - Reduction when `batch_size < num_gpus` ([#1609](https://github.com/PyTorchLightning/pytorch-lightning/pull/1609))
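Note that the new entry only buys overlapped host-to-device copies when batches arrive in pinned host memory; a minimal, illustrative sketch (not part of this commit; dataset, sizes, and worker count are made up) of opting in via the standard PyTorch DataLoader flag, which feeds the transfer code changed below:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative only: with pin_memory=True the DataLoader hands back batches in
# page-locked host memory, so the non_blocking=True transfer added in this
# commit can actually run asynchronously; with pageable batches the flag is a no-op.
dataset = TensorDataset(torch.randn(1000, 32), torch.randint(0, 2, (1000,)))
loader = DataLoader(dataset, batch_size=64, pin_memory=True, num_workers=2)
```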
8 changes: 6 additions & 2 deletions pytorch_lightning/trainer/distrib_parts.py
@@ -449,10 +449,14 @@ def __transfer_data_to_device(self, batch, device, gpu_id=None):
         if device == 'gpu':
             # base case: object can be directly moved using `cuda` or `to`
             if callable(getattr(batch, 'cuda', None)):
-                return batch.cuda(gpu_id)
+                # non_blocking will be ignored if tensor is not pinned.
+                # so we can always set it to True
+                return batch.cuda(gpu_id, non_blocking=True)
 
             if callable(getattr(batch, 'to', None)):
-                return batch.to(torch.device('cuda', gpu_id))
+                # non_blocking will be ignored if tensor is not pinned.
+                # so we can always set it to True
+                return batch.to(torch.device('cuda', gpu_id), non_blocking=True)
 
             # when list
             if isinstance(batch, list):
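For reference, a small sketch (not from the commit; device index and tensor sizes are illustrative) of the behaviour the new comments rely on: `non_blocking=True` only makes the host-to-device copy asynchronous when the source tensor is pinned, and is silently ignored for pageable memory, which is why the flag can be passed unconditionally:

```python
import torch

if torch.cuda.is_available():
    device = torch.device('cuda', 0)

    pinned = torch.randn(1024, 1024).pin_memory()  # page-locked host memory
    pageable = torch.randn(1024, 1024)             # ordinary pageable memory

    a = pinned.to(device, non_blocking=True)    # copy can overlap with compute
    b = pageable.to(device, non_blocking=True)  # flag ignored, copy blocks

    torch.cuda.synchronize()  # wait for the async copy before relying on `a`
```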
