made ddp the default if no backend specified with multiple GPUs
williamFalcon committed May 12, 2020
1 parent 9d2df24 commit 6710f2e
Showing 2 changed files with 4 additions and 2 deletions.
2 changes: 2 additions & 0 deletions docs/source/multi_gpu.rst
@@ -130,6 +130,8 @@ Lightning allows multiple ways of training
 - Horovod (`distributed_backend='horovod'`) (multi-machine, multi-gpu, configured at runtime)
 - TPUs (`num_tpu_cores=8|x`) (tpu or TPU pod)
 
+.. note:: If you request multiple GPUs without setting a mode, ddp will be automatically used.
+
 Data Parallel (dp)
 ^^^^^^^^^^^^^^^^^^
 `DataParallel <https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel>`_ splits a batch across k GPUs. That is, if you have a batch of 32 and use dp with 2 gpus,
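
A minimal usage sketch of the behavior described in the note above, assuming PyTorch Lightning as of this commit (where the flag is still named distributed_backend) and a machine with at least two GPUs; the gpus=2 value is only illustrative.

from pytorch_lightning import Trainer

# Requesting multiple GPUs without choosing a backend: after this commit the
# Trainer warns and falls back to ddp instead of dp.
trainer = Trainer(gpus=2)

# Passing the backend explicitly is unchanged:
trainer = Trainer(gpus=2, distributed_backend='ddp')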
4 changes: 2 additions & 2 deletions pytorch_lightning/trainer/distrib_data_parallel.py
@@ -203,8 +203,8 @@ def set_distributed_mode(self, distributed_backend):
         elif self.num_gpus > 1:
             rank_zero_warn('You requested multiple GPUs but did not specify a backend, e.g.'
                            ' Trainer(distributed_backend=dp) (or ddp, ddp2).'
-                           ' Setting distributed_backend=dp for you.')
-            self.use_dp = True
+                           ' Setting distributed_backend=ddp for you.')
+            self.use_ddp = True
         elif distributed_backend == "dp":
             # do nothing if num_gpus == 0
             if self.num_gpus == 1:
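
A simplified, standalone sketch of the fallback that this hunk changes; the function name pick_backend is hypothetical, and the real set_distributed_mode() contains additional branches (dp, ddp2, horovod, single-GPU) that are omitted here.

import warnings


def pick_backend(num_gpus, distributed_backend=None):
    """Return the distributed backend that would be selected for the request."""
    if distributed_backend is not None:
        # An explicit choice is always respected.
        return distributed_backend
    if num_gpus > 1:
        # After this commit the unspecified-backend case falls back to ddp, not dp.
        warnings.warn('You requested multiple GPUs but did not specify a backend. '
                      'Setting distributed_backend=ddp for you.')
        return 'ddp'
    # CPU or single-GPU runs need no distributed backend.
    return None


print(pick_backend(num_gpus=2))  # warns, then prints 'ddp'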
