docs update and follow up of #2789 (#2797)
* docs update and follow up of #2789

* pep8

* Update trainer.py

* Update trainer.py

Co-authored-by: edenlightning <66261195+edenlightning@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
3 people committed Aug 3, 2020
1 parent ed8a01a commit 6b9c548
Showing 2 changed files with 13 additions and 8 deletions.
pytorch_lightning/trainer/__init__.py (4 changes: 3 additions & 1 deletion)
@@ -801,7 +801,9 @@ def on_train_end(self, trainer, pl_module):
 replace_sampler_ddp
 ^^^^^^^^^^^^^^^^^^^
-Enables auto adding of distributed sampler.
+Enables auto adding of distributed sampler. By default it will add ``shuffle=True``
+for the train sampler and ``shuffle=False`` for val/test samplers. To customize it,
+set ``replace_sampler_ddp=False`` and add your own distributed sampler.
 
 .. testcode::
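The doc change above suggests a natural usage pattern. Below is a minimal sketch (not taken from the commit) of disabling sampler replacement and attaching your own ``DistributedSampler``; the dataset, batch size, and ``model`` are hypothetical placeholders, and the flag name follows the Trainer docstring shown in this diff:

    import torch
    import pytorch_lightning as pl
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    dataset = TensorDataset(torch.randn(64, 32))  # placeholder dataset

    def train_dataloader():
        # With replace_sampler_ddp=False, Lightning no longer injects a
        # distributed sampler, so attach one explicitly and pick shuffle yourself.
        sampler = DistributedSampler(dataset, shuffle=True)
        return DataLoader(dataset, batch_size=8, sampler=sampler)

    trainer = pl.Trainer(distributed_backend="ddp", gpus=2, replace_sampler_ddp=False)
    # trainer.fit(model)  # ``model`` would be your LightningModule (hypothetical)

Note that ``DistributedSampler`` infers the number of replicas and the rank from the initialized process group, which is why the custom sampler only becomes meaningful once DDP is running.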
pytorch_lightning/trainer/trainer.py (17 changes: 10 additions & 7 deletions)
@@ -296,7 +296,7 @@ def __init__(
 distributed_backend: The distributed backend to use (dp, ddp, ddp2, ddp_spawn, ddp_cpu)
 
-precision: Full precision (32), half precision (16).
+precision: Full precision (32), half precision (16). Can be used on CPU, GPU or TPUs.
 
 weights_summary: Prints a summary of the weights when training begins.
@@ -310,26 +310,29 @@ def __init__(
 num_sanity_val_steps: Sanity check runs n validation batches before starting the training routine.
     Set it to `-1` to run all batches in all validation dataloaders. Default: 2
 
-truncated_bptt_steps: Truncated back prop breaks performs backprop every k steps of
+truncated_bptt_steps: Truncated back prop performs backprop every k steps of a much longer
     sequence.
 
 resume_from_checkpoint: To resume training from a specific checkpoint pass in the path here.
     This can be a URL.
 
-profiler: To profile individual steps during training and assist in
+profiler: To profile individual steps during training and assist in identifying bottlenecks.
 
-reload_dataloaders_every_epoch: Set to True to reload dataloaders every epoch
+reload_dataloaders_every_epoch: Set to True to reload dataloaders every epoch.
 
 auto_lr_find: If set to True, will `initially` run a learning rate finder,
     trying to optimize initial learning for faster convergence. Sets learning
     rate in self.lr or self.learning_rate in the LightningModule.
     To use a different key, set a string instead of True with the key name.
 
-replace_sampler_ddp: Explicitly enables or disables sampler replacement.
-    If not specified this will toggled automatically ddp is used
+replace_sampler_ddp: Explicitly enables or disables sampler replacement. If not specified, this
+    will be toggled automatically when DDP is used. By default it will add ``shuffle=True`` for
+    the train sampler and ``shuffle=False`` for val/test samplers. To customize it, set
+    ``replace_sampler_ddp=False`` and add your own distributed sampler.
 
 benchmark: If true enables cudnn.benchmark.
 
-deterministic: If true enables cudnn.deterministic
+deterministic: If true enables cudnn.deterministic.
 
 terminate_on_nan: If set to True, will terminate training (by raising a `ValueError`) at the
     end of each training batch, if any of the parameters or the loss are NaN or +/-inf.
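Taken together, the flags touched by this hunk compose as ordinary ``Trainer`` keyword arguments. Here is a minimal sketch, assuming a ``model`` LightningModule defined elsewhere and a hypothetical checkpoint path; the values are illustrative, not recommendations:

    import pytorch_lightning as pl

    trainer = pl.Trainer(
        precision=16,                         # half precision; 32 is the default
        truncated_bptt_steps=2,               # backprop every 2 steps of a longer sequence
        resume_from_checkpoint="last.ckpt",   # local path or URL (hypothetical path)
        profiler=True,                        # profile steps to identify bottlenecks
        reload_dataloaders_every_epoch=True,  # rebuild dataloaders each epoch
        auto_lr_find=True,                    # run the learning rate finder before training
        deterministic=True,                   # enables cudnn.deterministic
        terminate_on_nan=True,                # stop on NaN/inf loss or parameters
    )
    # trainer.fit(model)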
