Fix training resuming docs (#1265)
rzepinskip committed Mar 29, 2020
1 parent fb42872 commit b74a3c5
Showing 2 changed files with 7 additions and 11 deletions.
10 changes: 6 additions & 4 deletions docs/source/weights_loading.rst
@@ -84,9 +84,7 @@ To save your own checkpoint call:
 Checkpoint Loading
 ------------------
 
-You might want to not only load a model but also continue training it. Use this method to
-restore the trainer state as well. This will continue from the epoch and global step you last left off.
-However, the dataloaders will start from the first batch again (if you shuffled it shouldn't matter).
+To load a model along with its weights, biases and hyperparameters use following method:
 
 .. code-block:: python
@@ -95,4 +93,8 @@ However, the dataloaders will start from the first batch again (if you shuffled
     y_hat = model(x)
 A LightningModule is no different than a nn.Module. This means you can load it and use it for
-predictions as you would a nn.Module.
+predictions as you would a nn.Module.
+
+
+.. note:: To restore the trainer state as well use
+    :meth:`pytorch_lightning.trainer.trainer.Trainer.resume_from_checkpoint`.
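
For reference, a minimal sketch of the loading pattern the updated docs describe. "MyModel", the checkpoint path, and the input shape are hypothetical placeholders for a user-defined LightningModule and its data:

    import torch
    from pytorch_lightning import LightningModule

    class MyModel(LightningModule):
        ...  # hypothetical user-defined model

    # load_from_checkpoint restores weights, biases and hyperparameters
    model = MyModel.load_from_checkpoint('path/to/checkpoint.ckpt')
    model.eval()  # switch to inference mode, as with any nn.Module

    x = torch.randn(1, 28 * 28)  # hypothetical input
    y_hat = model(x)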
8 changes: 1 addition & 7 deletions pytorch_lightning/trainer/training_io.py
@@ -39,15 +39,9 @@
 .. code-block:: python
     from pytorch_lightning import Trainer
-    from pytorch_lightning.loggers import TestTubeLogger
-    logger = TestTubeLogger(
-        save_dir='./savepath',
-        version=1  # An existing version with a saved checkpoint
-    )
     trainer = Trainer(
-        logger=logger,
-        default_save_path='./savepath'
+        resume_from_checkpoint=PATH
     )
     # this fit call loads model weights and trainer state
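
And a minimal sketch of the simplified resuming pattern the new snippet documents; "MyModel" and the checkpoint path are again hypothetical placeholders:

    from pytorch_lightning import Trainer

    model = MyModel()  # hypothetical LightningModule, as above
    # resume_from_checkpoint restores the trainer state (epoch, global step)
    # in addition to the model weights
    trainer = Trainer(resume_from_checkpoint='path/to/checkpoint.ckpt')
    trainer.fit(model)  # continues training from where the checkpoint left off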
