clean v2 docs (#691)
* updated gitignore

* Update README.md

* updated gitignore

* updated links in ninja file

* updated docs

* Update README.md

* Update README.md

* finished callbacks

* finished callbacks

* finished callbacks

* fixed left menu

* added callbacks to menu

* added direct links to docs

* added direct links to docs

* added direct links to docs

* added direct links to docs

* added direct links to docs

* fixing TensorBoard (#687)

* flake8

* fix typo

* fix tensorboardlogger
drop test_tube dependence

* formatting

* fix tensorboard & tests

* upgrade Tensorboard

* test formatting separately

* try to fix JIT issue

* add tests for 1.4

* added direct links to docs

* updated gitignore

* updated links in ninja file

* updated docs

* finished callbacks

* finished callbacks

* finished callbacks

* fixed left menu

* added callbacks to menu

* added direct links to docs

* added direct links to docs

* added direct links to docs

* added direct links to docs

* added direct links to docs

* added direct links to docs

* finished rebase

* making private members

* making private members

* making private members

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* set auto dp if no backend

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* working on trainer docs

* fixed lightning import

* cleared spaces

* cleared spaces

* cleared spaces

* cleared spaces

* cleared spaces

* cleared spaces

* cleared spaces

* cleared spaces

* cleared spaces

* cleared spaces

* finished lightning module

* finished lightning module

* finished lightning module

* finished lightning module

* added callbacks

* added loggers

* added loggers

* added loggers

* added loggers

* added loggers

* added loggers

* added loggers

* added loggers

* set auto dp if no backend

* added loggers

* added loggers

* added loggers

* added loggers

* added loggers

* added loggers

* flake 8

* flake 8

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
williamFalcon and Borda committed Jan 17, 2020
1 parent bde549c commit bc67689
Showing 22 changed files with 1,158 additions and 657 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -14,6 +14,8 @@ tests/save_dir
default/
lightning_logs/
tests/tests/
*.rst
/docs/source/*.md

# Byte-compiled / optimized / DLL files
__pycache__/
24 changes: 12 additions & 12 deletions docs/source/_templates/theme_variables.jinja
@@ -1,17 +1,17 @@
{%- set external_urls = {
'github': 'https://github.com/williamFalcon/pytorch-lightning',
'github_issues': 'https://github.com/williamFalcon/pytorch-lightning/issues',
'contributing': 'https://github.com/williamFalcon/pytorch-lightning/blob/master/CONTRIBUTING.md',
'docs': 'https://williamfalcon.github.io/pytorch-lightning',
'github': 'https://github.com/PytorchLightning/pytorch-lightning',
'github_issues': 'https://github.com/PytorchLightning/pytorch-lightning/issues',
'contributing': 'https://github.com/PytorchLightning/pytorch-lightning/blob/master/CONTRIBUTING.md',
'docs': 'https://pytorchlightning.github.io/pytorch-lightning',
'twitter': 'https://twitter.com/PyTorchLightnin',
'discuss': 'https://discuss.pytorch.org',
'tutorials': 'https://williamfalcon.github.io/pytorch-lightning/',
'previous_pytorch_versions': 'https://williamfalcon.github.io/pytorch-lightning/',
'home': 'https://williamfalcon.github.io/pytorch-lightning/',
'get_started': 'https://williamfalcon.github.io/pytorch-lightning/',
'features': 'https://williamfalcon.github.io/pytorch-lightning/',
'blog': 'https://williamfalcon.github.io/pytorch-lightning/',
'resources': 'https://williamfalcon.github.io/pytorch-lightning/',
'support': 'https://williamfalcon.github.io/pytorch-lightning/',
'tutorials': 'https://pytorchlightning.github.io/pytorch-lightning/',
'previous_pytorch_versions': 'https://pytorchlightning.github.io/pytorch-lightning/',
'home': 'https://pytorchlightning.github.io/pytorch-lightning/',
'get_started': 'https://pytorchlightning.github.io/pytorch-lightning/',
'features': 'https://pytorchlightning.github.io/pytorch-lightning/',
'blog': 'https://pytorchlightning.github.io/pytorch-lightning/',
'resources': 'https://pytorchlightning.github.io/pytorch-lightning/',
'support': 'https://pytorchlightning.github.io/pytorch-lightning/',
}
-%}
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -83,6 +83,7 @@
'sphinx.ext.autosummary',
'sphinx.ext.napoleon',
'recommonmark',
'sphinx.ext.autosectionlabel',
# 'm2r',
'nbsphinx',
]
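For context on this one-line change: sphinx.ext.autosectionlabel lets other pages reference a section by its title. A minimal sketch of the resulting extensions list follows; it only repeats entries visible in the hunk above, and the comment about what the extension does is standard Sphinx behaviour rather than something stated in this diff::

    # docs/source/conf.py (sketch) -- entries taken from the hunk above
    extensions = [
        'sphinx.ext.autosummary',
        'sphinx.ext.napoleon',
        'recommonmark',
        'sphinx.ext.autosectionlabel',  # newly added: sections become referenceable by title
        # 'm2r',
        'nbsphinx',
    ]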
32 changes: 29 additions & 3 deletions docs/source/examples.rst
@@ -1,8 +1,34 @@
Examples & Tutorials
====================
GAN
====
.. toctree::
:maxdepth: 3

pl_examples.domain_templates.gan

MNIST
====
.. toctree::
:maxdepth: 3

pl_examples.basic_examples.lightning_module_template

Multi-node (ddp) MNIST
====
.. toctree::
:maxdepth: 3

pl_examples.multi_node_examples.multi_node_ddp_demo

Multi-node (ddp2) MNIST
====
.. toctree::
:maxdepth: 3

pl_examples.multi_node_examples.multi_node_ddp2_demo

Imagenet
====
.. toctree::
:maxdepth: 3

pl_examples
pl_examples.full_examples.imagenet.imagenet_example
37 changes: 31 additions & 6 deletions docs/source/index.rst
@@ -3,23 +3,47 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to PyTorch-Lightning!
PyTorch-Lightning Documentation
=============================

.. toctree::
:maxdepth: 4
:maxdepth: 1
:name: start
:caption: Quick Start
:caption: Start Here

new-project
examples

.. toctree::
:maxdepth: 4
:name: docs
:caption: Docs
:caption: Python API

callbacks
lightning-module
logging
trainer

.. toctree::
:maxdepth: 1
:name: Examples
:caption: Examples

examples

.. toctree::
:maxdepth: 1
:name: Tutorials
:caption: Tutorials

tutorials

.. toctree::
:maxdepth: 1
:name: Common Use Cases
:caption: Common Use Cases

common-cases

documentation

.. toctree::
:maxdepth: 1
@@ -29,6 +53,7 @@ Welcome to PyTorch-Lightning!
CODE_OF_CONDUCT.md
CONTRIBUTING.md
BECOMING_A_CORE_CONTRIBUTOR.md
governance.md


Indices and tables
15 changes: 8 additions & 7 deletions docs/source/new-project.rst
@@ -1,13 +1,13 @@
Quick Start
===========
To start a new project define two files, a LightningModule and a Trainer file.
To illustrate Lightning power and simplicity, here's an example of a typical research flow.
| To start a new project define two files, a LightningModule and a Trainer file.
| To illustrate the power of Lightning and its simplicity, here's an example of a typical research flow.
Case 1: BERT
------------

Let's say you're working on something like BERT but want to try different ways of training or even different networks.
You would define a single LightningModule and use flags to switch between your different ideas.
| Let's say you're working on something like BERT but want to try different ways of training or even different networks.
| You would define a single LightningModule and use flags to switch between your different ideas.
.. code-block:: python
@@ -66,6 +66,7 @@ Then you could do rapid research by switching between these two and using the sa
**Notice a few things about this flow:**

1. You're writing pure PyTorch... no unnecessary abstractions or new libraries to learn.
2. You get free GPU and 16-bit support without writing any of that code in your model.
3. You also get all of the capabilities below (without coding or testing yourself).
1. You're writing pure PyTorch... no unnecessary abstractions or new libraries to learn.
2. You get free GPU and 16-bit support without writing any of that code in your model.
3. You also get early stopping, multi-gpu training, 16-bit and MUCH more without coding anything!
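To make the two-file pattern described in this Quick Start concrete, here is a minimal, self-contained sketch of a LightningModule plus a Trainer call. It is illustrative only: the class name, the random stand-in data, and the hook and flag names used here (training_step, configure_optimizers, train_dataloader, batch_idx, max_epochs) follow the general Lightning pattern and may differ slightly in the version this commit documents::

    import torch
    from torch.nn import functional as F
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl


    class CoolSystem(pl.LightningModule):
        """Model, loss, optimizer and data live together in one LightningModule."""

        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(28 * 28, 10)

        def forward(self, x):
            return self.layer(x.view(x.size(0), -1))

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = F.cross_entropy(self(x), y)
            return {'loss': loss}

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

        def train_dataloader(self):
            # random stand-in data so the sketch runs without downloading a dataset
            x = torch.randn(256, 1, 28, 28)
            y = torch.randint(0, 10, (256,))
            return DataLoader(TensorDataset(x, y), batch_size=32)


    # the "Trainer file": training flags (gpus, precision, ...) live here, not in the model
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(CoolSystem())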

127 changes: 80 additions & 47 deletions pytorch_lightning/callbacks/pt_callbacks.py
@@ -1,3 +1,9 @@
"""
Callbacks
====================================
Callbacks supported by Lightning
"""

import os
import shutil
import logging
@@ -8,26 +14,7 @@


class Callback(object):
"""Abstract base class used to build new callbacks.
# Properties
* params: dict. Training parameters
(eg. verbosity, batch size, number of epochs...).
Reference of the model being trained.
The `logs` dictionary that callback methods take as argument will contain keys
for quantities relevant to the current batch or epoch.
Currently, the `.fit()` method of the `Sequential` model class will include the following
quantities in the `logs` that it passes to its callbacks:
* on_epoch_end: logs include `acc` and `loss`, and
optionally include `val_loss`
(if validation is enabled in `fit`), and `val_acc`
(if validation and accuracy monitoring are enabled).
* on_batch_begin: logs include `size`,
the number of samples in the current batch.
* on_batch_end: logs include `loss`, and optionally `acc`
(if accuracy monitoring is enabled).
r"""Abstract base class used to build new callbacks.
"""

def __init__(self):
@@ -43,12 +30,30 @@ def set_model(self, model):
self.model = model

def on_epoch_begin(self, epoch, logs=None):
"""
called when the epoch begins
Args:
epoch (int): current epoch
logs (dict): key-value pairs of quantities to monitor
Example:
on_epoch_begin(epoch=2, logs={'val_loss': 0.2})
"""
pass

def on_epoch_end(self, epoch, logs=None):
pass

def on_batch_begin(self, batch, logs=None):
"""
called when the batch starts.
Args:
batch (Tensor): current batch tensor
logs (dict): key-value pairs of quantities to monitor
"""
pass

def on_batch_end(self, batch, logs=None):
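The hooks documented in the hunk above are easiest to see in a toy subclass. A hedged sketch follows: the class name EpochTimer is hypothetical, the import path is assumed from the file location shown in this diff (pytorch_lightning/callbacks/pt_callbacks.py), and how such a subclass is wired into training depends on the Trainer version and is not shown here::

    import logging

    from pytorch_lightning.callbacks.pt_callbacks import Callback


    class EpochTimer(Callback):
        """Toy callback: log when each epoch begins and ends."""

        def on_epoch_begin(self, epoch, logs=None):
            logging.info('epoch %d starting, logs=%s', epoch, logs)

        def on_epoch_end(self, epoch, logs=None):
            logging.info('epoch %d finished, logs=%s', epoch, logs)

        def on_batch_begin(self, batch, logs=None):
            pass  # called with the current batch; see the docstring above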
@@ -62,25 +67,33 @@ def on_train_end(self, logs=None):


class EarlyStopping(Callback):
"""Stop training when a monitored quantity has stopped improving.
r"""
Stop training when a monitored quantity has stopped improving.
# Arguments
monitor: quantity to be monitored.
min_delta: minimum change in the monitored quantity
Args:
monitor (str): quantity to be monitored.
min_delta (float): minimum change in the monitored quantity
to qualify as an improvement, i.e. an absolute
change of less than min_delta, will count as no
improvement.
patience: number of epochs with no improvement
patience (int): number of epochs with no improvement
after which training will be stopped.
verbose: verbosity mode.
mode: one of {auto, min, max}. In `min` mode,
verbose (bool): verbosity mode.
mode (str): one of {auto, min, max}. In `min` mode,
training will stop when the quantity
monitored has stopped decreasing; in `max`
mode it will stop when the quantity
monitored has stopped increasing; in `auto`
mode, the direction is automatically inferred
from the name of the monitored quantity.
Example::
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping
early_stopping = EarlyStopping('val_loss')
Trainer(early_stop_callback=early_stopping)
"""

def __init__(self, monitor='val_loss',
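As a quick reference tying together the arguments documented above, a hedged usage sketch; the argument and Trainer keyword names come from the docstring in this diff, while the specific values are illustrative::

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import EarlyStopping

    # stop when val_loss has not improved by at least 0.001 for 5 consecutive epochs
    early_stopping = EarlyStopping(
        monitor='val_loss',
        min_delta=0.001,
        patience=5,
        mode='min',
        verbose=True,
    )
    trainer = Trainer(early_stop_callback=early_stopping)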
@@ -150,20 +163,22 @@ def on_train_end(self, logs=None):


class ModelCheckpoint(Callback):
"""Save the model after every epoch.
The `filepath` can contain named formatting options,
which will be filled the value of `epoch` and
keys in `logs` (passed in `on_epoch_end`).
For example: if `filepath` is `weights.{epoch:02d}-{val_loss:.2f}.hdf5`,
then the model checkpoints will be saved with the epoch number and
the validation loss in the filename.
# Arguments
filepath: string, path to save the model file.
monitor: quantity to monitor.
verbose: verbosity mode, 0 or 1.
save_top_k: if `save_top_k == k`,
r"""
Save the model after every epoch.
Args:
filepath (str): path to save the model file.
Can contain named formatting options to be auto-filled.
Example::
# save epoch and val_loss in name
ModelCheckpoint(filepath='{epoch:02d}-{val_loss:.2f}.hdf5')
# saves file like: /path/epoch_2-val_loss_0.2.hdf5
monitor (str): quantity to monitor.
verbose (bool): verbosity mode, 0 or 1.
save_top_k (int): if `save_top_k == k`,
the best k models according to
the quantity monitored will be saved.
if `save_top_k == 0`, no models are saved.
Expand All @@ -172,19 +187,28 @@ class ModelCheckpoint(Callback):
if `save_top_k >= 2` and the callback is called multiple
times inside an epoch, the name of the saved file will be
appended with a version count starting with `v0`.
mode: one of {auto, min, max}.
mode (str): one of {auto, min, max}.
If `save_top_k != 0`, the decision
to overwrite the current save file is made
based on either the maximization or the
minimization of the monitored quantity. For `val_acc`,
this should be `max`, for `val_loss` this should
be `min`, etc. In `auto` mode, the direction is
automatically inferred from the name of the monitored quantity.
save_weights_only: if True, then only the model's weights will be
save_weights_only (bool): if True, then only the model's weights will be
saved (`model.save_weights(filepath)`), else the full model
is saved (`model.save(filepath)`).
period: Interval (number of epochs) between checkpoints.
period (int): Interval (number of epochs) between checkpoints.
Example::
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint
checkpoint_callback = ModelCheckpoint(filepath='my_path')
Trainer(checkpoint_callback=checkpoint_callback)
# saves checkpoints to my_path whenever 'val_loss' has a new min
"""

def __init__(self, filepath, monitor='val_loss', verbose=0,
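Similarly, a hedged sketch of the checkpoint arguments documented above; the filepath and values are illustrative, the argument and Trainer keyword names come from the docstring in this diff::

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint

    # keep the 3 best checkpoints by validation loss; epoch and val_loss fill the filename
    checkpoint_callback = ModelCheckpoint(
        filepath='my_path/{epoch:02d}-{val_loss:.2f}',
        monitor='val_loss',
        save_top_k=3,
        mode='min',
    )
    trainer = Trainer(checkpoint_callback=checkpoint_callback)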
@@ -330,11 +354,20 @@ def on_epoch_end(self, epoch, logs=None):


class GradientAccumulationScheduler(Callback):
"""Change gradient accumulation factor according to scheduling.
r"""
Change gradient accumulation factor according to scheduling.
Args:
scheduling (dict): scheduling in format {epoch: accumulation_factor}
Example::
# Arguments
scheduling: dict, scheduling in format {epoch: accumulation_factor}
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GradientAccumulationScheduler
# at epoch 5 start accumulating every 2 batches
accumulator = GradientAccumulationScheduler(scheduling={5: 2})
Trainer(accumulate_grad_batches=accumulator)
"""

def __init__(self, scheduling: dict):
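Finally, a hedged usage sketch for the scheduler documented above; the epochs and accumulation factors are illustrative, and the keyword form of the constructor call is an assumption, while the scheduling dict format and the Trainer keyword come from the docstring in this diff::

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import GradientAccumulationScheduler

    # accumulate every 2 batches from epoch 5, then every 4 batches from epoch 10
    accumulator = GradientAccumulationScheduler(scheduling={5: 2, 10: 4})
    trainer = Trainer(accumulate_grad_batches=accumulator)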