
Weekly Patch Release v.1.2.5 [full merge, no squash] #6646

Merged: 10 commits, Mar 24, 2021

Changes from all commits

2 changes: 1 addition & 1 deletion .github/workflows/docs-checks.yml
@@ -98,7 +98,7 @@ jobs:
# First run the same pipeline as Read-The-Docs
cd docs
make clean
-make html --debug --jobs $(nproc) SPHINXOPTS="-W"
+make html --debug --jobs $(nproc) SPHINXOPTS="-W --keep-going"

- name: Upload built docs
uses: actions/upload-artifact@v2
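
For context (our note, not part of the diff): `-W` makes Sphinx treat warnings as errors and abort on the first one, while `--keep-going` (Sphinx >= 1.8) still fails the build but reports every warning before exiting; the Makefile change below adds the same flag. A sketch of the equivalent invocation through Sphinx's Python entry point:

```python
from sphinx.cmd.build import build_main

# same as: sphinx-build -b html -W --keep-going docs/source docs/build
# returns non-zero if any warning was emitted
exit_code = build_main(["-b", "html", "-W", "--keep-going", "docs/source", "docs/build"])
raise SystemExit(exit_code)
```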
121 changes: 8 additions & 113 deletions CHANGELOG.md
@@ -5,118 +5,21 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).


## [Unreleased] - 2021-MM-DD

### Added

- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))

- Added support to checkpoint after training steps in `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))

- Added `checkpoint` parameter to callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))


- Added `RunningStage.SANITY_CHECKING` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))


- Added `TrainerState.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING}` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))


- Added `Trainer.validate()` method to perform one evaluation epoch over the validation set ([#4948](https://github.com/PyTorchLightning/pytorch-lightning/pull/4948))


- Added `LightningEnvironment` for Lightning-specific DDP ([#5915](https://github.com/PyTorchLightning/pytorch-lightning/pull/5915))


- Added `auto_insert_metric_name` parameter to `ModelCheckpoint` ([#6277](https://github.com/PyTorchLightning/pytorch-lightning/pull/6277))


- Added arg to `self.log` that enables users to give custom names when dealing with multiple dataloaders ([#6274](https://github.com/PyTorchLightning/pytorch-lightning/pull/6274))


- Added no return warning to predict ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))
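
To illustrate the new `Trainer.validate()` entry above, a minimal runnable sketch; `TinyModel` is our illustrative stand-in, not code from this PR:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    """Tiny module defined only to demonstrate trainer.validate()."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        return self.layer(x).sum()

    def validation_step(self, batch, batch_idx):
        (x,) = batch
        self.log("val_sum", self.layer(x).sum())

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

    def train_dataloader(self):
        return DataLoader(TensorDataset(torch.randn(32, 4)), batch_size=8)

    def val_dataloader(self):
        return DataLoader(TensorDataset(torch.randn(16, 4)), batch_size=8)


model = TinyModel()
trainer = pl.Trainer(max_epochs=1, num_sanity_val_steps=0)
trainer.fit(model)

# new: run exactly one evaluation epoch over the validation set
trainer.validate(model)
```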

## [1.2.5] - 2021-03-23

### Changed

- Renamed `pytorch_lightning.callbacks.swa` to `pytorch_lightning.callbacks.stochastic_weight_avg` ([#6259](https://github.com/PyTorchLightning/pytorch-lightning/pull/6259))


- Refactor `RunningStage` and `TrainerState` usage ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))


- Changed `trainer.evaluating` to return `True` if validating or testing ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))


- Changed `setup()` and `teardown()` stage argument to take any of `{fit,validate,test,predict}` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
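
For the `setup()`/`teardown()` entry above, a sketch of a hook written against the new stage values; the datamodule and its branching are ours, purely illustrative:

```python
import pytorch_lightning as pl


class MyDataModule(pl.LightningDataModule):
    def setup(self, stage=None):
        # `stage` is now one of "fit", "validate", "test", "predict" (or None)
        if stage in (None, "fit"):
            self.train_set = ...  # build the training dataset here
        if stage in (None, "fit", "validate"):
            self.val_set = ...  # build the validation dataset here
        if stage in (None, "test"):
            self.test_set = ...  # build the test dataset here
```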


### Deprecated

- `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))


- Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
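
Both deprecations above imply one-line migrations; a sketch, with illustrative values:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# before (deprecated): ModelCheckpoint(period=2)
# after: checkpoint every 2 validation epochs
checkpoint_cb = ModelCheckpoint(every_n_val_epochs=2)

# before (deprecated): trainer.running_sanity_check
# after:               trainer.sanity_checking
```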


### Removed

- Removed support for passing a bool value to `profiler` argument of Trainer ([#6164](https://github.com/PyTorchLightning/pytorch-lightning/pull/6164))


- Removed no return warning from val/test step ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))


- Removed passing a `ModelCheckpoint` instance to `Trainer(checkpoint_callback)` ([#6166](https://github.com/PyTorchLightning/pytorch-lightning/pull/6166))


- Removed deprecated Trainer argument `enable_pl_optimizer` and `automatic_optimization` ([#6163](https://github.com/PyTorchLightning/pytorch-lightning/pull/6163))


- Removed deprecated metrics ([#6161](https://github.com/PyTorchLightning/pytorch-lightning/pull/6161))
* from `pytorch_lightning.metrics.functional.classification` removed `to_onehot`, `to_categorical`, `get_num_classes`, `roc`, `multiclass_roc`, `average_precision`, `precision_recall_curve`, `multiclass_precision_recall_curve`
* from `pytorch_lightning.metrics.functional.reduction` removed `reduce`, `class_reduce`


- Removed deprecated `ModelCheckpoint` arguments `prefix`, `mode="auto"` ([#6162](https://github.com/PyTorchLightning/pytorch-lightning/pull/6162))


- Removed `mode='auto'` from `EarlyStopping` ([#6167](https://github.com/PyTorchLightning/pytorch-lightning/pull/6167))


- Removed deprecated `LightningModule` `hparams` setter ([#6207](https://github.com/PyTorchLightning/pytorch-lightning/pull/6207))


- Removed `optimizer_idx` argument from `training_step` in manual optimization ([#6093](https://github.com/PyTorchLightning/pytorch-lightning/pull/6093))
- Update Gradient Clipping for the TPU Accelerator ([#6576](https://github.com/PyTorchLightning/pytorch-lightning/pull/6576))
- Refactored `setup()` to be typing-friendly ([#6590](https://github.com/PyTorchLightning/pytorch-lightning/pull/6590))


### Fixed

- Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))


- Move lightning module to correct device type when using LightningDistributedWrapper ([#6070](https://github.com/PyTorchLightning/pytorch-lightning/pull/6070))


- Do not print top-k verbose log with `ModelCheckpoint(monitor=None)` ([#6109](https://github.com/PyTorchLightning/pytorch-lightning/pull/6109))


- Fixed `ModelCheckpoint(monitor=None, save_last=True)` not saving checkpoints ([#6136](https://github.com/PyTorchLightning/pytorch-lightning/pull/6136))


- Fixed `ModelCheckpoint(save_top_k=0, save_last=True)` not saving the `last` checkpoint ([#6136](https://github.com/PyTorchLightning/pytorch-lightning/pull/6136))


- Fixed duplicate logs appearing in console when using the python logging module ([#5509](https://github.com/PyTorchLightning/pytorch-lightning/pull/5509), [#6275](https://github.com/PyTorchLightning/pytorch-lightning/pull/6275))


- Fixed `.teardown(stage='fit')` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))


- Fixed `.on_fit_{start,end}()` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))


- Fixed LightningModule `all_gather` on cpu tensors ([#6416](https://github.com/PyTorchLightning/pytorch-lightning/pull/6416))
- Fixed a bug where `all_gather` would not work correctly with `tpu_cores=8` ([#6587](https://github.com/PyTorchLightning/pytorch-lightning/pull/6587))
- Fixed comparing required versions ([#6434](https://github.com/PyTorchLightning/pytorch-lightning/pull/6434))
- Added Autocast in validation, test and predict modes for Native AMP ([#6565](https://github.com/PyTorchLightning/pytorch-lightning/pull/6565))
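
For the Native AMP entry above (our note): with `precision=16`, the evaluation loops now also run inside `torch.cuda.amp.autocast` rather than only `training_step`. A sketch, assuming a CUDA machine and any `LightningModule` instance `model`, e.g. the `TinyModel` sketched earlier:

```python
import pytorch_lightning as pl

trainer = pl.Trainer(gpus=1, precision=16)  # enables native AMP
trainer.fit(model)       # ran under autocast before this change
trainer.validate(model)  # now also runs under autocast
trainer.test(model)      # likewise
```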


## [1.2.4] - 2021-03-16
@@ -137,9 +40,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed an issue with `Tuner.scale_batch_size` not finding the batch size attribute in the datamodule ([#5968](https://github.com/PyTorchLightning/pytorch-lightning/pull/5968))
- Fixed an exception in the layer summary when the model contains torch.jit scripted submodules ([#6511](https://github.com/PyTorchLightning/pytorch-lightning/pull/6511))
- Fixed when Train loop config was run during `Trainer.predict` ([#6541](https://github.com/PyTorchLightning/pytorch-lightning/pull/6541))
- Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115))


## [1.2.3] - 2021-03-09
@@ -189,12 +90,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed error message for AMP + CPU incompatibility ([#6107](https://github.com/PyTorchLightning/pytorch-lightning/pull/6107))


- Disabled batch transfer in DP mode ([#6093](https://github.com/PyTorchLightning/pytorch-lightning/pull/6093))


- Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115))


## [1.2.0] - 2021-02-18

### Added
2 changes: 1 addition & 1 deletion Makefile
@@ -29,4 +29,4 @@ test: clean

docs: clean
pip install --quiet -r requirements/docs.txt
-	python -m sphinx -b html -W docs/source docs/build
+	python -m sphinx -b html -W --keep-going docs/source docs/build
10 changes: 5 additions & 5 deletions azure-pipelines.yml
@@ -95,12 +95,12 @@ jobs:
      python -m pytest benchmarks -v --maxfail=2 --durations=0
    displayName: 'Testing: benchmarks'

-  - bash: |
+  - script: |
      set -e
      python -m pytest pl_examples -v --maxfail=2 --durations=0
      python setup.py install --user --quiet
      bash pl_examples/run_ddp-example.sh
-      cd pl_examples/basic_examples
-      bash submit_ddp_job.sh
-      bash submit_ddp2_job.sh
      pip uninstall -y pytorch-lightning
+      # cd pl_examples/basic_examples
+      # bash submit_ddp_job.sh
+      # bash submit_ddp2_job.sh
    displayName: 'Examples'
50 changes: 42 additions & 8 deletions docs/source/advanced/multiple_loaders.rst
@@ -9,7 +9,7 @@ Multiple Datasets
Lightning supports multiple dataloaders in a few ways.

1. Create a dataloader that iterates multiple datasets under the hood (see the sketch just after this list).
2. In the training loop you can pass multiple loaders as a dict or list/tuple and lightning
will automatically combine the batches from different loaders.
3. In the validation and test loop you also have the option to return multiple dataloaders
which lightning will call sequentially.
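
For option 1, a minimal sketch (ours, not part of this change) is to merge several datasets behind one loader with ``torch.utils.data.ConcatDataset``:

.. code-block:: python

    import torch
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

    dataset_a = TensorDataset(torch.randn(10, 3))
    dataset_b = TensorDataset(torch.randn(20, 3))

    # a single dataloader that iterates both datasets under the hood
    loader = DataLoader(ConcatDataset([dataset_a, dataset_b]), batch_size=4)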
@@ -75,21 +75,38 @@ For more details please have a look at :attr:`~pytorch_lightning.trainer.trainer.

            loader_a = torch.utils.data.DataLoader(range(6), batch_size=4)
            loader_b = torch.utils.data.DataLoader(range(15), batch_size=5)

            # pass loaders as a dict. This will create batches like this:
            # {'a': batch from loader_a, 'b': batch from loader_b}
            loaders = {'a': loader_a,
                       'b': loader_b}

            # OR:
            # pass loaders as sequence. This will create batches like this:
            # [batch from loader_a, batch from loader_b]
            loaders = [loader_a, loader_b]

            return loaders

Furthermore, Lightning also supports that nested lists and dicts (or a combination) can
-be returned
+be returned.

+.. testcode::
+
+    class LitModel(LightningModule):
+
+        def train_dataloader(self):
+
+            loader_a = torch.utils.data.DataLoader(range(8), batch_size=4)
+            loader_b = torch.utils.data.DataLoader(range(16), batch_size=2)
+
+            return {'a': loader_a, 'b': loader_b}
+
+        def training_step(self, batch, batch_idx):
+            # access a dictionary with a batch from each dataloader
+            batch_a = batch["a"]
+            batch_b = batch["b"]


.. testcode::

@@ -103,12 +120,29 @@ be returned
            loader_c = torch.utils.data.DataLoader(range(64), batch_size=4)

            # pass loaders as a nested dict. This will create batches like this:
            # {'loaders_a_b': {'a': batch from loader a, 'b': batch from loader b},
            #  'loaders_c_d': {'c': batch from loader c, 'd': batch from loader d}}
-            loaders = {'loaders_a_b': {'a': loader_a, 'b': loader_b},
-                       'loaders_c_d': {'c': loader_c, 'd': loader_d}}
+            loaders = {
+                'loaders_a_b': {
+                    'a': loader_a,
+                    'b': loader_b
+                },
+                'loaders_c_d': {
+                    'c': loader_c,
+                    'd': loader_d
+                }
+            }
            return loaders

+        def training_step(self, batch, batch_idx):
+            # access the data
+            batch_a_b = batch["loaders_a_b"]
+            batch_c_d = batch["loaders_c_d"]
+
+            batch_a = batch_a_b["a"]
+            batch_b = batch_a_b["b"]
+
+            batch_c = batch_c_d["c"]
+            batch_d = batch_c_d["d"]

----------

Test/Val dataloaders
24 changes: 14 additions & 10 deletions docs/source/conf.py
@@ -13,7 +13,6 @@
# documentation root, use os.path.abspath to make it absolute, like shown here.

# import m2r
-import builtins
import glob
import os
import shutil
@@ -27,10 +26,13 @@

FOLDER_GENERATED = 'generated'
SPHINX_MOCK_REQUIREMENTS = int(os.environ.get('SPHINX_MOCK_REQUIREMENTS', True))
-if SPHINX_MOCK_REQUIREMENTS:
-    builtins.__LIGHTNING_SETUP__ = True

-import pytorch_lightning  # noqa: E402
+try:
+    from pytorch_lightning import info
+except ImportError:
+    # alternative https://stackoverflow.com/a/67692/4521646
+    sys.path.append(os.path.join(PATH_ROOT, "pytorch_lightning"))
+    import info

# -- Project documents -------------------------------------------------------

@@ -79,13 +81,13 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
# -- Project information -----------------------------------------------------

project = 'PyTorch Lightning'
-copyright = pytorch_lightning.__copyright__
-author = pytorch_lightning.__author__
+copyright = info.__copyright__
+author = info.__author__

# The short X.Y version
-version = pytorch_lightning.__version__
+version = info.__version__
# The full version, including alpha/beta/rc tags
-release = pytorch_lightning.__version__
+release = info.__version__

# -- General configuration ---------------------------------------------------

@@ -176,8 +178,8 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
# documentation.

html_theme_options = {
-    'pytorch_project': pytorch_lightning.__homepage__,
-    'canonical_url': pytorch_lightning.__homepage__,
+    'pytorch_project': info.__homepage__,
+    'canonical_url': info.__homepage__,
    'collapse_navigation': False,
    'display_version': True,
    'logo_only': False,
@@ -279,6 +281,7 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
    'torch': ('https://pytorch.org/docs/stable/', None),
    'numpy': ('https://numpy.org/doc/stable/', None),
    'PIL': ('https://pillow.readthedocs.io/en/stable/', None),
+    'torchmetrics': ('https://torchmetrics.readthedocs.io/en/stable/', None),
}

# -- Options for todo extension ----------------------------------------------
@@ -331,6 +334,7 @@ def package_list_from_file(file):
}
MOCK_PACKAGES = []
if SPHINX_MOCK_REQUIREMENTS:
+    MOCK_PACKAGES += ['fairscale']
    # mock also base packages when we are on RTD since we don't install them there
    MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements.txt'))
    MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements', 'extra.txt'))
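
For context (assumed wiring, not shown in this hunk): further down, `conf.py` typically hands the collected names to Sphinx so autodoc can import-mock the heavy dependencies during the docs build:

```python
# assumption: MOCK_PACKAGES is passed to autodoc later in conf.py
autodoc_mock_imports = MOCK_PACKAGES
```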
Expand Down
14 changes: 10 additions & 4 deletions docs/source/extensions/logging.rst
@@ -259,13 +259,19 @@ Configure console logging
*************************

Lightning logs useful information about the training process and user warnings to the console.
-You can retrieve the Lightning logger and change it to your liking. For example, increase the logging level
-to see fewer messages like so:
+You can retrieve the Lightning logger and change it to your liking. For example, adjust the logging level
+or redirect output for certain modules to log files:

-.. code-block:: python
+.. testcode::

    import logging
-    logging.getLogger("lightning").setLevel(logging.ERROR)
+
+    # configure logging at the root level of lightning
+    logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
+
+    # configure logging on module level, redirect to file
+    logger = logging.getLogger("pytorch_lightning.core")
+    logger.addHandler(logging.FileHandler("core.log"))

Read more about custom Python logging `here <https://docs.python.org/3/library/logging.html>`_.

4 changes: 3 additions & 1 deletion pl_examples/basic_examples/conv_sequential_example.py
@@ -189,6 +189,7 @@ def instantiate_datamodule(args):
    ])

    cifar10_dm = pl_bolts.datamodules.CIFAR10DataModule(
+        data_dir=args.data_dir,
        batch_size=args.batch_size,
        train_transforms=train_transforms,
        test_transforms=test_transforms,
@@ -206,6 +207,7 @@ def instantiate_datamodule(args):

    parser = ArgumentParser(description="Pipe Example")
    parser.add_argument("--use_rpc_sequential", action="store_true")
+    parser.add_argument("--manual_optimization", action="store_true")
    parser = Trainer.add_argparse_args(parser)
    parser = pl_bolts.datamodules.CIFAR10DataModule.add_argparse_args(parser)
    args = parser.parse_args()
@@ -216,7 +218,7 @@ def instantiate_datamodule(args):
    if args.use_rpc_sequential:
        plugins = RPCSequentialPlugin()

-    model = LitResnet(batch_size=args.batch_size, manual_optimization=not args.automatic_optimization)
+    model = LitResnet(batch_size=args.batch_size, manual_optimization=args.manual_optimization)

    trainer = pl.Trainer.from_argparse_args(args, plugins=[plugins] if plugins else None)
    trainer.fit(model, cifar10_dm)
2 changes: 1 addition & 1 deletion pl_examples/basic_examples/submit_ddp2_job.sh
@@ -24,4 +24,4 @@ source activate $1
# -------------------------

# run script from above
-srun python3 image_classifier.py --accelerator 'ddp2' --gpus 2 --num_nodes 2
+srun python3 simple_image_classifier.py --accelerator 'ddp2' --gpus 2 --num_nodes 2 --max_epochs 5
2 changes: 1 addition & 1 deletion pl_examples/basic_examples/submit_ddp_job.sh
@@ -24,4 +24,4 @@ source activate $1
# -------------------------

# run script from above
-srun python3 image_classifier.py --accelerator 'ddp' --gpus 2 --num_nodes 2
+srun python3 simple_image_classifier.py --accelerator 'ddp' --gpus 2 --num_nodes 2 --max_epochs 5