replace train_percent_check with limit_train_batches (#2220)
* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* drop train_percent_check

* chlog

* deprecated

* deprecated

* deprecated

* tests

* tests

* Apply suggestions from code review

* tests

* hydra support

* tests

* hydra support

* hydra support

* hydra support

* tests

* typo

* typo

* Update test_dataloaders.py

* docs

* docs

* docs

* docs

Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
3 people committed Jun 17, 2020
1 parent 9945e87 commit 2411c3b
Showing 32 changed files with 416 additions and 247 deletions.
10 changes: 5 additions & 5 deletions CHANGELOG.md
@@ -21,10 +21,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Added

- Added overfit_batches, limit_xxx_batches flags (overfit now uses training set for all three) ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
- Added metric Base classes ([#1326](https://github.com/PyTorchLightning/pytorch-lightning/pull/1326), [#1877](https://github.com/PyTorchLightning/pytorch-lightning/pull/1877))
- Added Sklearn metrics classes ([#1327](https://github.com/PyTorchLightning/pytorch-lightning/pull/1327))
- Added Native torch metrics ([#1488](https://github.com/PyTorchLightning/pytorch-lightning/pull/1488))
- Added `overfit_batches`, `limit_{val|test}_batches` flags (overfit now uses training set for all three) ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
- Added metrics
* Base classes ([#1326](https://github.com/PyTorchLightning/pytorch-lightning/pull/1326), [#1877](https://github.com/PyTorchLightning/pytorch-lightning/pull/1877))
* Sklearn metrics classes ([#1327](https://github.com/PyTorchLightning/pytorch-lightning/pull/1327))
@@ -58,7 +55,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Deprecated

- Deprecated `overfit_pct`, `val_percent_check`, `test_percent_check` ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
- Deprecated flags: ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
* `overfit_pct` >> `overfit_batches`
* `val_percent_check` >> `limit_val_batches`
* `test_percent_check` >> `limit_test_batches`
- Deprecated `ModelCheckpoint`'s attributes `best` and `kth_best_model` ([#1799](https://github.com/PyTorchLightning/pytorch-lightning/pull/1799))
- Dropped official support/testing for older PyTorch versions <1.3 ([#1917](https://github.com/PyTorchLightning/pytorch-lightning/pull/1917))

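For readers of this commit, the deprecation mapping above translates one-to-one into Trainer arguments. A minimal sketch of the migration in user code (illustrative only, not part of the diff; the values are arbitrary):

from pytorch_lightning import Trainer

# deprecated names: expected to keep working with a deprecation warning until removed
trainer = Trainer(
    train_percent_check=0.1,
    val_percent_check=0.2,
    test_percent_check=0.3,
    overfit_pct=0.01,
)

# equivalent call with the flags introduced by this change
trainer = Trainer(
    limit_train_batches=0.1,
    limit_val_batches=0.2,
    limit_test_batches=0.3,
    overfit_batches=0.01,
)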
30 changes: 15 additions & 15 deletions docs/source/callbacks.rst
@@ -46,7 +46,7 @@ Example:
We successfully extended functionality without polluting our super clean
:class:`~pytorch_lightning.core.LightningModule` research code.

---------
---

.. automodule:: pytorch_lightning.callbacks.base
:noindex:
@@ -56,7 +56,7 @@ We successfully extended functionality without polluting our super clean
_abc_impl,
check_monitor_top_k,

---------
---

.. automodule:: pytorch_lightning.callbacks.early_stopping
:noindex:
@@ -66,36 +66,36 @@ We successfully extended functionality without polluting our super clean
_abc_impl,
check_monitor_top_k,

---------
---

.. automodule:: pytorch_lightning.callbacks.model_checkpoint
.. automodule:: pytorch_lightning.callbacks.gradient_accumulation_scheduler
:noindex:
:exclude-members:
_del_model,
_save_model,
_abc_impl,
check_monitor_top_k,

---------
---

.. automodule:: pytorch_lightning.callbacks.gradient_accumulation_scheduler
.. automodule:: pytorch_lightning.callbacks.lr_logger
:noindex:
:exclude-members:
_extract_lr,
_find_names

---

.. automodule:: pytorch_lightning.callbacks.model_checkpoint
:noindex:
:exclude-members:
_del_model,
_save_model,
_abc_impl,
check_monitor_top_k,

---------
---

.. automodule:: pytorch_lightning.callbacks.progress
:noindex:
:exclude-members:

---------

.. automodule:: pytorch_lightning.callbacks.lr_logger
:noindex:
:exclude-members:
_extract_lr,
_find_names
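The callbacks page reordered above documents the `Callback` base class; for orientation, a minimal custom callback of the kind that page describes might look like the sketch below (assuming the `(trainer, pl_module)` hook signature of this release; this code is not taken from the diff):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Callback

class PrintingCallback(Callback):
    # hooks receive the running Trainer and the LightningModule being trained
    def on_train_start(self, trainer, pl_module):
        print('Training is starting')

    def on_train_end(self, trainer, pl_module):
        print('Training is ending')

trainer = Trainer(callbacks=[PrintingCallback()])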
13 changes: 10 additions & 3 deletions docs/source/fast_training.rst
@@ -8,6 +8,8 @@ Fast Training
There are multiple options to speed up different parts of the training by choosing to train
on a subset of data. This could be done for speed or debugging purposes.

----------------------

Check validation every n epochs
-------------------------------
If you have a small dataset you might want to check validation every n epochs
@@ -17,6 +19,8 @@ If you have a small dataset you might want to check validation every n epochs
# DEFAULT
trainer = Trainer(check_val_every_n_epoch=1)

----------------------

Force training for min or max epochs
------------------------------------
It can be useful to force training for a minimum number of epochs or limit to a max number.
@@ -29,6 +33,7 @@ It can be useful to force training for a minimum number of epochs or limit to a max number.
# DEFAULT
trainer = Trainer(min_epochs=1, max_epochs=1000)

----------------------

Set validation check frequency within 1 training epoch
------------------------------------------------------
@@ -47,6 +52,8 @@ Must use an int if using an IterableDataset.
# check every 100 train batches (ie: for IterableDatasets or fixed frequency)
trainer = Trainer(val_check_interval=100)

----------------------

Use data subset for training, validation and test
-------------------------------------------------
If you don't want to check 100% of the training/validation/test set (for debugging or if it's huge), set these flags.
@@ -55,18 +62,18 @@ If you don't want to check 100% of the training/validation/test set (for debugging or if it's huge), set these flags.

# DEFAULT
trainer = Trainer(
train_percent_check=1.0,
limit_train_batches=1.0,
limit_val_batches=1.0,
limit_test_batches=1.0
)

# check 10%, 20%, 30% only, respectively for training, validation and test set
trainer = Trainer(
train_percent_check=0.1,
limit_train_batches=0.1,
limit_val_batches=0.2,
limit_test_batches=0.3
)

.. note:: ``train_percent_check``, ``limit_val_batches`` and ``limit_test_batches`` will be overwritten by ``overfit_batches`` if ``overfit_batches`` > 0. ``limit_val_batches`` will be ignored if ``fast_dev_run=True``.
.. note:: ``limit_train_batches``, ``limit_val_batches`` and ``limit_test_batches`` will be overwritten by ``overfit_batches`` if ``overfit_batches`` > 0. ``limit_val_batches`` will be ignored if ``fast_dev_run=True``.

.. note:: If you set ``limit_val_batches=0``, validation will be disabled.
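To make the two notes above concrete, a short illustrative sketch (not part of the diff; the values are arbitrary):

from pytorch_lightning import Trainer

# overfit_batches > 0 overrides the limit_* flags and reuses the training set
# for training, validation and test
trainer = Trainer(overfit_batches=0.01)

# setting limit_val_batches=0 disables validation entirely
trainer = Trainer(limit_val_batches=0)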
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -21,10 +21,10 @@ PyTorch Lightning Documentation
:caption: Python API

callbacks
hooks
lightning-module
loggers
metrics
hooks
trainer

.. toctree::
2 changes: 1 addition & 1 deletion docs/source/new-project.rst
@@ -239,7 +239,7 @@ Without changing a SINGLE line of your code, you can now do the following with t
tpu_cores=8,
precision=16,
early_stop_checkpoint=True,
train_percent_check=0.5,
limit_train_batches=0.5,
val_check_interval=0.25
)
2 changes: 1 addition & 1 deletion docs/source/weights_loading.rst
@@ -35,7 +35,7 @@ To change the checkpoint path pass in:

To modify the behavior of checkpointing pass in your own callback.

.. testcode::
.. code-block:: python
from pytorch_lightning.callbacks import ModelCheckpoint
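The rest of that documentation example is collapsed in this view; a hedged sketch of passing a custom checkpoint callback to the Trainer (the argument names below are assumptions for this release, not taken from the diff):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# keep only the best checkpoint as measured by validation loss
checkpoint_callback = ModelCheckpoint(
    filepath='my/checkpoint/dir',  # assumed argument name in this release
    monitor='val_loss',
    save_top_k=1,
    mode='min',
)
trainer = Trainer(checkpoint_callback=checkpoint_callback)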
2 changes: 1 addition & 1 deletion pytorch_lightning/__init__.py
@@ -52,7 +52,7 @@
else:
from pytorch_lightning.core import LightningModule
from pytorch_lightning.trainer import Trainer
from pytorch_lightning.trainer.seed import seed_everything
from pytorch_lightning.utilities.seed import seed_everything
from pytorch_lightning.callbacks import Callback
from pytorch_lightning.core import data_loader

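The hunk above only moves `seed_everything` to `pytorch_lightning.utilities.seed`; the package-level import keeps working. A short usage sketch (not part of the diff):

import pytorch_lightning as pl

# the top-level re-export is unchanged by this commit
pl.seed_everything(42)

# the new canonical location after this commit
from pytorch_lightning.utilities.seed import seed_everything
seed_everything(42)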
4 changes: 2 additions & 2 deletions pytorch_lightning/callbacks/lr_logger.py
@@ -1,7 +1,7 @@
r"""
Logging of learning rates
=========================
Learning Rate Logger
====================
Log learning rate for lr schedulers during training
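The retitled docstring belongs to the learning-rate logging callback; a minimal usage sketch (assuming the class is exported as `LearningRateLogger` in this release, which the diff does not show):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import LearningRateLogger

# records the learning rate of each configured scheduler so it appears in the experiment logger
trainer = Trainer(callbacks=[LearningRateLogger()])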