
Rename and fix overfit_pct, val_percent_check and test_percent_check #2213

Merged · 31 commits · Jun 17, 2020

Commits
0299727
fixed percent check for val/test
williamFalcon Jun 16, 2020
bf4b85b
fixed percent check for val/test
williamFalcon Jun 16, 2020
ef24198
fixed percent check for val/test
williamFalcon Jun 16, 2020
01a4e09
fixed percent check for val/test
williamFalcon Jun 16, 2020
987c637
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
892827d
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
3f776c2
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
4da2895
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
c180f12
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
4df9943
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
1e065c7
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
44240b5
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
65f06a0
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
c3d348a
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
324a30e
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
cd41b8e
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
d58932c
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
910037d
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
2d9eede
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
3b1fac1
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
1b2f127
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
f4937e0
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
770db96
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
1935bf8
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
142e46e
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
fe66d09
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
f7345fa
overfit_pct now uses train loaders for val and test and does not shuffle
williamFalcon Jun 17, 2020
dc2d547
add on fit_start on fit_end hooks
williamFalcon Jun 17, 2020
ff8951e
add on fit_start on fit_end hooks
williamFalcon Jun 17, 2020
33db866
add on fit_start on fit_end hooks
williamFalcon Jun 17, 2020
b41f8af
Merge branch 'master' into pctc
williamFalcon Jun 17, 2020
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -21,6 +21,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Added

- Added overfit_batches, limit_xxx_batches flags (overfit now uses training set for all three) ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
- Added metric Base classes ([#1326](https://github.com/PyTorchLightning/pytorch-lightning/pull/1326), [#1877](https://github.com/PyTorchLightning/pytorch-lightning/pull/1877))
- Added Sklearn metrics classes ([#1327](https://github.com/PyTorchLightning/pytorch-lightning/pull/1327))
- Added Native torch metrics ([#1488](https://github.com/PyTorchLightning/pytorch-lightning/pull/1488))
@@ -52,6 +53,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Deprecated

- Deprecated `overfit_pct`, `val_percent_check`, `test_percent_check` ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
- Deprecated `ModelCheckpoint`'s attributes `best` and `kth_best_model` ([#1799](https://github.com/PyTorchLightning/pytorch-lightning/pull/1799))
- Dropped official support/testing for older PyTorch versions <1.3 ([#1917](https://github.com/PyTorchLightning/pytorch-lightning/pull/1917))

11 changes: 9 additions & 2 deletions docs/source/debugging.rst
@@ -48,12 +48,19 @@ Make model overfit on subset of data
A good debugging technique is to take a tiny portion of your data (say 2 samples per class),
and try to get your model to overfit. If it can't, it's a sign it won't work with large datasets.

(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.overfit_batches`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. testcode::

# use only 1% of training data (and reuse the same training DataLoader, with shuffle off, for val and test)
trainer = Trainer(overfit_batches=0.01)

# or overfit a set number of batches
trainer = Trainer(overfit_batches=10)

With this flag, the train, val, and test loaders will all use the same (training) dataset. Lightning will also
replace the sampler in the training set to turn shuffling off for you.
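
Conceptually, turning shuffle off amounts to rebuilding the train ``DataLoader`` with a sequential sampler.
A minimal sketch of the idea (the helper name is illustrative; Lightning's internals differ in detail):

.. code-block:: python

from torch.utils.data import DataLoader, SequentialSampler

def disable_shuffle(loader: DataLoader) -> DataLoader:
    # rebuild the loader with a sequential sampler so every epoch
    # sees the same batches in the same order
    return DataLoader(
        loader.dataset,
        batch_size=loader.batch_size,
        sampler=SequentialSampler(loader.dataset),
        num_workers=loader.num_workers,
    )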

Print a summary of your LightningModule
---------------------------------------
12 changes: 6 additions & 6 deletions docs/source/fast_training.rst
@@ -56,17 +56,17 @@ If you don't want to check 100% of the training/validation/test set (for debugging)
# DEFAULT
trainer = Trainer(
train_percent_check=1.0,
limit_val_batches=1.0,
limit_test_batches=1.0
)

# check 10%, 20%, 30% only, respectively for training, validation and test set
trainer = Trainer(
train_percent_check=0.1,
limit_val_batches=0.2,
limit_test_batches=0.3
)

.. note:: ``train_percent_check``, ``limit_val_batches`` and ``limit_test_batches`` will be overwritten by ``overfit_batches`` if ``overfit_batches`` > 0. ``limit_val_batches`` will be ignored if ``fast_dev_run=True``.
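
For example, in the configuration below the ``limit_*`` values are effectively ignored because
``overfit_batches`` takes precedence (this just illustrates the documented behaviour):

.. code-block:: python

# overfit_batches > 0 overrides the limit_* flags
trainer = Trainer(
    overfit_batches=0.01,
    limit_val_batches=0.5,   # overwritten by overfit_batches
    limit_test_batches=0.5,  # overwritten by overfit_batches
)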

.. note:: If you set ``limit_val_batches=0``, validation will be disabled.
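
For example, both of these follow directly from the flag's semantics (an int is a number of batches, 0 disables):

.. code-block:: python

# disable validation entirely
trainer = Trainer(limit_val_batches=0)

# or keep a quick sanity check of just 2 validation batches per epoch
trainer = Trainer(limit_val_batches=2)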
2 changes: 2 additions & 0 deletions pytorch_lightning/callbacks/progress.py
@@ -98,6 +98,7 @@ def total_val_batches(self) -> int:
elif not self.trainer.disable_validation:
is_val_epoch = trainer.current_epoch % trainer.check_val_every_n_epoch == 0
total_val_batches = trainer.num_val_batches if is_val_epoch else 0
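# num_val_batches is a list (one entry per val dataloader); reduce it to a single total for the bar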
total_val_batches = sum(total_val_batches)
return total_val_batches

@property
@@ -111,6 +112,7 @@ def total_test_batches(self) -> int:
total_test_batches = len(self.trainer.test_dataloaders)
else:
total_test_batches = self.trainer.num_test_batches
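# num_test_batches is a list (one entry per test dataloader); reduce it to a single total for the bar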
total_test_batches = sum(total_test_batches)
return total_test_batches

def disable(self):
131 changes: 74 additions & 57 deletions pytorch_lightning/trainer/__init__.py
@@ -433,6 +433,40 @@ def on_train_end(self, trainer, pl_module):
# default used by the Trainer
trainer = Trainer(gradient_clip_val=0.0)


limit_test_batches
^^^^^^^^^^^^^^^^^^

How much of the test dataset to check.

Example::

# default used by the Trainer
trainer = Trainer(limit_test_batches=1.0)

# run through only 25% of the test set each epoch
trainer = Trainer(limit_test_batches=0.25)

# run for only 10 batches
trainer = Trainer(limit_test_batches=10)

limit_val_batches
^^^^^^^^^^^^^^^^^

How much of the validation dataset to check.
Useful when debugging or testing something that happens at the end of an epoch.

Example::

# default used by the Trainer
trainer = Trainer(limit_val_batches=1.0)

# run through only 25% of the validation set each epoch
trainer = Trainer(limit_val_batches=0.25)

# run for only 10 batches
trainer = Trainer(limit_val_batches=10)

log_gpu_memory
^^^^^^^^^^^^^^
Options:
@@ -652,29 +686,28 @@ def on_train_end(self, trainer, pl_module):

overfit_pct
^^^^^^^^^^^
Uses this much data of all datasets (training, validation, test).
Setting ``overfit_pct=0.01`` was equivalent to setting ``train_percent_check=0.01``,
``val_percent_check=0.01`` and ``test_percent_check=0.01``.

.. warning:: .. deprecated:: 0.8.0

Use `overfit_batches` instead. Will be removed in 1.0.0.

overfit_batches
^^^^^^^^^^^^^^^
Uses this much data of the training set. It will use the same training set for validation and testing.
If the training dataloaders have ``shuffle=True``, Lightning will automatically disable it.

Useful for quickly debugging or trying to overfit on purpose.

Example::

# default used by the Trainer
trainer = Trainer(overfit_batches=0.0)

# use only 1% of the train set (and use the train set for val and test)
trainer = Trainer(overfit_batches=0.01)

# overfit on 10 of the same batches
trainer = Trainer(overfit_batches=10)

precision
^^^^^^^^^
@@ -829,39 +862,7 @@ def on_train_end(self, trainer, pl_module):
test_percent_check
^^^^^^^^^^^^^^^^^^

.. warning:: Deprecated in v0.8.0. Please use `limit_test_batches` instead. Will be removed in 1.0.0.

How much of the test dataset to check.

Example::

# default used by the Trainer
trainer = Trainer(test_percent_check=1.0)

# run through only 25% of the test set each epoch
trainer = Trainer(test_percent_check=0.25)

track_grad_norm
^^^^^^^^^^^^^^^
@@ -955,20 +956,36 @@ def tbptt_split_batch(self, batch, split_size):
# do your own splitting on the batch
return splits

val_check_interval
^^^^^^^^^^^^^^^^^^

How often within one training epoch to check the validation set.
Can specify as float or int.

- use (float) to check within a training epoch
- use (int) to check every n steps (batches)

.. code-block:: python

# default used by the Trainer
trainer = Trainer(val_check_interval=1.0)

Example::

# check validation set 4 times during a training epoch
trainer = Trainer(val_check_interval=0.25)

# check validation set every 1000 training batches
# use this when using iterableDataset and your dataset has no length
# (ie: production cases with streaming data)
trainer = Trainer(val_check_interval=1000)
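
Under the hood, a float interval has to be resolved against the number of training batches in the epoch.
A rough sketch of that mapping (illustrative names, not Lightning's actual internals):

.. code-block:: python

def resolve_val_check_batch(val_check_interval, num_training_batches):
    # int: run validation every N training batches
    if isinstance(val_check_interval, int):
        return val_check_interval
    # float: run validation every fraction-of-an-epoch batches
    return max(1, int(num_training_batches * val_check_interval))

resolve_val_check_batch(0.25, 1000)  # -> 250, i.e. 4 validation runs per epoch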


val_percent_check
^^^^^^^^^^^^^^^^^

.. warning:: Deprecated in v0.8.0. Please use `limit_val_batches` instead. Will be removed in 1.0.0.

Example::

# run through only 25% of the validation set each epoch
trainer = Trainer(val_percent_check=0.25)

weights_save_path
^^^^^^^^^^^^^^^^^