coverage increase (#1167)
* fixed docs

* Docs (#1164)

* fixed docs

* fixed docs

* fixed docs

* fixing Win failed import (#1163)

* version

* try fix distrib

* update try import

* fixed docs

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
williamFalcon and Borda committed Mar 17, 2020
1 parent e461ec0 commit 8de7b40
Showing 3 changed files with 2 additions and 178 deletions.
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -24,11 +24,11 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Removed

--
+- Removed duplicated module `pytorch_lightning.utilities.arg_parse` for loading CLI arguments ([#1167](https://github.com/PyTorchLightning/pytorch-lightning/issues/1167))

### Fixed

-- Fixed bug related to type checking of `ReduceLROnPlateau` lr schedulers([#1114](https://github.com/PyTorchLightning/pytorch-lightning/issues/1114))
+- Fixed bug related to type checking of `ReduceLROnPlateau` lr schedulers ([#1114](https://github.com/PyTorchLightning/pytorch-lightning/issues/1114))

## [0.7.1] - 2020-03-07

100 changes: 0 additions & 100 deletions pytorch_lightning/utilities/arg_parse.py

This file was deleted.

76 changes: 0 additions & 76 deletions pytorch_lightning/utilities/debugging.py
@@ -1,78 +1,2 @@
"""
These flags are useful to help debug a model.
Fast dev run
------------
This flag is meant for debugging a full train/val/test loop.
It'll activate callbacks, everything but only with 1 training and 1 validation batch.
Use this to debug a full run of your program quickly
.. code-block:: python
# DEFAULT
trainer = Trainer(fast_dev_run=False)
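For completeness, a minimal sketch of actually enabling the mode (assuming, by symmetry with the default above, that `fast_dev_run=True` turns it on; `model` stands in for any LightningModule):

.. code-block:: python

    # run a single training batch and a single validation batch through the loop
    trainer = Trainer(fast_dev_run=True)
    trainer.fit(model)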
Inspect gradient norms
----------------------

Looking at grad norms can help you figure out where training might be going wrong.

.. code-block:: python

    # DEFAULT (-1 doesn't track norms)
    trainer = Trainer(track_grad_norm=-1)

    # track the LP norm (P=2 here)
    trainer = Trainer(track_grad_norm=2)
Make model overfit on subset of data
------------------------------------

A useful debugging trick is to make your model overfit a tiny fraction of the data.
Setting `overfit_pct > 0` overrides `train_percent_check`, `val_percent_check`, and `test_percent_check`.

.. code-block:: python

    # DEFAULT: don't overfit (ie: normal training)
    trainer = Trainer(overfit_pct=0.0)

    # overfit on 1% of data
    trainer = Trainer(overfit_pct=0.01)
Print the parameter count by layer
----------------------------------

By default, Lightning prints a list of parameters *and submodules* when it starts training.

.. code-block:: python

    # DEFAULT: print a full list of all submodules and their parameters
    trainer = Trainer(weights_summary='full')

    # only print the top-level modules (i.e. the children of LightningModule)
    trainer = Trainer(weights_summary='top')
Print which gradients are nan
-----------------------------

This option prints a list of tensors with nan gradients::

    # DEFAULT
    trainer = Trainer(print_nan_grads=False)
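A minimal sketch of enabling the check (assuming, by symmetry with the default above, that `True` turns it on)::

    # print tensors with nan gradients after each backward pass
    trainer = Trainer(print_nan_grads=True)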
Log GPU usage
-------------

Lightning automatically logs GPU usage to the Test Tube logs.
It logs only at the metric logging interval, so it doesn't slow down training.
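This section has no snippet in the docstring; a minimal sketch, assuming the 0.7-era `log_gpu_memory` Trainer flag is what controls this behavior (an assumption, not confirmed by this file):

.. code-block:: python

    # DEFAULT: no GPU memory logging
    trainer = Trainer(log_gpu_memory=None)

    # log only the min/max GPU memory utilization across GPUs
    trainer = Trainer(log_gpu_memory='min_max')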
"""


class MisconfigurationException(Exception):
    # Raised when the user misconfigures Lightning (e.g. invalid Trainer arguments).
    pass
