Commit: update the readme for 1.3.1

sebastian-lapuschkin committed Jan 6, 2021
1 parent 93694c2 commit aad9f7d
Showing 1 changed file with 55 additions and 47 deletions: README.md

# The LRP Toolbox for Artificial Neural Networks (1.3.1)

The Layer-wise Relevance Propagation (LRP) algorithm explains a classifier's prediction
specific to a given data point by attributing relevance scores to important components
of the input by using the topology of the learned model itself.
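To make the attribution idea concrete, here is a minimal numpy sketch of the basic LRP-ε rule for a single dense layer (an illustration of the principle only, not the Toolbox's own API; the function name is hypothetical):

```python
import numpy as np

def lrp_epsilon_dense(x, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out of a dense layer z = W @ x + b
    back onto its inputs x, using the basic LRP-epsilon rule."""
    z = W @ x + b                          # forward pre-activations
    s = R_out / (z + eps * np.sign(z))     # epsilon-stabilized ratio
    return x * (W.T @ s)                   # relevance of each input

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((3, 4))
b = np.zeros(3)
R_out = np.abs(rng.standard_normal(3))

R_in = lrp_epsilon_dense(x, W, b, R_out)
```

With zero bias and a tiny ε, the rule is (approximately) conservative: the relevance entering the layer equals the relevance leaving it, which is the property that makes layer-wise propagation through a whole network meaningful.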

The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and python. The Toolbox realizes LRP functionality for the Caffe Deep Learning Framework as an extension of Caffe source code published in 10/2015.

The implementations for Matlab and python are intended as a sandbox or playground to familiarize the user with the LRP algorithm and are therefore implemented with readability and transparency in mind. Models and data can be imported and exported using raw text formats, Matlab's `.mat` files and the `.npy` format for python/numpy/cupy.
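The optional cupy backend follows the common drop-in pattern sketched below (`xp` is a hypothetical alias; the Toolbox's actual import logic may differ): alias cupy as the array module when it is available and fall back to numpy otherwise, so the same array code runs on GPU or CPU.

```python
# prefer cupy (GPU) when installed, otherwise fall back to numpy (CPU)
try:
    import cupy as xp   # optional dependency
except ImportError:
    import numpy as xp

a = xp.arange(5.0)
total = float(a.sum())   # identical call on either backend
```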

<img src="doc/images/1.png" width="270"><img src="doc/images/2.png" width="270"><img src="doc/images/7.png" width="270">

To try out either the python-based MNIST demo, or the Caffe-based ImageNet demo …






## Obtaining the LRP Toolbox
Clone or download it from github!

### Installing the Toolbox

After having obtained the toolbox code, data and models of choice, simply move into the subpackage folder of your choice -- matlab, python or caffe-master-lrp -- and execute the installation script (written for Ubuntu 14.04 or newer).

    <obtain the toolbox>
    cd lrp_toolbox/$yourChoice
    bash install.sh

Make sure to at least skim through the installation scripts! For more details and instructions please refer to [the manual](https://github.com/sebastian-lapuschkin/lrp_toolbox/blob/master/doc/manual/manual.pdf).

#### Attention Caffe-Users
We highly recommend building LRP for Caffe via the [singularity image definition](singularity/caffe-lrp-cpu-u16.04.def); you will likely regret attempting the build on anything other than Ubuntu 14.04 LTS or Ubuntu 16.04 LTS.
In this case, we also recommend downloading *only* the content of the [singularity](singularity) folder.
Call …

Have a look at [the manual](https://github.com/sebastian-lapuschkin/lrp_toolbox/blob/master/doc/manual/manual.pdf).



## The LRP Toolbox Paper

When using (any part) of this toolbox, please cite [our paper](http://jmlr.org/papers/volume17/15-618/15-618.pdf)



## Misc & Related

For further research and projects involving LRP, visit [heatmapping.org](http://heatmapping.org)

Also, consider paying https://github.com/albermax/innvestigate a visit! Next to LRP, iNNvestigate efficiently implements a handful of additional DNN analysis methods and boasts a >500-fold increase in computation speed compared with our CPU-bound Caffe implementation!


## Updates and Version History

### New in 1.3.1:
#### Caffe implementation
* a slightly updated singularity image `.def`-file
* formula 11 now implements the vanilla backprop gradient
* formula 99 is now the only variant implementing Sensitivity Analysis
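As a rough numpy illustration of the distinction (the formula numbers refer to the Caffe-LRP configuration; the toy linear model and the helper below are hypothetical): the vanilla gradient propagates signed partial derivatives, while Sensitivity Analysis is commonly taken as their squared magnitudes.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

w = np.array([2.0, -3.0, 0.5])
f = lambda x: float(w @ x)        # toy linear "network"
x = np.ones(3)

grad = numerical_gradient(f, x)   # signed partials: vanilla backprop gradient
sensitivity = grad ** 2           # squared partials: sensitivity heatmap
```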


### New in 1.3.0:
#### Standalone Python implementation:
* update to python 3
* updated treatment of softmax and target class
* lrp_aware option for efficient calculation of multiple backward passes (at the cost of a more expensive forward pass)
* custom colormaps in render.py
* __gpu support__ when [cupy](https://github.com/cupy/cupy) is installed. this is an optional feature. without the cupy package, the python code will execute using the cpu/numpy.
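The trade-off behind the `lrp_aware` option can be sketched as follows (a hypothetical numpy illustration of the caching scheme, not the Toolbox's actual classes): the forward pass additionally stores the quantities LRP needs, so several backward passes for different target classes reuse them instead of recomputing.

```python
import numpy as np

class DenseLRP:
    """One dense layer that caches forward quantities for reuse by LRP."""
    def __init__(self, W, b):
        self.W, self.b = W, b

    def forward(self, x):
        self.x = x                       # cache the input
        self.z = self.W @ x + self.b     # cache the pre-activations
        return self.z

    def lrp(self, R_out, eps=1e-6):
        # reuses cached x and z: no per-pass recomputation
        s = R_out / (self.z + eps * np.sign(self.z))
        return self.x * (self.W.T @ s)

rng = np.random.default_rng(1)
layer = DenseLRP(rng.standard_normal((5, 8)), np.zeros(5))
out = layer.forward(rng.standard_normal(8))

# several target classes, but only one forward pass:
for k in range(3):
    R = np.zeros(5)
    R[k] = out[k]
    R_in = layer.lrp(R)
```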

#### Caffe implementation
* updated the installation config
* new [**recommended**](https://arxiv.org/abs/1910.09840) formula types 100, 102, 104
* support for Guided Backprop via formula type 166
* new python wrapper to use lrp in pycaffe
* pycaffe demo file
* bugfixes
* [singularity image definition](singularity/caffe-lrp-cpu-u16.04.def) for building a hassle-free OS-agnostic command line executable


### New in 1.2.0
#### The standalone implementations for python and Matlab:
* Convnets with Sum- and Maxpooling are now supported, including demo code.
* LRP-parameters can now be set for each layer individually
* w² and flat weight decomposition implemented.
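For a dense layer, the two rules can be sketched as follows (a numpy sketch of the standard definitions; function names are hypothetical): the w² rule redistributes relevance in proportion to squared weights, while the flat rule spreads each output neuron's relevance uniformly over its inputs.

```python
import numpy as np

def lrp_w2(W, R_out):
    """w^2 rule: input j receives W_kj^2 / sum_j' W_kj'^2 of each R_out_k."""
    W2 = W ** 2
    return W2.T @ (R_out / W2.sum(axis=1))

def lrp_flat(W, R_out):
    """flat rule: spread each R_out_k uniformly over its inputs."""
    ones = np.ones_like(W)
    return ones.T @ (R_out / ones.sum(axis=1))

W = np.array([[1.0, -2.0],
              [0.5,  0.5]])
R_out = np.array([4.0, 2.0])

R_w2 = lrp_w2(W, R_out)      # proportional to squared weights
R_flat = lrp_flat(W, R_out)  # each output's relevance split evenly
```

Both rules ignore the input values themselves, which is why they are useful for the input layer, where relevance is projected onto a neuron's receptive field; both conserve the total relevance.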

#### Caffe:
* Minimal output versions implemented.
* Matthew Zeiler et al.'s Deconvolution, Karen Simonyan et al.'s Sensitivity Maps, and aspects of Grégoire Montavon et al.'s Deep Taylor Decomposition are implemented, alongside the flat weight decomposition for uniformly projecting relevance scores onto a neuron's receptive field.

#### Also:
* Various optimizations, refactoring, bits and pieces here and there.




