Format code #247

Merged: 3 commits, Jun 15, 2021
README.md (4 changes: 2 additions & 2 deletions)

@@ -47,7 +47,7 @@ Our goal is to make analyzing neural networks' predictions easy!

iNNvestigate can be installed with the following commands.
The library is based on Keras and therefore requires a supported [Keras-backend](https://keras.io/backend/)
-(Currently only the Tensorflow backend is supported. We test with Python 3.6, Tensorflow 1.12 and Cuda 9.x.):
+(Currently only the TensorFlow backend is supported. We test with Python 3.6, TensorFlow 1.12 and Cuda 9.x.):

```bash
pip install innvestigate
```

@@ -82,7 +82,7 @@ The iNNvestigate library contains implementations for the following methods:
* *attribution:*
* **input_t_gradient:** Input \* Gradient
* **deep_taylor[.bounded]:** [DeepTaylor](https://www.sciencedirect.com/science/article/pii/S0031320316303582?via%3Dihub) computes for each neuron a root point that is close to the input but whose output value is 0, and uses this difference to estimate each neuron's attribution recursively.
-* **pattern.attribution:** [PatternAttribution](https://arxiv.org/abs/1705.05598) applies Deep Taylor by searching rootpoints along the singal direction of each neuron.
+* **pattern.attribution:** [PatternAttribution](https://arxiv.org/abs/1705.05598) applies Deep Taylor by searching root points along the signal direction of each neuron.
* **lrp.\*:** [LRP](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140) recursively attributes to each neuron's inputs a relevance proportional to their contribution to the neuron's output.
* **integrated_gradients:** [IntegratedGradients](https://arxiv.org/abs/1703.01365) integrates the gradient along a path from the input to a reference.
* **deeplift.wrapper:** [DeepLIFT (wrapper around original code, slower)](http://proceedings.mlr.press/v70/shrikumar17a.html) computes a backpropagation based on "finite" gradients.
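For orientation, a minimal usage sketch (not part of this diff): any method identifier from the list above can be passed to `innvestigate.create_analyzer`, just as the embedding example later in this PR does. The toy model and input below are hypothetical placeholders.

```python
import numpy as np
from keras import Sequential
from keras.layers import Dense

import innvestigate

# Hypothetical stand-in model; any Keras model with raw (non-softmax)
# outputs works the same way.
model = Sequential()
model.add(Dense(16, activation="relu", input_shape=(10,)))
model.add(Dense(2, activation=None))

# Any identifier from the method list above can be used here:
analyzer = innvestigate.create_analyzer("integrated_gradients", model)

# `analyze` returns one attribution map per sample, matching the input shape.
x = np.random.rand(1, 10)
attributions = analyzer.analyze(x)
print(attributions.shape)  # (1, 10)
```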
docs/source/conf.py (62 changes: 34 additions & 28 deletions)

@@ -14,19 +14,20 @@
#
import os
import sys
-sys.path.insert(0, os.path.abspath('../..'))

sys.path.insert(0, os.path.abspath("../.."))


# -- Project information -----------------------------------------------------

-project = 'iNNvestigate'
-copyright = '2018, Maximilian Alber'
-author = 'Maximilian Alber'
+project = "iNNvestigate"
+copyright = "2018, Maximilian Alber"
+author = "Maximilian Alber"

# The short X.Y version
-version = ''
+version = ""
# The full version, including alpha/beta/rc tags
-release = 'not set'
+release = "not set"


# -- General configuration ---------------------------------------------------
@@ -39,21 +40,21 @@
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
-'sphinx.ext.autodoc',
-'sphinx.ext.mathjax',
+    "sphinx.ext.autodoc",
+    "sphinx.ext.mathjax",
]

# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
+templates_path = ["_templates"]

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
-source_suffix = '.rst'
+source_suffix = ".rst"

# The master toctree document.
-master_doc = 'index'
+master_doc = "index"

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -68,15 +69,15 @@
exclude_patterns = []

# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
+pygments_style = "sphinx"


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = 'alabaster'
+html_theme = "alabaster"

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
@@ -87,7 +88,7 @@
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+html_static_path = ["_static"]

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
@@ -103,7 +104,7 @@
# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
-htmlhelp_basename = 'iNNvestigatedoc'
+htmlhelp_basename = "iNNvestigatedoc"


# -- Options for LaTeX output ------------------------------------------------
@@ -112,15 +113,12 @@
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
-
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
-
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
-
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
@@ -130,19 +128,21 @@
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
-(master_doc, 'iNNvestigate.tex', 'iNNvestigate Documentation',
-'Maximilian Alber', 'manual'),
+    (
+        master_doc,
+        "iNNvestigate.tex",
+        "iNNvestigate Documentation",
+        "Maximilian Alber",
+        "manual",
+    ),
]


# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
-man_pages = [
-(master_doc, 'innvestigate', 'iNNvestigate Documentation',
-[author], 1)
-]
+man_pages = [(master_doc, "innvestigate", "iNNvestigate Documentation", [author], 1)]


# -- Options for Texinfo output ----------------------------------------------
@@ -151,10 +151,16 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
-(master_doc, 'iNNvestigate', 'iNNvestigate Documentation',
-author, 'iNNvestigate', 'One line description of project.',
-'Miscellaneous'),
+    (
+        master_doc,
+        "iNNvestigate",
+        "iNNvestigate Documentation",
+        author,
+        "iNNvestigate",
+        "One line description of project.",
+        "Miscellaneous",
+    ),
]


# -- Extension configuration -------------------------------------------------
examples/embedding_minimal_example.py (19 changes: 11 additions & 8 deletions)

@@ -1,18 +1,21 @@
-from keras import Sequential
-from keras.layers import Dense, Conv1D, Embedding, GlobalMaxPooling1D
import numpy as np
+from keras import Sequential
+from keras.layers import Conv1D, Dense, Embedding, GlobalMaxPooling1D

import innvestigate

model = Sequential()
model.add(Embedding(input_dim=219, output_dim=8))
-model.add(Conv1D(filters=64, kernel_size=8, padding='valid', activation='relu'))
+model.add(Conv1D(filters=64, kernel_size=8, padding="valid", activation="relu"))
model.add(GlobalMaxPooling1D())
-model.add(Dense(16, activation='relu'))
+model.add(Dense(16, activation="relu"))
model.add(Dense(2, activation=None))

-#test
-model.predict(np.random.randint(1, 219, (1,100))) # [[0.04913538 0.04234646]]
+# test
+model.predict(np.random.randint(1, 219, (1, 100))) # [[0.04913538 0.04234646]]

-analyzer = innvestigate.create_analyzer('lrp.epsilon', model, neuron_selection_mode='max_activation', **{'epsilon': 1})
-a = analyzer.analyze(np.random.randint(1, 219, (1,100)))
+analyzer = innvestigate.create_analyzer(
+    "lrp.epsilon", model, neuron_selection_mode="max_activation", **{"epsilon": 1}
+)
+a = analyzer.analyze(np.random.randint(1, 219, (1, 100)))
print(a[0], a[0].shape)
examples/nbconvert.py (68 changes: 39 additions & 29 deletions)

@@ -1,12 +1,19 @@
# Begin: Python 2/3 compatibility header small
# Get Python 3 functionality:
-from __future__ import\
-absolute_import, print_function, division, unicode_literals
-from future.utils import raise_with_traceback, raise_from
+from __future__ import absolute_import, division, print_function, unicode_literals

+import argparse
+import os
+import subprocess
+import sys

# catch exception with: except Exception as e
-from builtins import range, map, zip, filter
+from builtins import filter, map, range, zip
from io import open

import six
+from future.utils import raise_from, raise_with_traceback

# End: Python 2/3 compatability header small


@@ -15,20 +22,15 @@
###############################################################################


-import argparse
-import os
-import sys
-import subprocess


###############################################################################
###############################################################################
###############################################################################


if __name__ == "__main__":

def is_executable(filepath):
-#determine whether the target file exists and is executable
+# determine whether the target file exists and is executable
return os.path.isfile(filepath) and os.access(filepath, os.X_OK)

# Try jupyter binary that is the same directory as the running python.
@@ -38,13 +40,14 @@ def is_executable(filepath):

if not is_executable(jupyter_executable):
# Fallback to find any jupyter (and hope for the best).
-jupyter_executable = subprocess.check_output(["which jupyter"],
-shell=True).strip()

-#assert a valid executable of jupyter has been found
-assert is_executable(jupyter_executable), 'No executable for "jupyter" could be found'

+jupyter_executable = subprocess.check_output(
+    ["which jupyter"], shell=True
+).strip()

+# assert a valid executable of jupyter has been found
+assert is_executable(
+    jupyter_executable
+), 'No executable for "jupyter" could be found'

# Get all notebooks
notebook_dir = os.path.join(os.path.dirname(__file__), "notebooks")
@@ -55,10 +58,13 @@ def is_executable(filepath):
os.makedirs(output_dir)

parser = argparse.ArgumentParser(
description="Script to handle the example notebooks via command line.")
description="Script to handle the example notebooks via command line."
)
parser.add_argument(
'command', choices=["execute", "to_script", "to_script_and_execute"],
help="What to do.")
"command",
choices=["execute", "to_script", "to_script_and_execute"],
help="What to do.",
)
args = parser.parse_args()

if args.command == "execute":
Expand All @@ -71,24 +77,28 @@ def is_executable(filepath):
print()

call = [
jupyter_executable, "nbconvert",
jupyter_executable,
"nbconvert",
"--output-dir='%s'" % output_dir,
"--ExecutePreprocessor.timeout=-1",
"--to", "notebook", "--execute",
os.path.join(notebook_dir, notebook)
"--to",
"notebook",
"--execute",
os.path.join(notebook_dir, notebook),
]
subprocess.check_call(call)
elif args.command in ["to_script", "to_script_and_execute"]:
for notebook in notebooks:
print("Convert notebook:", notebook)
input_file = os.path.join(notebook_dir, notebook)
-output_file = os.path.join(
-output_dir, notebook.replace(".ipynb", ".py"))
+output_file = os.path.join(output_dir, notebook.replace(".ipynb", ".py"))

call = [
-jupyter_executable, "nbconvert",
+    jupyter_executable,
+    "nbconvert",
"--output-dir='%s'" % output_dir,
-"--to", "script",
+    "--to",
+    "script",
input_file,
]
subprocess.check_call(call)
@@ -108,7 +118,7 @@ def is_executable(filepath):

if args.command == "to_script_and_execute":
subprocess.check_call(
-[sys.executable, notebook.replace(".ipynb", ".py")],
-cwd=output_dir)
+    [sys.executable, notebook.replace(".ipynb", ".py")], cwd=output_dir
+)
else:
raise ValueError("Command not recognized")
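For reference (also not part of the diff): given the argparse choices above, the script would be invoked as `python examples/nbconvert.py execute` to run every example notebook in place, or `python examples/nbconvert.py to_script_and_execute` to convert the notebooks to plain `.py` scripts and then run those from the output directory.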