Application of composite LRP #255

Closed
HugoTex98 opened this issue Jun 21, 2021 · 6 comments

@HugoTex98

HugoTex98 commented Jun 21, 2021

Hello everyone.

I was trying to implement the composite LRP like the one presented in G. Montavon, A. Binder, S. Lapuschkin, W. Samek, K.-R. Müller, "Layer-Wise Relevance Propagation: An Overview", in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Springer LNCS, vol. 11700, 2019, but without success...

Does anyone know how I can implement this?

Here is my model:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import (
    Conv2D,
    Dense,
    Dropout,
    InputLayer,
    Reshape,
)


def ConnectomeCNN(input_shape, keep_pr=0.65, n_filter=32, n_dense1=64, n_classes=2):
    bias_init = tf.constant_initializer(value=0.001)
    input_1 = InputLayer(input_shape=input_shape, name="input")
    conv1 = Conv2D(
        filters=n_filter,
        kernel_size=(1, input_shape[1]),
        strides=(1, 1),
        padding="valid",
        activation="selu",
        kernel_initializer="glorot_uniform",
        bias_initializer=bias_init,
        name="conv1",
        input_shape=input_shape,
    )
    dropout1 = Dropout(keep_pr, name="dropout1")
    conv2 = Conv2D(
        filters=n_filter * 2,
        kernel_size=(input_shape[1], 1),
        strides=(1, 1),
        padding="valid",
        activation="selu",
        kernel_initializer="glorot_uniform",
        bias_initializer=bias_init,
        name="conv2",
    )
    dropout2 = Dropout(keep_pr, name="dropout2")
    reshape = Reshape((n_filter * 2,), name="reshape")
    dense1 = Dense(
        n_dense1, activation="selu", name="dense1", kernel_regularizer="l1_l2"
    )  # kernel_regularizer = regularizers.l1(0.0001))
    if n_classes == 1:
        activation = "sigmoid"
    else:
        activation = "softmax"
    output = Dense(n_classes, activation=activation, name="output")

    model = keras.models.Sequential(
        [input_1, conv1, dropout1, conv2, dropout2, reshape, dense1, output]
    )
    return model
```

Thank you!

@sebastian-lapuschkin
Contributor

you can use the preimplemented LRPPreset* analyzers

@HugoTex98
Author

Thank you for answering @sebastian-lapuschkin !

I just have one doubt about using the LRPPreset analyzers. If I only want to see the positive contributions of my input variables, I should use LRPPresetA, right? And is it ok to use them with selu activations? I saw in relevance_analyzer.py that it is not advised...

@sebastian-lapuschkin
Contributor

  1. you could clamp your attributions at 0 and only keep the positive part

  2. try it out. I would assume that it works
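Point 1 can be done directly on the relevance array an analyzer returns, e.g. with NumPy (the values below are made up for illustration, not real analyzer output):

```python
import numpy as np

# made-up relevance map, as an LRP analyzer would return it
R = np.array([[0.4, -0.2],
              [-0.1, 0.7]])

# clamp at 0: keep only the positive contributions
R_pos = np.clip(R, 0.0, None)  # [[0.4, 0.0], [0.0, 0.7]]
```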

@HugoTex98
Author

When I use LRPPresetA or LRPPresetB without a specific neuron selection it works fine, but when I specify a neuron (I have 2 in the output) the relevance scores are negative (e.g. -3.1056861e-06). Is there any reason for this to happen?

@sebastian-lapuschkin
Contributor

sebastian-lapuschkin commented Jun 27, 2021

negative relevance scores for both presets are not an indicator that things are broken, cf this paper (alt link) for example, where in Fig. 1 and the appendix blue regions also have negative relevance attributed (read, in the heatmap wrt class tiger cat: "from the model's point of view, bernese mountain dog facial features are not tiger cat features, ie they provide evidence to the model for deciding against class tiger cat")

especially if the output on the non-dominant logit is negative (which is your case, and is likely if the model has decided otherwise), negative relevance reveals that the model does not decide for the class represented by your selected output neuron because "all that stuff does not look like the neuron's target class"
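the sign behaviour can be seen on a minimal hand-rolled example: for a single linear layer with zero bias, LRP conserves relevance, so the input relevances sum to the explained logit, and explaining a neuron whose logit is negative therefore yields a negative total (the numbers below are made up for illustration, not iNNvestigate output):

```python
import numpy as np

# one linear layer, no bias: z_j = sum_i x_i * W[i, j]
x = np.array([1.0, 2.0, -1.0])
W = np.array([[0.5, -1.0],
              [0.3,  0.5],
              [1.0,  0.5]])
z = x @ W                # logits: [0.1, -0.5]

j = 1                    # explain the non-dominant neuron (negative logit)
R = x * W[:, j]          # LRP input relevances for neuron j: [-1.0, 1.0, -0.5]

# conservation: the relevances sum to the explained logit,
# hence a negative total when that logit is negative
assert np.isclose(R.sum(), z[j])
```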

if this does not illuminate your situation sufficiently, please provide some more info regarding the decomposed model output (ie the output neuron's activation) and the resulting heatmap in input space (or whatever feature space you are analyzing)

best

@adrhill
Collaborator

adrhill commented Nov 22, 2021

Closing this issue as the missing example is tracked in #261.

@adrhill adrhill closed this as completed Nov 22, 2021