Undesirable behavior of LayerActivation in networks with inplace ReLUs #156
Hi @mrsalehi, yes, this is a bug, thanks for pointing it out! We will push a fix for this soon.
facebook-github-bot pushed a commit that referenced this issue on Nov 11, 2019:
Summary: This PR fixes neuron / layer attributions with in-place operations by keeping appropriate clones of intermediate values to ensure that they are not modified by future operations. Addresses Issue: #156. Pull Request resolved: #165. Differential Revision: D18435244. Pulled By: vivekmig. fbshipit-source-id: c658baded1f781710f5a363a8b3652fd3333ca20
Fix has been merged here: 5bf06ba
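The mechanism described in the commit message can be illustrated with a hedged sketch (not Captum's actual implementation): a forward hook clones the captured activation so that a later in-place operation cannot overwrite the saved value.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 4, kernel_size=3)
relu = nn.ReLU(inplace=True)

saved = {}

def save_activation(module, inputs, output):
    # Without .clone(), saved["act"] would alias conv's output tensor and
    # would be rectified in place by the subsequent ReLU.
    saved["act"] = output.clone()

handle = conv.register_forward_hook(save_activation)
_ = relu(conv(torch.randn(1, 3, 8, 8)))
handle.remove()

# The saved activation still contains negative (pre-ReLU) values.
print((saved["act"] < 0).any())
```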
miguelmartin75 pushed a commit to miguelmartin75/captum that referenced this issue on Dec 20, 2019 (same fix as above: pytorch#156, pytorch#165).
NarineK pushed a commit to NarineK/captum-1 that referenced this issue on Nov 19, 2020 (same fix as above: pytorch#156, pytorch#165).
Original report by @mrsalehi:
Hi,
I was trying to use captum.attr._core.layer_activation.LayerActivation to get the activation of the first convolutional layer in a simple model. I computed the activation in two different ways and compared them afterwards (a minimal sketch of the setup is shown below). I expected a value close to zero to be printed as the output; however, the difference I got was far from zero.
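A minimal sketch of the kind of setup and comparison described, assuming a small conv net whose first convolutional layer is followed by an in-place ReLU (the model definition, layer names, and tensor shapes are illustrative, not the reporter's exact code):

```python
import torch
import torch.nn as nn
from captum.attr import LayerActivation

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 4, kernel_size=3)
        self.relu1 = nn.ReLU(inplace=True)  # in-place: overwrites conv1's output tensor
        self.pool1 = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool1(self.relu1(self.conv1(x)))

model = Net().eval()
input = torch.randn(1, 3, 8, 8)

# Activation of conv1 as reported by Captum.
layer_act = LayerActivation(model, model.conv1)
captum_activation = layer_act.attribute(input)

# The same activation computed directly from the layer.
manual_activation = model.conv1(input)

# Expected to print ~0; with the in-place ReLU (before the fix) the Captum
# result is already rectified, so the difference is large.
print(torch.norm(captum_activation - manual_activation))
```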
I hypothesize that the in-place ReLU layer after the convolutional layer acts on the conv output, since there were many zeros in the activation computed by Captum (i.e. layer_act.attribute(input)). In fact, when I changed the architecture of the network (see the sketch below), the outputs matched.
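A sketch of the modified architecture, assuming the only change was making the ReLU non-in-place (again illustrative, not the exact code from the report):

```python
import torch.nn as nn

class NetNoInplace(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 4, kernel_size=3)
        self.relu1 = nn.ReLU(inplace=False)  # conv1's output is left untouched
        self.pool1 = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool1(self.relu1(self.conv1(x)))
```

With this variant the activation returned by LayerActivation and the directly computed conv1 output agree.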
System information