🐛 Bug
Between captum 0.5.0 and 0.6.0 the arguments `grad_input` and `grad_output` in the DeepLIFT implementation switched from being a tuple of length 1 to just being a PyTorch tensor. Some of the non-linearity functions switched correctly, such as `nonlinear`:

0.5.0: `captum/captum/attr/_core/deep_lift.py`, line 956 in c2be437
Now: https://github.com/pytorch/captum/blob/master/captum/attr/_core/deep_lift.py#L879
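For concreteness, here is a minimal sketch of how the two hook conventions differ in the `nonlinear` rule. This is not the actual captum source; `delta_in`, `delta_out`, and the function bodies are illustrative assumptions:

```python
import torch

# Sketch of the 0.5.0 convention: grad_input / grad_output arrive as
# 1-tuples, so [0] unwraps the full gradient tensor.
def nonlinear_tuple_style(grad_input, grad_output, delta_in, delta_out, eps=1e-10):
    return torch.where(
        delta_in.abs() < eps,
        grad_input[0],
        grad_output[0] * delta_out / delta_in,
    )

# Sketch of the 0.6.0 convention: both arguments are plain tensors,
# so no unwrapping is needed.
def nonlinear_tensor_style(grad_input, grad_output, delta_in, delta_out, eps=1e-10):
    return torch.where(
        delta_in.abs() < eps,
        grad_input,
        grad_output * delta_out / delta_in,
    )
```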
But I don't think the max pooling function got switched over.
0.5.0: `captum/captum/attr/_core/deep_lift.py`, line 1118 in c2be437
Now: https://github.com/pytorch/captum/blob/master/captum/attr/_core/deep_lift.py#L1023
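To illustrate what I mean, here is a hypothetical version of the suspected left-over pattern (the real `maxpool` rule is more involved; only the indexing issue is shown):

```python
import torch

def maxpool_suspected(grad_input, grad_output, delta_in, delta_out, eps=1e-10):
    # Suspected left-over: grad_input is now a tensor, so grad_input[0]
    # returns its first slice along dim 0 (e.g. the first example in the
    # batch), not the full gradient.
    return torch.where(
        delta_in.abs() < eps,
        grad_input[0],  # should be `grad_input` under the new convention
        grad_output * delta_out / delta_in,  # grad_output handled correctly
    )
```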
This causes only the first index of the tensor `grad_input` to be used rather than the full tensor. I'm not sure how this happened, because `grad_output` in the same function seems to have been changed correctly.

0.5.0: `captum/captum/attr/_core/deep_lift.py`, line 1075 in c2be437
Now: https://github.com/pytorch/captum/blob/master/captum/attr/_core/deep_lift.py#L1023
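A quick way to see the effect (self-contained demo, not captum code): indexing a 1-tuple with `[0]` unwraps the full tensor, while the same indexing on a plain tensor takes only its first slice:

```python
import torch

grad = torch.randn(8, 4)   # e.g. gradients for a batch of 8 examples

as_tuple = (grad,)         # 0.5.0-style hook argument
print(as_tuple[0].shape)   # torch.Size([8, 4]) -- unwraps the full tensor

as_tensor = grad           # 0.6.0-style hook argument
print(as_tensor[0].shape)  # torch.Size([4])    -- first example only
```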