Switch from register_full_backward_hooks to tensor hooks #979
Conversation
Thank you for working on this PR, @vivekmig! Looks much cleaner now, especially DeepLift.
Added a couple of questions and minor comments.
@vivekmig has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
This switches usage of full backward hooks to instead apply forward hooks, which then add tensor backward hooks, as suggested in #914. We initially did not choose this approach since it may have limitations for backward hooks on modules with multiple tensor inputs / outputs (a hook must be registered on each tensor independently), but all current use cases within Captum only require a single tensor input / output.
This change allows us to support in-place modules and removes the limitation on neuron input attribution. DeepLift also no longer needs valid module checks, as these are not applicable when using tensor hooks.
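For reference, the pattern looks roughly like the sketch below, assuming a module with a single tensor output; the hook names here are illustrative and not Captum's actual internals:

```python
import torch
import torch.nn as nn

def grad_hook(grad: torch.Tensor) -> torch.Tensor:
    # Tensor backward hook: receives the gradient w.r.t. the tensor
    # it was registered on and may return a (possibly modified) gradient.
    print("gradient shape:", grad.shape)
    return grad

def forward_hook(module: nn.Module, inputs, output: torch.Tensor) -> None:
    # Forward hook: fires after the module's forward pass. Registering
    # the backward hook on the output tensor itself (rather than via
    # module.register_full_backward_hook) works even for in-place
    # modules, since we hook the concrete tensor in the autograd graph.
    output.register_hook(grad_hook)

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(inplace=True))
handle = model[1].register_forward_hook(forward_hook)

out = model(torch.randn(2, 4, requires_grad=True))
out.sum().backward()  # grad_hook fires during backprop
handle.remove()
```

Note that with a module backward hook, gradients for all inputs / outputs arrive in one call, whereas tensor hooks are per-tensor, which is why the multiple-tensor case mentioned above would require registering a hook on each tensor separately.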