Neuron Aggregation #495
Conversation
@vivekmig has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Looks great! Thank you for working on this, @vivekmig!
A question: are we allowing a slice type for neuron_selector and summing per example across all neurons? According to the documentation, the Callable can return a scalar per example, but can't it also return a slice?
    sum of the neurons in the layer or sum of neurons with
    activations in a particular range. It is expected that
    this function returns either a tensor with one element
    or a scalar per example.
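The quoted docstring can be made concrete with a couple of aggregation callables. This is a hedged sketch, not code from the pull request: the function names (sum_all_neurons, sum_in_range) and the activation bounds are illustrative assumptions; the only contract taken from the docstring is that the callable receives the layer's activations and returns one scalar per example.

```python
import torch

def sum_all_neurons(layer_output: torch.Tensor) -> torch.Tensor:
    # Aggregate by summing every neuron in the layer, per example.
    # Flatten all non-batch dimensions, then reduce over them.
    return layer_output.flatten(start_dim=1).sum(dim=1)

def sum_in_range(layer_output: torch.Tensor) -> torch.Tensor:
    # Aggregate only neurons whose activation falls in a chosen range
    # (the bounds 0.5 and 1.0 here are arbitrary illustrations).
    flat = layer_output.flatten(start_dim=1)
    mask = (flat > 0.5) & (flat < 1.0)
    return (flat * mask).sum(dim=1)
```

Either function could then be passed as neuron_selector in place of an index tuple, yielding a tensor of shape (batch,).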
For gradient attribution algorithms such as IG, it must return a scalar per example, doesn't it?
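The point in this comment can be illustrated with a minimal autograd sketch (assumed setup, not Captum code): gradient-based methods differentiate the selected quantity with respect to the inputs, so the selector must produce one well-defined scalar per example for the per-example gradients to make sense.

```python
import torch

# Stand-in for a batch of inputs and a layer's activations.
inputs = torch.randn(4, 3, requires_grad=True)
layer_out = inputs * 2  # toy "layer": doubles each input

# A valid selector output: one scalar per example.
selected = layer_out.sum(dim=1)  # shape (4,)

# Differentiate the per-example scalars w.r.t. the inputs.
# Summing over the batch gives independent per-example gradients here.
grads = torch.autograd.grad(selected.sum(), inputs)[0]
assert grads.shape == inputs.shape
```

If the selector instead returned a slice of the activations (a non-scalar per example), there would be no single target to differentiate, which is why the scalar-per-example requirement matters for IG-style methods.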
captum/_utils/gradient.py
Outdated
@@ -423,7 +423,9 @@ def _forward_layer_eval_with_neuron_grads(
    evals in a dictionary protected by a lock, analogous to the gather implementation
    for the core PyTorch DataParallel implementation.
    """
-   grad_enabled = True if gradient_neuron_index is not None or grad_enabled else False
+   grad_enabled = (
+       True if gradient_neuron_selector is not None or grad_enabled else False
nit: redundant if-else, grad_enabled = gradient_neuron_selector is not None or grad_enabled
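The nit can be demonstrated in isolation: a boolean expression already evaluates to True or False, so wrapping it in a conditional expression is redundant. This is a standalone sketch of the equivalence, using placeholder values rather than the actual Captum state.

```python
# Placeholder values standing in for the real arguments.
gradient_neuron_selector = None
grad_enabled = False

# Before (redundant if-else):
verbose = True if gradient_neuron_selector is not None or grad_enabled else False

# After (suggested simplification):
concise = gradient_neuron_selector is not None or grad_enabled

assert verbose == concise
```

The two forms are equivalent for every combination of inputs, since `is not None` and `or` on booleans already yield a bool.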
Summary: This adds support for neuron aggregation: neuron_selector can be a function which returns a custom aggregate of a layer's neurons, for all neuron methods other than NeuronConductance, which depends on output gradients. The neuron_index argument was renamed, and a deprecation decorator was added to warn when the old parameter is used as a keyword argument; this decorator can be removed prior to the 0.4.0 release. Documentation of the new callable functionality has been added to NeuronDeepLift and will be propagated to other relevant methods after review.

Pull Request resolved: pytorch#495
Reviewed By: miguelmartin75
Differential Revision: D24346065
Pulled By: vivekmig
fbshipit-source-id: c3853e19256de4c8c32a8ff615965bf513a5cd22
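The deprecation decorator mentioned in the summary could be sketched roughly as follows. This is a hypothetical illustration, not the actual Captum implementation: the decorator name and the renaming from neuron_index to neuron_selector (implied but not spelled out above) are assumptions.

```python
import functools
import warnings

def neuron_index_deprecation(func):
    # Hypothetical sketch: intercept the old keyword argument,
    # warn the caller, and forward it under the new name.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if "neuron_index" in kwargs:
            warnings.warn(
                "neuron_index is deprecated; use neuron_selector instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            kwargs["neuron_selector"] = kwargs.pop("neuron_index")
        return func(*args, **kwargs)
    return wrapper

@neuron_index_deprecation
def attribute(inputs, neuron_selector=None):
    # Stand-in for a neuron attribution method's entry point.
    return neuron_selector
```

Because the decorator only remaps keyword arguments, positional callers are unaffected, and removing it before 0.4.0 reduces to deleting the wrapper.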