
Add batch_size to noise tunnel #555

Closed

Conversation


@NarineK NarineK commented Dec 10, 2020

Adding support for batch_size in NoiseTunnel as proposed in: #497
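For context: the idea is to split the `nt_samples` noise draws into smaller chunks so that each underlying attribution call fits in memory. A minimal pure-Python sketch of such a partitioning, with illustrative names rather than Captum's actual helpers:

```python
# Hypothetical sketch (not Captum's actual code) of splitting a total number
# of noise-tunnel samples into smaller attribution passes: full-size chunks
# followed by one remainder chunk when the batch size does not divide evenly.
def partition_samples(nt_samples, batch_size=None):
    if batch_size is None:
        return [nt_samples]  # no batching: one pass over all samples
    chunks = [batch_size] * (nt_samples // batch_size)
    if nt_samples % batch_size:
        chunks.append(nt_samples % batch_size)
    return chunks
```

Attribution results from each chunk can then be accumulated (e.g. as a running mean for smoothgrad) so peak memory is bounded by the chunk size rather than by `nt_samples`.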

@NarineK NarineK force-pushed the support_batch_size_for_noise_tunnel branch from c37830a to 666bc5a Compare December 15, 2020 02:11
@NarineK NarineK changed the title [WIP] Add batch_size to noise tunnel Add batch_size to noise tunnel Dec 15, 2020
@vivekmig vivekmig left a comment


Looks great, this is a really useful feature :) !

@@ -79,13 +80,16 @@ def multiplies_by_inputs(self):
@noise_tunnel_n_samples_deprecation_decorator
def attribute(
self,
inputs: Union[Tensor, Tuple[Tensor, ...]],
inputs: TensorOrTupleOfTensorsGeneric,
Contributor

nit: I think the reason the type hint initially didn't use TensorOrTupleOfTensorsGeneric is that the output type doesn't necessarily match the input, particularly if used with layer attribution. To apply to all cases, the return type would need to cover these cases as well, so it would need to be something like Tensor, Tuple[Tensor, ...], Tuple[Tensor, Tensor], Tuple[Tuple[Tensor, ...], Tensor], etc. But this is more readable and does cover most use cases, so either way works.

Contributor Author

@NarineK NarineK Dec 23, 2020


That's a good point! I assumed that was the reason we don't have overrides for different attribute signatures in this case.
This method was also missing a return type hint, which I added, also using TensorOrTupleOfTensorsGeneric. I was thinking we could use the TensorOrTupleOfTensorsGeneric shortcut instead of Union[Tensor, Tuple[Tensor, ...]]. But since TensorOrTupleOfTensorsGeneric is a generic, it forces the input and output types to agree. In that case I should probably avoid using it for the return type as well.

# if the algorithm supports targets, baselines and/or
# additional_forward_args they will be expanded based
# on the nt_samples_partition and corresponding kwargs
# variables will be updated accordingly
Contributor


nit: I think this came up before, but feature_mask might also need to be expanded for perturbation-based methods. Not related to this diff though, so feel free to leave that change for later.

Contributor Author


That's a good point! I'll add feature mask support here.
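As a rough illustration of the expansion discussed above, when each input example is repeated once per noise sample in the current partition, per-example arguments must be repeated to stay aligned with the expanded batch. This is an illustrative sketch only, not Captum's actual helper:

```python
# Illustrative sketch (not Captum's implementation): if each input example is
# repeated `nt_samples_partition` times for a chunk of noise samples, then
# per-example targets (and similarly baselines, additional_forward_args, or a
# feature_mask) must be repeated to line up with the expanded batch.
def expand_targets(targets, nt_samples_partition):
    return [t for t in targets for _ in range(nt_samples_partition)]
```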

@@ -36,6 +40,25 @@ def _get_multiargs_basic_config() -> Tuple[
return model, inputs, grads, additional_forward_args


def _get_multiargs_basic_config_large() -> Tuple[
Contributor


nit: It might be good to add a GPU / DataParallel test with this functionality if possible, it should only require adding a new config similar to these existing NoiseTunnel configs with this new parameter. https://github.com/pytorch/captum/blob/master/tests/attr/helpers/test_config.py#L282

Contributor Author


Good point! I'll add DP tests.

@NarineK NarineK force-pushed the support_batch_size_for_noise_tunnel branch from 98b8abf to d9669ce Compare December 24, 2020 02:27

@facebook-github-bot facebook-github-bot left a comment


@NarineK has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@NarineK merged this pull request in 8ba1480.

vivekmig pushed a commit that referenced this pull request Jan 21, 2021
Summary:
Adding support for batch_size in NoiseTunnel as proposed in: #497

Pull Request resolved: #555

Reviewed By: vivekmig

Differential Revision: D25700056

Pulled By: NarineK

fbshipit-source-id: ea34899035486798b1cf3c49ce850291d1f1e76c