
A question about the captum.metrics.infidelity #613

Closed · bachml opened this issue Feb 8, 2021 · 2 comments

bachml commented Feb 8, 2021

As shown in the example in the API reference:

import numpy as np
import torch
from captum.attr import Saliency
from captum.metrics import infidelity

net = ImageClassifier()
saliency = Saliency(net)
input = torch.randn(2, 3, 32, 32, requires_grad=True)

# Compute saliency maps for class 3.
attribution = saliency.attribute(input, target=3)

# Define a perturbation function for the input.
def perturb_fn(inputs):
    noise = torch.tensor(np.random.normal(0, 0.003, inputs.shape)).float()
    return noise, inputs - noise

# Compute the infidelity score for the saliency maps.
infid = infidelity(net, perturb_fn, input, attribution)

As you can see, if we multiply the attribution by a constant, such as attribution = saliency.attribute(input, target=3) * 100, we get larger infidelity results.

This means the result of infidelity is sensitive to the norm of the attribution map.
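For concreteness, a minimal snippet reproducing this observation (continuing the example above; the factor 100 is arbitrary):

# Scaling the attribution by a constant changes the reported score.
attribution_scaled = attribution * 100
infid_scaled = infidelity(net, perturb_fn, input, attribution_scaled)
# infid_scaled differs from infid, although both explanations are
# equivalent up to a constant factor.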

But when I rerun the experiment with the reference implementation by the authors who proposed this metric at NeurIPS 2019 (https://github.com/chihkuanyeh/saliency_evaluation), their infidelity implementation appears to be robust to the norm of the attribution map.

So is Captum's implementation different from the authors' implementation?

NarineK (Contributor) commented Feb 15, 2021

Thank you for the question, @bachml! I think the reason is that, in their implementation, the authors of the paper scale the explanation sum by a factor, as you can see here:
https://github.com/chihkuanyeh/saliency_evaluation/blob/44a66e2531f30b803be3bf5b0786971b7e7f72a1/infid_sen_utils.py#L296
That scaling factor isn't included in the original paper, which is why we haven't included it in our implementation: https://arxiv.org/pdf/1901.09392.pdf
We could potentially perform that type of scaling based on an input flag as well.
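For reference, a minimal sketch of that kind of scaling, assuming it uses the least-squares-optimal factor beta as in the linked repo; the function and argument names below are hypothetical:

import torch

def scaled_infidelity(expl_dot_perturb, model_diffs):
    # Hypothetical sketch: infidelity with an optimal scaling factor beta.
    # expl_dot_perturb: tensor of I . a values, one per perturbation sample
    # model_diffs:      tensor of f(x) - f(x - I) values, same shape
    # beta minimizes E[(beta * (I . a) - (f(x) - f(x - I)))^2], so the
    # result is invariant to multiplying the attribution a by a constant.
    beta = (expl_dot_perturb * model_diffs).mean() / (expl_dot_perturb ** 2).mean()
    return ((beta * expl_dot_perturb - model_diffs) ** 2).mean()

Because beta absorbs any constant factor on the attribution, attribution * 100 would yield the same score as attribution under this scaling.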

facebook-github-bot pushed a commit that referenced this issue Apr 19, 2021
Summary:
#613

Support normalizing the infidelity like the author's implementation https://github.com/chihkuanyeh/saliency_evaluation/blob/44a66e2531f30b803be3bf5b0786971b7e7f72a1/infid_sen_utils.py#L295

Pull Request resolved: #639

Reviewed By: vivekmig

Differential Revision: D27293213

Pulled By: aobo-y

fbshipit-source-id: d06c57a8b81a32e1509874f50e47950104139214
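With that change, the scaling can be enabled via a flag on infidelity; a usage sketch, assuming the flag introduced in PR #639 is named normalize:

# Assuming the flag added by the commit above is named `normalize`:
infid_normalized = infidelity(net, perturb_fn, input, attribution, normalize=True)
# With normalization on, scaling the attribution by a constant should no
# longer change the infidelity score.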
NarineK (Contributor) commented May 25, 2021

Closing, since it got addressed.

NarineK closed this as completed May 25, 2021