A question about the captum.metrics.infidelity #613
Thank you for the question, @bachml! I think the reason is that, in their implementation, the authors of the paper scale the explanation sum by a factor, as you can see here:
Summary: #613 Support normalizing the infidelity like the author's implementation https://github.com/chihkuanyeh/saliency_evaluation/blob/44a66e2531f30b803be3bf5b0786971b7e7f72a1/infid_sen_utils.py#L295
Pull Request resolved: #639
Reviewed By: vivekmig
Differential Revision: D27293213
Pulled By: aobo-y
fbshipit-source-id: d06c57a8b81a32e1509874f50e47950104139214
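As I understand the linked code, the normalization amounts to rescaling the explanation sum by the least-squares-optimal factor before computing the squared error, which cancels any constant multiplier on the attribution. Here is a minimal numpy sketch of that idea (the data, shapes, and function names are made up for illustration; this is not the author's or Captum's actual code):

```python
import numpy as np

# Hypothetical setup: I are random perturbations, attr is an attribution
# map, and dF is the corresponding change in model output.
rng = np.random.default_rng(0)
I = rng.normal(size=(1000, 10))
attr = rng.normal(size=10)
dF = I @ attr + rng.normal(scale=0.1, size=1000)

def normalized_infidelity(attr, I, dF):
    expl = I @ attr                                # sum_i I_i * attr_i per sample
    # Optimal scaling factor minimizing E[(beta * expl - dF)^2]
    beta = np.mean(expl * dF) / np.mean(expl ** 2)
    return np.mean((beta * expl - dF) ** 2)

# Scaling the attribution by a constant c makes beta shrink by 1/c,
# so the product beta * expl (and hence the metric) is unchanged:
a = normalized_infidelity(attr, I, dF)
b = normalized_infidelity(100 * attr, I, dF)
print(np.isclose(a, b))
```

With this normalization the metric is invariant to multiplying the attribution map by a constant, which matches the robustness observed in the author's implementation.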
Closing, since it got addressed.
As shown in the example in the API reference:
As you can see, if we multiply the attribution by a constant, e.g. attribution = saliency.attribute(input, target=3) * 100, then we get larger infidelity results.
This means the infidelity result is sensitive to the norm of the attribution map.
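The scale sensitivity of the un-normalized definition, infidelity = E[(Σ I·attr − ΔF)²], can be seen with a minimal numpy sketch (the data and the helper function here are hypothetical, not the Captum API):

```python
import numpy as np

# Hypothetical data: perturbations I, an attribution map attr, and the
# model output change dF that the attribution approximately explains.
rng = np.random.default_rng(0)
I = rng.normal(size=(1000, 10))
attr = rng.normal(size=10)
dF = I @ attr + rng.normal(scale=0.1, size=1000)

def infidelity(attr, I, dF):
    # Un-normalized definition: E[(sum_i I_i * attr_i - dF)^2]
    return np.mean((I @ attr - dF) ** 2)

print(infidelity(attr, I, dF))        # small residual: attr explains dF
print(infidelity(100 * attr, I, dF))  # much larger after scaling by 100
```

Because the squared error compares the raw explanation sum against ΔF, multiplying the attribution by a constant inflates the residual rather than leaving the metric unchanged.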
But when I reran the experiment with the author's implementation (the author who proposed this metric at NeurIPS 2019, https://github.com/chihkuanyeh/saliency_evaluation), it seems that the author's infidelity implementation is robust to the norm of the attribution map.
So is Captum's implementation different from the author's?