🐛 Bug
Hi, I am using Captum in a project of mine as the main implementation source for attribution methods. I am using a ResNet50 model trained on the ImageNet dataset and encountered weird behavior of the Deconvolution and KernelSHAP attribution methods. I have tested other attribution methods such as GuidedBackprop and they seem to work just fine. My main problem is that attribution maps created by, for example, the Deconvolution method look nothing like the ones featured in the Deconvolution paper or the ones generated by GuidedBackprop.
I am operating on 256x256 images from the ImageNet dataset.
Example
Image:
In all examples the goal was to create an attribution map for the target class Zebra
Attribution map created by the GuidedBackprop:
Attribution map created by the Deconvolution:
Attribution map created by KernelSHAP, using a feature mask that groups the pixels into 4x4 blocks for each color channel, and the params:
n_samples: 1024
perturbations_per_eval: 1
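For reference, a feature mask of the kind described above (4x4 pixel blocks, separate groups per color channel) can be built like this. This is a minimal sketch under my reading of the description, not the reporter's actual code; `block_feature_mask` is a hypothetical helper name.

```python
import torch

def block_feature_mask(channels=3, height=256, width=256, block=4):
    """Group a CxHxW image into block x block pixel groups, one set of
    group ids per channel (assumed interpretation of the mask above)."""
    blocks_h, blocks_w = height // block, width // block
    # one group id per (channel, block_row, block_col)
    ids = torch.arange(channels * blocks_h * blocks_w).reshape(channels, blocks_h, blocks_w)
    # expand each block id over its block x block pixel area
    mask = ids.repeat_interleave(block, dim=1).repeat_interleave(block, dim=2)
    return mask  # shape (channels, height, width), dtype long

# mask = block_feature_mask()  # pass as feature_mask= to KernelShap.attribute
```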
To Reproduce
Steps to reproduce the behavior:
Load the ResNet50 model and the class containing an instance of the captum.attr._utils.attribution.Attribution class
Wrap both elements into Lightning Modules using fabric.setup
Create the attribution map using the attribute method of the Attribution object
Expected behavior
The attributions created using KernelSHAP or Deconvolution should look similar to the ones described in the Deconvolution paper (or the ones observed by using other methods like GuidedBackprop), not like they do now.
Environment
Describe the environment used for Captum
- captum: 0.7.0
- torch: 2.2.2
- torchaudio: 2.2.2
- torchvision: 0.17.2
- lightning: 2.2.2
- lightning-utilities: 0.11.2
- OS (e.g., Linux): Linux Ubuntu 20.04.6 LTS
- How you installed Captum / PyTorch: Captum using pip / pytorch using conda
- Python version: 3.10.13
- CUDA/cuDNN version: 12.1 / 8.9.2_0
- GPU models and configuration: A100 40GB CUDA 12.2
- Any other relevant information: I am using hydra and fabric to instantiate objects