
Deconvolution and KernelSHAP methods create uninformative attribution maps #1290

Open
ZetrextJG opened this issue May 27, 2024 · 0 comments
🐛 Bug

Hi, I am using Captum in a project of mine as the main source of attribution method implementations. I am using a ResNet50 model trained on the ImageNet dataset and encountered weird behavior of the Deconvolution and KernelSHAP attribution methods. I have tested other attribution methods such as GuidedBackprop and they seem to work just fine. My main problem is that the attribution maps created by, for example, the Deconvolution method look nothing like the ones featured in the Deconvolution paper or the ones generated by GuidedBackprop.

I am operating on 256x256 images from the ImageNet dataset.
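
For context, this is roughly the preprocessing I have in mind; the exact transform is not part of this report, so the resize and normalization values below are an illustrative assumption rather than my actual pipeline:

# Hypothetical preprocessing: resize ImageNet images to 256x256 and apply the
# standard ImageNet normalization expected by torchvision's ResNet50 weights.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])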

Example

Image:
image

In all examples the goal was to create an attribution map for the target class Zebra.

Attribution map created by the GuidedBackprop:
guided_backprop

Attribution map created by Deconvolution:
deconvolution

Attribution map created by KernelSHAP, using a feature mask that groups the pixels into 4x4 blocks for each color channel (a sketch of how I build such a mask follows the image) and the params:

  • n_samples: 1024
  • perturbations_per_eval: 1

kernel_shap
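
For reference, this is roughly how I construct such a 4x4 block feature mask; the helper name and exact construction below are an illustrative sketch, not copied verbatim from my code:

import torch

def make_block_feature_mask(height=256, width=256, channels=3, block=4):
    # Assign a distinct group id to every 4x4 spatial block, separately per
    # color channel, so KernelSHAP perturbs whole blocks instead of single pixels.
    blocks_per_col = height // block
    blocks_per_row = width // block
    mask = torch.arange(blocks_per_col * blocks_per_row).reshape(blocks_per_col, blocks_per_row)
    mask = mask.repeat_interleave(block, dim=0).repeat_interleave(block, dim=1)
    # Offset the ids per channel so each (channel, block) pair is its own feature.
    per_channel = [mask + c * blocks_per_col * blocks_per_row for c in range(channels)]
    return torch.stack(per_channel, dim=0)  # shape: (3, 256, 256)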

To Reproduce

Steps to reproduce the behavior:

  1. Load the ResNet50 model and a class holding an instance of the captum.attr._utils.attribution.Attribution class
  2. Wrap both elements into Lightning modules using fabric.setup
  3. Create the attribution map using the attribute method of the Attribution object (see the code below)
from abc import ABC

import torch
import torch.nn as nn
from captum.attr._utils.attribution import Attribution

# Wrapper holding the Captum attribution method together with the model.
class AttributionMethod(ABC, nn.Module):
    def __init__(self,
                 method: Attribution,
                 model: torch.nn.Module,
                 noise_tunnel: None | dict):
        ...

# classifier and explainer are instantiated from hydra configs and set up with fabric.
classifier = fabric.setup(instantiate(config.classifier))
explainer = fabric.setup(instantiate(config.explainer)(model=classifier))

for idx, batch in enumerate(dataloader):
    batch_imgs, batch_idx, batch_labels, batch_pred_labels = batch
    utils.log_imgs(batch_idx, batch_imgs, 'images')

    # Attribute with respect to the predicted (or configured) target class.
    target = utils.get_target_id(config, batch_pred_labels)
    batch_maps = explainer.method.attribute(batch_imgs, target=target, **kwargs)
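
For completeness, here is a minimal, framework-free sketch of the same attribution calls using Captum directly; the torchvision weights string, the zebra target index, the random stand-in batch, and the make_block_feature_mask helper sketched above are illustrative assumptions rather than my exact setup:

import torch
from torchvision.models import resnet50
from captum.attr import Deconvolution, GuidedBackprop, KernelShap

model = resnet50(weights="IMAGENET1K_V2").eval()

# images: a (N, 3, 256, 256) batch preprocessed as above; 340 is "zebra"
# in the standard ImageNet-1k class mapping (assumed here for illustration).
images = torch.rand(1, 3, 256, 256)  # stand-in for a real preprocessed batch
target = 340

deconv_maps = Deconvolution(model).attribute(images, target=target)
gbp_maps = GuidedBackprop(model).attribute(images, target=target)

feature_mask = make_block_feature_mask()  # hypothetical helper sketched earlier
kernel_shap_maps = KernelShap(model).attribute(
    images,
    target=target,
    feature_mask=feature_mask,
    n_samples=1024,
    perturbations_per_eval=1,
)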

Expected behavior

The attributions created using KernelSHAP or Deconvolution should look similar to the ones described in the Deconvolution paper (or to the ones obtained with other methods like GuidedBackprop), not like they do now.

Environment

Describe the environment used for Captum

- captum: 0.7.0
- torch: 2.2.2
- torchaudio: 2.2.2
- torchvision: 0.17.2
- lightning: 2.2.2
- lightning-utilities: 0.11.2
- OS (e.g., Linux): Linux Ubuntu 20.04.6 LTS
- How you installed Captum / PyTorch: Captum using pip / pytorch using conda
- Python version: 3.10.13
- CUDA/cuDNN version: 12.1 / 8.9.2_0
- GPU models and configuration: A100 40GB CUDA 12.2
- Any other relevant information: I am using hydra and fabric to instantiate objects