saliency-benchmark

Repository for benchmarking different post-hoc XAI explanation methods on image datasets. Here is a quick guide on how to install and use the repo. More information about installation and usage can be found in the documentation.

Install

To install the project, follow these steps:

  1. Clone the repository:

    git clone https://github.com/MarcoParola/saliency-benchmark.git
  2. Navigate to the project directory:

    cd saliency-benchmark
  3. Create a virtual environment:

    python -m venv env
  4. Activate the virtual environment:

    . env/bin/activate
  5. Install the dependencies:

    python -m pip install -r requirements.txt

These steps will set up your working environment, install necessary dependencies, and prepare you to run the project.

Training

To train the networks using this repository, use the following command:

python3 train.py model=VGG11_Weights.IMAGENET1K_V1 dataset.name=cifar10 train.finetune=True
  • model: Specifies the pre-trained model to use. The full list of available models can be found here.

  • dataset.name: Specifies the dataset to use. The supported datasets are:

    • cifar10
    • cifar100
    • caltech101
    • mnist
    • svhn
    • oxford-iiit-pet
  • train.finetune: Determines the training mode.

    • True: Fine-tunes the entire model.
    • False: Uses the model as a feature extractor.

These parameters let you customize the training process to your requirements. For more detailed configuration, refer to or modify the train section of the config.yaml file; an example of the feature-extractor mode is shown below.
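
For example, using the same override syntax as the command above (model and dataset names are only illustrative), the following run keeps the backbone as a feature extractor instead of fine-tuning the whole model:

    python3 train.py model=VGG11_Weights.IMAGENET1K_V1 dataset.name=cifar10 train.finetune=False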

Testing

To evaluate the trained model, use the following command:

python3 test.py

You need to specify the following parameters in the config.yaml file:

  • model: The pre-trained model to use.
  • dataset.name: The dataset used for testing.
  • checkpoint: Path to the model checkpoint. Choose from the model checkpoints available in the checkpoints folder.
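
A minimal sketch of the corresponding config.yaml entries is shown below; the exact nesting and the checkpoint file name are assumptions and should be adapted to your local configuration:

    model: VGG11_Weights.IMAGENET1K_V1
    dataset:
      name: cifar10
    checkpoint: checkpoints/your_checkpoint_file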

Evaluate Explainability

After training and testing the model, you can evaluate the explainability of its predictions with the following command:

python3 evaluate_saliency.py

You need to specify the following parameters in the config.yaml file:

  • model: The pre-trained model to use.
  • dataset.name: The dataset used for testing.
  • checkpoint: Path to the model checkpoint. Choose from the model checkpoints available in the checkpoints folder.
  • saliency.method: Saliency method used for evaluating the model's explanations. The supported methods are: gradcam, rise, sidu, lime.
  • metrics.output_file: Specifies the file name for saving the evaluation metrics.
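
As a rough sketch, the corresponding config.yaml entries could look like the following; the nesting, the checkpoint name, and the output file name are illustrative assumptions:

    model: VGG11_Weights.IMAGENET1K_V1
    dataset:
      name: cifar10
    checkpoint: checkpoints/your_checkpoint_file
    saliency:
      method: gradcam
    metrics:
      output_file: saliency_metrics.csv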
