
This work studies activation and saliency maps in image classification, which highlight the regions of an image the model focuses on when producing its final prediction.


WJiangH/Class_Activation_Map


Class_Activation_Map (TensorFlow)

This project includes three notebooks that show how to extract activation maps to explain the predictions of a neural network:

  1. Class_Activation_Map_MNIST.ipynb uses the Fashion MNIST dataset.
    The model is built from several Conv blocks, and the validation accuracy reaches 85% after a few epochs. The resulting heatmap is shown in the notebook: the darker areas in the background indicate where the neural network paid more attention.
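The class activation map itself can be computed as a weighted sum of the last convolutional layer's feature maps, weighted by the dense-layer weights of the predicted class. The following is a minimal sketch, assuming a small CNN that ends in GlobalAveragePooling2D followed by Dense (the architecture CAM requires); the layer sizes are illustrative, not the notebook's exact model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical small CNN in the spirit of the notebook: Conv blocks ending in
# GlobalAveragePooling2D + Dense, which is the structure CAM needs.
inputs = tf.keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)  # last conv layer
gap = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(gap)
model = tf.keras.Model(inputs, outputs)

# Second model that also exposes the last conv feature maps.
cam_model = tf.keras.Model(inputs, [x, outputs])

image = np.random.rand(1, 28, 28, 1).astype("float32")  # stand-in for a Fashion MNIST image
features, preds = cam_model.predict(image, verbose=0)   # features: (1, 14, 14, 64)
class_idx = int(np.argmax(preds[0]))

# Dense weights for the predicted class, one weight per feature map.
class_weights = model.layers[-1].get_weights()[0][:, class_idx]  # shape (64,)

# CAM = weighted sum of feature maps, then normalize to [0, 1] for display.
cam = np.dot(features[0], class_weights)                 # shape (14, 14)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The heatmap is then upscaled to the input resolution and overlaid on the image, which is what produces the dark/bright attention regions described above.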
  2. Class_Activation_Map_CatandDogs.ipynb uses Cats vs Dogs from tensorflow_datasets, a binary classification problem.
    The validation accuracy reaches 0.87 after 25 epochs; a sample result is shown in the notebook. Highlighted areas such as the eyes and nose play an important role in how the network classifies the object.
  3. Saliency_Map.ipynb uses the Inception V3 model to plot saliency maps, which also show what parts of the image the model focuses on when making its predictions.
  • The main difference is that saliency maps show the relevant pixels directly, rather than the learned features.
  • A saliency map can be generated by taking the gradient of the loss with respect to the image pixels.
  • Pixels whose changes strongly affect the loss appear bright in the saliency map. A sample result for a running golden retriever is shown in the notebook.
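The gradient step above can be sketched with tf.GradientTape. This is a minimal sketch, not the notebook's exact code: a tiny untrained model stands in for Inception V3, and the predicted-class score is used as the quantity to differentiate (a common variant of differentiating the loss).

```python
import tensorflow as tf

# Tiny stand-in classifier; in the notebook this would be Inception V3.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

image = tf.random.uniform((1, 32, 32, 3))  # stand-in for a preprocessed input image

with tf.GradientTape() as tape:
    tape.watch(image)                 # track gradients w.r.t. the input pixels
    preds = model(image)
    top_class = tf.argmax(preds[0])
    score = preds[0, top_class]       # score of the predicted class

grads = tape.gradient(score, image)   # d(score)/d(pixel), shape (1, 32, 32, 3)

# Collapse channels and normalize: bright pixels are those whose changes
# most strongly affect the prediction.
saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]   # shape (32, 32)
saliency /= tf.reduce_max(saliency) + 1e-8
```

Plotting `saliency` with a grayscale or hot colormap gives the bright-pixel visualization described above.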
