Adaptive Attention in PyTorch

PyTorch implementation of the paper Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning.
The original Torch implementation by Lu et al. can be found here.
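
At the heart of the paper is a visual sentinel s_t, a gated copy of the LSTM memory that lets the decoder "fall back" on its language model instead of the image, and a gate beta_t that mixes the two. The sketch below is a minimal, illustrative PyTorch rendering of those equations; the layer names and sizes are assumptions, not this repo's exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def visual_sentinel(x_t, h_prev, m_t, W_x, W_h):
    # g_t = sigmoid(W_x x_t + W_h h_{t-1});  s_t = g_t * tanh(m_t)
    g_t = torch.sigmoid(W_x(x_t) + W_h(h_prev))
    return g_t * torch.tanh(m_t)

class AdaptiveAttention(nn.Module):
    """Minimal sketch of adaptive attention with a visual sentinel."""
    def __init__(self, hidden_size, att_size):
        super().__init__()
        self.v_att = nn.Linear(hidden_size, att_size)  # projects spatial features V
        self.s_att = nn.Linear(hidden_size, att_size)  # projects the sentinel s_t
        self.h_att = nn.Linear(hidden_size, att_size)  # projects decoder state h_t
        self.w = nn.Linear(att_size, 1)                # scores each candidate

    def forward(self, V, h_t, s_t):
        # V: (batch, k, hidden) spatial features; h_t, s_t: (batch, hidden)
        h_proj = self.h_att(h_t).unsqueeze(1)                        # (batch, 1, att)
        z = self.w(torch.tanh(self.v_att(V) + h_proj)).squeeze(-1)   # (batch, k)
        z_s = self.w(torch.tanh(self.s_att(s_t) + self.h_att(h_t)))  # (batch, 1)
        alpha_hat = F.softmax(torch.cat([z, z_s], dim=1), dim=1)     # over k+1 slots
        c_t = (alpha_hat[:, :-1].unsqueeze(-1) * V).sum(dim=1)       # visual context
        beta = alpha_hat[:, -1:]                                     # sentinel gate
        c_hat = beta * s_t + (1 - beta) * c_t                        # adaptive context
        return c_hat, alpha_hat[:, :-1], beta
```

When beta_t is close to 1 the decoder is relying on the sentinel (its language model) rather than the image, which is what the green (1-beta) values in the results below visualize.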


Instructions

  1. Download the COCO 2014 dataset from here. In particular, you'll need the 2014 Training, Validation and Testing images, as well as the 2014 Train/Val annotations.

  2. Download Karpathy's Train/Val/Test split from here (a short sketch for reading this file follows the list).

  3. If you want to do evaluation on COCO, make sure to download the COCO API from here if you're on Linux, or from here if you're on Windows. Then download the COCO caption toolkit from here and rename the folder to cococaption. (This also requires Java; simply download it from here if you don't have it.)
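
For orientation, here is a minimal sketch of reading Karpathy's split file; the key names follow the published dataset_coco.json format.

```python
import json
from collections import defaultdict

# Group images by split; each image carries five tokenized reference captions.
with open('dataset_coco.json') as f:
    data = json.load(f)

splits = defaultdict(list)
for img in data['images']:
    captions = [s['tokens'] for s in img['sentences']]
    splits[img['split']].append((img['filename'], captions))

print({k: len(v) for k, v in splits.items()})  # train / restval / val / test counts
```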

Files

preprocess.py Creates the WORDMAP.json file and the .h5 files (a sketch of inspecting these outputs follows this list)
dataset.py Creates the custom dataset
util.py Functions used throughout the code
models.py Defines the architectures
train_eval For Training and Evaluation
run.ipynb For Testing and Visualization
The caption data folder includes data used along with the images, mainly for evaluation purposes.
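
As a quick sanity check after running preprocess.py, something like the following can inspect its outputs. The exact file names and HDF5 dataset key below are assumptions based on similar pipelines, not guaranteed to match this repo's.

```python
import json
import h5py

with open('WORDMAP.json') as f:
    word_map = json.load(f)                         # word -> integer index
rev_word_map = {v: k for k, v in word_map.items()}  # index -> word, for decoding
print(len(word_map), 'words in vocabulary')

with h5py.File('TRAIN_IMAGES.h5', 'r') as h:        # hypothetical file name
    images = h['images']                            # assumed dataset key
    print(images.shape)                             # e.g. (N, 3, 256, 256)
```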

Testing

Place the test image in the test_imgs folder, name it test.jpg, and then run the run.ipynb Jupyter notebook to get the results.
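
The notebook handles this end to end; for reference, the image preparation typically looks like the sketch below. The 256x256 resize and ImageNet normalization are assumptions based on common practice for ResNet encoders, not the notebook's exact code.

```python
from PIL import Image
import torchvision.transforms as T

transform = T.Compose([
    T.Resize((256, 256)),                       # assumed encoder input size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],     # ImageNet statistics
                std=[0.229, 0.224, 0.225]),
])
img = Image.open('test_imgs/test.jpg').convert('RGB')
img = transform(img).unsqueeze(0)               # (1, 3, 256, 256) batch of one
```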

Results

The file here contains the evaluation scores obtained on the validation split. The pretrained model (trained for 12 epochs) is provided here as well. Some results from Karpathy's split are shown below. The visual grounding probability (1 - beta) is shown in green.

[demo image: sample captions, with the visual grounding probability highlighted in green]
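
To reproduce such scores with the cococaption toolkit from step 3 of the instructions, the evaluation is typically driven as in the sketch below; the annotation and results file names are placeholders.

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

coco = COCO('annotations/captions_val2014.json')          # ground-truth captions
coco_res = coco.loadRes('captions_val2014_results.json')  # generated captions
scorer = COCOEvalCap(coco, coco_res)
scorer.params['image_id'] = coco_res.getImgIds()          # score only captioned images
scorer.evaluate()
for metric, score in scorer.eval.items():
    print(f'{metric}: {score:.3f}')                       # BLEU, METEOR, ROUGE_L, CIDEr
```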

References

Code adapted from sgrvinod's implementation of "Show, Attend and Tell".
