Vision Transformers for Violence Detection on the Edge

This project uses Vision Transformers (ViT) pre-trained on ImageNet-1k as backbone networks for video violence detection on the edge.


Figure 1: Proposed model architecture
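For orientation, the sketch below illustrates the general shape of such a pipeline: a ViT backbone producing per-frame embeddings, followed by a small temporal transformer and a clip-level classification head. It is a minimal illustration assuming a timm backbone; the module names and hyperparameters are placeholders, not the repository's actual implementation.

```python
# Minimal sketch (not the repository's actual code): a ViT backbone pre-trained
# on ImageNet-1k extracts per-frame embeddings, and a small temporal head
# classifies the clip. Names and sizes here are illustrative only.
import torch
import torch.nn as nn
import timm


class ViolenceClassifier(nn.Module):
    def __init__(self, backbone_name="deit_tiny_patch16_224", embed_dim=192):
        super().__init__()
        # num_classes=0 makes timm return pooled per-frame features
        self.backbone = timm.create_model(backbone_name, pretrained=True, num_classes=0)
        encoder_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(embed_dim, 1)  # binary violence / non-violence logit

    def forward(self, clip):                       # clip: (B, T, 3, 224, 224)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))  # (B*T, D) per-frame embeddings
        feats = feats.view(b, t, -1)               # (B, T, D)
        feats = self.temporal(feats)               # temporal mixing across frames
        return self.head(feats.mean(dim=1))        # clip-level logit


# Example: score a random 16-frame clip
model = ViolenceClassifier()
logit = model(torch.randn(1, 16, 3, 224, 224))
```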


Various pre-trained ViTs and hybrid ViTs are used as backbones, yielding on average 2-3% higher accuracy than CNN/LSTM baseline methods.
Additionally, this project aims to deploy the violence detection model onto the Google Edge TPU. To that end, this work proposes several techniques to modify the ViT graph structure for execution on the TPU.
Due to limitations in the quantization schemes available for hybrid ViTs, the DeiT model was used for the TPU deployment.
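Since the Edge TPU compiler only accepts fully integer-quantized TFLite graphs, deployment requires post-training quantization (or QAT) before compilation. The snippet below is a minimal PTQ sketch, assuming a TensorFlow SavedModel export and synthetic calibration data; the paths, shapes, and calibration source are placeholders rather than the repository's actual conversion script.

```python
# Minimal post-training quantization sketch (illustrative, not the repo's exact
# pipeline): the Edge TPU compiler accepts only full-integer TFLite models, so
# the converted graph uses int8 weights/activations calibrated on a small
# representative dataset. "saved_model_dir" is a placeholder path.
import numpy as np
import tensorflow as tf


def representative_clips(num_samples=32):
    # Placeholder calibration data; in practice yield preprocessed video frames.
    for _ in range(num_samples):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_clips
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
# Then compile for the accelerator: edgetpu_compiler model_int8.tflite
```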

The table below provides an overview of the code in this repository. Each subdirectory contains a README describing its usage.

| Folder | Description |
|--------|-------------|
| models | TFLite/PyTorch model definitions and trained weights |
| test   | Testing code for measuring accuracy, etc. |
| train  | Training code |
| utils  | Preprocessing, QAT, PTQ, and image processing algorithms |

The proposed modifications in sections 2.1.1 - 2.1.4 are found in models/deit.py and constructed with models/reconstruct_deit.py. The code can be used to reconstruct any of the DeiT/ViT models with the appropriate substitutions in the model instantiation.
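The substitution pattern itself follows the usual timm workflow; the sketch below only illustrates that pattern (the repository's actual modifications live in models/deit.py and models/reconstruct_deit.py), with ReLU6 standing in as a hypothetical example of a substituted module.

```python
# Illustrative pattern only (see models/reconstruct_deit.py for the actual code):
# instantiate a pre-trained DeiT from timm and substitute modules that are
# problematic for the target runtime, e.g. replacing GELU with ReLU6 here.
import torch.nn as nn
import timm


def reconstruct_with_substitutions(name="deit_tiny_patch16_224"):
    model = timm.create_model(name, pretrained=True)
    for block in model.blocks:
        # Example substitution; the paper's modifications (2.1.1 - 2.1.4) differ.
        block.mlp.act = nn.ReLU6()
    return model


model = reconstruct_with_substitutions()
```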

Figure 2: DeiT Edge TPU post-compile execution graph

The TFLite model (and the version compiled for the Edge TPU) is in models/deit+transformer.
The UCF-Crime dataset is available here.
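For reference, running the compiled model on an Edge TPU typically goes through the tflite_runtime interpreter with the libedgetpu delegate. The sketch below assumes a hypothetical file name inside models/deit+transformer and feeds a dummy input; it is not the repository's test code.

```python
# Minimal Edge TPU inference sketch (illustrative): load the compiled .tflite
# with the libedgetpu delegate and run one quantized input. The model path is
# a placeholder; see models/deit+transformer for the actual files.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="models/deit+transformer/model_edgetpu.tflite",  # placeholder name
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input with the expected shape/dtype; real use feeds preprocessed frames.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```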

The paper will be available at [].

This project was completed under NTU's URECA programme.
