APPLeNet: Visual Attention Parameterized Prompt Learning for Few-Shot Remote Sensing Image Generalization using CLIP

Official repository of APPLeNet, one of the first works in remote sensing to perform unknown-class and domain generalization via prompt learning, by adapting pre-trained vision-language models (VLMs) such as CLIP.

CVPRW 2023

paper | supplement | arXiv

Abstract

[Teaser figure]

In recent years, the success of large-scale vision-language models (VLMs) such as CLIP has led to their increased usage in various computer vision tasks. These models enable zero-shot inference through carefully crafted instructional text prompts without task-specific supervision. However, the potential of VLMs for generalization tasks in remote sensing (RS) has not been fully realized. To address this research gap, we propose a novel image-conditioned prompt learning strategy called the Visual Attention Parameterized Prompts Learning Network (APPLeNet). APPLeNet emphasizes the importance of multi-scale feature learning in RS scene classification and disentangles visual style and content primitives for domain generalization tasks. To achieve this, APPLeNet combines visual content features obtained from different layers of the vision encoder and style properties obtained from feature statistics of domain-specific batches. An attention-driven injection module is further introduced to generate visual tokens from this information. We also introduce an anti-correlation regularizer to ensure discrimination among the token embeddings, as this visual information is combined with the textual tokens. To validate APPLeNet, we curated four available RS benchmarks and introduced experimental protocols and datasets for three domain generalization tasks.
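The anti-correlation regularizer is only described at a high level above. Below is a minimal PyTorch sketch of one plausible form, penalizing pairwise cosine similarity among the generated token embeddings; the function name and the squared-similarity penalty are our assumptions here, not necessarily the paper's exact formulation (see the released code in models for the real one):

import torch
import torch.nn.functional as F

def anti_correlation_loss(tokens: torch.Tensor) -> torch.Tensor:
    # tokens: (M, D) generated token embeddings
    t = F.normalize(tokens, dim=-1)            # unit-normalize: dot product = cosine
    sim = t @ t.t()                            # (M, M) pairwise cosine similarities
    mask = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    # push off-diagonal similarities toward zero so tokens stay discriminative
    return sim.masked_select(mask).pow(2).mean()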

Architecture

[Architecture diagram]

APPLeNet is composed of a text encoder, an image encoder, and an injection block designed for multi-scale visual feature refinement. The image encoder produces multi-level visual content features, while the batch statistics of a domain serve as the style features; both are passed through a residual attention-based injection block, as sketched below.
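As a rough illustration of this dataflow, here is a minimal PyTorch sketch. The class name, head count, and the exact way content and style tokens are fused are our assumptions for illustration; the released code in the models folder is authoritative:

import torch
import torch.nn as nn

class InjectionBlock(nn.Module):
    # fuses multi-level content features with batch-statistic style features
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, content_feats):
        # content_feats: list of (B, D) features from different encoder layers
        x = torch.stack(content_feats, dim=1)        # (B, L, D) content tokens
        mu = x.mean(dim=0, keepdim=True)             # (1, L, D) batch mean
        sigma = x.std(dim=0, keepdim=True)           # (1, L, D) batch std
        style = torch.cat([mu, sigma], dim=1)        # (1, 2L, D) style tokens
        style = style.expand(x.size(0), -1, -1)      # broadcast over the batch
        ctx = torch.cat([x, style], dim=1)           # (B, 3L, D) content + style
        out, _ = self.attn(x, ctx, ctx)              # content attends over both
        return self.proj(out + x)                    # residual -> visual tokens

In the actual model these visual tokens are combined with the learnable textual tokens before the CLIP text encoder, in the spirit of CoOp-style prompt learning.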

Datasets

Released Datasets (Version-2):

Code

  • The files folder contains the dataloader for each dataset.
  • The models folder contains the code of our model.
  • Clone the Dassl repository inside this repo for the metrics.
  • The scripts folder holds the training and testing scripts for each generalization task, as shown below.
$ cd scripts

# base-to-new class generalization (train on base classes, evaluate on new ones)
$ bash base2new_train.sh patternnet 1
$ bash base2new_test.sh patternnet 1

# cross-dataset generalization (train on one dataset, test on another)
$ bash crossdataset_train.sh patternnet 1
$ bash crossdataset_test.sh rsicd 1

# domain generalization (train and test on domain-shifted dataset versions)
$ bash domaingen_train.sh patternnetv2 1
$ bash domaingen_test.sh rsicdv2 1
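In each command, the first argument selects the dataset configuration and the second is a run identifier (in CoOp/Dassl-style scripts this is typically the random seed); consult the script files for the exact argument meanings.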

Results

Base-to-New Class Generalization

[Figure: base-to-new class generalization results]

Cross Dataset Generalization

[Figure: cross-dataset generalization results]

Domain Generalization

[Figure: domain generalization results]

BibTeX

If you use our work, please cite the paper. Thanks!

@inproceedings{singha2023applenet,
  title={{APPLeNet}: Visual Attention Parameterized Prompt Learning for Few-Shot Remote Sensing Image Generalization Using {CLIP}},
  author={Singha, Mainak and Jha, Ankit and Solanki, Bhupendra and Bose, Shirsha and Banerjee, Biplab},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2023}
}

Acknowledgements

Thanks to the authors of CoOp; our code is largely built on their repository.
