Decoupling Common and Unique Representations for Multimodal Self-supervised Learning

[Figure: DeCUR main structure]

PyTorch implementation of DeCUR. This is the version accepted at ECCV 2024; see the decur-old branch for the earlier version. The core design is unchanged, with additional deformable attention for ConvNet backbones and more experiments.

Pretrained models

Modality    Pretrain dataset   Deformable attention   Full checkpoint          Backbone only
SAR-MS      SSL4EO-S12         no                     RN50-SAR/MS-ep100        RN50-SAR, RN50-MS
SAR-MS      SSL4EO-S12         yes                    RN50-RDA-SAR/MS-ep100    RN50-RDA-SAR, RN50-RDA-MS
SAR-MS      SSL4EO-S12         no                     ViTS16-SAR/MS-ep100      ViTS16-SAR, ViTS16-MS
RGB-DEM     GeoNRW*            no                     RN50-RGB/DEM-ep100       RN50-RGB, RN50-DEM
RGB-DEM     GeoNRW*            yes                    RN50-RDA-RGB/DEM-ep100   RN50-RDA-RGB, RN50-RDA-DEM
RGB-DEM     GeoNRW*            no                     ViTS16-RGB/DEM-ep100     ViTS16-RGB, ViTS16-DEM
RGB-depth   SUNRGBD*           no                     MiTB2-RGB/HHA-ep200      MiTB2-RGB, MiTB2-HHA
RGB-depth   SUNRGBD*           no                     MiTB5-RGB/HHA-ep200      MiTB5-RGB, MiTB5-HHA

*Transferring these RGB-DEM/RGB-depth models to very different downstream datasets may not bring significant gains without additional care, as the pretraining datasets were designed for supervised tasks and are limited in scale and diversity.

DeCUR Pretraining

Clone the repository and install the dependencies listed in requirements.txt. Customize your multimodal dataset and preferred model backbone in src/datasets/, src/models/ and src/pretrain_mm.py, then run:

python pretrain_mm.py \
--dataset YOUR_DATASET \
--method PRETRAIN_METHOD \
--data1 /path/to/modality1 \
--data2 /path/to/modality2 \
--mode MODAL1 MODAL2 \
...
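
As a starting point for the dataset customization, a paired two-modality loader can be as simple as the sketch below. This is a minimal sketch, not the repository's API: the class name, file layout, and .pt tensor files are hypothetical, so adapt it to the conventions in src/datasets/.

import glob

import torch
from torch.utils.data import Dataset

class TwoModalityDataset(Dataset):
    """Pairs co-registered samples from two modalities by sorted index."""

    def __init__(self, dir1, dir2, transform1=None, transform2=None):
        self.files1 = sorted(glob.glob(f"{dir1}/*.pt"))
        self.files2 = sorted(glob.glob(f"{dir2}/*.pt"))
        assert len(self.files1) == len(self.files2), "modalities must be paired"
        self.transform1 = transform1
        self.transform2 = transform2

    def __len__(self):
        return len(self.files1)

    def __getitem__(self, idx):
        x1 = torch.load(self.files1[idx])  # e.g. a SAR patch tensor
        x2 = torch.load(self.files2[idx])  # e.g. the matching MS patch tensor
        if self.transform1 is not None:
            x1 = self.transform1(x1)
        if self.transform2 is not None:
            x2 = self.transform2(x2)
        return x1, x2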

Apart from DeCUR, we also support multimodal pretraining with SimCLR, CLIP, BarlowTwins and VICReg.

If you use distributed training with SLURM, we provide example job submission scripts in src/scripts/pretrain.
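
For orientation, a minimal submission script might look like the sketch below. The resource values are placeholders, and the distributed setup (one task per GPU, initialized via srun) is an assumption; treat the scripts in src/scripts/pretrain as authoritative.

#!/bin/bash
#SBATCH --job-name=decur_pretrain
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
#SBATCH --time=24:00:00

# one task per GPU; the training script is assumed to read the
# distributed environment that srun sets up
srun python pretrain_mm.py \
    --dataset YOUR_DATASET \
    --method PRETRAIN_METHOD \
    --data1 /path/to/modality1 \
    --data2 /path/to/modality2 \
    --mode MODAL1 MODAL2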

Transfer Learning

For dataset preparation, see the corresponding readme.md in the dataset folders of the SAR-optical/RGB-DEM transfer learning tasks. To be updated.

Multilabel scene classification with ResNet50 on BigEarthNet-MM:

$ cd src/transfer_classification_BE
$ python linear_BE_resnet.py --backbone resnet50 --mode s1 s2 --pretrained /path/to/pretrained_weights ...
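
If you start from a backbone-only checkpoint, loading it for linear probing follows the usual PyTorch pattern. A minimal sketch, assuming the checkpoint is a plain ResNet50 state dict (the file name, key layout, and class count are assumptions; check the released files and the script's arguments):

import torch
from torchvision.models import resnet50

# Hypothetical backbone-only checkpoint; 19 classes matches the common
# BigEarthNet 19-class nomenclature -- adjust to your label set.
model = resnet50(num_classes=19)
state = torch.load("rn50_s1.pth", map_location="cpu")

# strict=False tolerates the missing classifier head; for SAR/MS inputs
# the first conv may also need re-shaping to the number of bands.
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing:", missing, "unexpected:", unexpected)

# Linear probing: freeze everything except the classifier head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc.")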

Semantic segmentation with a simple FCN on GeoNRW:

$ cd src/transfer_segmentation_GEONRW
$ python GeoNRW_MM_FCN_RN50.py --backbone resnet50 --mode RGB DSM mask --pretrained /path/to/pretrained_weights ...
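
Conceptually, this transfer just swaps the pretrained encoder into a segmentation model. A single-modality sketch with torchvision's FCN (the checkpoint name, key layout, and class count are assumptions; the actual script handles both the RGB and DSM streams):

import torch
from torchvision.models.segmentation import fcn_resnet50

# Placeholder class count; GeoNRW uses its own land-cover nomenclature.
model = fcn_resnet50(num_classes=11)

# Hypothetical backbone-only checkpoint: drop the classifier keys and
# load the rest into the FCN encoder.
state = torch.load("rn50_rgb.pth", map_location="cpu")
backbone_state = {k: v for k, v in state.items() if not k.startswith("fc.")}
model.backbone.load_state_dict(backbone_state, strict=False)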

Semantic segmentation with CMX on SUNRGBD and NYUDv2:

$ cd src/transfer_segmentation_SUNRGBD
$ python convert_weights.py # convert pretrained weights to CMX format

Then refer to https://github.com/huaaaliu/RGBX_Semantic_Segmentation for dataset preparation, training, etc., and simply load the converted weights from our pretrained models.
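
In essence, such a conversion re-keys the state dict so the downstream codebase finds the backbone weights. A minimal sketch of the idea (the file names and key prefixes are assumptions; convert_weights.py is the authoritative mapping):

import torch

src = torch.load("mitb2_pretrained.pth", map_location="cpu")
dst = {}
for k, v in src.items():
    # drop a DistributedDataParallel wrapper prefix, if present
    if k.startswith("module."):
        k = k[len("module."):]
    # keep only backbone weights and re-key them for the target model
    if k.startswith("backbone."):
        dst[k[len("backbone."):]] = v
torch.save(dst, "mitb2_cmx.pth")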

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Citation

@article{wang2024decoupling,
  title={Decoupling Common and Unique Representations for Multimodal Self-supervised Learning},
  author={Yi Wang and Conrad M Albrecht and Nassim Ait Ali Braham and Chenying Liu and Zhitong Xiong and Xiao Xiang Zhu},
  journal={arXiv preprint arXiv:2309.05300},
  year={2024}
}
