Domain Adaptation (Pytorch-Lightning)

This repository contains unofficial PyTorch (PyTorch Lightning) implementations of the following domain adaptation papers:

  1. DANN (2015) [paper, repo]
  2. CDAN (2017) [paper, repo]
  3. MSTN (2018) [paper, repo]
  4. BSP (2019) [paper, repo]
  5. DSBN (2019) [paper, repo]
  6. RSDA-MSTN (2020) [paper, repo]
  7. SHOT (2020) [paper, repo]
  8. TransDA (2021) [paper, repo]
  9. FixBi (2021) [paper, repo]
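Most of the adversarial methods above (DANN, CDAN, and their descendants) rely on a gradient reversal layer: an identity map in the forward pass whose backward pass flips the gradient sign, so the feature extractor learns to fool the domain discriminator. A minimal, dependency-free sketch of the idea (the repository's actual implementation uses PyTorch autograd and may differ):

```python
class GradReverse:
    """Identity in the forward pass; scales the incoming gradient by -lam in the
    backward pass (in PyTorch this is normally done via torch.autograd.Function)."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off coefficient, often warmed up during training

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        # the feature extractor receives the *negated* domain-classifier gradient,
        # so minimizing the discriminator loss maximizes domain confusion upstream
        return [-self.lam * g for g in grad_output]


grl = GradReverse(lam=0.5)
print(grl.forward([1.0, -2.0]))   # -> [1.0, -2.0]
print(grl.backward([1.0, -2.0]))  # -> [-0.5, 1.0]
```

In the actual models, this layer sits between the feature extractor and the domain discriminator, so one optimizer step trains both adversaries at once.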

Tutorial (uses the standard Office-31 dataset)

  1. Clone the repository and install dependencies

    git clone https://github.com/hankyul2/DomainAdaptation.git
    pip3 install -r requirements.txt
  2. Train a model (if you don't use neptune, just remove it from the config). Check the configurations in configs

    python3 main.py fit --config=configs/cdan_e.yaml -d 'amazon_webcam' -g '0,'

Closed-world Benchmark Results

Office-31 benchmark scores as reported in the original papers

| Method | A>D | A>W | D>A | D>W | W>A | W>D | Avg |
|---|---|---|---|---|---|---|---|
| source only | 80.8 | 76.9 | 60.3 | 95.3 | 63.6 | 98.7 | 79.3 |
| DANN (2015) | 79.7 | 82.0 | 68.2 | 96.9 | 67.4 | 99.1 | 82.2 |
| CDAN (2017) | 89.8 | 93.1 | 70.1 | 98.2 | 68.0 | 100.0 | 86.6 |
| CDAN+E (2017) | 92.9 | 94.1 | 71.0 | 98.6 | 69.3 | 100.0 | 87.7 |
| MSTN (2018) | 90.4 | 91.3 | 72.7 | 98.9 | 65.6 | 100.0 | 86.5 |
| BSP+DANN (2019) | 90.0 | 93.0 | 71.9 | 98.0 | 73.0 | 100.0 | 87.7 |
| BSP+CDAN+E (2019) | 93.0 | 93.3 | 73.6 | 98.2 | 72.6 | 100.0 | 88.5 |
| DSBN+MSTN (2019) | 90.8 | 93.3 | 72.7 | 99.1 | 73.9 | 100.0 | 88.3 |
| RSDA+MSTN (2020) | 95.8 | 96.1 | 77.4 | 99.3 | 78.9 | 100.0 | 91.1 |
| SHOT (2020) | 94.0 | 90.1 | 74.7 | 98.4 | 74.3 | 99.9 | 88.6 |
| TransDA (2021) | 97.2 | 95.0 | 73.7 | 99.3 | 79.3 | 99.6 | 90.7 |
| FixBi (2021) | 95.0 | 96.1 | 78.7 | 99.3 | 79.4 | 100.0 | 91.4 |

In this work

| Method | A>D [tf.dev] | A>W [tf.dev] | D>A [tf.dev] | D>W [tf.dev] | W>A [tf.dev] | W>D [tf.dev] | Avg |
|---|---|---|---|---|---|---|---|
| source only [code, config] | 82.3 [weight] | 77.9 | 63.0 [weight] | 94.5 | 64.7 [weight] | 98.3 | 80.1 |
| source only ViT [code, config] | 88.0 [weight] | 87.9 | 76.7 [weight] | 97.7 | 77.1 [weight] | 99.7 | 87.8 |
| DANN (2015) [code, config] | 87.2 [weight] | 90.4 [weight] | 70.6 [weight] | 97.8 [weight] | 73.7 [weight] | 99.7 [weight] | 86.6 |
| CDAN (2017) [code, config] | 92.4 | 95.1 | 75.8 | 98.6 | 74.4 | 99.9 | 89.4 |
| CDAN+E (2017) [code, config] | 93.2 | 95.6 | 75.1 | 98.7 | 75.0 | 100.0 | 89.6 |
| MSTN (2018) [code, config] | 89.0 | 92.7 | 71.4 | 97.9 | 74.1 | 99.9 | 87.5 |
| BSP+DANN (2019) [code, config] | 86.3 | 89.1 | 71.4 | 97.7 | 73.4 | 100.0 | 86.3 |
| BSP+CDAN+E (2019) [code, config] | 92.6 | 94.7 | 73.8 | 98.7 | 74.7 | 100.0 | 89.1 |
| DSBN+MSTN Stage1 (2019) [code, config] | 87.8 | 92.3 | 72.2 | 98.0 | 73.2 | 99.9 | 87.2 |
| DSBN+MSTN Stage2 (2019) [code, config] | 90.6 | 93.5 | 74.0 | 98.0 | 73.1 | 99.5 | 88.1 |
| RSDA+MSTN (2020) [Not Implemented] | - | - | - | - | - | - | - |
| SHOT (2020) [code, config] | 93.2 | 92.5 | 74.3 | 98.2 | 75.9 | 100.0 | 89.0 |
| SHOT (CDAN+E) (2020) [code, config] | 93.2 | 95.7 | 77.7 | 98.9 | 76.0 | 100.0 | 90.2 |
| MIXUP (CDAN+E) (2021) [code, config] | 92.9 | 96.1 | 76.2 | 98.9 | 77.7 | 100.0 | 90.3 |
| TransDA (2021) [code, config] | 94.4 | 95.8 | 82.3 | 99.2 | 82.0 | 99.8 | 92.3 |
| FixBi (2021) [code, config] | 90.8 | 95.7 | 72.6 | 98.7 | 74.8 | 100.0 | 88.8 |

Note

  1. Reported scores are taken from the SHOT and FixBi papers.
  2. The evaluation protocol is valid = test = target. This looks odd to me, but there is no other way to reproduce the papers' results. Source-only models are evaluated differently: valid = source, test = target.
  3. In this work, scores are averaged over 3 runs.
  4. If you want to use the pretrained model weights, you must add code that loads them.
  5. The optimizer and learning-rate scheduler are the same for all models (SGD), except MSTN and DSBN+MSTN (Adam).
  6. SHOT can yield lower accuracy than the reported scores. To reproduce them, I recommend using the provided source-only model weights; I have not found the cause.
  7. BSP, DSBN+MSTN, and FixBi fail to reproduce the scores reported in their papers.
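Note 4 above means checkpoints are not loaded automatically. A minimal sketch of loading a Lightning-style checkpoint into a bare model (the file name, key prefix, and stand-in model below are assumptions for illustration, not the repository's actual classes):

```python
import torch
from torch import nn

# toy stand-in for the repository's network (assumption: the real class differs)
model = nn.Linear(4, 2)

# fabricate a Lightning-style checkpoint for illustration: weights sit under the
# "state_dict" key, typically prefixed by the attribute name inside the LightningModule
ckpt = {"state_dict": {"model." + k: v for k, v in model.state_dict().items()}}
torch.save(ckpt, "toy.ckpt")

# loading: strip the prefix, then load non-strictly into the bare model
loaded = torch.load("toy.ckpt", map_location="cpu")
state = {k.removeprefix("model."): v for k, v in loaded["state_dict"].items()}
missing, unexpected = model.load_state_dict(state, strict=False)
print(list(missing), list(unexpected))  # -> [] []
```

With `strict=False`, any keys that do not line up are reported rather than raising, which is convenient when a checkpoint carries extra heads (e.g. a domain discriminator) the bare model does not have.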

Experiment

Combinations of several methods or models. (No weights or tf.dev links.)

| Method | A>D | A>W | D>A | D>W | W>A | W>D | Avg |
|---|---|---|---|---|---|---|---|
| DANN ViT (2015) [code, config] | 90.0 | 91.0 | 79.0 | 98.9 | 78.8 | 99.9 | 89.6 |
| CDAN ViT (2017) [code, config] | 94.7 | 96.3 | 80.0 | 98.9 | 80.4 | 100.0 | 91.7 |
| CDAN+E ViT (2017) [code, config] | 97.1 | 96.7 | 80.1 | 99.2 | 79.8 | 99.9 | 92.1 |
| SHOT (CDAN+E) (2020) [code, config] | 93.2 | 95.7 | 77.7 | 98.9 | 76.0 | 100.0 | 90.2 |
| MIXUP (CDAN+E) (2021) [code, config] | 92.9 | 96.1 | 76.2 | 98.9 | 77.7 | 100.0 | 90.3 |

Future Updates

  • Add weight parameter
  • Add ViT results
  • Check FixBi code
  • Add office-home dataset results
  • Add digits results

Some Notes

  1. We use pytorch-lightning throughout this code, so if you are unfamiliar with it, I recommend reading the pytorch-lightning quick start (the quick start is enough to follow this code).
  2. To avoid code duplication, we use class inheritance and add only the changes proposed in each paper. We try to keep the code simple and readable, so if you find it difficult to read, please open an issue or PR.
  3. Only 8 papers are implemented so far. If you request a certain paper, we will try to implement it.
  4. There is a problem somewhere in the backbone code (I could not find where), so performance can be lower than the reported tables. I recommend using a model from a standard library (timm, torchvision, etc.).
