Multi-Level Self-Supervised Learning for Domain Adaptation: MxNet Implementation

MLSL: Multi-Level Self-Supervised Learning for Domain Adaptation with Spatially Independent and Semantically Consistent Labeling (WACV2020)

By Javed Iqbal and Mohsen Ali

Update

  • 2020.12.05: code release for GTA-5 to Cityscapes Adaptation

Contents

  1. Introduction
  2. Requirements
  3. Setup
  4. Usage
  5. Results
  6. Note
  7. Citation

Introduction

This repository contains the multi-level self-supervised learning framework for domain adaptation of semantic segmentation described in the WACV 2020 paper "MLSL: Multi-Level Self-Supervised Learning for Domain Adaptation with Spatially Independent and Semantically Consistent Labeling" (https://arxiv.org/pdf/1909.13776.pdf).

Requirements

The code has been tested on Ubuntu 16.04 and is implemented with MXNet 1.3.0 and Python 2.7.12. Peak GPU memory consumption is about 7.8 GB on a single GTX 1080.

Setup

We assume you are working in the mlsl-master folder.

  1. Datasets:
  • Download the GTA-5 dataset.
  • Download the Cityscapes dataset.
  • Put the downloaded data in the "data" folder.
  2. Source pretrained models:
  • Put the source pretrained models in the "models" folder.
  3. Spatial priors:
  • Download the spatial priors computed from GTA-5. Spatial priors are only used for GTA2Cityscapes. Put prior_array.mat in the "spatial_prior/gta/" folder.
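The spatial priors encode per-class location frequencies estimated from the source (GTA-5) labels, and are used to weight the network's class probabilities before pseudo-labels are selected. Below is a minimal NumPy sketch of that weighting; the shapes are purely illustrative, and the actual layout of prior_array.mat may differ.

```python
import numpy as np

# Illustrative shapes: 19 classes on a small 4x4 grid.
num_classes, H, W = 19, 4, 4
rng = np.random.default_rng(0)

# Softmax output of the segmentation network: (num_classes, H, W).
logits = rng.normal(size=(num_classes, H, W))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# Spatial prior: per-class pixel-wise frequency map from source labels
# (random here, as a stand-in for the contents of prior_array.mat).
prior = rng.random(size=(num_classes, H, W))

# Modulate class confidences by the prior before pseudo-label selection.
weighted = probs * prior
pseudo_labels = weighted.argmax(axis=0)   # (H, W) hard labels
confidence = weighted.max(axis=0)         # used later for thresholding
```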

Usage

  1. Set the PYTHONPATH environment variable:
cd mlsl-master
export PYTHONPATH=$PYTHONPATH:./
  2. Self-training for GTA2Cityscapes:
  • MLSL(SISC):

python issegm/solve_AO.py --num-round 6 --test-scales 2048 --scale-rate-range 0.7,1.3 --dataset gta --dataset-tgt cityscapes --split train --split-tgt train --data-root data/gta --data-root-tgt data/cityscapes --output gta2city/MLSL-SISC --model cityscapes_rna-a1_cls19_s8 --weights models/gta_rna-a1_cls19_s8_ep-0000.params --batch-images 2 --crop-size 500 --origin-size-tgt 2048 --init-tgt-port 0.15 --init-src-port 0.03 --seed-int 0 --mine-port 0.8 --mine-id-number 3 --mine-thresh 0.001 --base-lr 1e-4 --to-epoch 2 --source-sample-policy cumulative --self-training-script issegm/solve_ST1.py --kc-policy cb --prefetch-threads 2 --gpus 0 --with-prior True
  • MLSL(SISC-PWL):
python issegm1/solve_AO.py --num-round 6 --test-scales 2048 --scale-rate-range 0.7,1.3 --dataset gta --dataset-tgt cityscapes --split train --split-tgt train --data-root data/gta --data-root-tgt data/cityscapes --output gta2city/MLSL-SISC-PWL --model cityscapes_rna-a1_cls19_s8 --weights models/gta_rna-a1_cls19_s8_ep-0000.params --batch-images 1 --crop-size 500 --origin-size-tgt 2048 --init-tgt-port 0.15 --init-src-port 0.03 --seed-int 0 --mine-port 0.8 --mine-id-number 3 --mine-thresh 0.001 --base-lr 1e-4 --to-epoch 2 --source-sample-policy cumulative --self-training-script issegm1/solve_ST.py --kc-policy cb --prefetch-threads 2 --gpus 0 --with-prior True

  • To run the code, you need to set the paths of the source data (data-root) and target data (data-root-tgt) yourself. The other arguments can be kept at their default settings.
  3. Evaluation
  • Test on Cityscapes with a model compatible with GTA-5 (the initial source-trained model as an example):
python issegm/evaluate.py --data-root DATA_ROOT_CITYSCAPES --output val/gta-city --dataset cityscapes --phase val --weights models/gta_rna-a1_cls19_s8_ep-0000.params --split val --test-scales 2048 --test-flipping --gpus 0 --no-cudnn
  4. Train in the source domain
  • Train on GTA-5:
python issegm/train_src.py --gpus 0,1,2,3 --split train --data-root DATA_ROOT_GTA --output gta_train --model gta_rna-a1_cls19_s8 --batch-images 16 --crop-size 500 --scale-rate-range 0.7,1.3 --weights models/ilsvrc-cls_rna-a1_cls1000_ep-0001.params --lr-type fixed --base-lr 0.0016 --to-epoch 30 --kvstore local --prefetch-threads 16 --prefetcher process --cache-images 0 --backward-do-mirror --origin-size 1914
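The --kc-policy cb flag selects pseudo-labels with class-balanced confidence thresholds, following CBST: for each class, only the most confident portion (--init-tgt-port, e.g. 0.15) of target pixels predicted as that class is kept. A minimal NumPy sketch of that selection rule; the function name and shapes are ours, not the repository's API.

```python
import numpy as np

def class_balanced_thresholds(confidence, labels, num_classes, portion):
    """Per-class confidence threshold keeping the top `portion` of pixels per class."""
    thresholds = np.ones(num_classes)  # a threshold of 1.0 selects nothing by default
    for c in range(num_classes):
        conf_c = confidence[labels == c]
        if conf_c.size:
            # Keep the most confident `portion` of pixels predicted as class c.
            thresholds[c] = np.quantile(conf_c, 1.0 - portion)
    return thresholds

# Toy data standing in for flattened per-pixel confidences and hard predictions.
rng = np.random.default_rng(0)
confidence = rng.random(10000)
labels = rng.integers(0, 19, size=10000)

th = class_balanced_thresholds(confidence, labels, num_classes=19, portion=0.15)
selected = confidence >= th[labels]  # pixels that become pseudo-labels
```

Because the threshold is computed per class rather than globally, rare classes still contribute pseudo-labels instead of being drowned out by easy, frequent classes.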
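evaluate.py reports segmentation accuracy on the Cityscapes validation set; the standard metric there is mean intersection-over-union (mIoU) over the 19 classes. A self-contained sketch of the metric for reference; the helper name is ours, and the repository's own evaluation code may differ in details such as the ignore label.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_label=255):
    """Confusion-matrix based mean IoU, as used by the Cityscapes benchmark."""
    mask = gt != ignore_label  # ignored pixels do not count
    hist = np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    inter = np.diag(hist)
    union = hist.sum(axis=0) + hist.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou[union > 0].mean()  # average over classes present in gt or pred

# Toy check: a perfect prediction scores 1.0.
gt = np.array([[0, 1], [2, 255]])
assert mean_iou(gt.copy(), gt, num_classes=3) == 1.0
```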

Note

  • This code is based on CBST.
  • Due to randomness, the self-training based domain adaptation results may vary slightly between runs.
  • For training in the source domain, the best model usually appears within the first 30 epochs; however, an optimal model may be obtained with fewer or more epochs.

Results

A leaderboard of state-of-the-art methods is available here. Feel free to contact us to have your published results added.

Citation

If you find this work useful, please cite our paper.

@inproceedings{iqbal2020mlsl,
  title={MLSL: Multi-Level Self-Supervised Learning for Domain Adaptation with Spatially Independent and Semantically Consistent Labeling},
  author={Javed Iqbal and Mohsen Ali},
  booktitle={The IEEE Winter Conference on Applications of Computer Vision},
  pages={1864--1873},
  year={2020}
}

Contact: javed.iqbal@itu.edu.pk
