# GCA (AAAI'2020)

> [Natural Image Matting via Guided Contextual Attention](https://arxiv.org/abs/2001.04069)

> **Task**: Matting

## Abstract

Over the last few years, deep learning based approaches have achieved outstanding improvements in natural image matting. Many of these methods can generate visually plausible alpha estimations, but typically yield blurry structures or textures in the semitransparent area. This is due to the local ambiguity of transparent objects. One possible solution is to leverage the far-surrounding information to estimate the local opacity. Traditional affinity-based methods often suffer from high computational complexity, which makes them unsuitable for high-resolution alpha estimation. Inspired by affinity-based methods and the successes of contextual attention in inpainting, we develop a novel end-to-end approach for natural image matting with a guided contextual attention module, which is specifically designed for image matting. The guided contextual attention module directly propagates high-level opacity information globally based on the learned low-level affinity. The proposed method can mimic the information flow of affinity-based methods and simultaneously utilize the rich features learned by deep neural networks. Experimental results on the Composition-1k testing set and the alphamatting.com benchmark dataset demonstrate that our method outperforms state-of-the-art approaches in natural image matting.
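
The core idea in the abstract can be summarized in a few lines of tensor code. The sketch below is a minimal, illustrative reading of guided contextual attention, not the authors' implementation: the function name, the 1×1-patch simplification, and the `temperature` parameter are assumptions for clarity. Pairwise affinities are computed on low-level guidance features, turned into an attention map, and used to propagate high-level opacity features globally.

```python
import torch
import torch.nn.functional as F

def guided_contextual_attention(low_feat: torch.Tensor,
                                high_feat: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Propagate high-level features using affinities from low-level features.

    low_feat:  (B, C_low, H, W)  low-level "guidance" features
    high_feat: (B, C_high, H, W) high-level opacity features
    """
    b, _, h, w = low_feat.shape
    # Treat every spatial location as a 1x1 patch and compute pairwise
    # cosine similarity on the guidance features.
    low = F.normalize(low_feat.flatten(2), dim=1)        # (B, C_low, HW)
    affinity = torch.bmm(low.transpose(1, 2), low)       # (B, HW, HW)
    attn = F.softmax(affinity / temperature, dim=-1)     # each row sums to 1
    # Use the attention map to take a weighted sum of high-level features,
    # i.e. propagate opacity information along the learned affinities.
    high = high_feat.flatten(2)                          # (B, C_high, HW)
    out = torch.bmm(high, attn.transpose(1, 2))          # (B, C_high, HW)
    return out.view(b, -1, h, w)

# Toy usage: 32-channel guidance features steer 128-channel opacity features.
low = torch.randn(1, 32, 16, 16)
high = torch.randn(1, 128, 16, 16)
print(guided_contextual_attention(low, high).shape)  # torch.Size([1, 128, 16, 16])
```

The full module described in the paper operates on feature patches rather than single locations; refer to the paper and the repository code for the complete formulation.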

## Results and models

| Model | Dataset | SAD | MSE | GRAD | CONN | Training Resources | Download |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| baseline (our) | Composition-1K | 34.61 | 0.0083 | 16.21 | 32.12 | 4 | model \| log |
| GCA (our) | Composition-1K | 33.38 | 0.0081 | 14.96 | 30.59 | 4 | model \| log |
| baseline (with DIM pipeline) | Composition-1K | 49.95 | 0.0144 | 30.21 | 49.67 | 4 | model \| log |
| GCA (with DIM pipeline) | Composition-1K | 49.42 | 0.0129 | 28.07 | 49.47 | 4 | model \| log |
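
SAD, MSE, GRAD and CONN are the four standard matting metrics of the Composition-1k evaluation protocol. As a quick reference, the sketch below shows the conventional definitions of the first two. This is a hedged reading, not the repository's evaluation code; it assumes `pred` and `gt` are NumPy alpha mattes in [0, 1] and the trimap marks its unknown region with the value 128.

```python
import numpy as np

def sad(pred: np.ndarray, gt: np.ndarray) -> float:
    """Sum of absolute differences, conventionally reported divided by 1000."""
    return float(np.abs(pred - gt).sum() / 1000.0)

def mse(pred: np.ndarray, gt: np.ndarray, trimap: np.ndarray) -> float:
    """Mean squared error, evaluated only in the unknown (128) trimap region."""
    unknown = trimap == 128
    n = unknown.sum()
    return float(((pred - gt) ** 2)[unknown].sum() / n) if n else 0.0
```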

## Quick Start

### Train


You can use the following commands to train a model on CPU, a single GPU, or multiple GPUs.

```shell
# CPU train
CUDA_VISIBLE_DEVICES=-1 python tools/train.py configs/gca/gca_r34_4xb10-200k_comp1k.py

# single-GPU train
python tools/train.py configs/gca/gca_r34_4xb10-200k_comp1k.py

# multi-GPU train
./tools/dist_train.sh configs/gca/gca_r34_4xb10-200k_comp1k.py 8
```

For more details, you can refer to the "Train a model" part of train_test.md.

### Test


You can use the following commands to test a model on CPU, a single GPU, or multiple GPUs.

```shell
# CPU test
CUDA_VISIBLE_DEVICES=-1 python tools/test.py configs/gca/gca_r34_4xb10-200k_comp1k.py https://download.openmmlab.com/mmediting/mattors/gca/gca_r34_4x10_200k_comp1k_SAD-33.38_20220615-65595f39.pth

# single-GPU test
python tools/test.py configs/gca/gca_r34_4xb10-200k_comp1k.py https://download.openmmlab.com/mmediting/mattors/gca/gca_r34_4x10_200k_comp1k_SAD-33.38_20220615-65595f39.pth

# multi-GPU test
./tools/dist_test.sh configs/gca/gca_r34_4xb10-200k_comp1k.py https://download.openmmlab.com/mmediting/mattors/gca/gca_r34_4x10_200k_comp1k_SAD-33.38_20220615-65595f39.pth 8
```

For more details, you can refer to the "Test a pre-trained model" part of train_test.md.
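
Once you have a predicted alpha matte from the test command above, the typical downstream use is compositing the foreground onto a new background with I = αF + (1 − α)B. The snippet below is a minimal, library-agnostic sketch: the file names are hypothetical placeholders, and it assumes the foreground, background, and predicted alpha all have matching sizes.

```python
import cv2  # opencv-python
import numpy as np

fg = cv2.imread('fg.png').astype(np.float32)
bg = cv2.imread('bg.png').astype(np.float32)
alpha = cv2.imread('alpha.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
alpha = alpha[:, :, None]                    # (H, W, 1) so it broadcasts over BGR
comp = alpha * fg + (1.0 - alpha) * bg       # standard compositing equation
cv2.imwrite('composite.png', comp.round().astype(np.uint8))
```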

## Citation

```bibtex
@inproceedings{li2020natural,
  title={Natural Image Matting via Guided Contextual Attention},
  author={Li, Yaoyi and Lu, Hongtao},
  booktitle={Association for the Advancement of Artificial Intelligence (AAAI)},
  year={2020}
}
```