FEDER

Camouflaged Object Detection with Feature Decomposition and Edge Reconstruction, CVPR 2023

[Paper] [Supplementary material] [Results] [Pretrained models]

Authors

Chunming He, Kai Li*, Yachao Zhang, Longxiang Tang, Yulun Zhang, Zhenhua Guo, Xiu Li*


Abstract: Camouflaged object detection (COD) aims to address the tough issue of identifying camouflaged objects visually blended into the surrounding backgrounds. COD is a challenging task due to the intrinsic similarity of camouflaged objects with the background, as well as their ambiguous boundaries. Existing approaches to this problem have developed various techniques to mimic the human visual system. Albeit effective in many cases, these methods still struggle when camouflaged objects are too deceptive for the visual system. In this paper, we propose the FEature Decomposition and Edge Reconstruction (FEDER) model for COD. The FEDER model addresses the intrinsic similarity of foreground and background by decomposing the features into different frequency bands using learnable wavelets. It then focuses on the most informative bands to mine subtle cues that differentiate foreground and background. To achieve this, a frequency attention module and a guidance-based feature aggregation module are developed. To combat the ambiguous boundary problem, we propose to learn an auxiliary edge reconstruction task alongside the COD task. We design an ordinary differential equation-inspired edge reconstruction module that generates exact edges. By learning the auxiliary task in conjunction with the COD task, the FEDER model can generate precise prediction maps with accurate object boundaries. Experiments show that our FEDER model significantly outperforms state-of-the-art methods with lower computational and memory costs.
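
To make the frequency-decomposition idea concrete, here is a minimal, self-contained PyTorch sketch: a feature map is split into low- and high-frequency bands by learnable, Haar-initialized depthwise filters, and the bands are then re-weighted by a simple channel attention. This is an illustration only, not the released FEDER code; the module names (FrequencyDecomposition, FrequencyAttention), the Haar initialization, and the attention design are assumptions made for this sketch.

```python
# Illustrative sketch only (NOT the official FEDER code): decompose features into
# low- and high-frequency bands with learnable Haar-initialized depthwise filters,
# then re-weight the bands with a simple channel ("frequency") attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyDecomposition(nn.Module):      # hypothetical name, not from the repo
    """Split a feature map into low/high-frequency bands with learnable filters."""
    def __init__(self, channels):
        super().__init__()
        # 2x2 Haar-style kernels per channel: an averaging (low-pass) filter and
        # one detail (high-pass) filter, both learnable.
        low = torch.full((channels, 1, 2, 2), 0.5)
        high = torch.tensor([[0.5, -0.5], [0.5, -0.5]]).repeat(channels, 1, 1, 1)
        self.low = nn.Parameter(low)
        self.high = nn.Parameter(high)
        self.channels = channels

    def forward(self, x):
        lo = F.conv2d(x, self.low, stride=2, groups=self.channels)   # low-frequency band
        hi = F.conv2d(x, self.high, stride=2, groups=self.channels)  # high-frequency band
        return lo, hi

class FrequencyAttention(nn.Module):          # hypothetical name, not from the repo
    """Re-weight the frequency bands with a simple channel attention."""
    def __init__(self, channels):
        super().__init__()
        self.attend = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, lo, hi):
        bands = torch.cat([lo, hi], dim=1)    # stack both bands along channels
        weights = self.attend(bands)          # per-channel weight for each band
        lo_w, hi_w = torch.chunk(bands * weights, 2, dim=1)
        return lo_w, hi_w

# Toy usage: a backbone-like feature map of shape (batch, channels, H, W).
feat = torch.randn(2, 64, 88, 88)
lo, hi = FrequencyDecomposition(64)(feat)
lo_w, hi_w = FrequencyAttention(64)(lo, hi)
print(lo_w.shape, hi_w.shape)                 # torch.Size([2, 64, 44, 44]) each
```

In the full model, per the abstract, the attended bands would then be fused by the guidance-based feature aggregation module and trained jointly with the ODE-inspired edge reconstruction branch.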


Usage

1. Prerequisites

Note that FEDER has only been tested on Ubuntu with the following environment.

  • Create a virtual environment: conda create -n FEDER python=3.8
  • Install the required packages: pip install -r requirements.txt

2. Downloading Training and Testing Datasets

  • Download the training set (COD10K-train) used for training.
  • Download the testing sets (COD10K-test, CAMO-test, CHAMELEON, and NC4K) used for testing.

3. Training Configuration

  • The pretrained model is stored in Google Drive. After downloading it, update the corresponding file path in the code (a quick way to verify the download is sketched after the training command below).
python Train.py --epoch 160 --lr 1e-4 --batchsize 36 --trainsize 36 --train_root YOUR_TRAININGSETPATH --val_root YOUR_VALIDATIONSETPATH --save_path YOUR_CHECKPOINTPATH
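
As a hypothetical illustration of "change the file path": before editing the path inside the repository code, it can help to verify that the downloaded checkpoint loads at all. The path and file name below are placeholders, not files shipped with the repository.

```python
# Hypothetical sanity check only: confirm the downloaded pretrained weights can be
# read before pointing the repository code at them. Replace the path with the
# location of your own Google Drive download; the file name here is made up.
import torch

PRETRAINED_PATH = "/path/to/downloaded/pretrained_model.pth"  # placeholder
state = torch.load(PRETRAINED_PATH, map_location="cpu")
entries = len(state) if hasattr(state, "__len__") else "an unknown number of"
print(f"Loaded a {type(state).__name__} with {entries} entries")
```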

4. Testing Configuration

Our trained model is stored in Google Drive. After downloading it, update the corresponding file path in the code.

python Test.py --testsize YOUR_IMAGESIZE --pth_path YOUR_CHECKPOINTPATH --test_dataset_path YOUR_TESTINGSETPATH

5. Evaluation

  • MATLAB code: one-key evaluation is implemented in MATLAB; follow the instructions in main.m and run it to generate the evaluation results. (A quick Python sanity check is sketched below.)
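
The MATLAB script above is the official evaluation. If you just want a quick sanity check of your prediction maps in Python, a minimal mean absolute error (MAE) computation might look like the sketch below. The folder layout and the assumption that predictions and ground-truth masks share file names are placeholders, and MAE is only one of the metrics reported in the paper.

```python
# Quick sanity check only; the official metrics come from the MATLAB main.m script.
# Assumes predictions and ground-truth masks share file names (placeholder layout).
import os
import numpy as np
from PIL import Image

def mean_absolute_error(pred_dir, gt_dir):
    errors = []
    for name in sorted(os.listdir(gt_dir)):
        if not name.lower().endswith((".png", ".jpg", ".bmp")):
            continue
        gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert("L"),
                        dtype=np.float64) / 255.0
        pred = Image.open(os.path.join(pred_dir, name)).convert("L")
        pred = pred.resize((gt.shape[1], gt.shape[0]))   # match GT resolution (width, height)
        pred = np.asarray(pred, dtype=np.float64) / 255.0
        errors.append(np.abs(pred - gt).mean())
    return float(np.mean(errors))

# Example paths are placeholders for your own result / dataset folders.
print("MAE:", mean_absolute_error("results/CAMO", "datasets/CAMO/GT"))
```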

6. Results download

The prediction results of our FEDER model are stored on Google Drive; please check them there.

Related Works

Weakly-Supervised Concealed Object Segmentation with SAM-based Pseudo Labeling and Multi-scale Feature Grouping, arXiv 2023.

Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers, CVPR 2023.

Concealed Object Detection, TPAMI 2022.

You can see more related papers in awesome-COD.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{He2023Camouflaged,
title={Camouflaged Object Detection with Feature Decomposition and Edge Reconstruction},
author={He, Chunming and Li, Kai and Zhang, Yachao and Tang, Longxiang and Zhang, Yulun and Guo, Zhenhua and Li, Xiu},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023}
}

Contact

If you have any questions, please feel free to contact me via email at chunminghe19990224@gmail.com or hcm21@mails.tsinghua.edu.cn.

Acknowledgement

The code is built on SINet V2. Please also follow the corresponding licenses. Thanks to the authors for their awesome work.
