
E2E-MFD

E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection

NeurIPS 2024 (Oral)

The code is based on MMDetection 2.26.0, MMRotate 0.3.4, and MMCV-full 1.7.2. We modify their data loading and related classes and functions, turning MMDetection and MMRotate into a multimodal oriented detection framework to facilitate multimodal object detection.

Overview

(Framework overview figure)

Getting Started

Installation

Reference: the mmrotate installation guide and the mmdetection installation guide.

Step 1: Clone the E2E-MFD repository:

To get started, first clone the E2E-MFD repository and navigate to the project directory:

git clone https://github.com/icey-zhang/E2E-MFD
cd E2E-MFD

Step 2: Environment Setup:

We recommend setting up a conda environment and installing dependencies via pip. Use the following commands to set up your environment:

Create and activate a new conda environment

conda create -n E2E-MFD python=3.9.17
conda activate E2E-MFD
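
The README does not pin a PyTorch/CUDA build. As a minimal sketch (an assumption, not part of the official instructions), a PyTorch version known to work with MMCV-full 1.7.x can be installed as follows; adjust the CUDA toolkit version to your driver:

# Example only: PyTorch 1.12.1 with CUDA 11.3 (adjust to your setup)
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3 -c pytorch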

To develop and run the modified mmrotate directly, install it from source (run this inside the E2E-MFD directory):

pip install -v -e .

Install Dependencies

pip install -r requirements.txt
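
If requirements.txt does not cover the pinned OpenMMLab packages listed above, one way to install them is via OpenMIM, which resolves the MMCV-full wheel matching your PyTorch/CUDA build (a sketch, not an official step of this repo):

# Install OpenMIM, then the versions this code is based on
pip install -U openmim
mim install mmcv-full==1.7.2
mim install mmdet==2.26.0
# mmrotate 0.3.4 is provided by this repository itself (installed above with pip install -v -e .)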

Prepare the DroneVehicle dataset

DroneVehicle is a publicly available dataset.

You can download the dataset from Baidu Yun: train (code: ngar) and test (code: tqwc). Organize the data as follows:

root
├── DroneVehicle
│   ├── train
│   │   ├── rgb
│   │   │   ├── images
│   │   │   ├── labels
│   │   ├── ir
│   │   │   ├── images
│   │   │   ├── labels
│   ├── test
│   │   ├── rgb
│   │   │   ├── images
│   │   │   ├── labels
│   │   ├── ir
│   │   │   ├── images
│   │   │   ├── labels
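
As a convenience sketch (the root directory is a placeholder; point it at wherever you keep your datasets), the expected layout can be created before extracting the downloaded archives:

# Create the expected DroneVehicle folder layout using Bash brace expansion
mkdir -p root/DroneVehicle/{train,test}/{rgb,ir}/{images,labels}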

Train and test

Pass the config file (and, for testing, a trained checkpoint) to the training and test scripts:

python ./tools/train.py ${CONFIG_FILE}
python ./tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE}
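
As a concrete sketch, the paths below are placeholders rather than files confirmed by this README, and --eval mAP is the standard MMRotate evaluation switch that we assume applies to the DroneVehicle config here:

# Placeholder paths for illustration only; substitute the repo's DroneVehicle config
CONFIG=configs/e2e_mfd_dronevehicle.py
CHECKPOINT=work_dirs/e2e_mfd_dronevehicle/latest.pth

python ./tools/train.py ${CONFIG}                          # writes checkpoints to work_dirs/
python ./tools/test.py ${CONFIG} ${CHECKPOINT} --eval mAP  # evaluates the trained checkpoint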

Generate fusion images

python ./tools/generate_fusion_image.py
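
The arguments of generate_fusion_image.py are not documented here; assuming it follows the same config/checkpoint convention as tools/test.py (an assumption, please check the script before running):

# Assumed invocation; verify against the script's argument parser
python ./tools/generate_fusion_image.py ${CONFIG_FILE} ${CHECKPOINT_FILE}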

Results

DroneVehicle weights
DroneVehicle logs

Citation

If our code is helpful to you, please cite:

@ARTICLE{10075555,
  author={Zhang, Jiaqing and Lei, Jie and Xie, Weiying and Fang, Zhenman and Li, Yunsong and Du, Qian},
  journal={IEEE Transactions on Geoscience and Remote Sensing}, 
  title={SuperYOLO: Super Resolution Assisted Object Detection in Multimodal Remote Sensing Imagery}, 
  year={2023},
  volume={61},
  number={},
  pages={1-15},
  doi={10.1109/TGRS.2023.3258666}}

@article{zhang2023guided,
  title={Guided Hybrid Quantization for Object Detection in Remote Sensing Imagery via One-to-one Self-teaching},
  author={Zhang, Jiaqing and Lei, Jie and Xie, Weiying and Li, Yunsong and Yang, Geng and Jia, Xiuping},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  year={2023},
  publisher={IEEE}
}

@misc{zhang2024e2emfd,
      title={E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection}, 
      author={Jiaqing Zhang and Mingxiang Cao and Xue Yang and Weiying Xie and Jie Lei and Daixun Li and Wenbo Huang and Yunsong Li},
      year={2024},
      eprint={2403.09323},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2403.09323}, 
}

