
UNP

This repository contains the source code for our paper "Understanding Negative Proposals in Generic Few-Shot Object Detection" by Bowei Yan, Chunbo Lang, Gong Cheng, and Junwei Han.

Abstract: Recently, Few-Shot Object Detection (FSOD) has received considerable research attention as a strategy for reducing reliance on extensively labeled bounding boxes. However, current approaches encounter significant challenges due to the intrinsic issue of incomplete annotation while building the instance-level training benchmark. In such cases, the instances with missing annotations are regarded as background, resulting in erroneous training gradients back-propagated through the detector, thereby compromising the detection performance. To mitigate this challenge, we introduce a simple and highly efficient method that can be plugged into both meta-learning-based and transfer-learning-based methods. Our method incorporates two innovative components: Confusing Proposals Separation (CPS) and Affinity-Driven Gradient Relaxation (ADGR). Specifically, CPS effectively isolates confusing negatives while ensuring the contribution of hard negatives during model fine-tuning; ADGR then adjusts their gradients based on the affinity to different category prototypes. As a result, false-negative samples are assigned lower weights than other negatives, alleviating their harmful impacts on the few-shot detector without the requirement of additional learnable parameters. Extensive experiments conducted on the PASCAL VOC and MS-COCO datasets consistently demonstrate that our method significantly outperforms both the baseline and recent FSOD methods. Furthermore, its versatility and efficiency suggest the potential to become a stronger new baseline in the field of FSOD.
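To make the ADGR idea concrete, here is a toy NumPy sketch of affinity-driven down-weighting of negative proposals. All names, the cosine-affinity choice, and the mapping from affinity to weight are illustrative assumptions, not the paper's exact formulation; see the paper and code for the real method.

```python
import numpy as np

def adgr_weights(proposal_feats, prototypes, tau=1.0):
    """Toy sketch of Affinity-Driven Gradient Relaxation (illustrative only).

    Negatives that are highly similar to some category prototype are likely
    unlabeled foreground (false negatives), so their loss weight is relaxed.
    The weight rule w = clip(1 - max_affinity, 0, 1) ** tau is an assumption.
    """
    # L2-normalise so the dot product is cosine similarity
    f = proposal_feats / np.linalg.norm(proposal_feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    affinity = f @ p.T                    # (num_proposals, num_classes)
    max_aff = affinity.max(axis=1)        # affinity to the closest prototype
    # High affinity -> likely false negative -> small gradient weight
    return np.clip(1.0 - max_aff, 0.0, 1.0) ** tau

# A negative proposal aligned with a prototype gets a near-zero weight,
# while an unrelated negative keeps a weight of 1.
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
props = np.array([[0.99, 0.01], [-1.0, -1.0]])
w = adgr_weights(props, protos)
```

The key design point the sketch captures is that the weighting needs no learnable parameters: it is computed directly from feature affinities, which is what lets the method plug into both meta-learning and transfer-learning detectors.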



📑 Table of Contents

  • Understanding Negative Proposals in Generic Few-Shot Object Detection
    • Table of Contents
    • Installation
    • Code Structure
    • Data Preparation
    • Model training and evaluation on PASCAL VOC
      • Base training
      • Model fine-tuning
      • Evaluation
    • Model training and evaluation on MSCOCO
      • Base training
      • Model fine-tuning
      • Evaluation
    • Model Zoo
    • Acknowledgement

🧩 Installation

Our code is built on MMFewShot; please refer to install.md for instructions on installing the MMFewShot framework. Please note that we used Detectron 0.1.0 in this project; higher versions of Detectron may report errors.

🏰 Code Structure

  • configs: Configuration files
  • checkpoints: Checkpoints
  • Weights: Pre-trained models
  • Data: Datasets for base training and fine-tuning
  • mmfewshot: Model framework
  • Tools: Analysis and visualization tools

💾 Data Preparation

  • Our model is evaluated on two FSOD benchmarks, PASCAL VOC and MSCOCO, following the previous work TFA.
  • Please prepare the original PASCAL VOC and MSCOCO datasets, as well as the few-shot datasets, in the folders ./data/voc and ./data/coco, respectively.
  • Please refer to PASCAL VOC and MSCOCO for more details.
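Under this TFA-style setup, the expected layout is roughly the following (subdirectory contents beyond data/coco and data/voc are illustrative; follow the dataset preparation docs above for the exact files):

```
data/
├── coco/   # original MS-COCO images/annotations plus the few-shot split files
└── voc/    # original PASCAL VOC (e.g. VOC2007/VOC2012) plus the few-shot split files
```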

📖 Model training and evaluation on PASCAL VOC

  • Model training has two steps: first we train the model on the base classes, and then we fine-tune it on the novel classes.
  • The training script for base training is
    sh script/dist_train_voc_base.sh
  • After base training, we perform 1/2/3/5/10/30-shot fine-tuning on the novel classes; the training script is
    sh script/dist_train_voc_finetuning.sh
  • We evaluate our model on the three PASCAL VOC splits, following TFA. The evaluation script is
    sh script/dist_test_voc.sh

📖 Model training and evaluation on MSCOCO

  • As with the PASCAL VOC benchmark, model training has two steps.
  • The training script for base training is
    sh script/dist_train_coco_base.sh
  • After base training, we perform 10/30-shot fine-tuning on the novel classes; the training script is
    sh script/dist_train_coco_finetuning.sh
  • The evaluation script is
    sh script/dist_test_coco.sh

📚 Model Zoo

  • We provide both the base-trained models (on base classes) and the novel-fine-tuned models (on novel classes) for both benchmarks. The models are available via Baidu Drive.

👏 Acknowledgement

  • This repo is built on top of TFA and mmfewshot. Thanks for their wonderful codebases.
