We recommend three object detection datasets for training a supernet: COCO, Objects365, and OpenImages.
- COCO dataset
  - Download the whole dataset from link.
  - Formulate the data directory as follows:
    ```
    /path/to/coco/
    ├─annotations/
    │ ├─instances_train2017.json
    │ ├─instances_val2017.json
    │ └─...
    └─images/
      ├─train2017/
      │ ├─{image_id}.jpg
      │ └─...
      └─val2017/
        ├─{image_id}.jpg
        └─...
    ```
- Objects365 dataset
  - Download the whole dataset from link.
  - Note that Objects365v1, the version used in the paper, is no longer available; use v2 instead, which contains more data.
  - Formulate the data directory as follows:
    ```
    /path/to/object365/
    ├─annotations/
    │ ├─objects365_train.json
    │ ├─objects365_val.json
    │ └─...
    └─images/
      ├─train/
      │ ├─{image_id}.jpg
      │ └─...
      ├─val/
      │ ├─{image_id}.jpg
      │ └─...
      └─test/
        ├─{image_id}.jpg
        └─...
    ```
  - Convert the meta-file of Objects365 from coco-style to custom-style (required):
    ```shell
    cd /path/to/GAIA-det
    python tools/convert_datasets/coco2custom.py \
        --data_dir /path/to/Objects365 \
        --src_name objects365_train.json \
        --dst_name objects365_generic_train.json
    ```
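For reference, a coco-style meta-file like `objects365_train.json` stores top-level `images`, `annotations`, and `categories` lists. A minimal sketch for inspecting one before conversion (the `summarize_coco_style` helper is hypothetical, not part of the repo):

```python
import json

def summarize_coco_style(meta):
    """Count the entries of a coco-style annotation dict, which has
    top-level 'images', 'annotations', and 'categories' lists."""
    return {
        "num_images": len(meta["images"]),
        "num_annotations": len(meta["annotations"]),
        "num_categories": len(meta["categories"]),
    }

# Usage: summarize_coco_style(json.load(open("objects365_train.json")))
```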
- OpenImages dataset
  - Download the whole dataset from link.
  - Note that the version used in the paper is the OpenImages 2019 Challenge; its annotations can be found at link.
  - Formulate the data directory as follows:
    ```
    /path/to/OpenImages/
    ├─annotations/
    │ ├─bbox_labels_500_hierarchy.json
    │ ├─challenge-2019-classes-description-500.csv
    │ ├─challenge-2019-train-detection-bbox.csv
    │ └─...
    └─images/
      ├─train/
      │ ├─{image_id}.jpg
      │ └─...
      ├─val/
      │ ├─{image_id}.jpg
      │ └─...
      └─test-challenge/
        ├─{image_id}.jpg
        └─...
    ```
  - Convert the meta-file of OpenImages from csv to custom-style (required):
    ```shell
    cd /path/to/GAIA-det
    python tools/convert_datasets/oid2custom.py \
        --oid_dir /path/to/OpenImages \
        --dst_name oid500_generic_train.json
    ```
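Unlike coco-style json, the OpenImages bbox csv stores coordinates normalized to [0, 1] in `XMin`/`XMax`/`YMin`/`YMax` columns. A sketch of the kind of coordinate conversion involved (the `oid_boxes` helper is illustrative and assumes only the standard bbox columns; it is not the repo's converter):

```python
import csv
import io

def oid_boxes(csv_text, image_sizes):
    """Parse OpenImages-style bbox rows (normalized XMin/XMax/YMin/YMax)
    into absolute [x, y, w, h] boxes, grouped by ImageID.
    image_sizes maps ImageID -> (width, height) in pixels."""
    boxes = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        w, h = image_sizes[row["ImageID"]]
        x1, x2 = float(row["XMin"]) * w, float(row["XMax"]) * w
        y1, y2 = float(row["YMin"]) * h, float(row["YMax"]) * h
        boxes.setdefault(row["ImageID"], []).append(
            [x1, y1, x2 - x1, y2 - y1])
    return boxes
```

Note that image sizes must come from the images themselves, since the csv carries only normalized coordinates.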