autonomous driving datasets

Author: Li Heyuan (李贺元)
Email: lhyfst@gmail.com
All rights reserved

This project is still under development.


KITTI

website link: http://www.cvlibs.net/datasets/kitti/

paper:

  1. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite
  2. Vision meets Robotics: The KITTI Dataset
  3. A New Performance Measure and Evaluation Benchmark for Road Detection Algorithms
  4. Object Scene Flow for Autonomous Vehicles

data format: lidar, stereo

dataset size: ~40G

competition:

  1. stereo
  2. flow
  3. sceneflow
  4. depth completion
  5. single image depth prediction
  6. visual odometry
  7. 3d object detection
  8. multi-object tracking
  9. road/lane detection
  10. semantic and instance segmentation
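
KITTI's Velodyne scans are distributed as flat binary files of float32 values, four per point (x, y, z, reflectance). A minimal NumPy loading sketch, assuming the object-detection directory layout and a placeholder file name:

```python
import numpy as np

def load_velodyne_scan(bin_path):
    """Read one KITTI Velodyne scan: a flat float32 file, 4 values per point."""
    scan = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    points = scan[:, :3]       # x, y, z in the LiDAR frame (meters)
    reflectance = scan[:, 3]   # per-point intensity
    return points, reflectance

# placeholder path into the object-detection training split
points, reflectance = load_velodyne_scan("training/velodyne/000000.bin")
print(points.shape, reflectance.shape)
```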

SemanticKITTI

website link: http://semantic-kitti.org/index.html

paper: SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences

data format: point cloud

dataset size: ~60G

competition:

  1. semantic segmentation
  2. semantic scene completion
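
SemanticKITTI reuses the KITTI point clouds and adds one `.label` file per scan: a uint32 per point whose lower 16 bits hold the semantic class and whose upper 16 bits hold the instance id. A minimal sketch with placeholder paths inside one sequence folder:

```python
import numpy as np

def load_labeled_scan(bin_path, label_path):
    """Pair a KITTI-format scan with its SemanticKITTI per-point labels."""
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)[:, :3]
    raw = np.fromfile(label_path, dtype=np.uint32)
    semantic = raw & 0xFFFF    # lower 16 bits: semantic class id
    instance = raw >> 16       # upper 16 bits: instance id within that class
    assert semantic.shape[0] == points.shape[0], "scan/label length mismatch"
    return points, semantic, instance

# placeholder paths
points, semantic, instance = load_labeled_scan(
    "sequences/00/velodyne/000000.bin", "sequences/00/labels/000000.label")
```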

Lyft Level 5

website link: https://level5.lyft.com/dataset/

paper: Lyft Level 5 AV Dataset 2019

data format: camera, lidar, radar

competition:

  1. 3d object detection
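
The data follows a nuScenes-style table layout, and Lyft published an SDK for it (`lyft_dataset_sdk` on PyPI). A minimal sketch, assuming the archives were extracted to the placeholder paths below:

```python
from lyft_dataset_sdk.lyftdataset import LyftDataset

# placeholder paths; point them at the extracted data and its JSON tables
level5 = LyftDataset(data_path="/data/lyft_level5",
                     json_path="/data/lyft_level5/train_data",
                     verbose=True)

scene = level5.scene[0]                              # first recorded scene
sample = level5.get("sample", scene["first_sample_token"])
print(sample["data"].keys())                         # available sensor channels
```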

KAIST URBAN DATA SET

website link: http://irap.kaist.ac.kr/dataset/

paper:

  1. Complex Urban Dataset with Multi-level Sensors from Highly Diverse Urban Environments
  2. Road is Enough! Extrinsic Calibration of Non-overlapping Stereo Camera and LiDAR using Road Information
  3. Complex Urban LiDAR Data Set

data format: lidar, stereo

Baidu Apolloscapes

website link: http://apolloscape.auto/

competition:

  1. scene parsing
  2. car instance
  3. lane segmentation
  4. localization
  5. trajectory
  6. detection
  7. tracking
  8. stereo

Virtual KITTI dataset

website link: https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds/

paper:

  1. Virtual Worlds as Proxy for Multi-Object Tracking Analysis

dataset size: ~30G

nuScenes

website link: https://www.nuscenes.org

paper:

  1. nuScenes: A multimodal dataset for autonomous driving

data format: camera, lidar, radar

competition:

  1. object detection
  2. tracking
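
The official `nuscenes-devkit` (PyPI) indexes scenes, samples, and sensor files. A minimal sketch, assuming the v1.0-mini split sits under the placeholder dataroot:

```python
from nuscenes.nuscenes import NuScenes

# placeholder dataroot; use the directory the archives were extracted to
nusc = NuScenes(version="v1.0-mini", dataroot="/data/sets/nuscenes", verbose=True)

scene = nusc.scene[0]                                # first scene in the split
sample = nusc.get("sample", scene["first_sample_token"])
lidar_token = sample["data"]["LIDAR_TOP"]            # keyframe LiDAR sweep
print(nusc.get("sample_data", lidar_token)["filename"])
```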

A*3D Dataset

website link: https://github.com/I2RDL2/ASTAR-3D

paper:

  1. A*3D Dataset: Towards Autonomous Driving in Challenging Environments

data format: lidar, image

H3D

website link: https://usa.honda-ri.com/h3d

paper:

  1. The H3D Dataset for Full-Surround 3D Multi-Object Detection and Tracking in Crowded Urban Scenes

data format: lidar

Berkeley DeepDrive BDD100k

website link: https://bdd-data.berkeley.edu/

paper:

  1. BDD100K: A Diverse Driving Video Database with Scalable Annotation Tooling

data format: video

competition:

  1. Drivable Area
  2. Road Object Detection
  3. Domain Adaptation
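
BDD100K's detection labels ship as JSON, one record per image with a list of labeled objects. The field names below follow the published label format, but treat this as a hedged sketch with a placeholder file name:

```python
import json

# placeholder path to one of the released detection label files
with open("bdd100k_labels_images_val.json") as f:
    frames = json.load(f)

frame = frames[0]
print(frame["name"])                        # source image file name
for label in frame.get("labels", []):
    box = label.get("box2d")
    if box:                                 # lane/drivable-area labels carry no box2d
        print(label["category"], box["x1"], box["y1"], box["x2"], box["y2"])
```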

Cityscapes Dataset

website link: https://www.cityscapes-dataset.com/

paper:

  1. The Cityscapes Dataset for Semantic Urban Scene Understanding
  2. The Cityscapes Dataset

competition:

  1. Pixel-Level Semantic Labeling Task
  2. Instance-Level Semantic Labeling Task
  3. Panoptic Semantic Labeling Task
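
Fine annotations are stored as per-pixel id images (`*_gtFine_labelIds.png`), and the official `cityscapesscripts` package provides the id-to-trainId mapping used by the benchmarks. A minimal sketch with a placeholder file name:

```python
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import id2label

# placeholder path to one fine-annotation id image
label_ids = np.array(Image.open("aachen_000000_000019_gtFine_labelIds.png"))

# map raw label ids to the train ids used by the benchmarks (255 = ignore)
train_ids = np.full(label_ids.shape, 255, dtype=np.uint8)
for raw_id, label in id2label.items():
    if 0 <= label.trainId < 255:
        train_ids[label_ids == raw_id] = label.trainId
```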

comma.ai

website link: https://archive.org/details/comma-dataset

Oxford RobotCar

website link: https://robotcar-dataset.robots.ox.ac.uk/

paper:

  1. 1 Year, 1000km: The Oxford RobotCar Dataset
