
Introduction

This is an unofficial PyTorch implementation of *VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection*. A large part of this project is based on the work here.
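VoxelNet's first stage partitions the raw point cloud into a 3D voxel grid and caps the number of points kept per voxel. The sketch below illustrates that grouping step in plain NumPy; the voxel dimensions and the 35-point cap are the values reported in the paper, but the function itself is illustrative and not this repository's code.

```python
import numpy as np

def voxelize(points, voxel_size=(0.4, 0.2, 0.2), max_points=35):
    """Group an (N, 3) point cloud into voxels, keeping at most
    `max_points` points per voxel (a random-sampling cap in the paper;
    here we simply keep the first arrivals for brevity)."""
    # Integer voxel index for every point along each axis.
    coords = np.floor(points / np.asarray(voxel_size)).astype(np.int32)
    voxels = {}
    for pt, c in zip(points, map(tuple, coords)):
        bucket = voxels.setdefault(c, [])
        if len(bucket) < max_points:
            bucket.append(pt)
    return voxels

pts = np.random.rand(1000, 3) * 10.0
vox = voxelize(pts)
```

Each non-empty voxel is then encoded by stacked voxel feature encoding (VFE) layers before the convolutional middle layers and the region proposal network.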

Dependencies

  • Python 3.5+
  • PyTorch (tested on 0.3.1)
  • OpenCV
  • Shapely
  • Mayavi

Installation

  1. Clone this repository.
  2. Compile the Cython module for box_overlaps:
$ python3 setup.py build_ext --inplace
  3. Compile the NMS module:
$ python3 nms/build.py
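The compiled box_overlaps extension computes pairwise overlap between axis-aligned 2D boxes. As a rough reference, here is a plain-NumPy sketch of the common Faster R-CNN-style overlap utility; the `(x1, y1, x2, y2)` box format and the function's semantics are assumptions about this repo's extension, which exists to do the same computation much faster.

```python
import numpy as np

def bbox_overlaps(boxes, queries):
    """Pairwise IoU between axis-aligned boxes in (x1, y1, x2, y2) form.

    Returns an (len(boxes), len(queries)) matrix of IoU values.
    """
    # Clipped intersection extents along x and y.
    ix = np.maximum(0.0,
        np.minimum(boxes[:, None, 2], queries[None, :, 2])
        - np.maximum(boxes[:, None, 0], queries[None, :, 0]))
    iy = np.maximum(0.0,
        np.minimum(boxes[:, None, 3], queries[None, :, 3])
        - np.maximum(boxes[:, None, 1], queries[None, :, 1]))
    inter = ix * iy
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_q = (queries[:, 2] - queries[:, 0]) * (queries[:, 3] - queries[:, 1])
    return inter / (area_b[:, None] + area_q[None, :] - inter)
```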

Data Preparation

  1. Download the 3D KITTI detection dataset from here. Data to download include:

    • Velodyne point clouds (29 GB): input data to VoxelNet
    • Training labels of object data set (5 MB): input label to VoxelNet
    • Camera calibration matrices of object data set (16 MB): for visualization of predictions
    • Left color images of object data set (12 GB): for visualization of predictions
  2. In this project, the point cloud data is cropped for training and validation; points that fall outside the image field of view are removed:

$ python3 data/crop.py
  3. Split the training set into a training set and a validation set according to the protocol here.
└── DATA_DIR
       ├── training   <-- training data
       |   ├── image_2
       |   ├── label_2
       |   ├── velodyne
       |   └── crop
       └── testing    <-- testing data
           ├── image_2
           ├── label_2
           ├── velodyne
           └── crop
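The cropping step above projects each LiDAR point into the camera image and discards points that land outside it. The following is a sketch of that kind of filtering under standard KITTI conventions (in KITTI the projection is `P2 @ R0_rect @ Tr_velo_to_cam`, and typical image sizes are around 1242x375); the function name, signature, and defaults are assumptions for illustration, not the actual contents of data/crop.py.

```python
import numpy as np

def crop_to_image(points, proj, img_w=1242, img_h=375):
    """Keep only the points whose projection lands inside the image.

    `points` is (N, 3); `proj` is an assumed 3x4 projection matrix that
    maps homogeneous LiDAR coordinates to image-plane coordinates.
    """
    n = points.shape[0]
    hom = np.hstack([points[:, :3], np.ones((n, 1))])  # homogeneous coords
    cam = hom @ proj.T                                 # (N, 3): (u*z, v*z, z)
    z = cam[:, 2]
    u, v = cam[:, 0] / z, cam[:, 1] / z                # perspective divide
    # Keep points in front of the camera and inside the image bounds.
    keep = (z > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return points[keep]
```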

Train

TODO

  • training code
  • data augmentation
  • validation code
  • reproduce results for Car, Pedestrian and Cyclist
  • multi-gpu support
  • improve performance