
Dense Point-Cloud Reconstruction

The goal of this module is to obtain a point-cloud as complete and accurate as possible at reasonable speed. Since the final goal is a mesh representation, and since a separate module refines the mesh, the completeness and speed of the dense point-cloud estimation matter more than its accuracy. The current implementation is therefore based on the PatchMatch algorithm: PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing, C. Barnes et al., 2009.
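
The core PatchMatch idea is cheap to sketch: starting from randomly initialized depth hypotheses, good hypotheses are propagated from already-processed neighbors and the current winner is randomly perturbed. Below is a minimal, illustrative C++ sketch of one such sweep over a reference depth map; it is not the OpenMVS implementation, and photoCost is an assumed placeholder for a patch-based photometric score (e.g. 1 - NCC) against a neighbor view.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// A per-pixel depth hypothesis map for the reference image.
struct DepthMap {
    int width = 0, height = 0;
    std::vector<float> depth;
    float& at(int x, int y) { return depth[y * width + x]; }
};

// Assumed placeholder: patch-based photometric cost (e.g. 1 - NCC against a
// neighbor view) of hypothesizing depth d at pixel (x, y); lower is better.
float photoCost(int x, int y, float d);

// One PatchMatch sweep: spatial propagation from already-visited neighbors,
// followed by a random perturbation of the current best hypothesis.
void patchMatchSweep(DepthMap& dm, std::mt19937& rng, float minDepth, float maxDepth) {
    std::uniform_real_distribution<float> jitter(-0.05f, 0.05f);
    for (int y = 0; y < dm.height; ++y) {
        for (int x = 0; x < dm.width; ++x) {
            float best = dm.at(x, y);
            float bestCost = photoCost(x, y, best);
            auto tryHypothesis = [&](float d) {
                const float c = photoCost(x, y, d);
                if (c < bestCost) { best = d; bestCost = c; }
            };
            if (x > 0) tryHypothesis(dm.at(x - 1, y));  // propagate from the left neighbor
            if (y > 0) tryHypothesis(dm.at(x, y - 1));  // propagate from the top neighbor
            // Random refinement around the current winner, kept inside the depth range.
            tryHypothesis(std::clamp(best * (1.0f + jitter(rng)), minDepth, maxDepth));
            dm.at(x, y) = best;
        }
    }
}
```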

A second option for estimating the dense point-cloud is the Semi-Global Matching algorithm, implemented as described in: Memory Efficient Semi-Global Matching, H. Hirschmüller et al., 2012. This method is still experimental, so its speed and completeness are sometimes not as good as those of the PatchMatch approach, though its accuracy can be better.
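
For reference, the central ingredient of Semi-Global Matching is the aggregation of per-pixel matching costs along several image paths, with a small penalty P1 for unit disparity changes and a larger penalty P2 for bigger jumps. The sketch below shows this recurrence for a single left-to-right path on one scanline; it only illustrates the aggregation rule and is not the memory-efficient scheme of the paper or the OpenMVS code.

```cpp
#include <algorithm>
#include <vector>

// SGM cost aggregation along one path (left-to-right on a single scanline).
// cost[x * numDisp + d] holds the matching cost of disparity d at column x;
// P1/P2 penalize small and large disparity changes between neighboring columns.
std::vector<float> aggregateLeftToRight(const std::vector<float>& cost,
                                        int width, int numDisp,
                                        float P1, float P2) {
    std::vector<float> L(cost.size());
    for (int d = 0; d < numDisp; ++d)
        L[d] = cost[d];                                  // first column: no predecessor
    for (int x = 1; x < width; ++x) {
        const float* prev = &L[(x - 1) * numDisp];
        const float prevMin = *std::min_element(prev, prev + numDisp);
        for (int d = 0; d < numDisp; ++d) {
            float best = prev[d];                                      // same disparity
            if (d > 0)           best = std::min(best, prev[d - 1] + P1);
            if (d + 1 < numDisp) best = std::min(best, prev[d + 1] + P1);
            best = std::min(best, prevMin + P2);                       // large jump
            // Subtract prevMin so the aggregated costs stay bounded along the path.
            L[x * numDisp + d] = cost[x * numDisp + d] + best - prevMin;
        }
    }
    return L;
}
```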

Mesh Reconstruction

This module aims at estimating a mesh surface that best explains the input point-cloud while being robust to outliers. The input point-cloud can be dense or sparse, so the algorithm must perform well in both cases. For these reasons, the current implementation is based on the paper: Exploiting Visibility Information in Surface Reconstruction to Preserve Weakly Supported Surfaces, M. Jancosek et al., 2014.
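
As a rough illustration of how visibility information enters such a reconstruction: every camera-to-point observation marks the space it crosses as likely empty, and those votes raise the cost of placing the surface there when the cells of a tetrahedralization are labeled inside/outside by an s-t min-cut. The C++ sketch below shows only this vote-accumulation step, with the tetrahedralization, the ray/facet walking and the min-cut abstracted away; all names are illustrative, not the OpenMVS API.

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct FacetGraph {
    // One cut cost per oriented facet of the tetrahedralization; the s-t min-cut
    // labeling cells as inside/outside pays this cost when the surface crosses the facet.
    std::vector<float> facetWeight;
};

// Assumed helper: walkRay(origin, target, visit) calls visit(facetId) for every
// facet crossed by the segment origin->target, in order.
using RayWalker = std::function<void(const Vec3&, const Vec3&,
                                     const std::function<void(std::size_t)>&)>;

// Accumulate free-space votes: the space between a camera and a point it observes
// is likely empty, so cutting the surface along that ray becomes more expensive.
void accumulateVisibility(FacetGraph& graph,
                          const std::vector<Vec3>& cameras,
                          const std::vector<Vec3>& points,
                          const std::vector<std::pair<int, int>>& visibility, // (camera, point)
                          const RayWalker& walkRay,
                          float voteWeight = 1.0f) {
    for (const auto& [cam, pt] : visibility) {
        walkRay(cameras[cam], points[pt],
                [&](std::size_t facetId) { graph.facetWeight[facetId] += voteWeight; });
    }
}
```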

Mesh Refinement

Rough meshes obtained by the previous module are in general a good enough starting point for a variational refinement step. Such algorithms are relatively fast and able to recover the true surface even when only a coarse input mesh is provided (as in the case of meshes estimated from a sparse point-cloud, or of texture-less scenes). The algorithm employed for this task is based on the paper: High Accuracy and Visibility-Consistent Dense Multiview Stereo, HH. Vu et al., 2012.
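
As a rough illustration of what a variational refinement step does: each vertex is moved by gradient descent on an energy combining a multi-view photo-consistency term with a smoothness regularizer. The C++ sketch below shows one such descent step using a uniform (umbrella) Laplacian for smoothing; photoGradient is an assumed placeholder for the derivative of the photometric error with respect to the vertex position, and none of this is the OpenMVS implementation.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<std::vector<std::size_t>> neighbors;   // one-ring adjacency per vertex
};

// Assumed placeholder: gradient of the multi-view photometric error at vertex v.
Vec3 photoGradient(const Mesh& mesh, std::size_t v);

// One gradient-descent step: descend the photometric term and pull each vertex
// toward the centroid of its one-ring neighborhood (uniform Laplacian smoothing).
void refineStep(Mesh& mesh, float photoStep, float smoothStep) {
    std::vector<Vec3> updated = mesh.vertices;
    for (std::size_t v = 0; v < mesh.vertices.size(); ++v) {
        Vec3 laplacian{};
        const auto& ring = mesh.neighbors[v];
        if (!ring.empty()) {
            for (std::size_t n : ring) laplacian = laplacian + mesh.vertices[n];
            laplacian = laplacian * (1.0f / ring.size()) - mesh.vertices[v];
        }
        updated[v] = mesh.vertices[v]
                   - photoGradient(mesh, v) * photoStep
                   + laplacian * smoothStep;
    }
    mesh.vertices = std::move(updated);
}
```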

Mesh Texturing

Given a perfect mesh reconstruction and ground-truth camera poses, obtaining the texture is a relatively straightforward step. In reality, however, both the mesh and the camera poses contain at least slight errors, and the mesh texturing module must be able to cope with them. A very good paper describing such an algorithm, implemented in OpenMVS, is: Let There Be Color! - Large-Scale Texturing of 3D Reconstructions, M. Waechter et al., 2014.
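
As a simplified illustration of the first stage of such a texturing pipeline: each face is assigned the view that observes it best, after which seams are smoothed and colors adjusted. The C++ sketch below shows only a greedy per-face view selection with an abstracted quality score; the paper instead formulates this labeling as a pairwise MRF and adds a dedicated seam-leveling step, so treat this purely as an outline.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Assumed placeholder: how well view v observes face f, e.g. a combination of
// projected area and image sharpness; 0 if the face is occluded or behind the camera.
float faceQuality(std::size_t face, std::size_t view);

// Greedy per-face view selection: returns, for each face, the index of the chosen
// view, or size_t max if no view sees the face at all.
std::vector<std::size_t> selectViews(std::size_t numFaces, std::size_t numViews) {
    std::vector<std::size_t> label(numFaces, std::numeric_limits<std::size_t>::max());
    for (std::size_t f = 0; f < numFaces; ++f) {
        float best = 0.f;
        for (std::size_t v = 0; v < numViews; ++v) {
            const float q = faceQuality(f, v);
            if (q > best) { best = q; label[f] = v; }
        }
    }
    return label;
}
```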
