arxivst: stuff from arXiv that you should probably bookmark

2D to 3D Depth in Noisy Environments

Post · Apr 5, 2017 22:20

state-of-the-art 2d-3d-depth cs-cv

This paper reports state-of-the-art results at recovering depth information from noisy data, even when the 3D space partitioning of the output is unknown. It also tackles two downsides of the traditional fusion approach, which needs a large number of frames to filter out sensor noise and can't reconstruct surfaces it can't see.

Highlights From The Paper

  • “Takes as input one or more depth images and estimates both the complete 3D reconstruction and its 3D space partitioning.”
  • “Learns to produce a smooth and accurate 3D model from highly noisy input.”

Arxiv Abstract

  • Gernot Riegler
  • Ali Osman Ulusoy
  • Horst Bischof
  • Andreas Geiger

In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method achieves great results, it cannot reconstruct surfaces occluded in the input views and requires a large number of frames to filter out sensor noise and outliers. Motivated by large 3D model databases and recent advances in deep learning, we present a novel 3D convolutional network architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based fusion approach significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill gaps in the reconstruction. We evaluate our approach extensively on both synthetic and real-world datasets for volumetric fusion. Further, we apply our approach to the problem of 3D shape completion from a single view where our approach achieves state-of-the-art results.
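For context on the baseline the paper improves upon, here is a minimal NumPy sketch of TSDF averaging in the style of Curless and Levoy: each depth map votes a truncated signed distance into a voxel grid, and per-voxel running averages suppress noise. This is not the paper's method (which replaces the averaging with a learned 3D convolutional network); the function name, grid parameters, and pinhole-intrinsics convention here are illustrative assumptions.

```python
import numpy as np

def fuse_tsdf(depth_maps, poses, intrinsics, grid_shape=(64, 64, 64),
              voxel_size=0.05, origin=(-1.6, -1.6, -1.6), trunc=0.15):
    """Average truncated signed distance functions over a voxel grid.

    Illustrative sketch: depth_maps is a list of HxW depth images (meters),
    poses are 4x4 camera-to-world matrices, intrinsics is a 3x3 pinhole matrix.
    Returns the fused TSDF grid and per-voxel observation weights.
    """
    tsdf = np.zeros(grid_shape, dtype=np.float32)
    weight = np.zeros(grid_shape, dtype=np.float32)

    # World coordinates of every voxel center (C-order matches reshape below).
    ii, jj, kk = np.meshgrid(*[np.arange(n) for n in grid_shape], indexing="ij")
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size + np.asarray(origin)

    for depth, pose in zip(depth_maps, poses):
        # Transform voxel centers into the camera frame.
        cam = (np.linalg.inv(pose) @ np.c_[pts, np.ones(len(pts))].T)[:3].T
        z = cam[:, 2]
        z_safe = np.where(z > 1e-6, z, 1.0)  # avoid divide-by-zero for voxels behind the camera
        # Project into the image with the pinhole model.
        uv = (intrinsics @ cam.T).T
        u = np.round(uv[:, 0] / z_safe).astype(int)
        v = np.round(uv[:, 1] / z_safe).astype(int)
        h, w = depth.shape
        valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        d = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0.0)
        valid &= d > 0
        # Signed distance along the viewing ray, truncated to [-trunc, trunc].
        sdf = np.clip(d - z, -trunc, trunc)
        valid &= sdf > -trunc  # discard voxels far behind the observed surface
        sdf = np.where(valid, sdf, 0.0)
        # Per-voxel running weighted average (each frame contributes weight 1).
        w_new = valid.astype(np.float32)
        flat_t, flat_w = tsdf.reshape(-1), weight.reshape(-1)
        flat_t[:] = (flat_t * flat_w + sdf * w_new) / np.maximum(flat_w + w_new, 1e-9)
        flat_w += w_new

    return tsdf, weight
```

The averaging makes the downsides in the abstract concrete: a voxel's estimate only converges as its `weight` grows (many frames), and voxels that no ray ever observes keep weight zero, so occluded surfaces stay empty. These are exactly the gaps the learned fusion network is trained to fill.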

Read the paper (pdf) »