Silhouettes for calibration and reconstruction from multiple views
Sinha, Sudipta N. Silhouettes for Calibration and Reconstruction From Multiple Views. 2009. https://doi.org/10.17615/csc8-rq94
- Last Modified
- March 21, 2019
Sinha, Sudipta N.
- Affiliation: College of Arts and Sciences, Department of Computer Science
- In this thesis, we study how silhouettes extracted from images and video can help with two fundamental problems of 3D computer vision: multi-view camera calibration and 3D surface reconstruction from multiple images.

First, we present an automatic method for calibrating a network of cameras that analyzes only the motion of silhouettes in the multiple video streams. This is particularly useful for the automatic reconstruction of a dynamic event with a camera network when pre-calibration of the cameras is impractical or even impossible. Our key contribution is a novel RANSAC-based algorithm that simultaneously computes the epipolar geometry and synchronization of a pair of cameras from the motion of silhouettes in video alone. The approach first computes the epipolar geometry and synchronization independently for pairs of cameras in the network, and then recovers the calibration and synchronization of the complete network. The fundamental matrices from the first stage determine a projective reconstruction, which is upgraded to a metric reconstruction using self-calibration. Finally, a visual-hull algorithm reconstructs the shape of the dynamic object from its silhouettes in video. For unsynchronized video streams with sub-frame temporal offsets, we interpolate silhouettes between successive frames to obtain more accurate visual hulls.

In the second part of the thesis, we address shortcomings of existing volumetric multi-view stereo approaches. First, we propose a novel formulation for multi-view stereo that allows robust and accurate fusion of the silhouette and stereo cues. We show that exact silhouette constraints can be enforced within the graph-cut optimization step of the volumetric multi-view stereo algorithm, which guarantees that the reconstructed surface is exactly consistent with the original silhouettes.
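The pairwise calibration stage described above ultimately rests on estimating a fundamental matrix from putative correspondences inside a RANSAC loop. The thesis derives those correspondences from silhouettes; the sketch below shows only the generic building block, a Hartley-normalized eight-point fundamental-matrix estimate in NumPy. All names and the synthetic setup are illustrative, not taken from the thesis:

```python
import numpy as np

def _normalize(pts):
    """Hartley normalization: centroid at origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    """Fundamental matrix F with x2_h^T @ F @ x1_h = 0, from N >= 8 matches."""
    p1, T1 = _normalize(x1)
    p2, T2 = _normalize(x2)
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # least-squares null vector
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic two-camera check (hypothetical intrinsics and motion):
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, np.array([[0.5], [0.0], [0.1]])])
Xh = np.column_stack([X, np.ones(len(X))])
proj = lambda P: ((P @ Xh.T).T[:, :2].T / (P @ Xh.T).T[:, 2]).T
x1, x2 = proj(P1), proj(P2)
F = eight_point(x1, x2)
# Algebraic epipolar residuals are close to zero on exact data.
res = [abs(np.append(x2[i], 1) @ F @ np.append(x1[i], 1)) for i in range(len(X))]
```

In a RANSAC setting, `eight_point` would be run on minimal samples of matches and scored by an epipolar distance; the thesis additionally searches over the unknown temporal offset between the two video streams.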
Contrary to previous work on silhouette and stereo fusion, silhouette consistency is guaranteed by construction through hard constraints in the graph-cut problem: the silhouette-consistency terms are not part of the energy being minimized, which seeks a surface of maximal photo-consistency. Finally, we have also developed an adaptive graph-construction approach for graph-cut based multi-view stereo that addresses the inherent high memory and computational overhead of the basic algorithm. The approach needs no initialization and is not restricted to a specific surface topology, a limitation of existing methods that use a base surface for initialization. With this method, we have efficiently reconstructed accurate and detailed 3D models of objects from high-resolution images for a number of different datasets.
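To make the hard-constraint idea concrete, here is a toy min-cut example; it is a hypothetical illustration, not the thesis's volumetric implementation. Voxels known to project outside a silhouette are tied to the sink with infinite capacity, so no finite photo-consistency cost can place them inside the surface, which is why silhouette consistency holds by construction rather than by energy weighting:

```python
from collections import deque

def min_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns (flow value, source side of the min cut).

    cap: dict {(u, v): capacity}, modified in place into residual capacities.
    """
    adj = {}
    for (u, v) in list(cap):
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
        cap.setdefault((v, u), 0)
    flow = 0
    while True:
        parent = {s: None}              # BFS tree; keys = reachable nodes
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:             # no augmenting path left: done
            return flow, set(parent)
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[e] for e in path)   # bottleneck capacity on the path
        for (u, v) in path:
            cap[(u, v)] -= b
            cap[(v, u)] += b
        flow += b

INF = float("inf")

# Toy 1-D "volume" of voxels 0..4; the min cut chooses where the surface lies.
cap = {
    ("S", 0): INF,                              # seed voxel known to be inside
    (0, 1): 5, (1, 2): 4, (2, 3): 3, (3, 4): 1,  # photo-consistency cut costs
    (3, "T"): INF,                              # voxels 3 and 4 fall outside a
    (4, "T"): INF,                              # silhouette: hard constraint
}
flow, inside = min_cut(cap, "S", "T")
# The cheapest photo-consistency edge (3, 4) is not cut: the infinite-capacity
# silhouette edges force the surface to edge (2, 3), at cost 3.
```

A real volumetric solver would use a 3D grid and a specialized max-flow routine, but the encoding of the hard constraint as an infinite-capacity terminal edge is the same.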
- Date of publication
- August 2009
- Resource type
- Rights statement
- In Copyright
- Pollefeys, Marc
- Degree granting institution
- University of North Carolina at Chapel Hill
- Access right
- Open access
- Date uploaded
- October 11, 2010