Multi-camera Simultaneous Localization and Mapping
MLA: Clipp, Brian Sanderson. Multi-camera Simultaneous Localization and Mapping. Chapel Hill, NC: University of North Carolina at Chapel Hill, 2010. https://doi.org/10.17615/wsax-cc96
APA: Clipp, B. (2010). Multi-camera simultaneous localization and mapping. Chapel Hill, NC: University of North Carolina at Chapel Hill. https://doi.org/10.17615/wsax-cc96
Chicago: Clipp, Brian Sanderson. 2010. Multi-Camera Simultaneous Localization and Mapping. Chapel Hill, NC: University of North Carolina at Chapel Hill. https://doi.org/10.17615/wsax-cc96
- Last Modified
- March 21, 2019
Clipp, Brian Sanderson
- Affiliation: College of Arts and Sciences, Department of Computer Science
- In this thesis, we study two aspects of simultaneous localization and mapping (SLAM) for multi-camera systems: minimal solution methods for the scaled motion of non-overlapping and partially overlapping two-camera systems, and online, real-time mapping of large areas using the parallelism inherent in the visual simultaneous localization and mapping (VSLAM) problem. We present the only existing minimal solution method for six-degree-of-freedom structure and motion estimation using a non-overlapping, rigid two-camera system with known intrinsic and extrinsic calibration. One example application of our method is the three-dimensional reconstruction of urban scenes from video. Because our method does not require the cameras' fields of view to overlap, we can maximize coverage of the scene and avoid processing redundant, overlapping imagery. Additionally, we developed a minimal solution method for partially overlapping stereo camera systems that overcomes degeneracies inherent to non-overlapping two-camera systems while retaining a wide total field of view. The method takes two stereo images as input. It uses one feature visible in all four views and three features visible across two temporal view pairs to constrain the camera system's motion. We show in synthetic experiments that our method produces rotation and translation estimates that are more accurate than those of the perspective three-point method as the overlap in the stereo camera's fields of view is reduced. A final part of this thesis is the development of an online, real-time visual SLAM system that achieves real-time speed by exploiting the parallelism inherent in the VSLAM problem. We show that feature tracking, relative pose estimation, and global mapping operations such as loop detection and loop correction can be effectively parallelized.
Additionally, we demonstrate that a combination of short-baseline, differentially tracked corner features, which can be tracked at high frame rates, and wide-baseline-matchable but slower-to-compute features, such as the scale-invariant feature transform (SIFT), can facilitate high-speed visual odometry while simultaneously supporting location recognition for loop detection and global geometric error correction.
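The pipeline parallelism described in the abstract can be sketched as independent stages connected by queues, with tracking and pose estimation running concurrently while downstream mapping consumes their results. This is an illustrative sketch only, not the thesis implementation: the feature tracker and pose estimator are stubbed placeholders, and a real system would run KLT-style tracking, a minimal solver with RANSAC, and loop detection/correction in these stages.

```python
import threading
import queue

def track_features(frame):
    # Placeholder for short-baseline, high-frame-rate corner tracking.
    return {"frame": frame, "tracks": [(frame, i) for i in range(3)]}

def estimate_pose(tracked):
    # Placeholder for relative pose estimation from tracked features.
    return {"frame": tracked["frame"], "pose": ("R_identity", tracked["frame"])}

def tracking_worker(frames_in, tracked_out):
    # Stage 1: consume raw frames, emit tracked features.
    while True:
        frame = frames_in.get()
        if frame is None:              # poison pill: propagate shutdown
            tracked_out.put(None)
            break
        tracked_out.put(track_features(frame))

def pose_worker(tracked_in, poses_out):
    # Stage 2: consume tracked features, emit pose estimates.
    while True:
        tracked = tracked_in.get()
        if tracked is None:
            poses_out.put(None)
            break
        poses_out.put(estimate_pose(tracked))

def run_pipeline(num_frames):
    frames_q, tracked_q, poses_q = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=tracking_worker, args=(frames_q, tracked_q)),
        threading.Thread(target=pose_worker, args=(tracked_q, poses_q)),
    ]
    for t in threads:
        t.start()
    for f in range(num_frames):
        frames_q.put(f)
    frames_q.put(None)
    # A global mapping stage (loop detection/correction) would consume
    # poses_q asynchronously; here we simply collect the poses in order.
    poses = []
    while True:
        item = poses_q.get()
        if item is None:
            break
        poses.append(item)
    for t in threads:
        t.join()
    return poses
```

Because each stage has its own thread and a FIFO queue, a slow stage backs up its input queue without stalling the producer, which is the property the thesis exploits to keep frame-rate tracking decoupled from slower global mapping.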
- Date of publication
- December 2010
- Resource type
- Rights statement
- In Copyright
- "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science."
- Pollefeys, Marc
- Place of publication
- Chapel Hill, NC
- Open access