SPATIO-TEMPORAL REGISTRATION IN AUGMENTED REALITY
- Citation
- MLA: Zheng, Feng. Spatio-Temporal Registration in Augmented Reality. Chapel Hill, NC: University of North Carolina at Chapel Hill Graduate School, 2015. https://doi.org/10.17615/pw64-4p19
- APA: Zheng, F. (2015). Spatio-temporal registration in augmented reality. Chapel Hill, NC: University of North Carolina at Chapel Hill Graduate School. https://doi.org/10.17615/pw64-4p19
- Chicago: Zheng, Feng. 2015. Spatio-Temporal Registration in Augmented Reality. Chapel Hill, NC: University of North Carolina at Chapel Hill Graduate School. https://doi.org/10.17615/pw64-4p19
- Last Modified
- March 19, 2019
- Creator
- Zheng, Feng
- Affiliation: College of Arts and Sciences, Department of Computer Science
- Abstract
- The overarching goal of Augmented Reality (AR) is to provide users with the illusion that virtual and real objects coexist indistinguishably in the same space. An effective, persistent illusion requires accurate registration between the real and the virtual objects, and that registration must be both spatially and temporally coherent. However, visible misregistration can be caused by many inherent error sources, such as errors in calibration, tracking, and modeling, as well as system delay. This dissertation focuses on new methods that could be considered part of "the last mile" of spatio-temporal registration in AR: closed-loop spatial registration and low-latency temporal registration.

1. For spatial registration, the primary insight is that calibration, tracking, and modeling are means to an end; the ultimate goal is registration. In this spirit I present a novel pixel-wise closed-loop registration approach that can automatically minimize registration errors using a reference model comprising the real scene model and the desired virtual augmentations. Registration errors are minimized both in global world space, via camera pose refinement, and in local screen space, via pixel-wise adjustments. This approach is presented in the context of Video See-Through AR (VST-AR) and projector-based Spatial AR (SAR), where registration results are measurable using a commodity color camera.

2. For temporal registration, the primary insight is that the real-virtual relationships evolve throughout the tracking, rendering, scanout, and display steps, and that registration can be improved by leveraging fine-grained processing and display mechanisms. In this spirit I introduce a general low-latency end-to-end system pipeline and propose an algorithm for minimizing latency in displays (DLP DMD projectors in particular). This approach is presented in the context of Optical See-Through AR (OST-AR), where system delay is the most detrimental source of error.
I also discuss future steps that may further improve spatio-temporal registration. In particular, I discuss possibilities for using custom virtual or physical-virtual fiducials for closed-loop registration in SAR. Such custom fiducials can be designed to elicit desirable optical signals that directly indicate any error in the relative pose between the physical and projected virtual objects.
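The closed-loop idea in the abstract can be illustrated with a deliberately tiny sketch (not code from the dissertation): render the reference model at candidate poses near the current estimate, measure the pixel-wise error against the camera observation, and keep the pose that minimizes it. The 1-D "scene", the `render` shift model, and all names here are hypothetical simplifications; the actual approach refines a full 6-DoF camera pose.

```python
import numpy as np

def render(scene, pose):
    # Toy "renderer": the pose is a single integer translation of a 1-D signal.
    return np.roll(scene, pose)

def closed_loop_register(scene, observed, pose_guess, search=5):
    """One closed-loop correction step: compare the observation against
    renderings of the reference model near the current pose estimate and
    return the candidate pose with the smallest pixel-wise (SSD) error."""
    best_pose, best_err = pose_guess, np.inf
    for d in range(-search, search + 1):
        candidate = pose_guess + d
        err = np.sum((render(scene, candidate) - observed) ** 2)
        if err < best_err:
            best_pose, best_err = candidate, err
    return best_pose

# The camera observes the scene at the true pose; the tracker's estimate
# is slightly off, and the measured registration error corrects it.
scene = np.sin(np.linspace(0, 4 * np.pi, 64))
observed = render(scene, 3)                       # true pose = 3
print(closed_loop_register(scene, observed, pose_guess=1))  # -> 3
```

The point of the sketch is the feedback structure: the error is measured in the same image space where misregistration is visible, so residual calibration, tracking, and modeling errors are corrected jointly rather than per source.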
- Date of publication
- May 2015
- DOI
- https://doi.org/10.17615/pw64-4p19
- Rights statement
- In Copyright
- Advisor
- Zhang, Zhengyou
- Niethammer, Marc
- Welch, Gregory Francis
- Bishop, Gary
- Fuchs, Henry
- Degree
- Doctor of Philosophy
- Degree granting institution
- University of North Carolina at Chapel Hill Graduate School
- Graduation year
- 2015
- Place of publication
- Chapel Hill, NC
- Access right
- There are no restrictions to this item.
- Date uploaded
- June 25, 2015
Items
Title | Date Uploaded | Visibility
---|---|---
Zheng_unc_0153D_15357.pdf | 2019-04-09 | Public