Encoding of multiple depth streams

Last Modified
  • March 20, 2019
  • Kum, Sang-Uok
    • Affiliation: College of Arts and Sciences, Department of Computer Science
With advances in hardware, interest in systems for capturing, representing, and transmitting dynamic real-world scenes has been growing. Examples of such systems include 3D immersive systems, tele-immersion video conferencing systems, 3D TV, and medical consultation and surgical training. These systems use multiple cameras for dynamic scene acquisition.

One approach to scene acquisition is to use multiple video streams captured from varying viewpoints to form a dynamic light field, and then to use image-based rendering (IBR) techniques to generate virtual views of the scene. Another approach is to use the imagery from the multiple cameras to derive a 3D representation of the environment, which is then transmitted with color. The dynamic light field approach handles complex scenes where 3D reconstruction would be difficult. Additionally, the captured video streams can be transmitted with little further processing, whereas deriving 3D information requires extensive processing before transmission. However, more cameras are needed for dynamic light fields, since light fields must be acquired more densely to be equally effective.

One common 3D representation for dynamic environments is multiple depth streams: streams of images with color and depth per pixel. A large number of depth streams is needed to properly represent a dynamic scene. Without compression, multiple depth streams, even sparsely sampled, require significant storage and transmission bandwidth. Fortunately, the strong spatial coherence between streams and temporal coherence between frames allows for an effective encoding of multiple depth streams. In this dissertation, I present an effective encoding algorithm for multiple depth streams that uses approximate per-pixel depth information to predict color for each pixel.
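The abstract's core idea — exploiting spatial coherence between streams by using per-pixel depth to predict color — can be illustrated with a standard depth-image warping sketch. This is not the dissertation's actual encoder; the function name, the pinhole camera model, and the parameter layout (intrinsics `K_ref`/`K_tgt`, rotation `R`, translation `t`) are assumptions chosen for illustration. Each reference pixel is back-projected to 3D using its depth, transformed into a neighboring camera's frame, and reprojected; the reference color then serves as the prediction at the landing pixel, so only the residual would need to be coded.

```python
import numpy as np

def predict_colors(ref_color, ref_depth, K_ref, K_tgt, R, t):
    """Warp a reference color+depth image into a target view.

    A minimal sketch of depth-based color prediction: for each
    reference pixel, back-project to 3D using its depth, transform
    into the target camera frame, and project. The reference color
    becomes the prediction at the pixel it lands on; pixels nothing
    lands on stay zero (holes an encoder would code directly).
    """
    h, w = ref_depth.shape
    pred = np.zeros_like(ref_color)
    K_ref_inv = np.linalg.inv(K_ref)
    for v in range(h):
        for u in range(w):
            z = ref_depth[v, u]
            if z <= 0:  # no depth estimate: cannot predict this pixel
                continue
            # Back-project pixel (u, v) to a 3D point in the reference frame.
            p = z * (K_ref_inv @ np.array([u, v, 1.0]))
            # Move into the target camera frame and project.
            q = K_tgt @ (R @ p + t)
            if q[2] <= 0:  # behind the target camera
                continue
            ut = int(round(q[0] / q[2]))
            vt = int(round(q[1] / q[2]))
            if 0 <= ut < w and 0 <= vt < h:
                pred[vt, ut] = ref_color[v, u]
    return pred
```

With an identity transform and identical intrinsics, the prediction reproduces the reference exactly; between real neighboring views, the encoder would transmit only the (typically small) difference between predicted and actual color.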
  • Mayer-Patel, Ketan
Rights statement
  • In Copyright
  • Open access
