Paper
16 January 2006
Real-time 3D video compression for tele-immersive environments
Zhenyu Yang, Yi Cui, Zahid Anwar, Robert Bocchino, Nadir Kiyanclar, Klara Nahrstedt, Roy H. Campbell, William Yurcik
Proceedings Volume 6071, Multimedia Computing and Networking 2006; 607102 (2006) https://doi.org/10.1117/12.642513
Event: Electronic Imaging 2006, 2006, San Jose, California, United States
Abstract
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression design space, trading off complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ≈ 13 ms per 3D video frame).
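The two schemes sketched in the abstract can be illustrated in a few lines. Below is a minimal, hedged sketch of the first scheme (color reduction, then zlib over the combined color + depth buffer) together with the run-length stage of the second scheme's depth coder. The RGB565 quantizer, frame layout, and function names are illustrative assumptions, not the authors' exact implementation; Huffman coding of the run-length pairs is omitted for brevity.

```python
# Illustrative sketch only: the specific color-reduction method (RGB565),
# buffer layout, and names below are assumptions, not the TEEVE code.
import struct
import zlib

def reduce_color(rgb):
    """Quantize an 8-bit (r, g, b) triple to 16-bit RGB565 (assumed reduction)."""
    r, g, b = rgb
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def compress_frame(colors, depths):
    """Scheme 1: pack reduced colors and raw 16-bit depths, then deflate with zlib."""
    payload = struct.pack("<%dH" % len(colors), *[reduce_color(c) for c in colors])
    payload += struct.pack("<%dH" % len(depths), *depths)
    return zlib.compress(payload)

def rle_depth(depths):
    """Scheme 2, first stage: run-length encode a depth scanline as (value, run)
    pairs; Huffman coding of these pairs would follow, as in the paper."""
    runs = []
    for d in depths:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1
        else:
            runs.append([d, 1])
    return [(v, n) for v, n in runs]

# Toy 64-pixel frame: uniform regions compress well even without motion estimation.
colors = [(255, 0, 0)] * 64
depths = [1000] * 32 + [1200] * 32
blob = compress_frame(colors, depths)
print(len(blob), "bytes vs", 4 * 64, "raw")  # deflated size vs 256 raw bytes
print(rle_depth(depths))                     # [(1000, 32), (1200, 32)]
```

Because no inter-frame motion estimation is involved, each frame compresses independently, which is consistent with the per-frame latency (avg. ≈ 13 ms) the paper reports.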
© (2006) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zhenyu Yang, Yi Cui, Zahid Anwar, Robert Bocchino, Nadir Kiyanclar, Klara Nahrstedt, Roy H. Campbell, and William Yurcik "Real-time 3D video compression for tele-immersive environments", Proc. SPIE 6071, Multimedia Computing and Networking 2006, 607102 (16 January 2006); https://doi.org/10.1117/12.642513
CITATIONS
Cited by 31 scholarly publications and 1 patent.
KEYWORDS
3D video compression
Image compression
Video
Video compression
3D modeling
3D image processing
Visualization