A waveguide-type full-color 3D augmented reality (AR) display system based on the integral imaging technique with a holographic mirror array is proposed. In the experiment, the AR capability was successfully verified: the real-world scene and the reconstructed virtual full-color 3D image were observed simultaneously.
Multiview display is a popular method of delivering three-dimensional (3D) images by generating perspective directional views. However, it suffers from limitations such as low resolution, lack of motion parallax, and a narrow viewing angle. In this paper, we propose a method of implementing a multiview display system that provides high-resolution 3D images. The setup is composed of a stereoscopic 3D display panel and a head-tracking camera. The directional view images of a 3D object are captured by a camera array and shown on the stereoscopic 3D display, with a user interface designed to control the hardware. An Intel RealSense SR300 camera is used to track the observer's viewing angle. The images are captured rotationally by a movable camera array over a 30-degree span, yielding 71 views in the horizontal direction and 3 views in the vertical direction. The directional view information is displayed according to the observer's viewing direction as well as the head position, so the observer perceives a high-resolution 3D image with smooth motion parallax. Most importantly, the proposed system interactively displays the exact view corresponding to the user's viewing angle, which appears more natural to the observer.
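As a concrete illustration of the view-selection step described above, the following Python sketch maps a tracked head angle to the nearest captured view index. The abstract specifies only the 30-degree horizontal span and the 71x3 view grid; the 10-degree vertical span and all function and variable names are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of head-tracked view selection, assuming 71 horizontal
# views over a 30-degree span and 3 vertical views over an assumed
# 10-degree span (the vertical span is not stated in the abstract).
H_VIEWS, V_VIEWS = 71, 3
H_SPAN_DEG, V_SPAN_DEG = 30.0, 10.0  # V_SPAN_DEG is a placeholder

def select_view(yaw_deg: float, pitch_deg: float) -> tuple[int, int]:
    """Map a tracked head angle (degrees, 0 = display normal) to the
    nearest captured (horizontal, vertical) view index."""
    h = round((yaw_deg + H_SPAN_DEG / 2) / H_SPAN_DEG * (H_VIEWS - 1))
    v = round((pitch_deg + V_SPAN_DEG / 2) / V_SPAN_DEG * (V_VIEWS - 1))
    # Clamp so the outermost views are shown beyond the captured span.
    return (int(np.clip(h, 0, H_VIEWS - 1)), int(np.clip(v, 0, V_VIEWS - 1)))

# e.g. a head 5 degrees right of center, level with the display:
print(select_view(5.0, 0.0))
```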
KEYWORDS: Video, Video acceleration, 3D video streaming, 3D image processing, 3D displays, Integral imaging, RGB color model, Parallel processing, Internet, Imaging systems
We propose a novel technique for synchronizing elemental images with an audio signal, together with a transmission technique, for a real-time glasses-free 3D TV system based on integral imaging. The main idea behind the method is to generate real-time 3D video from elemental images synchronized with the audio stream. The system acquires the per-frame depth and RGB data of a video through an Intel RealSense 3D camera and the audio stream from a microphone. The audio is sampled according to each video frame and stored in separate buffers that share the frame's index. The frames are converted into elemental images using the Elemental Image Generation algorithm, and the audio data are synchronized by index. The stream of elemental images and the corresponding audio data is then transmitted to a data server. The display device fetches and decodes these data to produce video that is viewed in 3D through a multi-array of lenses.
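A minimal Python sketch of the index-based synchronization described above: each frame-length slice of the audio stream is stored with the same index as its elemental image. The frame rate, sample rate, and data structures are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

# Assumed illustrative rates: 30 fps video and 48 kHz mono PCM audio.
FPS = 30
AUDIO_RATE = 48_000
SAMPLES_PER_FRAME = AUDIO_RATE // FPS  # 1600 audio samples per video frame

@dataclass
class SyncedPacket:
    index: int              # shared index linking a frame and its audio slice
    elemental_image: bytes  # EIA for this frame
    audio_chunk: bytes      # audio samples covering this frame's duration

def synchronize(frames: list[bytes], audio: bytes,
                bytes_per_sample: int = 2) -> list[SyncedPacket]:
    """Slice the audio stream into frame-duration chunks and pair each
    chunk with the elemental image that carries the same index."""
    step = SAMPLES_PER_FRAME * bytes_per_sample
    packets = []
    for i, eia in enumerate(frames):
        chunk = audio[i * step:(i + 1) * step]
        packets.append(SyncedPacket(index=i, elemental_image=eia,
                                    audio_chunk=chunk))
    return packets
```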
In this paper, a fast and efficient multiple wavefront recording planes (WRPs) method with parallel processing is proposed to enhance the image quality and generation speed of point cloud-based holograms. The proposed method uses an optimized fixed active area to generate depth-dependent multiple WRPs, which improves the calculation speed and enhances the color uniformity of full-color holograms. To enable parallel processing, an intermediate plane for the ray tracing is created. The method is most effective when the number of depth levels is small, as in an RGB-D image.
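The following Python sketch illustrates the general multiple-WRP idea under stated assumptions: each point writes a spherical wavelet into a fixed active area on its nearest wavefront recording plane, and the planes are then propagated to the hologram plane. The wavelength, grid parameters, and the angular-spectrum propagator are illustrative choices, not the paper's implementation.

```python
import numpy as np

WL = 532e-9                # assumed wavelength (m)
K = 2 * np.pi / WL
N = 512                    # hologram resolution (N x N)
PITCH = 8e-6               # pixel pitch (m)
ACTIVE = 32                # fixed active-area half-width in pixels

def angular_spectrum(field, dist):
    """Propagate a complex field by `dist` with the angular spectrum method."""
    fx = np.fft.fftfreq(N, PITCH)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (WL * FX) ** 2 - (WL * FY) ** 2
    # Propagating components only; evanescent components are discarded.
    H = np.where(arg > 0, np.exp(1j * K * dist * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def wrp_hologram(points, wrp_depths, hologram_z=0.0):
    """points: iterable of (x_idx, y_idx, z, amplitude). Each point writes
    a spherical wavelet into a fixed active area on its nearest WRP; each
    WRP is then propagated to the hologram plane and accumulated."""
    wrps = {z: np.zeros((N, N), complex) for z in wrp_depths}
    for px, py, pz, amp in points:
        z_wrp = min(wrp_depths, key=lambda z: abs(z - pz))
        dz = z_wrp - pz
        y0, y1 = max(py - ACTIVE, 0), min(py + ACTIVE, N)
        x0, x1 = max(px - ACTIVE, 0), min(px + ACTIVE, N)
        ys, xs = np.mgrid[y0:y1, x0:x1]
        r = np.sqrt(((xs - px) * PITCH) ** 2 + ((ys - py) * PITCH) ** 2 + dz ** 2)
        r = np.maximum(r, PITCH)  # avoid the on-axis singularity
        wrps[z_wrp][y0:y1, x0:x1] += amp * np.exp(1j * K * r) / r
    holo = np.zeros((N, N), complex)
    for z, field in wrps.items():  # each WRP can be propagated in parallel
        holo += angular_spectrum(field, hologram_z - z)
    return holo
```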
KEYWORDS: Imaging systems, Integral imaging, Cameras, 3D image processing, Computing systems, 3D displays, Parallel processing, Image processing, Data acquisition, Graphics processing units
An improved and efficient system for faster computation of elemental image generation in a real-time integral imaging 3D display, assisted by graphics processing unit (GPU) parallel processing, is proposed. Previously implemented real-time integral imaging systems achieved frame rates greater than 30 fps for elemental image generation, whereas the improved and more efficient system produces elemental images at more than 65 fps. The proposed model consists of the following steps: acquiring object information in real time with a Kinect sensor, and generating elemental image sets with a pixel mapping algorithm accelerated by GPU parallel processing. To implement the system, the color (RGB) and depth data of each object point are first acquired from the depth camera (Kinect sensor). From this information, the elemental image sets are created using the pixel mapping algorithm. Finally, the pixel mapping algorithm is implemented on the GPU, which increases the overall computational speed of the real-time integral imaging display system substantially. This remarkable increase in elemental image generation speed opens up new possibilities for integral imaging technology; for example, merging the system with multi-directional projection for real-time integral imaging can markedly enhance the viewing angle.
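As a rough sketch of the pixel mapping step, the following Python code projects each RGB-D point through a pinhole-lens model into the elemental images. The lens geometry, world scaling, and all names are assumptions, not the paper's parameters. Each lens (and each point) is independent, which is what makes GPU parallelization effective; swapping numpy for a GPU array library such as cupy would express the same mapping in parallel.

```python
import numpy as np

LENS_NX = LENS_NY = 30        # assumed lens array size
EI_RES = 16                   # pixels per elemental image (square)
GAP = 3e-3                    # lens-to-display gap (m)
PITCH = 1e-3                  # lens pitch (m)
PIX = PITCH / EI_RES          # display pixel size (m)

def pixel_map(rgb, depth):
    """rgb: (H, W, 3) uint8; depth: (H, W) metres. Returns the EIA."""
    eia = np.zeros((LENS_NY * EI_RES, LENS_NX * EI_RES, 3), np.uint8)
    H, W = depth.shape
    ys, xs = np.nonzero(depth > 0)           # valid object points
    X = (xs - W / 2) * PIX                   # assumed camera-to-world scaling
    Y = (ys - H / 2) * PIX
    Z = depth[ys, xs]
    for j in range(LENS_NY):                 # each lens is independent,
        for i in range(LENS_NX):             # hence trivially parallel
            lx = (i - LENS_NX / 2 + 0.5) * PITCH
            ly = (j - LENS_NY / 2 + 0.5) * PITCH
            u = lx + (lx - X) * GAP / Z      # pinhole projection to display
            v = ly + (ly - Y) * GAP / Z
            ui = np.round((u - lx) / PIX + EI_RES / 2).astype(int)
            vi = np.round((v - ly) / PIX + EI_RES / 2).astype(int)
            ok = (ui >= 0) & (ui < EI_RES) & (vi >= 0) & (vi < EI_RES)
            eia[j * EI_RES + vi[ok], i * EI_RES + ui[ok]] = rgb[ys[ok], xs[ok]]
    return eia
```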
KEYWORDS: Video, 3D video streaming, 3D image processing, Integral imaging, 3D displays, RGB color model, Video acceleration, Internet, Imaging systems, Glasses
We propose a novel technique for synchronizing elemental images with an audio signal, together with a transmission technique, for a glasses-free 3D TV system based on integral imaging. The main idea behind the method is to generate 3D video from elemental images synchronized with the audio stream. The system acquires the per-frame depth and RGB data of a video through an Intel RealSense 3D camera and the audio stream from a microphone. The audio is sampled according to each frame's duration and stored in separate buffers that share the frame's index. The frames are converted into elemental images using the Elemental Image Generation algorithm, and the audio signal is synchronized by index. The stream of elemental images and the corresponding audio data is then transmitted to a data server for storage. The HLS streaming protocol is used to stream the TV content, and a dedicated web application fetches the data from the server and plays the video on the user's display device. With an array of micro-lenses placed in front of the display, the video is viewed as three-dimensional through integral imaging, eliminating the need for 3D glasses.
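A minimal server-side sketch of the HLS packaging step follows. The paper does not name a segmenting tool, so the use of ffmpeg here, along with all file names, is an assumption for illustration.

```python
import subprocess

# Minimal sketch of server-side HLS packaging, assuming the synchronized
# elemental-image video (with its audio track already muxed in) sits in
# "eia_stream.mp4"; the tool and file names are assumptions, not details
# from the paper.
def package_hls(src: str = "eia_stream.mp4",
                playlist: str = "index.m3u8") -> None:
    """Segment the muxed stream into HLS chunks plus an m3u8 playlist
    that the web application can fetch and play on the display device."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-codec", "copy",          # no re-encode; keep frames bit-exact
        "-f", "hls",
        "-hls_time", "4",          # roughly 4-second segments
        "-hls_list_size", "0",     # keep every segment in the playlist
        playlist,
    ], check=True)
```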
In this paper, we propose a method for enhancing the resolution of the reconstructed image in a mobile three-dimensional (3D) integral imaging display system. A mobile 3D integral imaging display system is a valuable way to acquire the 3D information of real objects and display realistic 3D visualizations of them on a mobile display. Here, the 3D color and depth information are acquired by a 3D scanner, and the elemental image array (EIA) is generated virtually from the acquired 3D information. However, the resolution of the EIA is quite low owing to the low resolution of the acquired depth information, and this limits the resolution of the final reconstructed image. Because the resolution of the reconstructed image depends on the number of elemental images, the EIA resolution should be improved by increasing that number. For comfortable observation, the interpolation process would need to be iterated at least two or three times; however, when it is iterated more than twice, the reconstructed image is damaged and its quality degrades considerably. To improve the resolution of the reconstructed images while maintaining image quality, we apply a convolutional super-resolution algorithm instead of the interpolation process. Finally, 3D visualizations with higher resolution and fine quality are displayed on the mobile display.
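To make the substitution concrete, the following PyTorch sketch applies a small SRCNN-style network after a single bicubic upscale in place of iterated interpolation. The architecture, the 2x scale, and the residual formulation are assumptions rather than the paper's network, and the model would need to be trained on EIA data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EIASuperRes(nn.Module):
    """Assumed SRCNN-style refinement of a bicubically upscaled EIA."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),   # feature extraction
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),  # nonlinear mapping
            nn.Conv2d(32, 3, 5, padding=2),              # reconstruction
        )

    def forward(self, eia: torch.Tensor) -> torch.Tensor:
        # One bicubic upscale plus a learned correction, instead of
        # iterating plain interpolation two or three times.
        up = F.interpolate(eia, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        return up + self.net(up)   # residual refinement of the upscale

# usage: EIASuperRes()(torch.rand(1, 3, 128, 128))  # -> (1, 3, 256, 256)
```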
KEYWORDS: 3D image reconstruction, 3D image processing, Imaging systems, Image enhancement, Integral imaging, Parallel processing, 3D image enhancement, 3D acquisition, Image processing, Reconstruction algorithms, 3D displays, Cameras
A novel method for viewing angle enhancement of a real-time integral imaging system using multi-directional projections and GPU parallel processing is proposed. The proposed system is composed of three processes: acquiring information about real objects, generating multi-directional elemental image sets, and reconstructing 3D images through a multi-directional projection scheme. To implement this system, the depth and color (RGB) information of each object point is captured by a depth camera; a dynamic algorithm and GPU parallel processing are then used to generate the multi-directional elemental image sets to be illuminated in different directions while maintaining real-time processing speed; and finally, 3D images are reconstructed through a time-multiplexed multi-directional projection scheme with an appropriate optical setup of a projection-type integral imaging system. Multi-directional illumination of the elemental image sets enhances the optical ray divergence of the reconstructed 3D images according to the directional projection angles. Hence, a real-time integral imaging system with an enhanced viewing angle is achieved.
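A minimal sketch of the directional elemental image generation, reusing the pixel_map helper and PIX constant sketched earlier. The projection angles, the reference depth plane, and the lateral-shear approximation are illustrative stand-ins for the paper's dynamic algorithm, not its actual formulation.

```python
import numpy as np

PROJECTION_DEG = (-15.0, 0.0, 15.0)  # assumed directional projection angles
Z_REF = 0.5                          # assumed central depth plane (m)

def directional_eia_sets(rgb, depth):
    """Return [(angle_deg, eia), ...], one elemental image set per
    projection direction; the directions are independent, so a GPU can
    compute them in parallel while the projector shows them
    time-multiplexed."""
    sets = []
    H, W = depth.shape
    for deg in PROJECTION_DEG:
        # Shear each point laterally by (z - Z_REF) * tan(theta) so the
        # set reconstructs correctly when projected from this direction.
        shift = ((depth - Z_REF) * np.tan(np.radians(deg)) / PIX)
        shift = shift.round().astype(int)
        rows, cols = np.nonzero(depth > 0)
        new_cols = np.clip(cols + shift[rows, cols], 0, W - 1)
        rgb_d = np.zeros_like(rgb)
        depth_d = np.zeros_like(depth)
        rgb_d[rows, new_cols] = rgb[rows, cols]
        depth_d[rows, new_cols] = depth[rows, cols]
        sets.append((deg, pixel_map(rgb_d, depth_d)))  # see earlier sketch
    return sets
```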