We propose an integral three-dimensional (3D) display system with a wide viewing zone and depth range that uses a time-division display and eye-tracking technology. In the proposed system, the optical viewing zone (OVZ) is narrowed to a size that covers only one eye, increasing the light-ray density by means of a lens array with a long focal length. In addition, a system with low crosstalk with respect to the viewer’s movement is constructed by forming a combined OVZ (COVZ) that covers both eyes through the time-division display. Further, an eye-tracking directional backlight is used to dynamically control the COVZ and realize a wide system viewing zone (SVZ). Luminance unevenness is reduced by partially overlapping the two OVZs. The combination of OVZs forms a COVZ with an angle approximately 1.6 times larger than that of a single OVZ, and an SVZ of 81.4 deg horizontally and 47.6 deg vertically was achieved using the eye-tracking technology. A comparison of three display systems (the conventional system, our previously developed system, and the proposed system) confirmed that the depth range of 3D images in the proposed system is wider than that of the other systems.
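As a rough illustration of the reported ~1.6× figure, a simple additive model (our assumption; the overlap fraction ρ is our notation, not a quantity from the abstract) relates the COVZ angle to the OVZ angle:

```latex
% Two OVZs of equal angular width overlapping by a fraction \rho (our model)
\theta_{\mathrm{COVZ}} = (2 - \rho)\,\theta_{\mathrm{OVZ}},
\qquad
\theta_{\mathrm{COVZ}} \approx 1.6\,\theta_{\mathrm{OVZ}} \;\Rightarrow\; \rho \approx 0.4
```

Under this reading, the ~1.6× combined angle is consistent with the two single-eye OVZs overlapping by roughly 40% of their width, which is the overlap that reduces the luminance unevenness at the seam.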
KEYWORDS: 3D displays, 3D image processing, Projection systems, Image resolution, Displays, Diffusion, Prototyping, Calibration, 3D vision, 3D image enhancement
Light field display technologies are popular glasses-free three-dimensional (3D) display methods whereby natural 3D images can be viewed by precisely reproducing the light rays emitted from objects. However, sufficient display performance cannot be obtained with conventional display techniques, because reproducing a great number of high-density light rays is required for high-quality 3D images. Therefore, we develop a novel light field display method named Aktina Vision, which consists of a special 3D screen with isotropic narrow diffusion characteristics and a display optical system for projecting high-density light rays. In this method, multi-view images with horizontal and vertical parallaxes are projected onto the 3D screen at various angles in a superposed manner. The 3D screen has a narrow diffusion angle and top-hat diffusion characteristics that optimally widen the light rays according to the discrete intervals between them. 3D images with high resolution and depth reproducibility can be displayed by suppressing crosstalk between light rays and reproducing them with a continuous luminance distribution. We prototype a display system using 14 specially designed 4K projectors and develop a light field calibration technique. The reproduction of 3D images with a resolution of approximately 330,000 pixels, which is three times higher than that of conventional display methods using a lens array, and viewing angles of 35.1° horizontally and 4.7° vertically is realized by projecting 350 multi-view images in a superposed manner.
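The matching between the screen diffusion and the ray intervals can be summarized by a simple condition (our shorthand, not a formula from the abstract): the full width φ of the top-hat diffusion profile should roughly equal the angular interval Δθ between adjacent projected rays.

```latex
% Matching condition (our shorthand): top-hat diffusion width vs. angular ray pitch
\varphi \approx \Delta\theta
% \varphi \ll \Delta\theta -> gaps in the reproduced luminance distribution
% \varphi \gg \Delta\theta -> overlap between adjacent rays, i.e. crosstalk
```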
Light field displays can provide naturally viewable three-dimensional (3D) images without the need for special glasses. However, improving the resolution of 3D images is difficult because a large amount of image information is required. Therefore, we propose two new light field display methods that use multiple ultra-high-definition projectors to reproduce high-resolution spatial images. The first method is based on integral imaging. Multiple elemental images are superimposed onto a lens array using multiple projectors placed at optimal positions. An integral 3D image with enhanced resolution and viewing angle can be reproduced by projecting each elemental image as collimated light rays at different predetermined angles. We prototyped a display system with six projector units and realized a resolution of approximately 100,000 pixels and a viewing angle of approximately 30°. The second method, which aims at further resolution enhancement, is based on multi-view projection. By constructing a new display optical system that reproduces a full-parallax light field and by developing a special 3D screen with isotropic narrow diffusion characteristics of non-Gaussian shape, we could reconstruct optical 3D images that were difficult to achieve with conventional methods. We prototyped a display system comprising two projector units and realized a higher resolution of approximately 330,000 pixels compared with our previous full-parallax light field display systems.
We studied an integral three-dimensional (3D) TV based on integral photography to develop a new form of broadcasting that provides a strong sense of presence. The integral 3D TV can display natural 3D images that have motion parallax in the horizontal and vertical directions. However, a large number of pixels are required to obtain superior 3D images. To improve image quality, we applied ultra-high-definition video technologies to an integral 3D TV system. Furthermore, we are developing several methods for combining multiple cameras and display devices to improve the quality of integral 3D images.
We propose a method for arranging multiple projectors in parallel using an image-processing technique to enlarge the viewing zone of an integral three-dimensional image display. We have developed a method to precisely correct the projection distortion using an image-processing technique that combines projective and affine transformations. To combine the multiple viewing zones formed by each projector continuously and smoothly, we also devised a technique that provides accurate adjustment by generating the elemental images of a computer graphics model at high speed. We constructed a prototype device using four projectors equivalent to 4K resolution and realized a viewing zone with measured viewing angles of 49.2 deg horizontally and 45.2 deg vertically. Compared with the use of only one projector, the prototype device expanded the viewing angles by approximately two times in both the horizontal and vertical directions.
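A hedged sketch of the kind of correction described here (the function names and the two-stage homography-plus-affine split are our own illustration, not the authors' implementation): a projective transform estimated from measured reference points pre-distorts each projector's frame, and an affine transform handles small residual misalignment between projectors.

```python
# Minimal sketch, assuming correspondences between points measured on the lens array
# and their ideal positions are already available (e.g. from a calibration pattern).
import cv2
import numpy as np

def correct_projection(frame, measured_pts, target_pts):
    """Warp `frame` so that `measured_pts` (>= 4 detected points, in pixels) map onto
    `target_pts` (their ideal positions on the lens array)."""
    measured = np.asarray(measured_pts, dtype=np.float32)
    target = np.asarray(target_pts, dtype=np.float32)
    h, w = frame.shape[:2]

    # Projective part: least-squares homography from the measured correspondences.
    H, _ = cv2.findHomography(measured, target, method=0)
    warped = cv2.warpPerspective(frame, H, (w, h))

    # Affine part: align small residual errors using three of the correspondences.
    mapped = cv2.perspectiveTransform(measured.reshape(-1, 1, 2), H).reshape(-1, 2)
    A = cv2.getAffineTransform(mapped[:3], target[:3])
    return cv2.warpAffine(warped, A, (w, h))
```

In practice the corrected frame would then be fed to the projector so that the elemental images land on the lens array without geometric distortion.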
KEYWORDS: Wavefronts, Printing, 3D image reconstruction, Holograms, Holography, Diffraction, Spatial light modulators, 3D acquisition, 3D image processing, 3D printing
A hologram recording technique, generally called a “wavefront printer”, has been proposed by several research groups for static three-dimensional (3D) image printing. Because the pixel count of current spatial light modulators (SLMs) is not sufficient to reconstruct the entire wavefront in the recording process, the hologram data is typically divided into a set of sub-hologram data, and each wavefront is recorded sequentially as a small sub-hologram cell in a tiling manner using an X-Y motorized stage. However, because previous wavefront printers did not optimize the cell size, the reconstructed images were degraded either by obtrusive split lines, when the cell size was large enough to be visible to the human eye, or by diffraction effects arising from discontinuities in the phase distribution, when the cell size was too small. In this paper, we introduce an overlapping recording approach for sub-holograms that satisfies both conditions: an apparent cell size small enough to make the cells invisible, and a recording cell size large enough to suppress diffraction effects by preserving the phase continuity of the reconstructed wavefront. By taking the observation conditions into account and optimizing the amount of overlap and the cell size, the proposed approach demonstrated higher-quality 3D image reconstruction in experiments, whereas the conventional approach suffered from visible split lines and cells.
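A rough numerical sketch of the trade-off that overlapping resolves (the viewing distance, acuity limit, cell size, and overlap ratio below are our assumed example values, not the paper's parameters): the stage step between neighbouring cells sets the apparent cell pitch seen by the viewer, while the recorded cell itself can stay larger to preserve phase continuity.

```python
import math

EYE_RESOLUTION_RAD = math.radians(1.0 / 60.0)   # ~1 arcmin visual acuity (assumption)

def max_invisible_pitch(viewing_distance_m):
    """Largest apparent cell pitch that stays below the eye's angular resolution."""
    return viewing_distance_m * math.tan(EYE_RESOLUTION_RAD)

def stage_step(cell_size_m, overlap_ratio):
    """X-Y stage step when adjacent cells overlap by `overlap_ratio` (0..1)."""
    return cell_size_m * (1.0 - overlap_ratio)

if __name__ == "__main__":
    # Example: at 0.5 m viewing distance the apparent pitch must stay below ~0.145 mm;
    # a 0.5 mm recording cell meets this once the overlap reaches 75%.
    print(f"visibility limit: {max_invisible_pitch(0.5) * 1e3:.3f} mm")
    print(f"step of 0.5 mm cell, 75% overlap: {stage_step(0.5e-3, 0.75) * 1e3:.3f} mm")
```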
We propose a holographic TV system based on multiview image and depth-map coding, together with an analysis of the effects of coding noise on the reconstructed images. A major problem for holographic TV systems is the huge amount of data that must be transmitted. It has been shown that this problem can be solved by capturing a three-dimensional scene with multiview cameras, deriving depth maps from the multiview images or capturing them directly, encoding and transmitting the multiview images and depth maps, and generating holograms at the receiver side. This method shows the same subjective image quality as hologram data transmission at about 1/97,000 of the data rate. Speckle noise, which masks coding noise unless the coded bit rate is extremely low, is shown to be the main determinant of reconstructed holographic image quality.
We have recently developed an electronic holography reconstruction system by seamlessly tiling nine 4K×2K liquid crystal on silicon (LCOS) panels. Magnifying optical systems eliminate the gaps between LCOS panels by forming enlarged LCOS images on the system’s output lenses. A reduction optical system then reduces the tiled LCOS images to the original size, restoring the original viewing-zone angle. Because this system illuminates each LCOS panel through polarizing beam splitters (PBSs) from different distances, expanding the viewing-zone angle was difficult, as it requires illuminating each LCOS panel from different angles. In this paper, we investigated viewing-zone-angle expansion of this system by integrating point light sources into the magnifying optical system. Three optical fibers illuminate an LCOS panel from different angles in time-sequential order, reconstructing three contiguous viewing zones. Full-color image reconstruction was realized by switching the laser source among the R, G, and B colors. We propose a fan-shaped optical fiber arrangement to compensate for the offset of the illumination beam center from the LCOS panel center. We also propose a solution to high-order diffraction light interference by inserting electronic shutter windows into the reduction optical system.
Electronic holography technology is expected to be used for realizing an ideal 3DTV system in the future, providing perfect 3D images. Since the amount of fringe data is huge, however, it is difficult to broadcast or transmit it directly. To resolve this problem, we investigated a method of generating holograms from depth images. Since computer-generated holography (CGH) generates huge fringe patterns from a small amount of data for the coordinates and colors of 3D objects, it solves half of this problem, mainly for computer-generated objects (artificial objects). For the other half of the problem (how to obtain 3D models for a natural scene), we propose a method of generating holograms from multi-view images and associated depth maps. Multi-view images are taken by multiple cameras. The depth maps are estimated from the multi-view images by introducing an adaptive matching error selection algorithm in the stereo-matching process. The multi-view images and depth maps are compressed by a 2D image coding method that converts them into the Global View and Depth (GVD) format. The fringe patterns are generated from the decoded data and displayed on 8K4K liquid crystal on silicon (LCOS) display panels. The reconstructed holographic image quality is compared using uncompressed and compressed images.
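For orientation, a minimal point-source CGH sketch (our own simplification; the actual system generates fringes from GVD-decoded multi-view data and renders them on 8K4K LCOS panels) shows how a fringe pattern can be computed from a single view and its depth map.

```python
import numpy as np

def cgh_from_depth(view, depth, pitch=4e-6, wavelength=532e-9, ref_angle_deg=1.0):
    """view: (H, W) intensity image; depth: (H, W) distance of each pixel from the
    hologram plane in metres. Returns a real-valued fringe pattern of the same size.
    Brute-force point-source summation: illustrative only, far too slow for 8K panels."""
    H, W = view.shape
    k = 2.0 * np.pi / wavelength
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64) * pitch    # hologram-plane coordinates
    field = np.zeros((H, W), dtype=np.complex128)
    amp = np.sqrt(view.astype(np.float64))                    # intensity -> amplitude
    for j in range(H):                                        # accumulate spherical waves
        for i in range(W):
            if amp[j, i] == 0.0:
                continue
            r = np.sqrt((xs - i * pitch) ** 2 + (ys - j * pitch) ** 2 + depth[j, i] ** 2)
            field += amp[j, i] / r * np.exp(1j * k * r)
    reference = np.exp(1j * k * np.sin(np.radians(ref_angle_deg)) * xs)  # tilted plane wave
    return np.real(field * np.conj(reference))                # bipolar intensity fringe
```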
Integral 3D television based on integral imaging requires huge amounts of information. Earlier, we built an integral 3D television using Super Hi-Vision (SHV) technology, with 7680 pixels horizontally and 4320 pixels vertically. Here we report on an improvement of image quality achieved by developing a new video system with an equivalent of 8000 scan lines and using it for integral 3D television. We conducted experiments to evaluate the resolution of 3D images using this prototype equipment and were able to show that, by using the pixel-offset method, we eliminated the aliasing that was produced by the full-resolution SHV video equipment. As a result, we confirmed that the new prototype is able to generate 3D images with a depth range approximately twice that of integral 3D television using the full-resolution SHV.
An integral 3DTV system needs high-density elemental images to increase the reconstructed 3D image's resolution, viewing zone, and depth representability. The dual-green pixel-offset method, which uses two green channels of images, is a means of achieving ultra-high-resolution imagery. We propose a precise and easy method for detecting the pixel-offset distance when a lens array is mounted in front of the integral imaging display. In this method, pattern luminance distributions based on sinusoidal waves are displayed on each green-channel panel. The difference between the phases (amount of phase variation) of these patterns is conserved when the patterns are sampled and transformed to a lower frequency by aliasing with the lens array. This allows the pixel-offset distance of the display panels to be measured in a magnified state. The contrast and the amount of phase variation of the pattern vary in opposite ways with respect to the pattern frequency. We therefore derived a way to find the optimal spatial frequency of the pattern by regarding the product of the contrast and the amount of phase variation of the patterns as an indicator of accuracy. We also evaluated the pixel-offset detection method in an experiment with the developed display system. The results demonstrate that the resolution characteristics of the projected image were improved. We believe that this method can be used to improve the resolution characteristics in the depth direction of integral imaging.
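A small numerical sketch of the principle (the pattern frequency, lens pitch, and offset below are arbitrary assumed values) illustrates how the phase difference between the two green channels, and hence the sub-pixel offset, survives the coarse sampling by the lens array.

```python
import numpy as np

def pattern(n_pixels, freq_cpp, phase):
    """Sinusoidal luminance pattern; freq_cpp is in cycles per display pixel."""
    x = np.arange(n_pixels)
    return 0.5 + 0.5 * np.cos(2 * np.pi * freq_cpp * x + phase)

def sampled_phase(signal, sample_pitch, freq_cpp):
    """Phase of the aliased pattern after taking one sample per elemental lens
    (every `sample_pitch` display pixels), via a least-squares sinusoid fit."""
    samples = signal[::sample_pitch]
    n = np.arange(samples.size)
    f_alias = (freq_cpp * sample_pitch) % 1.0          # cycles per sample after aliasing
    c = np.cos(2 * np.pi * f_alias * n)
    s = np.sin(2 * np.pi * f_alias * n)
    design = np.column_stack([c, s, np.ones_like(c)])  # include a DC term
    a, b = np.linalg.lstsq(design, samples, rcond=None)[0][:2]
    return np.arctan2(-b, a)

if __name__ == "__main__":
    freq = 0.23                     # cycles per display pixel (assumed test frequency)
    offset_px = 0.5                 # true sub-pixel offset between the two green panels
    g1 = pattern(4000, freq, 0.0)
    g2 = pattern(4000, freq, 2 * np.pi * freq * offset_px)
    dphi = sampled_phase(g2, 16, freq) - sampled_phase(g1, 16, freq)
    print("recovered offset [px]:", dphi / (2 * np.pi * freq))   # ~0.5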
We present a method for generating stereoscopic images from moving pictures captured by a single high-definition television camera mounted on the Japanese lunar orbiter Kaguya (Selenological and Engineering Explorer, SELENE). Since objects in the moving pictures appear to move vertically, vertical disparity is caused by the time offset within the sequence. This vertical disparity is converted into horizontal disparity by rotating the images by 90 degrees. We can create stereoscopic images using the rotated images as the left- and right-eye images. However, this causes spatial distortion resulting from the axially asymmetric positions of the corresponding left and right cameras. We reduced this distortion by adding a depth map obtained by assuming that the lunar surface is spherical. We confirmed that we could provide more acceptable views of the Moon by using this correction method.
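A bare-bones sketch of the frame-pairing step (the frame interval, rotation direction, and left/right assignment are our assumptions; in practice they depend on the orbiter's direction of motion) might look like this:

```python
import numpy as np

def stereo_pair(frames, t, dt=5):
    """frames: sequence of (H, W[, 3]) arrays from the single orbiting camera;
    t: index of the reference frame; dt: frame offset controlling the stereo
    baseline (a larger dt gives larger disparity)."""
    # Two frames separated in time differ mainly by a vertical shift, so rotating
    # both by 90 degrees turns that vertical disparity into horizontal disparity.
    left = np.rot90(frames[t])
    right = np.rot90(frames[t + dt])
    # The depth-map-based correction for the spherical lunar surface described
    # above is a separate step, not shown here.
    return left, right
```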
KEYWORDS: Video processing, Projection systems, Imaging arrays, Signal processing, Image quality, Video, 3D displays, 3D image reconstruction, 3D image processing, Distortion
An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensates for the distortion in the elemental images in an EHR projection-type integral 3-D system. As a result, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.
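The depth dependence of this distortion can be seen from simple pinhole-lens geometry (our illustration, not the paper's full analysis): with a gap g between the elemental-image plane and the lens array, an elemental-image positional error δ tilts the reconstruction ray by δ/g, displacing a point reconstructed at distance z from the array by

```latex
\Delta \approx \frac{z}{g}\,\delta
```

so image points reconstructed far from the lens array (large |z|) are affected most, consistent with the observation above.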
The integral method enables observers to see 3D images like real objects. It requires extremely high resolution at both the capture and display stages. We present an experimental 3D television system based on the integral method using an extremely high-resolution video system. The video system has 4,000 scanning lines and uses the diagonal offset method for the two green channels. The number of elemental lenses in the lens array is 140 (vertical) × 182 (horizontal). The viewing zone angle is wider than 20 degrees in practice. This television system can capture 3D objects and provides full-color and full-parallax 3D images in real time.