KEYWORDS: Cameras, Imaging systems, Time of flight cameras, Error analysis, Video, Optical filters, RGB color model, Optical engineering, 3D video compression, Digital filtering
When a three-dimensional (3-D) video system includes a multiview video generation technique that uses depth data to provide a more realistic 3-D viewing experience, accurate depth map acquisition is an important task. To generate a precise depth map in real time, we can build a camera fusion system with multiple color cameras and one time-of-flight (TOF) camera; however, this method suffers from depth errors such as depth flickering, empty holes in the warped depth map, and mixed pixels around object boundaries. In this paper, we propose three methods to reduce these depth errors. To reduce depth flickering in the temporal domain, we propose a temporal enhancement method using a modified joint bilateral filter at the TOF camera side. We then fill the empty holes in the warped depth map by selecting a virtual depth and applying a weighted depth filtering method. After hole filling, we remove mixed pixels and replace them with new depth values using an adaptive joint multilateral filter. Experimental results show that the proposed method reduces depth errors significantly in near real time.
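The abstract does not give the filter's exact form, but its core building block, a joint (cross) bilateral filter that smooths depth while respecting edges in a color guide image, can be sketched as below. This is a plain single-frame textbook version, not the paper's modified temporal variant; all names and parameter values are illustrative assumptions:

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Smooth `depth` using range weights computed from `guide` (e.g. a
    gray-scale color frame), so depth edges follow color edges.
    Generic sketch only; not the paper's modified temporal variant."""
    h, w = depth.shape
    pad = radius
    d = np.pad(depth, pad, mode='edge').astype(np.float64)
    g = np.pad(guide, pad, mode='edge').astype(np.float64)
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    for y in range(h):
        for x in range(w):
            dp = d[y:y + 2 * pad + 1, x:x + 2 * pad + 1]   # depth patch
            gp = g[y:y + 2 * pad + 1, x:x + 2 * pad + 1]   # guide patch
            # range weights come from the guide image, not from depth itself
            rng = np.exp(-(gp - g[y + pad, x + pad])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * dp).sum() / wgt.sum()
    return out
```

In the temporal variant the same weighting idea would additionally span neighboring frames, which is what suppresses flicker; the abstract does not specify those details.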
We developed a head-mounted display (HMD)-type multifocus display system that uses a laser-scanning method to provide an accommodation effect for viewers. This result indicates that a monocular depth cue can be provided through the multifocus system. In the system, the optical path is changed by a scanning action. To provide an accurate accommodation effect for the viewer, the multifocus display system is designed and manufactured in accordance with a geometric analysis of the system's scanning action. Using a video camera as a substitute for the viewer, we demonstrate correct focus adjustment free of artifacts from the scanning action. By analyzing the scanning action and the experimental results, we illustrate how a viewpoint is formed in an HMD-type multifocus display system using a laser-scanning method. In addition, we demonstrate that the accommodation effect can be provided independently of the viewer's viewing condition.
Multi-focus 3D display systems are developed, and their ability to satisfy eye accommodation is tested. Here, multi-focus refers to providing a monocular depth cue at various depth levels. By achieving this multi-focus function, we developed 3D display systems for one eye and for both eyes that can satisfy accommodation to displayed virtual objects within a defined depth range. The monocular accommodation and the binocular-convergence 3D effect of the systems are tested; as results, we present evidence that accommodation is satisfied, together with experimental results on binocular 3D fusion obtained with the proposed 3D display systems.
A two-dimensional light-emitting diode (LED) array is used to replace the viewing-zone-forming optics in multiview full-parallax three-dimensional image display systems. Since the array works not merely as the viewing-zone-forming optics but also as the backlight panel for the liquid-crystal display (LCD) panel, it allows three-dimensional imaging systems to be constructed with the same structure as current LCD displays. The designed system displays images with a good sense of depth.
This paper presents a mobile phone based service for a 3D virtual space. First, it introduces a 3D responsive virtual space comprising a 3D indoor virtual environment and models of the real and virtual sensors in the indoor space. In the responsive 3D virtual space, the status of the 3D virtual environment changes dynamically according to the sensor status. Second, an interactive service on a mobile phone is introduced for browsing the 3D responsive virtual space. The main feature of this service is that interactive 3D view-image browsing is provided on a popular mobile phone without a 3D graphics engine. Finally, the system implementation of our service and its experiments are described.
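The abstract implies the phone displays server-prepared view images rather than rendering 3D itself. A minimal sketch of one plausible server-side scheme, where views are pre-rendered per discretized viewpoint and the phone simply requests the matching image; the data layout, grid, and function names are all hypothetical, not from the paper:

```python
# Pre-rendered view images keyed by discretized viewpoint:
# (grid_x, grid_y, heading_deg) -> JPEG bytes
PRERENDERED = {}

def register_view(grid_x, grid_y, heading_deg, jpeg_bytes):
    """Store a server-rendered view of the 3D virtual space."""
    PRERENDERED[(grid_x, grid_y, heading_deg % 360)] = jpeg_bytes

def view_for(grid_x, grid_y, heading_deg, step=45):
    """Snap the phone's requested heading to the nearest rendered
    direction and return that image (or None if not rendered)."""
    snapped = (round(heading_deg / step) * step) % 360
    return PRERENDERED.get((grid_x, grid_y, snapped))
```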
KEYWORDS: Haptic technology, Physics, Virtual reality, Internet, Local area networks, Computer simulations, 3D modeling, Visualization, OpenGL, Imaging systems
This research studies Virtual Reality simulation for collaborative interaction, so that people in different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where the object's behavior is determined by the combination of the multiple inputs. The issues addressed in this research are: 1) the effects of using haptics on collaborative interaction, and 2) the possibilities of collaboration between users in different environments. We conducted user tests on our system in several cases: 1) comparison between non-haptic and haptic collaborative interaction over a LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments. The case studies are two forms of user interaction: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe physics laws while constructing a dollhouse from existing building blocks under gravity. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.
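The abstract does not specify how concurrent inputs are merged. A minimal sketch, assuming each user contributes a force vector that is summed with gravity and integrated with a semi-implicit Euler step each simulation tick; the combination rule and all names here are assumptions, not the paper's method:

```python
import numpy as np

def combine_and_integrate(position, velocity, user_forces, mass=1.0,
                          gravity=np.array([0.0, -9.8, 0.0]), dt=1 / 60):
    """One tick: the shared object's behavior is driven by the sum of
    all users' force inputs plus gravity (hypothetical scheme)."""
    total_force = gravity * mass + np.sum(user_forces, axis=0)
    velocity = velocity + (total_force / mass) * dt   # update velocity first
    position = position + velocity * dt               # then position
    return position, velocity

# Two users push the same block in different directions at once.
pos, vel = np.zeros(3), np.zeros(3)
forces = [np.array([2.0, 0.0, 0.0]), np.array([0.0, 12.0, 1.0])]
pos, vel = combine_and_integrate(pos, vel, forces)
```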
Several studies on gaze tracking using a monocular or stereo camera have been reported. The most widely used gaze estimation techniques are based on PCCR (pupil center corneal reflection). These techniques target gaze tracking on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in a 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system.
Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. For gaze-based interaction in 3D virtual space, however, both gaze direction and gaze depth must be estimated.
In this paper, we address gaze-based 3D interaction techniques with a glasses-free stereo display. The estimation of gaze direction and gaze depth from both eyes is an important new research topic for gaze-based 3D interaction. We present our approach for estimating gaze direction and gaze depth and show experimental results.
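The abstract does not give the estimator itself. One standard way to recover gaze depth from both eyes is to triangulate the two gaze rays, taking the midpoint of the shortest segment between them. A minimal sketch, assuming per-eye ray origins and unit directions are already available from a tracker; the function and variable names are hypothetical:

```python
import numpy as np

def gaze_point(o_l, d_l, o_r, d_r):
    """Estimate the 3D gaze point as the midpoint of the shortest
    segment between the left and right gaze rays (origins o_*, unit
    directions d_*). Standard two-ray triangulation sketch."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b            # ~0 when the rays are parallel
    if abs(denom) < 1e-9:
        return None                  # no convergence: gaze at infinity
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    p_l = o_l + t_l * d_l            # closest point on the left ray
    p_r = o_r + t_r * d_r            # closest point on the right ray
    return 0.5 * (p_l + p_r)         # estimated 3D gaze point
```

Gaze depth then follows as the distance from the eye baseline to the returned point.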
An HMD-type multi-focus 3D display system is developed, and experiments on the satisfaction of eye accommodation are performed. Four light-emitting diodes (LEDs) and a digital micromirror device (DMD) are used to generate four parallax images for a single eye, and the system contains no mechanical moving parts. Here, multi-focus refers to providing a monocular depth cue at various depth levels. By achieving this multi-focus function, we developed a 3D display system for one eye that can satisfy accommodation to displayed virtual objects within defined depths. The proposed system therefore has the potential to solve the problem that 3D displays relying only on binocular disparity induce eye fatigue because of the mismatch between the accommodation of each eye and the convergence of the two eyes. The accommodation of one eye is tested, and evidence that accommodation is satisfied is obtained with the proposed 3D display system. We found that sequential focus adjustment is possible at four depth steps within a 2 m depth range for a single eye.
Virtual Reality simulation enables an immersive 3D experience of a Virtual Environment. A simulation-based Virtual Environment can be used to map real-world phenomena onto virtual experience. With a reconfigurable simulation, users can reconfigure the parameters of the involved objects and see the different effects of the different configurations. This concept is suitable for classroom learning of physics laws. This research studies the Virtual Reality simulation of Newtonian physics on rigid-body objects. With network support, collaborative interaction is enabled so that people in different places can interact with the same set of objects in an immersive Collaborative Virtual Environment. The taxonomy of interaction at different levels of collaboration is described as: distinct objects, and same object; the latter subdivides into same object, sequentially; same object, concurrently, same attribute; and same object, concurrently, distinct attributes. The case studies are user interaction in two scenarios: destroying and creating a set of arranged rigid bodies. In Virtual Domino, users can observe physics laws while applying force to domino blocks in order to destroy the arrangements. In Virtual Dollhouse, users can observe physics laws while constructing a dollhouse from existing building blocks under gravity.
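To make the taxonomy concrete, here is a small sketch that encodes the four collaboration levels and classifies a set of simultaneous user events; the enum names and the (object, attribute) event representation are hypothetical, not from the paper:

```python
from enum import Enum

class CollaborationLevel(Enum):
    """Hypothetical encoding of the paper's interaction taxonomy."""
    DISTINCT_OBJECTS = "users manipulate different objects"
    SAME_OBJECT_SEQUENTIAL = "one object, one user at a time"
    SAME_OBJECT_SAME_ATTRIBUTE = "one object, same attribute, concurrently"
    SAME_OBJECT_DISTINCT_ATTRIBUTES = "one object, different attributes, concurrently"

def classify(events):
    """Classify simultaneous user events given as (object_id, attribute)."""
    objects = {obj for obj, _ in events}
    if len(objects) > 1:
        return CollaborationLevel.DISTINCT_OBJECTS
    if len(events) == 1:
        return CollaborationLevel.SAME_OBJECT_SEQUENTIAL
    attributes = {attr for _, attr in events}
    if len(attributes) == 1:
        return CollaborationLevel.SAME_OBJECT_SAME_ATTRIBUTE
    return CollaborationLevel.SAME_OBJECT_DISTINCT_ATTRIBUTES

# Two users moving the same block at once -> same object, same attribute.
print(classify([("block7", "position"), ("block7", "position")]))
```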
KEYWORDS: 3D modeling, Cameras, 3D acquisition, Laser scanners, Data modeling, Volume rendering, Image processing, 3D image processing, Calibration, Distortion
This paper presents our simple and easy-to-use method for obtaining a 3D textured model. To express reality, we need to integrate 3D models with real scenes. Most other 3D modeling methods use two data acquisition devices: one to acquire the 3D model and another to obtain realistic textures; typically, the former is a 2D laser range-finder and the latter a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by a laser range-finder, texture acquisition and stitching, and texture mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering textures. Our geometric 3D model consists of planes that model the floor and walls. The geometry of the planes is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with wide field-of-view angles. Image stitching and image cutting are used to generate textured images corresponding to the 3D model.
The algorithm is applied to two cases: a corridor and a four-walled space such as a room. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
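The "planes from the 2D metric map" step can be illustrated with a minimal sketch: fit a wall line to a group of 2D laser points (total least squares via SVD) and extrude it vertically into a 3D quad. The per-wall point grouping, the wall height, and all names are assumptions not given in the abstract:

```python
import numpy as np

def fit_wall_plane(points_2d, wall_height=2.5):
    """Fit a wall line to 2D laser points and extrude it vertically
    into a 3D quad (floor at z=0, top at wall_height). Simplified
    sketch of the plane-extraction step."""
    pts = np.asarray(points_2d, dtype=np.float64)
    centroid = pts.mean(axis=0)
    # principal direction of the point cloud = wall direction
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    # project points onto the line to find the wall's extent
    t = (pts - centroid) @ direction
    p0 = centroid + t.min() * direction   # one end of the wall (x, y)
    p1 = centroid + t.max() * direction   # other end of the wall
    quad = np.array([[*p0, 0.0], [*p1, 0.0],
                     [*p1, wall_height], [*p0, wall_height]])
    return quad  # 4 corners, e.g. for a VRML IndexedFaceSet
```

Each extruded quad would then receive the stitched texture image corresponding to that wall.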
Recently we built the world's largest Virtual Reality (VR) theatre for the Kyongju World Culture EXPO 2000. Unlike single-user VR systems, the VR theatre is characterized by a single shared screen and is controlled by several hundred people in the audience. The large computer-generated stereo images displayed on the huge cylindrical screen provide a feeling of immersion in the 3D virtual environment, which is augmented by the physical theatre space. In addition to visual immersion, the theatre provides 3D audio, vibration, and olfactory displays, as well as a keypad at every seat so that each member of the audience can interactively control the virtual environment. This paper introduces the issues raised and addressed during the design of the VR theatre, the production process, and presentation techniques using the versatile display and interaction capabilities of the theatre.