In the field of augmented reality, it is essential to solve the geometric registration problem between the real and virtual worlds. To solve this problem, many kinds of image-based online camera parameter estimation methods have been proposed. As one of these methods, we have proposed a feature-landmark-based camera parameter estimation method, in which extrinsic camera parameters are estimated from corresponding landmarks and image features. Although the method can work in large and complex environments, our previous method cannot work in real time due to the high computational cost of the matching process. Additionally, initial camera parameters for the first frame must be given manually. In this study, we realize real-time, manual-initialization-free camera parameter estimation based on a feature landmark database. To reduce the computational cost of the matching process, the number of matching candidates is reduced by using priorities of landmarks that are determined from previously captured video sequences. The initial camera parameter for the first frame is determined by a voting scheme over the target space using the matching candidates. To demonstrate the effectiveness of the proposed method, applications of landmark-based real-time camera parameter estimation are demonstrated in outdoor environments.
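The two cost-reduction ideas above can be illustrated with a rough sketch (all names, numbers, and the exact voting formulation are hypothetical, not taken from the paper): matching candidates are limited to the highest-priority landmarks, and the selected candidates then vote over a coarse grid of the target space to obtain an initial camera position for the first frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical landmark database: a 3-D position, a priority score
# (e.g. how often the landmark was successfully matched in previously
# captured video sequences), and a feature descriptor.
NUM_LANDMARKS = 1000
landmarks = {
    "position": rng.uniform(-50, 50, size=(NUM_LANDMARKS, 3)),
    "priority": rng.uniform(0, 1, size=NUM_LANDMARKS),
    "descriptor": rng.normal(size=(NUM_LANDMARKS, 32)),
}

def select_candidates(landmarks, budget=100):
    """Keep only the `budget` highest-priority landmarks as matching
    candidates, reducing the cost of the descriptor-matching step."""
    order = np.argsort(landmarks["priority"])[::-1]
    return order[:budget]

def vote_initial_position(candidate_positions, cell=10.0):
    """Coarse voting over the target space: each matched candidate
    votes for the grid cell its position falls in, and the densest
    cell gives a rough initial camera position for the first frame."""
    cells = np.floor(candidate_positions / cell).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    best = uniq[np.argmax(counts)]
    return (best + 0.5) * cell  # cell centre as the initial estimate

idx = select_candidates(landmarks, budget=100)
init = vote_initial_position(landmarks["position"][idx])
```

In this toy version the candidates vote with their own positions; in practice each vote would be for a camera position consistent with the match, but the budget-then-vote structure is the same.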
Image inpainting techniques have been widely used to remove undesired visual objects in images, such as damaged portions of photographs and people who have accidentally entered the picture. Conventionally, the missing parts of an image are completed by optimizing an objective function defined based on the sum of squared differences (SSD). However, the naive SSD-based objective function is not robust against intensity changes in an image, so unnatural intensity changes often appear in the completed regions. In addition, when an image has continuously changing texture patterns, the completed texture in the resultant image sometimes blurs due to inappropriate pattern matching. In this paper, in order to improve the quality of the completed texture, the conventional objective function is extended by considering intensity changes and spatial locality, preventing unnatural intensity changes and blurs in the resultant image. By minimizing the extended energy function, the missing regions can be completed without unnatural intensity changes and blurs. In experiments, the effectiveness of the proposed method is demonstrated by applying it to various images and comparing the results with those obtained by the conventional method.
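The intensity-robustness problem can be seen in a minimal sketch (illustrative only; the paper's actual energy also includes a spatial-locality term and is minimized globally over the missing region): plain SSD heavily penalizes a source patch that matches the target texture but differs in brightness, whereas an intensity-adjusted SSD first estimates the best brightness scale in closed form.

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Plain SSD: sensitive to global brightness differences."""
    d = patch_a - patch_b
    return float(np.sum(d * d))

def ssd_intensity_adjusted(target, source):
    """SSD after scaling the source patch by the brightness factor
    alpha minimizing sum((target - alpha*source)^2); alpha has the
    closed form sum(target*source) / sum(source^2)."""
    alpha = np.sum(target * source) / max(np.sum(source * source), 1e-12)
    d = target - alpha * source
    return float(np.sum(d * d))

# A source patch that matches the target texture exactly, up to a
# global brightness change (half the intensity):
target = np.array([[10.0, 20.0], [30.0, 40.0]])
source = 0.5 * target
```

Here `ssd(target, source)` is large even though the patterns match, while `ssd_intensity_adjusted(target, source)` is essentially zero, so the adjusted measure selects the texturally correct patch.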
In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system provides omni-directional video viewers on web browsers and allows the user to look around the omni-directional video contents there. The omni-directional video viewer is implemented as an ActiveX program so that it is installed automatically when the user simply opens the web site containing the omni-directional video contents. The system allows many users at different sites to look around the scene, just like an interactive TV, using a multicast protocol without increasing network traffic. This paper describes the implemented system and experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capturing, and users can look around high-resolution, high-quality video contents. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video streams of the surroundings while the car runs in an outdoor environment. The acquired video streams are transferred to the remote site through wireless and wired networks using the multicast protocol, and users can watch the live video contents freely in an arbitrary direction. In both experiments, we have implemented view-dependent presentation with a head-mounted display (HMD) and a gyro sensor to realize a richer sense of presence.
Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real-environment images is expected to be used in fields such as entertainment, medicine, and education. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
Digitization of documents and photographs from paper has become important for digital archiving and for personal data transmission over the Internet. Although many people wish to digitize paper documents easily, heavy and large image scanners are currently required to obtain high-quality digitization. To realize easy and high-quality digitization of documents and photographs, we propose a novel digitization method that uses a movie captured by a hand-held camera. In our method, first, the 6-DOF (degree-of-freedom) position and posture parameters of the mobile camera are estimated in each frame by automatically tracking image features. Next, re-appearing feature points in the image sequence are detected and matched to minimize accumulated estimation errors. Finally, all the images are merged into a high-resolution mosaic image using the optimized parameters.
Experiments have successfully demonstrated the feasibility of the proposed method. Our prototype system can acquire initial estimates of the extrinsic camera parameters in real time while capturing images.
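The error-minimization step can be illustrated with a toy sketch (pure 2-D translations and hypothetical values; the paper optimizes full 6-DOF camera parameters): chaining pairwise frame-to-frame transforms accumulates small errors, and the residual revealed when a re-appearing frame closes the loop is distributed back over the chain.

```python
import numpy as np

def translation(tx, ty):
    """3x3 homography for a pure 2-D translation."""
    H = np.eye(3)
    H[0, 2], H[1, 2] = tx, ty
    return H

def chain(pairwise):
    """Accumulate pairwise frame-to-frame transforms into
    mosaic-space transforms; small per-pair errors add up."""
    acc = [np.eye(3)]
    for H in pairwise:
        T = acc[-1] @ H
        acc.append(T / T[2, 2])  # keep the projective scale normalized
    return acc

def distribute_drift(transforms):
    """Toy loop closure for translational drift: the chain should
    return to the identity, so the closing error is spread evenly
    over the frames (a stand-in for global optimization)."""
    n = len(transforms) - 1
    drift = transforms[-1][:2, 2]
    out = []
    for i, T in enumerate(transforms):
        C = T.copy()
        C[:2, 2] -= drift * (i / n)
        out.append(C)
    return out

# A square camera path with slightly noisy estimates: the final
# accumulated transform should be the identity, but is not.
noisy_loop = [translation(5.1, 0.0), translation(0.0, 4.9),
              translation(-5.05, 0.0), translation(0.0, -4.95)]
transforms = chain(noisy_loop)
corrected = distribute_drift(transforms)
```

After the correction the loop closes exactly, so the merged mosaic no longer shows a seam where the camera revisits its starting position.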
Telepresence systems using an omnidirectional image sensor enable us to experience a remote site. An omnidirectional multi-camera system is more suitable for acquiring outdoor scenes than a monocular camera system, because the multi-camera system can easily capture high-resolution omnidirectional images. However, exact calibration of the camera system is necessary to virtualize the real world accurately. In this paper, we describe geometric and photometric camera calibration and a panorama movie generation method for the omnidirectional multi-camera system. In the geometric calibration, the intrinsic and extrinsic parameters of each camera are estimated using a calibration board and a laser measurement system called a total station. In the photometric calibration, the limb darkening and the color balance among the cameras are corrected. The results of the calibration are used in the panorama movie generation. In experiments, we have calibrated the multi-camera system and generated spherical panorama movies using the estimated camera parameters. A telepresence system was prototyped in order to confirm that the panorama movies are suitable for telepresence. In addition, we have evaluated the discontinuity in the generated panoramic images.
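The limb-darkening side of the photometric calibration can be sketched under a hypothetical radial falloff model (the model form and falloff constant are illustrative, not the paper's): brightness drops with distance from the image centre, and dividing the observed image by the modelled gain flattens a uniformly lit scene.

```python
import numpy as np

def vignetting_gain(h, w, k=0.3):
    """Hypothetical radial limb-darkening model: brightness falls off
    with distance r from the image centre as 1 / (1 + k*r^2), with r
    normalized so that the image corner lies at r = 1."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((ys - cy) ** 2 + (xs - cx) ** 2) / (cy ** 2 + cx ** 2)
    return 1.0 / (1.0 + k * r2)

def correct_limb_darkening(image, k=0.3):
    """Divide the observed image by the modelled gain so that a
    uniformly lit scene comes out flat across the frame."""
    return image / vignetting_gain(*image.shape, k=k)

# A flat grey scene observed through the simulated falloff:
flat = np.full((120, 160), 128.0)
observed = flat * vignetting_gain(120, 160)
restored = correct_limb_darkening(observed)
```

In an actual calibration the gain would be fitted per camera from images of a uniform target rather than assumed; once the gains agree, per-camera colour balance can be equalized the same way before blending the panorama.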