KEYWORDS: 3D modeling, Data modeling, Laser scanners, 3D image processing, Data acquisition, Cameras, Systems modeling, 3D acquisition, Visualization, Global Positioning System
The digital documentation of monuments and architecture is an important field of application of 3D modeling, where both visual quality and precise 3D measurement are important. This paper proposes an integrated approach based upon the combination of different 3D modeling techniques for the virtual reconstruction of complex architectures such as those found in medieval castles. The need to combine multiple techniques, such as terrestrial laser scanning, photogrammetry and digital surveying, stems from the complexity of some structures and from the lack of a single technique capable of giving satisfactory results in all measuring conditions. This paper addresses modeling issues related to the automation of photogrammetric methods and to the fusion of 3D models acquired with different techniques, at different point densities and measurement accuracies. The test bench is a medieval castle located in Trentino-Alto Adige, a small region in Northern Italy.
KEYWORDS: 3D modeling, Cameras, Data modeling, Laser scanners, RGB color model, Calibration, 3D displays, 3D image processing, Imaging systems, Stereoscopy
The National Research Council of Canada (NRC) has developed a range of 3D imaging technology tools, which have been applied to a wide range of museum and heritage recording applications. The technology suite includes the development of high-resolution laser scanner systems as well as software for the preparation of accurate 3D models and for the display, analysis and comparison of 3D data. This paper will offer an overview of the technology and its museum and heritage applications with particular reference to the 3D examination of paintings and recording of archaeological sites.
KEYWORDS: 3D modeling, Cultural heritage, 3D image processing, 3D metrology, Solid modeling, Multimedia, Volume rendering, Range imaging, Cameras, Computer aided design
This paper presents a summary of the 3D modeling work that was accomplished in preparing multimedia products for cultural heritage interpretation and entertainment. The three cases presented are the Byzantine Crypt of Santa Cristina, Apulia, temple C of Selinunte, Sicily, and a bronze sculpture from the 6th century BC found in Ugento, Apulia. The core of the approach is based upon high-resolution photo-realistic texture mapping onto 3D models generated from range images. It is shown that three-dimensional modeling from range imaging is an effective way to present the spatial information for environments and artifacts. Spatial sampling and range measurement uncertainty considerations are addressed by giving the results of a number of tests on different range cameras. The integration of both photogrammetric and CAD modeling complements this approach. Results on a CDROM, a DVD, virtual 3D theatre, holograms, video animations and web pages have been prepared for these projects.
KEYWORDS: 3D modeling, 3D image processing, Volume rendering, Data modeling, Cameras, Optical spheres, RGB color model, Calibration, 3D acquisition, 3D scanning
This paper presents the work that was accomplished in preparing a multimedia CDROM about the history of a Byzantine Crypt. An effective approach based upon high-resolution photo-realistic texture mapping onto 3D models generated from range images is used to present the spatial information about the Crypt. Usually, this information is presented on 2D images that are flat and don’t show the three-dimensionality of an environment. In recent years, high-resolution recording of heritage sites has stimulated a lot of research in fields like photogrammetry, computer vision, and computer graphics. The methodology we present should appeal to people interested in 3D for heritage. It is applied to the virtualization of a Byzantine Crypt where geometrically correct texture mapping is essential to render the environment realistically, to produce virtual visits and to apply virtual restoration techniques. A CDROM and a video animation have been created to show the results.
The Virtual Boutique consists of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are then added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, comprises all the merchandise and the manipulators used to handle and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, runs in both mono and stereo mode, and has been optimized to allow high-quality rendering.
In this paper, we evaluate the accuracy and resolution of a 3D laser scanner prototype that tracks in real time and computes the relative pose of objects in 3D space. This prototype was specifically developed to study the use of such a sensor for space applications. The main objective of this project is to provide a robust sensor to assist in the assembly of the International Space Station, where high tolerance to ambient illumination is paramount. The laser scanner uses triangulation-based range data and photogrammetric methods to calculate the relative pose of objects. Range information is used to increase the accuracy of the sensing system and to remove erroneous measurements. Two high-speed galvanometers and a collimated laser beam address individual targets mounted on an object at a resolution corresponding to an equivalent imager of 10000 by 10000 pixels. Knowing the position coordinates of predefined targets on the objects, their relative poses can be computed using either the scanner's calibrated 3D coordinates or spatial resection methods.
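The abstract above does not give the scanner's range equation, but the principle of triangulation-based ranging can be illustrated with the textbook angle-angle geometry (a simplification of any real galvanometer-based design; the baseline and angle names here are purely illustrative):

```python
import math

def triangulate_range(baseline, alpha, beta):
    """Perpendicular distance of a laser spot from the baseline, given the
    angles alpha and beta (radians, measured from the baseline) at which the
    projection and collection rays leave the two ends of the baseline.
    Intersecting the two rays gives z * (cot(alpha) + cot(beta)) = baseline."""
    return baseline / (1.0 / math.tan(alpha) + 1.0 / math.tan(beta))

# With a 1 m baseline and both rays at 45 degrees, the spot is 0.5 m away.
print(triangulate_range(1.0, math.radians(45), math.radians(45)))  # ~0.5
```

Note how the range sensitivity degrades as the rays become nearly parallel (small angles at long range), which is one reason such systems combine triangulation with other cues at distance.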
KEYWORDS: 3D modeling, 3D image processing, Data modeling, Cameras, Sensors, Solid modeling, Computer aided design, Image registration, Calibration, 3D metrology
Creating geometrically correct and complete 3D models of complex environments remains a difficult problem. Techniques for 3D digitizing and modeling have been rapidly advancing over the past few years, although most focus on single objects or specific applications such as architecture and city mapping. The ability to capture details and the degree of automation vary widely from one approach to another. One can safely say that there is no single approach that works for all types of environment and at the same time is fully automated and satisfies the requirements of every application. In this paper we show that for complex environments, those composed of several objects with various characteristics, it is essential to combine data from different sensors and information from different sources. Our approach combines models created from multiple images, single images, and range sensors. It can also use known shapes, CAD, existing maps, survey data, and GPS. 3D points in the image-based models are generated by photogrammetric bundle adjustment, with or without self-calibration depending on the image and point configuration. Both automatic and interactive procedures are used depending on the availability of reliable automated processes. Producing high-quality and accurate models, rather than full automation, is the goal. Case studies in diverse environments are used to demonstrate that all the aforementioned features are needed for environments with a significant amount of complexity.
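At the core of the bundle adjustment mentioned above is iterative minimization of reprojection error. The following is a minimal sketch under simplifying assumptions (a single 3D point refined against fixed, known cameras with one shared focal length; a full bundle adjustment solves for points, camera poses and, optionally, self-calibration parameters jointly, but the Gauss-Newton machinery is the same):

```python
import numpy as np

def project(X, R, t, f):
    """Pinhole projection of 3D point X seen by a camera (R, t) with focal length f."""
    Xc = R @ X + t
    return f * Xc[:2] / Xc[2]

def refine_point(X0, cams, obs, f, iters=20):
    """Gauss-Newton refinement of one 3D point from its 2D reprojections."""
    X = X0.astype(float)
    for _ in range(iters):
        r, J = [], []
        for (R, t), uv in zip(cams, obs):
            x, y, z = R @ X + t
            r.append(np.array([f * x / z, f * y / z]) - uv)
            # Jacobian of the projection w.r.t. X (chain rule through R).
            J.append(np.array([[f / z, 0.0, -f * x / z**2],
                               [0.0, f / z, -f * y / z**2]]) @ R)
        r, J = np.concatenate(r), np.vstack(J)
        X -= np.linalg.solve(J.T @ J, J.T @ r)  # normal equations step
    return X

# Synthetic check: two cameras 0.5 m apart observing one target.
f = 1000.0
cams = [(np.eye(3), np.zeros(3)),
        (np.eye(3), np.array([-0.5, 0.0, 0.0]))]
X_true = np.array([0.1, -0.2, 4.0])
obs = [project(X_true, R, t, f) for R, t in cams]
X = refine_point(np.array([0.0, 0.0, 1.0]), cams, obs, f)
```

The same normal-equation structure extends to many points and cameras; production systems exploit its sparsity rather than forming the dense system as done here.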
KEYWORDS: Laser scanners, Laser systems engineering, 3D scanning, Sun, Sensors, Cameras, 3D acquisition, Signal to noise ratio, Signal detection, Scanners
An analysis of a 3-D laser tracking scanner system, focusing on immunity to ambient sunlight and on geometrical resolution and accuracy, is presented in the context of a space application. The main goal of this development is to provide a robust sensor to assist in the assembly of the Space Station. This 3-D laser scanner system can be used in imagery or in tracking modes, using either time-of-flight (TOF) or triangulation methods for range acquisition. It uses two high-speed galvanometers and a collimated laser beam to address individual targets on an object. In the tracking mode of operation, we compare the pose estimation and accuracy of the laser scanner using the different methods: triangulation, TOF (resolved targets), and photogrammetry (spatial resection), and show the advantages of combining these different modes of operation to increase the overall performance of the laser system.
KEYWORDS: 3D modeling, Sensors, Data modeling, Image registration, Virtual reality, Cameras, 3D image processing, Calibration, Data acquisition, 3D acquisition
Selecting the appropriate 3-D capture and modeling technologies to create the contents of a virtual environment (VE) remains a challenging task. One of the objectives of the environment-modeling project in our laboratory is to develop a design strategy for selecting the optimum sensor technology and its configuration to meet the requirements of virtual environment applications. This will be based on experimental evaluation of all performance aspects of several systems for creating virtual environments. The main problems include: practical sensor calibration, the registration of images particularly in large sites in the absence of GPS and control points, the complete and accurate coverage of all details, and maintaining a realistic look at real-time rates. This paper focuses on the evaluation of the performance of some approaches to virtual environment creation using specifically designed laboratory experimentation. By understanding the potentials and limitations of each method, we are able to select the optimum one for a given application.
KEYWORDS: 3D modeling, Virtual reality, Data modeling, 3D metrology, Environmental sensing, Mining, Optical imaging, Image sensors, Sensors, 3D image processing
The key to navigating in a 3D environment, or to designing autonomous vehicles that can successfully maneuver and manipulate objects in their environment, is the ability to create, maintain, and use effectively a 3D digital model that accurately represents its physical counterpart. Virtual exploration of real places and environments, whether for leisure, engineering design, training and simulation, or tasks in remote or hazardous environments, is more effective and useful if geometrical relationships and dimensions in the virtual model are accurate. A system which can rapidly, reliably, remotely and accurately perform measurements in 3D space for the mapping of indoor environments is needed for many applications. In this paper we present a mobile mapping system that is designed to generate a geometrically precise 3D model of an unknown indoor environment. The same general design concept can be used for environments ranging from simple office hallways to long winding underground mine tunnels. Surfaces and features can be accurately mapped from images acquired by a unique configuration of different types of optical imaging sensors and a dead-reckoning positioning device. This configuration guarantees that all the information required to create the 3D model of the environment is included in the collected data. Sufficient overlaps between 2D intensity images, in combination with information from 3D range images, ensure that the complete environment can be accurately reconstructed when all the data is simultaneously processed. The system, the data collection and processing procedure, test results, and the modeling and display at our virtual environment facility are described.
Automated digital photogrammetric systems are considered to be passive three-dimensional vision systems since they obtain object coordinates from only the information contained in intensity images. Active 3-D vision systems, such as laser scanners and structured light systems, obtain the object coordinates from external information such as scanning angle, time of flight, or shape of projected patterns. Passive systems provide high accuracy on well-defined features, such as targets and edges; however, unmarked surfaces are hard to measure. These systems may also be difficult to automate in unstructured environments since they are highly affected by the ambient light. Active systems provide their own illumination and the features to be measured, so they can easily measure surfaces in most environments. However, they have difficulties with varying surface finish or sharp discontinuities such as edges. Therefore each type of sensor is more suited to a specific type of objects and features, and they are often complementary. This paper compares the measurement accuracy, on various types of features, of some technologically different 3-D vision systems: photogrammetry-based (passive) systems, a laser scanning system (active), and a range sensor using a mask with two apertures and structured light (active).
Due to the nature of many applications, it is difficult with present technology to use a single type of sensor to automatically, accurately, reliably, and completely measure or map, in 3D, objects, sites, or scenes. Therefore, a combination of various sensor technologies is usually the obvious solution. There are several 3D technologies, two of which, digital photogrammetry and triangulation laser scanning, are dealt with in this paper. The final accuracy achieved by the combination of various sensors is a function of many sensor parameters and of the quality of the image coordinate measurements of each sensor. The effect of those parameters must be determined to maximize the overall quality within the constraints imposed by the requirements of the application. The parameters affecting the accuracy of measurement, the test laboratory, and test results using intensity and range data are presented. The configuration design of intensity and range sensors is discussed based on the results presented here and in two previous papers.
We have developed and clinically tested a computer vision system capable of real-time monitoring of the position of an oncology (cancer) patient undergoing radiation therapy. The system is able to report variations in patient setup from day to day, as well as patient motion during an individual treatment. The system consists of two CCD cameras mounted in the treatment room and focused on the treatment unit isocenter. The cameras are interfaced to a PC via a two-channel video board. Special targets placed on the patient surface are automatically recognized and extracted by our 3D vision software. The three coordinates of each target are determined using a triangulation algorithm. System accuracy, stability and reproducibility were tested in the laboratory as well as in the radiation therapy room. Besides accuracy, the system must ensure the highest reliability and safety in the actual application environment. In this paper we also report on the results of clinical testing performed on a total of 23 patients having various treatment sites and techniques. The system in its present configuration is capable of measuring multiple targets placed on the patient surface during radiation therapy. In the clinical environment the system has an accuracy and repeatability of better than 0.5 mm in Cartesian space over extended periods (> 1 month). The system can measure and report patient position in less than 5 seconds. Clinically we have found that the system can easily and accurately detect patient motion during treatment as well as variations in patient setup from day to day. A brief description of the system and a detailed analysis of its performance in the laboratory and in the clinic are presented.
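The abstract does not specify which triangulation algorithm recovers each target's three coordinates from the two camera views; a common choice for two calibrated cameras is linear (DLT) triangulation, sketched below with hypothetical projection matrices standing in for the calibrated treatment-room cameras:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one target seen by two calibrated cameras.
    P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel coordinates.
    Each observation contributes two homogeneous equations A @ X = 0."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

# Synthetic check: two cameras 0.5 m apart, both looking down +Z.
K = np.diag([1200.0, 1200.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0, 1.0])
uv = lambda P: (P @ X_true)[:2] / (P @ X_true)[2]
print(triangulate(P1, P2, uv(P1), uv(P2)))
```

With noisy pixel measurements the SVD solution minimizes an algebraic rather than geometric error; sub-millimetre systems typically follow it with a nonlinear refinement.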
KEYWORDS: Cameras, Sensors, CCD cameras, Calibration, 3D modeling, Data integration, Image registration, Laser scanners, Data modeling, 3D image processing
Conventional vision techniques based on intensity data, such as the data produced by CCD cameras, cannot produce complete 3D measurements for object surfaces. Range sensors, such as laser scanners, do provide complete range data for visible surfaces; however, they may produce erroneous results on surface discontinuities such as edges. In most applications, measurements on all surfaces and edges are required to completely describe the geometric properties of the object, which means that intensity data alone or range data alone will not provide sufficiently complete or accurate information for these applications. The technique described in this paper uses a range sensor that simultaneously acquires perfectly registered range and intensity images. It can also integrate the range data with intensity data produced by a separate sensor. The range image is used to determine the shape of the object (surfaces) while the intensity image is used to extract edges and surface features such as targets. The two types of data are then integrated to utilize the best characteristics of each. Specifically, the objective of the integration is to provide highly accurate dimensional measurements on the edges and features. The sensor, its geometric model, the calibration procedure, the combined data approach, and some results of measurements on straight and circular edges (holes) are presented in the paper.
This paper presents a calibration procedure adapted to a range camera intended for space applications. The range camera, which is based upon an auto-synchronized triangulation scheme, can measure objects from about 0.5 m to 100 m. The field of view is 30° × 30°. Objects situated at distances beyond 10 m can be measured with the help of cooperative targets. Such a large volume of measurement presents significant challenges to a precise calibration. A two-step methodology is proposed. In the first step, the close-range volume (from 0.5 m to 1.5 m) is calibrated using an array of targets positioned at known locations in the field of view of the range camera. A large number of targets are evenly spaced in that field of view because this is the region of highest precision. In the second step, several targets are positioned at distances greater than 1.5 m with the help of an accurate theodolite and electronic distance measuring device. This second step will not be discussed fully here. The internal and external parameters of a model of the range camera are extracted with an iterative nonlinear simultaneous least-squares adjustment method. Experimental results obtained for a close-range calibration suggest a precision along the x, y and z axes of 200 µm, 200 µm, and 250 µm, respectively, and a bias of less than 100 µm in all directions.
The vision-based coordinate measurement system, developed at the National Research Council Canada, is a multicamera passive system that combines the principles of stereo vision, photogrammetry, knowledge-based techniques, and an object-oriented design methodology to provide precise coordinate and dimension measurements of parts for applications such as dimensional inspection, positioning and tracking of objects, and reverse engineering. For a vision system to be considered for such applications, its performance and design parameters must be well understood. A description of the system, the techniques employed for calibration, a performance evaluation procedure, an accuracy analysis, and test results are presented.
Applications such as dimensional inspection, positioning and tracking of objects, and reverse engineering require highly accurate measurement systems. A vision-based coordinate measurement system (VCM) developed at the National Research Council of Canada, targeted for these applications, will be described. In order for a vision system to be considered for such applications, its performance and design parameters must be well understood. Therefore, a performance evaluation procedure and accuracy analysis, along with test results, will also be presented.
The vision-based coordinate measurement (VCM) automated measurement system has been under development at the National Research Council Canada for several years. The system, which is a multicamera passive system, combines the principles of stereo vision, photogrammetry, knowledge-based techniques, and object-oriented design to provide precise coordinate and dimension measurements of parts for applications such as those found in the aerospace and automobile industries. The system may also be used for tracking or positioning of parts and digitization of targeted objects. Description of the system, the techniques employed for calibration, CAD-based feature extraction and measurement, and performance evaluation are presented.
A performance evaluation approach for a vision metrology system has been developed for the application of automated dimensional inspection. The approach is designed so that the parameters and conditions under which the system functions as planned, or fails, are clear and well understood. The statistical stability of the measured object properties as well as the overall system error are analyzed.
This paper presents a general vision-based coordinate measurement system, its planned use within a sheet metal inspection application, the expected benefits accrued from the use of an object-oriented design strategy, and the utilization of the methodology as it applies to the redesign of the vision system. Adherence to an object-oriented design strategy will obviate a number of shortcomings present in the original system implementation. System improvements include: a more lucid and intuitive user interface; greater accuracy and reliability through the use of decentralized modules; simplification of the description of the interrelationship between the perceived and actual dimensions of parts being inspected; and, easier porting of the general vision system to specific applications.
New applications continue to emerge for machine vision and real-time photogrammetry, particularly in industrial inspection and biomedicine. Solutions to automation and practical problems are the key to the success of this relatively new technology. Calibration, illumination, edge definition and integration with a priori known information are addressed.
With the new reorganization of the working groups within Commission V, there is significant interest in digital photogrammetry by all working groups within the Commission. The number of papers devoted to this subject at this Zurich Symposium is clear evidence that computer vision techniques will be a major force in the automation of close-range photogrammetry in many traditional as well as new application areas. Within the new structure of the working groups, WG V/1 will have responsibility in areas related to the development and implementation of fully automated and real-time systems. More specifically, the terms of reference are as follows: 1. real-time vision systems for metric measurement; 2. near-real-time, but fully automated, vision systems with relaxed time constraints; 3. system hardware and software integration; and 4. demonstration of real-time and near-real-time systems in actual application environments.