LiDAR technology is increasingly used as an area-based 3D measurement method. Beyond high demands on accuracy, speed, and resolution, manufacturers of LiDAR cameras compete to reduce size, weight, and power consumption. As one of the most compact high-resolution systems, the Intel RealSense L515 was subjected to extensive testing according to VDI/VDE Guideline 2634. In addition, tests were conducted on glossy or partially transparent surfaces (acrylic glass, carbon-fiber material, and aluminum) as well as on human skin; the latter demonstrates the applicability to human-machine interaction. Measurements were performed both under laboratory conditions and under the influence of natural light. The Intel RealSense D415 stereo camera system serves as a reference for the comparison of results.
This paper evaluates the use of a 3D line scanner for inline, in-time detection of weld imperfections in automated MAG welding of fillet welds. The required weld quality corresponds to quality level B of the international standard ISO 5817 [1]. The system used is a Wenglor 3D line scanner with a wavelength of 660 nm and an image recording frequency of at least 200 Hz. As a stand-alone system, this scanner is able to detect geometry and surface imperfections of the weld. Algorithms were therefore developed for the reliable recognition of deviations in size, superelevations, undercuts, seam transition angles, and the symmetry of the weld with a measurement accuracy of 10 µm. With the information gathered by the line scanner and the corresponding image-processing algorithms, the efficiency of automated MAG welding can be improved: the welding process can be interrupted and recalibrated if significant imperfections are detected. This happens with a short time delay that depends on the distance between the welding torch and the camera and on the welding speed. The direct feedback can prevent the weldment from becoming unusable and saves resources, since less effort is required to renew or revise the weld. Additionally, the required visual testing (ISO 17635 [2], ISO 17637 [3]) can be objectified and accelerated by highlighting the parts of the weld most probably affected by imperfections. In the future, this automated system could possibly replace manual quality inspection.
Depth sensors for three-dimensional object acquisition are widely used and available in many different sizes and weight classes. The measuring method and the achievable accuracy depend on the task at hand. The integration of depth sensors into mobile devices such as tablets and smartphones is largely new. Using the TrueDepth system of the iPhone X, we show which measurement accuracies can be achieved with such systems and which areas of application open up beyond consumer entertainment. The investigations show that the TrueDepth system of the iPhone X can be used for measuring tasks with accuracies in the millimeter range.
In order to enhance the efficiency of quality inspection of Direct Copper Bonded (DCB) structures, an optical inspection using a 3D measuring system is conceivable. However, applying 3D optical measurement techniques to diffusely reflective copper surfaces is a challenging task. This work deals with the optical detection of defects on the copper surface using multi- and hyperspectral acquisition devices. Over a broad spectral range from the visible spectrum to the short-wave infrared (400 nm - 1700 nm), it is analysed which wavelengths provide good contrast ratios for the detection of flaws. For the inspection of the sample back side, the push-broom imager, operating in the VIS and NIR range (400 nm - 1000 nm), provides the best contrast ratio; outstanding contrast is reached around 400 nm. Deposited particles on the front side of the DCB substrates are best detected by the filter-wheel camera, which is sensitive in the visible and near-infrared range; here, outstanding contrast is reached around 640 nm. An evaluation of the standard deviations of the gray values shows that defects differ clearly from flawless substrate areas under illumination with light of wavelengths 577 nm, 640 nm, and 950 nm. Furthermore, the comparison of selected pixel spectra confirms that significant differences appear at the same three wavelengths. For an automated inspection of defects, it is therefore advisable to shift the pattern projection for the 3D correspondence analysis to the spectral ranges mentioned.
Usually, a large number of patterns is needed in computational ghost imaging (CGI). In this work, the possibility of reducing the number of patterns by integrating compressive sensing (CS) algorithms into the CGI process is systematically investigated. Based on different combinations of sampling patterns and image priors for the L1-norm regularization, several CS-based CGI approaches are proposed and implemented with the iterative shrinkage-thresholding algorithm. These CS-CGI approaches are evaluated on various test scenes. Based on the quality of the reconstructed images and the robustness to measurement noise, the approaches are compared for different sampling ratios, noise levels, and image sizes.
This paper presents an approach for single-shot 3D shape reconstruction using a multi-wavelength array projector and a stereo-vision setup of two multispectral snapshot cameras. Thus, a sequence of six to eight aperiodic fringe patterns can be simultaneously projected at different wavelengths by the developed array projector and captured by the multispectral snapshot cameras. For the calculation of 3D point clouds, a computational procedure for pattern extraction from single multispectral images, denoising of multispectral image data, and stereo matching is developed. In addition, a proof-of-concept is provided with experimental measurement results, showing the validity and potential of the proposed approach.
We present an approach for single-frame three-dimensional (3-D) imaging using multiwavelength array projection and a stereo vision setup of two multispectral snapshot cameras. Thus a sequence of aperiodic fringe patterns at different wavelengths can be projected and detected simultaneously. For the 3-D reconstruction, a computational procedure for pattern extraction from multispectral images, denoising of multispectral image data, and stereo matching is developed. In addition, a proof-of-concept is provided with experimental measurement results, showing the validity and potential of the proposed approach.
Inline three-dimensional measurements are a growing part of optical inspection. Considering increasing production capacities and economic aspects, dynamic measurements under motion are unavoidable. When a sequence of different patterns is used, as is generally done in fringe projection systems, relative movements of the measurement object with respect to the 3D sensor between the images of one pattern sequence have to be compensated.
Based on the application of fully automated optical inspection of circuit boards on an assembly line, knowledge of the relative speed of movement between the measurement object and the 3D sensor system is to be used inside the motion compensation algorithms. Optimally, this relative speed is constant over the whole measurement process and consists of only one motion direction in order to avoid sensor vibrations. The quantitative evaluation of these two assumptions and the impact of their violation on the 3D accuracy are the subject of the research project described in this paper.
For our experiments we use a glass etalon with non-transparent circles and transmitted light. Focusing on the circle borders, this is one of the most reliable methods for determining subpixel positions using a set of search rays. The intersection point of all rays characterizes the center of each circle. Based on these circle centers, determined with a precision of approximately 1/50 pixel, the motion vector between two images can be calculated and compared with the input motion vector. Overall, the results are used to optimize the weight distribution of the 3D sensor head and to reduce non-uniform vibrations. The result is a dynamic 3D measurement system with a motion-vector error of about 4 µm. Based on this outcome, simulations yield a 3D standard deviation of 6 µm in planar object regions. Without the optimization of the weight distribution, the same system yields a 3D standard deviation of 9 µm.
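The intersection point of many search rays can be computed as a small linear least-squares problem: each ray contributes a projector onto its normal space, and the normal equations are summed. The sketch below uses synthetic circle-border samples rather than the etalon data, so all coordinates are illustrative.

```python
import numpy as np

def ray_intersection_ls(points, dirs):
    """Least-squares intersection point of 2D rays.

    points : (K, 2) one point on each ray (e.g. a subpixel border sample)
    dirs   : (K, 2) unit direction of each ray
    Returns the point minimizing the summed squared distance to all rays.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, dirs):
        P = np.eye(2) - np.outer(d, d)   # projector onto the ray's normal
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# synthetic circle: border samples with rays pointing toward the center
center = np.array([3.2, -1.7])
angles = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
pts = center + 5.0 * np.c_[np.cos(angles), np.sin(angles)]   # border points
dirs = (center - pts) / 5.0                                  # unit ray directions
print(ray_intersection_ls(pts, dirs))
```

With noisy subpixel border samples the same solve averages the noise over all rays, which is what makes the circle-center estimate far more precise than any single edge position.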
The requirement of a non-transparent, Lambertian-like surface in optical 3D measurements with fringe pattern projection cannot be satisfied for translucent objects. The translucency causes artifacts and systematic errors in the pattern decoding, which can lead to measurement errors and a decrease in measurement stability. In this work, the influence of the light wavelength on 3D measurements was investigated using a stereoscopic system consisting of two filter-wheel cameras with narrowband bandpass filters and a projector with a wide-band light source. The experimental results show a significant wavelength dependency of the systematic measurement deviation and the measurement stability.
Fringe projection is an established method for the contactless measurement of 3D object structure. However, the coding of fringe projection is ambiguous. To determine object points with an absolute position in 3D space, this coding has to be made unique.
We propose a novel approach to phase unwrapping that does not require additional pattern projection. Based on a stereo camera setup, each view is segmented into areas without height jumps larger than one fringe period. Within these segments, phase unwrapping can be performed without error. The alignment of the phase maps between the two views is realized by identifying a single correspondence point.
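Within one such segment, where the true phase difference between neighboring pixels stays below π, unwrapping reduces to integrating wrapped phase differences outward from a seed pixel. A minimal sketch of this idea follows; the segmentation itself is assumed to be given as a binary mask, and the test data are synthetic.

```python
import numpy as np
from collections import deque

def unwrap_segment(wrapped, mask, seed):
    """Spatially unwrap a phase map inside one segment by BFS from a seed.

    Assumes the true phase difference between 4-neighbors inside the
    segment stays below pi (no height jump larger than a fringe period).
    """
    out = np.full(wrapped.shape, np.nan)
    out[seed] = wrapped[seed]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < wrapped.shape[0] and 0 <= nc < wrapped.shape[1]
                    and mask[nr, nc] and np.isnan(out[nr, nc])):
                d = wrapped[nr, nc] - wrapped[r, c]
                d -= 2 * np.pi * np.round(d / (2 * np.pi))  # wrap diff to (-pi, pi]
                out[nr, nc] = out[r, c] + d
                queue.append((nr, nc))
    return out

# smooth ramp spanning several fringe periods inside one segment
phi_true = np.tile(np.linspace(0.0, 6 * np.pi, 50), (5, 1))
wrapped = np.angle(np.exp(1j * phi_true))        # wrapped to (-pi, pi]
mask = np.ones_like(wrapped, dtype=bool)
unwrapped = unwrap_segment(wrapped, mask, (0, 0))
print(np.allclose(unwrapped, phi_true))
```

Each segment is recovered only up to an unknown multiple of 2π, which is exactly why the stereo correspondence point is needed to align the phase maps absolutely.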
An optical three-dimensional (3-D) sensor based on a fringe projection technique, realizing the acquisition of the surface geometry of small objects, was developed for highly resolved and ultrafast measurements. It achieves a data acquisition rate of up to 60 high-resolution 3-D datasets per second. The high measurement velocity was achieved by consistent fringe code reduction and parallel data processing. The length of the fringe image sequence was reduced by omitting the Gray code sequence, using the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm × 20 mm and 40 mm × 40 mm with spatial resolutions between 10 and 20 μm, respectively. To enable robust and fast recalibration of the sensor after a change of the measurement field, a calibration procedure based on single-shot analysis of a special test object was applied, which requires little effort and time. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.
Fringe projection is an established method for measuring the 3D structure of macroscopic objects. To achieve both high accuracy and robustness, a certain number of images with pairwise different projection patterns is required. Over this sequence, each 3D object point must correspond to the same image point at all times. This is no longer the case for measurements under motion. One possibility to solve this problem is to restore the static situation: first, the acquired camera images have to be realigned, and second, the degree of fringe shift has to be estimated. A further variable remains: changes in lighting. Compensating for these variations is a difficult task that can only be realized under several assumptions, but they have to be approximately determined and integrated into the 3D reconstruction process. We propose a method to estimate these lighting changes for each camera pixel with respect to its neighbors at each point in time. The algorithms were validated on simulated data, in particular with rotating measurement objects. For translational motion, lighting changes have no severe effect in our applications. Taken together, without using high-speed hardware, our method yields a motion-compensated dense 3D point cloud suitable for three-dimensional measurement of moving objects or for setups with sensor systems in motion.
Measuring the three-dimensional (3D) surface shape of objects in real time has become an important task, e.g. in industrial quality management or the medical sciences. Stereo vision-based arrangements in connection with pattern projection offer high data acquisition speed and low computation time. However, these coded-light techniques are limited by the projection speed, which is conventionally in the range of 200-250 Hz.
In this contribution, we present the concepts and a realized setup of a so-called 3D array projector. It is ultra-slim, yet able to project fixed patterns with high brightness and depth of focus. Furthermore, frame rates up to the 100 kHz range are achievable without any mechanically moving parts, since the projection speed is limited mainly by the switching frequency of the LEDs used. Depending on the measurement requirements, the type and structure of the patterns can be chosen almost freely: linear or sinusoidal fringes, binary codes such as the Gray code, square, hexagonal or random patterns, and many more.
First investigations of the functionality of such a 3D array projector were conducted using a prototype with a combination of Gray codes and phase-shifted sinusoidal fringes. Our contribution demonstrates the high brightness of the proposed projector, its sharpness, and the good Michelson contrast of the fringe patterns. We address the patterns' homogeneity and the accuracy of the phase shift between the sinusoidal patterns. Furthermore, we present first measurement results and outline future research, which addresses, among other things, the use of other structured-light techniques with the help of new purpose-built 3D array projector prototypes.
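The wrapped phase encoded by such phase-shifted sinusoidal fringes is recovered per pixel with the standard N-step formula. The sketch below demonstrates it on synthetic 4-step data; the image sizes, offset, and modulation values are illustrative, not properties of the prototype.

```python
import numpy as np

def phase_from_shifts(images):
    """Wrapped phase from N sinusoidal fringe images shifted by 2*pi/N each.

    images : (N, H, W) intensities I_k = a + b*cos(phi + 2*pi*k/N)
    """
    n = images.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    s = np.sum(images * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(images * np.cos(2 * np.pi * k / n), axis=0)
    return np.arctan2(-s, c)                     # wrapped phase in (-pi, pi]

# synthetic 4-step sequence over a small phase ramp
phi = np.tile(np.linspace(-3.0, 3.0, 32), (8, 1))
imgs = np.stack([100 + 50 * np.cos(phi + 2 * np.pi * k / 4) for k in range(4)])
print(np.allclose(phase_from_shifts(imgs), phi))
```

Because the offset a and modulation b cancel out of the arctangent, the formula is insensitive to spatially varying brightness, which is why the accuracy of the phase shift between the projected patterns matters more than their absolute intensity.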
A sensor based on the fringe projection technique was developed that allows ultrafast measurements of the surfaces of flat objects, realizing a data acquisition rate of up to 8.9 million 3D points per second. The high measuring velocity was achieved by consistent fringe code reduction and parallel data processing. The fringe sequence length was reduced using geometric constraints of the sensor arrangement, including epipolar geometry. A further reduction of the image sequence length was obtained by omitting the Gray code sequence, using the geometric constraints of the measuring objects. The sensor may be used, e.g., for the inspection of conductor boards.