Fringe projection profilometry (FPP) is widely used in the field of three-dimensional (3D) measurement owing to its simple hardware configuration, high accuracy, and high speed. Traditional fringe projection systems mainly use visible light as the projection source, which makes them unsuitable for measuring human faces and shaded objects. In contrast, infrared micro-electro-mechanical-system (MEMS) projectors emit invisible infrared light and are better suited as illumination sources for 3D measurement. However, the infrared fringe patterns generated by a MEMS projector usually carry a large amount of speckle noise, which limits the quality of 3D reconstruction. Therefore, in this paper, a lightweight convolutional neural network framework is proposed to achieve fast, high-precision denoising of infrared fringe images. A three-step phase-shifting method and a multi-view stereo phase matching method are then used to compute the absolute phase from the denoised fringe images, enabling fast, high-precision 3D reconstruction. Experiments in static and dynamic scenes verify that the lightweight network improves denoising speed while maintaining accuracy comparable to conventional methods, and that the designed system is capable of fast, high-precision 3D reconstruction at more than 25 frames per second with an accuracy of 80 µm.
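As a reference for the phase-computation step mentioned above, the following is a minimal sketch of the standard three-step phase-shifting formula (assuming symmetric phase shifts of -2π/3, 0, and +2π/3; the function name and NumPy implementation are illustrative, not code from the paper):

```python
import numpy as np

def three_step_wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with phase shifts of
    -2*pi/3, 0 and +2*pi/3 (standard three-step formula)."""
    numerator = np.sqrt(3.0) * (i1 - i3)
    denominator = 2.0 * i2 - i1 - i3
    # arctan2 returns the wrapped phase in (-pi, pi]
    return np.arctan2(numerator, denominator)
```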
Fringe projection phase-shifting profilometry is widely used in defect detection and reverse engineering due to its high precision and non-contact nature. However, measuring dynamic scenes remains a challenge for phase-shifting profilometry because it requires the continuous projection of multiple patterns. In this work, we propose a motion-error compensation method based on principal component analysis (PCA), which is capable of estimating the motion of the object, eliminating the phase error introduced by motion, and realizing high-precision phase retrieval. The experimental results demonstrate that the proposed method can perform high-quality 3D reconstruction of an object moving in any direction, with a measurement accuracy better than 80 µm.
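The PCA step itself is not detailed in the abstract; as a hedged illustration, the sketch below shows the generic PCA-based phase demodulation that such motion-compensation pipelines commonly build on (the function name and the SVD-based formulation are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def pca_phase_demodulation(frames):
    """Generic PCA-based phase demodulation sketch.

    frames: array of shape (N, H, W) holding N phase-shifted fringe images.
    Returns a wrapped phase map of shape (H, W), up to a global sign/offset.
    """
    n, h, w = frames.shape
    data = frames.reshape(n, -1).astype(np.float64)
    data -= data.mean(axis=0, keepdims=True)   # remove the background term
    # Principal components of the pixel-wise intensity variations
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    pc1 = vt[0].reshape(h, w)                  # ~ B*cos(phi), up to scale/sign
    pc2 = vt[1].reshape(h, w)                  # ~ B*sin(phi), up to scale/sign
    return np.arctan2(pc2, pc1)
```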
In recent years, there has been tremendous progress in the development of deep-learning-based approaches for optical metrology, which introduce various deep neural networks (DNNs) for many optical metrology tasks, such as fringe analysis, phase unwrapping, and digital image correlation. However, since different DNN models have their own strengths and limitations, it is difficult for a single DNN to make reliable predictions under all possible scenarios. In this work, we introduce ensemble learning into optical metrology, which combines the predictions of multiple DNNs to significantly enhance the accuracy and reduce the generalization error for the task of fringe-pattern analysis. First, several state-of-the-art base models of different architectures are selected. A K-fold average ensemble strategy is developed to train each base model multiple times with different data and calculate the mean prediction within each base model. Next, an adaptive ensemble strategy is presented to further combine the base models by building an extra DNN that fuses the features extracted from these mean predictions in an adaptive and fully automatic way. Experimental results demonstrate that ensemble learning can attain superior performance over state-of-the-art solutions, including both classic methods and conventional single-DNN-based methods. Our work suggests that, by resorting to collective wisdom, ensemble learning offers a simple and effective solution for overcoming generalization challenges and boosting the performance of data-driven optical metrology methods.
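As an illustration of the K-fold average ensemble step for a single base model, a hedged Keras-style sketch might look as follows (build_model, the epoch count, and the fold count are hypothetical placeholders, not the paper's settings):

```python
import numpy as np
from sklearn.model_selection import KFold

def k_fold_average_ensemble(build_model, x_train, y_train, x_test, k=5):
    """Train the same architecture K times on different folds and average the
    test predictions. `build_model` is assumed to return a fresh, compiled
    Keras-style model exposing fit() and predict()."""
    predictions = []
    for train_idx, _ in KFold(n_splits=k, shuffle=True, random_state=0).split(x_train):
        model = build_model()
        model.fit(x_train[train_idx], y_train[train_idx], epochs=100, verbose=0)
        predictions.append(model.predict(x_test))
    return np.mean(predictions, axis=0)   # mean prediction of this base model
```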
In the field of 3D measurement, fringe projection profilometry attracts great interest due to its high precision and convenience. However, it is still challenging to retrieve an unambiguous absolute phase from a single fringe image. In this paper, we propose a deep-learning-based method for retrieving the absolute phase of triangular-wave-embedded fringe images. By learning from a large amount of data, we use two neural networks to obtain a high-precision wrapped phase and a coarse absolute phase from the triangular-wave-embedded fringe images, respectively, so as to obtain accurate fringe orders. Combining the wrapped phase and the fringe orders, we obtain the absolute phase with high precision. The experimental results demonstrate that, compared with our previously proposed composite dual-frequency fringe coding strategy, using the fringe image of the new triangular-wave-embedded coding strategy as the network input yields the absolute phase with higher accuracy.
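The final combination of the two network outputs can be illustrated with the standard unwrapping relation Φ = φ + 2πk; the sketch below assumes the coarse absolute phase only needs to be scaled to the high-frequency fringe, and all variable names are illustrative:

```python
import numpy as np

def absolute_phase(phi_wrapped, phi_coarse, scale):
    """Combine the two network outputs (standard unwrapping relation sketch).

    phi_wrapped : high-precision wrapped phase in (-pi, pi]
    phi_coarse  : coarse absolute phase predicted by the second network
    scale       : ratio mapping the coarse phase to the high-frequency fringe
    """
    fringe_order = np.round((phi_coarse * scale - phi_wrapped) / (2.0 * np.pi))
    return phi_wrapped + 2.0 * np.pi * fringe_order
```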
KEYWORDS: Composites, Fringe analysis, 3D modeling, 3D metrology, Phase retrieval, Data modeling, Modulation, Neural networks, Spatial frequencies, Projection systems
In recent years, due to the rapid development of deep learning in computer vision, deep learning has gradually penetrated into fringe projection profilometry (FPP) to improve the efficiency of three-dimensional (3D) shape measurement and the accuracy of phase and depth retrieval. For measuring dynamic scenes or high-speed events, single-shot fringe projection is one of the best options, since its single-frame nature completely avoids motion-induced errors. In this paper, we introduce a deep-learning-enabled single-shot fringe projection profilometry approach with a composite coding strategy. By combining an FPP physical-model-based network architecture with a large dataset, we demonstrate that models generated by training an improved deep convolutional neural network can directly perform high-precision phase retrieval on a single fringe image.
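In physical-model-based fringe-analysis networks of this kind, a common design is to let the network predict the sine-like numerator and cosine-like denominator of the phase-shifting formula and then recover the wrapped phase with an arctangent; the snippet below sketches that assumed final step (not code from the paper):

```python
import numpy as np

def phase_from_network_outputs(numerator, denominator):
    """Assumed final step of a physical-model-based fringe-analysis network:
    the network predicts the sine-like numerator M and cosine-like denominator D,
    and the wrapped phase is recovered by the arctangent, mirroring the
    phase-shifting formula."""
    return np.arctan2(numerator, denominator)
```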
Optical three-dimensional (3D) shape measurement technology has been widely used in industrial manufacturing, defect detection, reverse engineering, human modeling, pattern recognition, and other fields. As industrial standards continue to advance, ever more functionality and performance are demanded from imaging systems. At present, real-time systems based on visible-light fringe projection can image well at real-time speeds, but they are still not suitable for face scanning, imaging of shaded objects, etc. Meanwhile, most 3D imaging systems based on infrared projectors cannot achieve real-time performance due to their slow scanning speed. In this paper, we combine a near-infrared structured light illumination system with a multi-camera stereo phase unwrapping method to realize high-precision real-time 3D imaging. A MEMS near-infrared fringe projection device is used as the structured light illumination source, which reduces the harm of visible structured light to human eyes and animals. Experiments are carried out on static and dynamic scenes, verifying that the designed system can achieve high-speed, high-precision 3D reconstruction at 100 frames per second with a measurement accuracy of about 100 µm.
Near-infrared (NIR) fringe projection is gradually replacing visible fringe projection in face scanning because NIR light is less harmful to human eyes and has a higher recognition rate in special environments. However, since NIR is susceptible to interference from various heat and light sources, the NIR fringe images captured by the camera are of poor quality and low contrast, which directly degrades the quality of the retrieved phase. Traditional phase acquisition methods, such as Fourier transform profilometry and phase-shifting profilometry, struggle to achieve high-speed and high-precision phase measurement at the same time. Therefore, this paper proposes a deep-learning-based phase acquisition method for NIR fringe projection. Using a model trained by a deep neural network with powerful learning and computing capabilities, phase extraction can be achieved from fewer NIR fringe images. Moreover, our method can retrieve the phase information at high speed and with high quality without additional optimization of the original fringe images.
In fringe projection profilometry (FPP), efficiently recovering the absolute phase has always been a great challenge. Stereo phase unwrapping (SPU) technologies based on geometric constraints can eliminate phase ambiguity without projecting any additional patterns, which maximizes the efficiency of absolute phase retrieval. Inspired by recent successes of deep learning for phase analysis, we demonstrate that deep learning can be an effective tool that organically unifies phase retrieval, geometric constraints, and phase unwrapping into a comprehensive framework. Driven by an extensive training dataset, the properly trained neural network can achieve high-quality phase retrieval and robust phase ambiguity removal from only a single-frame projection. Experimental results demonstrate that, compared with conventional SPU, our deep-learning-based approach can more efficiently and robustly unwrap the phase of dense fringe images in a larger measurement volume with fewer camera views.
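For context, conventional SPU resolves the fringe order of each pixel by testing candidate absolute phases against a second calibrated view. A per-pixel sketch is given below, where triangulate_candidate and project_to_camera2 are hypothetical stand-ins for the system's calibrated camera/projector model:

```python
import numpy as np

def stereo_phase_unwrap_pixel(phi1, pixel, phi2_map, triangulate_candidate,
                              project_to_camera2, num_periods, tol=0.3):
    """Per-pixel geometric-constraint phase unwrapping sketch for camera 1.
    `triangulate_candidate` and `project_to_camera2` are hypothetical helpers
    standing in for the real system's calibration."""
    h, w = phi2_map.shape
    best_k, best_err = None, np.inf
    for k in range(num_periods):
        # candidate absolute phase -> candidate 3D point -> reprojection into camera 2
        point3d = triangulate_candidate(pixel, phi1 + 2.0 * np.pi * k)
        u2, v2 = project_to_camera2(point3d)
        iu, iv = int(round(u2)), int(round(v2))
        if not (0 <= iu < w and 0 <= iv < h):
            continue
        # wrapped-phase consistency between the two views
        err = np.abs(np.angle(np.exp(1j * (phi2_map[iv, iu] - phi1))))
        if err < best_err:
            best_k, best_err = k, err
    return phi1 + 2.0 * np.pi * best_k if best_err < tol else None
```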
Three-dimensional (3D) imaging technology has been widely applied in various fields, such as intelligent manufacturing, online inspection, reverse engineering, cultural relic protection, etc. In this work, we present a high-accuracy real-time omnidirectional 3D scanning and inspection system based on fringe projection profilometry. Firstly, a multi-camera system based on geometric constraints is constructed to perform stereo phase unwrapping without additional auxiliary projection images, ensuring high-accuracy 3D data acquisition in real time. Then, we propose a rapid 3D point cloud registration approach combining simultaneous localization and mapping (SLAM) with iterative closest point (ICP) techniques to achieve alignment of point cloud slices with an accuracy of up to 100 µm. Finally, a cycle-positioning-based registration scheme is developed to allow for real-time 360° 3D surface defect inspection. The experimental results show that our system is capable of real-time omnidirectional 3D modeling and real-time 360° defect detection.
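The ICP refinement stage can be sketched with Open3D as follows, assuming a SLAM-style coarse pose is already available (the voxel size, distance threshold, and function name are illustrative choices, not the paper's parameters):

```python
import open3d as o3d

def refine_slice_pose(source, target, init_transform, voxel=0.5):
    """Minimal ICP refinement sketch (Open3D): align a new point-cloud slice
    to the growing model, starting from a SLAM-provided coarse pose."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=2.0 * voxel,
        init=init_transform,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # 4x4 pose aligning the slice to the model
```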
Fringe projection profilometry (FPP) has been widely used in high-speed, dynamic, real-time three-dimensional (3D) shape measurement. Recovering high-accuracy, high-precision 3D shape information from a single fringe pattern is a long-term goal in FPP. Traditional single-shot fringe projection methods struggle to achieve high-precision 3D shape measurement of isolated objects and objects with complex surfaces due to the influence of surface reflectivity and spectral aliasing. To break through the physical limits of the traditional methods, we apply deep convolutional neural networks to single-shot fringe projection profilometry. By combining a physical model with a data-driven approach, we demonstrate that the model generated by training an improved U-Net network can directly perform high-precision and unambiguous phase retrieval on a single-shot spatial-frequency-multiplexed composite fringe image while avoiding spectral aliasing. Experiments show that our method can retrieve high-quality absolute 3D surfaces of objects by projecting only a single composite fringe image.
Using a single fringe image to achieve dynamic absolute 3D reconstruction has become a tremendous challenge and an enduring pursuit for researchers. In fringe projection profilometry (FPP), although many methods can achieve high-precision 3D reconstruction with a simple system architecture via appropriate encoding schemes, they usually cannot retrieve the absolute 3D information of objects with complex surfaces from only a single fringe pattern. In this work, we develop a single-frame composite fringe encoding approach and use a deep convolutional neural network to retrieve the absolute phase of the object from this composite pattern end to end. The proposed method can directly obtain spectrum-aliasing-free phase information and robust phase unwrapping from a single-frame compound input through extensive data learning. Experiments have demonstrated that the proposed deep-learning-based approach can achieve absolute phase retrieval using a single image.
Fringe projection profilometry (FPP) has been increasingly applied in fields such as intelligent manufacturing and medical plastic surgery. Recovering the three-dimensional (3D) surface of an object from a single frame has always been a pursued goal in FPP. The color fringe projection method is one of the most promising technologies for single-shot 3D imaging because of its multi-channel multiplexing. Inspired by the recent success of deep learning technologies for phase analysis, we propose a novel single-shot 3D shape measurement approach named color deep learning profilometry (CDLP). Through 'learning' on extensive data sets, the properly trained neural network can gradually 'predict' the crosstalk-free, high-quality absolute phase corresponding to the depth information of the object directly from a color fringe image. Experimental results demonstrate that our method achieves accurate phase acquisition and robust phase unwrapping without any complex pre/post-processing.
Ensuring high quality standards at a competitive cost through rapid and accurate industrial inspection is a great challenge in the field of intelligent manufacturing. Three-dimensional (3D) optical quality inspection technologies are gradually being applied to surface defect detection of complex workpieces because of their non-contact, high-accuracy, digitized, and automated nature. However, the contradiction between cost and efficiency, the dependence on additional positioning hardware, and compromised detection strategies remain urgent obstacles to overcome. In this work, we propose a fast 3D surface defect inspection approach based on fringe projection profilometry (FPP) for complex objects without any auxiliary equipment for position and orientation control. Firstly, a multi-view 3D measurement based on geometric constraints is employed to acquire high-accuracy depth information from different perspectives. Then, a cycle-positioning-based registration scheme with a pose-information-matched 3D standard digital model is proposed to realize rapid alignment of the measured point cloud and the standard model. Finally, a minimum-3D-distance search method, driven by a dual-thread processing mechanism for simultaneous scanning and detection, quantifies and locates 3D surface defects in real time. To validate the proposed inspection approach, software that combines 3D imaging, point cloud registration, and surface defect calculation is developed to perform quality inspections on complicated objects. The experimental results show that our method can accurately detect 3D surface defects of workpieces in real time with more economical hardware and more convenient means, which is of great significance to intelligent manufacturing.
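The minimum-3D-distance search can be illustrated with a nearest-neighbor query against the registered standard model, e.g. via a KD-tree (a hedged sketch; the threshold value and function name are placeholders, not the paper's settings):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_defects(measured_points, model_points, threshold=0.1):
    """For every measured point, find its nearest neighbor on the (already
    registered) standard model and flag it as a defect if the distance exceeds
    a tolerance. Units follow the point clouds; 0.1 is an illustrative value."""
    tree = cKDTree(model_points)                 # model_points: (M, 3) array
    distances, _ = tree.query(measured_points)   # nearest-neighbor distances, shape (N,)
    return distances, distances > threshold      # per-point deviation and defect mask
```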
Eliminating phase ambiguity with as few fringe patterns as possible is a huge challenge for fringe projection profilometry (FPP). Stereo phase unwrapping (SPU) technologies based on geometric constraints can achieve phase unwrapping without projecting any additional patterns, which maximizes the efficiency of absolute phase retrieval. However, when high-frequency fringes are used, the phase ambiguities increase, which makes SPU unreliable. The adaptive depth constraint (ADC) method can increase the robustness of SPU, but it struggles in scenarios without a priori depth guidance. In this work, we propose a stereo phase unwrapping method based on feedback projection to robustly unwrap the wrapped phase of dense fringe images. To address the problem that the ADC depends too heavily on the last measurement result, a simple and effective deep anomaly detection strategy is proposed. After the reconstruction error is determined, the absolute depth of the object can be quickly obtained through the proposed fully automatic projection feedback mechanism to correct the dynamic depth range of the ADC, thus guiding the acquisition of high-quality 3D information. Experiments prove that this approach can achieve high-speed, real-time, high-resolution 3D measurement at 30 Hz using only two perspectives.
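The adaptive-depth-constraint update that the feedback mechanism corrects can be sketched as follows (all numeric values and the function name are illustrative placeholders, not parameters from the paper):

```python
import numpy as np

def update_depth_range(prev_depth_map, margin=20.0, anomaly=False,
                       fallback=(400.0, 800.0)):
    """Adaptive-depth-constraint update sketch: shrink the candidate depth range
    around the previous frame's result, and fall back to a wide default range
    (triggering feedback projection) when the anomaly check rejects that result.
    All numbers are illustrative placeholders in millimetres."""
    if anomaly or prev_depth_map is None:
        return fallback
    valid = prev_depth_map[np.isfinite(prev_depth_map)]
    return float(valid.min()) - margin, float(valid.max()) + margin
```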
Fringe projection profilometry (FPP) has been widely applied in three-dimensional (3D) measurement owing to its high measurement accuracy and simple structure. In FPP, how to effectively recover the absolute phase, especially from a single image, has always been a huge challenge and an enduring pursuit. Frequency-multiplexing methods can maximize the efficiency of phase unwrapping by mixing, within the spectrum, the multi-frequency information used to eliminate phase ambiguity. However, spectrum aliasing and the resulting phase unwrapping errors remain pressing difficulties. Inspired by the successful application of deep learning in FPP, we propose a single-shot frequency-multiplexed fringe pattern phase unwrapping approach using deep learning. Through extensive data learning, the properly trained neural networks can directly obtain spectrum-aliasing-free phase information and robust phase unwrapping from a single-frame compound input. Experimental results demonstrate that, compared with conventional frequency-multiplexing methods, our deep-learning-based approach achieves more accurate and stable absolute phase retrieval.