KEYWORDS: Cameras, Thermal imaging cameras, 3D modeling, Profilometers, 3D projection, Projection systems, Visible radiation, Thermography, Point clouds, 3D image processing
Conventional fringe projection profilometers use cameras and projectors operating in the visible spectrum. However, some applications require profilometers with a complementary thermal camera for the infrared spectrum. Since the point cloud is computed from pixel correspondences between the visible camera-projector pair, the texture in the visible spectrum is obtained by directly associating the color of each image pixel with its corresponding point in the cloud. Unfortunately, obtaining the texture from the thermal camera is not as straightforward because no such pixel-point correspondences exist. In this paper, a simple interpolation-based method for determining the texture of the reconstructed objects is proposed. The theoretical principles are reviewed, and an experimental verification is conducted using a visible-thermal fringe projection profilometer. This work provides a helpful framework for three-dimensional data fusion in advanced multi-modal profilometers.
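As a concrete illustration of the general idea (not the authors' implementation), the sketch below projects each reconstructed 3D point into the thermal image with a pinhole model and samples a temperature value by bilinear interpolation; the names `thermal_texture`, `K`, `R`, and `t` are assumptions for this example.

```python
import numpy as np

def thermal_texture(points, K, R, t, thermal_img):
    """Assign a thermal value to each 3D point by projecting it into
    the thermal camera and bilinearly interpolating the image.
    points      : (N, 3) point cloud in world coordinates
    K, R, t     : assumed thermal-camera intrinsics (3x3), rotation
                  (3x3), and translation (3,) from calibration
    thermal_img : (H, W) thermal image as a float array
    """
    # Project points into the thermal image plane (pinhole model).
    cam = points @ R.T + t              # world -> thermal camera frame
    uv = cam @ K.T                      # homogeneous image coordinates
    u, v = uv[:, 0] / uv[:, 2], uv[:, 1] / uv[:, 2]

    H, W = thermal_img.shape
    u = np.clip(u, 0.0, W - 1.001)
    v = np.clip(v, 0.0, H - 1.001)

    # Projected coordinates are non-integer, which is why a direct
    # pixel-to-point lookup fails and interpolation is needed.
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    return (thermal_img[v0, u0]         * (1 - du) * (1 - dv)
          + thermal_img[v0, u0 + 1]     * du       * (1 - dv)
          + thermal_img[v0 + 1, u0]     * (1 - du) * dv
          + thermal_img[v0 + 1, u0 + 1] * du       * dv)
```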
Current calibration methods for multimodal systems combining structured light and thermography rely on calibration targets with physically fabricated features. However, manufacturing defects in these targets are common, so these methods are prone to undesired errors. We propose a calibration method for a multimodal system (a visible camera, a projector, and a thermal imaging camera) that does not require the construction of a physical calibration target. For this purpose, with the aid of an auxiliary camera, we use a digital screen to obtain the intrinsic parameters of the camera, and a mirror to obtain the intrinsic and extrinsic parameters of the projector and the thermal imaging camera. The experimental results demonstrate that the challenging task of fabricating physical targets can be avoided without compromising the accuracy of the system calibration compared to conventional methods.
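For context, the screen-based intrinsic-calibration step can be sketched with standard OpenCV routines, assuming the digital screen displays a checkerboard pattern. This is a generic sketch, not the proposed mirror-based procedure; the file pattern `screen_*.png`, the checkerboard dimensions, and the square size are illustrative assumptions.

```python
import glob
import cv2
import numpy as np

# Hypothetical grayscale views of a checkerboard shown on the screen.
screen_images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
                 for p in sorted(glob.glob("screen_*.png"))]

pattern = (9, 6)        # inner corners per row and column (assumed)
square_mm = 25.0        # displayed square size in mm (assumed)

# Ideal corner coordinates on the planar screen (z = 0).
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_pts, img_pts = [], []
for img in screen_images:
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Standard pinhole calibration yields the camera matrix K and the
# lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, screen_images[0].shape[::-1], None, None)
print(f"reprojection RMS: {rms:.3f} px")
```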
Reliably detecting or tracking 3D features is challenging. It often requires preprocessing and filtering stages, along with fine-tuned heuristics, for reliable detection. Alternatively, artificial intelligence-based strategies have recently been proposed; however, these typically require many manually labeled images for training. We introduce a method for 3D feature detection using a convolutional neural network and a single 3D image obtained by fringe projection profilometry. We cast 3D feature detection as an unsupervised detection problem: the goal is a neural network that learns to detect specific features in 3D images from a single unlabeled image. To this end, we implemented a deep-learning method that exploits inherent symmetries to detect objects with little training data and without ground truth. Subsequently, using an image-pyramid strategy that rescales each image before processing, we detect features at different sizes. Finally, we unify the detections with a non-maximum suppression algorithm. Preliminary results show that the method provides reliable detection across different scenarios with a more flexible training procedure than competing methods.
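To make the final fusion step concrete, below is a minimal sketch of greedy non-maximum suppression for merging detections of the same feature found at different pyramid scales. The box format `[x1, y1, x2, y2]`, the IoU threshold, and the function name `nms` are assumptions for illustration, not details from the paper.

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression over multi-scale detections.
    boxes  : (N, 4) array of [x1, y1, x2, y2]
    scores : (N,) detection confidences
    Returns the indices of the boxes to keep.
    """
    order = np.argsort(scores)[::-1]   # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with the remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        # Discard boxes overlapping the kept box beyond the threshold.
        order = order[1:][iou <= iou_thr]
    return keep
```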