Optical triangulation metrology is an essential part of modern industrial quality assurance. Due to their robustness and cost-effectiveness, laser light section sensors have become a widespread solution for geometry inspection. The configuration of a measurement system involves balancing the size of the measurement volume against the accuracy to be achieved. Therefore, in order to provide accurate measurements on a larger scale, several individual measurements of different configurations have to be combined. This study thus investigates the identification of the parameters of a focus-adjustable triangulation sensor. Adaptability of the working distance is achieved by automatic focusing of the camera and by repositioning the laser with a piezo rotation stage and a mirror. The movement of the projected laser plane has to be identified as a function of the tilting angle. This ensures accurate calibration of the measuring system while the working distance remains adjustable. Two general approaches are suitable for solving this task. The first is based on a simplified identification of the rotational axis of the tilted laser plane. However, this does not account for the deviations resulting from the offset between the reflection axis on the mirror surface and the rotational axis. This paper extends conventional models by deriving the position of the projected plane from the complete set of rigid body transformations with respect to the rotation angles. The positions of the laser source, the rotation axis, and the mirror surface are described in the camera coordinate system. The validity of the extended model is assessed in comparison with a simplified model. Furthermore, the influence of the focusable camera on the calibration of the laser rotation axis is investigated.
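A minimal geometric sketch of the extended model's core step, reconstructed from the abstract (function names, the Hesse normal form convention, and all parameters are our assumptions, not the paper's code): a laser plane is reflected off a mirror whose rotation axis does not lie in the mirror surface, so the reflected plane must be recomputed for every tilting angle.

```python
# Hedged sketch (not the authors' implementation): propagating a laser plane
# through a mirror reflection whose rotation axis is offset from the mirror
# surface. All quantities are expressed in the camera coordinate system.
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix about a unit axis by `angle` (Rodrigues' formula)."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def reflected_laser_plane(n_l, d_l, n_m0, p_m0, axis_dir, axis_pt, angle):
    """Reflect the laser plane (n_l, d_l) off a mirror rotated by `angle`
    about an axis NOT lying in the mirror surface. Planes are in Hesse
    normal form: n . x = d, with unit normals."""
    R = rodrigues(axis_dir, angle)
    # Rotate the mirror normal and a point on the mirror about the offset axis.
    n_m = R @ n_m0
    p_m = axis_pt + R @ (p_m0 - axis_pt)
    d_m = n_m @ p_m
    # Householder reflection across the rotated mirror plane.
    H = np.eye(3) - 2.0 * np.outer(n_m, n_m)
    n_r = H @ n_l
    # Mirror a point of the laser plane across the mirror plane.
    p_l = n_l * d_l                       # closest point to origin on laser plane
    p_r = p_l - 2.0 * (n_m @ p_l - d_m) * n_m
    return n_r, n_r @ p_r                 # reflected plane (normal, distance)
```

A simplified model would instead rotate the laser plane directly about a single fitted axis; the difference between the two predictions grows with the offset between the rotation axis and the mirror surface.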
For high-precision measurements through an inspection window, a 3D scanner based on fringe projection profilometry is being researched. The 3D scanner combines a micromirror array projector and two telecentric cameras. The affine camera model is commonly used to calibrate telecentric imaging systems; in this model, a single magnification factor is introduced and optimized for each lens. However, 3D reconstructions based on this model indicated that reconstruction uncertainties in the peripheral areas of the measuring volume are significantly affected by a possible inspection window. These uncertainties may occur due to the model-based determination of the magnification factor and the reduction in parallelism of the visible rays within the telecentric lens that can occur at larger working distances. To address this issue, a new method for calculating and identifying the influence of the magnification factor on the 3D point scaling of telecentric measuring systems is proposed. First, the measuring system is calibrated using an affine camera model. Then, the reconstructed 3D target points are used to estimate the magnification factor locally and to assess the influence of an inspection window in the optical path. In order to further investigate the influence of the inspection window on the imaging performance of the cameras, the focus is estimated locally within the measuring volume. Initial measurements using these methods reveal that scale variations and the reduction of focus can be quantified locally, and that a model-based correction as well as the removal of poorly reconstructed points is feasible.
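A hedged sketch of how a local magnification factor could be estimated from reconstructed target points, in the spirit of the workflow the abstract describes; the neighbourhood size, the use of median ratios, and all names are our assumptions, not the paper's method.

```python
# Assumed workflow sketch: estimate a local magnification factor by comparing
# image-plane distances with object-space distances between neighbouring
# calibration target points under an affine (telecentric) camera model.
import numpy as np

def local_magnification(pts3d, pts2d, k=6):
    """pts3d: (N, 3) reconstructed target points; pts2d: (N, 2) their image
    observations. Under an ideal telecentric lens the distance ratio is one
    global constant; spatial variation of the returned values indicates
    inspection-window or ray-parallelism effects."""
    mags = np.empty(len(pts3d))
    for i, (P, p) in enumerate(zip(pts3d, pts2d)):
        d3 = np.linalg.norm(pts3d - P, axis=1)
        idx = np.argsort(d3)[1:k + 1]          # k nearest neighbours, skip self
        d2 = np.linalg.norm(pts2d[idx] - p, axis=1)
        mags[i] = np.median(d2 / d3[idx])      # pixels per unit object length
    return mags
```

Mapping these values over the measuring volume would then flag regions where the scale deviates from the globally calibrated magnification factor, supporting both a model-based correction and the removal of poorly reconstructed points.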
This study presents a method to generate synthetic microscopic surface images by adapting the pre-trained latent diffusion model Stable Diffusion and the pre-trained text encoder OpenCLIP-ViT/H. A confocal laser scanning microscope was used to acquire the dataset for transfer learning. The measured samples include metallic surfaces processed with different abrasive machining methods such as grinding, polishing, and honing. The network is trained to generate microtopographies for these machining methods, for different materials (for example, aluminum, PVC, and steel) and roughness values (for example, milling with Ra = 0.4 to Ra = 12.5). The performance of the network is evaluated through visual inspection and the objective image quality measures Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Fréchet Inception Distance (FID). The results demonstrate that the proposed method can generate realistic microtopographies, albeit with some limitations. These limitations may be due to the fact that the original training data for the Stable Diffusion network consisted mostly of images from the Internet, which often show people or landscapes. It was also found that the lack of post-processing of the synthetic images may lead to a reduction in perceived sharpness and less finely detailed structures. Nevertheless, the performance of the model demonstrates a promising and effective approach to surface metrology, contributing to fields such as materials science and surface engineering.
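The abstract names PSNR, SSIM, and FID as objective measures; the following sketch computes them with scikit-image and torchmetrics (the tooling is our choice, not necessarily the authors').

```python
# Hedged sketch: the objective image-quality metrics named in the abstract.
# `real` and `fake` are single uint8 grayscale topography images; the FID is
# computed over whole image sets, not single pairs.
import numpy as np
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from torchmetrics.image.fid import FrechetInceptionDistance

def evaluate_pair(real, fake):
    psnr = peak_signal_noise_ratio(real, fake)
    ssim = structural_similarity(real, fake)
    return psnr, ssim

def evaluate_fid(real_batch, fake_batch):
    """real_batch, fake_batch: uint8 numpy arrays of shape (N, 3, H, W)."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(torch.from_numpy(real_batch), real=True)
    fid.update(torch.from_numpy(fake_batch), real=False)
    return fid.compute().item()
```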
In this work, a deep convolutional neural network is proposed to improve the registration of microtopographic data. For this purpose, different mechanical surfaces were optically measured using a confocal laser scanning microscope. A wide range of surfaces with different materials, processing methods, and topographic properties, such as isotropy and anisotropy or stochastic and deterministic features, is included. Training and testing datasets with known homographies are generated from these measurements by cropping a fixed and a moving image patch from each topography and then randomly perturbing the latter. A pseudo-siamese network architecture based on VGG Net is then used to predict these homographies. The network is trained with a supervised learning approach in which the Euclidean distance between the predicted and the ground-truth homographies defines the loss function. The 4-point homography parameterization is used to improve the loss convergence. Furthermore, different amounts of image noise are added to enhance the prediction's robustness and prevent overfitting. The effectiveness of the proposed method is evaluated through different experiments. First, the network performance is compared to conventional intensity-based and feature-based registration algorithms regarding the resulting error, the noise robustness, and the processing speed. In addition, images from the Microsoft Common Objects in Context (COCO) dataset are used to verify the network's generalization capability to new image types and contents. The results show that the learning-based approach offers much higher robustness to image noise and a much lower processing time. In contrast, conventional algorithms achieve a smaller registration error without image noise.
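A short sketch of the described training-pair generation with the 4-point homography parameterization, a standard construction for learned homography estimation; patch size, perturbation range, and all names here are illustrative assumptions.

```python
# Hedged sketch: generate a (fixed, moving) patch pair with a known homography
# by perturbing the four corner points of a crop, as the abstract describes.
import numpy as np
import cv2

def make_training_pair(topo, patch=128, rho=32, rng=np.random.default_rng()):
    """Crop a fixed patch, perturb its corners by up to `rho` pixels, and warp
    the image so the moving patch matches the perturbed quad. The regression
    target is the 4-point parameterization: the eight corner offsets."""
    h, w = topo.shape[:2]
    x = int(rng.integers(rho, w - patch - rho))
    y = int(rng.integers(rho, h - patch - rho))
    corners = np.float32([[x, y], [x + patch, y],
                          [x + patch, y + patch], [x, y + patch]])
    offsets = rng.uniform(-rho, rho, size=(4, 2)).astype(np.float32)
    H = cv2.getPerspectiveTransform(corners, corners + offsets)
    warped = cv2.warpPerspective(topo, np.linalg.inv(H), (w, h))
    fixed = topo[y:y + patch, x:x + patch]
    moving = warped[y:y + patch, x:x + patch]
    return fixed, moving, offsets.ravel()   # 8-vector label
```

Training then minimizes the Euclidean distance between the predicted and ground-truth offset vectors, which is better conditioned than regressing the nine entries of the homography matrix directly.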
To ensure the safe operation of aircraft, regular endoscopy of the engines is mandatory. Since the blade stages are particularly susceptible to defects, they must be inspected especially frequently. In the process, a worker must inspect each blade individually. All findings must be carefully documented and assigned to the respective blade. Since there are no individual markings to identify the blades, the operator must count all blades as they pass through the endoscope image. Although electric rotary devices with automatic blade counting are available for some engines, manual counting is often necessary. Simultaneously inspecting and counting blades is tedious and error-prone. In this paper, we present a novel algorithm for automatic blade counting during jet engine inspection. The algorithm's central part is a Pearson correlation of individual video frames as the blades pass in front of the endoscope during turning. Adaptive thresholding of the correlation function is used to count the blades. Rotation direction and speed are determined using the Farneback method of optical flow. By using correlation instead of classical image features, the algorithm is highly robust to metallic reflections and smooth blade surfaces without significant image features. In addition, the algorithm is robust to different rotation speeds and directions. Compared to existing approaches, the algorithm is robust and universally applicable for counting engine blades on almost any engine without the need for customization.
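A simplified reconstruction of the counting principle from the abstract (not the authors' implementation; the reference-frame choice and the moving-average threshold are assumptions): as blades pass the endoscope, the frame-to-reference Pearson correlation oscillates periodically, and each excursion above an adaptive threshold is counted as one blade.

```python
# Hedged sketch of correlation-based blade counting with an adaptive
# threshold, plus rotation direction from Farneback optical flow.
import numpy as np
import cv2

def pearson(a, b):
    return np.corrcoef(a.ravel().astype(np.float64),
                       b.ravel().astype(np.float64))[0, 1]

def count_blades(frames):
    """frames: list of single-channel uint8 endoscope video frames."""
    ref = frames[0]
    corr = np.array([pearson(ref, f) for f in frames])
    # Adaptive threshold: a moving average of the correlation signal.
    win = max(len(corr) // 20, 3)
    thresh = np.convolve(corr, np.ones(win) / win, mode="same")
    above = corr > thresh
    # Each rising edge of the thresholded signal corresponds to one blade.
    blades = int(np.count_nonzero(above[1:] & ~above[:-1]))
    # Rotation direction from dense optical flow (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(frames[0], frames[1], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    direction = np.sign(np.median(flow[..., 0]))  # sign of horizontal motion
    return blades, direction
```

Because the correlation compares whole frames rather than extracting keypoints, the approach remains usable on smooth, feature-poor blade surfaces with metallic reflections.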
Optical triangulation systems based on fringe projection profilometry have emerged in recent years as a complement to traditional tactile devices. Due to the good scalability of the measurement approach, a highly compact novel sensor for maintenance and inspection in narrow spaces is realized by applying optical fiber bundles. Especially in the field of high-resolution and rapid maintenance in industrial environments, numerous applications arise. Endoscopic 3D measurements of gearing geometries are of particular technical relevance for detecting and quantifying damage. The measurement performance depends to a considerable extent on the technical surface to be inspected. Polished surfaces are particularly problematic due to specular reflections, but can still be partially reconstructed by using HDR imaging. However, if the specimen geometry and sensor arrangement cause multiple reflections such that the optical path of each corresponding camera pixel can no longer be reconstructed unambiguously, a measurement is no longer feasible. In this study, the effects of surface roughness, sensor arrangement, and triangulation angle on the measurement error are systematically investigated in order to describe possible application limits and provide guidance on sensor operation.
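As a side note to the HDR imaging mentioned above, a minimal sketch of exposure-series fusion with OpenCV's Debevec pipeline (our tool choice; the abstract does not specify which HDR implementation is used):

```python
# Hedged sketch: merge a fringe-image exposure series into a radiance map so
# that both specular highlights and dark regions retain usable fringe contrast.
import numpy as np
import cv2

def merge_hdr(images, exposure_times):
    """images: list of 8-bit BGR frames of the same scene at different
    exposures; exposure_times: their exposure times in seconds."""
    times = np.asarray(exposure_times, dtype=np.float32)
    calibrate = cv2.createCalibrateDebevec()
    response = calibrate.process(images, times)       # camera response curve
    merge = cv2.createMergeDebevec()
    return merge.process(images, times, response)     # float32 radiance map
```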
Benefitting from recent innovations in the smartphone sector, liquid optics in very compact designs have been introduced to the market cost-effectively. Without mechanical actuation, the focus can be varied within fractions of a second by curving the boundary layer between two liquids through an applied pulse-width- or amplitude-modulated potential. Especially in the field of endoscopy, these innovative optical components open up many application possibilities. Conventional mechanical zoom lenses are not very common in endoscopy and can only be miniaturized with considerable effort due to the necessary actuation and the complex design. In addition, their mechanical response is slow, which is a particular disadvantage in hand-held operation. A calibrated camera allows a two-dimensional camera pixel to be translated into a three-dimensional beam and, together with distortion correction, enables the extraction of metric information. This approach is widely used in endoscopy, for example, to measure objects in the scene or to estimate the camera position and derive a trajectory accordingly. This is particularly important for triangulation-based 3D reconstruction such as photogrammetry. The use of liquid lenses would require a new data set with an adapted camera calibration for each focus adjustment. In practice, this is not feasible and would result in an extensive calibration effort. On the basis of an experimental setup for automated endoscopic camera calibration, this paper therefore examines the extent to which certain calibration parameters can be modelled and approximated for each possible focus adjustment, and also investigates the influence of a liquid lens on the quality of the actual calibration.
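A minimal sketch of the modelling idea under discussion, assuming calibration at a few discrete liquid-lens control values and a polynomial fit in between (the model form, function names, and all numbers are illustrative assumptions, not results from the paper):

```python
# Hedged sketch: interpolate camera intrinsics over the liquid-lens focus
# setting instead of recalibrating at every possible adjustment.
import numpy as np

def fit_intrinsic_model(control_values, focal_lengths, degree=2):
    """Fit focal length as a polynomial of the lens control value; the same
    scheme can be applied to the principal point and distortion coefficients,
    each obtained from a full calibration at that setting."""
    coeffs = np.polyfit(control_values, focal_lengths, degree)
    return np.poly1d(coeffs)

# Illustrative values only (not measured data): calibrations at five lens
# settings, queried at an unseen intermediate setting.
model = fit_intrinsic_model([0.0, 0.25, 0.5, 0.75, 1.0],
                            [1210.3, 1198.7, 1187.2, 1176.0, 1165.4])
print(model(0.6))   # approximated focal length (pixels) at this setting
```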