In electronics manufacturing, the inspection of defects of electrical components on printed circuit boards (SMD-PCB) is an important part of the production chain. This process is normally implemented by automatic optical inspection (AOI) systems based on classical computer vision and multimodal imaging. Despite highly developed image processing, misclassifications can occur due to the variable appearance of objects and defects and to constantly emerging defect types, which can only be avoided by constant manual supervision and adaptation. Considerable manpower is therefore needed for this supervision or for a subjective follow-up inspection. In this paper, we present a new method based on the principle of multimodal deep learning-based one-class novelty detection to support AOIs and operators in detecting defects more accurately and in determining whether something needs to be changed. Combined with a given AOI classification, a powerful adaptive AOI system can be realized. To evaluate the performance of the multimodal novelty detector, we conducted experiments with SMD-PCB components imaged in texture and geometric modalities. Following the one-class detection paradigm, only normal data is needed to form the training sets; annotated defect data, which is normally scarce, is used only for testing. We report on experiments with varying compositions of the data categories to investigate the applicability of this approach in different scenarios, comparing different state-of-the-art one-class novelty detection techniques on image data of different modalities. In addition, the influence of different data fusion methods is discussed to find a good way to use this data and to show the benefits of multimodal data. Our experiments show outstanding defect detection performance using multimodal data with our approach: our best value of the widely known AUROC exceeds 0.99 on real test data.
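To make the one-class setup concrete, here is a minimal sketch of the paradigm described above: a convolutional autoencoder trained only on normal samples, with early fusion of texture and geometry images as stacked input channels. The architecture, shapes, and fusion scheme are illustrative assumptions, not the abstract's specific models; the abstract compares several state-of-the-art detectors.

```python
# Minimal sketch (hypothetical architecture and shapes): one-class novelty
# detection via a convolutional autoencoder trained on normal samples only,
# with early fusion of texture and geometry images as stacked input channels.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class FusionAutoencoder(nn.Module):
    def __init__(self, in_ch=2):  # channel 0: texture, channel 1: geometry
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def novelty_scores(model, x):
    # Per-sample reconstruction error serves as the novelty score: defects
    # reconstruct poorly because only normal data was seen during training.
    with torch.no_grad():
        err = (model(x) - x) ** 2
    return err.flatten(1).mean(dim=1)

# Usage sketch: x_train holds only normal components; x_test mixes both classes.
# auroc = roc_auc_score(y_test, novelty_scores(model, x_test).numpy())
```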
We introduce an innovative concept for 3D imaging that utilizes a structured light principle. While our design is specifically tailored for collaborative scenarios involving mobile transport robots, it is also applicable to similar contexts. Our system pairs a standard camera with a projector that employs a diffractive optical element (DOE) and a collimated laser beam to generate a coded light pattern, which allows three-dimensional measurement of objects from a single camera shot. The main objective of the 3D sensor is to facilitate the development of automatic, dynamic and adaptive logistics processes capable of managing diverse and unpredictable events. The key novelty of our proposed system for triangulation-based 3D reconstruction is the unique coding of the light pattern, ensuring robust and efficient 3D data generation even within challenging environments such as industrial settings. Our pattern relies on a perfect submap: a matrix of pseudorandomly distributed dots in which each submatrix of a fixed size is distinct from all others. Based on the size of the working space and the known geometrical parameters of the optical components, we establish vital design constraints such as minimum pattern size, uniqueness window size, and minimum Hamming distance for the design of an optimal pattern. We empirically examine the impact of these pattern constraints on the quality of the 3D data and compare our proposed encoding with single-shot patterns from the existing literature. Additionally, we explain in detail how we addressed several challenges during the fabrication of the DOE that are crucial to the usability of the application: reducing the 0th diffraction order, accommodating a large horizontal field of view, achieving high point density, and managing a large number of points. Lastly, we propose a real-time processing pipeline that transforms an image of the captured dot pattern into a high-resolution 3D point cloud using a computationally efficient pattern decoding methodology.
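The perfect-submap property can be checked mechanically. The sketch below (with assumed window size and distance parameters, not the paper's actual design values) brute-force verifies that every fixed-size window of a binary dot matrix is unique and separated by a minimum Hamming distance, which is exactly the uniqueness-window constraint described above.

```python
# Minimal sketch (assumed parameters): brute-force check that every w x w
# window of a binary dot pattern is unique and separated by a minimum Hamming
# distance -- the "perfect submap" property described in the abstract.
import numpy as np
from itertools import combinations

def check_perfect_submap(pattern: np.ndarray, w: int, min_hamming: int) -> bool:
    h, wid = pattern.shape
    windows = [pattern[r:r+w, c:c+w].ravel()
               for r in range(h - w + 1) for c in range(wid - w + 1)]
    for a, b in combinations(windows, 2):
        if np.count_nonzero(a != b) < min_hamming:
            return False  # two uniqueness windows too similar to decode robustly
    return True

# Usage sketch: a purely random matrix will typically fail this check; actual
# patterns are constructed by search so that every window passes.
rng = np.random.default_rng(0)
pattern = (rng.random((32, 32)) < 0.5).astype(np.uint8)
print(check_perfect_submap(pattern, w=4, min_hamming=2))
```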
One of the most popular platforms for developing real-time image and video processing applications is NVIDIA's Jetson Nano, which is equipped with CUDA to accelerate performance. However, no research has yet evaluated CUDA performance on the Jetson Nano for real-time image/video processing applications. In this research, we evaluate CUDA performance on the Jetson Nano by running a thresholding application with and without the CUDA feature. The aspects evaluated in this study are: CPU usage percentage, GPU usage percentage, temperature level, and current draw on the CPU and GPU. The results show that the CUDA feature does not always provide added value or performance: CUDA runs very effectively offline (not in real time), but in real-time operation the performance with and without CUDA is almost the same.
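A with/without-CUDA thresholding comparison of the kind evaluated above could look like the following sketch, assuming an OpenCV build with the CUDA modules enabled (the filename and threshold values are placeholders). Note that the host-device transfers included here are a plausible reason the real-time benefit is small: for a cheap kernel like thresholding, the per-frame copy can dominate.

```python
# Minimal sketch (assumes OpenCV built with CUDA support): timing the same
# binary thresholding on CPU and GPU, mirroring the with/without-CUDA test.
import time
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical test frame

t0 = time.perf_counter()
_, cpu_bin = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
cpu_ms = (time.perf_counter() - t0) * 1e3

gpu_img = cv2.cuda_GpuMat()
gpu_img.upload(img)                      # host-to-device copy, part of the cost
t0 = time.perf_counter()
_, gpu_bin = cv2.cuda.threshold(gpu_img, 128, 255, cv2.THRESH_BINARY)
gpu_ms = (time.perf_counter() - t0) * 1e3
result = gpu_bin.download()              # device-to-host copy back

print(f"CPU: {cpu_ms:.2f} ms  GPU: {gpu_ms:.2f} ms")
```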
This paper presents a principle for scene-related camera calibration in Manhattan worlds. The proposed estimation of extrinsic camera parameters from vanishing points represents a useful alternative to traditional target-based calibration methods, especially in large urban or industrial environments. We analyse the effects of errors in the calculation of camera poses and derive general restrictions for the use of our approach. In addition, we present methods for calculating the position and orientation of several cameras with respect to a world coordinate system and discuss the effect of imprecise or incorrectly calculated vanishing points. Our approach was evaluated with real images of a prototype for human-robot collaboration installed at ZBS e.V., and the results were compared with a Perspective-n-Point (PnP) method.
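The core geometric step behind such vanishing-point calibration can be sketched as follows (inputs are hypothetical; the paper's full pipeline, error analysis, and multi-camera registration are not reproduced here). In a Manhattan world, the back-projected direction of each orthogonal vanishing point, K^-1 v_i, gives one column of the camera rotation; an SVD projects the result onto the nearest proper rotation.

```python
# Minimal sketch (hypothetical inputs): camera orientation from three mutually
# orthogonal vanishing points in a Manhattan world.
import numpy as np

def rotation_from_vps(K: np.ndarray, vps: np.ndarray) -> np.ndarray:
    # vps: 3x3 array, one homogeneous vanishing point (x, y, 1) per row.
    dirs = np.linalg.inv(K) @ vps.T            # back-projected axis directions
    dirs /= np.linalg.norm(dirs, axis=0)       # unit direction per Manhattan axis
    U, _, Vt = np.linalg.svd(dirs)             # project onto nearest rotation
    R = U @ Vt
    if np.linalg.det(R) < 0:                   # enforce a right-handed frame
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R
```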
Currently, Deep Learning (DL) demonstrates powerful capabilities for image processing, but it cannot output exact photometric process parameters and its results are hard to interpret. Considering these limitations, this paper presents a robot vision system based on Convolutional Neural Networks (CNN) and Monte Carlo algorithms, as an example of how to apply DL in industry. In our approach, a CNN is used for preprocessing and offline tasks; the 6-DoF object pose is then estimated using a particle filter approach. Experiments show that our approach is efficient and accurate. In the future, it could offer potential solutions for human-machine collaboration systems.
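As a rough illustration of the Monte Carlo half of such a pipeline, the skeleton below shows one predict-weight-resample step of a particle filter over 6-DoF poses. The likelihood function is a stub standing in for a CNN-derived observation score; everything here (state layout, noise levels, likelihood) is an assumption, not the paper's implementation.

```python
# Minimal sketch (hypothetical likelihood): one step of a particle filter over
# 6-DoF poses, where a CNN-derived score would act as the measurement model.
import numpy as np

def likelihood(pose, image):
    # Hypothetical stand-in for a CNN / feature-matching score of a rendered pose.
    return np.exp(-np.sum(pose[:3] ** 2))

def particle_filter_step(particles, weights, image, rng, sigma=0.01):
    # Diffuse, reweight by the observation, normalize, then resample.
    particles = particles + rng.normal(0, sigma, particles.shape)
    weights = weights * np.array([likelihood(p, image) for p in particles])
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
particles = rng.normal(0, 0.1, (500, 6))   # (x, y, z, roll, pitch, yaw)
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights, image=None, rng=rng)
estimate = particles.mean(axis=0)          # posterior mean as the pose estimate
```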
KEYWORDS: Point spread functions, Deconvolution, Scanning probe microscopy, Microscopy, Convolution, Microscopes, Data modeling, Chemical elements, Image processing, Analytical research
Kelvin Probe Force Microscopy (KPFM) is a method to detect the surface potential of micro- and nanostructured samples using a common Scanning Probe Microscope (SPM). The electrostatic force has a very long range compared to other surface forces, so KPFM measurements with SPM systems are performed in the noncontact region at surface distances greater than 10 nm. In contrast to topography measurements, the measured data is blurred. The KPFM signal can be described as a convolution of an effective surface potential with a microscope-intrinsic point spread function, which allows the restoration of the measured data by deconvolution. This paper deals with methods to deconvolve the measured KPFM data with the objective of increasing the lateral resolution. An analytical and a practical way of obtaining the point spread function of the microscope were compared. In contrast to other papers, a modern DoF-restricted deconvolution algorithm is applied to the measured data. The new method was demonstrated on a nanoscale test stripe pattern for lateral resolution and calibration of length scales (BAM-L200) made by the German Federal Institute for Materials Research and Testing.
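The convolution model above (measured signal = PSF * effective surface potential) is what makes restoration by deconvolution possible. The sketch below applies a classical Wiener filter to that model; it is not the paper's DoF-restricted algorithm, only a baseline illustrating the inversion, and the SNR parameter is an assumed regularization constant.

```python
# Minimal sketch (not the paper's DoF-restricted method): classical Wiener
# deconvolution of a measured KPFM map under the model measured = psf * potential.
import numpy as np

def wiener_deconvolve(measured: np.ndarray, psf: np.ndarray, snr: float = 100.0):
    # Assumes the PSF is centered at the array origin (wrap-around convention).
    H = np.fft.fft2(psf, s=measured.shape)         # transfer function of the tip
    G = np.fft.fft2(measured)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener regularization
    return np.real(np.fft.ifft2(W * G))
```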