We propose an edge-based depth-from-focus technique for high-precision, non-contact industrial inspection and metrology applications. In our system, an objective lens with a large numerical aperture is chosen to resolve the edge details of the measured object. By motorizing the imaging system, we capture high-resolution edges within each narrow depth of field, which extends the measurement range while maintaining high resolution. On surfaces with large depth variation, however, a significant amount of data around each measured point is out of focus in the captured images. Extracting useful information from these out-of-focus data is difficult because of the depth-variant blur; moreover, they impede the extraction of continuous contours of the measured objects in high-level machine vision applications. The proposed approach instead exploits the out-of-focus data to synthesize a depth-invariant smoothed image, and then robustly locates high-contrast edges using non-maximum suppression and hysteresis thresholding. Furthermore, by focus analysis of both the in-focus and the out-of-focus data, we reconstruct high-precision 3D edges for metrology applications.
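The core depth-from-focus step, picking for each pixel the motorized stage position where the focus measure peaks across the image stack, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the modified-Laplacian focus measure and the toy three-slice stack are assumptions for the example.

```python
import numpy as np

def focus_measure(img):
    """Modified-Laplacian focus measure: responds strongly to in-focus edges."""
    lap_x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    lap_y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lap_x + lap_y

def depth_from_focus(stack, z_positions):
    """Per-pixel depth = z of the slice with the highest focus measure."""
    measures = np.stack([focus_measure(s) for s in stack])   # (n, H, W)
    best = np.argmax(measures, axis=0)                       # (H, W)
    return np.asarray(z_positions)[best]

# Toy stack: a sharp step edge appears only in the middle slice (z = 1.0),
# while the other slices are uniformly defocused (flat gray).
H, W = 8, 8
sharp = np.zeros((H, W)); sharp[:, W // 2:] = 1.0
blurred = np.full((H, W), 0.5)
stack = [blurred, sharp, blurred]
depth = depth_from_focus(stack, z_positions=[0.0, 1.0, 2.0])
```

Pixels on the step edge get the depth of the in-focus slice; flat regions, where every slice gives a zero focus response, fall back to the first slice, which is why the full method above restricts the analysis to high-contrast edges.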
Because of anatomical variability between subjects, intensity-based intersubject registration is usually ambiguous. Moreover, the topological constraints of the brain's cortical surface may be violated because of the highly convoluted nature of the human cerebral cortex. We propose an intersubject brain registration method that combines intensity-based and geodesic-closest-point-based similarity measures. Each brain hemisphere is topologically equivalent to a sphere, so a one-to-one mapping between the points on the spherical surfaces of two subjects can be established. Correspondences on the cortical surfaces are obtained by searching for geodesic closest points on the spherical surface, and the corresponding features between subjects are then used as anatomical landmarks for intersubject registration. By adding these anatomical constraints of the cortical surfaces, the registration results are more anatomically plausible and accurate. We validate our method on real human datasets. Experimental results, in terms of visual inspection and alignment error, show that the proposed method outperforms typical joint intensity- and landmark-distance-based methods.
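Once both hemispheres are mapped to unit spheres, the geodesic closest-point search reduces to comparing great-circle distances, which for unit vectors is just the arccosine of the dot product. A minimal sketch (the toy point sets are assumptions, not the paper's data):

```python
import numpy as np

def geodesic_closest_points(src, dst):
    """For each unit vector in src, return the index of the geodesically
    closest unit vector in dst.  Great-circle distance = arccos(dot), and
    arccos is monotonically decreasing, so the maximum cosine gives the
    minimum geodesic distance -- no arccos call is needed."""
    cos = np.clip(src @ dst.T, -1.0, 1.0)   # (n_src, n_dst) pairwise cosines
    return np.argmax(cos, axis=1)

# Toy example: points already mapped onto the unit sphere.
src = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
dst = np.array([[0.0, 0.0, 1.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
matches = geodesic_closest_points(src, dst)   # correspondences into dst
```

The matched pairs then serve as the anatomical landmark constraints added to the intensity term during registration.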
The perspective effect is common in real optical systems that use projected patterns for machine vision applications. In the past, the frequencies of the projected sinusoidal patterns were assumed to be uniform at different heights when reconstructing moving objects. The error caused by a perspective projection system therefore becomes pronounced in phase-measuring profilometry, especially in high-precision metrology applications such as measuring the surfaces of semiconductor components at the micrometer level. In this work, we investigate the perspective effect on phase-measuring profilometry when reconstructing the surfaces of moving objects. Using a polynomial to approximate the phase distribution under a perspective projection system, which we call the polynomial phase-measuring profilometry (P-PMP) model, we generalize the phase-measuring profilometry model discussed in our previous work and solve the phase reconstruction problem effectively. Furthermore, we characterize how the frequency of the projected pattern changes with height and how the phase of the projected pattern is distributed in the measurement space. We also propose a polynomial phase-shift algorithm (P-PSA) to correct the phase-shift error due to the perspective effect during phase reconstruction. Simulation experiments show that the proposed method improves the reconstruction quality both visually and numerically.
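The polynomial phase-height idea can be illustrated in a few lines: calibrate a polynomial mapping from height to phase, then invert it by root finding to recover height from a measured phase. This is only a sketch of the modeling step; the cubic coefficients below are made up for the example, not taken from the P-PMP paper.

```python
import numpy as np

# Under perspective projection the fringe frequency varies with height h,
# so phase is no longer linear in h.  Model the phase-height map as a
# polynomial (assumed toy coefficients).
h = np.linspace(0.0, 1.0, 50)                      # heights, arbitrary units
phase_true = 20.0 * h + 3.0 * h**2 + 0.5 * h**3    # nonlinear phase-height map

coeffs = np.polyfit(h, phase_true, deg=3)          # "calibrate" the polynomial
phase_model = np.polyval(coeffs, h)

def height_from_phase(phi, coeffs):
    """Invert the calibrated polynomial: solve poly(h) = phi for h."""
    shifted = coeffs - np.array([0.0, 0.0, 0.0, phi])  # subtract phi from c0
    roots = np.roots(shifted)
    real = roots[np.abs(roots.imag) < 1e-8].real
    real = real[(real >= 0.0) & (real <= 1.0)]         # keep root in range
    return real[0]

h_rec = height_from_phase(np.polyval(coeffs, 0.4), coeffs)
```

Because the fitted cubic is strictly increasing on the calibrated range, the inversion has a unique valid root, so the recovered height matches the true one to numerical precision.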
A high throughput is often required in many machine vision systems, especially on assembly lines in the semiconductor industry. To develop a non-contact three-dimensional dense surface reconstruction system for real-time surface inspection and metrology applications, in this work we project sinusoidal patterns onto the inspected objects and propose a high-speed phase-shift algorithm. First, we use an illumination-reflectivity-focus (IRF) model to investigate the factors in image formation for phase-measuring profilometry. Second, by visualizing and analyzing the characteristic intensity locus projected onto the intensity space, we build a two-dimensional phase map that stores the phase information for each point in the intensity space. Third, we develop an efficient elliptic phase-shift algorithm (E-PSA) for high-speed surface profilometry. In this method, instead of calculating the time-consuming inverse trigonometric function, we only need to normalize the measured image intensities and then index the precomputed two-dimensional phase map during real-time phase reconstruction. Finally, experimental results show that the proposed method is about two times faster than the conventional phase-shift algorithm.
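The lookup-table idea behind replacing the inverse trigonometric function can be sketched as follows. This is an illustrative reconstruction, not the E-PSA itself: the 3-step sinusoidal pattern, the table resolution N, and the normalization convention are all assumptions for the example.

```python
import numpy as np

N = 256  # lookup-table resolution along each axis (assumed)

# Offline: for each candidate phase, compute the two normalized intensity
# coordinates a 3-step sinusoidal pattern would produce, and store the
# phase in a 2D table indexed by those coordinates.
phases = np.linspace(-np.pi, np.pi, 4 * N, endpoint=False)
u = (np.cos(phases) + 1.0) / 2.0                    # normalized intensity 1
v = (np.cos(phases + 2 * np.pi / 3) + 1.0) / 2.0    # normalized intensity 2
phase_map = np.zeros((N, N))
phase_map[(u * (N - 1)).astype(int), (v * (N - 1)).astype(int)] = phases

def lookup_phase(I1, I2, A, B):
    """Online step: normalize two measured intensities and index the
    table -- no arctan evaluation at reconstruction time."""
    ui = int((I1 - A + B) / (2.0 * B) * (N - 1))
    vi = int((I2 - A + B) / (2.0 * B) * (N - 1))
    return phase_map[ui, vi]

# Check against a directly synthesized pixel.
A, B, phi = 100.0, 50.0, 0.7
I1 = A + B * np.cos(phi)
I2 = A + B * np.cos(phi + 2 * np.pi / 3)
phi_est = lookup_phase(I1, I2, A, B)
```

The table is built once; at run time each pixel costs two normalizations and one array index, which is the source of the speed-up over evaluating an inverse trigonometric function per pixel. Accuracy is bounded by the table quantization, so N trades memory for precision.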
This article [Opt. Eng. 51, 097001 (2012)] was originally published on 13 September 2012 with an error on p. 6, column 2, line 1: a stray multiplication symbol appeared before the equation. The corrected line should read:

"...function {where L(x, y) = 100 exp{[(x − 128)/220]² + [(y − 128)/220]²}}."

The paper was corrected online on 18 September 2012. The article appears correctly in print.
Uneven illumination is a common problem in practical optical systems designed for machine vision applications, and it leads to significant errors when phase-shifting algorithms (PSAs) are used to reconstruct the surface of a moving object. We propose an illumination-reflectivity-focus model to characterize this uneven illumination effect on phase-measuring profilometry. With this model, we separate the illumination factor effectively and treat phase reconstruction as an optimization problem. Furthermore, we formulate an illumination-invariant phase-shifting algorithm (II-PSA) to reconstruct the surface of a moving object under uneven illumination. Experimental results show that the proposed algorithm improves the reconstruction quality both visually and numerically.
Uneven illumination is a common problem in real optical systems for machine vision applications, and it introduces significant errors when phase-shifting algorithms (PSAs) are used to reconstruct the surface of a moving object. Here, we propose an illumination-reflectivity-focus (IRF) model to characterize this uneven illumination effect on phase-measuring profilometry. With this model, we separate the illumination factor effectively, and then formulate the phase reconstruction as an optimization problem. To simplify the optimization, we calibrate the uneven illumination distribution beforehand and use the calibrated illumination information during surface profilometry; after calibration, the number of degrees of freedom is reduced. Accordingly, we develop a novel illumination-invariant phase-shifting algorithm (II-PSA) to reconstruct the surface of a moving object under uneven illumination. Experimental results show that the proposed algorithm improves the reconstruction quality both visually and numerically. Therefore, with the IRF model and the corresponding II-PSA, not only can we handle uneven illumination in a real optical system with a large field of view (FOV), but we also obtain a robust and efficient method for reconstructing the surface of a moving object.
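The role of the calibrated illumination can be illustrated with a minimal sketch: divide the calibrated illumination map out of each captured frame, then apply a standard 4-step phase-shift formula to the normalized frames. This is an assumed simplification for illustration, not the paper's II-PSA; the toy illumination profile and phase are made up.

```python
import numpy as np

def phase_4step(frames):
    """Standard 4-step PSA for shifts of 0, pi/2, pi, 3*pi/2:
    I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi)."""
    I1, I2, I3, I4 = frames
    return np.arctan2(I4 - I2, I1 - I3)

W = 4
L = 1.0 + 0.5 * np.linspace(0.0, 1.0, W)        # calibrated illumination (toy)
phi = 0.9                                       # true phase (uniform, toy)
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]

# Captured frames: sinusoidal pattern modulated by the uneven illumination.
frames = [L * (1.0 + 0.6 * np.cos(phi + d)) for d in shifts]

normalized = [f / L for f in frames]            # remove illumination factor
phi_rec = phase_4step(normalized)
```

Without the normalization step, the spatially varying factor L would not cancel in the arctangent ratio for a moving object, which is exactly the error source the IRF model isolates.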
In this paper, we develop rectangular Gaussian kernels, i.e., rotated versions of the first-order partial derivatives of 2D non-symmetric Gaussian functions, which are convolved with the test images for edge extraction. Rectangular kernels offer greater flexibility in smoothing high-frequency noise while keeping high-frequency edge details. Larger kernels smooth more high-frequency noise at the expense of edge details; rectangular kernels allow us to smooth more noise along one direction while detecting finer edge details along the other, which improves the overall edge detection results, especially for line-pattern edges. Here we propose two new approaches for using rectangular Gaussian kernels, namely the pattern-matching method and the quadratic method. In both methods, the edge magnitude and direction are computed from the convolution of a small neighborhood of the edge point with rectangular Gaussian kernels along different directions.
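The kernel construction described above, the first-order derivative of a non-symmetric 2D Gaussian, rotated by an angle, can be sketched directly. The kernel size and sigma values below are assumed for illustration; the paper's pattern-matching and quadratic methods then combine responses of such kernels over several orientations.

```python
import numpy as np

def rect_gauss_deriv_kernel(size, sigma_u, sigma_v, theta):
    """First-order derivative (along u) of a non-symmetric 2D Gaussian,
    rotated by theta.  sigma_u != sigma_v gives a 'rectangular' kernel:
    strong smoothing along v, fine edge response along u."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    v = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    return (-u / sigma_u**2) * g                   # d/du of the Gaussian

# Narrow along u (edge direction), elongated along v (smoothing direction).
k = rect_gauss_deriv_kernel(size=9, sigma_u=1.0, sigma_v=3.0, theta=0.0)
```

The kernel is odd along u and sums to zero, so convolving it with a step edge perpendicular to u yields an extremum at the edge, while the large sigma_v averages out noise along the edge, the asymmetry that helps on line-pattern edges.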