An integrated array computational imaging system, dubbed PERIODIC, is presented that is capable of exploiting a variety of optical information, including sub-pixel displacements, phase, polarization, intensity, and wavelength. Several applications of this technology are presented, including digital super-resolution, enhanced dynamic range, and multi-spectral imaging. Other applications include polarization-based dehazing, extended depth of field, and 3D imaging. The optical hardware system and software algorithms are described, and sample results are shown.
Digital super-resolution refers to computational techniques that exploit the generalized sampling theorem to extend image resolution beyond the Nyquist limit set by the detector pixel spacing, though not beyond the diffraction-limited cutoff of the lens. The approach to digital super-resolution taken by the PERIODIC multi-lenslet camera project is to solve a forward model that describes the effects of sub-pixel shifts, optical blur, and detector sampling as a product of matrix factors. The associated system matrix is often ill-conditioned, and convergence of iterative methods used to solve for the high-resolution image may be slow.
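As a toy illustration of that forward model, the sketch below (a 1-D NumPy setup; the sizes, blur kernel, shifts, and damped-LSQR solver are illustrative assumptions, not the PERIODIC implementation) stacks the downsample-blur-shift factors for several lenslets and solves the resulting ill-conditioned system iteratively:

```python
import numpy as np
from scipy.linalg import circulant
from scipy.sparse.linalg import lsqr

n, d = 64, 4                         # high-res samples, downsampling factor
shifts = [0, 1, 2, 3]                # per-lenslet sub-pixel shifts (high-res units)
b = np.zeros(n); b[:3] = [0.25, 0.5, 0.25]
B = circulant(b)                     # circulant optical-blur factor
D = np.kron(np.eye(n // d), np.ones((1, d)) / d)   # detector-sampling factor

def S(k):                            # sub-pixel shift factor for lenslet k
    return np.roll(np.eye(n), shifts[k], axis=1)

# Stacked forward model: one block row  y_k = D B S_k x  per lenslet.
A = np.vstack([D @ B @ S(k) for k in range(len(shifts))])

rng = np.random.default_rng(0)
x_true = np.sin(2 * np.pi * np.arange(n) / 16) + 0.3 * rng.random(n)
y = A @ x_true + 1e-3 * rng.standard_normal(A.shape[0])

# Damped LSQR copes with the ill-conditioning; the slow convergence seen on
# such systems is what motivates physically preconditioning the optics.
x_hat = lsqr(A, y, damp=1e-2, iter_lim=200)[0]
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```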
We investigate the use of pupil phase encoding in a multi-lenslet camera system as a means to physically
precondition and regularize the computational super-resolution problem. This is an integrated optical-digital
approach that has been previously demonstrated with cubic-type and pseudo-random phase elements. Traditional
multi-frame phase diversity for imaging through atmospheric turbulence uses a known smooth phase perturbation
to help recover a time series of point spread functions corresponding to random phase errors. In the context of a
multi-lenslet camera system, a known pseudo-random or cubic phase error may be used to help recover an array
of unknown point spread functions corresponding to manufacturing and focus variations among the lenslets.
We describe a computational imaging technique to extend the depth of field of a 94-GHz imaging system. The technique uses a cubic phase element in the pupil plane of the system to render system operation relatively insensitive to object distance. The cubic phase element also introduces aberrations; however, since these are fixed and known, we remove them using post-detection signal processing. We present experimental results that validate system performance and indicate a greater than four-fold increase in depth of field, from 17" to greater than 68".
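A minimal simulation of the underlying principle, under assumed grid sizes and an illustrative cubic-phase strength rather than the parameters of the 94-GHz system, shows the near-invariance of the PSF to defocus that makes a single, known deblurring step possible:

```python
import numpy as np

N = 256
u = np.linspace(-1, 1, N)
U, V = np.meshgrid(u, u)
pupil = (U**2 + V**2 <= 1).astype(float)
alpha = 30.0                          # cubic-phase strength (radians), illustrative
cubic = alpha * (U**3 + V**3)

for w20 in (0.0, 2.0, 4.0):           # defocus coefficient (waves), illustrative
    defocus = 2 * np.pi * w20 * (U**2 + V**2)
    field = pupil * np.exp(1j * (cubic + defocus))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    psf /= psf.sum()
    # The peak (and overall shape) changes far less with defocus than it
    # would for a clear aperture, so one fixed deconvolution kernel suffices.
    print(f"w20 = {w20}: normalized PSF peak = {psf.max():.2e}")
```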
We investigate the use of a novel multi-lens imaging system in the context of biometric identification, and more
specifically, for iris recognition. Multi-lenslet cameras offer a number of significant advantages over standard
single-lens camera systems, including thin form-factor and wide angle of view. By using appropriate lenslet spacing
relative to the detector pixel pitch, the resulting ensemble of images implicitly contains subject information
at higher spatial frequencies than those present in a single image. Additionally, a multi-lenslet approach enables the use of observational diversity, including phase, polarization, neutral-density, and wavelength diversity. For
example, post-processing multiple observations taken with differing neutral density filters yields an image having
an extended dynamic range. Our research group has developed several multi-lens camera prototypes for the
investigation of such diversities.
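As a sketch of the neutral-density example above, the following fusion routine (the inverse-attenuation weighting and validity thresholds are illustrative assumptions, not the group's algorithm) combines differently attenuated observations into one extended-dynamic-range estimate:

```python
import numpy as np

def fuse_nd(images, transmittances, sat=0.95, floor=0.05):
    """Fuse exposures taken through ND filters with given transmittances.

    images: arrays scaled to [0, 1]; saturated or near-dark pixels in each
    observation are excluded, and the rest are averaged after undoing the
    known attenuation, weighted toward the better-exposed observations.
    """
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, transmittances):
        valid = (img < sat) & (img > floor)      # trust only well-exposed pixels
        radiance = img / t                       # undo the ND attenuation
        w = valid * t                            # higher transmittance = less noise
        acc += w * radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-12)
```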
In this paper, we present techniques for computing a high-resolution reconstructed image from an ensemble of
low-resolution images containing sub-pixel level displacements. The quality of a reconstructed image is measured
by computing the Hamming distance between the Daugman [4] iris code of a conventional reference iris image,
and the iris code of a corresponding reconstructed image. We present numerical results concerning the effect of
noise and defocus blur in the reconstruction process using simulated data and report preliminary work on the
reconstruction of actual iris data obtained with our camera prototypes.
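The quality metric described above reduces to a fractional Hamming distance computed over mutually valid bits; a minimal sketch (the bit-mask convention is an assumed simplification of Daugman's scheme):

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bits both masks flag as valid."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

rng = np.random.default_rng(0)
reference = rng.integers(0, 2, 2048).astype(bool)     # reference iris code
degraded = reference ^ (rng.random(2048) < 0.1)       # ~10% bit flips
mask = np.ones(2048, dtype=bool)
print(iris_hamming(reference, degraded, mask, mask))  # ~0.10 (chance level is 0.5)
```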
Iris recognition imaging is attracting considerable interest as a viable alternative for personal identification and verification in many defense and security applications. However, current iris recognition systems suffer from limited depth of field, which makes them difficult for untrained users to operate. Traditionally, the depth of field is increased by reducing the imaging system aperture, which adversely impacts the light-capturing power and thus the system signal-to-noise ratio (SNR). In this paper we discuss a computational imaging system, referred to as Wavefront Coded® imaging, for increasing the depth of field without sacrificing the SNR or the resolution of the imaging system. This system employs a specially designed Wavefront Coded lens customized for iris recognition. We present experimental results that show the benefits of this technology for biometric identification.
The insertion of a suitably designed phase plate in the pupil of an imaging system makes it possible to encode the depth dimension of an extended three-dimensional scene by means of an approximately shift-invariant PSF. The encoded image can then be deblurred digitally by standard image-recovery algorithms to recover the depth-dependent detail of the original scene. A similar strategy can be adopted to compensate for certain monochromatic aberrations of the system. Here we consider two somewhat complementary approaches to optimizing the design of the phase plate: one based on Fisher information, which attempts to reduce the sensitivity of the phase-encoded image to misfocus, and the other based on a minimax formulation of the sum of singular values of the system blurring matrix, which attempts to maximize the resolution in the final image. Comparisons of these two optimization approaches are discussed. Our preliminary demonstration of the use of such pupil-phase engineering to successfully control system aberrations, particularly spherical aberration, is also presented.
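A sketch of the Fisher-information-style criterion, estimating the sensitivity of the PSF to misfocus by a finite difference in a scalar defocus parameter (the pupil model, grid, and phase strengths are illustrative assumptions):

```python
import numpy as np

def psf(alpha, psi, N=128):
    """PSF of a circular pupil with cubic phase alpha and defocus psi."""
    u = np.linspace(-1, 1, N)
    U, V = np.meshgrid(u, u)
    inside = U**2 + V**2 <= 1
    field = np.where(inside,
                     np.exp(1j * (alpha * (U**3 + V**3) + psi * (U**2 + V**2))),
                     0)
    h = np.abs(np.fft.fft2(field))**2
    return h / h.sum()

def defocus_fisher(alpha, psi, dpsi=0.05):
    """J ~ sum over pixels of (dPSF/dpsi)^2 / PSF, via a central difference."""
    h0, h1 = psf(alpha, psi - dpsi), psf(alpha, psi + dpsi)
    dh = (h1 - h0) / (2 * dpsi)
    return np.sum(dh**2 / np.maximum((h0 + h1) / 2, 1e-12))

for alpha in (0.0, 20.0):            # clear aperture vs. cubic-phase pupil
    print(f"alpha = {alpha}: J = {defocus_fisher(alpha, psi=3.0):.3g}")
# Lower J means the encoded image is less sensitive to misfocus, which is
# precisely what the Fisher-information design criterion minimizes.
```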
Computational imaging systems are modern systems that combine generalized aspheric optics with image processing capability. These systems can be optimized to perform far better than systems consisting solely of traditional optics. Computational imaging technology can be used to advantage in iris recognition applications. A major difficulty in current iris recognition systems is a very shallow depth of field, which limits system usability and increases system complexity. We first review some current iris recognition algorithms and then describe computational imaging approaches to iris recognition using cubic-phase wavefront encoding. These new approaches can greatly increase the depth of field over that possible with traditional optics while maintaining sufficient recognition accuracy. In these approaches, the combination of optics, detectors, and image processing all contribute to the iris recognition accuracy and efficiency. We describe different optimization methods for designing the optics and the image processing algorithms, and provide laboratory and simulation results, including restorations of the intermediate phase-encoded images using both direct Wiener filtering and iterative conjugate-gradient methods.
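For the direct Wiener restoration step mentioned above, a minimal frequency-domain sketch (the constant noise-to-signal ratio is an illustrative assumption; the filter actually used may differ):

```python
import numpy as np

def wiener_restore(intermediate, psf_kernel, nsr=1e-2):
    """Direct Wiener deconvolution of a phase-encoded intermediate image.

    psf_kernel: the known system PSF, centered, same shape as the image.
    nsr: assumed constant noise-to-signal power ratio.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf_kernel))
    G = np.conj(H) / (np.abs(H)**2 + nsr)       # Wiener filter, frequency domain
    return np.real(np.fft.ifft2(np.fft.fft2(intermediate) * G))
```

An iterative conjugate-gradient restoration would instead minimize the data-misfit norm step by step, trading the one-shot inverse filter for better control of noise amplification.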
Automated iris recognition is a promising method for noninvasive verification of identity. Although it is noninvasive, the procedure requires considerable cooperation from the user. In typical acquisition systems, the subject must carefully position the head laterally so that the captured iris falls within the field of view of the digital image acquisition system. Furthermore, the need for sufficient energy at the plane of the detector calls for a relatively fast optical system, which results in a narrow depth of field. This latter issue requires the user to move the head back and forth until the iris is in good focus. In this paper, we address the depth-of-field problem by studying the effectiveness of specially designed aspheres that extend the depth of field of the image capture system. In this initial study, we concentrate on the cubic phase mask originally proposed by Dowski and Cathey. Laboratory experiments are used to produce representative captured irises with and without cubic asphere masks modifying the imaging system. The iris images are then presented to the well-known iris recognition algorithm proposed by Daugman. In some cases we present unrestored imagery, and in other cases we attempt to remove the moderate blur introduced by the asphere. Our initial results show that the use of such aspheres does indeed relax the depth-of-field requirements, even without restoration of the blurred images. Furthermore, we find that restorations that produce visually pleasing iris images often actually degrade the performance of the algorithm. Different restoration parameters are examined to determine their usefulness in relation to the recognition algorithm.
A novel and successful optical-digital approach for removing certain aberrations in imaging systems involves placing an optical mask between an image-recording device and an object to encode the wavefront phase before the image is recorded, followed by digital image deconvolution to decode the phase. We have observed that, when appropriately engineered, such an optical mask can also act as a form of preconditioner for certain deconvolution algorithms. It can boost information in the signal well above the noise level before the image is recorded, enabling digital restorations of very high quality. In this paper, we (1) examine the influence that a phase mask has on the incoming signal and how it subsequently affects the performance of restoration algorithms, and (2) explore the design of optical masks, a difficult nonlinear optimization problem with multiple design parameters, for removing certain aberrations and for maximizing restorability and information in recorded images.
Very simple visual aids can be designed to convey sophisticated concepts in optics to students ranging from 5th grade to first year graduate students. In this talk I will outline several specific classroom experiments illustrating concepts in wave optics that can be performed with computer generated holograms.
By suitably phase-encoding optical images in the pupil plane and then digitally restoring them, one can greatly improve their quality. The use of the cubic phase mask originated by Dowski and Cathey to enhance the depth of focus in images of 3-D scenes is a classic example of this powerful approach. Using the Strehl ratio as a measure of image quality, we propose tailoring the pupil phase profile by minimizing the sensitivity of the quality of the phase-encoded image of a point source to both its lateral and longitudinal coordinates. Our approach ensures that the encoded image is formed under a nearly shift-invariant imaging condition and can then be digitally restored to a high overall quality, nearly free from the aberrations and limited depth of focus of a traditional imaging system. We also introduce an alternative measure of sensitivity based on the concept of Fisher information. To demonstrate the validity of our general approach, we present results of computer simulations that include the limitations imposed by detector noise.
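A sketch of the Strehl-ratio sensitivity idea, comparing the encoded system's PSF peak across defocus against the unaberrated diffraction-limited peak (grid size and phase strengths are illustrative, and the peak intensity is used as a Strehl-like proxy):

```python
import numpy as np

def psf_peak(alpha, psi, N=128):
    """Peak PSF intensity for cubic strength alpha and defocus psi."""
    u = np.linspace(-1, 1, N)
    U, V = np.meshgrid(u, u)
    inside = U**2 + V**2 <= 1
    field = np.where(inside,
                     np.exp(1j * (alpha * (U**3 + V**3) + psi * (U**2 + V**2))),
                     0)
    return (np.abs(np.fft.fft2(field))**2).max()

ideal = psf_peak(0.0, 0.0)           # unaberrated diffraction-limited peak
for psi in (0.0, 2.0, 4.0):          # increasing defocus (radians)
    print(f"defocus {psi}: Strehl-like ratio = {psf_peak(20.0, psi) / ideal:.3f}")
# A flat (even if low) curve across defocus signals a nearly shift-invariant
# encoded PSF, which digital restoration can then correct globally.
```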
Many visible and infrared sampled imaging systems suffer from moderate to severe amounts of aliasing. The problem arises because the large optical apertures required for sufficient light-gathering ability produce large spatial cutoff frequencies. In consumer-grade cameras, images are often undersampled by as much as a factor of twenty relative to the suggested Nyquist rate. Most consumer cameras employ birefringent blur filters that purposely blur the image prior to detection to reduce the Moiré artifacts produced by aliasing. In addition to the obvious Moiré artifacts, aliasing introduces other pixel-level errors that can cause artificial jagged edges and erroneous intensity values. These types of errors have led some investigators to treat the aliased signal as noise in imaging system design and analysis. The importance of aliasing depends on the nature of the imagery and the definition of the assessment task. In this study, we employ a laboratory experiment to characterize the nature of aliasing noise for a variety of object classes. We acquire both raw and blurred imagery to explore the impact of pre-detection anti-aliasing. We also consider the post-detection image restoration required to compensate for the in-band blur produced by the anti-aliasing schemes.
KEYWORDS: Signal to noise ratio, Optical transfer functions, Visual information processing, Imaging systems, Spatial frequencies, Sensors, Signal attenuation, Optical design, Optical filters, Phase only filters
Aliasing is introduced in sampled imaging systems when light level requirements dictate using a numerical aperture that passes spatial frequencies higher than the Nyquist frequency set by the detector. One method to reduce the effects of aliasing is to modify the optical transfer function so that frequencies that might otherwise be aliased are removed. This is equivalent to blurring the image prior to detection. However, blurring the image introduces a loss in spatial detail and, in some instances, a decrease in the image signal-to-noise ratio. The tradeoff between aliasing and blurring can be analyzed by treating aliasing as additive noise and using information density to assess the imaging quality. In this work we use information density as a metric in the design of an optical phase-only anti-aliasing filter. We used simulated annealing to determine a pupil phase that modifies the system optical transfer function so that the information density is maximized. Preliminary results indicate that maximization of the information density is possible. The increase in information density appears to be proportional to the logarithm of the electronic signal-to-noise ratio and insensitive to the number of phase levels in the pupil phase. We constrained the pupil phase to 2, 4, 8, and 256 phase quantization levels and found little change in the information density of the optical system. Random and zero initial-phase inputs also generated results with little difference in their final information densities.
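A sketch of this simulated-annealing search, using a 1-D quantized pupil and the surrogate I = Σ_f log2(1 + SNR·|H(f)|²) over the in-band frequencies (the pupil model, SNR, band choice, and cooling schedule are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, levels, snr = 32, 4, 100.0
phase_idx = rng.integers(0, levels, N)          # quantized pupil-phase samples

def info_density(idx):
    """Surrogate information density: sum of log2(1 + SNR |H(f)|^2) in band."""
    pupil = np.exp(2j * np.pi * idx / levels)
    otf = np.correlate(pupil, pupil, mode="full")   # incoherent OTF ~ pupil autocorrelation
    H = np.abs(otf) / np.abs(otf).max()
    band = H[N - 1 : N - 1 + N // 2]            # zero frequency up to the (toy) Nyquist band
    return np.sum(np.log2(1 + snr * band**2))

cur, T = info_density(phase_idx), 1.0
for _ in range(5000):
    cand = phase_idx.copy()
    cand[rng.integers(N)] = rng.integers(levels)    # perturb one phase sample
    val = info_density(cand)
    if val > cur or rng.random() < np.exp((val - cur) / T):
        phase_idx, cur = cand, val              # accept uphill, sometimes downhill
    T *= 0.999                                  # geometric cooling schedule
print("final surrogate information density:", cur)
```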
KEYWORDS: Imaging systems, Sensors, Signal to noise ratio, Wavefronts, Modulation transfer functions, Spatial frequencies, Image restoration, Image quality, Image quality standards, Information theory
Classical optical design techniques are oriented toward optimizing imaging system response at a single image plane. Recently, researchers have proposed greatly extending the imaging system depth of field by introducing large deformations of the optical wavefront, coupled with subsequent post-detection image restoration. In one case, a spatially separable cubic phase plate is placed at the pupil plane of an imaging system to create an extremely large effective depth of field. The price for this extended performance is noise amplification in the restored imagery relative to a perfectly focused image. In this paper we perform a series of numerical design studies based on information-theoretic analyses to determine when a cubic phase system is preferable to a standard optical imaging system. The amount of optical path difference (OPD) associated with the cubic phase plate is directly related to the amount of achievable depth of field. A large OPD allows greater depth of field at the expense of greater noise in the restored image. The information theory approach allows the designer to study the effect of the cubic phase OPD for a given depth-of-field requirement.
Charge-coupled device imaging systems are often designed so that the image of the object field is sampled well below the Nyquist limit. Undersampled designs frequently occur because of the need for optical apertures that are large enough to satisfy detector sensitivity requirements. Consequently, the cutoff frequency of the aperture is well beyond the sampling limits of the detector array, and aliasing artifacts degrade the resulting image. A common anti-aliasing technique in such imaging systems is to use birefringent plates as a blur filter. The blur filter produces a point spread function (PSF) that resembles multiple replicas of the optical-system PSF, with the separation between replicas determined by the thickness of the plates. When the altered PSF is convolved with the PSF of the detector, an effective pixel is produced that is larger than the physical pixel; thus the higher spatial-frequency components, and the associated aliasing, are suppressed. Previously, we have shown how information theory can be used to design birefringent blur filters by maximizing the information density of the image. In this paper, we investigate the effects of spherical aberration and defocus on the information density of an imaging system containing a birefringent blur filter.
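A sketch of the effective-pixel model described above: the birefringent filter's multi-spot PSF convolved with the optics PSF and the pixel aperture (the four-spot pattern, separations, and kernel sizes are illustrative assumptions):

```python
import numpy as np
from scipy.signal import convolve2d

sep = 6                               # replica separation in samples, illustrative
four_spot = np.zeros((2 * sep + 1, 2 * sep + 1))
for dy in (-sep // 2, sep // 2):      # birefringent filter: four displaced replicas
    for dx in (-sep // 2, sep // 2):
        four_spot[sep + dy, sep + dx] = 0.25

x = np.arange(-8, 9)
X, Y = np.meshgrid(x, x)
optics_psf = np.exp(-(X**2 + Y**2) / 8.0)
optics_psf /= optics_psf.sum()        # stand-in for the optical-system PSF
pixel = np.ones((5, 5)) / 25.0        # detector pixel aperture

# Effective pixel = filter PSF (*) optics PSF (*) pixel aperture; its enlarged
# support is what suppresses spatial frequencies that would otherwise alias.
effective = convolve2d(convolve2d(four_spot, optics_psf), pixel)
print("effective PSF support:", effective.shape)
```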
KEYWORDS: Imaging systems, Optical transfer functions, Digital filtering, Spatial frequencies, Modulation transfer functions, Diffraction, Sensors, Optical components, Digital imaging, Point spread functions
We provide experimental verification of the performance of an optical-digital imaging system that delivers near diffraction limited imaging over a wide depth of field. A custom aspheric optical element is used to modify an incoherent optical system so that the generated optical image is nearly independent of misfocus-induced blur. The resulting image, called an intermediate image, is not spatially diffraction limited. Digital processing of the intermediate image produces a final image that forms a close approximation to the diffraction limited image. The combined effect of the optical-digital system is to image objects independently of focus or range, that is, the system has an extended depth of field.
Spatial coherence in optical processing can be exploited to implement a wide variety of image processing functions. While fully coherent systems tend to receive the most attention, spatially noncoherent systems can often provide equivalent functionality while offering significant advantages over coherent systems with regard to noise performance and system robustness. The term noncoherent includes both partially coherent and fully incoherent illumination. In addition to the noise immunity advantage, noncoherent diffraction-based processors have relaxed requirements on pupil plane spatial light modulator characteristics. In this paper we provide a discussion of the tradeoffs between coherent and noncoherent processing, taking into account the limited performance characteristics of commercially available spatial light modulators. The advantages of noncoherent processing are illustrated with numerical and experimental results corresponding to three different noncoherent architectures.
In contrast to imaging and interconnect applications, which require fixed diffractive optical elements, applications such as optical image processing and target recognition require diffractive elements that can be altered dynamically in real time and, therefore, require the use of spatial light modulators (SLMs). Present SLM technology, however, has limited modulation capability and space-bandwidth product, which affect system performance. We present techniques for designing diffractive filters for display on SLMs in coherent and incoherent pattern recognition systems.
In contrast to imaging and interconnect applications, which require fixed diffractive optical elements, applications such as optical image processing and target recognition require diffractive elements that can be altered dynamically in real time and, therefore, require the use of spatial light modulators (SLMs). Present SLM technology, however, has limited spatial resolution and space-bandwidth product, which affect system performance. We present techniques for designing diffractive filters for display on SLMs in coherent and incoherent image processing systems. Laboratory results are presented for a coherent matched-filter design displayed on a binary phase SLM.
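One common way to realize a matched filter on a binary phase SLM is the binary phase-only filter (BPOF), which thresholds the real part of the reference spectrum to the two phases 0 and π; the sketch below uses that standard construction, which is not necessarily the authors' exact design:

```python
import numpy as np

def bpof(reference):
    """Binary phase-only filter: threshold Re{F} to the two phases 0 and pi."""
    F = np.fft.fft2(reference)
    return np.where(np.real(F) >= 0, 1.0, -1.0)     # e^{i0} and e^{i pi}

def correlate(scene, filt):
    return np.abs(np.fft.ifft2(np.fft.fft2(scene) * filt))**2

rng = np.random.default_rng(2)
target = rng.random((64, 64))
scene = np.roll(target, (5, 7), axis=(0, 1))        # shifted copy of the target
corr = correlate(scene, bpof(target))
print(np.unravel_index(corr.argmax(), corr.shape))  # correlation peak near (5, 7)
```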
The maturation of the state of the art in optical components is enabling new applications for the technology. Most notable is the ever-expanding market for fiber-optic data and communications links, familiar in both commercial and military markets. The inherent properties of optics and photonics, however, suggest that components and processors can be designed that offer advantages over more commonly considered digital approaches for a variety of airborne sensor and signal processing applications. Various academic, industrial, and governmental research groups have been actively investigating and exploiting these properties of high bandwidth, a large degree of parallelism in computation (e.g., processing in parallel over a two-dimensional field), and interconnectivity, and have succeeded in advancing the technology to the stage of systems demonstration. Advantages such as computational throughput and low operating power consumption are highly attractive for many computationally intensive problems. This review covers the key devices necessary for optical signal and image processors, some of the system application demonstration programs currently in progress, and active research directions for the implementation of next-generation architectures.