Signal and image stationarity is the basic assumption behind many analysis methods. However, this assumption does not hold in many real cases. The paper focuses on local stationarity testing using a small symmetric neighbourhood. The neighbourhood is split into two parts which should have the same statistical properties when the hypothesis of image stationarity is valid. We apply various testing approaches (two-sample F-test, t-test, WMW, K-S) to obtain adequate p-values for a given pixel, mask position, and test type. Finally, using a battery of masks and tests, we obtain a series of p-values for every pixel. Applying the False Discovery Rate (FDR) methodology, we localize all pixels where any hypothesis fails. The resulting binary image is an alternative to traditional edge detection, but with a strong statistical background.
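A minimal sketch of the per-pixel testing and FDR step is given below (Python with SciPy assumed; the neighbourhood split, the variance-ratio F-test construction, and the Benjamini-Hochberg procedure are illustrative choices, not necessarily the exact ones used in the paper):

import numpy as np
from scipy import stats

def local_stationarity_pvalues(patch):
    # Split a symmetric neighbourhood into two halves and return p-values
    # of several two-sample tests comparing the halves.
    half = patch.size // 2
    a, b = patch.ravel()[:half], patch.ravel()[-half:]          # one possible split
    p_t = stats.ttest_ind(a, b, equal_var=False).pvalue          # t-test (Welch variant)
    p_w = stats.mannwhitneyu(a, b, alternative='two-sided').pvalue  # WMW
    p_ks = stats.ks_2samp(a, b).pvalue                           # Kolmogorov-Smirnov
    # two-sample F-test on variances (not provided directly by scipy)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    p_f = 2 * min(stats.f.cdf(f, half - 1, half - 1),
                  stats.f.sf(f, half - 1, half - 1))
    return [p_t, p_w, p_ks, p_f]

def benjamini_hochberg(pvals, q=0.05):
    # Return a boolean mask of rejected hypotheses with FDR controlled at level q.
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = q * np.arange(1, p.size + 1) / p.size
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(p.size, dtype=bool)
    reject[order[:k]] = True
    return reject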
In the last decade, several different structured illumination microscopy (SIM) approaches have been developed. Precise determination of the effective spatial resolution in a live cell SIM reconstructed image is essential for reliable interpretation of reconstruction results. The theoretical resolution improvement can be calculated for every SIM method. In practice, the final spatial resolution of the cell structures in the reconstructed image is limited by many different factors. Therefore, assessing the resolution directly from a single image is an inherent part of live cell imaging. There are several commonly used resolution measurement techniques based on image analysis. These techniques include the full-width at half maximum (FWHM) criterion or Fourier ring correlation (FRC). FWHM measurement requires fluorescence beads or a sharp edge/line in the observed image to determine the point spread function (PSF). The FRC method requires two stochastically independent images of the same observed sample. Based on our experimental findings, the FRC method does not seem to be well suited for measuring the resolution of SIM live cell video sequences. Here we show a method based on Fourier transform analysis using the power spectral density (PSD). In order to estimate the cut-off frequency from a noisy signal, we use PSD estimation based on Welch's method, which is widely used in non-parametric power spectrum analysis. Since the PSD-based metric can be computed from a single SIM image (one video frame), without any prior knowledge of the acquisition system, it can become a fundamental tool for imaging in live cell biology.
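One possible realization of the Welch-based cut-off estimation is sketched below (a simplified example; the profile extraction and the noise-floor threshold are illustrative assumptions, not the paper's exact criterion):

import numpy as np
from scipy.signal import welch

def cutoff_frequency(profile, pixel_size_nm, noise_floor_factor=2.0):
    # Estimate the spatial frequency [1/nm] above which the Welch PSD of a 1D
    # intensity profile stays within noise_floor_factor of the noise floor.
    f, psd = welch(profile, fs=1.0 / pixel_size_nm,
                   nperseg=min(256, len(profile)))
    noise_floor = np.median(psd[len(psd) // 2:])     # assume the tail is noise-dominated
    above = psd > noise_floor_factor * noise_floor
    idx = np.nonzero(above)[0]
    return f[idx[-1]] if idx.size else f[0]          # highest frequency carrying signal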
The segmentation of 2D biomedical images is a very complex problem which has to be solved interactively. The original MRI, CT, PET, or SPECT image can be enhanced using a variational smoother. However, there are Regions of Interest (ROI) which can be exactly localized. The question is how to design the human-computer interaction for a user-friendly biomedical service. Our approach is based on user-selected points which determine the ROI border line. The relationship between point positions and image intensity is the subject of variational interpolation using a thin plate spline model. The general principle of segmentation is demonstrated on biomedical images of the human brain.
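For illustration only, the thin plate spline interpolation of values attached to user-selected points can be sketched with SciPy's RBFInterpolator, used here as a stand-in for the variational formulation in the paper:

import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_surface(points_xy, values, shape):
    # Interpolate scalar values given at user-selected points over the whole
    # image grid with a thin plate spline kernel.
    interp = RBFInterpolator(np.asarray(points_xy, float),
                             np.asarray(values, float),
                             kernel='thin_plate_spline')
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    return interp(grid).reshape(shape)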
KEYWORDS: Stochastic processes, Signal to noise ratio, Atmospheric turbulence, Atmospheric modeling, Fourier transforms, Monte Carlo methods, Optical transfer functions, Turbulence, Filtering (signal processing), Image processing
Modeling of atmospheric turbulence through the Kolmogorov theorem belongs to the traditional applications of the 2D Fourier Transform (2D FT). It is based on the Point Spread Function (PSF) in the spatial domain and its frequency-domain image known as the Optical Transfer Function (OTF). The latter is available in explicit form, which enables the creation of an artificial fog effect in traditional image processing using the 2D Discrete Fourier Transform (2D DFT). Exact knowledge of the Optical Transfer Function allows image deblurring to be performed as deconvolution through the Wiener method. The difference between the reference image and the deconvolution outcome can be quantified using SNR in its traditional and rank-based modifications. However, the real star image is the result of a stochastic process which is driven by a 2D alpha-stable distribution. There is an efficient method for generating a pseudorandom sample from the alpha-stable distribution. The distribution then enables simulation of the photon distribution following the theoretical PSF, i.e., convergence in distribution is guaranteed. The comparison of both models and the optimal parameter setting of the Wiener deconvolution are studied for various exposure times and CCD camera noise levels. The obtained results can be generalized and applied to turbulent noise suppression.
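The frequency-domain pipeline can be sketched as follows (the constant 3.44 and the 5/3 exponent correspond to the standard long-exposure Kolmogorov OTF; the parameter names, units, and constant noise-to-signal ratio are illustrative assumptions):

import numpy as np

def atmospheric_otf(shape, pixel_scale, wavelength, r0):
    # Long-exposure OTF(f) = exp(-3.44 * (lambda * f / r0)**(5/3)) on the DFT grid.
    fy = np.fft.fftfreq(shape[0], d=pixel_scale)
    fx = np.fft.fftfreq(shape[1], d=pixel_scale)
    f = np.hypot(*np.meshgrid(fy, fx, indexing='ij'))
    return np.exp(-3.44 * (wavelength * f / r0) ** (5.0 / 3.0))

def blur(image, otf):
    # Artificial "fog" effect: multiplication by the OTF in the frequency domain.
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

def wiener_deconvolve(blurred, otf, nsr=1e-2):
    # Classical Wiener filter with a constant noise-to-signal ratio.
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))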
The traditional median smoother for 2D images is insensitive to impulse noise but generates flat areas as unwanted artifacts. The proposed approach to overcoming this issue is based on the minimization of a regularized form of the total variation functional. First, the continuous functional is defined for an n-dimensional signal in integral form with a regularization term. The continuous functional is converted to a discrete form using equidistant spatial sampling on a point grid of pixels, voxels, or other elements. This approach is suitable for traditional signal and image processing. The total variation is then converted to a sum of absolute intensity differences as a minimization criterion. The convexity of the functional guarantees the existence of a global minimum and the absence of local extrema. The resulting non-linear filter iteratively calculates local medians using the red-black method of the Successive Over/Under Relaxation (SOR) scheme. The optimal value of the relaxation parameter is also a subject of our study. The sensitivity to the regularization parameter enables the design of high-pass and nonlinear band-pass filters as the difference between the image and a low-pass smoother, or as the difference between two different low-pass smoothers, respectively. Various median-based approaches are compared in the paper.
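A simplified sketch of the red-black iteration is shown below; it only illustrates the relaxed local-median update on a checkerboard schedule and does not reproduce the exact data/regularization weighting derived from the functional:

import numpy as np
from scipy.ndimage import median_filter

def red_black_median(image, omega=1.0, iterations=20):
    # omega = 1.0 gives the plain update; omega != 1 corresponds to over/under relaxation.
    u = image.astype(float).copy()
    yy, xx = np.indices(u.shape)
    masks = [(yy + xx) % 2 == 0, (yy + xx) % 2 == 1]   # red / black checkerboard
    for _ in range(iterations):
        for m in masks:
            med = median_filter(u, size=3)             # local 3x3 median
            u[m] = (1.0 - omega) * u[m] + omega * med[m]
    return u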
Super-resolution (SR) microscopy is a powerful technique which enhances the resolution of optical microscopes beyond the diffraction limit. Recent SR methods achieve a resolution of 100 nm. The theoretical resolution enhancement can be defined mathematically. However, the final resolution in the real image can be influenced by technical limitations. Evaluation of resolution in a real sample is essential to assess the performance of an SR technique. Several image-based resolution limit evaluation methods exist, but the determination of the cutoff frequency is still a challenging task. In order to compare the efficiency of resolution assessment methods, a reference estimation technique is necessary. There exist several conventional methods in digital image processing. In this paper, the most common resolution measurement techniques used in optical microscopy imaging are presented and their performance is compared.
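For reference, the FWHM criterion applied to a 1D bead or line profile can be computed as follows (a minimal sketch; background subtraction and profile extraction are assumed to be done beforehand):

import numpy as np

def fwhm(profile, pixel_size=1.0):
    # Full width at half maximum of a 1D intensity profile, with linear
    # interpolation of the two half-maximum crossing points.
    y = np.asarray(profile, float)
    y = y - y.min()
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    left, right = above[0], above[-1]
    xl = left - (y[left] - half) / (y[left] - y[left - 1]) if left > 0 else left
    xr = right + (y[right] - half) / (y[right] - y[right + 1]) if right < len(y) - 1 else right
    return (xr - xl) * pixel_size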
This paper deals with techniques for the analysis of perceived color differences in images. The impact of color artifacts in the image processing chain is a critical factor in the assessment of the overall Quality of Experience (QoE). First, an overview of color difference measures is presented. The performance of the methods is compared based on the results from subjective studies. The possible utilization of publicly available datasets with associated subjective scores is discussed. The majority of the datasets contain images distorted by various types of distortions, not necessarily with controlled color impairments. A dedicated database of images with common color distortions and associated subjective scores is introduced. Performance evaluation and comparison of objective image color difference measures is presented using conventional correlation performance measures and robust receiver operating characteristic (ROC) based analyses.
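One of the simplest objective measures of this kind, a per-pixel colour-difference map, can be sketched with scikit-image (CIEDE2000 is used here only as an example metric; the pooling into a single score is an illustrative choice):

import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def color_difference_map(reference_rgb, distorted_rgb):
    # Both inputs are float RGB images in [0, 1]; returns a per-pixel delta-E map.
    lab_ref = rgb2lab(reference_rgb)
    lab_dst = rgb2lab(distorted_rgb)
    return deltaE_ciede2000(lab_ref, lab_dst)

# A scalar score, e.g. the mean delta-E over the image, can then be correlated
# with subjective scores:
# score = color_difference_map(ref, dst).mean()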
Image segmentation is widely used as an initial phase of many image analysis tasks. It is often advantageous to first group pixels into compact, edge-respecting superpixels, because these reduce the size of the segmentation problem and thus the segmentation time by orders of magnitude. In addition, features calculated from superpixel regions are more robust than features calculated from fixed pixel neighborhoods. We present a fast and general multiclass image segmentation method consisting of the following steps: (i) computation of superpixels; (ii) extraction of superpixel-based descriptors; (iii) calculating image-based class probabilities in a supervised or unsupervised manner; and (iv) regularized superpixel classification using graph cut. We apply this segmentation pipeline to five real-world medical imaging applications and compare the results with three baseline methods: pixelwise graph cut segmentation, supertexton-based segmentation, and classical superpixel-based segmentation. On all datasets, we outperform the baseline results. We also show that unsupervised segmentation is surprisingly efficient in many situations. Unsupervised segmentation provides similar results to the supervised method but does not require manually annotated training data, which is often expensive to obtain.
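Steps (i)-(iii) can be sketched with common open-source tools; the SLIC superpixels, mean-colour descriptors, and unsupervised k-means labelling below are illustrative substitutes, and the graph-cut regularization of step (iv) is omitted:

import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops
from sklearn.cluster import KMeans

def superpixel_segmentation(image_rgb, n_segments=400, n_classes=3):
    labels = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=1)
    props = regionprops(labels, intensity_image=image_rgb)
    feats = np.array([p.mean_intensity for p in props])       # per-superpixel descriptor
    classes = KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats)
    out = np.zeros_like(labels)
    for p, c in zip(props, classes):
        out[labels == p.label] = c + 1                         # map classes back to pixels
    return out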
Images obtained from an astronomical digital camera are of integer nature, as each pixel of the image sensor acts as an event counter. The quality of the captured images is influenced mostly by the camera characteristics and by photon noise caused by the natural random fluctuation of the observed light. We model the image pixel intensity as the mean value of a Poisson-distributed signal. The application of the maximum likelihood method with image gradient regularization leads to a variational task in discrete formulation. This variational task has a unique solution, on which the novel numerical method of image smoothing is based. The performance of the proposed smoothing procedure is tested using real images obtained from the digital camera in the Meteor Automatic Imager and Analyzer (MAIA).
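A plausible form of such a regularized maximum-likelihood functional (the exact regularization term used in the paper may differ) is

\[
J(u) \;=\; \sum_{i} \bigl( u_i - f_i \ln u_i \bigr) \;+\; \lambda \sum_{i} \lVert \nabla u_i \rVert^2 ,
\]

where \(f_i\) are the observed Poisson counts, \(u_i\) the estimated mean intensities, and \(\lambda > 0\) the regularization parameter; the first term is the negative Poisson log-likelihood up to an additive constant.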
This paper deals with modeling of astronomical images in the spatial domain. We consider astronomical light images contaminated by the dark current, which is modeled by a Poisson random process. The dark frame image maps the thermally generated charge of the CCD sensor. In this paper, we solve the problem of the addition of two Poisson random variables. First, a noise analysis of images obtained from the astronomical camera is performed. It allows estimating the parameters of the Poisson probability mass function in every pixel of the acquired dark frame. Then the resulting distributions of the light image can be found. Once the distributions of the light image pixels are identified, the denoising algorithm can be applied. The performance of the Bayesian approach in the spatial domain is compared with the direct approach based on the method of moments and dark frame subtraction.
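For reference, the sum of two independent Poisson random variables with means \(\lambda_1\) and \(\lambda_2\) is again Poisson distributed:

\[
P(X + Y = k) \;=\; \mathrm{e}^{-(\lambda_1+\lambda_2)}\,\frac{(\lambda_1+\lambda_2)^k}{k!}, \qquad k = 0, 1, 2, \dots
\]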
Two classes of linear IIR filters, Laplacian of Gaussian (LoG) and Difference of Gaussians (DoG), are frequently used as high-pass filters for contextual vision and edge detection. They are also used for image sharpening when linearly combined with the original image. The resulting sharpening filters are radially symmetric in the spatial and frequency domains. Our approach is based on the radial approximation of the unknown optimal filter, which is designed as a weighted sum of Gaussian filters with various radii. The novel filter is designed for MRI image enhancement, where the image intensity represents the anatomical structure plus additive noise. We prefer the gradient norm of the Hartley entropy of the whole image intensity as the measure to be maximized for the best sharpening. The entropy estimation procedure is as fast as the FFT included in the filter, and this estimate is a continuous function of the enhanced image intensities. A physically motivated heuristic is used to design the optimum sharpening filter by tuning its parameters. Our approach is compared with the Wiener filter on MRI images.
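The basic DoG sharpening operation, before any parameter optimization, can be sketched as follows (the fixed default parameters are illustrative only; in the paper such parameters are tuned by maximizing an entropy-based criterion):

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_sharpen(image, sigma1=1.0, sigma2=2.0, alpha=1.5):
    # sigma1 < sigma2 select the detail band; alpha controls the sharpening strength.
    img = image.astype(float)
    dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)  # band-pass detail
    return img + alpha * dog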
Reliable meteor detection is one of the crucial disciplines in astronomy. A variety of imaging systems is used for meteor path reconstruction. The traditional approach is based on the analysis of 2D image sequences obtained from a double station video observation system. Precise localization of the meteor path is difficult due to atmospheric turbulence and other factors causing spatio-temporal fluctuations of the image background. The proposed technique performs non-linear preprocessing of the image intensity using the Box-Cox transform, as recommended in our previous work. Both symmetric and asymmetric spatio-temporal differences are designed to be robust in the statistical sense. The resulting local patterns are processed by a data whitening technique, and the obtained vectors are classified via cluster analysis and a Self-Organizing Map (SOM).
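A minimal sketch of the Box-Cox intensity preprocessing followed by a simple temporal difference is given below; the statistical robustification of the differences and the whitening step are not reproduced here, and the positivity shift eps is an illustrative assumption:

import numpy as np
from scipy.stats import boxcox

def boxcox_temporal_difference(frame_prev, frame_curr, lmbda=None, eps=1.0):
    # Apply Box-Cox to strictly positive intensities and difference in time.
    a = np.asarray(frame_prev, float).ravel() + eps      # shift to keep values > 0
    b = np.asarray(frame_curr, float).ravel() + eps
    if lmbda is None:
        a_t, lmbda = boxcox(a)                           # estimate lambda on one frame
    else:
        a_t = boxcox(a, lmbda=lmbda)
    b_t = boxcox(b, lmbda=lmbda)
    return (b_t - a_t).reshape(np.shape(frame_curr)), lmbda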
In this contribution we study different methods of automatic volume estimation for pancreatic islets which can be
used in the quality control step prior to the islet transplantation. The total islet volume is an important criterion
in the quality control. Also, the individual islet volume distribution is interesting — it has been indicated that
smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The
inputs to the volume estimation methods are segmented images of individual islets. The segmentation step is
not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical
or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does
not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are
observed. We present a simulation study comparing the performance of the volume estimation methods in
different scenarios and an experimental study comparing the methods on a real dataset.
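The simplest (spherical-shape) estimator mentioned above converts each segmented islet's 2D area to an equivalent diameter and sphere volume, as sketched below; the conversion to islet equivalents (IE) assumes the usual 150 um reference diameter, which is a convention added here, not taken from the paper:

import numpy as np

def sphere_volume_from_area(area_um2):
    diameter = 2.0 * np.sqrt(area_um2 / np.pi)           # equivalent circular diameter
    return np.pi * diameter ** 3 / 6.0                   # volume of a sphere

def islet_equivalents(volumes_um3, reference_diameter_um=150.0):
    v_ref = np.pi * reference_diameter_um ** 3 / 6.0
    return np.sum(volumes_um3) / v_ref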
This paper deals with the separation of merged Langerhans islets in segmentations in order to evaluate a correct histogram of islet diameters. The distribution of islet diameters is useful for determining the feasibility of islet transplantation in diabetes. First, the merged islets in the training segmentations are manually separated by medical experts. Based on the single islets, the merged islets are identified and an SVM classifier is trained on both classes (merged/single islets). The testing segmentations are over-segmented using the watershed transform, and the most probable merging of islets back together is found using the trained SVM classifier. Finally, the optimized segmentation is compared with the ground truth segmentation (correctly separated islets).
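The over-segmentation step can be sketched with a distance-transform watershed; the shape features returned below are illustrative examples of the kind of descriptors that could feed an SVM deciding which neighbouring regions to merge back:

import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

def oversegment(binary_mask, min_distance=10):
    # Split merged objects by a watershed on the negative distance transform.
    distance = ndi.distance_transform_edt(binary_mask)
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary_mask)
    feats = [(p.area, p.eccentricity, p.solidity) for p in regionprops(labels)]
    return labels, np.array(feats)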
Astronomical instruments located around the world are producing an incredibly large amount of potentially interesting scientific data. Astronomical research is expanding into large and highly sensitive telescopes. The total volume of data produced per night of operation also increases with the quality and resolution of state-of-the-art CCD/CMOS detectors. Since many ground-based astronomical experiments are placed in remote locations with limited access to the Internet, it is necessary to solve the problem of data storage. This mostly means that current data acquisition, processing, and analysis algorithms require review. A decision about the importance of the data has to be made in a very short time. This work deals with GPU-accelerated processing of high frame-rate astronomical video sequences, mostly originating from the experiment MAIA (Meteor Automatic Imager and Analyser), an instrument primarily focused on observing faint meteoric events with a high time resolution. The instrument, with a price below 2000 EUR, consists of an image intensifier and a gigabit Ethernet camera running at 61 fps. With a resolution better than VGA, the system produces up to 2 TB of scientifically valuable video data per night. The main goal of the paper is not to optimize any single GPU algorithm, but to propose and evaluate parallel GPU algorithms able to process a huge amount of video sequences in order to discard all uninteresting data.
There are various deconvolution methods for the suppression of blur in images. In this paper, a survey of image deconvolution techniques is presented, with a focus on methods designed to handle images acquired with wide-field astronomical imaging systems. The image blur present in such images is space-variant, especially due to the space-variant point spread function (PSF) of the lens. The imaging system can also contain nonlinear electro-optical elements. The analysis of nonlinear and space-variant imaging systems is usually simplified so that the system is considered linear and space-invariant (LSI) under specific constraints. A performance analysis of selected image deconvolution methods is presented in this paper, while considering the space-variant nature of the wide-field astronomical imaging system. The impact of nonlinearity on the overall performance of the image deconvolution techniques is also analyzed. Test images with characteristics obtained from a real system with a space-variant wide-field input lens and a nonlinear image intensifier are used for the performance analysis.
Meteor detection is one of the most important procedures in astronomical imaging. The meteor path in Earth's atmosphere is traditionally reconstructed from a double station video observation system generating 2D image sequences. However, atmospheric turbulence and other factors cause spatio-temporal fluctuations of the image background, which makes the localization of the meteor path more difficult. Our approach is based on nonlinear preprocessing of the image intensity using the Box-Cox transform, with the logarithmic transform as its particular case. The transformed image sequences are then differentiated along the discrete coordinates to obtain a statistical description of the sky background fluctuations, which can be modeled by a multivariate normal distribution. After verification and hypothesis testing, we use the statistical model for outlier detection. While isolated outlier points are ignored, a compact cluster of outliers indicates the presence of a meteoroid after ignition.
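The outlier-detection step can be sketched as follows (a minimal example assuming the difference vectors are multivariate normal; the chi-square threshold is an illustrative choice for flagging candidate meteor pixels):

import numpy as np
from scipy.stats import chi2

def outlier_mask(vectors, alpha=1e-4):
    # vectors: (n_pixels, d) array of spatio-temporal difference features.
    x = np.asarray(vectors, float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    inv = np.linalg.inv(cov)
    d2 = np.einsum('ij,jk,ik->i', x - mu, inv, x - mu)   # squared Mahalanobis distance
    return d2 > chi2.ppf(1.0 - alpha, df=x.shape[1])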
This paper deals with the color normalization of microscopy images of Langerhans islets in order to increase the robustness of islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either the lαβ or the logarithmic RGB color space, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
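Reference-based colour normalization by matching per-channel mean and standard deviation can be sketched as below; CIE Lab is used here only as a readily available stand-in for the lαβ and logarithmic RGB spaces mentioned above:

import numpy as np
from skimage.color import rgb2lab, lab2rgb

def normalize_to_reference(image_rgb, reference_rgb):
    # Match channel-wise mean and standard deviation to those of the reference image.
    src, ref = rgb2lab(image_rgb), rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-12) * r_sd + r_mu
    return np.clip(lab2rgb(out), 0.0, 1.0)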
Additional monitoring equipment is commonly used in astronomical imaging. This electro-optical system usually complements the main telescope during the acquisition of astronomical phenomena or supports its operation, e.g. by evaluating the weather conditions. Typically it is a wide-field imaging system consisting of a digital camera equipped with a fish-eye lens. The wide-field imaging system cannot be considered space-invariant because of the space-variant nature of its input lens. In our previous research efforts we have focused on measurement and analysis of images obtained from the subsidiary all-sky monitor WILLIAM (WIde-field aLL-sky Images Analyzing Monitoring system). The space-variant part of this imaging system is the input lens with a 180° angle of view in the horizontal and 154° in the vertical direction. For a precise astronomical measurement over the entire field of view, it is very important to know how the optical aberrations affect the characteristics of the imaging system, especially its PSF (Point Spread Function). Two methods were used for the characterization of the space-variant PSF: measurement in the optical laboratory, and estimation using acquired images and Zernike polynomials. An analysis of the results obtained using these two methods is presented in the paper. The accuracy of astronomical measurements is also discussed while considering the space-variant PSF of the system.
Object detection is one of the most important procedures in astronomical imaging. This paper deals with the segmentation of astronomical images based on a random forest classifier. We consider astronomical image data acquired using a photometric system with B, V, R, and I filters. Each image is acquired in multiple realizations. All image realizations are corrected using a master dark frame and a master flat field obtained as an average of hundreds of images. Then profile photometry is applied to find possible star positions. The classifier is trained on B, V, R, and I image vectors. Training samples are defined by the user using ellipsoidal regions (20 selections for each class: object, background). The number of objects and their positions are compared with an astronomical object catalogue using the Euclidean distance. We can conclude that the performance of the presented technique is fully comparable to other state-of-the-art algorithms.
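The calibration and per-pixel classification can be sketched as follows (function names and the training-mask interface are illustrative; the paper's descriptor construction and catalogue matching are not reproduced):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def calibrate(light, master_dark, master_flat):
    # Standard dark subtraction and flat-field division (guarding against zero flat pixels).
    return (light - master_dark) / np.where(master_flat == 0, 1.0, master_flat)

def classify_pixels(bands, train_mask, train_labels, n_trees=100):
    # bands: list of calibrated B, V, R, I images; train_mask marks labelled pixels.
    X = np.stack([b.ravel() for b in bands], axis=1)
    clf = RandomForestClassifier(n_estimators=n_trees)
    clf.fit(X[train_mask.ravel()], train_labels.ravel()[train_mask.ravel()])
    return clf.predict(X).reshape(bands[0].shape)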
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with the segmentation of microscopy images of Langerhans islets and the evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and an SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with the parameters given by the medical experts. We can conclude that the accuracy of the presented fully automatic algorithm is fully comparable with that of the medical experts.
Most of the classical approaches to the measurement and modeling of electro-optical imaging systems rely on the principles of linearity and space invariance (LSI). In our previous research efforts we have focused on measurement and analysis of images obtained from a double station video observation system MAIA (Meteor Automatic Imager and Analyzer). The video acquisition module of this system contains wide-field input lens which contributes to space-variability of the imaging system. For a precise astronomical measurement over the entire field of view, it is very important to comprehend how the characteristics of the imaging system can affect astrometric and photometric outputs. This paper presents an analysis of how the space-variance of the imaging system can affect precision of astrometric and photometric results. This analysis is based on image data acquired in laboratory experiments and astronomical observations with the wide-field system. Methods for efficient calibration of this system to obtain precise astrometric and photometric measurements are also proposed.
KEYWORDS: Quantization, Wavelets, Wavelet transforms, Systems modeling, Signal processing, Interference (communication), Statistical modeling, Discrete wavelet transforms, Digital imaging, Imaging systems
Quantization noise is present in all current digital imaging systems; therefore its understanding and modeling are crucial for the optimization of image reconstruction techniques. Hence, this paper deals with modeling of the quantization noise. We exploit the undecimated wavelet transform (UWT) for signal representation. We assume that the quantization noise in the spatial domain can be seen as additive, white, and uniformly distributed. The UWT then transforms the noise distribution, since each coefficient is a weighted sum of noise samples and filter coefficients. From the known quantization step we are able to estimate suitable moments of the uniform noise probability density function (PDF). These moments can then be evaluated directly in the undecimated wavelet domain using the derived equations. The presented algorithm gives a priori information about the quantization noise and can be used for its suppression.
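For illustration, under the stated model the low-order moments follow directly from the quantization step \(\Delta\) and propagate through the linear UWT filtering (assuming zero-mean i.i.d. noise samples \(n_k\) and filter taps \(h_k\)):

\[
\sigma^2 = \frac{\Delta^2}{12}, \qquad
\mathbb{E}[n^4] = \frac{\Delta^4}{80}, \qquad
\operatorname{var}(w) = \sigma^2 \sum_k h_k^2 ,
\]

where \(w = \sum_k h_k n_k\) is an undecimated wavelet coefficient of pure quantization noise.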
This paper is devoted to dark current suppression in astronomical and multimedia images using Wiener filtering. We consider the dark current, represented by the dark frame, as white impulsive noise generated in the CCD sensor. The Wiener filter is then set up in accordance with the second sample moments measured on the CCD over the chosen temperature range of 268 K to 293 K. Furthermore, the temperature dependency of the second sample moments was fitted by exponential regression. Hence, we are able to find the sample moment and suppress the dark current at a given temperature. The measurement was done on an SBIG ST8 camera.
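The exponential regression of the second sample moment versus temperature can be sketched as follows (function and parameter names are illustrative; the fit is anchored at the lowest measured temperature only for numerical convenience):

import numpy as np
from scipy.optimize import curve_fit

def fit_dark_current_moment(temperatures_K, second_moments):
    T = np.asarray(temperatures_K, float)
    m2 = np.asarray(second_moments, float)
    T0 = T.min()
    model = lambda T, a, b: a * np.exp(b * (T - T0))
    (a, b), _ = curve_fit(model, T, m2, p0=(m2.min(), 0.1))
    return a, b, T0      # m2(T) is approximated by a * exp(b * (T - T0))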
KEYWORDS: Wavelets, Chemical engineering, Control systems, Electrical engineering, Electronic imaging, Chemical analysis, Software engineering, Astronomy
We discuss methods for modeling and removal of noise in astronomical images. For its favorable properties, we exploit the undecimated wavelet representation and apply noise suppression in this domain. Usually, the noise analysis of the studied imaging system is carried out in the spatial domain. However, noise in astronomical data is non-Gaussian, and thus the noise model parameters need to be estimated directly in the wavelet domain. We derive equations for estimating the sample moments for non-Gaussian noise in the wavelet domain. We consider that the sample moments in the spatial domain are known from the noise analysis and that the model parameters are estimated by using the method of moments.
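For illustration, with zero-mean i.i.d. noise samples \(x_i\) having spatial-domain moments \(m_2\) and \(m_4\), a wavelet-domain coefficient \(y = \sum_i h_i x_i\) (with filter taps \(h_i\)) has

\[
\mathbb{E}[y^2] = m_2 \sum_i h_i^2, \qquad
\mathbb{E}[y^4] = m_4 \sum_i h_i^4 + 3\,m_2^2 \Bigl[\bigl(\sum_i h_i^2\bigr)^2 - \sum_i h_i^4\Bigr].
\]

These are the standard relations under a zero-mean i.i.d. assumption; the paper's full derivation may include further terms.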
In this paper we present current progress in the development of new observational instruments for the double station video experiment called MAIA (Meteor Automatic Imager and Analyzer). The main goal of the MAIA project is to monitor the activity of meteor showers and sporadic meteors. This paper presents a detailed analysis of imaging parameters based on the acquisition of test video sequences under different light conditions. Among the most important results are the analysis of the opto-electronic conversion function and the noise characteristics. Based on these results, requirements for image preprocessing algorithms are proposed.
This paper is devoted to the noise analysis and noise suppression in a system for double station observation of meteors, now known as MAIA (Meteor Automatic Imager and Analyzer). The noise analysis is based on the acquisition of test video sequences under different light conditions and their further analysis. The main goal is to find a suitable noise model and subsequently determine whether the noise is signal-dependent or not. The noise and image model in the wavelet domain should be based on a Gaussian mixture model (GMM) or a Generalized Laplacian Model (GLM), and the model parameters should be estimated by the method of moments. GMM and GLM allow modeling of various types of probability density functions. Finally, an advanced de-noising algorithm using a Bayesian estimator will be applied.
This paper deals with the modeling of scientific and multimedia images in the wavelet domain. Images transformed into the wavelet domain have a characteristically shaped probability density function (PDF) of their coefficients. Thus, wavelet coefficient PDFs are usually modeled using the generalized Laplacian model (GLM), which is characterized by two parameters. The modeling of wavelet coefficients can be more efficient when the Gaussian mixture model (GMM) is utilized. The GMM is given by a sum of at least two Gaussian PDFs with different standard deviations. An equation system derived by the method of moments for the estimation of GMM parameters is presented. The equation system was derived for the addition of two GMM random variables, so it is suitable for advanced denoising systems where such an addition is considered (e.g., dark current in astronomical images).
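As a brief illustration of the method of moments for a two-component zero-mean GMM (a sketch only; the paper's full equation system also covers the sum of two GMM variables), the mixture

\[
p(x) = a\,\mathcal{N}(x;0,\sigma_1^2) + (1-a)\,\mathcal{N}(x;0,\sigma_2^2)
\]

has moments

\[
m_2 = a\,\sigma_1^2 + (1-a)\,\sigma_2^2, \qquad
m_4 = 3\bigl[a\,\sigma_1^4 + (1-a)\,\sigma_2^4\bigr],
\]

which can be equated to the sample moments to estimate the parameters.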
This paper deals with advanced methods for the elimination of the thermally generated charge in astronomical images acquired by a Charge-Coupled Device (CCD) sensor. There exist a number of light images acquired by a telescope that were not corrected by a dark frame. The reason is simple: the dark frame does not exist because it was not acquired. This situation may arise, for instance, when sufficient memory space is not available. A correction method based on modeling the light and dark images in the wavelet domain is discussed. The generalized Laplacian was chosen as the model for both the dark frame and the light image. The model parameters were estimated using the method of moments, and extensive measurements on an astronomical camera were proposed and carried out. These measurements simplify the estimation of the dark frame model parameters. Finally, a set of astronomical test images was corrected, and objective criteria for image quality evaluation based on aperture photometry were applied.
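For reference, the generalized Laplacian (generalized Gaussian) model with scale \(s\) and shape \(p\) has the density and moment relations

\[
p(x) = \frac{p}{2\,s\,\Gamma(1/p)}\exp\!\bigl(-|x/s|^{p}\bigr), \qquad
m_2 = s^2\,\frac{\Gamma(3/p)}{\Gamma(1/p)}, \qquad
\kappa = \frac{m_4}{m_2^2} = \frac{\Gamma(1/p)\,\Gamma(5/p)}{\Gamma(3/p)^2},
\]

which allow the parameters to be recovered from the second and fourth sample moments (a standard result; notation may differ from the paper's).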
It is generally known that every astronomical image acquired by a CCD sensor has to be corrected by a dark frame. The dark frame maps the thermally generated charge of the CCD. It may happen that the dark frame image is not available, and it is then impossible to correct the astronomical images directly. It should be noted that uncorrected images are not suitable for subsequent investigation. There are simple nonlinear filtering methods, e.g. median filtering, but the obtained results are not satisfactory. In recent years, algorithms for the elimination of the thermally generated charge have been proposed. All these algorithms use the Discrete Wavelet Transform (DWT), which decomposes the image into different frequency bands. The wavelet coefficient histogram can be modeled by a generalized Laplacian probability density function (PDF). The Laplacian parameters were estimated by the method of moments using a derived equation system. Furthermore, the images with suppressed thermally generated charge were estimated using Bayesian estimators. The algorithm will be further improved in the future, but it can already be considered a promising elimination algorithm.
This paper is devoted to a denoising technique for video noise removal and deals with an advanced WT (Wavelet Transform) based method of noise suppression for security purposes. Many sources of unwanted distortion exist in a real surveillance system, especially when the sensing is done under extremely low light conditions. Our goal was to optimize the WT-based algorithm to be applicable to noise suppression in security videos with high computational efficiency. Preprocessing is applied to the output of the sensing system to make the video data more suitable for further denoising. Then a WT-based statistical denoising method is applied. The method uses a BLSE (Bayesian Least Square Error Estimator) of the WT coefficients while utilizing generalized Laplacian PDF modeling and an optimized method of moments for parameter estimation. Several tests have been done to verify the high noise suppression performance, computational efficiency, and low distortion of important features. Experimental results show that the described method performs well for a wide range of light conditions and respective signal-to-noise ratios.
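For an additive noise model \(y = x + n\) with noise density \(p_n\) and a generalized Laplacian prior \(p_x\) on the wavelet coefficient \(x\), the Bayes least-squares estimate has the standard form

\[
\hat{x}(y) = \mathbb{E}[x \mid y] =
\frac{\int x\, p_n(y-x)\, p_x(x)\, \mathrm{d}x}{\int p_n(y-x)\, p_x(x)\, \mathrm{d}x},
\]

shown here only as a generic reference; the paper's estimator and its optimizations may differ in detail.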
This paper deals with the evaluation and processing of astronomical image data obtained by WFC (Wide-Field Camera) or UWFC (Ultra Wide-Field Camera) systems. The precision of astronomical image data post-processing and analysis is very important. A large number of different optical aberrations and distortions are present in these systems. The amplitude of the wavefront aberration error increases towards the margins of the FOV (Field of View). The relation between the amount of high-order optical aberrations and astrometric measurement precision is discussed in this paper. Descriptions of the transfer characteristics of astronomical optical systems are presented. Spatially variant (SV) optical aberrations negatively affect the transfer characteristics of the whole system and make it spatially variant as well. An SV model of the optical system is presented. A partially invariant model of the optical system allows the use of Fourier methods for deconvolution. Some deconvolution results are shown.
CMOS imagers based on Active Pixel Sensors (APS) are very important, among other reasons, because of possible technical innovations leading to ultra-low power image acquisition or efficient on-chip image preprocessing. Implementation of the image processing tasks (focal plane preprocessing and subsequent image processing) can be done effectively only when the transfer characteristics of the imager itself are taken into consideration. The geometrical Point Spread Function (PSF) depends on the particular geometric shape of the active area in a given CMOS APS design. In this paper, the concept of Modulation Transfer Function (MTF) analysis is generalized to be applicable to the sampled structures of CMOS APS. Recalling theoretical results, we have analytically derived the detector MTF in closed form for some special active area shapes. The paper also deals with a method based on a pseudorandom image pattern with uniform power spectral density (PSD). This method allows evaluating (in contrast to other methods) the spatially invariant MTF including the sampling MTF. It is generally known that a signal acquired by an image sensor contains different types of noise. The superposition of these noise sources produces noise with a Gaussian distribution. A denoising method based on a Bayesian estimator for implementation in the smart imager is presented.
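As a simple reference case (1D, rectangular active area of width \(a\)), the geometrical detector MTF takes the well-known sinc form

\[
\mathrm{MTF}_{\mathrm{det}}(f) = \left|\frac{\sin(\pi a f)}{\pi a f}\right|,
\]

and the phase-averaged sampling MTF is often taken to have the analogous form with the pixel pitch in place of \(a\); the closed-form expressions derived in the paper for other active area shapes generalize this result.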