New technologies for Secondary Ion Mass Spectrometry (SIMS) produce three-dimensional hyperspectral chemical
images with high spatial resolution and fine mass-spectral precision. SIMS imaging of biological tissues and
cells promises to provide an informational basis for important advances in a wide variety of applications, including
cancer treatments. However, the volume and complexity of data pose significant challenges for interactive visualization
and analysis. This paper describes new methods and tools for computer-based visualization and analysis
of SIMS data, including a coding scheme for efficient storage and fast access, interactive interfaces for visualizing
and operating on three-dimensional hyperspectral images, and spatio-spectral clustering and classification.
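The abstract does not detail the clustering algorithm, so the following is only a rough illustration of one common way to combine spatial and spectral information: append scaled voxel coordinates to each spectrum and cluster with k-means. The array layout, the `spatial_weight` parameter, and the use of scikit-learn are assumptions for this sketch, not the paper's implementation.

```python
# Minimal sketch of spatio-spectral clustering on a SIMS-like data cube.
# Assumptions (not from the paper): the cube is a NumPy array of shape
# (nz, ny, nx, n_channels), and scaled spatial coordinates are appended to
# the spectra before k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

def spatio_spectral_clusters(cube, n_clusters=8, spatial_weight=0.1):
    nz, ny, nx, nch = cube.shape
    z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx),
                          indexing="ij")
    coords = np.stack([z, y, x], axis=-1).reshape(-1, 3).astype(float)
    coords *= spatial_weight / max(nz, ny, nx)      # normalize and weight coordinates
    spectra = cube.reshape(-1, nch).astype(float)
    features = np.hstack([spectra, coords])         # spectral + spatial features
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return labels.reshape(nz, ny, nx)
```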
We develop a method for automatic colorization of images (or two-dimensional fields) in order to visualize pixel values and their local differences. In many applications, local differences in pixel values are as important as their values. For example, in topography, both elevation and slope often must be considered. Gradient-based value mapping (GBVM) is a technique for colorizing pixels based on value (e.g., intensity or elevation) and gradient (e.g., local differences or slope). The method maps pixel values to a color scale (either gray-scale or pseudocolor) in a manner that emphasizes gradients in the image while maintaining ordinal relationships of values. GBVM is especially useful for high-precision data, in which the number of possible values is large. Colorization with GBVM is demonstrated with data from comprehensive two-dimensional gas chromatography (GCxGC), using both gray-scale and pseudocolor to visualize both small and large peaks, and with data from the Global Land One-Kilometer Base Elevation (GLOBE) Project, using gray-scale to visualize features that are not visible in images produced with popular value-mapping algorithms.
KEYWORDS: Visualization, Associative arrays, Image visualization, Chromatography, Chemical analysis, Sensors, RGB color model, Digital image processing, Digital imaging, Modulation
This paper develops a method for automatic colorization of two-dimensional fields presented as images, in order to visualize local changes in values. In many applications, local changes in values are as important as magnitudes of values. For example, in topography, both elevation and slope often must be considered. Gradient-based value mapping for colorization is a technique to visualize both value (e.g., intensity or elevation) and gradient (e.g., local differences or slope). The method maps pixel values to a color scale in a manner that emphasizes gradients in the image. The value mapping function is monotonically non-decreasing, to maintain ordinal relationships of values on the color scale. The color scale can be a grayscale or pseudocolor scale. The first step of the method is to compute the gradient at each pixel. Then, the pixels (with computed gradients) are sorted by value. The value mapping function is the inverse of the relative cumulative gradient magnitude function computed from the sorted array. The value mapping method is demonstrated with data from comprehensive two-dimensional gas chromatography (GCxGC), using both grayscale and a pseudocolor scale to visualize local changes related to both small and large peaks in the GCxGC data.
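The abstract spells out the mapping steps, and the sketch below follows them directly: compute gradients, sort pixels by value, and use the relative cumulative gradient magnitude as the value-mapping function. Function and parameter names are mine, and ties in pixel value are not merged, so treat this as a minimal sketch rather than the authors' code.

```python
# Sketch of gradient-based value mapping (GBVM) as described above: each pixel
# value maps to the relative cumulative gradient magnitude of pixels with
# smaller or equal value, giving a monotone mapping that emphasizes gradients.
import numpy as np

def gbvm_map(image):
    img = image.astype(float)
    gy, gx = np.gradient(img)
    gmag = np.hypot(gx, gy).ravel()
    vals = img.ravel()
    order = np.argsort(vals, kind="stable")   # sort pixels by value
    cum = np.cumsum(gmag[order])              # cumulative gradient magnitude
    total = cum[-1] if cum[-1] > 0 else 1.0
    mapped = np.empty_like(cum)
    mapped[order] = cum / total               # relative cumulative gradient in [0, 1]
    # Note: equal pixel values are not merged here; a full implementation would
    # assign identical values the same mapped position.
    return mapped.reshape(image.shape)        # feed into any gray/pseudocolor LUT
```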
This paper presents a computationally efficient method for super-resolution reconstruction and restoration from microscanned images. Microscanning creates multiple low-resolution images with slightly varying sample-scene phase shifts. Microscanning can be implemented with a physical microscanner built into specialized imaging systems or by simply panning and/or tilting traditional imaging systems to acquire a temporal sequence of images. Digital processing can combine the low-resolution images to produce an image with higher pixel resolution (i.e., super-resolution) and higher fidelity. The cubic convolution method developed in this paper employs one-pass, small-kernel convolution to perform reconstruction (increasing resolution) and restoration (improving fidelity). The approach is based on an end-to-end, continuous-discrete-continuous model of the microscanning imaging process. The derivation yields a parametric form that can be optimized for the characteristics of the scene and the imaging system. Because cubic convolution is constrained to a small spatial kernel, the approach is efficient and is amenable to adaptive processing and to parallel implementation. Experimental results with simulated imaging and with real microscanned images indicate that the cubic convolution method efficiently and effectively increases resolution and fidelity for significantly improved image quality.
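As a rough illustration of the reconstruction half of the problem only, the sketch below interleaves four half-pixel-shifted low-resolution frames onto a grid with twice the sampling density. The 2x2 shift pattern is an assumed example, and the paper's one-pass cubic-convolution restoration (deblurring) step is not reproduced here.

```python
# Simplified reconstruction step for 2x2 microscanning: four low-resolution
# frames, offset by half a pixel horizontally and/or vertically, are
# interleaved onto a doubled sampling grid.
import numpy as np

def interleave_2x2(f00, f01, f10, f11):
    """f00 at shift (0, 0), f01 at (0, 0.5), f10 at (0.5, 0), f11 at (0.5, 0.5),
    shifts given in low-resolution pixel units; all frames have the same shape."""
    ny, nx = f00.shape
    hi = np.zeros((2 * ny, 2 * nx), dtype=float)
    hi[0::2, 0::2] = f00
    hi[0::2, 1::2] = f01
    hi[1::2, 0::2] = f10
    hi[1::2, 1::2] = f11
    return hi
```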
KEYWORDS: Chemical analysis, Image processing, Chemical reactions, Detection and tracking algorithms, Liquid crystals, Chromatography, Data acquisition, Nickel, Affine motion model
Comprehensive two-dimensional gas chromatography (GCxGC) is a new technology for chemical separation. In GCxGC analysis, chemical identification is a critical task that can be performed by peak pattern matching. Peak pattern matching tries to identify the chemicals by establishing correspondences from the known peaks in a peak template to the unknown peaks in a target peak pattern. After the correspondences are established, information carried by the known peaks is copied into the unknown peaks. The peaks in the target peak
pattern are then identified. Using peak locations as the matching features, peak patterns can be represented as point patterns and the peak pattern matching problem becomes a point pattern matching problem. In GCxGC, the chemical separation process imposes an ordering constraint on peak retention time (peak location). Based on the ordering constraint, the matching technique proposed in this paper forms directed edge patterns from point patterns and then matches the point patterns by matching the edge patterns. Preliminary
experiments on GCxGC peak patterns suggest that matching the edge patterns is much more efficient than matching the corresponding point patterns.
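As a rough illustration of the idea, and not the paper's algorithm, the sketch below orders peaks by first-column retention time, forms directed edges between consecutive peaks, and scores a match by comparing edge vectors. The edge construction and scoring are simplified assumptions.

```python
# Illustrative sketch of the edge-pattern idea: peaks ordered by retention time
# are linked by directed edges, and two peak patterns are compared through
# their edge vectors rather than their raw point locations.
import numpy as np

def directed_edges(peaks):
    """peaks: (n, 2) array of (first, second) retention times, n >= 2."""
    p = np.asarray(peaks, dtype=float)
    ordered = p[np.argsort(p[:, 0], kind="stable")]   # ordering constraint
    return ordered[1:] - ordered[:-1]                 # directed edge vectors

def edge_match_score(template_peaks, target_peaks, tol=1.0):
    te = directed_edges(template_peaks)
    ge = directed_edges(target_peaks)
    d = np.linalg.norm(te[:, None, :] - ge[None, :, :], axis=2)
    return int((d.min(axis=1) < tol).sum())          # matched template edges
```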
Pattern matching is one of the well-known pattern recognition techniques. When
using points as matching features, a pattern matching problem becomes a point
pattern matching problem. This paper proposes a novel point pattern matching
algorithm that searches transformation space by transformation sampling. The
algorithm defines a constraint set (a polygonal region in transformation space)
for each possible pairing of a template point and a target point. Under
constrained polynomial transformations that have no more than two parameters on
each coordinate, the constraint sets and the transformation space can be
represented as Cartesian products of 2D polygonal regions. The algorithm then
rasterizes the transformation space into a discrete canvas and calculates the
optimal matching at each sampled transformation efficiently by scan-converting
polygons. Preliminary experiments on randomly generated point patterns show
that the algorithm is effective and efficient. In addition, the running time of
the algorithm is stable with respect to missing points.
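A heavily simplified, translation-only analogue of transformation sampling is sketched below: each template-target pairing votes for the translation that aligns it, the translation plane is rasterized into a discrete canvas, and the most-voted cell is taken as the sampled optimum. The paper's algorithm scan-converts polygonal constraint sets for more general constrained polynomial transformations; this sketch only conveys the flavor.

```python
# Translation-only simplification: rasterize the translation plane into a
# canvas and accumulate one vote per template/target pairing (a Hough-style
# accumulation). The best sampled translation is the cell with the most votes.
import numpy as np

def best_translation(template, target, cell=1.0):
    t = np.asarray(template, dtype=float)   # (n, 2) template points
    g = np.asarray(target, dtype=float)     # (m, 2) target points
    votes = (g[None, :, :] - t[:, None, :]).reshape(-1, 2)   # all pairings
    lo = votes.min(axis=0)
    idx = np.floor((votes - lo) / cell).astype(int)          # rasterize canvas
    canvas = np.zeros(tuple(idx.max(axis=0) + 1), dtype=int)
    np.add.at(canvas, (idx[:, 0], idx[:, 1]), 1)             # accumulate votes
    best = np.unravel_index(np.argmax(canvas), canvas.shape)
    return lo + (np.asarray(best) + 0.5) * cell, int(canvas[best])
```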
Comprehensive two-dimensional gas chromatography (GCxGC) is an emerging technology for chemical separation that provides an order-of-magnitude increase in separation capacity over traditional gas chromatography. GCxGC separates chemical species with two capillary columns interfaced by two-stage thermal desorption. Because GCxGC is comprehensive and has high separation capacity, it can perform multiple traditional analytical methods with a single analysis. GCxGC has great potential for a wide variety of environmental sensing applications, including detection of chemical warfare agents (CWA) and other harmful chemicals. This paper demonstrates separation of the nerve agents sarin and soman from a matrix of gasoline and diesel fuel. Using a combination of an initial column separating on the basis of boiling point and a second column separating on the basis of polarity, GCxGC clearly separates the nerve agents from the thousands of other chemicals in the sample. The GCxGC data is visualized, processed, and analyzed as a two-dimensional digital image using a software system for GCxGC image processing developed at the University of Nebraska–Lincoln.
This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by autocorrelation or power spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images, presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.
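For reference, the traditional separable baseline that the paper improves upon can be sketched with the standard one-dimensional Keys cubic-convolution kernel (parameter a = -1/2) applied along each axis. The improved non-separable two-dimensional kernel itself is not reproduced here.

```python
# Traditional separable cubic convolution: the 1D Keys kernel applied along
# rows and columns over a 4x4 neighborhood. Shown only as the baseline the
# paper's non-separable method is compared against.
import numpy as np

def keys_kernel(s, a=-0.5):
    s = np.abs(np.asarray(s, dtype=float))
    w = np.zeros_like(s)
    near, far = s <= 1, (s > 1) & (s < 2)
    w[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
    w[far] = a * (s[far]**3 - 5 * s[far]**2 + 8 * s[far] - 4)
    return w

def interp2_separable(img, y, x, a=-0.5):
    """Interpolate img at non-integer (y, x); assumes an interior point so the
    4x4 neighborhood lies inside the image."""
    iy, ix = int(np.floor(y)), int(np.floor(x))
    ky = keys_kernel(y - (iy + np.arange(-1, 3)), a)
    kx = keys_kernel(x - (ix + np.arange(-1, 3)), a)
    patch = img[iy - 1:iy + 3, ix - 1:ix + 3].astype(float)
    return float(ky @ patch @ kx)
```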
KEYWORDS: Image classification, Hyperspectral imaging, Classification systems, Signal attenuation, Imaging systems, Systems modeling, Signal to noise ratio, Hyperspectral systems, Interference (communication), Signal processing
This paper explores the relationship between information efficiency and pattern classification in hyperspectral imaging systems. Hyperspectral imaging is a powerful tool for many applications, including pattern classification for scene analysis. However, hyperspectral imaging can generate data at rates that challenge communication, processing, and storage capacities. System designs with fewer spectral bands have lower data overhead, but also may have reduced performance, including diminished capability to classify spectral patterns. This paper presents an analytic approach for assessing the capacity of a hyperspectral system for gathering information related to classification and the system's efficiency in that capacity. Our earlier work developed approaches for analyzing information capacity and efficiency in hyperspectral systems with either uniform or non-uniform spectral-band widths. This paper presents a model-based approach for relating information capacity and efficiency to pattern classification in hyperspectral imaging. The analysis uses a model of the scene signal for different classes and a model of the hyperspectral imaging process. Based on these models, the analysis quantifies information capacity and information efficiency for designs with various spectral-band widths. Example results of this analysis illustrate the relationship between information capacity, information efficiency, and classification.
This paper describes a method for assessing the information density and efficiency of hyperspectral imaging systems that have spectral bands of non-uniform width. The information density of the acquired signal is computed as a function of the hyperspectral system design, signal-to-noise ratio, and statistics of the scene radiance. The information efficiency is the ratio of the information density to the data density. The assessment can be used in system design, for example, to determine the number and size of the spectral bands. With this analysis, hyperspectral imaging systems can be tailored for scenes that are non-homogeneous with respect to spectral wavelength. If the scene spectral autocorrelation at each wavelength is different, then the information density at each wavelength is also different, suggesting that the spectral bands should have variable width. Two experiments illustrate the approach, one using a simple model for the scene radiance autocorrelation function and the other using the deterministic autocorrelation function of a hyperspectral image from NASA's Advanced Solid-state Array Spectroradiometer (ASAS). The design with non-uniform bandwidths yields greater information efficiency than an optimal design with uniform bandwidths.
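Schematically, and with generic symbols rather than the paper's exact expressions, the assessed quantities can be summarized as a Shannon-type per-band sum and its ratio to the data density:

```latex
% Schematic summary (generic symbols; the paper's exact expressions differ in
% detail). With signal-to-noise ratio SNR_k in spectral band k and data
% density D in bits per sample:
\[
  \underbrace{\mathcal{H} \;=\; \sum_{k} \tfrac{1}{2}\,\log_2\!\bigl(1 + \mathrm{SNR}_k\bigr)}_{\text{information density}}
  \qquad
  \underbrace{\mathcal{E} \;=\; \mathcal{H}/\mathcal{D}}_{\text{information efficiency}}
\]
```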
Noise contamination of remote sensing data is an inherent problem and various techniques have been developed to counter its effects. In multiband imagery, principal component analysis (PCA) can be an effective method of noise reduction. For single images, convolution masking is more suitable. The application of data masking techniques, in association with PCA, can effectively portray the influence of noise. A description is presented of the performance of a developed masking technique in combination with PCA in the presence of simulated additive noise. The technique is applied to Landsat Thematic Mapper (TM) imagery in addition to a test image. Comparisons of the estimated and applied noise standard deviations from the techniques are presented.
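A minimal sketch of PCA-based noise reduction for multiband imagery follows, assuming the bands are stacked in a NumPy array. The number of retained components is a user choice, and this is not the specific masking-plus-PCA procedure described above.

```python
# Sketch of PCA noise reduction: project band vectors onto principal
# components, keep the leading (high-variance) components, and invert the
# transform. Noise concentrated in trailing components is discarded.
import numpy as np

def pca_denoise(cube, n_keep=3):
    """cube: (ny, nx, nbands) multiband image."""
    ny, nx, nb = cube.shape
    X = cube.reshape(-1, nb).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    keep = evecs[:, -n_keep:]                 # leading principal components
    Xd = (Xc @ keep) @ keep.T + mean          # project and reconstruct
    return Xd.reshape(ny, nx, nb)
```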
Remote sensing images acquired in various spectral bands are used to estimate certain geophysical parameters or to detect the presence or extent of geophysical phenomena. In general, the raw image acquired by the sensor is processed using various operations such as filtering, compression, enhancement, etc. in order to enhance the utility of the image for a particular application. In performing these operations, the analyst is attempting to maximize the information content in the image to fulfill the end objective. The information content in a remotely sensed image for a specific application depends greatly on the gray-scale resolution of the image. Intuitively, as the gray-scale resolution is degraded, the information content of the image is expected to decrease. However, the exact relationship between these parameters is not very clear. For example, while the digital number (DN) of a pixel may change as a result of a decrease in the number of gray levels, the overall image classification accuracy (a measure of information content) may not show a corresponding reduction. Furthermore, the degradation in information content has been shown to be related to the spatial resolution as well. Our simulation studies reveal that the information content does indeed drop as the gray-scale resolution degrades. Similar results are observed when working with real images. We have developed a simple mathematical model relating image information content to gray-scale resolution, from which the optimal number of gray levels necessary to interpret an image for a particular application may be deduced.
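A quick way to reproduce the qualitative trend, not the paper's model, is to requantize an image to fewer gray levels and measure its first-order entropy:

```python
# Requantize an image to n_levels gray levels and compute first-order entropy
# (bits per pixel). Entropy drops as the number of gray levels is reduced,
# illustrating the qualitative relationship discussed above.
import numpy as np

def entropy_bits(img, n_levels):
    img = np.asarray(img, dtype=float)
    q = np.floor(img / img.max() * (n_levels - 1e-9)).astype(int)  # assumes img.max() > 0
    p = np.bincount(q.ravel(), minlength=n_levels) / q.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Example usage: compare entropy_bits(image, 256) with entropy_bits(image, 16)
```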
A recent advance in the science of chemical separations known as 'comprehensive two-dimensional gas chromatography,' or GC×GC, routinely separates 2000 chemical species from petroleum-derived mixtures such as gasoline and diesel fuels. The separated substances are observed to fall into orderly patterns in a two-dimensional image representative of compound classes and isomeric structures. To interpret these complex images, two procedures are needed. First, the images must be transformed into a standard format that permits facile recognition of chromatographic features. Second, quantitative data must be extracted from designated features. By automating these procedures, it becomes possible to rapidly interpret very complex chemical separations both qualitatively and quantitatively.
The potential of high-resolution radar and optical imagery for synoptic and timely mapping in many applications is well-known. Numerous methods have been developed to process and quantify useful information from remotely sensed images. Most image processing techniques use texture-based statistics combined with spatial filtering to separate target classes or to infer geophysical parameters from pixel radiometric intensities. The use of spatial statistics to enhance the information content of images, thereby providing better characterization of the underlying geophysical phenomena, is a relatively new technique in image processing. We are currently exploring the relationship between spatial statistical parameters of various geophysical phenomena and those of the remotely sensed image by way of principal component analysis (PCA) of radar and optical images. Issues being explored are the effects of noise in multisensor imagery using PCA for land cover classifications. The differences in additive and multiplicative noise must be accounted for before using PCA on multisensor data. Preliminary results describing the performance of PCA in the presence of simulated noise applied to Landsat Thematic Mapper (TM) images are presented.
This paper describes a method for assessing the information density and efficiency of hyperspectral imaging systems. The approach computes the information density of the acquired signal as a function of the hyperspectral system design, signal-to-noise ratio, and statistics of the scene radiance. Information efficiency is the ratio of the information density to the data density. The assessment can be used in system design, for example, to optimize information efficiency with respect to the number of spectral bands. Experimental results illustrate that information efficiency exhibits a single distinct maximum as a function of the number of spectral bands, indicating the design with peak information efficiency.
KEYWORDS: Principal component analysis, Image classification, Vegetation, Dielectric polarization, Radar, L band, Visualization, Image processing, Data acquisition, RGB color model
We are currently exploring the relationship between spatial statistical parameters of various geophysical phenomena and those of the remotely sensed image by way of principal component analysis (PCA) of radar and optical images. Issues being explored are the effects of incorporating PCA into land cover classification in an attempt to improve its accuracy. Preliminary results of using PCA in comparison with unsupervised land cover classification are presented.
KEYWORDS: Data modeling, Statistical modeling, Magnetorheological finishing, Optical character recognition, Data hiding, Pattern recognition, Systems modeling, Statistical analysis, Detection and tracking algorithms, Computer science
Understanding of hand-written Chinese characters is still at a primitive stage, so models include assumptions about hand-written Chinese characters that are simply false. Consequently, Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in automatic recognition systems from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.
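Schematically, the two estimation criteria can be contrasted as follows (generic notation; the paper applies MMIE to a simplified hidden Markov random field):

```latex
% MLE vs. MMIE objectives over training pairs (x_i, c_i) with model parameters
% \theta (schematic notation):
\[
  \hat{\theta}_{\mathrm{MLE}}  \;=\; \arg\max_{\theta} \sum_i \log p_{\theta}(x_i \mid c_i),
  \qquad
  \hat{\theta}_{\mathrm{MMIE}} \;=\; \arg\max_{\theta} \sum_i
    \log \frac{p_{\theta}(x_i \mid c_i)\, P(c_i)}
              {\sum_{c} p_{\theta}(x_i \mid c)\, P(c)} .
\]
```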
This paper describes a method for cloud cover assessment using computer-based analysis of multi-band Landsat images. The objective is to accurately determine the percentage of cloud cover in an efficient manner. The 'correct' value is determined by an expert's visual assessment. Acceptable error rates are +/- 10 percent from the visually determined coverage. This research improves upon an existing algorithm developed for use by the EROS Data Center several years ago. The existing algorithm uses threshold values in bands 3, 5, and 6 based on the expected frequency response for clouds in each band. While this algorithm is reasonably fast, the accuracy is often unsatisfactory. The dataset used in developing the new method contained 329 subsampled, 7-band Landsat browse images with wide geographic coverage and a variety of cloud types. This dataset, provided by the EROS Data Center, also specifies the visual cloud cover assessment and the cloud cover assessment using the current automated algorithm. Mask images, separating cloud and non-cloud pixels, were developed for a subset of these images. The new approach is statistically based, developed from a multi-dimensional histogram analysis of a training subset. Images from a disjoint test set were then classified. Initial results are significantly more accurate than the existing automated algorithm.
KEYWORDS: Image filtering, Image restoration, Model-based design, Systems modeling, Digital image processing, Imaging systems, Signal to noise ratio, Image acquisition, Linear filtering, Digital imaging
Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the smoothness of the restored image is maximized subject to a constraint on the fidelity of the restored image. The traditional derivation and implementation of the constrained least-squares restoration (CLS) filter is based on an incomplete discrete/discrete (d/d) system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a 1990 SPIE paper, Park et al. demonstrated that a derivation of the Wiener filter based on the incomplete d/d model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous (c/d/c) model. In a similar 1992 SPIE paper, Hazra et al. attempted to extend Hunt's d/d model-based CLS filter derivation to the c/d/c model, but with limited success. In this paper, a successful extension of the CLS restoration filter is presented. The resulting new CLS filter is intuitive, effective, and based on a rigorous derivation. The issue of selecting the user-specified inputs for this new CLS filter is discussed in some detail. In addition, we present simulation-based restoration examples for a FLIR (Forward Looking Infrared) imaging system to demonstrate the effectiveness of this new CLS restoration filter.
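For context, the traditional d/d constrained least-squares filter that this work extends can be written in the frequency domain as follows (assumed notation: G observed spectrum, H blur transfer function, P Laplacian smoothness operator, gamma set by the fidelity constraint). The paper's c/d/c extension, which also accounts for sampling and display reconstruction, is not reproduced here.

```latex
% Traditional d/d constrained least-squares (CLS) restoration filter
% (assumed notation; the paper's c/d/c form adds aliasing and
% display-reconstruction terms):
\[
  \hat{F}(u,v) \;=\;
    \frac{H^{*}(u,v)}{\lvert H(u,v)\rvert^{2} + \gamma\,\lvert P(u,v)\rvert^{2}}\; G(u,v) .
\]
```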
Generally, image compression algorithms are developed and implemented without taking into account the distortions injected into the process by the image-gathering and the image-display systems. Often, these distortions degrade the quality of the displayed image more than those due to coding and quantization. We assess the whole coding process, from image capture to image display, using information-theoretical analysis. The restoration procedure which we develop for discrete cosine transform (DCT) coded images takes into account not only the quantization errors introduced by the coding, but also the aliasing and blurring errors due to (non-ideal) image gathering and display. This procedure maximizes the information content of the image-gathering process and the fidelity of the resultant restorations.
This paper describes the design of an efficient filter that promises to significantly improve the performance of second-generation Forward Looking Infrared (FLIR) and other digital imaging systems. The filter is based on a comprehensive model of the digital imaging process that accounts for the significant effects of sampling and reconstruction as well as acquisition blur and noise. The filter both restores, partially correcting degradations introduced during image acquisition, and interpolates, increasing apparent resolution and improving reconstruction. The filter derivation is conditioned on explicit constraints on spatial support and resolution so that it can be implemented efficiently and is practical for real-time applications. Subject to these implementation constraints, the filter optimizes end-to-end system fidelity. In experiments with simulated FLIR systems, the filter significantly increases fidelity, apparent resolution, effective range, and visual quality for a range of conditions with relatively little computation.
Multiresponse imaging is a process that acquires A images, each with a different optical response, and reassembles them into a single image with an improved resolution that can approach √A times the photodetector-array sampling lattice. Our goals are to optimize the performance of this process in terms of the resolution and fidelity of the restored image and to assess the amount of information required to do so. The theoretical approach is based on the extension of both image-restoration and rate-distortion theories from their traditional realm of signal processing to image processing which includes image gathering and display.
A software simulation environment for controlled image processing research is described. The simulation is based on a comprehensive model of the end-to-end imaging process that accounts for statistical characteristics of the scene, image formation, sampling, noise, and display reconstruction. The simulation uses a stochastic process to generate super-resolution digital scenes with variable spatial structure and detail. The simulation of the imaging process accounts for the important components of digital imaging systems, including the transformation from continuous to discrete during acquisition and from discrete to continuous during display. This model is appropriate for a variety of problems that involve image acquisition and display, including system design, image restoration, enhancement, compression, and edge detection. By using a model-based simulation, research can be conducted with greater precision, flexibility, and portability than is possible using physical systems, and experiments can be replicated on any general-purpose computer.
KEYWORDS: Optical transfer functions, Digital imaging, Signal to noise ratio, Imaging systems, Point spread functions, Super resolution, Cameras, Sensors, Imaging devices, Image acquisition
Despite the popularity of digital imaging devices (e.g., CCD array cameras), the problem of accurately characterizing the spatial frequency response of such systems has been largely neglected in the literature. This paper describes a simple method for accurately estimating the optical transfer function of digital image acquisition devices. The method is based on the traditional knife-edge technique but explicitly deals with fundamental sampled system considerations: insufficient and anisotropic sampling. Results for both simulated and real imaging systems demonstrate
the accuracy of the method.
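A minimal sketch of the underlying knife-edge procedure follows, assuming the edge geometry is already known and the edge is slightly tilted so all distance bins are populated. The paper's corrections for insufficient and anisotropic sampling are not reproduced here.

```python
# Basic knife-edge MTF estimation: bin pixels by signed distance to the edge
# to form an oversampled edge-spread function (ESF), differentiate to get the
# line-spread function (LSF), and take the FFT magnitude as the MTF along the
# edge-normal direction.
import numpy as np

def knife_edge_mtf(img, edge_point, edge_normal, bin_width=0.1):
    """img: 2D array containing a straight edge; edge_point: (y, x) on the
    edge; edge_normal: unit vector (ny, nx) perpendicular to the edge."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    # signed distance of every pixel to the edge, in pixels
    d = (yy - edge_point[0]) * edge_normal[0] + (xx - edge_point[1]) * edge_normal[1]
    bins = np.round(d / bin_width).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=img.ravel().astype(float))
    esf /= np.maximum(counts, 1)               # oversampled ESF (mean per bin)
    lsf = np.gradient(esf)                     # LSF
    lsf *= np.hanning(lsf.size)                # reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                        # normalized MTF estimate
```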
The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased, provided that the restoration correctly accounts for the image gathering and coding processes and effectively suppresses the image-display degradations.
In this paper we are concerned with the end-to-end performance of image gathering, coding, and restoration as a whole rather than as a chain of independent tasks. Our approach evolves from the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and for the information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. The redundancy reduction is concerned mostly with the statistical properties of the acquired signal, and the irrelevancy reduction is concerned mostly with the visual properties of the scene and the restored image. The results of this assessment lead to intuitively appealing insights about image gathering and coding for digital restoration. Foremost is the realization that images can be restored with better quality and from less data as the information efficiency of the transmitted data is increased, provided that the restoration correctly accounts for the image gathering and coding processes and effectively suppresses the image-display degradations. High information efficiency, in turn, can be attained only by minimizing image-gathering degradations as well as signal redundancy. Another important realization is that the critical constraints imposed on both image gathering and natural vision limit the maximum acquired information density to ~4 binary information units (bifs). This information density requires ~5-bit encoding for transmission and recording when lateral inhibition is used to compress the dynamic range of the signal (irrelevancy reduction). This number of encoding levels is close (perhaps fortuitously) to the upper limit of the ~40 intensity levels that each nerve fiber can transmit, via pulses, from the retina to the visual cortex within ~1/20 sec to avoid prolonging reaction times. If the data are digitally restored as an image on film for 'best' visual quality, then the information density may often be reduced to ~3 bifs or even less, depending on the scene, without incurring perceptual degradations because of the practical limitations that are imposed on the restoration. These limitations are not likely to be found in the nervous system of human beings, so that the higher information density of ~4 bifs that the eye can acquire probably contributes effectively to the improvement in visual quality that we always experience when we view a scene directly rather than through the media of image gathering and restoration.
Most digital image restoration algorithms are inherently incomplete because they are conditioned on a discrete-input, discrete-output model which only accounts for blurring during image gathering and additive noise. For those restoration applications where sampling and reconstruction (display) are important, the restoration algorithm should be based on a more comprehensive end-to-end model which also accounts for the potentially important noise-like effects of aliasing and the low-pass filtering effects of interpolative reconstruction. In this paper we demonstrate that, although the mathematics of this more comprehensive model is more complex, the increase in complexity is not so great as to prevent a complete development and analysis of the associated minimum mean-square error (Wiener) restoration filter. We also survey recent results related to the important issue of implementing this restoration filter, in the spatial domain, as a computationally efficient small convolution kernel.