This paper extends the formulation of the non-linear maximum entropy (ME) technique to the continuous-target/discrete-processing/continuous-image (c/d/c) theory of insufficiently sampled images. It compares the c/d/c ME to existing discrete-input/discrete-output (d/d) ME and linear restorations. It further assesses their performance in terms of their ability to sharpen fine details of the target and suppress noise.
Optimal binary filters estimate an ideal random set by means of an observed random set. By parameterizing the ideal and observation random sets, one can examine the robustness of filter design relative to parameter states. This paper addresses the question of which states possess the most robust optimal filters. Based on the prior distribution of the states, a measure of robustness is defined for each state, and the state possessing maximal robustness is determined. The paper focuses on sparse noise, for which an analytic formulation of robustness is known, and proposes a parametric model from which robustness can be approximated by estimating the model parameters from image data.
Practical issues in the design and implementation of optimal binary filters limit observation windows to a size that is usually much smaller than the characteristic features found in document images. To overcome this drawback, a new composite binary filtering method is developed. A preprocessing stage based on linearly separable Boolean functions is used to gather information from a very large window. The pre-filtering results, together with the pixel values from a smaller window, are then fed into an optimal estimator. The proposed filter structure can be viewed either as a two-stage binary filter with linearly separable preprocessing or as a locally adaptive binary filter whose local adaptation is based on characteristic features extracted from a large window. Both software and hardware implementations of the filter exhibit reasonable complexity even for large windows. Some properties of the filter and a closed-form expression of the mean absolute error are derived. It is shown that, because there are no restrictions on the second stage, the whole composite structure can be optimally designed. The paper develops a practical design procedure that uses an efficient gradient method. Computer simulations are used to test the proposed filter against the optimal binary filter.
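The two-stage idea can be caricatured in a few lines: a linearly separable Boolean function (here simply a thresholded pixel count over the big window) summarizes large-window context, and its output is fused with small-window information in a second stage. The fusion rule below is only a placeholder for the trained optimal estimator, which the abstract does not specify, and the window sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def composite_binary_filter(img, big=7, small=3):
    """Sketch of a two-stage composite binary filter.

    Stage 1 is a linearly separable Boolean function: a thresholded
    count of set pixels in a big window. Stage 2 fuses that context
    bit with a small-window vote (a stand-in for the trained stage).
    """
    b = img.astype(float)
    context = uniform_filter(b, size=big) >= 0.5    # linearly separable stage
    local = uniform_filter(b, size=small)           # small-window evidence
    # placeholder fusion standing in for the optimal second-stage estimator
    return (local + 0.5 * context) >= 0.75
```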
Local adaptation of the filter parameters is a well-known solution that overcomes the fundamental tradeoff between noise rejection and image detail preservation. A remarkable class of local adaptation uses one or more decision filters to choose between several possible sets of parameters for the main filter. However, the design of these filters is typically based on ad-hoc solutions. Usually, if training-based design is required, the decision filters are chosen a priori and the optimization is performed only on the main filter. The paper shows that, if the main filters are linear or polynomial, a closed-form expression of the mean square error can be derived, and thus the whole structure can undergo optimization. Next, a particular case is considered, in which the decision filters are made by thresholding the output of linear or polynomial filters. A practical design procedure is developed for this case; it uses an efficient gradient-based method that can reach the solution in a few iterations. Experimental results are used to compare the optimized filter against ad-hoc solutions and non-adaptive optimal filters.
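In miniature, the decision-filter structure looks like this 1-D sketch: a linear decision filter's thresholded output picks, per sample, which of two main linear filters produces the output. All kernels and the threshold are hypothetical, and the joint optimization the paper performs is not shown here.

```python
import numpy as np

def switched_filter(x, decision_kernel, kernels, thresh=0.0):
    """Decision-filter adaptation: the thresholded output of a linear
    decision filter selects which main linear filter is used at each
    sample (1-D sketch with hypothetical kernels)."""
    d = np.convolve(x, decision_kernel, mode='same')   # decision signal
    y0 = np.convolve(x, kernels[0], mode='same')       # main filter 0
    y1 = np.convolve(x, kernels[1], mode='same')       # main filter 1
    return np.where(d > thresh, y1, y0)
```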
A method for isogram extraction from topographic maps is proposed and analyzed. The main part of the extraction is performed by automatic software based on nonlinear algorithms; any final corrections that may be needed are made in an interactive mode. Iterative procedures are used both to provide reliability and to minimize the number of operations performed by the user in the interactive mode. Illustrations are presented to clarify the operations, goals, and obtained results. Techniques for relief recovery are also proposed and their accuracy is studied.
A multiparameter binary τ-opening is a union of parameterized openings in which the parameters for each opening are individually defined and a structuring element can be parameterized relative to both size and shape. The reconstructive filter corresponding to an opening is defined by fully passing any grain not eliminated by the opening and deleting all other grains. Adaptive design results from treating the parameter vector of a reconstructive multiparameter τ-opening as the state space of a Markov chain. The present paper considers the relationship between Markovian queueing networks and adaptive multiparameter τ-openings for the signal-union-noise model.
The paper describes the application of Choquet integral filters to automatic object detection in laser radar (LADAR) imagery. Choquet integrals are nonlinear integrals with respect to non-additive measures. These integrals can be used to represent typical nonlinear filters such as order statistic filters, linear combination of order statistic filters, weighted median filters and others. A Choquet integral filter is characterized by a measure. The representation of these filters as integrals with respect to measures provides an opportunity for optimizing the filters by finding optimal measures. Both optimal and heuristic filters are designed and compared on real data.
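For a window of n pixels, the discrete Choquet integral sorts the values and weights them by successive differences of the measure on upper-level sets. The generality claimed in the abstract is easy to see: an additive (cardinality-proportional) measure recovers the mean, while a 0/1 measure on set size yields an order statistic such as the median. A minimal sketch, with the measure passed as an arbitrary set function (the interface is an assumption):

```python
import numpy as np

def choquet_filter(window, measure):
    """Discrete Choquet integral of the window values with respect to a
    set function `measure` over pixel indices (measure(empty) = 0 and
    measure(all indices) = 1 are assumed)."""
    x = np.asarray(window, dtype=float)
    order = np.argsort(x)                    # indices in increasing value order
    total = 0.0
    for k in range(len(x)):
        a_k = frozenset(order[k:])           # indices with value >= k-th smallest
        a_next = frozenset(order[k + 1:])
        total += x[order[k]] * (measure(a_k) - measure(a_next))
    return total
```

With `measure = lambda a: len(a) / n` the filter reduces to the sample mean; with a measure that is 1 on all majorities and 0 otherwise, it returns the median.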
We have previously proposed learning median and mean hybrid (LMMH) filters, which combine the desirable properties of linear and nonlinear filters. LMMH filters are designed using the LMS algorithm, so both the noisy signal and its original are required during learning. We call this pair of images (the noisy image and its original) the learning signals. In practical applications, however, the original of the noisy image is not available. In this paper, we propose a novel method for constructing learning signals for LMMH filters: we extract signal information from the noisy signal and synthesize the learning signals from that information. Simulations show that the learning signals obtained by the proposed method are effective for LMMH filter learning.
There is no formation model for natural images, unlike for speech or the specific signals generated by medical or satellite imagery. Autocorrelations and spectral analysis are convenient but limited tools. As Gaussianity is nothing more than a rough approximation, higher-order, or non-linear, models are required to account for the finer characteristics of real-world images. A joint modeling of neighboring pixels by means of finite mixture distributions is proposed. Each vector of M pixels is considered as being drawn from one of K M-variate distributions. Each component random vector is defined as the unitary transformation of a vector of M independent generalized-Gaussian random variables. This modeling technique makes it possible to tackle a problem of high dimensionality (the estimation of a joint distribution of large order) with a limited number of parameters. The standard Expectation-Maximization (EM) or Stochastic EM algorithms can be used to estimate the model parameters from the data. The procedure can be applied to blocks of pixels or sets of subband samples and is tested on a variety of digital images. The applications range from image compression and joint source and channel coding to image restoration and image segmentation.
From communication to pattern recognition, from low-level signal processing to high-level cognition, from practice to theory of engineering principles, the question of the inherent complexity of entities represented as sets in Euclidean space is of fundamental interest. In this paper, we present some fundamental theoretical results on how many randomly selected labelled example points it takes to reconstruct a set in Euclidean space, and thereby propose a morphological sampling theorem in the form of a Stochastic Morphological Sampling Theorem. Drawing on results and concepts from Mathematical Morphology and Learnability Theory, we pursue a set-theoretic approach and demonstrate some provable performances pertaining to Euclidean-set reconstruction from stochastic samples. In particular, we demonstrate a result towards the formulation of a stochastic (morphological) version of the Nyquist Sampling Theorem: under weak assumptions on the situation under consideration, the number of randomly drawn (positive) example points needed to reconstruct the target set is at most polynomial in the performance parameters and in the complexity of the target set, as loosely captured by size, dimension, and surface area. The reconstruction result of this paper, pertaining to the complexity of Euclidean sets, has natural interpretations for the process of smoothing modeled formally as a dilation, a morphological operation. Thus, in this paper, we formulate and demonstrate a certain fundamental (distribution-free) well-behaved aspect of smoothing by proving a fundamental result pertaining to the (set-theoretic) complexity of sets in Euclidean space.
At the SPIE conferences Nonlinear Image Processing VII and VIII, a layered graph network for image segmentation was presented. This O(N) method often gave good results, but it was not able to segment images with very strong noise. The method is therefore now modified. First, instead of the 'hard' Pixel Adjacency Graph (PAG) used before, a 'soft' or fuzzy PAG is defined via a degree of adjacency of 4-neighboring pixels. Second, the averaging over 4-neighbors is applied recursively, using a nonlinear weighting function that is closely connected with the degree of adjacency and that guarantees efficient noise reduction, edge preservation, and adaptation. The discrete nonlinear dynamic equation system describing the averaging process defines a Discrete Time Cellular Neural Network (CNN). Its stable states are the smoothed images. Then the soft PAG describing the edge strength and the hard PAG defining the segments can be calculated. The method can now cope with strong noise. Some results demonstrate its smoothing and segmenting capability.
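One plausible reading of the recursive soft-PAG averaging is the following sketch: each pixel relaxes toward a mean of its 4-neighbors, weighted by a degree of adjacency that decays with the gray-level difference. The Gaussian decay law, the relaxation step, and the parameters are assumptions; the abstract does not give the paper's exact weighting function or CNN dynamics.

```python
import numpy as np

def fuzzy_pag_smooth(img, beta=0.05, step=0.5, n_iter=20):
    """Recursive 4-neighbor averaging weighted by a soft degree of
    adjacency (periodic boundaries via np.roll for simplicity)."""
    u = img.astype(float)
    for _ in range(n_iter):
        nbrs = [np.roll(u, 1, 0), np.roll(u, -1, 0),
                np.roll(u, 1, 1), np.roll(u, -1, 1)]
        num = np.zeros_like(u)
        den = np.zeros_like(u)
        for v in nbrs:
            w = np.exp(-beta * (v - u) ** 2)   # ~1 inside a segment,
            num += w * v                       # ~0 across a strong edge
            den += w
        u = (1 - step) * u + step * num / den  # relax toward weighted mean
    return u
```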
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering with geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has an attractive theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours are modeled by cubic splines, and an affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy; this avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. A future implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
Unsharp masking (UM) is well known as one of the most classical techniques in image enhancement. Due to the presence of the highpass filter, the UM operator is very sensitive to noise. To overcome this defect, Ramponi proposed the cubic UM operator, which uses not only highpass filtering but also an edge sensor. By introducing the edge sensor, edge enhancement is realized without noise amplification, to a certain extent. It is clear that the combination of edge sensors and highpass filters is effective for sharpening images corrupted by noise. In this paper, we introduce fuzzy rules into the UM operation. The fuzzy rules link the outputs of the edge sensor and the highpass filter to conclusions about the sharpening component of the processed image. The effectiveness of the proposed method is shown by several application results.
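Ramponi's cubic UM in 1-D illustrates the edge-sensor idea the fuzzy rules generalize: the Laplacian correction is gated by a quadratic term that is large only across edges, so flat noisy regions are left nearly untouched. The gain `lam` is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def cubic_unsharp_1d(x, lam=1e-4):
    """1-D sketch of cubic unsharp masking: the Laplacian highpass term
    is multiplied by a quadratic edge sensor before being added back."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for i in range(1, len(x) - 1):
        highpass = 2 * x[i] - x[i - 1] - x[i + 1]   # discrete Laplacian
        edge_sensor = (x[i - 1] - x[i + 1]) ** 2    # large only near edges
        y[i] = x[i] + lam * edge_sensor * highpass
    return y
```

On a flat (even if noisy, low-contrast) region the sensor is near zero and the signal passes through; across a step edge both factors are large and the edge is overshot on both sides, i.e. sharpened.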
Morphological filters are investigated and employed for detecting and visualizing objects within an image. The techniques developed here will be employed on NASA's Earth Observing System (EOS) satellite data products for the purpose of anomaly detection. Previous efforts have shown the phase information in the spectral domain to be more significant than the magnitude information in representing the location of objects in an image. The magnitude information does provide some useful information for object location, but it is also sensitive to image illumination, blurring, and magnification variations, all of which influence the performance of object detection algorithms. Magnitude reduction techniques in the spectral domain can dramatically improve subsequent object detection methods by causing them to rely less on the magnitude and more on the phase information of the image. However, magnitude reduction enhances the high-frequency noise within an image, often causing unwanted noise to be interpreted as image objects. We propose three new techniques for improved object detection and noise reduction. Our first method employs varying magnitude reductions within radially concentric zones, using increasingly greater reductions in higher frequency zones. By employing this zonal magnitude-reduction technique, we manage to attenuate the high-frequency noise component while still maintaining the improved visualization performance of the magnitude reduction method. Our second technique utilizes several magnitude reductions of varying scale, performs object detection on each magnitude-reduced image, and combines the results for improved accuracy. This result-averaging method allows us to further reduce the false-alarm rate from high-frequency noise while increasing visualization performance. Our third method is a new technique based on the ratios of morphological filters. By combining classical morphological filters in this way, we are able to produce more robust results which can yield useful information as to the location of image objects.
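The zonal magnitude-reduction step can be sketched as follows: the spectrum's phase is kept intact while the magnitude is compressed by a per-zone exponent, with stronger compression (smaller exponent) in higher-frequency radial zones. The zone count and exponents below are illustrative choices, not the paper's values.

```python
import numpy as np

def zonal_magnitude_reduction(img, exponents=(1.0, 0.6, 0.3)):
    """Compress spectral magnitude per radial zone while preserving phase.
    exponents[k] is applied in the k-th concentric frequency zone."""
    F = np.fft.fftshift(np.fft.fft2(img))
    mag, phase = np.abs(F), np.angle(F)
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)       # radial frequency coordinate
    edges = np.linspace(0, r.max(), len(exponents) + 1)
    out_mag = np.empty_like(mag)
    for k, p in enumerate(exponents):
        zone = (r >= edges[k]) & (r <= edges[k + 1])
        out_mag[zone] = mag[zone] ** p         # stronger reduction for small p
    F2 = out_mag * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F2)))
```

With all exponents equal to 1 the transform round-trips, which is a handy sanity check.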
In the analysis of grayscale images, straight line segments are usually extracted using the Hough Transform. Straight line detection with the Hough Transform has the disadvantage that detecting peaks in the accumulator array is not always a reliable process, so a significant amount of error may result. In this paper, we propose to extract straight line segments from binary images using binary morphological operations. In addition to the endpoint coordinates, the width of a line segment can also be reliably computed in the process. In the proposed approach, a set of line-shaped fixed-length structuring elements with orientations ranging from 0 to 180 degrees is used to extract line segments of all orientations. The algorithm is flexible for different applications: line segments of different thickness, length, and orientation can be extracted with high precision. Experimental results obtained on engineering map drawings demonstrate the good performance of the algorithm.
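The orientation-selective extraction can be sketched with openings by line-shaped structuring elements: a segment survives the opening only in the orientation (and at or above the length) its SE matches. This sketch uses four quantized orientations rather than the full 0 to 180 degree set described above.

```python
import numpy as np
from scipy.ndimage import binary_opening

def extract_lines(img, length=5):
    """Open a binary image with line-shaped structuring elements at
    0, 45, 90, and 135 degrees; each opening keeps only segments at
    least `length` pixels long in that orientation."""
    h = np.ones((1, length), dtype=bool)        # 0 degrees (horizontal)
    v = np.ones((length, 1), dtype=bool)        # 90 degrees (vertical)
    d1 = np.eye(length, dtype=bool)             # one diagonal
    d2 = np.fliplr(np.eye(length, dtype=bool))  # the other diagonal
    img = img.astype(bool)
    return {ang: binary_opening(img, se)
            for ang, se in [(0, h), (90, v), (135, d1), (45, d2)]}
```

A horizontal run of 7 pixels survives the 0-degree opening in full but is erased by the 90-degree one, which is how orientation is recovered.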
Clustering algorithms are often used as unsupervised classifiers when minimal information about the classification problem is available. Clustering usually assigns a unique label, corresponding to a class, to each data point. In most implementations, however, these labels are just class indices and convey no information about which class each one represents. The classes corresponding to the formed clusters can be identified with heuristics or from points in the clusters whose classes are known. This paper describes a hybrid clustering approach based on a biased fuzzy C-means algorithm. Bias values, corresponding to the expectation that a data point is assigned to a given class, are derived from simple image processing operations and included as weighting factors in the clustering algorithm. The final labels retain the order imposed by the biases and can therefore be used to identify the classes of the clusters. The basic fuzzy C-means algorithm and the modifications needed to use biases are presented. Classification results on both synthetic and imagery data are presented and compared with non-biased clustering results. The results obtained with the biased method are qualitatively superior to those of the non-biased method when conservative biases are used for the classes, and the method can be applied when a completely supervised method is difficult or impractical.
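A minimal sketch of fuzzy C-means with multiplicative bias weights in the membership update follows; the paper's exact biasing formulation is not reproduced here, and with uniform biases the sketch reduces to plain FCM.

```python
import numpy as np

def biased_fcm(X, K, bias, m=2.0, n_iter=50, seed=0):
    """Fuzzy C-means with per-point, per-class bias weights.

    X: (N, D) data; bias: (N, K) prior weights multiplying the usual
    inverse-distance membership update (an assumed biasing scheme).
    Returns the (N, K) membership matrix and (K, D) cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), K))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                      # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted class means
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1) + 1e-12
        U = bias / d2 ** (1.0 / (m - 1.0))              # biased FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```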
In this paper, we address the issue of pose estimation for 3D polyhedral objects in 2D color image sequences. Given a 3D object model defined as a mesh or as a set of segments with adjacency properties, the generic approach proposed here involves a robust 2D segment extraction method. This method operates on two consecutive frames of the sequence and is based on three steps: (1) color segmentation, (2) color gradient map computation based on the minimum vector dispersion detector, and (3) edge modeling simulating the specific shape and magnitude of the gradient, together with local active segment-based 2D matching. The procedure of updating the edge model makes it possible to easily take partially occluded objects into account, and introducing active matching drastically reduces confusion and ambiguity. The 3D/2D matching of the model is achieved by minimizing the distance between the projected segments of the 3D model and the segments extracted from the 2D image. Results on simulated and real images are presented and discussed; the accuracy of the pose estimation depends mainly on the accuracy of the 3D object model.
We present the prototype of an OCR system that was designed and implemented at the Institute of Mathematics and Statistics of the University of Sao Paulo. The remarkable characteristic of this system is that all the necessary image processing tasks are performed by Mathematical Morphology operators (the so-called morphological operators). We have developed morphological operators to segment scanned images (i.e., identify objects such as characters, words, and paragraphs) and to recognize font styles and character semantics. The morphological operators that perform segmentation were designed by classical heuristic techniques, while the ones that recognize fonts and characters were designed automatically by new computational learning techniques. The fundamental idea underlying these techniques is the estimation of a morphological operator from observations of input-output image pairs that describe its ideal performance. The morphological operators designed have been integrated into a system that translates scanned images into RTF text files, with reasonable accuracy and processing time. The system has been developed on the KHOROS platform, using the MMach (for morphological operators designed heuristically) and PAC (for morphological operators designed by learning) toolboxes.
A new class of adaptive α-trimmed filters well suited for processing images corrupted with non-symmetrical-p.d.f. speckle and impulsive noise is proposed. It is shown that in certain cases one can obtain better speckle reduction and impulse removal by non-symmetrical trimming. Appropriate edge/detail preservation is ensured by adapting the scanning window size or by applying a detail-preserving Lpq-filter. Selection of the adaptation parameter is discussed as well. The efficiency and properties of the proposed filters are demonstrated and quantitatively evaluated on test and real data.
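The non-symmetrical trimming itself is simple to state: unequal fractions are dropped from the low and high ends of the sorted window before averaging, which suits noise with a skewed p.d.f. such as speckle. A sketch of just this core operation (the adaptive window-size selection is omitted):

```python
import numpy as np

def asym_trimmed_mean(window, alpha_low, alpha_high):
    """Non-symmetrical alpha-trimmed mean: discard the alpha_low fraction
    of the smallest samples and the alpha_high fraction of the largest,
    then average what remains."""
    x = np.sort(np.asarray(window, dtype=float))
    n = len(x)
    lo = int(np.floor(alpha_low * n))          # count trimmed from the bottom
    hi = n - int(np.floor(alpha_high * n))     # count kept at the top
    return x[lo:hi].mean()
```

Setting `alpha_low = alpha_high` recovers the classical symmetric trimmed mean; for positive impulsive outliers one would trim more aggressively from the high end.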
Frequency spectra are concise but complete descriptors of image texture. The peaks in the spectrum, or equivalently the dominant poles in an autoregressive model, represent the granularity and orientation of the texture -- two of its most salient visual characteristics. In this paper, a Gaussian model is developed for the peaks of the frequency spectrum, and simple warping functions are defined on these peaks to formulate a diverse and powerful set of operators, e.g. shape from texture, planar texture rendering using perspective projection, frontalization, and shift and scaling of texture. Our approach to shape from texture converts perspective distortion of texture into a range image by warping the spectrum, and our planar rendering technique warps the spectrum to simulate the perspective effect. Thus, our unified texture model gives rise to real-time algorithms for computer vision, graphics, and multimedia, which are direct and based on local modeling rather than global optimization.
In this paper, a unified method of image computing for spatial and spectral feature extraction is described, named the 'pixel swapping method.' The method facilitates identification of the types and features of objects, such as point-like, line-like, or region-like objects, and detection of line start/end points, line intersections, and vertices. It can also be used to identify spatial associations among objects. The method can be extended to non-linear image processing as well as to conventional numeric image processing. As an example of its application, automated road feature extraction from LANDSAT TM data is demonstrated, employing fuzzy spectral and spatial image processing based on the pixel swapping method.
Rational filters are extended to multichannel signal processing and applied to the image interpolation problem. The proposed nonlinear interpolator exhibits desirable properties such as edge and detail preservation. In this approach, the pixels of the color image are considered as 3-component vectors in the color space. The inherent correlation that exists between the different color components is therefore not ignored, leading to better image quality than that obtained by component-wise processing. Simulations show that the edges obtained using vector rational filters (VRF) are free from the blockiness and jaggedness that are usually present in images interpolated using linear techniques in particular, but also some nonlinear techniques, e.g. vector median hybrid filters (VFMH).
Color image thresholding is a special case of color clustering, commonly used for tasks such as object detection, region segmentation, enhancement, and target tracking. Compared to three-dimensional (3-D) color clustering, thresholding is computationally more efficient for computer implementation and pipelined hardware realization. Traditionally, this method operates on the particular color component whose distribution possesses more prominent peaks than the other two color histograms, in the expectation that the histogram peaks represent meaningful object areas. However, color component thresholding results are less reliable than those of 3-D clustering, because the valuable information in the other two color components is ignored in the region acceptance process. To improve the performance of thresholding, we describe a method that thresholds an input image three times, on three different color components independently. The best thresholds are selected by optimizing the within-group variance or directed divergence measure for the red, green, and blue distributions separately. The resulting three binary images are combined by means of a predicate logic function realized as a 3-input, 1-output majority logic gate. This enables the 1-D thresholding mechanism to incorporate the information of all the color components in the region acceptance process.
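The combination step can be sketched directly: each channel is thresholded independently (the thresholds would come from the within-group-variance or directed-divergence criterion, which is not implemented here) and the three binary maps are fused by a 2-of-3 majority vote.

```python
import numpy as np

def majority_threshold(rgb, thresholds):
    """Threshold each color channel independently, then accept a pixel
    when at least 2 of the 3 binary maps agree (3-input majority gate).
    rgb: (..., 3) array; thresholds: one value per channel."""
    votes = sum((rgb[..., c] > thresholds[c]).astype(int) for c in range(3))
    return votes >= 2
```

Unlike single-component thresholding, a pixel that barely fails one channel can still be accepted when the other two channels support it.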
A subjective analysis is performed on various classical and order-statistic-based color edge detectors. Order-statistic edge detectors treat the image pixels within a specific region statistically, so that outliers can be rejected from the general trends in the data. A different type of subjective rating system is employed here, and the rationale behind its use is explained. The importance of subjective edge detection lies in the development of a color edge detector that can accurately simulate what is seen by humans.
The amount of research published to date indicates an increasing interest in the area of color image processing. It is widely accepted that color conveys information about the objects in a scene, and this information can be used to further refine the performance of an imaging system. Processing of color image data using directional information has received increased attention lately due to the introduction of Vector Directional Filters. These ranked-order-type filters utilize the direction of color vectors to enhance, restore, and segment color images. The objective of this paper is two-fold: first, to present the different Vector Directional Filters already in use, focusing on their similarities and differences; and second, to introduce new non-parametric vector directional filters. Simulation studies involving color images are used to assess the performance of the different Vector Directional Filters and to compare them with other commonly used non-linear filters. The simulation studies reported indicate that the new adaptive filters are computationally attractive and have excellent performance.
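A minimal sketch of the basic vector directional filter, assuming the standard formulation in which the output is the window vector with the smallest summed angular distance to all the others (zero vectors are assumed absent from the window):

```python
import numpy as np

def bvdf(window):
    """Basic vector directional filter: rank the color vectors in the
    window by the sum of their angles to every other vector, and output
    the vector with the smallest sum (the directional 'median')."""
    vecs = np.asarray(window, dtype=float)
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    angles = np.arccos(np.clip(unit @ unit.T, -1.0, 1.0))
    return vecs[np.argmin(angles.sum(axis=1))]

# four reddish vectors and one blue outlier: the filter returns one of
# the reddish vectors, rejecting the directional outlier
window = [[200, 10, 10], [180, 20, 15], [190, 15, 5],
          [10, 10, 200], [210, 5, 10]]
out = bvdf(window)
```

Because ranking is done on vector directions rather than per-channel magnitudes, chromaticity outliers are rejected without desaturating the output, which is what distinguishes this family from componentwise order-statistic filters.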
Various nonlinear, fixed-neighborhood techniques for filtering color images based on local statistics have been proposed in the literature. We present adaptive neighborhood filtering (ANF) techniques for noise removal in color images. The main idea is to find, for each pixel in the image (called the 'seed' when being processed), a variable-shaped, variable-sized neighborhood that contains only pixels similar to the seed. Statistics computed from the pixels within the adaptive neighborhood are then used to derive the filter output. Results of the ANF techniques are compared with those given by several multivariate fixed-neighborhood filters: the double-window modified trimmed-mean filter, the generalized vector directional filter double-window α-trimmed mean filter, the adaptive hybrid multivariate filter, and the adaptive non-parametric filter with Gaussian kernel. It is shown that the ANF techniques provide the best visual results, effectively suppressing noise without blurring edges. The ANF results are also the best in terms of objective measures such as normalized mean-squared error and normalized color difference.
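The seed-centered region growing can be sketched as follows. The similarity test (Euclidean color distance against a tolerance `tol`) and the plain mean output are illustrative simplifications of the ANF statistics:

```python
import numpy as np
from collections import deque

def adaptive_mean(img, seed, tol=30.0):
    """Grow a variable-shaped, variable-sized neighborhood of pixels whose
    color is within `tol` (Euclidean distance) of the seed pixel, using
    4-connected flood fill, then output the mean color of the region."""
    h, w, _ = img.shape
    sy, sx = seed
    seed_val = img[sy, sx].astype(float)
    visited = np.zeros((h, w), dtype=bool)
    visited[sy, sx] = True
    region = [seed_val]
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                if np.linalg.norm(img[ny, nx].astype(float) - seed_val) <= tol:
                    visited[ny, nx] = True
                    region.append(img[ny, nx].astype(float))
                    queue.append((ny, nx))
    return np.mean(region, axis=0)

# two flat halves: the region grown from (0, 0) never crosses the edge,
# so averaging cannot blur it
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = 100
img[:, 2:] = 200
out = adaptive_mean(img, (0, 0))
```

Since the neighborhood stops at dissimilar pixels, the averaging never mixes colors across an edge, which is how these filters suppress noise without the edge blurring of fixed windows.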
In this paper, we address the problem of lossless and nearly-lossless multispectral compression of remote-sensing data acquired by SPOT satellites. Lossless compression algorithms classically have two stages: transformation of the available data, followed by coding. The purpose of the first stage is to decorrelate the data in an optimal way; in the second stage, coding is performed by means of an arithmetic coder. We discuss two well-known approaches for spatial as well as multispectral compression of SPOT images: (1) the efficiency of several predictive techniques (MAP, CALIC, 3D predictors) is compared, and the advantages of 2D versus 3D error feedback and context modeling are examined; (2) the use of wavelet transforms for lossless multispectral compression is discussed. Applications of the above-mentioned methods to quincunx sampling are then evaluated. Lastly, some results on how predictive and wavelet techniques behave when nearly-lossless compression is needed are given.
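As a representative of the causal 2D predictors compared here, the median edge detector (MED) used in LOCO-I/JPEG-LS can be sketched as follows (the paper's exact MAP and CALIC predictors may differ); the residuals it produces are what an arithmetic coder would then encode losslessly:

```python
import numpy as np

def med_predict(img):
    """MED (median edge detector) prediction: each pixel is predicted from
    its west (a), north (b) and north-west (c) causal neighbors, switching
    between min(a, b), max(a, b) and the planar estimate a + b - c
    depending on whether c suggests a horizontal or vertical edge.
    Returns the prediction residuals (out-of-image neighbors read as 0)."""
    img = img.astype(int)
    h, w = img.shape
    res = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if (x > 0 and y > 0) else 0
            if c >= max(a, b):
                pred = min(a, b)       # edge above: follow the west pixel
            elif c <= min(a, b):
                pred = max(a, b)       # edge to the left: follow the north pixel
            else:
                pred = a + b - c       # smooth area: planar prediction
            res[y, x] = img[y, x] - pred
    return res
```

On smooth or flat regions the residuals concentrate near zero, which is what makes the subsequent entropy coding stage effective; the transform is exactly invertible, so the scheme is lossless.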
A number of novel adaptive image compression methods have been developed using a new approach to data representation, a mixture of principal components (MPC). MPC, together with principal component analysis (PCA) and vector quantization (VQ), forms a spectrum of representations. The MPC approach still suffers from block-effect distortion. While existing lapped transforms eliminate this distortion, they do not take into account the need for adaptation on a block-to-block basis; further, their basis vectors are fixed, so they cannot be adapted in any optimal fashion from one image to another. In this paper, a lapped orthogonal projection is used to generate subblocks for both the classic Karhunen-Loeve transform (KLT) and the adaptive MPC. The resulting images are free of block-effect distortion, and the squared error can be reduced. Therefore, both the nonadaptive and adaptive methods under the projection tend to outperform the respective block methods in terms of both subjective criteria and squared error.
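The block-KLT stage (without the lapped projection) can be sketched as follows: flattened image blocks are projected onto their top-k principal components and reconstructed, so blocks that actually lie in a k-dimensional subspace are recovered exactly. Names and the eigendecomposition route are illustrative:

```python
import numpy as np

def klt_compress(blocks, k):
    """Karhunen-Loeve transform over a set of flattened image blocks:
    estimate the covariance from the data, keep only the k eigenvectors
    with the largest eigenvalues, and reconstruct each block from its
    k transform coefficients."""
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    cov = centered.T @ centered / len(blocks)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = eigvecs[:, ::-1][:, :k]          # top-k principal directions
    coeffs = centered @ basis                # analysis (k numbers per block)
    return coeffs @ basis.T + mean           # synthesis
```

Because the basis is estimated from the image's own blocks rather than fixed in advance, this is the adaptive ingredient that fixed-basis lapped transforms lack; the MPC generalization replaces the single basis with a mixture of locally fitted ones.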
Requirements for a good shape representation lead to descriptors that are object-centered and that have a notion of scale. These representations usually take the form of shape skeletons at multiple detail levels. The classical tool for skeleton extraction is the grassfire equation, whose process is lossless: the equation can be run backwards to recover the shape boundary from the shape skeleton. Many complicated strategies have been devised to assign significance to skeletal points in order to arrive at a skeleton scale space. A recent alternative approach introduces regularization directly into the skeleton extraction process by combining diffusion with the grassfire. Very recently, techniques similar in spirit, which combine nonlinear smoothing of the shape boundary with the grassfire to extract an axis-based description, have been presented independently. When diffusion is introduced into the formulation, however, the inverse equation is no longer stable. This is the issue we address in the context of the method presented by Tari and Shah for extracting nested symmetries from arbitrary images in arbitrary dimension. The basic tool used in the method is a specific distance function, the steady-state solution of an elliptic boundary value problem. We present an inverse equation and show how the whole distance surface may be obtained from a sparse representation, providing a means for determining the shape boundary from the shape skeleton. The presented technique can be used for feature-preserving compression.
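A hedged sketch of the forward computation only: a Tari-Shah-style field is the steady state of a screened Laplace equation, which can be approximated on a pixel grid by Jacobi iteration with the exterior clamped to 1. The parameter `rho`, the clamping convention, and the discretization are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def screened_distance_field(mask, rho=4.0, iters=500):
    """Jacobi iteration for  lap(v) = v / rho**2  inside the shape (mask),
    with v = 1 imposed outside.  The 5-point stencil gives the update
    v = (sum of 4 neighbors) / (4 + 1/rho**2); the converged field decays
    smoothly away from the boundary, like a regularized distance function."""
    v = np.where(mask, 0.0, 1.0)
    for _ in range(iters):
        nb = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
              np.roll(v, 1, 1) + np.roll(v, -1, 1))
        v = np.where(mask, nb / (4.0 + 1.0 / rho ** 2), 1.0)
    return v
```

The level curves of this field play the role of propagating fronts in the regularized grassfire; it is the instability of running such a diffusive process backwards that the paper's inverse equation addresses.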
It is well known that the family of hit-miss operators constitutes a sup-generating family for W-operators; that is, any W-operator can be represented as the supremum of hit-miss operators. We present here a new sup-generating family for W-operators: compositions of hit-miss operators with dilations. The representation based on this sup-generating family is called compact, since it may use fewer sup-generating operators than the hit-miss representation. Considering the W-operators that are both anti-extensive and idempotent (in a strict sense), we have also obtained a simplification of the compact representation. Furthermore, adding the hypothesis of increasingness, we have shown that the simplified compact representation reduces to a minimal realization of Matheron's classical representation for translation-invariant openings.
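A minimal sketch of the hit-miss operator underlying the representation, for binary images on a finite window (the function name and the border convention, skipping positions where the window does not fit, are illustrative):

```python
import numpy as np

def hit_miss(img, hit, miss):
    """Binary hit-or-miss transform: a pixel fires when the 'hit'
    structuring element fits entirely inside the foreground and the
    'miss' element fits entirely inside the background, with the origin
    at the centre of the (equal-shaped) boolean masks hit and miss."""
    h, w = img.shape
    kh, kw = hit.shape
    oy, ox = kh // 2, kw // 2
    out = np.zeros((h, w), dtype=bool)
    for y in range(oy, h - (kh - 1 - oy)):
        for x in range(ox, w - (kw - 1 - ox)):
            win = img[y - oy:y - oy + kh, x - ox:x - ox + kw]
            if np.all(win[hit]) and not np.any(win[miss]):
                out[y, x] = True
    return out

# example: detect isolated foreground pixels (hit = centre,
# miss = all 8 neighbours)
hit = np.zeros((3, 3), dtype=bool)
hit[1, 1] = True
miss = ~hit
img = np.zeros((5, 5), dtype=bool)
img[2, 2] = True
out = hit_miss(img, hit, miss)
```

Taking suprema (pixelwise OR) of such operators over a family of (hit, miss) pairs yields the classical representation of a W-operator; the paper's compact family composes each such term with a dilation to reduce the number of terms needed.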