Minimally invasive cardiac surgeries such as catheter-based radio-frequency ablation of atrial fibrillation require high-precision tracking of the inner cardiac surfaces in order to maintain constant electrode-surface contact. The majority of cardiac motion tracking systems are either limited to the outer surface or track only a few slices/sectors of the inner surface in echocardiography data, which is unrealizable in MIS due to the varying resolution of ultrasound with depth and speckle effects. In this paper, a system for high-accuracy real-time 3D tracking of both cardiac surfaces using sparse samples of the outer surface only is presented. The paper presents a novel approach that models inner-surface deformations as simple functions of outer-surface deformations in the spherical harmonic domain using multiple maximum-likelihood (ML) linear regressors. The tracking system uses subspace clustering to identify potential deformation spaces for the outer surface and trains the ML linear regressors on a training set derived from pre-operative MRI/CT scans. During tracking, sparse samples from the outer surface are used to identify the active outer-surface deformation space and to reconstruct the outer surface in real time under a least-squares formulation. The inner surface is then reconstructed from the tracked outer surface with the trained ML linear regressors. The high precision and robustness of the proposed system are demonstrated through results obtained on a real patient dataset, with tracking root mean square errors ≤ (0.23 ± 0.04) mm and ≤ (0.30 ± 0.07) mm for the outer and inner surfaces, respectively.
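As a minimal sketch of the regression step, the code below fits a least-squares linear map from outer-surface spherical harmonic coefficients to inner-surface coefficients. The coefficient dimensions, synthetic data, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch: learn a linear map B from outer-surface spherical
# harmonic (SH) coefficient vectors to inner-surface SH coefficients
# by ordinary least squares. Dimensions and data are synthetic.
rng = np.random.default_rng(0)
n_train, d_outer, d_inner = 200, 64, 64          # assumed sizes

X = rng.normal(size=(n_train, d_outer))          # outer-surface SH coeffs
true_B = rng.normal(size=(d_outer, d_inner))
Y = X @ true_B + 0.01 * rng.normal(size=(n_train, d_inner))  # inner coeffs

# Least-squares regressor (the paper trains one per deformation cluster).
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# At tracking time: predict inner-surface coefficients from a tracked
# outer-surface coefficient vector.
x_new = rng.normal(size=(1, d_outer))
y_pred = x_new @ B
print(y_pred.shape)  # (1, 64)
```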
This paper presents an adaptive digital image watermarking scheme that uses successive subband quantization (SSQ) and perceptual modeling. Our approach performs a multiwavelet transform to determine the optimal local image properties and the watermark embedding locations. The multiwavelet used in this paper is the DGHM multiwavelet with approximation order 2, chosen to reduce artifacts in the reconstructed image. A watermark is embedded into the perceptually significant coefficients (PSCs) of the image in each subband. The PSCs in the high-frequency subbands are selected by setting the threshold in each subband to one half of its largest coefficient. After the PSCs in each subband are selected, a perceptual model is combined with a stochastic approach based on the noise visibility function to produce the final watermark.
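A minimal sketch of the PSC selection rule follows, using a standard scalar wavelet (db2) as a stand-in for the DGHM multiwavelet, which common libraries do not ship; the image and decomposition level are placeholders.

```python
import numpy as np
import pywt

# Sketch: select perceptually significant coefficients (PSCs) in each
# high-frequency subband by thresholding at half the subband's largest
# coefficient magnitude. A scalar wavelet stands in for DGHM here.
image = np.random.rand(256, 256)                 # placeholder image
coeffs = pywt.wavedec2(image, 'db2', level=3)

psc_masks = []
for level_details in coeffs[1:]:                 # skip approximation band
    for band in level_details:                   # (cH, cV, cD)
        threshold = 0.5 * np.abs(band).max()
        psc_masks.append(np.abs(band) >= threshold)

print(sum(m.sum() for m in psc_masks), "PSCs selected")
```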
In this paper we consider a discretized version of the problem of optimal beam-form design for stationary radar target localization in the presence of white Gaussian noise. We show that the finite-horizon solution to this problem is equivalent to the construction and rotation of simplex structures in high dimensions. We present closed-form solutions that are optimal in the sense of minimizing a tight upper bound on the probability of mislocating the target. We compare our approach to a conventional exhaustive search and show its superiority.
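The abstract reduces the design to simplex construction and rotation in high dimensions; the sketch below builds a regular simplex centered at the origin using a standard construction (rotation is then just an orthogonal transform of the vertex set). This is generic geometry, not necessarily the paper's specific construction.

```python
import numpy as np

def regular_simplex(n):
    """Vertices of a regular n-simplex, centered at the origin,
    embedded in R^(n+1). All pairwise vertex distances are equal."""
    e = np.eye(n + 1)                  # standard basis vectors
    centroid = e.mean(axis=0)
    return e - centroid                # shift so the centroid is 0

V = regular_simplex(4)                 # 5 vertices in R^5
# Verify: all pairwise distances coincide.
d = [np.linalg.norm(V[i] - V[j]) for i in range(5) for j in range(i + 1, 5)]
print(np.allclose(d, d[0]))           # True
```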
KEYWORDS: Wavelets, Visualization, Digital watermarking, Distortion, Wavelet transforms, Signal processing, Reconstruction algorithms, Linear filtering, Digital filtering, Filtering (signal processing)
In this work, we propose new multiresolution techniques for data embedding in imagery. The wavelet extrema of the image are exploited to embed data. We use the wavelet extrema of the dyadic non-orthogonal wavelet transform. These extrema correspond to high-frequency points in the image, hence modifications in their neighborhoods cause only minor visual distortion.
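A minimal sketch of locating extrema candidates follows, using an undecimated (stationary) wavelet transform with an orthogonal wavelet as a stand-in for the paper's dyadic non-orthogonal transform; the image, wavelet, and 3x3 maxima rule are illustrative assumptions.

```python
import numpy as np
import pywt

# Sketch: find wavelet extrema candidates as local maxima of the
# detail-coefficient modulus of a stationary wavelet transform.
image = np.random.rand(64, 64)
(cA, (cH, cV, cD)), = pywt.swt2(image, 'haar', level=1)

modulus = np.sqrt(cH**2 + cV**2)
# A coefficient is an extremum candidate if it dominates its 3x3 block.
maxima = np.zeros_like(modulus, dtype=bool)
for i in range(1, modulus.shape[0] - 1):
    for j in range(1, modulus.shape[1] - 1):
        block = modulus[i-1:i+2, j-1:j+2]
        maxima[i, j] = modulus[i, j] == block.max() and modulus[i, j] > 0

print(maxima.sum(), "extrema candidates")
```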
We present a novel technique for detecting the presence of a wipe transition in video sequences and automatically identifying its type. Our scheme focuses on analyzing the characteristics of the underlying special edit effects and estimates the actual transitions by polynomial data interpolation. In particular, a B-spline polynomial curve-fitting technique is used to measure goodness of fit and thereby determine the presence of gradual transitions. Our approach is able to recover the original transition behavior of an edit effect even if it is distorted by various post-processing stages. Our wipe transition detector has been tested on various real video sequences to evaluate the performance of the proposed algorithms.
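As a hedged illustration of the curve-fitting test, the sketch below fits a smoothing B-spline to a per-frame feature curve and uses the fitting residual as a goodness-of-fit score; the feature, smoothing factor, and threshold are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Sketch: a low B-spline fitting residual over a window suggests a
# smooth gradual transition (e.g., a wipe) rather than noise or a cut.
frames = np.arange(60)
feature = np.clip((frames - 20) / 15.0, 0, 1) + 0.02 * np.random.randn(60)

tck = splrep(frames, feature, s=0.05)        # smoothing B-spline
fitted = splev(frames, tck)
residual = np.sqrt(np.mean((feature - fitted) ** 2))

print("RMS residual:", residual, "-> transition" if residual < 0.05 else "")
```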
KEYWORDS: Sensors, Signal detection, Acoustic emission, Principal component analysis, Neural networks, Metals, Time-frequency analysis, Data centers, Neurons, Signal generators
Acoustic emission (AE) signals, generated by the formation and growth of micro-cracks in metal components, have the potential for use in mechanical fault detection when monitoring complex-shaped components in machinery, including helicopters and aircraft. A major challenge for an AE-based fault detection algorithm is to distinguish crack-related AE signals from other interfering transient signals, such as fretting-related AE signals and electromagnetic transients. Although a controlled laboratory environment has fewer interference sources, other undesired sources still have to be considered. In this paper, we present methods that make their decisions based on features extracted from time-delay and joint time-frequency components by means of a Self-Organizing Map (SOM) neural network, using experimental data collected in a laboratory by colleagues at the Georgia Institute of Technology.
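A minimal SOM sketch follows for clustering transient feature vectors; the grid size, learning schedule, and synthetic features are assumptions, and the paper's feature extraction is not reproduced.

```python
import numpy as np

# Minimal self-organizing map (SOM) for clustering feature vectors
# (e.g., time-frequency features of AE events).
rng = np.random.default_rng(1)
features = rng.normal(size=(500, 8))          # placeholder feature vectors

grid, dim = (6, 6), features.shape[1]
weights = rng.normal(size=(*grid, dim))
coords = np.stack(np.meshgrid(*map(np.arange, grid), indexing='ij'), -1)

for t, x in enumerate(features):
    lr = 0.5 * np.exp(-t / 500)               # decaying learning rate
    sigma = 2.0 * np.exp(-t / 500)            # decaying neighborhood
    # Best-matching unit: node whose weight vector is closest to x.
    bmu = np.unravel_index(
        np.argmin(((weights - x) ** 2).sum(-1)), grid)
    # Gaussian neighborhood pull toward x.
    h = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

print("trained SOM weights:", weights.shape)  # (6, 6, 8)
```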
In this paper we report a novel method to estimate the scaling factor of a previously scaled watermarked image and the angle by which the image has been rotated. Scaling and rotation performed on a watermarked image, as part of the attacks the image may undergo, can very easily confuse the decoder unless it rescales and/or rotates the image back to its original size/orientation, i.e., recovers the lost synchronism. To be able to do so, the decoder needs to know by how much the image has been scaled and rotated, i.e., it needs both the scaling factor and the rotation angle. In our approach, we compute the Edges Standard Deviation Ratio (ESDR), which gives an accurate estimate of the scaling factor. The rotation angle is approximated by the Average Edges Angles Difference (AEAD). Both ESDR and AEAD are computed from wavelet maxima locations estimated from the non-orthogonal dyadic wavelet transform. The proposed scheme does not require the original image, provided that a proper normalization has been attained. Our method has proved robust over wide rotation and scaling ranges.
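In the spirit of an edges-standard-deviation ratio, the sketch below estimates a uniform scaling factor as the ratio of the spatial spread of edge points after versus before scaling. This is a simplified stand-in using a Sobel edge map, not the paper's wavelet-maxima-based ESDR.

```python
import numpy as np
from scipy import ndimage

def edge_spread(image, thresh=0.2):
    """Standard deviation of edge-point distances from their centroid."""
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh * mag.max())
    pts = np.column_stack([xs, ys]).astype(float)
    return np.linalg.norm(pts - pts.mean(0), axis=1).std()

original = np.zeros((100, 100)); original[30:70, 30:70] = 1.0
scaled = ndimage.zoom(original, 1.5, order=1)

print("estimated scale:", edge_spread(scaled) / edge_spread(original))
```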
KEYWORDS: Sensors, Signal detection, Interference (communication), Data modeling, Signal to noise ratio, Acoustic emission, Signal processing, Wave propagation, Inspection, Ultrasonics
Automatic monitoring techniques are a means to safely relax and simplify preventive maintenance and inspection procedures that are expensive and necessitate substantial downtime. Acoustic emissions (AEs), which are ultrasonic waves emanating from the formation or propagation of a crack in a material, provide a possible avenue for nondestructive evaluation. Though the characteristics of AEs have been extensively studied, most of the work has been done under controlled laboratory conditions at very low noise levels. In practice, however, the AEs are buried under a wide variety of strong interference and noise. These arise from a number of factors that, besides vibration, may include fretting, hydraulic noise and electromagnetic interference. Most of these noise events are transient and not unlike AE signals. In consequence, the detection and isolation of AE events from the measured data is not a trivial problem. In this paper we present some signal processing techniques that we have proposed and evaluated for this problem. We treat the AE problem as the detection of an unknown transient in additive noise, followed by a robust classification of the detected transients. We address the problem of transient detection using the residual error in fitting a special linear model to the data. Our group is currently working on transient classification using neural networks.
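A sketch of residual-based transient detection follows: fit an autoregressive model to the data by least squares and flag windows where the one-step prediction residual energy is large. The AR order, window, and threshold are assumptions; the paper's "special linear model" may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 8
signal = rng.normal(size=n)
signal[1200:1220] += 6.0 * np.hanning(20)        # injected transient

# Build lagged design matrix and solve for AR coefficients.
X = np.column_stack([signal[p - k - 1:n - k - 1] for k in range(p)])
y = signal[p:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

# Short-window residual energy as the detection statistic.
resid = y - X @ a
energy = np.convolve(resid**2, np.ones(32) / 32, mode='same')
threshold = 5 * np.median(energy)
print("detected near sample:", p + int(np.argmax(energy > threshold)))
```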
In this paper we consider a discretized version of the problem of optimal beam-forming, or radar transmit and receive pattern design, for stationary radar target localization in the presence of white Gaussian noise. We assume that the target is equally likely to be in one of N discrete cells and that the number of allowed observations L is strictly less than N, making an exhaustive search infeasible. We propose two new approaches to beam-form design for target localization: a fixed, off-line design approach and an adaptive, on-line design technique. In the fixed approach, the beam-form is designed off-line to minimize the probability of error after exactly L observations; in particular, the decision is available only after the last (Lth) observation is acquired. We show that this fixed design approach is directly related to signal constellation design in digital communications. In the adaptive approach, by contrast, the beam-form is optimized after each observation to minimize the probability of incorrectly localizing the target after the next observation is acquired, with the optimization relying on the previously acquired information. The adaptive approach performs better than the fixed one. Unlike binary search, these two approaches can work with any number of observations. This work falls under the area of optimal search, which deals with the optimal allocation of effort in search problems. The need for optimal search strategies arises in many areas, such as the radar target localization problems addressed here, fault location in circuits, localization of mobile stations in wireless networks, and Internet information searches.
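A minimal adaptive-search sketch under stated assumptions: maintain a posterior over the N cells, weight the next "beam" by the current posterior, and update via Bayes' rule from a Gaussian measurement. The greedy beam model and parameters are illustrative, not the paper's optimized beam-forms.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, snr = 16, 5, 2.0
target = rng.integers(N)
posterior = np.full(N, 1.0 / N)                  # uniform prior over cells

for _ in range(L):
    # Greedy beam: point energy where the posterior mass is.
    beam = posterior / np.linalg.norm(posterior)
    # Observation: beam gain on the target cell plus unit-variance noise.
    y = snr * beam[target] + rng.normal()
    # Bayes update: likelihood of y under "target in cell i".
    lik = np.exp(-0.5 * (y - snr * beam) ** 2)
    posterior = posterior * lik
    posterior /= posterior.sum()

print("true cell:", target, "MAP estimate:", int(np.argmax(posterior)))
```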
In this paper, we describe an efficient video indexing scheme based on the motion behavior of video objects for fast content-based browsing and retrieval in a video database. The proposed method constructs a dictionary of prototype objects. The first step in our approach extracts moving objects by analyzing layered images constructed from the coarse data in a 3D wavelet decomposition of the video sequence. These images capture motion information only. Moving objects are modeled as collections of interconnected rigid polygonal shapes in the motion sequences that we derive from the wavelet representation. The motion signatures of an object are computed from the rotational and translational motions associated with the elemental polygons that form the object. These signatures are finally stored as potential query terms.
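As a hedged illustration of the per-polygon motion estimate, the sketch below recovers the rotation angle and translation of a rigid polygon between two frames from corresponding vertices via the standard Kabsch/Procrustes solution; the correspondences and data are synthetic assumptions, and the paper's signature computation is not reproduced.

```python
import numpy as np

def rigid_motion_2d(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, Q.mean(0) - P.mean(0) @ R.T

theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.array([[0., 0.], [2., 0.], [2., 1.], [0., 1.]])   # polygon vertices
Q = P @ R_true.T + np.array([3.0, -1.0])                 # moved polygon

R, t = rigid_motion_2d(P, Q)
print("angle (deg):", np.degrees(np.arctan2(R[1, 0], R[0, 0])), "t:", t)
```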
KEYWORDS: Video, Data hiding, Video compression, Error control coding, Data modeling, Visualization, Quantization, Video coding, Signal to noise ratio, Computer security
We introduce a scheme for hiding supplementary data in digital video by directly modifying the pixels in the video frames. The technique requires no separate channel or bit interleaving to transmit the extra information. The data is invisibly embedded using a perception-based projection and quantization algorithm. The data hiding algorithm supports user-defined levels of accessibility and security. We provide several examples of video data hiding, including real-time video-in-video and audio-in-video. We also demonstrate the robustness of the data hiding procedure to video degradation and distortions, e.g., those that result from additive noise and compression.
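A minimal sketch of projection-and-quantization embedding, in the spirit of quantization index modulation: project a pixel block onto a pattern, quantize the projection to an even or odd multiple of a step to encode one bit, and adjust the block. The step size and flat pattern are illustrative, not the paper's perceptual model.

```python
import numpy as np

def embed_bit(block, bit, step=4.0):
    pattern = np.ones_like(block) / block.size       # projection direction
    proj = float((block * pattern).sum())            # scalar projection
    # Nearest lattice point whose parity encodes the bit.
    q = 2 * np.round((proj - bit * step) / (2 * step)) + bit
    return block + (q * step - proj)                 # shift onto lattice

def extract_bit(block, step=4.0):
    proj = (block / block.size).sum()
    return int(np.round(proj / step)) % 2            # parity carries the bit

block = np.random.rand(8, 8) * 255
marked = embed_bit(block, 1)
print("recovered bit:", extract_bit(marked))         # 1
```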
We propose a technique for searching through large image collections. Each database image is stored as a combination of potential query terms and non-query terms. The query terms are represented by affine-invariant B-spline moments and wavelet transform subbands. The dual representation supports a two-stage image retrieval system. A user-posed query is first mapped to a dictionary of prototype object contours represented by B-spline moments. The B-spline mapping reduces the query search space to a subset of the original database. Furthermore, it provides an estimate of the affine transformation between the query and the prototypes. The second stage consists of a set of embedded VQ dictionaries of multiresolution subbands of image objects. The estimated affine transformation is employed as a correction factor for the multiresolution VQ mapping. A simple bit-string matching algorithm compares the resulting query VQ codewords with the codewords of the database images for retrieval.
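A minimal sketch of the final bit-string matching step: compare a query's packed codeword bits against each database entry by Hamming distance and rank the matches. The codebook size and data are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_db, n_codewords = 1000, 64
database = rng.integers(0, 2, size=(n_db, n_codewords), dtype=np.uint8)

query = database[123].copy()
query[:3] ^= 1                               # perturb a few bits

hamming = (database != query).sum(axis=1)    # distance to every image
print("best match:", int(np.argmin(hamming)))  # 123
```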
We propose in this paper a novel lossless tree coding algorithm. The technique is a direct extension of the bisection method, the simplest case of the complexity reduction method proposed recently by Kieffer and Yang, which has been used for lossless data string coding. A reduction rule is used to obtain the irreducible representation of a tree, and this irreducible tree is entropy-coded instead of the input tree itself. The reduction is reversible, and the original tree can be fully recovered from its irreducible representation. More specifically, we search for equivalent subtrees from top to bottom. When equivalent subtrees are found, a special symbol is appended to the value of the root node of the first equivalent subtree, the root node of the second subtree is assigned the index that points to the first subtree, and all other nodes in the second subtree are removed. This procedure is repeated until the tree cannot be reduced further, which yields the irreducible tree, or irreducible representation, of the original tree. The proposed method can effectively remove the redundancy in an image and results in more efficient compression. It is proved that as the tree size approaches infinity, the proposed method offers optimal compression performance, and it is generally more efficient in practice than direct coding of the input tree. The proposed method can be directly applied to code wavelet trees in non-iterative wavelet-based image coding schemes. A modified method is also proposed for coding wavelet zerotrees in embedded zerotree wavelet (EZW) image coding. Although its coding efficiency is slightly reduced, the modified version maintains exact control of the bit rate and the scalability of the bit stream in EZW coding.
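A simplified sketch of the reduction rule follows: compare subtrees top-down via a canonical serialization and, when an equivalent subtree recurs, replace it with an index pointing at the first occurrence. The node and reference representation here is an illustrative choice, not the paper's exact encoding.

```python
def canon(node):
    """Canonical, hashable serialization of a (value, [children]) tree."""
    value, children = node
    return (value, tuple(canon(c) for c in children))

def reduce_tree(node, seen=None, counter=None):
    """Replace repeated subtrees by ('REF', index-of-first-occurrence)."""
    if seen is None:
        seen, counter = {}, [0]
    key = canon(node)
    if key in seen:
        return ('REF', seen[key])                # point to first subtree
    seen[key] = counter[0]
    counter[0] += 1
    value, children = node
    return (value, [reduce_tree(c, seen, counter) for c in children])

tree = ('a', [('b', [('c', []), ('d', [])]),
              ('b', [('c', []), ('d', [])])])    # two equivalent subtrees
print(reduce_tree(tree))
# ('a', [('b', [('c', []), ('d', [])]), ('REF', 1)])
```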
Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database searching, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic: typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion is defined over these compound structures. Furthermore, for Ottoman scripts the text images are gray-tone or color images, for reasons described in the paper. In our method the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transform, which corresponds to a linear subband decomposition, we also used a nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.
We develop a new coding technique for content-based retrieval of images and text documents which minimizes a weighted sum of the expected compressed file size and query response time. Files are coded into three parts: (1) a header consisting of concatenated query term codewords, (2) locations of the query terms, and (3) the remainder of the file. The coding algorithm specifies the relative position and codeword length of all query terms. Our approach leads to a progressive refinement retrieval by successively reducing the number of searched files as more bits are read. It also supports progressive transmission.
Wavelets are a new family of signal transformations. In a wavelet transform, the signal is decomposed in terms of dilates and translates of a single function, the mother wavelet. Wavelet transforms have a number of properties that can be exploited in many signal processing applications. These properties include an ability to trade time and frequency resolutions in a controlled manner, a relationship between the time behavior of a signal and the structure of its wavelet transform and compact representations for wide classes of deterministic and stochastic signals. This paper provides an overview of continuous and discrete wavelet transforms and reviews some of their applications.
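As a concrete companion to this overview, the sketch below decomposes a signal with the discrete wavelet transform and reconstructs it, illustrating the dilate/translate decomposition described above; the wavelet choice and test signal are placeholders.

```python
import numpy as np
import pywt

# Sketch: multiscale DWT analysis and perfect reconstruction.
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

coeffs = pywt.wavedec(signal, 'db4', level=5)    # multiscale coefficients
reconstructed = pywt.waverec(coeffs, 'db4')

print("levels:", len(coeffs) - 1,
      "max error:", np.max(np.abs(signal - reconstructed)))
```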
In this paper, we describe an improved version of our previous approach for low bit rate, near-perceptually transparent image compression. The method exploits both frequency- and spatial-domain visual masking effects and uses a combination of Fourier and wavelet representations to encode different bands. The frequency-domain masking model is based on psychophysical masking data for sinusoidal patterns, while the spatial-domain masking is computed with a modified version of Girod's model. A discrete cosine transform is used in conjunction with frequency-domain masking to encode the low-frequency subimages. The medium- and high-frequency subimages are encoded in the wavelet domain with spatial-domain masking. The main improvement over our previous technique is that a better model is used to calculate the tolerable error level for the subimages in the wavelet domain, and a boundary control is used to prevent or reduce ringing noise in the decoded image. This greatly improves decoded image quality at the same coding bit rates. Experiments show the approach can achieve very high quality to nearly transparent compression at bit rates of 0.2 to 0.4 bits/pixel for the image Lena.
We discuss the problem of waveform selection for wideband radar imaging. We show how this problem can be solved in a finite-dimensional setting. We discuss an algorithm for imaging an unknown target from a collection of targets belonging to M classes where M > 2. We present two possible algorithms for simultaneous imaging and classification of unknown targets. We describe how the radar simulations can be done in these cases.
We briefly discuss two approaches for enhancing the resolution of range-Doppler images (including synthetic aperture radar and inverse synthetic aperture radar images). The first approach can be used in conjunction with current radar systems that use a fixed narrowband waveform to acquire target data. It measures Doppler shifts more accurately than traditional fast Fourier transform based techniques and performs well in low signal-to-clutter regimes regardless of the statistical structure of the clutter. The second technique assumes that the radar can transmit different waveforms matched to the imaging task under consideration. It is based on the fact that the most accurate reconstruction of a range-Doppler target density function from N waveforms and their echoes is obtained by transmitting the singular functions corresponding to the N largest singular values of two kernels derived from the target density. We discuss two strategies for selecting the radar waveforms. The first strategy uses fixed waveforms that act as approximate singular functions for the kernels corresponding to wide classes of target densities. The second strategy adaptively selects the transmitted waveforms by solving a simultaneous target classification and image reconstruction problem.
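A minimal sketch of the singular-function idea: discretize a kernel derived from a target density as a matrix, take its SVD, and use the leading N singular vectors as the transmitted waveforms. The kernel below is a generic synthetic surrogate, not the paper's range-Doppler kernels.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 128, 4                                    # grid size, waveforms

density = rng.random((M, M))                     # surrogate target density
kernel = density @ density.T                     # symmetric PSD surrogate

U, s, _ = np.linalg.svd(kernel)
waveforms = U[:, :N]                             # top-N singular functions

energy_captured = s[:N].sum() / s.sum()
print(f"top-{N} singular values capture {energy_captured:.1%} of spectrum")
```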
This paper provides a simple parametrization of perfect reconstruction filter banks with arbitrary regularity. The parametrization shows that a perfect reconstruction filter bank with regularity N can be constructed using an overlapping discrete 2N-point Chebyshev transform followed by an orthogonal 2N × 2 transform and an arbitrary perfect reconstruction filter bank.
In this paper, we briefly discuss two approaches for enhancing the resolution of range-Doppler images (including SAR and ISAR images). The first approach can be used in conjunction with current radar systems that use a fixed narrowband waveform to acquire target data. It measures Doppler shifts more accurately than traditional FFT-based techniques and performs well in low signal-to-clutter regimes regardless of the statistical structure of the clutter. The second technique assumes that the radar can transmit different waveforms matched to the imaging task under consideration. It is based on the fact that the most accurate reconstruction of a range-Doppler target density function from N waveforms and their echoes is obtained by transmitting the singular functions corresponding to the N largest singular values of two kernels derived from the target density. We discuss two strategies for selecting the radar waveforms. The first strategy uses fixed waveforms that act as approximate singular functions for the kernels corresponding to wide classes of target densities. The second strategy adaptively selects the transmitted waveforms by solving a simultaneous target classification and image reconstruction problem.
KEYWORDS: Wavelets, Wavelet transforms, Computer programming, Linear filtering, Compact discs, Digital signal processing, Discrete wavelet transforms, Signal analyzers, Signal processing, Error analysis
This paper describes a real-time implementation of a novel wavelet-based audio compression method. The method is based on the discrete wavelet transform (DWT) representation of signals. A bit allocation procedure is used to allocate bits to the transform coefficients in an adaptive fashion. The bit allocation procedure has been designed to take advantage of the masking effect in human hearing; it minimizes the number of bits required to represent each frame of the audio signal at a fixed distortion level. The real-time implementation provides almost transparent compression of monophonic CD-quality audio signals (sampled at 44.1 kHz and quantized using 16 bits/sample) at bit rates of 64-78 kbits/sec. Our implementation uses two ASPI Elf boards, each of which is built around a TI TMS320C31 DSP chip. The time required for encoding a mono CD signal is about 92 percent of real time, and that for decoding about 61 percent.
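A hedged sketch of adaptive bit allocation follows: greedily give one bit at a time to the subband whose masking-weighted quantization distortion would drop the most. The masking thresholds here are random placeholders for a real psychoacoustic model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_bands, total_bits = 16, 64
variance = rng.random(n_bands) * 10          # subband signal power
masking = rng.random(n_bands) + 0.1          # assumed masking thresholds

def distortion(b):
    """Quantization distortion ~ var * 2^(-2b), weighted by masking."""
    return variance * np.exp2(-2.0 * b) / masking

bits = np.zeros(n_bands, dtype=int)
for _ in range(total_bits):
    gain = distortion(bits) - distortion(bits + 1)
    bits[int(np.argmax(gain))] += 1          # spend bit where it helps most

print("bits per band:", bits, "total:", bits.sum())
```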
A novel fast magnetic resonance imaging (MRI) technique is proposed. The technique is based on a proper design of the excitation profile. In particular, the excitation profile is chosen to be highly regular along its boundaries. It is shown that the free induction decay (FID) signals that result from such excitations decay much faster than those produced by traditional MRI approaches as long as the underlying magnetization and a number of its derivatives are continuous. The proposed technique exploits this fact together with an exact model of the slow decaying components of the FID signal (those produced by discontinuities in the magnetization or its derivatives) to produce fast high quality reconstructions of the underlying magnetization. Experimental results obtained with a 4 Tesla whole body scanner are included to demonstrate the viability of this approach.
Filtered 2-D fractionally differenced discrete processes are proposed as texture models. An iterative algorithm is presented for estimating the parameters of the model from a given texture image. Synthesized textures are provided to demonstrate the power of the proposed model.
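As a hedged illustration, the sketch below synthesizes a 2-D fractionally differenced field by shaping white noise in the frequency domain with the separable long-memory response |2 sin(ω/2)|^(-d) along each axis; the fractional parameters are illustrative, and the paper's additional filtering stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d1, d2 = 256, 0.3, 0.4

w = 2 * np.pi * np.fft.fftfreq(n)
gain1 = np.abs(2 * np.sin(w / 2)) ** (-d1)
gain2 = np.abs(2 * np.sin(w / 2)) ** (-d2)
gain1[0] = gain2[0] = 0.0                    # suppress the DC singularity

noise = np.fft.fft2(rng.normal(size=(n, n)))
texture = np.fft.ifft2(noise * np.outer(gain1, gain2)).real
print("texture stats:", texture.mean(), texture.std())
```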
A multiresolution model of a discrete fractional Brownian motion is developed. The model leads to a multiscale algorithm for constructing the optimal filter that must be used in detection problems involving a fractional Brownian motion and white noise.
Edge detection algorithms play an important role in automated vision analysis. In this paper we propose a model-based edge detection algorithm based on the second directional derivative at each pixel in the image. The new method models the picture as the output of a two-dimensional all-pole causal system with a quarter-plane region and a nonsymmetric half-plane region of support. To estimate the parameters of the model we used an overdetermined system of normal equations and applied least squares and total least squares approaches to solve for the unknown parameters. The estimated parameters were subsequently used in closed form to approximate the second directional derivative for detecting edges.
We compared our method, along with Haralick's [5] and Zhou et al.'s [10], against Canny's [2]. The first three algorithms are parametric: they are based on parametrizing the local behavior of the image. By contrast, the last algorithm is non-parametric, since it does not assume any particular model for the image.
We take into account the quantitative measures introduced in [9] to study the performance of the various algorithms on different synthetic images.
Keywords — QP: quarter plane, NSHP: nonsymmetric half plane, LS: least squares, TLS: total least squares
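A minimal sketch of the parameter-estimation step follows: stack an overdetermined system of quarter-plane prediction equations and solve it by ordinary least squares. The model order and image are illustrative assumptions, and the closed-form second-directional-derivative step is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)
img = rng.normal(size=(64, 64))
img[:, 32:] += 4.0                                # vertical step edge

order = 2                                         # QP support extent
rows, cols = [], []
for i in range(order, img.shape[0]):
    for j in range(order, img.shape[1]):
        # Predict img[i, j] from its quarter-plane past, excluding (0, 0).
        past = [img[i - k, j - l]
                for k in range(order + 1) for l in range(order + 1)
                if (k, l) != (0, 0)]
        rows.append(past)
        cols.append(img[i, j])

A, b = np.asarray(rows), np.asarray(cols)
params, *_ = np.linalg.lstsq(A, b, rcond=None)    # LS parameter estimate
print("estimated QP AR parameters:", np.round(params, 3))
```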