Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is of great importance, and much research has been done on this recently. Our previous Generalized Haar-Walsh Transform (GHWT) is a multiscale transform for signals on graphs, which generalizes the classical Haar and Walsh-Hadamard Transforms. This article proposes the extended Generalized Haar-Walsh Transform (eGHWT). The eGHWT and its associated best-basis selection algorithm for graph signals significantly improve on the performance of the previous GHWT at a similar computational cost, O(N log N), where N is the number of nodes of an input graph. While the previous GHWT/best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than (1.5)^N possible bases, the eGHWT/best-basis algorithm can find a better one by searching through more than 0.618 · (1.84)^N possible bases. This article describes the details of the eGHWT/best-basis algorithm and demonstrates its superiority using several examples, including genuine graph signals as well as conventional digital images viewed as graph signals.
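As a rough aside on where counts of this form come from (our illustration, not part of the paper's statement): in a dyadic binary partition tree over N leaves, each node is either kept whole or split into its two children, so the number of selectable orthonormal bases B(N) satisfies B(N) = B(N/2)^2 + 1 with B(1) = 1, which already exceeds (1.5)^N once N >= 8. The eGHWT's larger count arises from a richer, two-dimensional tiling of its coefficient table. A minimal Python sketch of the binary-tree count:

    # Count the bases choosable from a dyadic binary tree over n leaves,
    # where each node is either kept whole or split into its two children.
    def num_bases(n):
        return 1 if n == 1 else num_bases(n // 2) ** 2 + 1

    for n in (2, 4, 8, 16, 32):
        print(n, num_bases(n), round(1.5 ** n, 1))  # count vs. (1.5)^n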
The application of graph Laplacian eigenvectors has been quite popular in the graph signal processing field: one can use them as ingredients to design smooth multiscale bases. Our long-term goal is to study and understand the dual geometry of graph Laplacian eigenvectors. In order to do that, it is necessary to define a certain metric that measures the behavioral differences between pairs of eigenvectors. Saito (2018) considered the ramified optimal transportation (ROT) cost between the squares of the eigenvectors as such a metric. Cloninger and Steinerberger (2018) proposed a way to measure the affinity (or ‘similarity’) between the eigenvectors based on their Hadamard (HAD) product. In this article, we propose a simplified ROT metric that is more computationally efficient and introduce two more ways to define the distance between the eigenvectors, i.e., the time-stepping diffusion (TSD) metric and the difference of absolute gradient (DAG) pseudometric. The TSD metric measures the cost of “flattening” the initial graph signal via a diffusion process up to a certain time; hence it can be viewed as a time-dependent version of the ROT metric. The DAG pseudometric is the l^2-distance between the feature vectors derived from the eigenvectors, namely, the absolute gradients of the eigenvectors. We then compare the performance of ROT, HAD, and the two new “metrics” on different kinds of graphs. Finally, we investigate their relationship as well as their pros and cons.
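To make the DAG pseudometric concrete, here is a minimal Python sketch based only on the description above (the authors' exact feature construction may differ): the feature vector of an eigenvector is its absolute gradient over the edges, and the pseudometric is the l^2-distance between two such feature vectors. Taking absolute values makes the features invariant to the sign ambiguity of eigenvectors.

    import numpy as np

    def dag_pseudometric(phi_i, phi_j, edges):
        # edges: list of (u, v) node-index pairs of the graph
        g_i = np.array([abs(phi_i[u] - phi_i[v]) for u, v in edges])
        g_j = np.array([abs(phi_j[u] - phi_j[v]) for u, v in edges])
        return np.linalg.norm(g_i - g_j)  # l^2-distance of the features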
KEYWORDS: Time-frequency analysis, Fourier transforms, Signal processing, Algorithm development, Signal analyzers, Algorithms, Filtering (signal processing), Signal analysis, Digital signal processing
In recent years, the synchrosqueezing transform (SST) has gained popularity as a method for the analysis of signals that can be broken down into multiple components determined by instantaneous amplitudes and phases. One such version of SST, based on the short-time Fourier transform (STFT), enables the sharpening of instantaneous frequency (IF) information derived from the STFT, as well as the separation of amplitude-phase components corresponding to distinct IF curves. However, this SST is limited by the time-frequency resolution of the underlying window function, and may not resolve signals exhibiting diverse time-frequency behaviors with sufficient accuracy. In this work, we develop a framework for an SST based on a “quilted” short-time Fourier transform (SST-QSTFT), which allows adaptation to signal behavior in separate time-frequency regions through the use of multiple windows. This motivates us to introduce a discrete reassignment frequency formula based on a finite difference of the phase spectrum, ensuring computational accuracy for a wider variety of windows. We develop a theoretical framework for the SST-QSTFT in both the continuous and the discrete settings, and describe an algorithm for the automatic selection of optimal windows depending on the region of interest. Using synthetic data, we demonstrate the superior numerical performance of SST-QSTFT relative to other SST methods in a noisy context. Finally, we apply SST-QSTFT to audio recordings of animal calls to demonstrate the potential of our method for the analysis of real bioacoustic signals.
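As an illustration of the general idea of a finite-difference reassignment frequency (a generic phase-vocoder-style sketch in Python, not the SST-QSTFT formula itself), one can estimate instantaneous frequency from the phase increment of the STFT between consecutive frames:

    import numpy as np
    from scipy.signal import stft

    def if_from_phase_difference(x, fs, nperseg=256, noverlap=192):
        # STFT: scipy returns frequencies f, frame times t, coefficients V
        f, t, V = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        dt = t[1] - t[0]                                # hop in seconds
        dphi = np.angle(V[:, 1:] * np.conj(V[:, :-1]))  # wrapped phase step
        # deviation of the phase step from each bin's nominal advance,
        # rewrapped to (-pi, pi], then converted to a frequency correction
        dev = np.angle(np.exp(1j * (dphi - 2 * np.pi * f[:, None] * dt)))
        return f, t[:-1], f[:, None] + dev / (2 * np.pi * dt)  # IF map in Hz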
In this paper, we apply the scattering transform (ST), a nonlinear map whose architecture resembles that of a convolutional neural network (CNN), to the classification of underwater objects using sonar signals. The ST formalizes the observation that the filters learned by a CNN have wavelet-like structure. We achieve effective binary classification both on a real dataset of unexploded ordnance (UXO) and on synthetically generated examples. We also explore the effects on the waveforms of changes in the object domain (e.g., translation, rotation, and changes in acoustic impedance), and examine the consequences of theoretical results for the scattering transform. We show that the scattering transform is capable of excellent classification on both the synthetic and real problems, thanks to its quasi-invariance properties, which are well suited to translations and rotations of the object.
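To convey the ST's basic structure (wavelet modulus followed by averaging, the source of its translation quasi-invariance), here is a minimal first-order sketch in Python; the Gabor filter bank and its parameters are our simplifications, not the filters used in the paper.

    import numpy as np

    def gabor_bank(n, num_scales):
        # crude dyadic band-pass bank in the Fourier domain (a stand-in
        # for the Morlet wavelets commonly used in practice)
        freqs = np.fft.fftfreq(n)
        return np.array([np.exp(-(freqs - 0.4 / 2**j) ** 2
                                / (2 * (0.1 / 2**j) ** 2))
                         for j in range(num_scales)])

    def scattering_order1(x, num_scales=4):
        # first-order scattering: low-pass averages of |x * psi_j|;
        # the modulus + averaging pattern yields quasi-invariance
        n = len(x)
        psi_hat = gabor_bank(n, num_scales)
        phi_hat = np.exp(-np.fft.fftfreq(n) ** 2 / (2 * 0.01 ** 2))  # low-pass
        X = np.fft.fft(x)
        out = []
        for j in range(num_scales):
            u = np.abs(np.fft.ifft(X * psi_hat[j]))  # wavelet modulus
            out.append(np.fft.ifft(np.fft.fft(u) * phi_hat).real)
        return np.array(out)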
In recent years, the advent of new sensor technologies and social network infrastructure has provided huge opportunities and challenges for analyzing data recorded on such networks. In the case of data on regular lattices, computational harmonic analysis tools such as the Fourier and wavelet transforms have well-developed theories and proven track records of success. It is therefore quite important to extend such tools from the classical setting of regular lattices to the more general setting of graphs and networks. In this article, we first review the basics of graph Laplacian matrices, whose eigenpairs are often interpreted as the frequencies and the Fourier basis vectors on a given graph. We point out, however, that such an interpretation is misleading unless the underlying graph is either an unweighted path or cycle. We then discuss our recent efforts in constructing multiscale basis dictionaries on a graph, including the Hierarchical Graph Laplacian Eigenbasis Dictionary and the Generalized Haar-Walsh Wavelet Packet Dictionary, which are viewed as generalizations of the classical hierarchical block DCTs and the Haar-Walsh wavelet packets, respectively, to the graph setting. Finally, we demonstrate the usefulness of our dictionaries by using them to simultaneously segment and denoise 1-D noisy signals sampled on regular lattices, a problem where classical tools have difficulty.
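As a quick concrete check of the claim about paths (our illustration in Python): on the unweighted path graph, the eigenvectors of the unnormalized Laplacian L = D - A coincide, up to sign and normalization, with the DCT-II basis vectors, which is what licenses the "graph frequency" reading in that special case.

    import numpy as np

    n = 8
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path P_n
    L = np.diag(A.sum(axis=1)) - A                                # L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)
    # DCT-II vectors cos(pi*k*(i+1/2)/n) span the same eigenspaces
    k, i = np.arange(n)[:, None], np.arange(n)[None, :]
    dct2 = np.cos(np.pi * k * (i + 0.5) / n)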
We present a new method for discrimination of data classes or data sets in a high-dimensional space. Our approach combines two important, relatively new concepts in high-dimensional data analysis, namely, Diffusion Maps and the Earth Mover's Distance, in a novel manner so that it is more tolerant to noise and honors the characteristic geometry of the data. We also illustrate that this method can be used for a variety of applications in high-dimensional data analysis and pattern classification, such as quantifying shape deformations and discriminating acoustic waveforms.
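For orientation, the standard diffusion-map embedding (on top of which an Earth Mover's Distance between embedded point sets could then be computed) looks roughly as follows in Python; the kernel scale, the normalization, and the truncation are generic choices here, not the paper's.

    import numpy as np

    def diffusion_map(X, eps, t=1, dim=3):
        # rows of X are data points; Gaussian kernel, row-normalized
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        P = np.exp(-D2 / eps)
        P /= P.sum(axis=1, keepdims=True)  # Markov transition matrix
        lam, psi = np.linalg.eig(P)
        order = np.argsort(-lam.real)
        lam, psi = lam.real[order], psi.real[:, order]
        # drop the trivial constant eigenvector; scale coordinates by lam^t
        return (lam[1:dim + 1] ** t) * psi[:, 1:dim + 1]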
The polyharmonic local cosine transform (PHLCT), presented by Yamatani and Saito in 2006, is a tool for local image analysis and synthesis. It can compress and decompress images with better visual fidelity, fewer blocking artifacts, and better PSNR than the JPEG-DCT algorithm. In this paper, we generalize PHLCT to the high-dimensional case and apply it to the compression of high-dimensional data. For this purpose, we give the solution of the high-dimensional Poisson equation with the Neumann boundary condition. In order to reduce the number of PHLCT coefficients, we use not only the d-dimensional PHLCT decomposition but also the (d-1)-, (d-2)-, . . . , 1-dimensional PHLCT decompositions. We find that our algorithm compresses high-dimensional data more efficiently than the block DCT algorithm. We demonstrate our claim using both synthetic and real 3D datasets.
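For readers unfamiliar with the construction, the decomposition in each block Omega splits the data f as f = u + v, where u solves a Neumann problem of the following form (a sketch of the PHLCT setup as we recall it from the 2-D case; the paper extends it to d dimensions, and the constant source term K is forced by the Neumann compatibility condition):

    \Delta u = K \ \text{in } \Omega, \qquad
    \frac{\partial u}{\partial \nu} = \frac{\partial f}{\partial \nu}
    \ \text{on } \partial\Omega, \qquad
    K = \frac{1}{|\Omega|} \int_{\partial\Omega}
        \frac{\partial f}{\partial \nu}\, d\sigma.

The residual v = f - u then has vanishing normal derivatives on the block boundary, so its cosine coefficients decay faster than those of f, which is the source of the compression gain.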
The polyharmonic local sine transform is a recently introduced transform that can be used to compress and interpolate datasets. This transform is based on both Fourier theory and the solution to the polyharmonic equation (e.g., the Laplace equation, the biharmonic equation). The efficiency of this transform can be traced to theorems from Fourier analysis that relate the smoothness of the input function to the decay of the transform coefficients. This decay property is crucial and allows us to neglect high-order coefficients. We will discuss the case of the Laplace sine transform. We introduce an arbitrary-dimensional version of this transform for datasets sampled on rectangular domains and generalize the 2-dimensional algorithm to n dimensions. Since the efficiency of the Laplace sine transform relies on the smoothness of the data, we cannot directly apply it to discontinuous data. In order to deal with this, we have developed a segmentation algorithm that allows us to isolate smooth regions from those with discontinuities. The result is a transform that will locally adapt to the features in a dataset, allowing us to perform meaningful tests on real, discontinuous datasets.
We introduce a practical and improved version of the Polyharmonic Local Fourier Transform (PHLFT) called PHLFT5. After partitioning an input image into a set of rectangular blocks, the original PHLFT decomposes each block into the polyharmonic component and the residual. The polyharmonic component solves the polyharmonic equation with the boundary condition that matches the values and the normal derivatives, from first order up to higher orders, of the solution along the block boundary with those of the original image block. Thanks to this boundary condition, the residual component can be expanded into a complex Fourier series without facing the Gibbs phenomenon, and its Fourier coefficients decay faster than those of the original block. Due to the difficulty of estimating the higher-order normal derivatives, however, only the harmonic case (i.e., Laplace's equation) has been implemented to date. In that case, the Fourier coefficients of the residual decay as O(||k||^-2), where k is the frequency index vector. Unlike the original version, PHLFT5 only imposes the boundary values and the first-order normal derivatives as the boundary condition, which can be estimated using our robust algorithm. We then derive a fast algorithm to compute a fifth-degree polyharmonic function that satisfies this boundary condition. The Fourier coefficients of the residual now decay as O(||k||^-3). We shall also show our preliminary numerical experiments, which demonstrate the superiority of PHLFT5 over the original harmonic PHLFT in terms of the decay rate of the residual and the interpretability of oriented textures.
KEYWORDS: Image compression, Data modeling, Wavelets, Image segmentation, Quantization, Image processing, Head, Signal analyzers, Fourier transforms, Signal to noise ratio
We introduce a new local sine transform that can completely localize image information in both the space and spatial frequency domains. Instead of constructing a basis, we first segment an image into local pieces using characteristic functions, then decompose each piece into two components: the polyharmonic component and the residual. The polyharmonic component is obtained by solving the elliptic boundary value problem associated with the so-called polyharmonic equation (e.g., the Laplace equation, the biharmonic equation, etc.) given the boundary values (the pixel values along the borders created by the characteristic functions), possibly with estimates of the normal derivatives at the boundaries. Once this component is obtained, it is subtracted from the original local piece to obtain the residual, whose Fourier sine series expansion has quickly decaying coefficients, since the boundary values of the residual (possibly with their normal derivatives) vanish. Using this transform, we can distinguish intrinsic singularities in the data from the artificial discontinuities created by the local windowing. We will demonstrate the superior performance of this new transform in terms of image compression relative to some of the conventional methods, such as JPEG/DCT and the lapped orthogonal transform, using actual examples.
KEYWORDS: Fourier transforms, Radon, Signal processing, Information visualization, Discrete wavelet transforms, Image segmentation, Visualization, Wavelets, Signal analyzers, Algorithm development
The Local Fourier Transform (LFT) provides a convenient tool for simultaneously concentrating a signal and its Fourier transform. However, certain properties of this algorithm make it unattractive for various applications. In this paper, some of these disadvantages are explored, and a new approach to localized Fourier analysis, the continuous boundary local Fourier transform (CBLFT), is proposed to correct some of these shortcomings. Results ranging from segmentation to representation cost to compression are also presented.
We examine the similarity and difference between sparsity and statistical independence in image representations in a very concrete setting: we use the best-basis algorithm to select the sparsest basis and the least statistically-dependent basis from basis dictionaries for a given dataset. In order to understand their relationship, we use synthetic stochastic processes as well as image patches of natural scenes. Our experiments and analysis so far suggest the following: 1) both sparsity and statistical independence criteria selected similar bases for most of our examples, with minor differences; 2) sparsity is more computationally and conceptually feasible as a basis selection criterion than statistical independence, particularly for data compression; 3) the sparsity criterion can and should be adapted to individual realizations rather than to the whole collection of realizations to achieve the maximum performance; 4) the importance of the orientation selectivity of the local Fourier and brushlet dictionaries was not clearly demonstrated due to the boundary effect caused by the folding and local periodization.
Fractographs are elevation maps of the fracture zone of some broken material. The technique employed to create these maps often introduces noise composed of positive or negative 'spikes' that must be removed before further analysis. Since the roughness of these maps contains useful information, it must be preserved. Consequently, conventional denoising techniques cannot be employed. We use continuous and discrete wavelet transforms of these images, and the properties of wavelet coefficients related to pointwise Hölder regularity, to detect and remove the spikes.
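Schematically, such detection exploits the fact that the magnitude of wavelet coefficients above a Hölder-alpha point scales like s^(alpha + 1/2) across scales s (with L^2-normalized wavelets), so Dirac-like spikes produce anomalously small, even negative, log-log slopes. The following Python sketch estimates that slope pointwise; the wavelet, scales, and any threshold are our generic choices, not those of the paper.

    import numpy as np

    def mexican_hat(t, s):
        u = t / s
        return (1.0 - u**2) * np.exp(-u**2 / 2) / np.sqrt(s)

    def holder_slope(x, scales=(2, 4, 8, 16)):
        # slope of log|Wx(s, .)| vs. log s at each sample; spike locations
        # show markedly smaller slopes than the surrounding rough texture
        logW = []
        for s in scales:
            t = np.arange(-4 * s, 4 * s + 1)
            w = np.convolve(x, mexican_hat(t, s), mode="same")
            logW.append(np.log(np.abs(w) + 1e-12))
        logW = np.array(logW)
        ls = np.log(np.array(scales, dtype=float))
        ls -= ls.mean()
        # per-sample least-squares slope across scales
        return (ls[:, None] * (logW - logW.mean(axis=0))).sum(axis=0) / (ls**2).sum()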
The local Fourier dictionary contains a large number of localized complex exponential functions. Representations of a function using the dictionary elements locally inherit many nice properties of the conventional Fourier representation, such as translation invariance and orientation selectivity. In this paper, after giving an intuitive review of its construction, we describe an algorithm to recover location-dependent shifts of local features in signals for matching and registration, and propose a best local translation basis selected from the local Fourier basis. We then report our preliminary results on the statistical analysis of natural scene images using the local Fourier dictionary, whose purpose is to examine the importance of sparsity, statistical independence, and orientation selectivity in the representation and modeling of such images.
Statistical independence is one of the most desirable properties for a coordinate system for representing and modeling images. In reality, however, truly independent coordinates may not exist for a given set of images, or it may be computationally too difficult to obtain such coordinates. Therefore, it makes sense to obtain the least statistically dependent coordinate system efficiently. This basis, which we call the Least Statistically-Dependent Basis (LSDB), can be rapidly computed by minimizing, over the basis library, the sum of the differential entropies of the coordinates. This criterion is quite different from the Joint Best Basis (JBB) proposed by Wickerhauser. We demonstrate the use of the LSDB for image modeling and compare its performance with the JBB and the Karhunen-Loève basis.
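In code, the LSDB criterion amounts to an additive best-basis cost: estimate the differential entropy of each coordinate across realizations and sum. A minimal Python sketch (the histogram estimator and bin count are our stand-ins, not necessarily the paper's estimator):

    import numpy as np

    def marginal_entropy(c, bins=64):
        # histogram estimate of the differential entropy of one coordinate
        p, edges = np.histogram(c, bins=bins, density=True)
        w = np.diff(edges)
        nz = p > 0
        return -np.sum(p[nz] * np.log(p[nz]) * w[nz])

    def lsdb_cost(C):
        # C: (n_realizations, n_coords) coefficients at one dictionary node;
        # the usual bottom-up best-basis search keeps a parent node iff its
        # cost is below the sum of its children's costs
        return sum(marginal_entropy(C[:, j]) for j in range(C.shape[1]))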
We describe an extension of the 'best-basis' method that constructs an orthonormal basis maximizing class separability for signal classification problems. This algorithm reduces the dimensionality of these problems by using basis functions that are well localized in the time-frequency plane as feature extractors. We tested our method on two synthetic datasets: we extracted features (the expansion coefficients of the input signals in these basis functions), supplied them to conventional pattern classifiers, and then computed the misclassification rates. These examples show the superiority of our method over the direct application of these classifiers to the input signals. As a further application, we also describe a method to extract the signal component from data consisting of a signal and a textured background.
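One standard way to score the class separability of a dictionary node in such a best-basis search (a generic Python sketch; the paper's exact discriminant measure may differ) is the symmetrized relative entropy between the classes' normalized coefficient-energy maps:

    import numpy as np

    def discriminant(E1, E2, eps=1e-12):
        # E1, E2: per-class energies of the coefficients at one node;
        # normalize to distributions, then symmetrized relative entropy
        p = E1 / (E1.sum() + eps)
        q = E2 / (E2.sum() + eps)
        return (np.sum(p * np.log((p + eps) / (q + eps)))
                + np.sum(q * np.log((q + eps) / (p + eps))))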
We describe an algorithm to estimate a discrete signal from its noisy observation, using a library of orthonormal bases (consisting of various wavelets, wavelet packets, and local trigonometric bases) and the information-theoretic criterion called minimum description length (MDL). The key to effective random noise suppression is that the signal component in the data may be represented efficiently by one or more of the bases in the library, whereas the noise component cannot be represented efficiently by any basis in the library. The MDL criterion gives the best compromise between the fidelity of the estimation result to the data (noise suppression) and the efficiency of the representation of the estimated signal (signal compression): it selects the 'best' basis and the 'best' number of terms to be retained out of the various bases in the library in an objective manner. Because of the use of the MDL criterion, our algorithm is free from any parameter settings or subjective judgments. This method has been applied successfully to various geophysical datasets containing many transient features.
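In its simplest approximate form (as we recall it; the constants and the exact penalty may differ in the paper), the criterion minimized over each basis m in the library and each retained-term count k reads

    \mathrm{AMDL}(k, m) \;=\; \tfrac{3}{2}\, k \log_2 N
    \;+\; \tfrac{N}{2}\, \log_2 \bigl\lVert x - x^{(m)}_k \bigr\rVert^2,

where x_k^{(m)} is the reconstruction keeping the k largest-magnitude coefficients of the data x in basis m: the first term penalizes model complexity (each retained coefficient and its index), and the second measures the misfit.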