As a high resolution radar imaging modality, SAR detects and localizes non-moving targets accurately, giving it an advantage over lower resolution GMTI radars. Moving target detection is more challenging due to target smearing and masking by clutter. Space-time adaptive processing (STAP) is often used on multiantenna SAR to remove the stationary clutter and enhance the moving targets. Greenewald et al.1 showed that the performance of STAP can be improved by modeling the clutter covariance as a Kronecker product of spatial and temporal factors with low rank, providing robustness and reducing the number of training samples required. In this work, we present a massively parallel algorithm for implementing Kronecker product STAP, enabling application to very large SAR datasets (such as the 2006 Gotcha data collection) using GPUs. Finally, we develop an extension of Kronecker STAP that uses information from multiple passes to improve moving target detection.
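The clutter-cancellation idea can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: it assumes real-valued data, known low-rank Kronecker factors (in practice these are estimated from training samples), and illustrative sizes `ns`, `nt`, `ra`, `rb`.

```python
import numpy as np

rng = np.random.default_rng(0)
ns, nt = 4, 16            # spatial channels, slow-time pulses (illustrative)
ra, rb = 1, 3             # assumed clutter ranks in space and time

# Low-rank Kronecker factors of the clutter covariance: C = A kron B,
# with A = Ua Ua^T (ns x ns, rank ra) and B = Ub Ub^T (nt x nt, rank rb).
Ua, _ = np.linalg.qr(rng.standard_normal((ns, ra)))
Ub, _ = np.linalg.qr(rng.standard_normal((nt, rb)))

# Clutter lives in span(Ua kron Ub); STAP-style removal projects data
# onto the orthogonal complement of that subspace.
U = np.kron(Ua, Ub)                      # (ns*nt) x (ra*rb) clutter basis
P = np.eye(ns * nt) - U @ U.T            # orthogonal projector

clutter = U @ rng.standard_normal(ra * rb)   # pure clutter return
target = rng.standard_normal(ns * nt)        # generic moving-target return

print(np.linalg.norm(P @ clutter))   # ~0: clutter is cancelled exactly
print(np.linalg.norm(P @ target) / np.linalg.norm(target))  # mostly preserved
```

Because the clutter rank `ra * rb` is small relative to the joint dimension, a random target signature loses only a small fraction of its energy under the projection.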
We consider the application of KronPCA spatio-temporal modeling techniques1, 2 to the extraction of spatiotemporal features for video dismount classification. KronPCA performs a low-rank type of dimensionality reduction that is adapted to spatio-temporal data and is characterized by the T-frame multiframe mean μ and covariance Σ of p spatial features. For further regularization and improved inverse estimation, we also use the diagonally corrected KronPCA shrinkage methods presented in our earlier work.1 We apply this very general method to the modeling of the multivariate temporal behavior of HOG features extracted from pedestrian bounding boxes in video, with gender classification in a challenging dataset chosen as a specific application. The learned covariances for each class are used to extract spatiotemporal features which are then classified, achieving competitive classification performance.
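The single-term building block of KronPCA can be sketched as follows. This is an illustration under simplifying assumptions (one Kronecker term, exactly structured covariance, no diagonal correction or shrinkage), using the Van Loan-Pitsianis rearrangement that turns Kronecker structure in the covariance into rank-1 (PCA-style) structure:

```python
import numpy as np

def rearrange(S, m, q):
    # Maps the (m*q)x(m*q) matrix S to an (m*m)x(q*q) matrix R such that
    # R(A kron B) = vec(A) vec(B)^T, so Kronecker structure becomes rank-1.
    R = np.empty((m * m, q * q))
    for i in range(m):
        for j in range(m):
            R[i * m + j] = S[i*q:(i+1)*q, j*q:(j+1)*q].ravel()
    return R

rng = np.random.default_rng(0)
m, q = 3, 4                                      # temporal / spatial factor sizes
A = rng.standard_normal((m, m)); A = A @ A.T     # temporal factor (PSD)
B = rng.standard_normal((q, q)); B = B @ B.T     # spatial factor (PSD)
Sigma = np.kron(A, B)

# KronPCA-style estimation: SVD of the rearranged covariance; the leading
# singular triple recovers the Kronecker factors up to an inessential scaling.
U, s, Vt = np.linalg.svd(rearrange(Sigma, m, q))
A_hat = (s[0] * U[:, 0]).reshape(m, m)
B_hat = Vt[0].reshape(q, q)
print(np.allclose(np.kron(A_hat, B_hat), Sigma))   # True
```

In the cited methods a sum of several such Kronecker terms (plus a corrected diagonal) is retained, exactly as PCA retains several principal components.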
Along-track synthetic aperture radar (SAR) systems can be used to remove the bright stationary clutter in order to more easily detect and track moving targets within the scene. In this work, we derive a Cramér-Rao Lower Bound (CRLB) on the localization error of moving targets in these systems in the presence of correlated and spatially-varying noise. The CRLB is used to determine the minimum range/cross-range position and velocity estimation errors and is evaluated over radar and target parameters. Bounds are compared over design parameters, such as the number of antennas and the coherent processing interval, as well as over non-design parameters, such as signal quality and clutter coherence. Furthermore, the sensitivity of the CRLB to small estimator biases is analyzed.
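For intuition, a generic CRLB computation for a Gaussian model with correlated noise might look like the sketch below. The Jacobian `H`, the AR(1) noise covariance, and all sizes are illustrative assumptions, not the SAR geometry of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 32, 4                      # measurements; parameters (x, y, vx, vy)

H = rng.standard_normal((n, k))   # Jacobian of the measurement mean w.r.t. parameters

# Correlated, spatially varying noise covariance (illustrative AR(1) form).
rho, sig = 0.8, 1.0
idx = np.arange(n)
Sigma = sig**2 * rho ** np.abs(idx[:, None] - idx[None, :])

# For x ~ N(H theta, Sigma), the Fisher information is H^T Sigma^{-1} H, and
# the CRLB on each parameter is the corresponding diagonal of its inverse.
fim = H.T @ np.linalg.solve(Sigma, H)
crlb = np.linalg.inv(fim)
print(np.diag(crlb))              # per-parameter variance lower bounds
```

The same recipe extends to the biased case by pre- and post-multiplying the inverse Fisher information by the Jacobian of the bias, which is how sensitivity to small estimator biases can be assessed.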
In this paper we present a method called local hub screening for detecting hubs in a sparse correlation or partial correlation network over p nodes. The proposed method is related to hub screening1 where a Poisson-type limit is used to specify p-values on the number of spurious hub nodes found in the network. In this paper we also establish Poisson limits. However, instead of being on the global number of hub nodes found, here the Poisson limit applies to the node degree found at an individual node. This allows us to define asymptotic p-values that are local to each node. We will see that the convergence rates for the proposed local hub screening method are at least a factor of p faster than those of global correlation and hub screening.
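The local p-value construction can be sketched as follows, assuming a node's degree under the null follows a Poisson law with rate `lam`; the rate, degrees, and threshold here are hypothetical placeholders for the quantities derived in the paper:

```python
import math

def poisson_sf(d, lam):
    # P(D >= d) for D ~ Poisson(lam): the survival function used as a
    # local p-value on an observed node degree d under the Poisson limit.
    return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(d))

# Hypothetical screening step: threshold the (partial) correlations, count
# each node's degree, and compare it to an assumed null rate lam.
lam = 0.5
degrees = [0, 1, 4, 7]
pvals = [poisson_sf(d, lam) for d in degrees]
print(pvals)   # a small p-value flags a likely hub node
```

Nodes with high observed degree receive very small local p-values, while low-degree nodes are consistent with the spurious-edge null.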
In along-track synthetic aperture radar systems, measurements from multiple phase centers can be used to
remove bright stationary clutter in order to detect and estimate moving targets in the scene. The effectiveness
of this procedure can be improved by increasing the number of antennas in the system. However, due to
computational and communication constraints, it may be prohibitive to use a large number of antennas. In
this work, an efficient resource allocation policy is provided to exploit sparsity in the scene, namely that there
are few targets relative to the size of the scene. It is shown that even with limited computational resources,
one can have significant estimation and computational gains over non-adaptive strategies. Moreover, the
performance of the adaptive strategy approaches that of an oracle policy as the number of stages grows
large.
This paper proposes a hierarchical Bayesian model for multiple-pass, multiple antenna synthetic aperture
radar (SAR) systems with the goal of adaptive change detection. We model the SAR phenomenology directly,
including antenna and spatial dependencies, speckle and specular noise, and stationary clutter. We extend
previous work1 by estimating the antenna covariance matrix directly, leading to improved performance in
high clutter regions. The proposed SAR model is also shown to be easily generalizable when additional prior
information is available, such as locations of roads/intersections or smoothness priors on the target motion.
The performance of our posterior inference algorithm is analyzed over a large set of measured SAR imagery.
It is shown that the proposed algorithm provides results that are competitive with or better than common change detection
algorithms with additional benefits such as few tuning parameters and a characterization of the posterior
distribution.
This paper addresses the problem of joint image reconstruction and point spread function (PSF) estimation when the PSF of
the imaging device is only partially known. To solve this semi-blind deconvolution problem, prior distributions are specified for
the PSF and the 3D image. Joint image reconstruction and PSF estimation is then performed within a Bayesian framework,
using a variational algorithm to estimate the posterior distribution. The image prior distribution imposes an explicit atomic
measure that corresponds to image sparsity. Simulation results demonstrate that the semi-blind deconvolution algorithm
compares favorably with a previous Markov chain Monte Carlo (MCMC) version of myopic sparse reconstruction. It also
outperforms non-myopic algorithms that rely on perfect knowledge of the PSF. The algorithm is illustrated on real data from
magnetic resonance force microscopy (MRFM).
This paper develops a hierarchical Bayes model for multiple-pass, multiple antenna synthetic aperture radar
(SAR) systems with the goal of adaptive change detection. The model is based on decomposing the observed data
into a low-rank component and a sparse component, similar to Robust Principal Component Analysis, previously
developed by Ding, He, and Carin1 for E/O systems. The developed model also accounts for SAR phenomenology,
including antenna and spatial dependencies, speckle and specular noise, and stationary clutter. Monte Carlo
methods are used to estimate the posterior distribution of the variables in the model. The performance of
the proposed method is analyzed using synthetic images, and it is shown that the performance is robust to a
large space of operating characteristics without extensive tuning of hyperparameters. Finally, the method is
applied to measured SAR data, providing competitive results compared to standard methods with the additional
benefits of uncertainty characterization through a posterior distribution, explicit estimates of both foreground
and background components, and flexibility in including other sources of information.
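The low-rank-plus-sparse decomposition at the heart of the model can be illustrated with a simple deterministic sketch. The paper itself uses Monte Carlo posterior inference; the alternating hard-rank/soft-threshold scheme below, with illustrative rank and threshold, is only a stand-in for the same decomposition idea:

```python
import numpy as np

def lowrank_sparse_split(Y, rank, lam, iters=50):
    # Alternate a hard rank-r projection for the background L with
    # soft-thresholding for the sparse change component S (Y ~ L + S).
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

rng = np.random.default_rng(2)
L_true = np.outer(rng.standard_normal(40), rng.standard_normal(30))  # static scene
S_true = np.zeros((40, 30))
S_true[5, 5] = 10.0                                                  # a localized change
L_hat, S_hat = lowrank_sparse_split(L_true + S_true, rank=1, lam=0.5)
print(np.abs(S_hat).argmax() == np.abs(S_true).argmax())
```

The sparse component isolates the change while the low-rank component absorbs the static background, which is the behavior the Bayesian model encodes through its priors.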
We propose a solution to the image deconvolution problem where the convolution operator or point spread function
(PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited
to produce a few principal components explaining the uncertainty in a high dimensional space. Specifically,
we assume the image is sparse, reflecting the natural sparsity of magnetic resonance force microscopy
(MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of
our Bayesian myopic algorithm is superior to previously proposed algorithms such as the alternating minimization
(AM) algorithm for sparse images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
In this paper, we present a novel information embedding based approach for video indexing and retrieval. The
high dimensionality of video sequences remains a major challenge for video indexing and retrieval. Different
from the traditional dimensionality reduction techniques such as Principal Component Analysis (PCA), we embed
the video data into a low dimensional statistical manifold obtained by applying manifold learning techniques
to the information geometry of video feature probability density functions (PDFs). We estimate the PDF of the
video features using histogram estimation and Gaussian mixture models (GMM), respectively. By calculating
the similarities between the embedded trajectories, we demonstrate that the proposed approach outperforms
traditional approaches to video indexing and retrieval with real world data.
In this work, the problem of detecting and tracking targets with synthetic aperture radars is considered. A
novel approach is presented in which prior knowledge of target motion is assumed for small patches within the
field of view. Probability densities are derived as priors on the moving target signature within backprojected
SAR images, based on the work of Jao.1 Furthermore, detection and tracking algorithms are presented to take
advantage of the derived prior densities. It was found that pure detection suffered from a high false alarm rate
as the number of targets in the scene increased. Thus, tracking algorithms were implemented through a particle
filter based on the Joint Multi-Target Probability Density (JMPD) particle filter2 and the unscented Kalman
filter (UKF)3 that could be used in a track-before-detect scenario. It was found that the PF was superior to
the UKF, and was able to track 5 targets at 0.1 second intervals with a tracking error of 0.20 ± 1.61m (95%
confidence interval).
Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in various applications such as computational neuroscience, econometrics, and biological network discovery. Each of these systems has multiple interacting variables, and the key problem is the inference of the underlying structure of the systems (which variables are connected to which others) based on the output observations (such as multiple time trajectories of the variables).
Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that have a linear assumption on structure or yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems as well as true biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI in such problems.
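DTI itself is defined in the cited work; as a rough illustration of a directed, information-theoretic dependence measure, here is a plug-in transfer-entropy-style estimate for binary time series. The binary discretization, sequence length, and toy coupling are assumptions for the sketch:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    # Plug-in estimate of TE(X -> Y) for binary sequences:
    # sum over (y_next, y_prev, x_prev) of p * log[ p(y_next|y_prev,x_prev) / p(y_next|y_prev) ].
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p = c / n
        p_cond_full = c / pairs_yx[(yp, xp)]
        p_cond_marg = pairs_yy[(yn, yp)] / singles_y[yp]
        te += p * math.log(p_cond_full / p_cond_marg)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]        # y copies x with a one-step delay: X drives Y
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

The measure is asymmetric: it is large in the driving direction and near zero in the reverse direction, which is the property that lets such metrics recover directed structure.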
We consider the image reconstruction problem when the original image is assumed to be sparse and when partial
knowledge of the point spread function (PSF) is available. In particular, we are interested in recovering the
magnetization density given magnetic resonance force microscopy (MRFM) data, and we present an iterative
alternating minimization algorithm (AM) to solve this problem. A smoothing penalty is introduced on allowable
PSFs to improve the reconstruction. Simulations demonstrate its performance in reconstructing both the image
and unknown point spread function. In addition, we develop an optimization transfer approach to solving a
total variation (TV) blind deconvolution algorithm presented in a paper by Chan and Wong. We compare the
performance of the AM algorithm to the blind TV algorithm as well as to a TV based majorization-minimization
algorithm developed by Figueiredo et al.
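A bare-bones version of alternating minimization for semi-blind deconvolution can be sketched as follows, in 1-D with circular convolution. The sparsity assumption on the image and the smoothing penalty on the PSF used in the paper are omitted here, and all signal sizes are illustrative:

```python
import numpy as np

def conv(a, b):   # circular convolution via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def corr(a, b):   # adjoint of conv(a, .): circular correlation
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

rng = np.random.default_rng(3)
n = 64
t = np.arange(n)
x_true = np.zeros(n); x_true[[10, 30, 45]] = [1.0, 2.0, 1.5]   # sparse image
h_true = np.exp(-0.5 * ((t - 3) / 1.5) ** 2)                   # true PSF
y = conv(h_true, x_true) + 0.01 * rng.standard_normal(n)

def loss(x, h):
    return 0.5 * np.sum((y - conv(h, x)) ** 2)

# Alternating minimization: gradient steps in the image x and the PSF h.
x = np.zeros(n)
h = np.exp(-0.5 * ((t - 3) / 2.0) ** 2)      # partially known initial PSF
l0 = loss(x, h)
step = 0.02
for _ in range(300):
    x = x + step * corr(h, y - conv(h, x))   # descend in x with h fixed
    h = h + step * corr(x, y - conv(x, h))   # descend in h with x fixed
print(l0, loss(x, h))   # data-fit objective before and after
```

Each half-step solves a convex subproblem approximately, so the joint objective decreases even though the full problem is bilinear in the unknowns.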
Registration of medical images (intra- or multi-modality) is the first step before any analysis is performed.
The analysis includes treatment monitoring, diagnosis, volumetric measurements or classification to mention a
few. While pairwise registration, i.e., aligning a floating image to a fixed reference, is straightforward, it is not
immediately clear what cost measures could be exploited for the groupwise alignment of several images (possibly
multimodal) simultaneously. Recently however there has been increasing interest in this problem applied to atlas
construction, statistical shape modeling, or simply joint alignment of images to get a consistent correspondence
of voxels across all images based on a single cost measure.
The aim of this paper is twofold: (a) to propose a cost function, alpha mutual information computed using
entropic graphs, which is a natural extension of Shannon mutual information for pairwise registration, and (b) to
compare its performance with pairwise registration of the image set. We show that this measure can be
reliably used to jointly align several images to a common reference. We also test its robustness by comparing
registration errors for the registration process repeated at varying noise levels.
In our experiments we used simulated data, applying different B-spline based geometric transformations to the
same image and adding independent filtered Gaussian noise to each image. Non-rigid registration was employed
with Thin Plate Splines (TPS) as the geometric interpolant.
Landmine sensor technology research has proposed many types of sensors. Some of this technology has matured and can be implemented in sensor arrays that scan for landmines. Other technologies show great promise for distinguishing landmines from clutter, but are more practical to implement on a point-by-point basis as confirmation sensors. This work looks at the problem of scheduling confirmation sensors. Three sensors are considered for their ability to distinguish between landmines and clutter. A novel sensor scheduling algorithm is employed that learns an optimal policy for applying confirmation sensors based on reinforcement learning. A performance gain is realized in both probability of correct classification and processing time. The processing time savings come from not having to deploy all sensors for every situation.
See Through The Wall (STTW) radar applications have become of high importance to Homeland Security and Defense needs. In this work surface penetrating radar is simulated using basic physical principles of radar propagation and polarimetric scattering. Wavenumber migration imaging is applied to simulated radar data to produce polarimetric imagery. A detection algorithm is used to identify dihedral scattering signatures for mapping inner building walls. The detector utilizes two polarimetric channels: HH and VV to classify objects as outer wall, inner wall, or object within room. The final product is a data generated building model that maps the interior walls of the building.
Tagged Magnetic Resonance Imaging (MRI) is currently the reference modality for myocardial motion and strain analysis. Mutual Information (MI) based non-rigid registration has proven to be an accurate method to retrieve cardiac motion and overcome many drawbacks of previous approaches. In a previous work1, we used Wavelet-based Attribute Vectors (WAVs) instead of pixel intensity to measure similarity between frames. Since the curse of dimensionality forbids the use of histograms to estimate MI of high dimensional features, k-Nearest Neighbors Graphs (kNNG) were applied to calculate α-MI. Results showed that cardiac motion estimation was feasible with that approach. In this paper, the K-Means clustering method is applied to compute MI from the same set of WAVs. The proposed method was applied to four tagging MRI sequences, and the resulting displacements were compared with manual measurements made by two observers. Results show that more accurate motion estimation is obtained than with pixel intensity.
A method known as active sensing is applied to the problem of landmine detection. The platform utilizes two scanning sensor arrays composed of ground penetrating radar (GPR) and electromagnetic induction (EMI) metal detectors. Six simulated confirmation sensors are then dynamically deployed according to their ability to enhance information gain. Objects of interest are divided into ten class types: three classes are for metal landmines, three classes for plastic landmines, three classes for clutter objects, and one final class for background clutter. During the initial scan mode, a uniform probability is assumed for the ten classes. The scanning measurement assigns an updated probability based on the observations of the scanning sensors. At this point a confirmation sensor is chosen to re-interrogate the object. The confirmation sensor used is the one expected to produce the maximum information gain. An information measure, the Renyi divergence, is applied to the class probabilities to predict the information gain for each sensor. A time monitoring extension to the approach keeps track of time, and chooses the confirmation sensor based on a combination of maximum information gain and fastest processing time. Confusion matrices are presented for the scanning sensors showing the initial classification capability. Subsequent confusion matrices show the classification performance after applying active sensing myopically and with the time monitoring extension.
Landmine data for electromagnetic induction (EMI) and ground penetrating radar (GPR) sensors has been collected in two background environments. The first environment is clay and the second is gravel. A multi-modal detection algorithm that utilizes a Maximum A Posteriori (MAP) approach is applied to the clay background data and compared to a pair of similar MAP detectors that utilize only the single sensors. It is shown that the multi-modal detector is more powerful than both single mode detectors regardless of landmine type. The detectors are then applied to the data from the gravel background. It is shown that a more powerful performance is achieved if the MAP detector adapts to the statistics of the new background rather than training it a priori with broader statistics that encompass both environmental conditions.
This paper shows how information-directed diffusion can be used to manage the trajectories of hundreds of smart mobile sensors. This is an artificial physics method in which the sensors move stochastically in response to an information gradient and artificial inter-sensor forces that serve to coordinate their actions.
Measurements received by the sensors are centrally fused using a particle filter to estimate the Joint Multitarget Probability Density (JMPD) for the surveillance volume. The JMPD is used to construct an information surface which gives the expected gain for sensor dwells as a function of position. The updated sensor position is obtained by moving it in response to artificial forces derived from the information surface, which acts as a potential, and inter-sensor forces derived from a Lennard-Jones-like potential. The combination of information gradient and inter-sensor forces work to move the sensors to areas of high information gain while simultaneously ensuring sufficient spacing between the sensors. We evaluate the performance of this approach using a simulation study for an idealized Micro Air Vehicle with a simple EO detector and collected target trajectories. We find that this method provides a factor of 5 to 10 improvement in performance when compared to random uncoordinated search.
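The force construction can be sketched as follows. The Gaussian information surface, the constants `r0` and `eps`, and the two-sensor configuration are illustrative assumptions standing in for the JMPD-derived surface:

```python
import numpy as np

def info_force(pos, peak):
    # Attraction up the information surface: gradient of a Gaussian bump
    # centred on `peak` (the region of high expected information gain).
    d = peak - pos
    return d * np.exp(-0.5 * np.dot(d, d))

def lj_force(pos, other, r0=1.0, eps=0.1):
    # Lennard-Jones-like interaction: strongly repulsive when sensors are
    # closer than r0, weakly attractive beyond it, keeping spacing near r0.
    d = pos - other
    r = np.linalg.norm(d)
    mag = 24 * eps * (2 * (r0 / r) ** 12 - (r0 / r) ** 6) / r
    return mag * d / r

peak = np.array([5.0, 5.0])
a, b = np.array([4.0, 4.0]), np.array([4.1, 4.0])   # two closely spaced sensors
fa = info_force(a, peak) + lj_force(a, b)
fb = info_force(b, peak) + lj_force(b, a)
print(fa, fb)   # both pulled toward the peak, pushed apart from each other
```

Integrating these forces (with stochastic perturbations) moves sensors toward high-gain regions while the pairwise potential prevents them from bunching up.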
The use of affine image registration based on normalized mutual information (NMI) has recently been proposed by Frangi et al. as an automatic method for assessing brachial artery flow mediated dilation (FMD) for the characterization of endothelial function. Even though this method solves many problems of previous approaches, there are still some situations that can lead to misregistration between frames, such as the presence of adjacent vessels due to probe movement, muscle fibres or poor image quality. Despite its widespread use as a registration metric and its promising results, MI is not a panacea and can occasionally fail. Previous work has attempted to include spatial information into the image similarity metric. Among these methods, the direct estimation of α-MI through Minimum Euclidean Graphs allows spatial information to be included and seems suitable to tackle the registration problem in vascular images, where well oriented structures corresponding to vessel walls and muscle fibres are present. The purpose of this work is twofold. Firstly, we aim to evaluate the effect of including spatial information in the performance of the method suggested by Frangi et al. by using α-MI of spatial features as similarity metric. Secondly, the application of image registration to long image sequences in which both rigid motion and deformation are present will be used as a benchmark to prove the value of α-MI as a similarity metric, and will also allow us to make a comparative study with respect to NMI.
We present a new approach for pattern matching which is applicable to very high dimensional features. This approach is based on maximizing a novel non-linear measure of "mutual information" constructed from the k nearest neighbor graph over the feature vector set.
We present an information-based method for sensor management that tasks a sensor to make the measurement maximizing the expected gain in information. The method is applied to the problem of tracking multiple targets. The underlying tracking methodology is a multiple target tracking scheme based
on recursive estimation of a Joint Multitarget Probability Density (JMPD), which is implemented using particle filtering methods. This Bayesian method for tracking multiple targets allows nonlinear, non-Gaussian target motion and measurement-to-state coupling. The sensor management scheme is predicated on maximizing the expected Renyi Information Divergence between the current JMPD and the JMPD after a measurement has been made. The Renyi Information Divergence, a generalization of the Kullback-Leibler Distance, provides a way to measure the dissimilarity between two densities. We use the Renyi Information Divergence to evaluate the expected information gain for each of the possible measurement decisions, and select the measurement that maximizes the expected information gain for each sample.
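A minimal numerical sketch of the selection rule, for a single target on a small discrete grid with an assumed binary detector (hit rate `pd`, false-alarm rate `pf`), might look like:

```python
import numpy as np

def renyi_div(p, q, alpha=0.5):
    # D_alpha(p||q) = 1/(alpha-1) * log sum p^alpha q^(1-alpha);
    # alpha -> 1 recovers the Kullback-Leibler divergence.
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

def expected_gain(prior, cell, pd=0.9, pf=0.1, alpha=0.5):
    # Expected divergence between posterior and prior when dwelling on `cell`,
    # averaged over the two possible detector outputs.
    like = np.full_like(prior, pf)
    like[cell] = pd                                       # P(detect | target in cell)
    p_det = np.sum(like * prior)
    post1 = like * prior / p_det                          # posterior given a detection
    post0 = (1 - like) * prior / (1 - p_det)              # posterior given no detection
    return p_det * renyi_div(post1, prior, alpha) + (1 - p_det) * renyi_div(post0, prior, alpha)

prior = np.array([0.70, 0.15, 0.10, 0.05])   # single-target density over 4 cells
gains = [expected_gain(prior, c) for c in range(4)]
print(np.argmax(gains))   # the cell chosen for the next dwell
```

In the JMPD setting the same expectation is taken over particles representing the joint multitarget density rather than over a small discrete grid.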
This paper addresses the problem of tracking multiple moving targets by estimating their joint multitarget probability density (JMPD). The JMPD technique is a Bayesian method for tracking multiple targets that allows nonlinear, non-Gaussian target motions and measurement to state coupling. JMPD simultaneously estimates both the target states and the number of targets. In this paper, we give a new grid-free implementation of JMPD based on particle filtering techniques and explore several particle proposal strategies, resampling techniques, and particle diversification methods. We report the effect of these techniques on tracker performance in terms of tracks lost, mean squared error, and computational burden.
In this paper we treat the problem of robust entropy estimation given a multidimensional random sample from an unknown distribution. In particular, we consider estimation of the Renyi entropy of fractional order which is insensitive to outliers, e.g. high variance contaminating distributions, using the k-point minimal spanning tree. A greedy algorithm for approximating the NP-hard problem of computing the k-minimal spanning tree is given which is a generalization of the potential function partitioning method of Ravi et al. The basis for our approach is an asymptotic theorem establishing that the log of the overall length or weight of the greedy approximation is a strongly consistent estimator of the Renyi entropy. Quantitative robustness of the estimator to outliers is established using Hampel's method of influence functions. The structure of the influence function indicates that the k-MST is a natural extension of the 1D α-trimmed mean for multi-dimensional data.
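A sketch of the MST length functional as an entropy estimator is below, using the full MST (k = n, i.e. no trimming) and omitting the fixed quasi-additivity constant in the estimator; sample sizes and distributions are illustrative:

```python
import numpy as np

def mst_length(pts, gamma=1.0):
    # Prim's algorithm on the complete Euclidean graph; returns the sum of
    # edge lengths raised to the power gamma (the weighted length functional).
    n = len(pts)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    in_tree = np.zeros(n, bool); in_tree[0] = True
    best = d2[0].copy()
    total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = np.argmin(best)
        total += np.sqrt(best[j]) ** gamma
        in_tree[j] = True
        best = np.minimum(best, d2[j])
    return total

def renyi_entropy_mst(pts, alpha=0.5):
    # H_alpha ~ 1/(1-alpha) * log( L_gamma / n^alpha ), gamma = d(1-alpha);
    # the fixed constant log(beta) is omitted here for simplicity.
    n, d = pts.shape
    gamma = d * (1 - alpha)
    return np.log(mst_length(pts, gamma) / n**alpha) / (1 - alpha)

rng = np.random.default_rng(4)
tight = rng.uniform(0, 1, (200, 2))      # concentrated support: lower entropy
wide = rng.uniform(0, 4, (200, 2))       # same shape, 16x the area
print(renyi_entropy_mst(tight), renyi_entropy_mst(wide))
```

Because the MST length scales with the spread of the sample, the estimate correctly increases with the support of the distribution; robustness in the paper comes from using k < n points, which discards outlier vertices.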
Images and signals can be characterized by representations invariant to time shifts, spatial shifts, frequency shifts, and scale changes as the situation dictates. Advances in time-frequency analysis and scale transform techniques have made this possible. The next step is to distinguish between invariant forms representing different classes of image or signal. Unfortunately, additional factors such as noise contamination and 'style' differences complicate this. A ready example is found in text, where letters and words may vary in size and position within the image segment being examined. Examples of complicating variations include font used, corruption during fax transmission, and printer characteristics. The solution advanced in this paper is to cast the desired invariants into separate subspaces for each extraneous factor or group of factors. The first goal is to have minimal overlap between these subspaces and the second goal is to be able to identify each subspace accurately. Concepts borrowed from high-resolution spectral analysis, but adapted uniquely to this problem, have been found to be useful in this context. Once the pertinent subspace is identified, the recognition of a particular invariant form within this subspace is relatively simple using well-known singular value decomposition techniques.
The well-known uncertainty principle is often invoked in signal processing. It is also often considered to have the same implications in signal analysis as does the uncertainty principle in quantum mechanics. The uncertainty principle is often incorrectly interpreted to mean that one cannot locate the time-frequency coordinates of a signal with arbitrarily good precision, since, in quantum mechanics, one cannot determine the position and momentum of a particle with arbitrarily good precision. Renyi information of the third order is used to provide an information measure on time-frequency distributions. The results suggest that even though this new measure tracks time-bandwidth results for two Gabor log-ons separated in time and/or frequency, the information measure is more general and provides a quantitative assessment of the number of resolvable components in a time frequency representation. As such, the information measure may be useful as a tool in the design and evaluation of time-frequency distributions.
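The component-counting behavior of the third-order Renyi measure can be sketched on a discretized time-frequency distribution; the Gaussian "logon" bumps below stand in for spectrogram components, and the grid size and widths are illustrative:

```python
import numpy as np

def renyi3(P):
    # Third-order Renyi information of a normalized TF distribution:
    # R3 = -(1/2) log2 sum P^3; 2^R3 behaves like the number of components.
    P = P / P.sum()
    return -0.5 * np.log2(np.sum(P ** 3))

def logon(t0, f0, grid=64, width=3.0):
    # Gaussian bump at (t0, f0), a stand-in for a Gabor logon's TF signature.
    t, f = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    return np.exp(-((t - t0) ** 2 + (f - f0) ** 2) / (2 * width**2))

one = logon(32, 32)
two = logon(16, 16) + logon(48, 48)          # well-separated pair of components
print(renyi3(two) - renyi3(one))             # ~1 bit: twice as many components
```

Doubling the number of well-separated components adds almost exactly one bit to the measure, which is the counting property the paper exploits.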