We evaluate a recently reported algorithm for computing frequency-dependent radar imagery in scenarios relevant to spectral feature identification. For each image pixel in the spatial domain, a computed frequency-dependent reflectivity is used to produce a corresponding spectral feature identification. We show that this novel image reconstruction technique offers considerable flexibility for achieving fine spectral resolution in comparison with previous techniques based on conventional synthetic aperture radar (SAR), yet it introduces new challenges with regard to achieving fine range resolution.
Shape- and motion-reconstruction is inherently ill-conditioned: estimates rapidly degrade in the presence of noise, outliers, and missing data. For moving-target radar imaging applications, methods that infer the underlying geometric invariance within back-scattered data are the only known way to recover completely arbitrary target motion. We previously demonstrated algorithms that recover the target motion and shape even with very high data drop-out (e.g., greater than 75%), which can occur due to self-shadowing, scintillation, and destructive-interference effects. We did this by combining our previous result, that a set of rigid scattering centers forms an elliptical manifold, with new methods to estimate low-rank subspaces via convex optimization. This result is especially significant because it enables us to utilize more data, ultimately improving the stability of the motion-reconstruction process.
Since then, we have developed a feature-based shape- and motion-estimation scheme built on newly developed object-image relations (OIRs) for moving targets observed in bistatic measurement geometries. In addition to generalizing the previous OIR-based radar imaging techniques from monostatic to bistatic geometries, our formulation allows us to image multiple closely-spaced moving targets, each of which may exhibit missing data due to target self-shadowing as well as extreme outliers (scattering centers that are inconsistent with the assumed physical or geometric models). The new method exploits the underlying structure of the model equations: far-field radar data matrices can be decomposed into multiple low-rank subspaces while sparse outliers are simultaneously located.
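To make the low-rank-plus-sparse structure concrete, the following is a minimal sketch of a convex decomposition of a data matrix into a low-rank part and sparse outliers (a simple iterative-shrinkage variant of principal component pursuit). It is illustrative only and assumes numpy; the formulation described above, which also handles missing data and multiple subspaces, is more involved.

```python
import numpy as np

def shrink(X, tau):
    """Soft-threshold entries of X toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular-value soft-thresholding: the proximal step for the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, mu=None, n_iter=200):
    """Split a data matrix D into low-rank L plus sparse outliers S by
    alternating proximal steps on ||L||_* + lam * ||S||_1 (a simple
    iterative-shrinkage variant of principal component pursuit)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * np.abs(D).sum() / (m * n)  # crude step-size heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_shrink(D - S, mu)    # low-rank (rigid-scatterer subspace) update
        S = shrink(D - L, lam * mu)  # sparse-outlier update
    return L, S
```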
This paper considers a time domain ultrasonic tomographic imaging method in a multi-static configuration using
the propagation and backpropagation (PBP) method. Under this imaging configuration, ultrasonic excitation
signals from the sources probe the object embedded in the surrounding medium. The scattered signals are
recorded by the receivers. Starting from the nonlinear ultrasonic wave propagation equation and using the
recorded time-domain signals from all the receiver sensors, the object is reconstructed. The conventional
PBP method is a modified version of the Kaczmarz method that iteratively updates the estimates of the object
acoustical potential distribution within the image area. The sources take turns exciting the acoustical field
until all of them have been used. The proposed multi-static image reconstruction method instead utilizes a significantly
reduced number of sources that are simultaneously excited. We consider two imaging scenarios with regard to
source positions. In the first scenario, sources are uniformly positioned on the perimeter of the imaging area.
In the second scenario, sources are randomly positioned. Numerical experiments demonstrate that the
proposed multi-static tomographic imaging method, using these multiple-source excitation schemes, achieves
fast reconstruction and high-resolution image quality.
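The wave-equation forward and backpropagation steps of PBP are beyond a short example, but the Kaczmarz-style structure of the update, in which each source's measurements correct the current estimate in turn, can be sketched on a generic block-linear system. All names below are hypothetical and the sketch assumes numpy.

```python
import numpy as np

def kaczmarz_by_source(A_blocks, y_blocks, n_sweeps=10, relax=1.0):
    """Kaczmarz-style reconstruction: each 'source' contributes one block
    of linear measurements y_k ~ A_k @ x, and the estimate of x is
    projected toward consistency with one block at a time."""
    n = A_blocks[0].shape[1]
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for A_k, y_k in zip(A_blocks, y_blocks):  # sources take turns
            r = y_k - A_k @ x                     # residual for this source
            # least-norm correction that explains this source's residual
            x = x + relax * A_k.T @ np.linalg.lstsq(A_k @ A_k.T, r, rcond=None)[0]
    return x
```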
It is not currently known whether it is possible to accurately form a synthetic aperture radar image from N data
points in provable near-linear complexity, where accuracy is defined as the ℓ2 error between the full O(N²)
backprojection image and the approximate image. To bridge this gap, we present a backprojection algorithm
with complexity O(log(1/ε)N log N), with ε the tunable pixelwise accuracy. It is based on the butterfly scheme,
which works for vastly more general oscillatory integrals than the discrete Fourier transform. Unlike previous
methods, this algorithm allows the user to directly choose the amount of acceptable image error based on a
well-defined metric. Additionally, the algorithm does not invoke the far-field approximation or place restrictions
on the antenna flight path, nor does it impose the frequency-independent beampattern approximation required
by time-domain backprojection techniques.
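For reference, the O(N²) baseline against which the pixelwise error ε is measured is the direct backprojection sum, sketched below for stepped-frequency data; the variable names and data layout are assumptions, not the paper's notation.

```python
import numpy as np

def direct_backprojection(pixels, antenna_pos, freqs, data, c=3e8):
    """Direct O(N^2) backprojection: for each pixel, coherently sum the
    phase-compensated data over all (slow-time, frequency) samples.
    data[m, k] is the sample at antenna position m and frequency k."""
    image = np.zeros(len(pixels), dtype=complex)
    for i, p in enumerate(pixels):
        R = np.linalg.norm(antenna_pos - p, axis=1)  # exact ranges: no far-field approximation
        phase = np.exp(4j * np.pi * freqs[None, :] * R[:, None] / c)
        image[i] = np.sum(data * phase)
    return image
```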
Understanding and organizing data is the first step toward exploiting sensor phenomenology for dismount tracking.
What image features are good for distinguishing people and what measurements, or combination of measurements,
can be used to classify the dataset by demographics including gender, age, and race? A particular technique,
Diffusion Maps, has demonstrated the potential to extract features that intuitively make sense [1]. We want to
develop an understanding of this tool by validating existing results on the Civilian American and European Surface
Anthropometry Resource (CAESAR) database. This database, provided by the Air Force Research Laboratory
(AFRL) Human Effectiveness Directorate and SAE International, is a rich dataset that includes 40 traditional
anthropometric measurements of 4400 human subjects. If we can identify the defining features for
classification from this database, the next question will be to determine a subset of these features that can
be measured from imagery. This paper briefly describes the Diffusion Map technique, shows its potential for dimension
reduction of the CAESAR database, and describes interesting problems to be further explored.
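A minimal sketch of the Diffusion Map construction follows, assuming the rows of X are per-subject measurement vectors (e.g., the 40 anthropometric measurements): build a Gaussian affinity kernel, normalize it into a Markov diffusion operator, and embed with the leading non-trivial eigenvectors.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=3, t=1):
    """Basic diffusion-map embedding: Gaussian kernel, Markov normalization,
    then the top non-trivial eigenvectors scaled by eigenvalue powers."""
    # pairwise squared distances between measurement vectors (rows of X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                  # affinity kernel
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # skip the trivial constant eigenvector (eigenvalue 1)
    return (vals[1:n_coords + 1] ** t) * vecs[:, 1:n_coords + 1]
```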
In synthetic aperture radar (SAR) imaging, a scene of interest is illuminated by electromagnetic waves. The aim
is to reconstruct an image of the scene from the measurement of the scattered waves using airborne antenna(s).
There are many imaging systems which are built upon this notion such as mono-static SAR, bi-static SAR, and
hitchhiker SAR. For these modalities, there are analytic reconstruction algorithms based on backprojection.
Backprojection-based algorithms have the advantage of putting the visible edges of the scene at the right location
and orientation in the reconstructed images.
On the other hand, there is also a SAR imaging method based on the generalized likelihood-ratio test (GLRT).
In particular, we consider the problem of detecting a target at an unknown location. In the GLRT, the presence
of a target in the scene is determined by a likelihood-ratio test. Since the location of the target is not
known, the GLRT test statistic is calculated for each position in the scene, and the position of the
maximum test statistic indicates the location of a potential target.
In this paper, we show that the backprojection-based analytic reconstruction methods include the GLRT
method as a special case. Specifically, we show that the GLRT test statistic is related to the reflectivity of
the scene when a backprojection-based reconstruction algorithm is used.
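The relation can be sketched as follows for a point target in white noise: at each candidate pixel, the GLRT statistic reduces (up to scaling) to the squared magnitude of the matched-filter output, i.e., the backprojection value at that pixel. The signal model and variable names below are simplifying assumptions.

```python
import numpy as np

def glrt_map(pixels, antenna_pos, freqs, data, sigma2=1.0, c=3e8):
    """GLRT for a point target at unknown location: evaluate the likelihood-ratio
    statistic at every candidate pixel and report the argmax. Up to scaling,
    the statistic at a pixel is the squared magnitude of the backprojection
    (matched-filter) output there."""
    stats = np.empty(len(pixels))
    for i, p in enumerate(pixels):
        R = np.linalg.norm(antenna_pos - p, axis=1)
        s = np.exp(-4j * np.pi * freqs[None, :] * R[:, None] / c)  # target signature
        corr = np.vdot(s, data)  # matched filter = backprojection value at pixel p
        stats[i] = np.abs(corr) ** 2 / (sigma2 * np.vdot(s, s).real)
    return stats, pixels[np.argmax(stats)]
```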
This paper demonstrates image enhancement for wide-angle, multi-pass three-dimensional SAR applications.
Without sufficient regularization, three-dimensional sparse-aperture imaging from realistic data-collection scenarios
results in poor-quality, low-resolution images. Sparsity-based image enhancement techniques may be used
to resolve high-amplitude features in limited aspects of multi-pass imagery. Fusion of the enhanced images across
multiple aspects in an approximate GLRT scheme results in a more informative view of the target. In this paper,
we apply two sparse reconstruction techniques to measured data of a calibration top-hat and of a civilian vehicle
observed in the AFRL publicly-released 2006 Circular SAR data set. First, we employ prominent-point autofocus
in order to compensate for unknown platform motion and phase errors across multiple radar passes. Each
sub-aperture of the autofocused phase history is digitally spotlighted (spatially low-pass filtered) to eliminate
contributions to the data from features outside the region of interest, and then imaged with l1-regularized
least squares and CoSaMP. The resulting sparse sub-aperture images are non-coherently combined to obtain a
wide-angle, enhanced view of the target.
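As an illustration of the first of the two techniques, a minimal iterative shrinkage-thresholding (ISTA) sketch of l1-regularized least squares for complex-valued SAR data follows; CoSaMP would instead maintain a greedy estimate of the support. The forward-model matrix A (samples x pixels) is an assumption for illustration.

```python
import numpy as np

def ista(A, y, lam, n_iter=300):
    """Iterative shrinkage-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1,
    the l1-regularized least-squares problem used to sharpen each sub-aperture image."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - step * (A.conj().T @ (A @ x - y))  # gradient step on the data-fit term
        # complex soft-threshold: shrink magnitudes, keep phases
        x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x
```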
This paper addresses several fundamental problems that have hindered the development of model-based recognition
systems: (a) the feature-correspondence problem, whose complexity grows exponentially with the numbers
of image and model points; (b) the restriction of matching image data points to a point-based model
(e.g., point-based features); and (c) the local-versus-global minima issue associated with using an optimization
model.
Using a convex hull representation for the surfaces of an object, common in CAD models, allows generalizing
the point-to-point matching problem to a point-to-surface matching problem. A discretization of the Euclidean
transformation variables and use of the well-known assignment model from linear programming leads to
a multilinear programming problem. Using a logarithmic/exponential transformation employed in geometric
programming, this nonconvex optimization problem can be transformed into a difference-of-convex-functions
(DC) optimization problem, which can be solved using a DC programming algorithm.
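A generic sketch of the DC programming algorithm (DCA) referenced above: at each iteration the concave part is linearized at the current iterate and the resulting convex surrogate is minimized. The example below uses a general-purpose solver as a stand-in for the structured (assignment-based) subproblem of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def dca(g, h, grad_h, x0, n_iter=50):
    """Generic DC algorithm for min g(x) - h(x) with g, h convex: replace the
    concave part -h by its affine majorant -h(x_k) - grad_h(x_k)^T (x - x_k)
    and minimize the resulting convex surrogate at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        gh = grad_h(x)
        surrogate = lambda z, gh=gh: g(z) - gh @ z  # convex surrogate (up to a constant)
        x = minimize(surrogate, x).x
    return x
```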
The ability to reconstruct the three-dimensional (3D) shape of an object from multiple images of that object is
an important step in certain computer vision and object recognition tasks. The images in question can range
from 2D optical images to 1D radar range profiles. In each case, the goal is to use the information (primarily
invariant geometric information) contained in several images to reconstruct the 3D data. In this paper we apply
a blend of geometric, computational, and statistical techniques to reconstruct the 3D geometry, specifically the
shape, from multiple images of an object. In particular, we deal with a collection of feature points that have been
tracked from image (or range profile) to image (or range profile), and we reconstruct the 3D point cloud up to
certain transformations: affine transformations in the case of our optical sensor, and rigid motions (translations
and rotations) in the radar case. Our paper discusses the theory behind the method, outlines the computational
algorithm, and illustrates the reconstruction for some simple examples.
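For the affine (optical) case, the classical rank-3 factorization illustrates the reconstruction step: stack the tracked image coordinates of all feature points into a measurement matrix and split it by SVD into motion and shape, recovering the point cloud up to an affine transformation. This sketch names the standard Tomasi-Kanade construction, not necessarily the exact algorithm of the paper.

```python
import numpy as np

def affine_factorization(W):
    """Tomasi-Kanade-style factorization: W is a 2F x P matrix of tracked
    feature-point coordinates (F images, P points). A rank-3 SVD splits W
    into camera motion and 3D shape, recovering the point cloud up to an
    affine transformation."""
    W = W - W.mean(axis=1, keepdims=True)  # move each image's centroid to the origin
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # motion: stacked affine camera rows
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3 x P shape, up to affine ambiguity
    return M, S
```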
KEYWORDS: 3D image processing, 3D acquisition, Detection and tracking algorithms, Radar, Scattering, 3D image reconstruction, Sensors, 3D modeling, Motion models, Denoising
If a target's motion can be determined, the problem of reconstructing a 3D target image becomes a sparse-aperture
imaging problem. That is, the data lies on a random trajectory in k-space, which constitutes a sparse
data collection that yields very low-resolution images if backprojection or other standard imaging techniques are
used. This paper investigates two moving-target imaging algorithms: the first is a greedy algorithm based on
the CLEAN technique, and the second is a version of Basis Pursuit Denoising. The two imaging algorithms are
compared for a realistic moving-target motion history applied to an Xpatch-generated backhoe data set.
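A minimal sketch of the greedy, CLEAN-style algorithm follows: repeatedly locate the brightest pixel in the residual image, place a point scatterer there, and subtract its predicted contribution from the data. The forward-model matrix A and all parameter choices are assumptions for illustration.

```python
import numpy as np

def clean_greedy(A, y, n_points=20, gain=1.0):
    """CLEAN-style greedy imaging: repeatedly find the pixel whose steering
    column best matches the residual, add a scatterer there, and subtract
    its predicted contribution. A is a (samples x pixels) forward model."""
    x = np.zeros(A.shape[1], dtype=complex)
    r = y.astype(complex)
    col_norms = np.sum(np.abs(A) ** 2, axis=0)
    for _ in range(n_points):
        corr = A.conj().T @ r
        k = np.argmax(np.abs(corr) / np.sqrt(col_norms))  # brightest residual pixel
        amp = gain * corr[k] / col_norms[k]               # least-squares amplitude
        x[k] += amp
        r -= amp * A[:, k]                                # remove its contribution
    return x
```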
KEYWORDS: Time-frequency analysis, Radar, Doppler effect, Transform theory, Fourier transforms, Data modeling, Scattering, Sensors, Iterated function systems, X band
This paper describes work that considered two Joint Time-Frequency
Transforms (JTFTs) for use in a single-sensor/platform synthetic
aperture radar (SAR) 3D imaging approach. The role of the
JTFT is to distinguish moving point scatterers that may become
collocated during the observation interval. A Frequency Domain
Velocity Filter Bank (FDVFB) was compared against the well-known
Short Time Fourier Transform (STFT) in terms of their maximal
Time-Frequency energy concentrations. The FDVFB and STFT energy
concentrations were compared for a variety of radar scenarios. In
all cases the STFT achieved slightly higher energy concentrations
while simultaneously requiring half the computations needed by the
FDVFB.
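The paper's exact concentration measure is not reproduced here; one common choice, sketched below, scores a transform by the fraction of total spectrogram energy captured in a small fraction of the time-frequency cells, so that a more concentrated transform scores higher.

```python
import numpy as np
from scipy.signal import stft

def stft_energy_concentration(x, fs, nperseg=128, frac=0.01):
    """One possible energy-concentration score for a JTFT: the fraction of
    total spectrogram energy captured by the top `frac` of time-frequency
    cells. Higher means the signal is packed into fewer cells."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    E = np.abs(Z) ** 2
    flat = np.sort(E.ravel())[::-1]
    k = max(1, int(frac * flat.size))
    return flat[:k].sum() / flat.sum()
```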
This paper describes the development of an algorithm for detecting
multiple-scattering events in the 3D Geometric Theory of
Diffraction (GTD)-based Jackson-Moses scattering model. This
approach combines microlocal analysis techniques with
geometric-invariant theory to estimate multiple-scattering events.
After multiple-scattering returns were estimated, the algorithm
employed the Generalized Radon Transform to determine whether
multiple scattering is present within the measured data. The
algorithm was tested on an X-band simulation of isotropic point
scatterers undergoing unknown rotational motion.
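A discrete sketch of the Generalized Radon Transform step is given below: for each candidate model parameter vector, the data are summed along the delay trajectory that the model predicts across pulses, and peaks in the output flag returns consistent with the multiple-scattering model. The trajectory function curve_t and data layout are hypothetical.

```python
import numpy as np

def generalized_radon(data, times, curve_t, params):
    """Discrete generalized Radon transform: for each candidate parameter
    vector p, sum |data| along the (possibly curved) delay trajectory
    t = curve_t(p, pulse_index) it predicts across slow-time. Peaks
    indicate events consistent with the assumed scattering model."""
    n_pulse, _ = data.shape
    dt = times[1] - times[0]
    out = np.empty(len(params))
    for i, p in enumerate(params):
        t_pred = curve_t(p, np.arange(n_pulse))  # predicted delay per pulse
        idx = np.clip(np.round((t_pred - times[0]) / dt).astype(int), 0, len(times) - 1)
        out[i] = np.abs(data[np.arange(n_pulse), idx]).sum()
    return out
```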