Constant False Alarm Rate (CFAR) detectors are designed to perform when the clutter information is partially unknown and/or varying. This is accomplished using local threshold estimates from background observations in which the CFAR level is maintained. However, when local observations contain target or irrelevant information, censoring is warranted to improve detection performance. Order Statistics (OS) processors have been shown to perform robustly (referring to type II errors or CFAR loss) for heterogeneous background clutter observations, and their performance has been analyzed for exponential clutter with unknown power. In this paper, several order statistics are used to create an invariant test statistic for Weibull clutter with two varying parameters (i.e., power and skewness). The robustness of a two-parameter invariant CFAR detector is analyzed and compared with an uncensored Weibull-Two Parameter (WTP) CFAR detector and conventional Cell Averaging (CA)-CFAR detector (i.e., designed invariant to exponential clutter). The performance trade-offs of these detectors are gauged for different scenarios of volatile clutter environments.
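The order-statistic thresholding idea can be sketched as follows. This is a minimal single-pass sketch; the window sizes, the rank k, and the scale factor alpha are illustrative assumptions, not values from the paper:

```python
import numpy as np

def os_cfar(x, guard=2, ref=16, k=12, alpha=6.0):
    """OS-CFAR sketch: threshold each cell by the k-th smallest
    reference-cell value scaled by alpha (alpha would normally be
    chosen for a desired false-alarm rate; values here are
    illustrative)."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        # Leading and trailing reference windows, skipping guard cells.
        lead = x[max(0, i - guard - ref): max(0, i - guard)]
        trail = x[i + guard + 1: i + guard + 1 + ref]
        cells = np.concatenate([lead, trail])
        if len(cells) < k:
            continue
        stat = np.sort(cells)[k - 1]  # k-th order statistic
        detections[i] = x[i] > alpha * stat
    return detections
```

Because the rank statistic ignores the largest reference cells, a strong interfering target in the window does not inflate the threshold, which is the censoring robustness discussed above.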
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
We consider the payoff for opportunistic fusion of essentially autonomous sensors operating with unequal orders of non-coherent integration. We also evaluate the degradation in fusion performance due to intersensor cross-correlation introduced via a Swerling 2 target model. We parameterize where fusion is not productive and show the integration necessary for comparable single sensor performance.
This paper studies the performance of the two-dimensional least mean square adaptive filter as a prewhitening filter for detection systems. In two-dimensional infrared sensor data, the clutter is correlated and much wider in spatial extent than the signal of interest. The two-dimensional adaptive filter can be trained to adapt to and predict the clutter, thereby enabling the error channel output to contain the signal of interest in white noise. Performance of the adaptive prewhitener, in terms of local signal-to-clutter ratios (LSCR) and the gain obtained, is described. The gain in LSCR due to this augmenting filter is shown to depend on the statistics of the background clutter, in particular on the local mean. It is shown that, as the amount of color in the background clutter increases, the performance of the conventional matched filter degrades much more than that of a detector based on the augmenting prewhitener.
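A minimal sketch of a 2-D adaptive prewhitener in this spirit, assuming a normalized-LMS update and a small causal prediction support (the support shape, order, and step size are illustrative choices, not the paper's configuration):

```python
import numpy as np

def lms_prewhiten(img, order=3, mu=0.5):
    """2-D normalized-LMS prewhitener sketch: each pixel is predicted
    from the order x order causal block above and to its left, and the
    prediction error forms the whitened output channel."""
    h, w = img.shape
    wt = np.zeros(order * order)
    err = np.zeros((h, w))
    for r in range(order, h):
        for c in range(order, w):
            patch = img[r - order:r, c - order:c].ravel()
            pred = wt @ patch
            e = img[r, c] - pred
            # Normalized-LMS update keeps the adaptation step stable
            # regardless of local clutter power.
            wt += mu * e * patch / (patch @ patch + 1e-12)
            err[r, c] = e
    return err
```

On correlated clutter the error channel retains only what the local predictor cannot explain, which is where a small target stands out.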
The detection of targets moving in an environment dominated by "noise" is addressed from the perspective of nonlinear dynamics. Sensor data are used to drive a Korteweg-de Vries (soliton) equation, inducing a resonance-type phenomenon which indicates the presence of hidden target signals. The algorithm is implemented in terms of a novel neural architecture, which we have named the "spectral network", that can easily be implemented in optoelectronic hardware.
This paper addresses the problem of direction-of-arrival (DOA) estimation. It is well known that the mean-squared errors (MSE) of DOA estimates are proportional to the reciprocal of the squared distance d between array elements, and asymptotically approach the Cramer-Rao bound (CRB) as d increases. However, increasing d may cause spatial frequency aliasing. In this paper, using a non-uniform array, we regard this problem as one of data least-squares fitting, and estimate the DOA by fitting a proper direction vector. Two theorems are proved to ensure that the method does not cause ambiguity. The advantages of the proposed method are discussed.
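For context, the uniform-array baseline against which such MSE and aliasing behavior is usually discussed can be sketched as a conventional delay-and-sum beam scan. This is a textbook method, not the paper's direction-vector fitting approach; element spacing d above half a wavelength improves accuracy but invites the grating-lobe ambiguity (spatial aliasing) noted above:

```python
import numpy as np

def beamscan_doa(snapshots, d, wavelength, grid=np.linspace(-90, 90, 361)):
    """Delay-and-sum DOA estimate on a uniform linear array: scan a
    steering vector over candidate angles and return the angle of
    maximum output power."""
    m = snapshots.shape[0]
    powers = []
    for theta in grid:
        # Steering vector for element spacing d at angle theta.
        a = np.exp(-2j * np.pi * d / wavelength
                   * np.arange(m) * np.sin(np.radians(theta)))
        powers.append(np.mean(np.abs(a.conj() @ snapshots) ** 2))
    return grid[int(np.argmax(powers))]
```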
We describe a technique for digitally processing a sequence of images obtained with a staring mosaic detector array to permit the resolution of features smaller than the spacing between individual detector elements. It is based on combining the sequence of images to obtain an effective higher spatial sampling rate and Nyquist cutoff frequency, and then deconvolving the finite size of the detector elements as well as other sources of blurring. Application of the technique to surveillance imagery obtained in fixed stare, target autotrack, and pushbroom scanning modes is addressed. We provide a theoretical description of the sub-pixel resolution technique, the results of a simulation study, and a demonstration using experimental data.
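The frame-combination half of such a technique can be sketched as shift-and-add interleaving, under the simplifying assumptions that the inter-frame sub-pixel shifts are known exactly and fall on the fine grid; the deconvolution of detector blur is omitted here:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Combine low-resolution frames with known sub-pixel shifts onto
    a grid `factor` times finer. Shifts are given in units of the
    fine grid; each frame fills its own phase of the fine lattice."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        hi[dy::factor, dx::factor] += frame
        cnt[dy::factor, dx::factor] += 1
    # Average where multiple frames land on the same fine-grid site.
    return hi / np.maximum(cnt, 1)
```

With a full set of shifts the effective sampling rate, and hence the Nyquist cutoff, is raised by `factor` in each dimension, after which the detector-aperture blur can be deconvolved.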
Infrared target detection and tracking has received considerable attention in recent years because of the emerging importance of infrared search and track (IRST) systems as a passive sensing modality. Various forms of advanced multi-frame signal processing algorithms have been developed to address the problem of detecting low contrast targets in clutter backgrounds. Advanced tracking algorithms such as multiple hypothesis tracking and track-before-detect are also being studied to improve overall system performance. While different algorithms have been derived in different contexts, they may be profitably examined within a common framework. This paper examines the problem of IR detection and tracking within the general framework of Bayesian hypothesis testing, and develops different detection and tracking algorithms as solutions when various simplifying assumptions are made. This exercise lends insight into the inter-relationships among different algorithms, and facilitates comparison of their strengths and weaknesses.
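As a minimal instance of such a framework, when the simplifying assumptions reduce to a known target signature in white Gaussian noise, the Bayesian test collapses to a matched-filter statistic:

```python
import numpy as np

def matched_filter_llr(x, s, sigma2):
    """Log-likelihood ratio for a known signature s in white Gaussian
    noise of variance sigma2: declare a target when the statistic
    exceeds a threshold set by the desired false-alarm rate."""
    return (s @ x) / sigma2 - (s @ s) / (2 * sigma2)
```

More elaborate detectors and trackers arise in the same framework as the assumptions on the signal, clutter, and motion models are relaxed.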
This paper investigates methods for adaptive detection of a 3D (space-time) IR/EO target in clutter of unknown and possibly non-stationary statistics. Non-stationary data conditions necessitate estimation of the environmental parameters over local regions. A low degree-of-freedom model for the space-time clutter characteristics is assumed. This allows the model based matched filter to adapt to the changing clutter characteristics better than the usual technique of matched filter construction, e.g. direct sample matrix inversion (SMI).
The models used for characterizing the space-time clutter characteristics include a 3D autoregressive model, a non-causal minimum variance representation model, and a space-time separable clutter model. The matched filter algorithms are derived for these models using the estimated model parameters.
In addition, the signal-to-noise ratios (SNR) that are achievable in stationary clutter conditions by the model-based filters are compared to that obtained by the optimal linear filter for a variety of clutter and target characteristics. An analysis of losses due to target/clutter mis-modeling and other effects is also presented.
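The low degree-of-freedom idea can be illustrated in one dimension with the simplest case: an AR(1) clutter model whose single coefficient is estimated from local data, in place of a full covariance matrix. This is a toy example; the paper's models are three-dimensional:

```python
import numpy as np

def ar1_whiten(x):
    """Whiten a clutter sequence with a one-parameter AR(1) model:
    estimate the lag-1 coefficient locally, then output the
    prediction residual."""
    # Lag-1 least-squares estimate of the AR(1) coefficient.
    a = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    return x[1:] - a * x[:-1], a
```

One parameter can be estimated reliably from a far smaller local region than a full sample covariance, which is what lets a model-based filter track non-stationary clutter.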
In passive detection of small infrared targets in image data, we are faced with the difficult task of enhancing some characteristic of the target or signal while suppressing the clutter or background image noise. We reported that an effective means by which targets may be identified is to exploit characteristics which exist between scenes measured in different bands in the long wave infrared region of the electromagnetic spectrum. These methods are broadly termed multispectral techniques. In this paper we present a method by which a two- dimensional least-mean square adaptive filter is used to distinguish between target and clutter using multispectral techniques.
Clutter and noise can preclude single-frame detection of moving targets in image sequences, necessitating the application of multiple frame (3-D) filters. Two new types of 3-D fan filter show considerable promise in this application. The first type of fan filter suppresses all clutter and other objects whose velocity is contained in a user-specified velocity set. The ability to reject clutter over a continuous range of velocity is a great advantage of this class of filter with respect to velocity-notch filters such as the frame-to-frame subtractor. The second type of fan filter passes without attenuation all objects whose velocity is contained in a user-specified velocity set while providing the maximum possible suppression of broad-band noise. For detecting a target of unknown velocity in broad-band noise a bank of this type of fan filter can replace a bank of 3-D matched filters, with the advantage of avoiding the velocity mismatch losses of the 3-D matched filter.
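For contrast with the fan filters, the simplest velocity-notch filter mentioned above, the frame-to-frame subtractor, can be written in a few lines; it nulls only objects at exactly zero velocity, which is the limitation the fan filters' continuous velocity sets address:

```python
import numpy as np

def frame_difference(seq):
    """Frame-to-frame subtraction on a (frames, pixels) sequence:
    stationary clutter cancels exactly, while a moving target leaves
    a dipole signature at its old and new positions."""
    return seq[1:] - seq[:-1]
```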
Multi-frame, multi-spectral image sequences can be exploited in real-time, giving analysts access to information on evolving strategic events or threats. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots may enhance resolution of adjacent objects. In image sequences contaminated by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. Image sequences can be screened automatically for low-frequency, high-magnitude events.
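The unsharp-masking step can be sketched as follows, assuming a simple box blur and unit gain (both illustrative choices, not the contrast-regulated variant described above):

```python
import numpy as np

def unsharp_mask(img, k=3, gain=1.0):
    """Unsharp masking with a k x k box blur: subtract the blurred
    copy and add the high-pass residual back, lifting low-contrast
    detail."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    blur = np.zeros_like(img, dtype=float)
    # Accumulate the k x k neighborhood sum via shifted slices.
    for dy in range(k):
        for dx in range(k):
            blur += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    return img + gain * (img - blur)
```

Flat regions pass through unchanged, while small or low-contrast features are boosted relative to their surroundings; the contrast-regulated version would modulate `gain` locally.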
Previous work in adaptive space-time processing has concentrated on either covariance estimation or on using a region segmentation approach where a bank of filters is designed from previously collected data. The first method typically involves performing a generalized likelihood ratio test (GLRT), which for a typical 3-D target signature involves estimation of an enormous number of covariance elements. The second method relies on filter construction from previously collected data. Two of the major penalties which are incurred in any covariance estimation technique are a large amount of computational complexity and a lack of robustness in a changing environment due to the large number of covariance samples required for statistical stability. The region segmentation approach is useful when the clutter being processed resembles that which was used for filter construction, but suffers large potential losses when the data which is operated on has different statistical properties than that which was used in filter construction. The method which is being addressed in this study for mitigating the problems associated with a space-time covariance estimation procedure and/or the dependence on a bank of fixed filters is to assume a low degree of freedom model for the space-time clutter characteristics. This allows the adaptive filter to be estimated over a much smaller region. The detection algorithm can therefore track the clutter characteristics of a changing environment more closely while minimizing any losses in a stationary environment. This paper addresses the statistical behavior of the model-based algorithms. The statistical behavior is analyzed as a function of the number of filter tap weights and the estimation region size used for filter construction. Performance in a non-stationary environment is analyzed via Monte Carlo techniques on both simulated and recently collected longwave IR clutter.
The results indicate that the reduced degree of freedom model-based algorithms can provide significant performance improvement when the dimensions of the test vector are large and only a small amount of data is available for covariance estimation.
Expressions are derived for the single look tracking noise (SLTN) for an unthresholded intensity centroid algorithm and a binary centroid with a lower threshold. The SLTN is defined as the rss value of the rms errors due to (1) photoelectron statistics, (2) background in the tracking gate, and (3) non-coincidence of the object boundaries and the focal plane array (FPA) detector pixels. The expressions, eqs. (27) and (28), are applied to typical directed energy weapon scenarios where the objects are pulse-illuminated, distant, Lambertian targets. We find for both algorithms that the SLTN decreases with increasing illuminator energy to a terminal value set by the boundary mismatch error (3) above, but for a fixed illumination and pixel footprint on the target this error is smaller for the intensity algorithm. For each algorithm, the footprint can be optimized to obtain a minimal SLTN, and in the cases studied the binary SLTN was consistently lower than the intensity SLTN. Furthermore, the dependence on illuminator pulse energy E is approximately 1/√E for the intensity centroid and approximately 1/E^(2/3) for the binary centroid. Finally, the SLTN is approximately proportional to 1/(D√η), where D is the collecting aperture and η is the tracker photon efficiency (transmission × quantum efficiency). Thus the advantage of a `shared' DEW tracker compared to a `separate' one is typically a factor of 5 or less.
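The two centroid algorithms compared above can be sketched directly; these are the generic definitions, not the paper's eqs. (27) and (28):

```python
import numpy as np

def intensity_centroid(img):
    """Unthresholded intensity centroid: pixel coordinates weighted
    by pixel intensity."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total

def binary_centroid(img, thresh):
    """Binary centroid with a lower threshold: every pixel above the
    threshold gets unit weight."""
    mask = (img > thresh).astype(float)
    return intensity_centroid(mask)
```

The noise analyses above quantify how photoelectron statistics, gate background, and pixel-boundary mismatch perturb these two estimates differently.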
A use of automatic acquisition and tracking for electro-optic fire control is described. The techniques used for detection of targets and rounds are described. Additionally, test results obtained during Navy testing are presented.
A new signal processor (SP) architecture has been designed by SPAR to meet the signal processing requirements of tactical infrared search and surveillance (IRSS) systems. The new SP, which can operate in a dual infrared (IR) spectral band configuration, effectively decouples high volume, low latency pixel processing from lower volume, data-dependent detection processing. It enables IRSS SPs to be hosted on heterogeneous multiprocessor networks whose components can be individually matched to the requirements of each process, thus offering flexibility and high growth potential. Pixel processing includes background prefiltering, local feature extraction for adaptive filter selection from multiple filter banks, spatial filtering for threat signal-to-background-noise enhancement, and adaptive thresholding for data detection. Detection processing compares incoming data detections against a target threshold, identifies and reports potential targets, and updates the adaptation parameters that are used to compute detection thresholds in order to maintain constant false alarm rate (CFAR) control. The new nonlinear CFAR process for detection processing utilizes signal processor resources more productively, and can be used to optimize the tracking performance of the detection post-processor. A prototype of the new SP architecture, hosted on the U.S. Navy standard AN/UYS-2 signal processor operating with an AT&T DSP3 processor, has demonstrated improved performance in terms of higher probability of detection (Pd) at a lower detection loading for fixed false alarm rate under clutter and blue sky backgrounds. Improvements in other performance metrics have been proven by software simulation.
The advanced surveillance testbed was developed to provide high-fidelity simulation of space-based electro-optical sensors, their associated signal processing, and multi-target tracking algorithms, as well as the testing/evaluation of the algorithms. It has a modular design for ease of expansion and maintenance. The testbed begins with scenario definition, which includes the definition of satellite constellations, specification of targets, the modeling of atmospheres, and the selection or generation of background scenes. Existing sensor modules include an advanced staring sensor, two linear scanners of advanced design, and a simplified generic sensor for quick simulations having lower fidelity. Modules for two-dimensional (monocular) tracking include a highly specialized tracker for use with the staring sensor and several two-dimensional trackers that can use angle-intensity data from any source. Results of two or more monocular simulations can be combined as inputs to algorithms for data fusion and three-dimensional tracking. `Optimal' two- and three-dimensional tracking algorithms are also provided. Additional modules provide overall control, graphics, and track analysis. This presentation discusses the testbed and its capabilities, describing the models with emphasis on background modeling, its implications, and the lessons learned.
The surveillance test bed (STB) is utilized by the Strategic Defense Initiative Organization in its system integration and testing. The test bed provides a `level field' or standardized test environment for multiple surveillance signal and data processing algorithm developers. STB's most salient features are the integration of high fidelity signatures, backgrounds, and signal processing models with algorithms for sensor tasking, bulk filtering, track/correlation, and discrimination with the integration of radar and optical estimates for track and discrimination. The STB currently hosts baseline tasking, bulk filtering, correlation/tracking, and discrimination algorithms which are prototypes for the algorithms which will be hosted in the operational system. This paper reviews the current status and capabilities of the STB with respect to optical signal and data processing needs for small targets in backgrounds typically found in Strategic Defense scenarios.
The midcourse space experiment (MSX) satellite, to be launched in 1993, will carry as one of its experiments the onboard signal and data processor (OSDP). OSDP will demonstrate real time tracking of targets in space, processing the long wave infrared data from MSX's Spirit III sensor. Hughes Aircraft has built and delivered the OSDP flight unit; verification and acceptance tests have been completed, with all technical requirements satisfied. Built upon the experience gained on the previous generation signal processor currently operating on the airborne surveillance testbed (AST, also known as AOA), OSDP implements improved and simplified algorithms that promise to reduce the computational load while substantially enhancing functional capability. This paper describes the OSDP concept and these functional enhancements. The key OSDP functions are described in detail but on a qualitative basis, to promote an understanding of the functional enhancements and their significance. The characteristics and capabilities of OSDP are compared with those of its predecessors on AST, and future additional enhancements that may be implemented in a next generation to follow OSDP are discussed.
The interacting multiple model (IMM) algorithm uses multiple models that interact through state mixing to track a target maneuvering through an arbitrary trajectory. However, when a target maneuvers through a coordinated turn, the acceleration vector of the target changes magnitude and direction, and the maneuvering target models commonly used in the IMM (e.g., constant acceleration) can exhibit considerable model error. To address this problem an IMM algorithm that includes a constant velocity model, a constant speed model with the kinematic constraint for constant speed targets, and the exponentially increasing acceleration (EIA) model for maneuver response is proposed. The constant speed model utilizes a turning rate in the state transition matrix to achieve constant speed prediction. The turning rate is calculated from the velocity and acceleration estimates of the constant speed model. The kinematic constraint for constant speed targets is utilized as a pseudomeasurement in the filtering process with the constant speed model. Simulation results that demonstrate the benefits of the EIA model and the kinematic constraint to the IMM algorithm are given. The tracking performance of the proposed IMM algorithm is compared with that of an IMM algorithm utilizing constant velocity and constant turn rate models.
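The constant-speed prediction via a turning rate in the state transition matrix can be illustrated with the standard 2-D coordinated-turn matrix, shown here for a planar [x, vx, y, vy] state; the paper computes the turning rate from the model's own velocity and acceleration estimates:

```python
import numpy as np

def turn_transition(omega, T):
    """Standard constant-turn-rate state transition for the state
    [x, vx, y, vy]: the velocity components are rotated by omega*T,
    so speed is preserved exactly over the prediction step."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1, s / omega,       0, -(1 - c) / omega],
                     [0, c,               0, -s],
                     [0, (1 - c) / omega, 1,  s / omega],
                     [0, s,               0,  c]])
```

Because the velocity sub-block is a pure rotation, the predicted speed equals the current speed, which is the constraint a constant-acceleration model cannot honor through a coordinated turn.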
This paper documents an analytical effort that looks at the expectation of being able to resolve individual members of a cluster of objects as a function of the parameters of time after deployment, the number and distribution of objects in the cluster, their relative separation velocities, sensor resolution capability, and the shape of the cluster -- essentially local object density. Multiple methods of modeling object clusters were investigated and found to be equivalent in their results. A simple set of equations has been derived that fits modeled data over a wide range of the parameter variations. For N objects in a cluster with an average density of d objects per resolution cell, the expected number of objects resolved is R ≈ N·e^(-d), and the expected number of subclusters perceived is P ≈ (N - R)/d. For uniform cluster densities, d is inversely proportional to time squared, and a method is shown for calculating d for non-uniform cluster densities. In addition, an approximately constant relationship between the number of objects perceived and the number of resolved objects is shown: R ≈ P²/N. Several applications of these relationships which are of interest to the Strategic Defense Initiative (SDI) are examined, including the `Cheshire Cat Effect' wherein the number of perceived objects as a function of resolution and sensor sensitivity is discussed. In addition, system level implications of the effects of target density during boost phase and during the cluster tracking phase of mid-course are covered. The behavior of large numbers of clusters in a threat tube is examined and characterized as the individual clusters overlap each other as they expand and form a `supercluster.' An equilibrium limit of resolution possible within a `supercluster' is shown.
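The fitted relations can be exercised numerically; for example, at an average density of one object per resolution cell roughly 37% of a cluster is resolved:

```python
import math

def expected_resolved(n, d):
    """Expected number of resolved objects, R = N * exp(-d), and of
    perceived subclusters, P = (N - R) / d, for N objects at average
    density d objects per resolution cell (the fitted relations from
    the text)."""
    r = n * math.exp(-d)
    p = (n - r) / d
    return r, p
```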
Tracking of individual targets in a cluster presents difficult and computationally expensive problems that may be addressed using cluster tracking. This paper investigates the feasibility of tracking a single cluster using one or two space-based passive optical sensors. A functional model for closely spaced object (CSO) resolution was used to generate simulated measurements, and standard extended Kalman filter (EKF) techniques, along with gating and clustering logic, were used to estimate the state of the cluster centroid. An estimate was maintained of the two-dimensional extent of the cluster in each sensor's field-of-view. Results for a single-sensor filter run separately with two sets of measurements from two sensors, and for a centralized filter combining the same two sets of measurements, show that the effects of bias in CSO measurements cannot necessarily be overcome by the use of a second sensor. Results from the single-sensor filter over twenty Monte Carlo runs, all starting with the same initial state estimate (simulated handover error), are compared with results using the same sensor and measurements, but drawing the handover state errors from Gaussian distributions. The variance of the error in the second case is much larger throughout the entire track time, emphasizing the need for accurate handover data in a single angle-only sensor cluster tracking system.
Under the recently completed Covert Air Combat Definition Study, a form of multiple hypothesis tracking, known as structured branching (SB/MHT), was developed and tested by Hughes Radar Systems Group. SB/MHT offers significant computational savings compared to other approaches, enabling it to maintain a great number of hypothesized tracks initiated in high-false-alarm environments without overwhelming current-generation tactical processors. Under the recently initiated Advanced Tracking Algorithms Program, the SB/MHT algorithm will be further developed and hosted on a tactical airborne processor to demonstrate realtime performance. This paper walks through the algorithm sequence of operations in order to give the reader an intuitive understanding of SB/MHT. The paper begins with a description of the basic idea of MHT algorithms; i.e., to carry hypotheses when there is doubt about which tracks to associate with new observations. The primary differences between SB/MHT and `classical' MHT are briefly discussed. Each operation in the SB/MHT block diagram is explained by stepping through the operations that would take place given an assumed set of tracks, and a set of observations to be processed. Operations to be discussed include: observation filtering and prediction, gate formation and observation-to-track association, track branching and initiation, initial track scoring and pruning, track clustering, hypothesis generation and scoring, and finally, global track scoring and pruning. Methods for controlling track-file growth and its resultant computational load are also discussed. Although high level in terms of the amount of detail covered, this description should provide the reader with a good understanding of the fundamental characteristics of a streamlined MHT algorithm envisioned to operate in real time on a current-generation airborne tactical processor.
The symmetric measurement equation (SME) filter approach to track maintenance in multiple target tracking (MTT) is extended to the case when there are false and/or missing detections. In the SME filter approach to MTT there is no need to consider target/measurement association or to identify which data points in the measurement set are missing or are due to false detections. As a result, there is a substantial reduction in the computational complexity of the tracking filter, and in fact the computational requirements for implementing the filter are comparable to those of a standard Kalman filter of dimension 6N, where N is the number of targets. In this paper the SME tracking filter is formulated for the case of N targets moving with constant velocities and with position measurements given in Cartesian coordinates. It is assumed that the measurement noises are zero mean, white, and independent (of each other), so that target motion can be decoupled into x, y, z components. A computer simulation is given in the case of three targets with false and missing measurements.
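The permutation-invariant transformation at the heart of the SME approach can be sketched as follows. This is a minimal illustration for scalar measurements of N targets, assuming the elementary symmetric polynomials are used as the symmetric functions; the function name is illustrative, not from the paper.

```python
import numpy as np

def symmetric_measurements(z):
    """Elementary symmetric polynomials e_1, ..., e_N of the measurement set z.

    These are invariant under any permutation of the measurements, which is
    the core idea of the SME filter: it never needs to decide which
    measurement belongs to which target.
    """
    # np.poly returns the coefficients of the monic polynomial whose roots
    # are z; coefficient k equals (-1)^k * e_k(z).
    coeffs = np.poly(np.asarray(z, dtype=float))
    signs = (-1.0) ** np.arange(len(coeffs))
    return (signs * coeffs)[1:]  # drop the leading 1, keep e_1 .. e_N

# The transformed "measurement" is identical for any target ordering:
a = symmetric_measurements([1.0, 4.0, 2.0])   # -> [7., 14., 8.]
b = symmetric_measurements([4.0, 2.0, 1.0])   # same values
```

A nonlinear filter (e.g., an extended Kalman filter) is then run on these symmetric quantities instead of the raw, unordered measurements.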
A number of applications exist for which association of target tracks obtained from different types of sensors is required, and an algorithm to aid the user in performing the association is desired. Depending on the application, the association algorithm must satisfy a set of system requirements, e.g., minimum acceptable probability of correct association, maximum elapsed time for association after target detection or automatic track initiation, maximum acceptable probability of false track-to-track association, etc. Evaluations of such algorithms are usually conducted using time-consuming Monte Carlo simulations and are sensitive to the scenarios chosen for this purpose. This paper describes an ESM/radar track association algorithm and an analytical technique to evaluate this algorithm which avoids Monte Carlo simulation. Closed-form mathematical expressions for the probabilities of correct association and false correlation are derived. In addition, trade-off studies involving several system parameters are also presented.
As a result of an on-going IR&D effort, General Atronics Corporation (GAC) has developed effective algorithms for automatic acquisition, tracking, and recognition of multiple targets - broadly referred to as Automatic Target Recognition (ATR) - using passive sensors on board multiple distributed platforms. The goal of ATR is either to provide timely and accurate assistance to a human operator, who is the ultimate decision maker, or to be totally autonomous in extracting the necessary information, processing it, and making the final decision. Applications of this technology include Electro-Optical Fire Control Systems (EOFCS), Infrared Search and Track (IRST), Infrared Counter Measures (IRCM), Covert Aircraft Recovery and Tracking Systems (CARTS), Strategic Defense Initiative (SDI), Anti-Submarine Warfare (ASW), Battlefield Intelligence, and Security.
Data fusion concepts have been applied in many disciplines, but a general systematic formulation has not been well developed. This paper is intended to provide a guideline for applying data fusion techniques to a practical problem, the fusion of target identification (ID) attribute measurements. Formation of a consensus function is first presented, followed by construction of a hierarchical probabilistic network for computing a joint probability density. An ID fusion processing approach is described and integrated into a generalized track/data association algorithm.
Data integration (fusion) trees provide a top-down functional partitioning of the level 1 data fusion process. Since this process has exponential growth in complexity, the fusion tree is selected to balance system performance with cost. The fusion tree defines the order in which the data is to be integrated. The data integration is specified at each fusion node to include common referencing, data association, and state estimation. The breadth of application of this paradigm is described herein.
This paper presents a technique for the estimation and removal of sensor biases and sensor frame misalignment errors for netted 3-D radar systems. One radar is assumed to have no bias in its measurements and no tilt errors in its reference frame. The algorithm involves a two-stage process. The first stage estimates the bias of each sensor and removes its effects using that estimate. The second stage uses the `bias-free' sensor measurements to estimate the sensor frame tilt errors. Simulation results are presented to demonstrate the performance of this approach.
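The first stage of such a scheme can be sketched very simply. This is a minimal illustration under the assumption that the bias is additive and that each sensor observes targets common to the bias-free reference radar; the function names are illustrative.

```python
import numpy as np

def estimate_bias(ref_meas, sensor_meas):
    """Stage one: estimate a sensor's additive measurement bias as the
    mean difference from the bias-free reference radar, taken over
    measurements of common targets (rows are targets, columns x/y/z)."""
    return np.mean(np.asarray(sensor_meas) - np.asarray(ref_meas), axis=0)

def remove_bias(sensor_meas, bias):
    """Subtract the estimated bias, producing the 'bias-free' measurements
    fed to the second (tilt-estimation) stage."""
    return np.asarray(sensor_meas) - bias
```

With the bias removed, the residual systematic differences between sensors can be attributed to frame tilt and estimated in the second stage.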
The surveillance of very dim IR targets or `fast-burn' targets, or both, is usually assumed to require the use of either staring sensors or scanning sensors with relatively fast revisit rates. The thesis of this paper is that a greater multiplicity of scanning sensors with relatively slow revisit rates often can be used effectively to achieve the required surveillance results. In particular, we describe conditions under which a dim or fast-burn target can be detected and tracked, and accurate tactical parameters can be computed for the target, by using as few as one observation per sensor per target.
A multispectral passive sensor may be used to measure the range to an object, provided that the emitted light is described by a black (gray) body distribution and the atmospheric attenuation coefficients between receiver and target are known. With additional time-evolved passive measurements, it may be possible to estimate target range and radial velocity with a more limited amount of information on atmospheric channel properties. In this paper we establish limits on the accuracy of range and velocity estimation using selected statistical models and LOWTRAN VII generated atmospheric scenarios. The results presented and the methodology developed are important in determining the limits of infrared sensing and for a critical evaluation of the possible advantage of hybrid passive-active or passive-only sensors in various scenarios.
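The basic two-band ranging idea can be sketched as follows. Assuming gray-body emission with known band radiances at the source and Beer-Lambert attenuation exp(-alpha_k * r) in each band, the ratio of received intensities determines the range; this is an illustrative simplification, not the paper's statistical estimator.

```python
import math

def passive_range(i1, i2, b1, b2, alpha1, alpha2):
    """Range from the two-band intensity ratio.

    Received intensities: i_k = b_k * exp(-alpha_k * r), so
    i1/i2 = (b1/b2) * exp(-(alpha1 - alpha2) * r), which inverts to
    r = ln((b1 * i2) / (b2 * i1)) / (alpha1 - alpha2).
    Requires alpha1 != alpha2 (distinct band attenuations).
    """
    return math.log((b1 * i2) / (b2 * i1)) / (alpha1 - alpha2)
```

For example, with b1 = 2, b2 = 1, alpha1 = 0.1, alpha2 = 0.05 and intensities generated at r = 10, the inversion recovers r exactly; with noisy intensities it becomes a point estimate whose accuracy the paper's analysis bounds.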
In a multiple target environment, hit-to-track data association is important for properly tracking targets in view. Track monitoring is a fast method of determining whether the proper data association has been made. This paper presents track monitoring algorithms for both single and multiple passive 2-D sensors, along with analytical methods of evaluating their effectiveness. Single-sensor track monitoring produces a dilemma: a poor track may be the result of either target maneuver or incorrect hit-to-track data association. The algorithm presented for multiple 2-D passive sensors resolves this by adding inclination angle monitoring. The multiple-sensor monitoring system can distinguish between target maneuver, incorrect hit-to-track data association, and the additional problem, with multiple sensors, of incorrect track-to-track association, or `ghost' tracks. These monitoring methods can be used to prune poor hypotheses in MHT filters, or in single-assignment tracking when processor power is limited.
Both high-PRF and medium-PRF waveforms can cancel clutter and provide an estimate of the target's radial speed. This paper considers the track initiation process using imperfect velocity estimates.
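A simple way velocity estimates can aid track initiation is as a two-point gate: a pairing of detections is accepted only if the observed range change is consistent with the measured range rate. The sketch below is an illustrative assumption about how such a gate might look, not the paper's actual test.

```python
def velocity_gate(r1, r2, rdot, dt, sigma_r, sigma_rdot, k=3.0):
    """Two-point track-initiation gate using a noisy radial-speed estimate.

    Accept the pairing if the observed range change (r2 - r1) lies within
    k standard deviations of the change predicted by the measured range
    rate, assuming independent Gaussian range and range-rate errors.
    """
    predicted = rdot * dt
    # 1-sigma spread of (r2 - r1) - rdot*dt under the independence assumption
    sigma = (2.0 * sigma_r**2 + (dt * sigma_rdot)**2) ** 0.5
    return abs((r2 - r1) - predicted) <= k * sigma
```

An imperfect velocity estimate widens sigma, letting more false pairings through; the trade-off between missed initiations and false tracks is what the velocity accuracy governs.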
The central problem in multitarget-multisensor tracking is the data association problem of partitioning the observations into tracks and false alarms so that an accurate estimate of the true tracks can be recovered. Many previous and current methodologies are based on single scan processing, which is real-time, but often leads to a large number of partial and incorrect assignments, and thus incorrect track identification. The fundamental difficulty with this approach is that there is simply not enough information in single scan processing to properly partition the observations into tracks and false alarms. In this work we formulate the problem of data association for track initiation and extension using multiscan windows in order to obtain superior track identification. A model problem is investigated to show the effect of window size, probability of detection, probability of false alarms, and measurement error on solution quality and timings.
Many large, complex target tracking scenarios, such as full-scale strategic missile attacks or low-observable tactical engagements, require both advanced algorithms and state-of-the-art parallel processing to produce accurate, timely results. In this paper, approaches for implementing multiple object, multiple hypothesis tracking algorithms on a massively parallel computer are presented and evaluated. Multiple hypothesis tracking algorithms offer improved performance over more traditional approaches, albeit at the expense of increased processing and storage requirements. Massively parallel array processors can deliver this needed computational power, assuming the algorithms can be efficiently mapped onto this restrictive class of architectures. Algorithms are described here for all the functions within the multiple hypothesis approach. These algorithms are then benchmarked using the distributed array of processors (DAP) series from Active Memory Technology, Inc. Results of these benchmarks show that the multiple hypothesis tracking algorithms can be successfully implemented on array processors, displaying processing times that increase sublinearly with the number of objects under surveillance.
This paper describes an overall methodology for the application of a multiple hypothesis tracking (MHT) algorithm to the IR surveillance system problem of tracking dim targets in a heavy clutter or false alarm background. First, it discusses the manner in which the detection and tracking systems are jointly designed to optimize performance. Next, it presents approximate methods that can conveniently be used for preliminary system design and performance prediction. Finally, it discusses the use of a detailed Monte Carlo simulation for final system evaluation and presents results illustrating the proposed methods and comparing predicted and simulated performance.
The DARPA multi-spectral infrared camera (MUSIC) was used for a series of experiments in Australia and Maui, Hawaii in 1991. The Maui experiments, conducted from a high mountain, concentrated on the detection of aircraft. The detection of air vehicles without the use of temporal motion (such as the case of a head-on approaching air vehicle) is a challenging problem when background clutter is present. The technique investigated was not dependent upon either the angular motion or the spectral signature of the aircraft. This approach exploits the differential transmission of the atmosphere in neighboring long wave infrared bands. This differential transmission between the target and background `colors' the background relative to the target and allows its removal. This technique was demonstrated on many examples of MUSIC data collected in Maui, Hawaii. Targets approaching the sensor head-on were successfully detected against clouds and other backgrounds using spectral along with spatial techniques. Several different algorithms were investigated and results are compared.
In the pursuit of detecting ever smaller and dimmer objects, the track-before-detect methodology has been employed to integrate the target energy through a time sequence of frames. Current track-before-detect algorithms maintain a path statistic for each potential object trajectory. This set of trajectories is usually trimmed to keep the number of statistics manageable, and it becomes difficult to characterize the performance of the detector. In this paper we propose a pixel-based statistic rather than a path-based statistic and use it in a track-before-detect algorithm for a class of trajectories constrained only by a maximum target velocity.
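A pixel-based statistic of this kind can be sketched as a dynamic-programming recursion: each pixel keeps the best accumulated energy of any velocity-constrained trajectory ending there, so the storage is one number per pixel rather than one per path. This is an illustrative sketch of the idea, not the paper's exact algorithm; for brevity it uses circular shifts, so image borders wrap around.

```python
import numpy as np

def pixel_tbd(frames, vmax=1):
    """Pixel-based track-before-detect.

    S_t(p) = frame_t(p) + max over pixels q within vmax (Chebyshev) of
    S_{t-1}(q): the best energy of any trajectory ending at p whose
    per-frame motion is at most vmax pixels per axis.
    """
    S = np.array(frames[0], dtype=float)
    for frame in frames[1:]:
        prev = np.full_like(S, -np.inf)
        # max-filter S over the (2*vmax+1)^2 velocity-constrained neighborhood
        # (np.roll wraps at the borders; fine for this sketch)
        for dy in range(-vmax, vmax + 1):
            for dx in range(-vmax, vmax + 1):
                shifted = np.roll(np.roll(S, dy, axis=0), dx, axis=1)
                prev = np.maximum(prev, shifted)
        S = np.asarray(frame, dtype=float) + prev
    return S  # threshold S to declare detections
```

Because the recursion takes a maximum per pixel, the statistic's distribution is the same for every pixel under homogeneous noise, which is what makes the detector's performance tractable to characterize.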
This paper investigates the application of the total least squares (TLS) technique in conditioning optical flow field estimates obtained from gradient-based optical flow constraints. Optical flow field processing has been applied to perform moving target indication (MTI) for IR/TV sensors, but results can be severely degraded in noisy imagery. The usual solution is to apply some form of nonstatistical pre-processing to the input image intensities, or statistical post-processing spatial smoothing, such as least squares (LS) fitting, to the output optical flow field vectors to suppress noise. However, the LS solution is known to generate biased optical flow field vector estimates in noisy imagery due to noise in the spatial gradient matrix. Our empirical results show improved performance of TLS over LS at lower SNRs. Results are presented in terms of optical flow field accuracy measures and target detection rates, for synthetic imagery and real infrared imagery.
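The standard TLS solution of the gradient constraint Ix*u + Iy*v + It = 0 over a window can be sketched via the SVD: unlike LS, which treats only It as noisy, TLS takes the singular vector of the smallest singular value of the full data matrix, so errors in the spatial gradients are accounted for as well. This is a generic sketch of the TLS formulation, not necessarily the paper's exact implementation.

```python
import numpy as np

def tls_flow(Ix, Iy, It):
    """Total-least-squares optical flow for one window.

    Stacks the gradient-constraint rows [Ix, Iy, It] and returns the flow
    (u, v) from the homogeneous direction (u, v, 1) given by the right
    singular vector of the smallest singular value.
    """
    M = np.column_stack([np.ravel(Ix), np.ravel(Iy), np.ravel(It)])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    n = Vt[-1]            # direction minimizing ||M n|| over unit vectors
    return n[:2] / n[2]   # normalize so the It component is 1
```

With noise-free gradients the smallest singular value is zero and the recovery is exact; under gradient noise TLS remains consistent where LS is biased, which matches the low-SNR behavior reported above.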
This paper presents an overview of new classes of algorithms for the solution of combinatorial optimization problems arising in data association. These algorithms are based on extensions of Bertsekas' auction algorithm. The paper includes experimental results using these algorithms, and a modification of the algorithm of Jonker and Volgenant, on 2-dimensional measurement-track data association problems in the presence of false alarms, missed detections, sensor bias noise, and sensor measurement noise. The results indicate that some of the new algorithms are very efficient for the solution of these data association problems.
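The basic forward auction algorithm underlying these extensions can be sketched compactly: unassigned "persons" (tracks) bid for their most valuable "objects" (measurements), raising prices by the bid increment until everyone is assigned. This is a minimal textbook sketch for a square benefit matrix (n >= 2), not one of the paper's extended variants.

```python
import numpy as np

def auction(benefit, eps=1e-3):
    """Bertsekas-style forward auction for the n x n assignment problem.

    Maximizes the total benefit; with eps smaller than the minimum
    benefit-difference scale divided by n, the result is optimal.
    Returns assigned[i] = object won by person i.
    """
    n = benefit.shape[0]
    prices = np.zeros(n)
    owner = -np.ones(n, dtype=int)      # object -> current owner
    assigned = -np.ones(n, dtype=int)   # person -> object
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = benefit[i] - prices            # net value of each object to i
        j = int(np.argmax(values))
        best, second = np.partition(values, -2)[-2:][::-1]
        prices[j] += best - second + eps        # bid: raise the price
        if owner[j] >= 0:                       # evict the previous owner
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j] = i
        assigned[i] = j
    return assigned
```

In the data association setting, benefit[i, j] is typically a log-likelihood score for pairing track i with measurement j, with dummy rows/columns handling false alarms and missed detections.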
Evaluating performance of tracking algorithms is straightforward for a simulation with a single target and one computed track. Performance evaluation with multiple targets, on the other hand, is complex due to ambiguities that create confusion about which target goes with which track. The ambiguities are caused by misassociations or unresolved closely spaced objects. Various considerations in choosing a methodology for performance evaluation to handle these ambiguities are discussed. An approach to assigning tracks to targets is described that takes these considerations into account.
The correlation and tracking problem for ballistic objects is a major concern for the Strategic Defense Initiative (SDI). In this presentation, the use of angular momentum for gating and assignment of reports and tracks of ballistic objects is proposed. The various errors, and the appropriate statistical tests that could be applied to angular momentum to potentially improve tracking and correlation, multiple-hypothesis tracking, and Kalman filtering, are studied. The investigation is performed in earth-centered inertial Cartesian coordinates for position, velocity, and acceleration in tracking, gating, and assignment. A discussion is given of the investigation, the theory developed, the results of numerical tests, and the conclusions.
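The physical basis of such a gate can be sketched directly: the specific angular momentum h = r x v is conserved along a Keplerian trajectory, so two state reports on the same ballistic object should agree in h regardless of when they were taken. The threshold test below is an illustrative simplification of the statistical tests studied in the paper.

```python
import numpy as np

def angular_momentum_gate(r1, v1, r2, v2, tol):
    """Gate two ballistic-object state reports (position r, velocity v,
    earth-centered inertial Cartesian coordinates) by comparing their
    specific angular momentum h = r x v, conserved on a Keplerian orbit.
    Disagreement beyond tol rejects pairing the reports."""
    h1 = np.cross(r1, v1)
    h2 = np.cross(r2, v2)
    return bool(np.linalg.norm(h1 - h2) <= tol)
```

In practice tol would be derived from the measurement error covariances propagated through the cross product, giving a chi-square-type test rather than a fixed threshold.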