KEYWORDS: Signal to noise ratio, Sensors, Signal detection, Interference (communication), Clouds, Particles, Long wavelength infrared, Terbium, Signal attenuation, Target detection
The physical model for long wave infrared (LWIR) thermal imaging through a dust obscurant incorporates
transmission loss as well as an additive path radiance term, both of which depend on the obscurant
density along the imaging path. When the obscurant density varies in time and space, the desired signal
is degraded by two anti-correlated atmospheric noise components, the transmission (multiplicative) and the
path radiance (additive), which are not accounted for by a single transmission parameter. This research
introduces an approach to modeling the performance impact of dust obscurant variations. Effective noise
terms are derived for obscurant variations detected by a sensor via a forward radiometric analysis of the
imaging context. The noise parameters derived here provide a straightforward approach to predicting imager
performance with existing NVESD models such as NVThermIP.
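The two-term obscurant model described above can be sketched numerically. In this sketch, the Beer-Lambert transmission form, the emissive closure for path radiance, and all parameter values are illustrative assumptions, not NVESD model code:

```python
import numpy as np

# Sensor-reaching radiance through a dust obscurant:
#   L_sensor = tau * L_target + L_path,
# where both tau (transmission) and L_path depend on obscurant density rho.

def beer_lambert_tau(rho, sigma, path_len):
    """Transmission via Beer-Lambert: tau = exp(-sigma * rho * L)."""
    return np.exp(-sigma * rho * path_len)

def path_radiance(rho, sigma, path_len, L_obsc):
    """Assumed emissive closure: the obscurant radiates what it absorbs,
    L_path = (1 - tau) * L_obsc."""
    return (1.0 - beer_lambert_tau(rho, sigma, path_len)) * L_obsc

def sensor_radiance(L_target, rho, sigma=0.5, path_len=1.0, L_obsc=30.0):
    """Total radiance at the sensor (all parameter values illustrative)."""
    tau = beer_lambert_tau(rho, sigma, path_len)
    return tau * L_target + path_radiance(rho, sigma, path_len, L_obsc)

# A fluctuating density rho produces a multiplicative and an additive noise
# term that are anti-correlated: when tau drops, path radiance rises.
rho = np.random.default_rng(0).gamma(shape=4.0, scale=0.25, size=10000)
tau = beer_lambert_tau(rho, 0.5, 1.0)
L_path = path_radiance(rho, 0.5, 1.0, 30.0)
corr = np.corrcoef(tau, L_path)[0, 1]  # strongly negative
```

Under this closure the two terms are exactly linearly anti-correlated; with a more general obscurant temperature model the anti-correlation is imperfect but persists.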
KEYWORDS: Targeting Task Performance metric, Eye, Systems modeling, Infrared imaging, Imaging systems, Analytical research, Thermal modeling, Current controlled current source, Contrast transfer function
The ability of observers to identify human activities in noise is expected to differ from performance with static targets in noise due to the unmasking that is provided by target motion. At a minimum, the probability of identification should increase when the temporal bandwidth of the noise is less than that of the system. Results from a human activities identification experiment are presented in this paper, along with results from a moving character experiment that is intended to provide better understanding of basic motion in noise with varied temporal bandwidth. These results along with further experiments and analysis will eventually be used to improve performance predictions derived from the Targeting Task Performance (TTP) metric.
This paper presents object profile classification results using range- and speed-independent features from an infrared
profiling sensor. The passive infrared profiling sensor was simulated using an LWIR camera. A field data collection near the
US-Mexico border that yielded profiles of humans and animals is reported. Range- and speed-independent features based on
the height and width of the objects were extracted from the profiles. The profile features were then used to train and test three
classification algorithms to classify objects as humans or animals. The performance of Naïve Bayesian (NB), K-Nearest
Neighbors (K-NN), and Support Vector Machine (SVM) classifiers is compared based on classification accuracy. Results
indicate that for our data set all three algorithms achieve classification rates of over 98%. The field data is also used to
validate our prior data collections from more controlled environments.
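The three-classifier comparison described above can be sketched with scikit-learn on synthetic height/width features. The feature distributions and every parameter below are invented for illustration; they are not the collected field data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Assumed toy distributions (arbitrary units): humans taller and narrower,
# animals shorter and wider.
humans = np.column_stack([rng.normal(1.7, 0.10, 200),   # height
                          rng.normal(0.5, 0.10, 200)])  # width
animals = np.column_stack([rng.normal(0.9, 0.15, 200),
                           rng.normal(1.2, 0.20, 200)])
X = np.vstack([humans, animals])
y = np.array([1] * 200 + [0] * 200)  # 1 = human, 0 = animal

# Compare NB, K-NN, and SVM by cross-validated classification accuracy.
scores = {}
for name, clf in [("NB", GaussianNB()),
                  ("K-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
```

On well-separated synthetic classes like these, all three classifiers reach near-perfect accuracy, consistent with the high rates the abstract reports for its field data.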
This paper presents initial object profile classification results using range- and elevation-independent features from a
simulated infrared profiling sensor. The passive infrared profiling sensor was simulated using an LWIR camera. A field
data collection effort that yielded profiles of humans and animals is reported. Range- and elevation-independent features
based on the height and width of the objects were extracted from the profiles. The profile features were then used to train and
test four classification algorithms to classify objects as humans or animals. The performance of Naïve Bayesian (NB),
Naïve Bayesian with Linear Discriminant Analysis (LDA+NB), K-Nearest Neighbors (K-NN), and Support Vector
Machines (SVM) is compared based on classification accuracy. Results indicate that for our data set SVM and
(LDA+NB) are capable of providing classification rates as high as 98.5%. For perimeter security applications where
misclassification of humans as animals (false negatives) needs to be avoided, SVM and NB provide false negative rates of
0% while maintaining overall classification rates of over 95%.
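The LDA+NB variant mentioned above can be sketched as a pipeline that projects the features onto the single discriminant axis before applying Naïve Bayes. As before, the data and all parameters are illustrative assumptions, not the reported collection:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Assumed toy height/width distributions (arbitrary units).
humans = np.column_stack([rng.normal(1.7, 0.10, 150), rng.normal(0.5, 0.10, 150)])
animals = np.column_stack([rng.normal(0.9, 0.15, 150), rng.normal(1.2, 0.20, 150)])
X = np.vstack([humans, animals])
y = np.array([1] * 150 + [0] * 150)  # 1 = human, 0 = animal

# LDA reduces a two-class problem to one discriminant dimension (n_classes - 1),
# and Naive Bayes then classifies in that 1-D space.
lda_nb = make_pipeline(LinearDiscriminantAnalysis(n_components=1), GaussianNB())
acc = cross_val_score(lda_nb, X, y, cv=5).mean()
```

Projecting onto the discriminant axis removes the feature-independence violation that plain NB suffers when height and width are correlated, which is one plausible reason LDA+NB can outperform NB alone.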
This paper presents progress in image fusion modeling. One fusion quality metric based on the Targeting Task
Performance (TTP) metric and another based on entropy are presented. A human perception test was performed with
fused imagery to determine effectiveness of the metrics in predicting image fusion quality. Both fusion metrics first
establish which of two source images is ideal in a particular spatial frequency pass band. The fused output of a given
algorithm is then measured against this ideal in each pass band. The entropy-based fusion quality metric (E-FQM) uses
statistical information (entropy) from the images, while the Targeting Task Performance fusion quality metric (TTP-FQM)
utilizes the TTP metric value in each spatial frequency band. This TTP metric value is a measure of the available
excess contrast determined by the Contrast Threshold Function (CTF) of the source system and the target contrast. The
paper also proposes an image fusion algorithm that chooses source image contributions using a quality measure similar
to the TTP-FQM. To test the effectiveness of TTP-FQM and E-FQM in predicting human image quality preferences,
SWIR and LWIR imagery of tanks were fused using four different algorithms. A paired comparison test was performed
with both source and fused imagery as stimuli. Eleven observers were asked to select which image enabled them to
better identify the target. Over the ensemble of test images, the experiment showed that both TTP-FQM and E-FQM
were capable of identifying the fusion algorithms most and least preferred by human observers. Analysis also showed
that the TTP-FQM and E-FQM identify human image preferences better than existing
fusion quality metrics such as the Weighted Fusion Quality Index and Mutual Information.
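A band-wise entropy comparison in the spirit of the E-FQM can be sketched as follows. The FFT annulus band decomposition, the histogram entropy estimator, and the scoring rule are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def bandpass(img, lo, hi):
    """Crude spatial-frequency band-pass via an FFT annulus mask.
    lo/hi are radii as fractions of the Nyquist radius."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(yy, xx) / (min(h, w) / 2)
    mask = (r >= lo) & (r < hi)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def e_fqm(src_a, src_b, fused,
          bands=((0.0, 0.25), (0.25, 0.5), (0.5, 1.0))):
    """Per band: take the higher-entropy source as the ideal, then score the
    fused image against that ideal; average the band scores."""
    score = 0.0
    for lo, hi in bands:
        ideal = max(entropy(bandpass(src_a, lo, hi)),
                    entropy(bandpass(src_b, lo, hi)))
        ef = entropy(bandpass(fused, lo, hi))
        score += min(ef / ideal, 1.0) if ideal > 0 else 1.0
    return score / len(bands)
```

The key structural idea from the abstract is preserved: each band's "ideal" is chosen from the two source images, and the fused output is scored only against that ideal, so a fusion algorithm is rewarded for retaining whichever source carries more information in each band.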