Real imagery and video data from cameras are frequently needed to conduct research and experiments for model development, algorithm training, and more. When collecting real imagery and video with cameras in uncontrolled environments, environmental conditions such as temperature and sun angle can change over time and cause the image quality to vary in unpredictable and undesirable ways. Given the limited availability of military targets and test ranges, the large amount of personnel support needed, and the typically high costs of conducting data collections in the field, it is imperative that low-quality data not be unintentionally collected. Moreover, a need exists to increase automation in order to reduce the manpower required during data collections. To address these issues, this paper describes a software utility incorporating various image quality metrics (IQMs) that can automatically monitor the quality of collected imagery and video data at low cost and with minimal modification of the imaging system. As part of the utility, an automated alert algorithm based on a majority vote is discussed, along with the selection of suitable IQMs according to their characteristics and temporal noise filtering for stable decision making. Design criteria for optimal performance of the automated alert algorithm are presented. Also discussed is a practical application scenario that demonstrates the capabilities and limitations of the alert system using both real and synthetic video examples.
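The abstract does not specify which metrics, thresholds, or window lengths the alert system uses; the sketch below, with assumed metric names and threshold ranges, illustrates one way a majority-vote alert with temporal noise filtering could be structured.

```python
import numpy as np

def majority_vote_alert(iqm_history, thresholds, window=5):
    """Flag the current frame as degraded when a majority of IQMs,
    after temporal smoothing, fall outside their acceptable ranges.

    iqm_history : dict of metric name -> list of recent per-frame scores
    thresholds  : dict of metric name -> (low, high) acceptable range
    window      : number of recent frames averaged for noise filtering
    """
    votes = 0
    for name, scores in iqm_history.items():
        # Temporal noise filtering: a single noisy frame should not
        # trigger an alert, so average over the last `window` frames.
        smoothed = float(np.mean(scores[-window:]))
        low, high = thresholds[name]
        if not (low <= smoothed <= high):
            votes += 1
    # Majority vote across the selected IQMs.
    return votes > len(iqm_history) // 2

# Hypothetical usage with assumed metric names and threshold ranges.
history = {"sharpness": [0.61, 0.60, 0.58, 0.41, 0.39],
           "contrast":  [0.72, 0.70, 0.69, 0.52, 0.50],
           "snr_db":    [31.0, 30.5, 30.2, 24.1, 23.8]}
limits = {"sharpness": (0.45, 1.0), "contrast": (0.55, 1.0), "snr_db": (26.0, 60.0)}
alert = majority_vote_alert(history, limits)
```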
Simulation-based training for target acquisition algorithms is an important goal for reducing the cost and risk associated with live data collections. To this end, the US Army Night Vision and Electronic Sensors Directorate (NVESD) has developed high-fidelity virtual scenes of terrains and targets using the DIRSIG model in pursuit of a virtual DRI (detect, recognize, identify) capability. In this study, NVESD developed a neural network (NN) algorithm that can be trained on simulated data to classify targets of interest when presented with real data. This paper discusses the classification performance of the NN algorithm and the potential impact that training with simulated data has on algorithm performance.
KEYWORDS: Information visualization, Visualization, Data processing, Human vision and color perception, Contrast transfer function, Target detection, Eye, Spatial frequencies, Performance modeling, Visual process modeling
The impact noise has on the processing of visual information at various stages within the human visual system (HVS) is still an open research area. To gain additional insight, twelve experiments were administered to human observers using sine wave targets to determine their contrast thresholds. A single frame of additive white Gaussian noise (AWGN) and its complement were used to investigate the effect of noise on the summation of visual information within the HVS. A standard contrast threshold experiment served as the baseline for comparisons. In the standard experiment, a range of sine wave targets was shown to the observers, and their ability to detect the targets at varying contrast levels was recorded. The remaining experiments added some form of noise (the noise image or its complement) and/or an additional sine wave target separated from the test target by one to three octaves. All of these experiments were conducted using either a single monitor for viewing the targets or a dual-monitor presentation method for comparison. In the dual-monitor experiments, a ninety-degree mirror was used to direct each target to a different eye, allowing the information to be fused binocularly. The experiments in this study present different approaches for delivering external noise to the HVS and should allow for an improved understanding of how noise enters the HVS and what impact it has on the processing of visual information.
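As an illustration of the stimulus construction described above, the sketch below generates a sine-wave grating of a given contrast together with a single frame of AWGN and its complement (interpreted here as the noise mirrored about the mean luminance). All sizes, contrasts, and noise levels are assumed values for illustration, not those used in the experiments.

```python
import numpy as np

def sine_target(size=256, cycles=4, contrast=0.10, mean_lum=0.5):
    """Horizontal sine-wave grating with the given Michelson contrast."""
    x = np.arange(size) / size
    row = mean_lum * (1.0 + contrast * np.sin(2.0 * np.pi * cycles * x))
    return np.tile(row, (size, 1))

def awgn_pair(size=256, sigma=0.05, mean_lum=0.5, seed=0):
    """A single frame of AWGN and its complement, mirrored about the mean
    luminance so that the two frames average back to a uniform field."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, (size, size))
    return mean_lum + noise, mean_lum - noise

# One possible condition: the target embedded in the noise frame for one
# eye, with the complementary noise frame presented to the other eye.
target = sine_target(contrast=0.02)
noise_frame, complement_frame = awgn_pair()
left_eye = np.clip(target + (noise_frame - 0.5), 0.0, 1.0)
right_eye = complement_frame
```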
In the use of conventional broadband imaging systems, whether reflective or emissive, scene image contrasts are often so low that target discrimination is difficult or uncertain, and it is contrast that drives human-in-the-loop (HIL) sensor range performance. This situation can occur even when the spectral shapes of the target and background signatures (radiances) across the sensor waveband differ significantly from each other. The fundamental components of broadband image contrast are the spectral integrals of the target and background signatures, and this spectral integration can average away the spectral differences between scene objects. In many low broadband image contrast situations, hyperspectral imaging (HSI) can preserve a greater degree of the intrinsic scene spectral contrast for the display, and more display contrast means greater range performance by a trained observer. This paper documents a study using spectral radiometric signature modeling and the U.S. Army’s Night Vision Integrated Performance Model (NV-IPM) to show how waveband selection by a notional HSI sensor using spectral contrast optimization can significantly increase HIL sensor range performance over conventional broadband sensors.
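The sketch below illustrates the waveband-selection idea in simplified form: given sampled target and background spectral radiances, it compares the full-band (broadband) contrast against the best contiguous sub-band found by exhaustive search. The contrast definition, search strategy, and example spectra are assumptions for illustration and do not reproduce the NV-IPM contrast optimization used in the study.

```python
import numpy as np

def band_contrast(target, background, lo, hi):
    """Contrast of target vs. background radiance integrated over the
    sub-band [lo, hi) of the sampled spectral axis."""
    t = float(np.sum(target[lo:hi]))
    b = float(np.sum(background[lo:hi]))
    return abs(t - b) / (t + b)

def best_subband(target, background, min_width=5):
    """Exhaustive search for the contiguous sub-band that maximizes
    spectral contrast; the full-band value is the broadband baseline."""
    n = len(target)
    broadband = band_contrast(target, background, 0, n)
    best_c, best_lo, best_hi = broadband, 0, n
    for lo in range(0, n - min_width + 1):
        for hi in range(lo + min_width, n + 1):
            c = band_contrast(target, background, lo, hi)
            if c > best_c:
                best_c, best_lo, best_hi = c, lo, hi
    return broadband, (best_c, best_lo, best_hi)

# Hypothetical spectra whose shapes differ but whose integrals nearly match,
# so broadband contrast is low while a narrow sub-band preserves contrast.
wavelengths = np.linspace(8.0, 12.0, 80)   # notional LWIR samples, microns
target_rad = 1.0 + 0.3 * np.exp(-((wavelengths - 9.0) ** 2) / 0.1)
backgr_rad = 1.0 + 0.3 * np.exp(-((wavelengths - 11.0) ** 2) / 0.1)
broadband, best = best_subband(target_rad, backgr_rad)
```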
In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test it with algorithms not only improves the system design process but also greatly reduces development cost.
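The NV-IPM image generation tool itself is not reproduced here; the sketch below is a minimal stand-in showing the three degradation classes named above (blur, sampling, and noise) applied in a controlled way to a floating-point image, which is how a degraded face database could be parameterized for algorithm testing. All parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, blur_sigma=1.5, noise_sigma=0.02, downsample=2, seed=0):
    """Apply controlled blur, sampling, and noise to an image in [0, 1].
    The parameter values are illustrative, not calibrated camera effects."""
    rng = np.random.default_rng(seed)
    out = gaussian_filter(image.astype(float), blur_sigma)    # optics/system blur
    out = out[::downsample, ::downsample]                     # detector sampling
    out = out + rng.normal(0.0, noise_sigma, out.shape)       # sensor noise
    return np.clip(out, 0.0, 1.0)

# A sweep over degradation levels produces a controlled image database that
# a face recognition algorithm can then be scored against.
face = np.random.default_rng(1).random((128, 128))            # placeholder image
database = [degrade(face, blur_sigma=s) for s in (0.5, 1.0, 2.0, 4.0)]
```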
Major decisions regarding life and death are routinely made on the modern battlefield, where the visual function of the individual soldier can be of critical importance in the decision-making process. Glasses in the combat environment have considerable disadvantages: degradation of short-term visual performance can occur as dust and sweat accumulate on lenses during a mission or patrol; long-term visual performance can diminish as lenses become increasingly scratched and pitted; and during periods of intense physical trauma, glasses can be knocked off the soldier’s face and lost or broken. Although refractive surgery offers certain benefits on the battlefield when compared to wearing glasses, it is not without potential disadvantages. As a byproduct of refractive surgery, elevated optical aberrations can be induced, causing decreases in contrast sensitivity and increases in the symptoms of glare, halos, and starbursts. Typically, these symptoms occur under low light level conditions, the same conditions under which most military operations are initiated. With the advent of wavefront aberrometry, we are now seeing correction not only of myopia and astigmatism but also of other, smaller optical aberrations that can cause the above symptoms. In collaboration with the Warfighter Refractive Eye Surgery Program and Research Center (WRESP-RC) at Fort Belvoir and Walter Reed National Military Medical Center (WRNMMC), the overall objective of this study is to determine the impact of wavefront-guided (WFG) versus wavefront-optimized (WFO) photorefractive keratectomy (PRK) on military task visual performance. Psychophysical perception testing was conducted before and after surgery to measure each participant’s performance in target detection and identification using thermal imagery. The results are presented here.
A clear and absolute method for discriminating between the performances of image fusion algorithms is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance with image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark then clearly represents the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process, optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
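NVThermIP itself is not available in this sketch, so the model is represented by a placeholder callable, and the paper's mixed-integer constraints are simplified to continuous bounds; the point is only to show the shape of the optimization, namely an evolutionary search over fused-system parameters that minimizes the gap between the modeled task-difficulty measure and the benchmark.

```python
import numpy as np

def objective(params, benchmark_v50, task_difficulty_model):
    """Gap between the modeled task-difficulty measure (V50) for a candidate
    fused-system parameter set and the benchmark V50 to be matched."""
    return abs(task_difficulty_model(params) - benchmark_v50)

def evolutionary_search(task_difficulty_model, benchmark_v50, bounds,
                        pop=40, gens=50, seed=0):
    """Bare-bones evolutionary loop standing in for the Matlab GA: keep the
    better half of the population each generation and mutate it."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    population = rng.uniform(lo, hi, (pop, len(bounds)))
    for _ in range(gens):
        scores = np.array([objective(p, benchmark_v50, task_difficulty_model)
                           for p in population])
        elite = population[np.argsort(scores)[:pop // 2]]
        children = elite + rng.normal(0.0, 0.05, elite.shape) * (hi - lo)
        population = np.clip(np.vstack([elite, children]), lo, hi)
    scores = np.array([objective(p, benchmark_v50, task_difficulty_model)
                       for p in population])
    return population[np.argmin(scores)]

# Purely hypothetical stand-in for NVThermIP: maps three notional system
# parameters to a task-difficulty value.
toy_model = lambda p: 2.0 + 0.8 * p[0] - 0.3 * p[1] + 1.5 * p[2]
best_params = evolutionary_search(toy_model, benchmark_v50=3.0,
                                  bounds=[(0.5, 3.0), (1.0, 4.0), (0.0, 1.0)])
```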
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with respect to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to the perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.
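As a concrete example of the kind of information-theory image metric referred to above, the sketch below estimates the mutual information between a fused image and one of its source bands from a joint gray-level histogram. The bin count is an assumed value, and this is not necessarily the exact formulation used in the study.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two images estimated from their joint
    gray-level histogram (natural log, so the result is in nats)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# A fused image can then be scored against each source band, e.g.
# MI(fused, band_a) + MI(fused, band_b), and compared against the benchmark.
```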
Often, various amounts of complementary information exist when imagery of the same scene is captured in different spectral bands. Image fusion should merge the available information within the source images into a single fused image that contains more relevant information than any single source image. The benefits of image fusion are more readily seen when the source images contain complementary information. Intuitively, complementary information allows for measurable improvements in human task performance. However, quantifying the effect complementary information has on fusion algorithms remains an open research question. The goal of this study is to quantify the effect of complementary information on image fusion algorithm performance. Algorithm performance is assessed using a new performance metric based on mutual information. Human perception experiments are conducted using controlled amounts of complementary information as input to a simple fusion process, establishing the relationship between complementary information and task performance. The results of this study suggest that a correlation exists between the proposed metric and identification task performance.
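The abstract does not state how complementary information was controlled, so the sketch below is only one illustrative possibility: a scene is split into two source images that share a chosen fraction of pixels, with the remainder divided between them, before being passed through a simple pixel-averaging fusion.

```python
import numpy as np

def controlled_sources(scene, shared_fraction=0.5, seed=0):
    """Split a scene into two sources that share `shared_fraction` of its
    pixels; the rest is divided between them, so a lower shared fraction
    means more complementary information between the sources."""
    rng = np.random.default_rng(seed)
    shared = rng.random(scene.shape) < shared_fraction
    give_to_a = rng.random(scene.shape) < 0.5
    src_a = np.where(shared | (~shared & give_to_a), scene, 0.0)
    src_b = np.where(shared | (~shared & ~give_to_a), scene, 0.0)
    return src_a, src_b

def average_fuse(src_a, src_b):
    """The simple fusion process: pixel averaging of the two sources."""
    return 0.5 * (src_a + src_b)

scene = np.random.default_rng(1).random((128, 128))    # placeholder scene
src_a, src_b = controlled_sources(scene, shared_fraction=0.25)
fused = average_fuse(src_a, src_b)
```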
The performance of image fusion algorithms is evaluated using image fusion quality metrics and observer performance in identification perception experiments. Image-intensified (I2) and LWIR images are used as the inputs to the fusion algorithms. The test subjects are tasked to identify potentially threatening handheld objects in both the original and fused images. The metrics used for evaluation are mutual information (MI), fusion quality index (FQI), weighted fusion quality index (WFQI), and edge-dependent fusion quality index (EDFQI). Some of the fusion algorithms under consideration are based on Peter Burt's Laplacian pyramid, Toet's ratio of low pass (RoLP, or contrast ratio), and Waxman's opponent processing. Also considered in this paper are pixel averaging, superposition, multi-scale decomposition, and the shift-invariant discrete wavelet transform (SIDWT). The fusion algorithms are compared using human performance in an object-identification perception experiment. The observer responses are then compared to the image fusion quality metrics to determine the amount of correlation, if any. The results of the perception test indicated that the opponent processing and ratio of contrast algorithms yielded the greatest observer performance on average. Task difficulty (V50) associated with the I2 and LWIR imagery for each fusion algorithm is also reported.
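For reference, the sketch below shows a simplified pyramid-based fusion of the kind attributed to Burt above: the larger-magnitude detail coefficient is kept at each level and the low-pass residuals are averaged. It is a generic stand-in under assumed level counts and filter settings, not the specific implementations evaluated in the experiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid by repeated blur-and-downsample."""
    pyr, current = [], img.astype(float)
    for _ in range(levels - 1):
        down = gaussian_filter(current, 1.0)[::2, ::2]
        up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyr.append(current - up)          # band-pass detail
        current = down
    pyr.append(current)                   # low-pass residual
    return pyr

def fuse_pyramid(img_a, img_b, levels=4):
    """Keep the stronger detail coefficient per level, average the residuals,
    then collapse the fused pyramid back to an image."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(pa[:-1], pb[:-1])]
    out = 0.5 * (pa[-1] + pb[-1])
    for d in reversed(details):
        out = zoom(out, 2, order=1)[:d.shape[0], :d.shape[1]] + d
    return out
```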
A perception test determined which of several image fusion metrics best correlates with relative observer preference. Many fusion techniques and fusion metrics have been proposed, but there is a need to relate them to a human observer's measure of image quality. LWIR and MWIR images were fused using techniques based on the Discrete Wavelet Transform (DWT), the Shift-Invariant DWT (SIDWT), Gabor filters, pixel averaging, and Principal Component Analysis (PCA). Two different sets of fused images were generated from urban scenes. The quality of the fused images was then measured using the mutual information metric (MINF), fusion quality index (FQI), edge-dependent fusion quality index (EDFQI), weighted fusion quality index (WFQI), and the mean-squared errors between the fused and source images (MS(F-L), MS(F-M)). A paired-comparison perception test determined how observers rated the relative quality of the fused images. The observers based their decisions on the noticeable presence or absence of information, blur, and distortion in the images. The observer preferences were then correlated with the fusion metric outputs to see which metric best represents observer preference. The results of the paired-comparison test show that the mutual information metric most consistently correlates well with the measured observer preferences.
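The sketch below shows, with clearly hypothetical numbers, how paired-comparison outcomes can be reduced to a per-image preference score and correlated with a fusion metric's outputs; the study's actual preference scaling and correlation analysis is not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

def preference_scores(win_matrix):
    """Fraction of comparisons each image won, where win_matrix[i, j] is the
    number of times image i was preferred over image j."""
    wins = win_matrix.sum(axis=1)
    losses = win_matrix.sum(axis=0)
    return wins / (wins + losses)

# Hypothetical data: 4 fused images, observer wins, and one metric's outputs.
wins = np.array([[0, 8, 9, 7],
                 [2, 0, 6, 5],
                 [1, 4, 0, 3],
                 [3, 5, 7, 0]])
metric_values = np.array([1.42, 1.10, 0.95, 1.20])    # e.g., MINF per image
r, p = pearsonr(preference_scores(wins), metric_values)
```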