The Photonics Project is a set of web-based calculation tools for educational and analytical use. The tools are primarily Python-based notebooks that run as web apps, so users need no programming knowledge and no software installation. There will also be some tools, showing full mathematical notation, that are Mathcad-based and require the free Mathcad plug-in. The calculations primarily follow the equations presented in the Infrared and Electro-Optical Systems textbook by Ron Driggers et al. (second edition). They encompass a suite of radiometric, optical, and other photonic functionality. Further efforts are ongoing, including active-imaging and photonics-devices pages. Like Python itself, the site is open to suggestions and collaboration from users, including submission of further tools and functionality, and it is totally free of charge to all users.
Typically, research papers on search assume that target acquisition is described by an exponential distribution. We investigate when this assumption is valid. It is obvious that two people are more effective than one person at finding a target, but how can that be quantified? The network imaging sensor (NIS) and time-dependent search parameter (TDSP) models quantify how much more effective multiple observers are at finding a target than a single individual for a wide variety of scenarios. We reference and summarize evidence supporting the NIS and TDSP models and demonstrate how NIS model results can be expressed in terms of a reduced hyperexponential distribution for scenarios where observer and target are stationary. Target acquisition probabilities are determined by analysis and confirmed by computer simulations and perception experiments. Search by multiple stationary observers looking for a stationary target is described by the hyperexponential distribution, and such stationary scenarios are more accurately modeled by hyperexponential than by exponential distributions. Hyperexponential distributions are an example of the phase-type distributions used in queuing and in the performance evaluation of computer networks and systems. The observation that search, queuing, and computer networks share phase-type distributions facilitates cross-fertilization between these fields.
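The hyperexponential form mentioned above can be sketched numerically. This is a minimal illustration, not the paper's implementation: a hyperexponential distribution is a probability-weighted mixture of exponentials, and sampling it is a two-stage (phase-type) process — pick a branch, then draw an exponential time. The weights and rates below are arbitrary illustrative values.

```python
import math
import random

def hyperexp_cdf(t, weights, rates):
    """Hyperexponential CDF: F(t) = sum_i p_i * (1 - exp(-lam_i * t)).
    weights sum to 1; rates are per unit time."""
    return sum(p * (1.0 - math.exp(-lam * t)) for p, lam in zip(weights, rates))

def hyperexp_mean(weights, rates):
    """Mean of the mixture: sum_i p_i / lam_i."""
    return sum(p / lam for p, lam in zip(weights, rates))

def hyperexp_sample(weights, rates, rng=random):
    """Phase-type sampling: choose a branch by weight, then draw an
    exponential time at that branch's rate."""
    lam = rng.choices(rates, weights=weights, k=1)[0]
    return rng.expovariate(lam)

# Illustrative two-branch mixture: mean = 0.3*5 + 0.7*20 = 15.5
w, lam = [0.3, 0.7], [1 / 5.0, 1 / 20.0]
print(hyperexp_mean(w, lam))  # 15.5
```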
Reconnaissance from an unmanned aerial system (UAS) is often done using video presentation. An alternate method is
Serial Visual Presentation (SVP). In SVP, a static image remains in view until replaced by a new image at a rate
equivalent to the live video. Mardell et al. have shown, in a forested environment, that a higher fraction of targets
(people lost in the forest) are found with SVP than with video presentation. Here Mardell’s experiment is repeated for
military targets in forested terrain at a fixed altitude. We too find a higher fraction of targets are found using SVP rather
than video presentation. Typically it takes about five seconds to cover a video field of view; at 30 frames per second,
this implies that, for scenes where the target is not moving, 150 video images have nearly identical information (from a
reconnaissance point of view) as a single SVP image. This is highly significant since transmission bandwidth is a
limiting factor for most UASs. Finding targets in video or in SVP is an arduous task. For that reason we also compare
aided target detection performance (Aided SVP) and unaided target detection performance on SVP images.
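The bandwidth claim above is simple arithmetic; a small sketch using the figures stated in the abstract (the nominal reduction factor ignores video compression, which narrows the gap in practice):

```python
# Figures taken from the abstract: ~5 s to cover a video field of view
# at 30 frames/s means ~150 video frames carry nearly the same
# reconnaissance information as one SVP image.
seconds_per_fov = 5
frames_per_second = 30
video_frames_per_fov = seconds_per_fov * frames_per_second
print(video_frames_per_fov)  # 150

# Nominal reduction in transmitted frames: one SVP image per field of view.
reduction = video_frames_per_fov / 1
print(f"nominal frame reduction: {reduction:.0f}x")  # 150x
```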
This paper deals with three separate topics. 1) The Berek extended object threshold detection model is described,
calibrated against a portion of Blackwell’s 1946 naked eye threshold detection data for extended objects against an
unstructured background, and then the remainder of Blackwell’s data is used to verify and validate the model. A range
equation is derived from Berek’s model which allows threshold detection range to be predicted for extended to point
objects against an uncluttered background as a function of target size and adapting luminance levels. The range
equation is then used to model threshold detection of stationary reflective and self-luminous targets against an uncluttered
background. 2) There is uncertainty whether Travnikova’s search data for point source detection against an
uncluttered background is described by Rayleigh or exponential distributions. A model which explains the Rayleigh
distribution for barely perceptible objects and the exponential distribution for brighter objects is given. 3) A technique
is presented which allows a specific observer’s target acquisition capability to be characterized. Then a model is
presented which describes how individual target acquisition probability grows when a specific observer or
combination of specific observers search for targets. Applications for the three topics are discussed.
The Networked Imaging Sensor (NIS) model takes as input target acquisition probability as a function of time for
individuals or individual imaging sensors, and outputs target acquisition probability for a collection of imaging
sensors and individuals. System target acquisition takes place the moment the first sensor or individual acquires
the target. The derivation of the NIS model implies it is applicable to multiple moving sensors and targets. The
principal assumption of the NIS model is independence of events that give rise to input target acquisition
probabilities. For investigating the validity of the NIS model, we consider a collection of single images where
neither the sensor nor target is moving. This paper investigates the ability of the NIS model to predict system
target acquisition performance when multiple observers view first- and second-generation thermal field-of-view
imagery that contains either zero or one stationary target in a laboratory environment and observers have a maximum
of 12, 17 or unlimited seconds to acquire the target. Modeled and measured target acquisition performance are
in good agreement.
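The NIS combination rule described above can be sketched in a few lines. This is an illustrative reading of the model, assuming each observer-sensor pair follows the classic P(t) = P∞(1 − e^(−t/τ)) growth and that observers act independently; the (P∞, τ) pairs and time limits below are hypothetical.

```python
import math

def p_single(t, p_inf, tau):
    """Single-observer acquisition probability: P(t) = P_inf*(1 - exp(-t/tau))."""
    return p_inf * (1.0 - math.exp(-t / tau))

def p_nis(t, sensors):
    """NIS rule: the system acquires the target the moment the FIRST
    independent observer does, so P_sys(t) = 1 - prod_i (1 - P_i(t))."""
    miss = 1.0
    for p_inf, tau in sensors:
        miss *= 1.0 - p_single(t, p_inf, tau)
    return 1.0 - miss

# Hypothetical (P_inf, tau) pairs; time limits echo the experiment's
# 12 s / 17 s / unlimited conditions.
sensors = [(0.85, 8.0), (0.7, 12.0)]
for t in (12.0, 17.0, 1e9):
    print(t, round(p_nis(t, sensors), 3))
```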
The problem solved in this paper is easily stated: for a scenario with n networked and moving imaging sensors, m moving targets and k independent observers searching imagery produced by the n moving sensors, analytically model system target acquisition probability for each target as a function of time. Information input into the model is the time dependence of P∞ and τ, two parameters that describe observer-sensor-atmosphere-range-target properties of the target acquisition system for the case where neither the sensor nor target is moving. The parameter P∞ can be calculated by the NV-IPM model and τ is estimated empirically from P∞. In this model n, m and k are integers, and k can be less than, equal to or greater than n. Increasing n and k results in a substantial increase in target acquisition probabilities. Because the sensors are networked, a target is said to be detected the moment the first of the k observers declares the target. The model applies to time-limited or time-unlimited search, and to any imaging sensors operating in any wavelength band, provided each sensor can be described by P∞ and τ parameters.
In this paper the mean time to acquire a stationary target by n stationary imaging sensors is computed using probability theory, making use of the well-established result that the detection time for a single imaging sensor is a random variable from an exponential probability density function. Each imaging sensor is characterized by a separate P∞ value, which describes the probability that an observer using that sensor will eventually acquire the target, and a separate τ value, which describes the mean time to acquire the target using that sensor. There is no restriction on the wavelength band used by the imaging sensor. There are no empirical constants in the model presented here, and the results agree with and generalize previously published equations. The newly developed equations have been verified by numerical simulations and also yield the expected mean detection time for all limiting values of the input parameters. The code used in the numerical simulations is exhibited. For any given scenario, the separate observer-sensor-target parameters P∞ and τ can be estimated using the NV-IPM model or measured in perception experiments. Thus the input parameters needed by the model are generally available. Comparing results presented here with results from war game simulations such as OneSAF may improve the quality of both products.
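A minimal Monte Carlo version of the simulation idea described above (the paper exhibits its own code; this sketch is not that code). Each sensor eventually acquires the target with probability P∞ and, given acquisition, its detection time is drawn from an exponential density with mean τ:

```python
import random
import statistics

def simulate_mean_detection_time(sensors, trials=200_000, rng=random.Random(1)):
    """Monte Carlo estimate of the mean time until the FIRST of n stationary
    sensors acquires a stationary target. Each (p_inf, tau) sensor detects
    eventually with probability p_inf; given detection, its time is
    exponential with mean tau. Trials with no detection are excluded."""
    times = []
    for _ in range(trials):
        t_min = None
        for p_inf, tau in sensors:
            if rng.random() < p_inf:
                t = rng.expovariate(1.0 / tau)
                if t_min is None or t < t_min:
                    t_min = t
        if t_min is not None:
            times.append(t_min)
    return statistics.fmean(times)

# Sanity checks: one sure sensor gives mean ~ tau; two identical sure
# sensors give mean ~ tau/2 (minimum of two iid exponentials).
print(simulate_mean_detection_time([(1.0, 10.0)]))               # ~10
print(simulate_mean_detection_time([(1.0, 10.0), (1.0, 10.0)]))  # ~5
```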
The probability P(t) of target acquisition, for a single observer who has unlimited time to search a field of view (FOV) for a single target, is expressed in terms of search parameters P∞ and τ under conditions where these parameters are independent of time. It is assumed that P∞ has been determined for a particular target, scene clutter and imaging system and that, for a given scenario, τ is determined empirically from P∞. The equation for P(t) is then extended to include time-limited search and field of regard (FOR) search, where it is assumed the target has an equal probability of being anywhere in the FOR. Equations are derived for the mean time to find a target for two cases: (1) an arbitrary number of observers using a single sensor search a single FOV or FOR for a single target; (2) two observers using two sensors search independently for a single target. The condition that P∞ and τ be independent of time is relaxed, and this leads to the time-dependent search parameter (TDSP) search model. The TDSP search model is used to calculate P(t) in: (1) search from a moving vehicle, (2) FOR search where the condition that the target has an equal probability of being anywhere in the FOR is relaxed, and (3) multitarget search.
The search problem discussed in this paper is easily stated: given search parameters (P∞, τ) that are known
functions of time, calculate how the probability of a single observer to acquire a target grows with time. This
problem was solved analytically in a previous paper. To investigate the validity of the solution, videos generated
using NVIG software show the view from a vehicle traveling at two different speeds along a flat, straight road.
Small, medium and large sized equilateral triangles with the same gray level as the road but without texture were
placed at random positions on a textured road and military observers were tasked to find the targets. Analysis of this
video in perception experiments yields experimental probability of detection as a function of time. Static perception
tests enabled P∞ and τ to be measured as a function of range for the small, medium and large triangles. Since range is a known function of time, P∞ and τ were known as functions of time. This enabled the calculation of modeled
detection probabilities which were then compared with measured detection probabilities.
The problem solved in this paper is easily stated: given search parameters (P∞, τ)
that are known functions of time,
calculate how the probability a single observer acquires a target grows with time. This problem has been solved
analytically. In this paper we describe the analytical solution and provide derivations of the results. Comparison
with perception experiments will be reported in a future publication and hopefully will support the results presented
here. The provided solution is applicable to any scenario where the search parameters are changing with time and
are specified. In particular, the solution can be used to estimate the probability of target acquisition as a function of
time: (1) when the sensor-target range is changing, (2) for a slewed sensor where the target is alternately in and out
of the field of view, and (3) for a sensor that switches between wide and narrow fields of view.
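One way to sketch such a time-varying solution numerically is to integrate an instantaneous acquisition rate λ(t). Note this is an illustration only: the identification of λ(t) with the paper's time-dependent (P∞, τ) parameters is an assumption, not the paper's derivation.

```python
import math

def p_of_t(rate, t, steps=10_000):
    """Detection probability from a time-varying acquisition rate lambda(s):
    P(t) = 1 - exp(-integral_0^t lambda(s) ds), trapezoid rule.
    For a constant rate 1/tau this reduces to the familiar 1 - exp(-t/tau)."""
    dt = t / steps
    integral = 0.0
    for i in range(steps):
        s0, s1 = i * dt, (i + 1) * dt
        integral += 0.5 * (rate(s0) + rate(s1)) * dt
    return 1.0 - math.exp(-integral)

# Hypothetical scenario: closing range makes the target easier to see,
# so the acquisition rate grows linearly with time (units: 1/s).
rate = lambda s: 0.02 + 0.01 * s
print(round(p_of_t(rate, 10.0), 4))  # integral = 0.7, so P = 1 - e^-0.7
```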
KEYWORDS: Target detection, Sensors, Probability theory, Target acquisition, Image sensors, Mathematical modeling, Systems modeling, Imaging systems, Communication engineering, Night vision
In this paper we address the following problems. 1) Two stationary observers with two sensors independently
search for a stationary target. Each sensor is characterized by individual search parameters (P∞, τ)
which are different either because the sensors are at different ranges or are different because the sensors
are at the same range but have different properties. The target is said to be detected when the first
observer detects the target. Using this definition for time to detect, we derive an analytical expression for
the mean detection time. 2) If multiple observers independently search an image obtained from a single
sensor, how does the mean time until the first observer detects the target vary with the number of
observers? 3) If multiple observers independently search an image obtained from a single sensor, how
does the probability of detection vary with the number of observers? Here the target is said to be detected
if any of the observers detect the target. 4) For the problem of two stationary observers searching
independently for a stationary target we found the probability density function for the time to detect.
Analytical Model 1 describes how long it takes the first observer to find a target when multiple observers search a field of regard using imagery provided
by a single sensor. This model, developed using probability concepts, suggests considerable benefits accrue from collaborative search: when P∞ is near
one, ten observers reduce the mean detection time (in reduced time) by almost an order of magnitude compared to that of a single
observer. To get the instant of detection in clock time we add the delay time td to the reduced time. Empirical fits for td and τ are also given in the paper.
Model 1 was verified/validated by computer simulation and perception experiments. Here ten observers searched sixty computer generated fields of
regard (each one was 60 x 20 degrees) for a single military vehicle. Analytical Model 2 describes how the probability of target acquisition increases with
the number of observers. The results of Model 2 suggest that probability of target acquisition increases considerably when multiple observers independently
search a field of regard. Model 2 was verified by simulation but not by perception experiment. Models 1 and 2 are pertinent to development of
search strategies with multiple observers and are expected to find use in wargaming for evaluating the efficacy of networked imaging sensors.
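The order-of-magnitude claim for ten observers can be checked with a quick simulation. Assuming P∞ near one, each observer's reduced detection time is exponential with mean τ, and the first detection is the minimum of n such draws, whose mean is τ/n:

```python
import random
import statistics

def mean_first_detection(n, tau, trials=100_000, rng=random.Random(0)):
    """Mean reduced time until the FIRST of n independent observers detects,
    assuming each observer's detection time is exponential with mean tau
    (i.e. P_inf ~ 1). The minimum of n iid exponentials is itself
    exponential with mean tau/n. Clock time would add the delay td."""
    return statistics.fmean(
        min(rng.expovariate(1.0 / tau) for _ in range(n))
        for _ in range(trials)
    )

tau = 12.0
print(mean_first_detection(1, tau))   # ~12
print(mean_first_detection(10, tau))  # ~1.2: roughly an order of magnitude less
```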
A model has been developed that predicts the probability of detection as a function of time for a sensor on a
moving platform looking for a stationary object. The proposed model takes as input P∞ (calculated from
NVThermIP), expresses it as a function of time using the known sensor-target range, and outputs detection
probability as a function of time. The proposed search model has one calibration factor that is determined
from the mean time to detect the target. Simulated imagery was generated that models a vehicle moving
with constant speed along a straight road with varied vegetation on both sides and occasional debris on the
road and on the shoulder. Alongside, and occasionally on the road, triangular and square shapes are visible
with a contrast similar to that of the background but with a different texture. These serve as targets to be
detected. In perception tests, the ability of observers to detect the simulated targets was measured and
excellent agreement was observed between modeled and measured results.
The incoherent diffraction MTF plays an increasingly important role in the range performance of imaging systems as the wavelength increases and the optical aperture decreases. Accordingly, all NVESD imager models have equations that describe the incoherent diffraction MTF of a circular entrance pupil. NVThermIP, a program which models thermal imager range performance, has built-in equations which analytically model the incoherent diffraction MTF of a circular entrance pupil and can accept a table that describes the MTF of other apertures. Such tables can be generated with CODE V, which numerically calculates the incoherent diffraction MTF in the vertical or horizontal direction for an arbitrary aperture. However, we are not aware of any program that takes as input a description of the entrance pupil and analytically outputs equations that describe the incoherent diffraction MTF. This work explores the effectiveness of Mathematica for that task: Mathematica is used to analytically and numerically calculate the incoherent diffraction MTF for a variety of apertures, and the results are compared with CODE V calculations.
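For the baseline circular entrance pupil, the incoherent diffraction MTF has the standard closed form; a sketch (in Python rather than the paper's Mathematica):

```python
import math

def circular_mtf(nu):
    """Incoherent diffraction MTF of an unobscured circular pupil at
    normalized spatial frequency nu = f/f_c, where the cutoff is
    f_c = 1/(lambda * F#) in the image plane (D/lambda in cycles/rad):
    MTF(nu) = (2/pi) * (acos(nu) - nu*sqrt(1 - nu^2)),  0 <= nu <= 1,
    and zero beyond cutoff."""
    if nu >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(nu) - nu * math.sqrt(1.0 - nu * nu))

print(circular_mtf(0.0))             # 1.0 at zero frequency
print(round(circular_mtf(0.5), 4))   # ~0.391 at half the cutoff
print(circular_mtf(1.0))             # 0.0 at cutoff
```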
This paper discusses the Modulation Transfer Functions (MTF) associated with image motion. The paper describes MTF for line-of-sight vibration, electronic stabilization, and translation of the target within the field of view. A model for oculomotor system tracking is presented. The common procedure of treating vibration blur as Gaussian is reasonably accurate in most cases. However, the common practice of ignoring motion blur leads to substantial error when modeling search tasks.
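The Gaussian treatment of vibration blur mentioned above corresponds to the standard random-jitter MTF; a minimal sketch (the units of σ and f are the user's choice, as long as they match, e.g. mrad and cycles/mrad):

```python
import math

def jitter_mtf(f, sigma):
    """Common Gaussian approximation for random line-of-sight vibration:
    MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2), where sigma is the rms
    jitter amplitude and f is spatial frequency in matching units."""
    return math.exp(-2.0 * (math.pi ** 2) * (sigma ** 2) * (f ** 2))

print(jitter_mtf(0.0, 0.1))           # 1.0: no attenuation at zero frequency
print(round(jitter_mtf(1.0, 0.1), 4)) # attenuation at f = 1 for rms = 0.1
```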
KEYWORDS: Sensors, Turbulence, Modulation transfer functions, Systems modeling, Thermal modeling, Thermography, Performance modeling, Imaging systems, Visual process modeling, Night vision
The Windows version of the Night Vision Thermal Imaging System Performance Model, NVTherm, was released in March 2001. NVTherm provides accurate predictions of sensor performance for both well-sampled and undersampled thermal imagers. Since its initial fielding in March 2001, a number of improvements have been implemented. The most significant are: (1) the addition of atmospheric turbulence blurring effects, (2) National Imagery Interpretability Rating System (NIIRS) estimates, and (3) the option for slant-path MODTRAN transmission. This paper presents these modifications, as well as a brief description of some of the minor changes and improvements that have been completed over the past year. These significant changes were released in January 2002.
The Night Vision ACQUIRE model predicts range performance when provided with parameters describing the atmosphere, a 2-D MRT curve which describes the sensor, and three additional parameters. Two of the additional parameters (characteristic dimension and target-background contrast) describe the target. The third additional parameter, a cycle criterion (N50), relates to task difficulty. Characteristic dimension and target-background contrast are measured directly in the field. The third parameter, N50, is empirically determined from the measured range performance associated with the task. The purpose of this communication is to define terms and protocols and, where possible, to give recommended values for parameters used with the ACQUIRE model. The methodology and recommended parameter values given here represent Night Vision's best estimates based on years of laboratory and field experience.