Advances in infrared sensors, as well as integrated controls and displays, have led to mature designs being incorporated in both civil and military surveillance and security systems. Technical challenges arise in applying electro-optical sensor technology to detect, track and identify individuals and to detect contraband and hidden objects, while at the same time providing positive cost/benefit metrics for both point protection and area surveillance applications. The previous paper, "EO/IR Sensors Enhance Border Security," addressed the advantages and disadvantages of specific electro-optical sensor modalities, including visible, near-, mid- and far-infrared as well as ultraviolet, that may be used individually and in combination to perform specific security applications. System designs employing electro-optical and infrared sensors for surveillance applications were reviewed, as were the cost/benefit metrics used to define trades for both point protection and area surveillance applications. This paper addresses the use of these infrared modalities with advanced image and sensor processing developed by Opgal specifically for border security applications.
Today's warfighter requires a lightweight, high-performance thermal imager for use in night and reduced-visibility conditions. To fill this need, the United States Marine Corps issued requirements for a Thermal Binocular System (TBS) Long Range Thermal Imager (LRTI). The requirements dictated that the system be lightweight but still have significant range capability and extended operating time on a single battery load. Kollsman, Inc., with its partner Electro-Optics Industries, Ltd. (ElOp), responded to this need with the CORAL, a third-generation, Military Off-the-Shelf (MOTS) product that required very little modification to fully meet the LRTI specification. This paper discusses the LRTI, a successful result of size, weight and power (SWaP) tradeoffs made to ensure a lightweight but high-performance thermal imager.
Advances in infrared sensors and developments in pointing and stabilization technology, as well as integrated controls and displays, have led to mature designs being incorporated in both civil and military surveillance and security systems. Technical challenges arise in applying electro-optical sensor technology to detect, track and identify individuals and to detect contraband and hidden objects, while at the same time providing positive cost/benefit metrics for both point protection and area surveillance applications. Specific electro-optical sensor modalities, including visible, near-, mid- and far-infrared as well as ultraviolet, may be used individually and in combination to perform specific security applications.
This presentation will review current electro-optics technology, its applications, and future developments that will influence homeland defense applications.
BAE SYSTEMS has developed a Low Cost Targeting System (LCTS) consisting of a FLIR for target detection, laser-illuminated gated imaging for target identification, a laser rangefinder and designator, GPS positioning, and auto-tracking capability within a small, compact system. This system has proven its ability to acquire targets, range and identify those targets, and designate or provide precise geo-location coordinates to them. The system is based upon BAE Systems' proven micro-bolometer passive LWIR camera coupled with Intevac's new EBAPS camera. A dual-wavelength diode-pumped laser provides eyesafe ranging and target illumination, as well as designation; a custom detector module senses the return pulse for target ranging and to set the range gates for the gated camera. Intevac's camera is a CMOS-based device with user-selectable gate widths and can read out at up to 28 frames/second when operated in VGA mode. The Transferred Electron photocathode enables high-performance imaging in the SWIR band by enabling single-photon detection at high quantum efficiency. Trials show that the current detectors offer complete extinction of signals outside of the gated range, thus providing high resolution within the gated region. The images have shown high spatial resolution arising from the use of solid-state focal plane array technology. Imagery has been collected in both the laboratory and the field to verify system performance during a variety of operating conditions.
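The range-gating step described above can be sketched numerically. This is a minimal illustration of the time-of-flight relationship between the laser rangefinder's return pulse and the camera's gate delay; the function names and values are illustrative, not from the paper.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_s(target_range_m):
    """Delay after the laser pulse at which to open the camera's range gate
    so that it admits only light returning from target_range_m."""
    return 2.0 * target_range_m / C

def range_m_from_tof(round_trip_s):
    """Target range inferred from the measured round-trip time of flight."""
    return C * round_trip_s / 2.0

# A target at 1.5 km returns light roughly 10 microseconds after the pulse
delay = gate_delay_s(1500.0)
```

Sweeping the gate delay with a fixed gate width then isolates returns from a narrow range slice, which is what produces the complete extinction outside the gated region noted in the trials.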
KEYWORDS: 3D modeling, LIDAR, Signal to noise ratio, Target detection, Systems modeling, Atmospheric modeling, Imaging systems, Sensors, Performance modeling, Backscatter
BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems. 3D flash LADAR is the latest evolution of laser radar systems and provides unique capability in its ability to produce high-resolution LADAR imagery from a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance of these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive-detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants; and 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling the non-uniformity of each individual pixel. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of target detection and identification, and NEP vs. gain.
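The per-pixel gain sampling described above amounts to inverse-transform sampling of a normal gain distribution: a uniform random number per pixel is mapped through the cumulative normal distribution, so each pixel receives its own gain. The sketch below illustrates that step only; the parameter names and values are assumptions for illustration, not the paper's.

```python
from statistics import NormalDist
import random

def sample_pixel_gains(rows, cols, mean_gain, gain_sigma, seed=0):
    """Draw an independent gain value for every pixel in a detector array
    by comparing a uniform random number with the cumulative normal
    distribution of gain (inverse-transform sampling)."""
    rng = random.Random(seed)
    dist = NormalDist(mu=mean_gain, sigma=gain_sigma)
    gains = []
    for _ in range(rows):
        row = []
        for _ in range(cols):
            u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # keep u in (0, 1)
            row.append(dist.inv_cdf(u))
        gains.append(row)
    return gains

# 4x4 toy array with a nominal gain of 100 and 5% pixel-to-pixel spread
gains = sample_pixel_gains(4, 4, mean_gain=100.0, gain_sigma=5.0)
```

Each pixel's gain then feeds its individual signal-to-noise and detection-probability calculation, which is how the model obtains performance pixel-by-pixel rather than for a single representative detector.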
BAE Systems reports on a concept utilizing a holographic approach to linear phase conjugation to compensate for atmospherically induced aberrations that severely limit laser performance. In an effort to improve beam quality, fine aimpoint control, and energy delivered to the target, BAE Systems has developed a novel aberration compensation technique. This technique, Holographic Adaptive Tracking (HAT), utilizes a Spatial Light Modulator as a dynamic wavefront-reversing element to undo aberrations induced by the atmosphere, platform motion, or both. BAE Systems' aberration compensation technique results in a high-fidelity, near-diffraction-limited laser beam delivered to the target.
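The core idea of phase conjugation can be shown in a few lines: if the SLM is programmed with the conjugate of the aberration phase, multiplying the distorted field by the SLM mask cancels the aberration. This toy sketch uses a random phase screen in place of the real holographically measured wavefront; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(scale=1.0, size=(64, 64))   # stand-in aberration phase, radians

aberrated = np.exp(1j * phi)                 # unit-amplitude distorted field
slm_mask = np.exp(-1j * phi)                 # SLM programmed with the conjugate
corrected = aberrated * slm_mask             # wavefront-reversed beam

# residual phase is numerically zero: the aberration is fully undone
residual_rms = np.sqrt(np.mean(np.angle(corrected) ** 2))
```

In practice the compensation is limited by how accurately and how quickly the aberration can be measured and written to the SLM, which is why the result is near- rather than exactly diffraction-limited.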
An optical polyspectral sensor has been developed and tested which calculates the magnitude and directional velocity of an incoming projectile to cue a reactive countermeasure. This paper describes the sensor modeling, sensitivity analysis, and experimental results of a sensor consisting of four sheets of light. Sensor application could be extended to any projectile that presents a measurable laser radar cross section to the sensor.
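A four-sheet geometry allows velocity to be recovered from crossing times alone. The 2-D sketch below assumes a specific layout that is not from the paper: sheets 1 and 2 are parallel planes at x = 0 and x = d, while sheets 3 and 4 are tilted 45 degrees along the lines x − y = a and x − y = b, with the projectile flying a straight line at constant velocity.

```python
def velocity_from_sheet_times(t1, t2, t3, t4, d, a, b):
    """Toy 2-D reconstruction of projectile velocity from four light-sheet
    crossing times (geometry assumed above, not the paper's layout)."""
    vx = d / (t2 - t1)             # along-track speed from the parallel pair
    vy = vx - (b - a) / (t4 - t3)  # cross-track speed from the tilted pair
    return vx, vy

# Synthetic projectile: vx = 2, vy = 0.5 with d = 1, a = 2, b = 3.5
vx, vy = velocity_from_sheet_times(0.0, 0.5, 2.0, 3.0, d=1.0, a=2.0, b=3.5)
```

The real sensor's sensitivity analysis would quantify how timing jitter and sheet-alignment errors propagate into these velocity estimates.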
BAE SYSTEMS reports on a program to characterize the performance of MEMS corner cube retroreflector arrays under laser illumination. These arrays have significant military and commercial application in the areas of: 1) target identification; 2) target tracking; 3) target location; 4) identification friend-or-foe (IFF); 5) parcel tracking; and 6) search and rescue assistance. BAE SYSTEMS has theoretically assessed the feasibility of these devices to determine whether sufficient signal-to-noise performance exists to permit a cooperative laser radar sensor to be considered for device location and interrogation. Results indicate that modest power-apertures are required to achieve SNR performance consistent with high probability of detection and low false alarm rates.
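A first-order link budget shows why modest power-apertures suffice for a cooperative retroreflector: the retro returns light in a narrow diffraction-limited cone rather than scattering it diffusely. The geometric-spreading model and all parameter values below are illustrative assumptions, not the paper's analysis.

```python
import math

def retro_link_snr(p_tx_w, range_m, tx_div_rad, d_retro_m,
                   d_rx_m, wavelength_m, t_atm, nep_w):
    """Rough SNR estimate for a laser interrogator and a corner-cube retro
    (simple geometric spreading; illustrative only)."""
    # fraction of the transmitted beam intercepted by the retro aperture
    spot_d = range_m * tx_div_rad
    f_retro = min(1.0, (d_retro_m / spot_d) ** 2)
    # the return spreads with a diffraction-limited divergence set by the retro
    ret_div = 2.44 * wavelength_m / d_retro_m
    return_d = range_m * ret_div
    f_rx = min(1.0, (d_rx_m / return_d) ** 2)
    p_rx = p_tx_w * t_atm * f_retro * t_atm * f_rx  # two-way atmospheric loss
    return p_rx / nep_w  # SNR relative to the detector's noise-equivalent power

# 1 W interrogator, 1 mm MEMS retro at 1 km, 10 cm receiver, 1.55 um, NEP 1 pW
snr = retro_link_snr(1.0, 1000.0, 1e-3, 1e-3, 0.1, 1.55e-6, 0.9, 1e-12)
```

Even with only a watt of transmit power and a millimeter-scale retro, the narrow return cone yields SNR well above typical detection thresholds, consistent with the feasibility conclusion above.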
Multispectral sensors are increasingly being employed in military applications. Just as in satellite imagery of the earth, multispectral data is required in order to extract the maximum amount of information from a scene. The advantages of image fusion have been postulated for navigation, surveillance, fire control, and missile guidance to improve accuracy and contribute to mission success. The fusion process is a critical element of each of these applications. Imagery from various sensors must be calibrated, enhanced and spatially registered in order to achieve the desired 'fusion' of information into a single 'picture' for rapid assessment. In a tactical military environment this fusion of data must be presented to the end user in a timely and ergonomic fashion. The end user (e.g., a combat pilot) may already be operating at maximum sensory input capacity. Does he or she really need another cockpit display?