This PDF file contains the front matter associated with SPIE Proceedings Volume 8052, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
In the Science and Technology (S&T) community we frequently pursue technologies that are not yet mature enough for operational transition. Active tracking and active characterization of space objects is one such area. In
this paper, we will lay out the requirements on such tracking and imaging systems from a
pure technology perspective, including top level scenario-driven requirements and tracing
down to camera-level requirements. That is, we will consider the various components of
the tracking and imaging mission only from a performance standpoint and use that to
derive some performance limitations on the system. The operational community will then
have to decide whether such requirements are permissible in view of other constraints
such as system cost.
Architectures for a conformal electro-optical aperture that is based on a collection of subapertures are considered. Such
architectures have different requirements for passive and active systems. In this paper we consider requirements for
active systems. In particular we consider concepts for measuring the scattered illumination field at the subaperture. We
discuss requirements on the subapertures, such as phasing and steering.
We have modeled the imaging performance of an acquisition, tracking, and pointing (ATP) sensor when operating
on a high-speed aircraft platform through a turreted laser beam director/telescope. We applied standard
scaling relations to wavefront sensor (WFS) data collected from the Airborne Aero-Optics Laboratory (AAOL)
test platform operating at Mach 0.5 to model aero-optical aberrations for a λ = 1 μm wavelength laser system
with a D_ap = 30 cm aperture diameter and a 90 cm turret diameter on a platform operating at 30 kft and for
speeds of Mach 0.4-0.8. Using these data, we quantified the imaging point spread function (PSF) for each aircraft
speed. Our simulation results show Strehl ratios between 0.1 and 0.8, with substantial scattering of energy out to 7.5×
the diffraction-limited core. Analysis of the imaging modulation transfer function (MTF) shows a rapid reduction
of contrast for low-to-mid range spatial frequencies with increasing Mach number. Low modulation contrast at
higher spatial frequencies limits imaging resolution to worse than 2× the diffraction limit at Mach 0.5 and approximately 5× the diffraction limit at Mach 0.8. Practical limits to usable spatial frequencies require higher image signal-to-noise
ratio (SNR) in the presence of aero-optical disturbances at high Mach number. Propagation of an illuminator
laser through these aero-optical aberrations produces intensity modulation in the incident target illumination on
scale sizes near the diffraction-limit of the transmitting laser aperture, thereby producing illumination artifacts
which can degrade image-contrast-based tracking algorithms.
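The reported Strehl ratios can be connected to RMS wavefront error through the extended Maréchal approximation, S ≈ exp(−(2πσ/λ)²). The sketch below illustrates that relation together with a quadratic-in-Mach OPD scaling; the coefficient K is hypothetical, chosen only so the outputs roughly span the 0.1 to 0.8 Strehl range reported above, and is not the paper's AAOL-derived scaling.

```python
import math

def strehl_from_opd(opd_rms_m, wavelength_m):
    """Extended Marechal approximation: S = exp(-(2*pi*sigma/lambda)^2)."""
    phase_rms = 2.0 * math.pi * opd_rms_m / wavelength_m
    return math.exp(-phase_rms ** 2)

# K is hypothetical (meters of OPD per unit Mach^2), chosen only so the
# outputs span the Strehl range reported in the abstract.
K = 0.38e-6
wavelength = 1.0e-6   # lambda = 1 um, as in the abstract

for mach in (0.4, 0.6, 0.8):
    opd = K * mach ** 2
    print(f"Mach {mach}: Strehl ~ {strehl_from_opd(opd, wavelength):.2f}")
```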
Aperture synthesis offers the potential for high resolution images in a relatively compact system. We describe our multi-aperture IMAGE testbed, which uses coherent detection to measure the complex field in spatially separated apertures.
We describe a post-detection optimization algorithm which is used to synthesize a composite image whose angular
resolution exceeds that of a single aperture. We present experimental results in which we image extended targets at a
simulated range using a compact range developed for this purpose.
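As a toy illustration of the resolution gain from aperture synthesis (not the IMAGE testbed's actual coherent-detection and post-detection optimization pipeline), the 1-D sketch below compares the far-field central-lobe width of a single subaperture with that of two coherently combined subapertures on a wider baseline:

```python
import numpy as np

# 1-D sketch: two coherently combined subapertures versus one alone.
N = 4096          # samples across the pupil plane
d = 64            # subaperture width (samples)
B = 256           # center-to-center baseline (samples)
x = np.arange(N) - N // 2

def far_field(pupil):
    """Far-field intensity of a pupil field, normalized to unit peak."""
    field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
    inten = np.abs(field) ** 2
    return inten / inten.max()

def half_width(p):
    """Samples from the peak (at N//2) out to the half-power point."""
    i = N // 2
    while p[i] >= 0.5:
        i += 1
    return i - N // 2

single = (np.abs(x) < d // 2).astype(float)
pair = ((np.abs(x - B // 2) < d // 2) | (np.abs(x + B // 2) < d // 2)).astype(float)

s_hw = half_width(far_field(single))
p_hw = half_width(far_field(pair))
print(s_hw, p_hw)   # the pair's central lobe is several times narrower
```

The composite's narrower central lobe is what "angular resolution exceeds that of a single aperture" means in practice; the price is the fringe structure outside the main lobe, which post-detection processing must handle.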
Image and Signal Processing for Target Tracking Applications
Video tracking architectures for small low-power embedded systems are severely constrained by their limited processing
capacity and must therefore be highly optimized to meet modern performance requirements. Consequently the various
design trade-offs have a direct and significant impact on the overall system performance.
The evaluation is based on a test framework and a set of metrics for defining tracking performance. Well-known metrics
appropriate to multi-target video-tracking applications have been selected to provide a generalized and meaningful
characterisation of the system and also to allow easier comparison with other video tracking algorithms. The selected set
is extended further with additional architecture-specific metrics to extract a finer level of granularity in the analysis and
support embedded system issues. The tracking system is evaluated within the test framework using a broad spectrum of
real and synthetic video imagery across multiple scenarios.
In each case the embedded systems are required to robustly track multiple targets and accurately control the sensor
platform in real-time. The key focus (hence the requirement for a rapid design and test cycle) is on evaluating the
different system behaviours through testing and then analysing the results to identify how the various design
methodologies affect the overall performance. We briefly compare some analysis of the tracking performance between
two different types of internal track processes we frequently use.
Atmospheric oxygen absorption bands in observed spectra of boost phase missiles can be used to accurately
estimate range from sensor to target. One method is to compare observed values of band averaged absorption
to radiative transfer models. This is most effective using bands where there is a single absorbing species. This
work compares spectral attenuation of two oxygen absorption bands in the near-infrared (NIR) and visible (Vis)
spectrum, centered at 762 nm and 690 nm, to passively determine range. Spectra were observed from a static test
of a full-scale solid rocket motor at a 900 m range. The NIR O2 band provided range estimates accurate to within 3%, while the Vis O2 band had a range error of 15%. A Falcon 9 rocket launch at an initial range of 13 km
was also tracked and observed for 90 seconds after ignition. The NIR O2 band provided in-flight range estimates
accurate to within 2% error for the first 30 seconds of tracked observation. The Vis O2 band also provided
accurate range estimates with an error of approximately 4%. Rocket plumes are expected to be significantly
brighter at longer wavelengths, but absorption in the NIR band is nearly ten times stronger than the Vis band,
causing saturation at shorter path lengths. An atmospheric band is considered saturated when all the in-band
frequencies emitted from the rocket plume are absorbed before reaching the sensor.
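The band-averaged comparison can be illustrated with a toy Beer-Lambert model, T(R) = exp(−kR); real band-model radiative transfer is considerably more complex, and the effective extinction coefficient k below is purely illustrative, not a value from the paper.

```python
import math

def modeled_transmittance(range_m, k_per_m):
    """Toy band-averaged Beer-Lambert model: T = exp(-k * R).
    k is a hypothetical effective in-band extinction coefficient;
    real band models resolve the individual absorption lines."""
    return math.exp(-k_per_m * range_m)

def estimate_range(observed_T, k_per_m):
    """Invert the model: R = -ln(T) / k."""
    return -math.log(observed_T) / k_per_m

k_nir = 2.0e-3            # 1/m, illustrative
true_range = 900.0        # m, the static-test geometry from the abstract
T_obs = modeled_transmittance(true_range, k_nir)
print(estimate_range(T_obs, k_nir))   # recovers the 900 m range
```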
This work aims to evaluate the improvement in the performance of tracking small maritime targets due to
real-time enhancement of the video streams from high-zoom cameras on a pan-tilt pedestal. Due to atmospheric conditions these images frequently have poor contrast or poor exposure of the target, especially when it is far away and thus small in the camera's field of view. A 300 mm focal-length lens and machine vision camera were mounted on a pan-tilt
unit and used to observe the False Bay near Simon's Town, South Africa. A ground truth data-set was created
by performing a least squares geo-alignment of the camera system and placing a differential global position
system receiver on a target boat, thus allowing the boat's position in the camera's field of view to be determined.
Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on
the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone
mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the
graphics processing unit. This allowed the 1.3 mega-pixel 20 frames per second video stream to be processed
in real-time. A quantified measurement of each tracking algorithm's robustness in the presence of sea-glint, low-contrast visibility and sea clutter (such as white caps) is performed on the raw recorded video data. These
results are then compared to those obtained using data enhanced with the algorithms described.
Tracking targets in a panoramic image is in many senses the inverse problem of tracking targets with a narrow
field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is
constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated
machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near
Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing
a differential global position system receiver on a small target boat thus allowing its position in the array's field
of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters
were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques
including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques
were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's
robustness in the presence of sea-glint, low-contrast visibility and sea clutter (such as white caps) is performed
on the raw recorded video data. These results are then compared to those obtained with the enhanced video
data.
One of the key factors that determine how well an inertially stabilized Line-of-Sight control system performs is the
ability of the feedback loop to counteract or reject disturbances such as friction and mass imbalance. These
disturbances are usually a function of both the electro-mechanical design and the dynamic operating environment. In a
typical control system, this disturbance rejection capability is determined primarily by the loop bandwidth which, in
turn, is directly affected by the dynamic characteristics of the gyro, actuator or motor, structural interactions within the
system, and noise coupling from the gyro or other sources. A state estimator, configured as a disturbance observer, has
been previously shown to be an effective method to augment the control system and improve the disturbance rejection
performance. This paper discusses a particularly straightforward reduced-order generic observer design which is
relatively insensitive to system parameters and is simple enough that it can be implemented with a few lines of digital
code or an analog circuit. The effectiveness of the design is investigated with respect to both its performance and its
robustness.
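The claim above that such an observer fits in a few lines of digital code can be illustrated with a minimal sketch. Assumptions (mine, not the paper's design): a rigid single-axis plant J·ω̇ = u + d, a rate derived by differencing the gyro output, and a first-order low-pass filter on the torque residual.

```python
import math

class DisturbanceObserver:
    """Reduced-order observer: low-pass filter the torque residual
    J*omega_dot - u to estimate the lumped disturbance d."""
    def __init__(self, inertia, cutoff_hz, dt):
        self.J, self.dt = inertia, dt
        self.alpha = min(dt * 2.0 * math.pi * cutoff_hz, 1.0)  # filter gain
        self.d_hat = 0.0
        self.prev_omega = None

    def update(self, omega, u):
        if self.prev_omega is None:              # need two samples to differentiate
            self.prev_omega = omega
            return 0.0
        omega_dot = (omega - self.prev_omega) / self.dt
        self.prev_omega = omega
        residual = self.J * omega_dot - u        # torque not explained by the command
        self.d_hat += self.alpha * (residual - self.d_hat)
        return self.d_hat

# Simulate a constant friction-like disturbance and watch the estimate converge.
J, dt = 0.05, 1e-3
obs = DisturbanceObserver(J, cutoff_hz=20.0, dt=dt)
omega, d_true, u = 0.0, 0.02, 0.0
for _ in range(2000):
    omega += (u + d_true) / J * dt               # rigid-body plant: J*omega_dot = u + d
    d_hat = obs.update(omega, u)
print(d_hat)   # converges to d_true
```

In a real loop the estimate d_hat would be subtracted from the torque command, which is how the observer augments the disturbance rejection of the baseline controller.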
The common two-axis azimuth-elevation gimbaled pedestal has a full-hemispheric, horizon-to-zenith field of regard.
This pedestal has no kinematic difficulties at low elevation angles, where the line-of-sight of the mounted
sensor is perpendicular to both the azimuth and elevation gimbal axes, which thus provide two orthogonal degrees of
freedom. However, as the line-of-sight approaches zenith, the sensor axis nears alignment with the azimuth axis. The
azimuth axis thus loses its ability to move the line-of-sight orthogonally to the sweep of the elevation axis. This
condition is known as gimbal lock and the position range in which dynamic difficulties occur is called keyhole. The
keyhole region is a solid cone centered around the zenith axis. The onset of dynamic difficulties is a continuum from horizon to zenith, and as such any definition of the keyhole region is somewhat arbitrary. However, dynamic difficulties become rapidly
pronounced somewhere between 70 and 80 degrees, so it is generally agreed that the keyhole region starts in this range.
This paper provides a comprehensive analysis of the keyhole region. While performance problems at keyhole are well
known (high torque, acceleration, and speed requirements), certain dynamic effects actually reduce in keyhole, such that
for some systems the range of worst-case performance is actually outside the keyhole region. Gimbal geometry is
introduced and pointing equations derived using vector methodology. Kinematic equations are then developed, with a
focus on the requirements of line-of-sight stabilization for vehicle-mounted systems. Examples and simulation results are
provided to illuminate the issue.
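The sec(elevation) growth of the demanded azimuth rate makes the 70 to 80 degree onset concrete. A minimal sketch, assuming a constant cross-LOS angular rate of 1 deg/s:

```python
import math

def required_azimuth_rate(cross_rate_dps, elevation_deg):
    """Azimuth gimbal rate needed to produce a given cross-LOS angular
    rate: the azimuth axis contribution is scaled by cos(elevation),
    so the demanded rate grows as sec(elevation) toward zenith."""
    return cross_rate_dps / math.cos(math.radians(elevation_deg))

# For a 1 deg/s cross-LOS rate, the azimuth demand stays modest until
# roughly 70 deg elevation, then grows rapidly toward zenith.
for el in (0, 70, 80, 89):
    print(f"elevation {el:2d} deg: azimuth rate {required_azimuth_rate(1.0, el):6.1f} deg/s")
```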
As pointing system performances continue to increase while gimbal and vehicle weights decrease, structural
considerations of the gimbal-base interface become more problematic and important. We begin by presenting
the generalized structural transfer function equation in modal superposition form. We emphasize that the
reaction torque of an actuator must be included in practice. From this theoretical basis, we assume stiff gimbal
structures and evaluate the structural coupling due only to mount flexure. We show that in some inertially
stabilized precision pointing systems the effect of the actuator reaction torques cannot be ignored. We show
that structural stiffness is less important than properties of the mount symmetry. In particular, asymmetries in
the system mount (the structure between the reaction torque and the base) allow dangerous coupling between
gimbal axes. The observed coupling may depend on the angles of the gimbals, making experimental testing and
validation more challenging.
Control systems engineers are good at controlling the axis of a gyroscope
but that is not quite the same thing as controlling the lines of sight of the
optical instruments. That difference often remains a large (and uneasy)
uncertainty until the system is actually built and tested. This paper
describes how the author couples the optical lines of sight (imaging and
non-imaging) to the control system's sensors (gyroscopes and
accelerometers) using the optical prescription data and the stiffness, mass
and damping matrices of a proposed structure. The mechanical engineer is
then able to iterate and optimize the structural design in a finite element
modeler/analyzer (Patran/Nastran, for instance) to minimize the errors
between the control system's sensors and the optical lines of sight. The
engineer then includes the optical lines of sight in the transfer functions
and eigenvectors that he passes to the control systems engineer for his
design of the control systems. The author illustrates his method with an
example from his recent practice.
Boulder Nonlinear Systems (BNS) has demonstrated a MWIR step and stare imaging system for AFRL that eliminates
the need for turrets and multiple cameras to scale the performance of available thermal imagers. The demonstration
system non-mechanically switches between fields-of-regard in a Hex-7 pattern to achieve 0.1 milliradian resolution
within a 17.5x17.5 degree field-of-regard. The sub-millisecond shutter switching time and polarization independence
maximizes the imaging integration time and sensitivity. The system uses a 1024x1024 (19.5 micron square pixels) InSb
camera with a 4.5 to 5 micron passband filter. Larger area detectors could be used to obtain larger fields-of-view, or the
system could be scaled to a larger pattern of shutter arrays. The system was developed to provide a cost-effective
method of providing night-vision and thermal imaging capabilities for persistent, high-resolution surveillance
applications with sufficient resolution to track mounted and un-mounted threats. The demo hardware was engineered to
enable near-term field and flight testing.
Micro-Electro-Mechanical Systems (MEMS) Micro-Mirror Arrays (MMAs) are widely used in advanced laser
beam steering systems and as adaptive optical elements. The new generation of MEMS MMAs are fabricated
by bulk micromachining of a single Silicon-On-Insulator wafer. Optical characterization of MEMS MMAs can
be done by direct detection of the reflected beams or by using more advanced wavefront measuring techniques,
such as a phase-shifting interferometer or Shack-Hartmann wavefront sensor. In the case of an interferometer,
the geometry of the tested MMA can be calculated after performing the phase unwrapping procedure, which
can be quite complex. In the latter case of the Shack-Hartmann wavefront sensor, careful selection of a high-quality array of microlenses is required in order to match the capabilities of the wavefront sensor to the measured
wavefront produced by the MMA. The presented digital Shack-Hartmann technique is a modified approach for
wavefront characterization based on digital processing of the interferometer data. The optical wavefront from
the tested MMA is mixed with the reference wavefront. Then the recorded interference intensity image is Fourier
transformed producing digitally synthesized images of the optical beams in the far field. Therefore, the digital
version of the Shack-Hartmann wavefront sensor does not require the use of an array of microlenses and is
primarily limited by the detector array geometry. One can digitally generate any configuration of subapertures
corresponding to various geometries of microlenses. However, this new technique does require coherent optical
mixing of the two wavefronts in order to produce the interference pattern.
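A 1-D toy version of the digital lenslet idea (illustrative only; the actual system records 2-D interferograms): Fourier-transform a windowed subaperture of the interference intensity, and the far-field spot position encodes the local wavefront tilt, exactly as a physical lenslet spot would.

```python
import numpy as np

N = 512
pix = np.arange(N)
tilt_bins = 12                                    # true wavefront tilt, in FFT bins per full trace
test = np.exp(2j * np.pi * tilt_bins * pix / N)   # tilted test wavefront
ref = np.ones(N)                                  # flat reference wavefront
intensity = np.abs(test + ref) ** 2               # recorded interferogram (no lenslets)

# A digitally synthesized "lenslet": window one subaperture of the
# interferogram and Fourier-transform it; the spot position in the
# spectrum gives the local tilt over that subaperture.
sub = intensity[128:384]                          # one of many possible subapertures
spectrum = np.abs(np.fft.fft(sub - sub.mean()))
spot = int(np.argmax(spectrum[: len(sub) // 2]))  # spot position in subaperture bins
local_tilt = spot * (N // len(sub))               # convert back to full-trace bins
print(spot, local_tilt)
```

Because the subaperture is just an array slice, any lenslet geometry can be generated after the fact, which is the flexibility the digital approach trades for the requirement of coherent mixing.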
In many applications, the utility of coherently-driven arrays of spatially distributed optical apertures is highly dependent
on the "power in the bucket", i.e., power in the main lobe of the beam. This depends on the illumination pattern of the
individual apertures (e.g., truncated-Gaussian) via a geometric efficiency η of order unity. It depends more strongly on the areal fill factor F_A, i.e., the degree to which the emitting apertures (exclusive of the support structure between the apertures) cover the entire array face. Starting from first principles, we derive rigorous formulations of certain measures of array beam shape and extract a small set of heuristically motivated but robust performance measures. At a range R, the peak irradiance is I_p = P_tot·A_array·F_A·η/(λ²R²). In terms of the effective diameter D_eff, which depends on the second moment ⟨x²⟩ of the emitted amplitude and is equal to the actual diameter for a uniformly illuminated ("top-hat") circular emitter, we find for a wide range of near-field irradiance distributions that the power in the bucket is P_B ≈ (1.05 ± 0.05) × (λR/D_eff)² × I_p.
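The two closed-form expressions transcribe directly into code. As a sanity check, a uniformly illuminated circular emitter (F_A = η = 1, A = πD²/4) gives P_B/P_tot = 1.05·π/4 ≈ 0.82, consistent with the familiar ~84% of power inside the Airy core; the numeric inputs below are illustrative, not from the paper.

```python
import math

def peak_irradiance(P_tot, A_array, F_A, eta, wavelength, R):
    """I_p = P_tot * A_array * F_A * eta / (lambda^2 * R^2)."""
    return P_tot * A_array * F_A * eta / (wavelength ** 2 * R ** 2)

def power_in_bucket(I_p, wavelength, R, D_eff, c=1.05):
    """P_B ~= c * (lambda * R / D_eff)^2 * I_p, with c = 1.05 +/- 0.05."""
    return c * (wavelength * R / D_eff) ** 2 * I_p

# Consistency check: a uniformly illuminated circular aperture should
# put roughly the familiar ~84% of its power in the bucket.
D, lam, R, P = 0.5, 1.0e-6, 10e3, 1.0   # illustrative values
Ip = peak_irradiance(P, math.pi * D ** 2 / 4, 1.0, 1.0, lam, R)
fraction = power_in_bucket(Ip, lam, R, D) / P
print(fraction)   # about 0.82
```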
Applied Technology Associates (ATA) is developing a field-programmable gate array (FPGA) based processing platform to transition our state-of-the-art fast-steering mirrors (FSMs) and optical inertial reference units (OIRUs) from the laboratory environment to field operation. This platform offers an abundance of reconfigurable high-speed digital input/output (I/O) and parallel hardware-based processing resources in a compact size, weight, and power (SWaP) form factor with a path to a radiation-hardened version. The FPGA's high-speed I/O can be used to acquire sensor data and drive actuators with minimal latency, while the FPGA's processing resources can efficiently realize signal processing and control algorithms with deterministic timing. These features allow high sampling rates of 20-30 kHz, which yield higher open-loop bandwidths and, in turn, improved disturbance rejection in our FSMs and improved base-motion jitter rejection in our OIRUs.
This paper briefly presents the embedded system requirements of ATA's FSMs and OIRUs and the FPGA-based computational architecture derived to meet these requirements. It then describes the FPGA cores and embedded software that have been developed to efficiently realize interfacing, signal processing, and data collection tasks. Special attention is given to ATA's high-performance floating-point co-processor and the innovative design approach that translates signal processing and control algorithms developed in MATLAB®/Simulink® into their equivalent implementation on the co-processor.
We report on a broadband diffractive light shutter with the ability to modulate unpolarized light. This polarizer-free
approach employs a conventional liquid crystal (LC) switch, combined with broadband Polarization Gratings
(PGs) formed with polymer LC materials. The thin-film PGs act as diffractive polarizing beam-splitters, while
the LC switch operates on both orthogonal polarization states simultaneously. As an initial experimental proof-of-
concept for unpolarized light with ±7° aperture, we utilize a commercial twisted-nematic LC switch and our
own polymer PGs to achieve a peak transmittance of 80% and peak contrast ratio of 230:1. We characterize
the optoelectronic performance, discuss the limitations, and evaluate its use in potential nonmechanical shutter
applications (imaging and non-imaging).
Polarization gratings (PGs) are polarization-sensitive diffractive optical elements that operate over a broad band (UV to mid-IR) with nearly 100% diffraction efficiency. We have introduced and utilized PGs in different types of beam
steering modules presented in our previous papers. Here, we describe and demonstrate a nonmechanical beam
steering device based on passive gratings, liquid crystal (LC) polymer PGs. The device covers a large-angle
Field-Of-Regard (FOR) with high efficiency, and is based on a stack of alternating LC half-wave plates and
LC polymer PGs. The half-wave plates are switchable and are used to select the handedness of the circularly
polarized input beam. The polymer PGs diffract the input beam to either of the first diffraction orders based
on the circular handedness of the beam previously selected. When compared with conventional beam steering
methods based on active gratings (ternary and quasi-ternary designs), this technique is experimentally able to
steer an equivalent number of angles with similar efficiency, but fewer LC cells, and hence, fewer transparent
electrodes and lower absorption. We successfully demonstrate the ability to steer 80° FOR with roughly 2.6°
resolution at 1064 nm wavelength.
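The quoted 80° FOR at roughly 2.6° resolution is consistent with a five-stage binary stack in which each stage deflects ±θ_k and the deflection angles double from stage to stage (a common coarse/fine arrangement); these stage angles are an assumption for illustration, not necessarily the actual stack design.

```python
from itertools import product

# Hypothetical stage deflections: +/- theta_k per stage, doubling
# from 1.3 deg (half the quoted ~2.6 deg resolution) upward.
base = 1.3                                    # degrees
stages = [base * 2 ** k for k in range(5)]    # 1.3, 2.6, 5.2, 10.4, 20.8 deg

angles = sorted(sum(s * sgn for s, sgn in zip(stages, signs))
                for signs in product((-1, 1), repeat=len(stages)))
print(len(angles), min(angles), max(angles))  # 32 evenly spaced pointing states
```

Five binary stages give 2^5 = 32 evenly spaced pointing angles spanning about ±40°, matching the quoted field of regard and resolution.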
We have developed a high-precision scanner control system for use on Earth observation satellites. This control system maintains high angular precision with ultra-smooth rotation. We designed a feedback controller consisting of a rate loop for rotation-speed control and a position loop for angular control. By adding a feedforward controller using cyclic memory, the first-order rotation-synchronous disturbance caused by the eccentric load is suppressed by more than 50 dB. An online learning controller sustains long-term robustness by suppressing undesired accumulation in the cyclic memory. The proposed control system maintains high pointing accuracy with a standard deviation of 0.003 degrees at 80 rpm.
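The cyclic-memory feedforward is a form of repetitive control: one memory cell per rotor angle learns the rotation-synchronous disturbance over successive revolutions. A toy sketch with an assumed cell count, learning gain, and disturbance amplitude (the flight design's online-learning safeguards are omitted):

```python
import math

N = 360                   # memory cells, one per degree of rotation (assumed)
memory = [0.0] * N
gain = 0.2                # learning gain (assumed)

def disturbance(angle_deg):
    """First-order rotation-synchronous disturbance from an eccentric load."""
    return 0.05 * math.sin(math.radians(angle_deg))

for rev in range(40):                            # learn over 40 revolutions
    for i in range(N):
        residual = disturbance(i) - memory[i]    # what the feedforward missed
        memory[i] += gain * residual             # store it in the cyclic cell

worst = max(abs(disturbance(i) - memory[i]) for i in range(N))
suppression_db = 20 * math.log10(worst / 0.05)
print(suppression_db)     # well below -50 dB for this toy case
```

Because the disturbance repeats exactly once per revolution, the residual shrinks geometrically with each pass, which is why a modest learning gain still reaches large suppression figures.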