It is becoming more common for search and track algorithms to need to account for observations that arise from both radio frequency (RF) and electro-optical/infrared (EO/IR) measurements in the same scenario. Development of novel search and track algorithms requires measured or synthetically generated data, yet frequently considers only one modality or the other. Historically, the synthetic data generation processes for RF and EO/IR developed independently of one another and did not share a common sense of “truth” about the environment or the objects within the simulation. This lack of a common framework with a consistent environment and platform representation between the two sensing modalities can lead to errors in the algorithm development process. For example, if the RF data assumed one set of atmospheric conditions while the EO/IR data assumed a different set, the RF modality could over- or under-perform compared to the EO/IR. To address this issue, the Georgia Tech Research Institute (GTRI) has developed the General High-fidelity Omni-Spectrum Toolbox (GHOST), a plug-and-play simulation architecture for generating high-fidelity EO/IR and RF synthetic data for search and track algorithm development. Additionally, because GHOST is plug-and-play, it can potentially provide synthetic or measured results to developmental algorithms without changes to the algorithm’s interface. This paper presents GTRI’s efforts to extend GHOST into the RF domain and presents sample results from search and track algorithm development. It also looks forward to how GHOST is being adapted to accommodate measured data alongside synthetic data for improved algorithm development.
KEYWORDS: Interpolation, Atmospheric corrections, Computer simulations, Computation time, Systems modeling, Modeling, Knowledge management, Design and modelling, Data modeling, Algorithm development
A common constraint in synthetic data generation is the need to evaluate time- and resource-intensive equations to model physical systems of interest. Often, many such models must be evaluated to build up the real system of interest. In some cases, it is possible to identify a key set of independent variables that govern the equations of interest and to build a lookup table for interpolation. The downside to this strategy, however, is that considerable computational resources are spent computing values that may never be used during a simulation. In this paper, we present a new strategy to lazily evaluate complex calculations, building these multi-dimensional lookup tables as needed. The technique relies on the fact that some models can reuse partial calculations to generate multiple results in a single invocation. This allows a base table to be generated in the neighborhood of the initial point of interest, after which the table is grown as the parameter space expands. This reduces the initial computational cost, and the resulting table can be saved for reuse if desired. In a multiprocessing environment, additional table entries could also be generated in parallel if those points of interest are known in advance. As a specific example, we apply this technique to computing atmospheric corrections for synthetic image generation.
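The lazy-evaluation strategy described above can be sketched in a few lines. This is an illustrative one-dimensional reduction, not the actual implementation; `LazyTable` and its `model` argument are hypothetical names, with `model` standing in for any costly physics code:

```python
import math

class LazyTable:
    """Lookup table whose entries are computed only when first needed."""

    def __init__(self, model, step):
        self.model = model   # expensive function of one variable
        self.step = step     # grid spacing of the table
        self.table = {}      # grid index -> cached model value

    def _value(self, i):
        # Evaluate the model at grid node i only if it is not cached yet,
        # so the table grows lazily around the queried region.
        if i not in self.table:
            self.table[i] = self.model(i * self.step)
        return self.table[i]

    def __call__(self, x):
        # Linear interpolation between the two bracketing grid nodes.
        i = math.floor(x / self.step)
        t = x / self.step - i
        return (1 - t) * self._value(i) + t * self._value(i + 1)

# Only grid nodes bracketing queried points are ever evaluated, so the
# initial cost is small, and self.table can be serialized for reuse.
table = LazyTable(model=math.exp, step=0.1)
y = table(0.53)   # evaluates only nodes 5 and 6
```

In a multiprocessing setting, the per-node evaluations in `_value` are independent and could be dispatched in parallel when future query points are known in advance, as the abstract suggests.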
One major struggle for modeling and simulation (M&S) over the past decades has been the development of individual models in isolation. Typically, models are developed for a single application area, where they tend to become domain-specific as their complexity grows. When a future application requires interaction of multiple M&S approaches that have developed independently, it is difficult, if not impossible, for the models to integrate into a common environment. Compounding this difficulty, the models have likely developed disparate concepts of the world in which they operate. A prime example of this effect is the development of infrared (IR) and radio frequency (RF) models, which involve different large-scale phenomenology and have therefore developed as separate M&S domains. Attempting to combine the two modalities through integration of existing M&S tools specific to each application domain has historically proven nearly impossible. These factors led to the development of the Dynamic Model Integration and Simulation Engine (DMISE), which provides a flexible and extensible framework for integrating different models into a common simulation by defining the interfaces for the simulation components. For multi-spectral IR and RF simulations, the General High-Fidelity Omni-Spectral Toolbox (GHOST) has been built on the DMISE framework to allow for integration of models across the electromagnetic spectrum. This paper presents GHOST and the status of the current effort to provide a true multi-spectral, multi-sensor, and multi-actor M&S environment through simulation of scenarios with combined IR and RF sensors operating in a common environment.
With the ever-growing number of resident space objects (RSOs) surrounding the Earth, it is imperative that we develop techniques for determining their current and future state by leveraging a collection of radio frequency and optical observations to maintain space domain awareness (SDA). The state of an RSO at a future time is determined by its current state and the forces acting upon it. In theory, this prediction is trivial; in practice, however, knowing all of the forces is not possible. When a space object maneuvers, this simple extrapolation fails, and more measurements are required before an updated state estimate becomes available, causing tracking methods to fail. One means by which an RSO maneuvers is by firing its thruster, which adds a transient component to its signature. For example, many small satellites use Hall-effect thrusters to perform station keeping. The emission from these thrusters can be up to three times greater than that of the rest of the satellite. This change in signature provides information about the aspect of the RSO and the amount of energy expended by the engine to produce thrust. This information can be passed back to the state estimators to reduce the time necessary to update the RSO’s state. In this paper, we present a model for estimating the upper bound of the signature change of an RSO due to thruster engagement. We then present initial results from a rendered model both with and without a plume present.
A key component of a night scene background on a clear, moonless night is the stellar background. Celestial objects, affected by atmospheric distortions and optical system noise, become the primary source of clutter for detection and tracking algorithms while at the same time providing a solid geolocation or time reference due to their highly predictable motion. Any detection algorithm that must operate on a clear night has to take the stellar background into account and remove it via background subtraction methods. As with any scenario, the ability to develop detection algorithms depends on the availability of representative data to evaluate the difficulty of the task. Further, the acquisition of measured field data under arbitrary atmospheric conditions is difficult if not impossible. For this reason, a radiometrically accurate simulation of the stellar background is a boon to algorithm developers. To aid in simulating the night sky, we have incorporated a star-field rendering model into the Georgia Tech Simulations Integrated Modeling System (GTSIMS). Rendering a radiometrically accurate star field requires three major components: positioning the stars as a function of time and observer location, determining the in-band radiance of each star, and simulating the apparent size of each star. We present the models we have incorporated into GTSIMS and provide a representative sample of the images generated with the new model. We then demonstrate how the clutter in the neighborhood of a pixel changes when a radiometrically accurate star field is included.
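The first of the three components named above, positioning the stars as a function of time and observer location, reduces to a standard celestial-coordinate transformation. The sketch below is illustrative rather than the GTSIMS model: it uses a low-precision sidereal-time polynomial and ignores refraction, precession, and proper motion, and the function names are assumptions:

```python
import math

def gmst_deg(jd):
    """Approximate Greenwich mean sidereal time (degrees) at Julian date jd."""
    return (280.46061837 + 360.98564736629 * (jd - 2451545.0)) % 360.0

def radec_to_altaz(ra, dec, lat, lon_east, jd):
    """Catalog right ascension/declination -> (altitude, azimuth-from-north).

    All angles in radians; lon_east is the observer's east longitude.
    """
    lst = math.radians(gmst_deg(jd)) + lon_east   # local sidereal time
    h = lst - ra                                  # hour angle of the star
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(h))
    alt = math.asin(max(-1.0, min(1.0, sin_alt)))
    # Azimuth measured from north, increasing toward east.
    az = math.atan2(-math.cos(dec) * math.sin(h),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(h))
    return alt, az % (2.0 * math.pi)
```

As a sanity check, a star on the celestial equator observed at transit from 45° N latitude comes out due south at 45° altitude, as expected.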
A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise and clutter-to-noise ratios to determine whether a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework, GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have added the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. Finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS that better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.
Though many materials behave approximately as greybodies across the long-wave infrared (LWIR) waveband, certain important infrared (IR) scene-modeling materials such as brick and galvanized steel exhibit more complex optical properties. Accurately describing how non-greybody materials interact with their surroundings relies critically on accurate incorporation of the emissive and reflective properties of the in-scene materials. Typically, measured values are obtained and used. When measured with a non-imaging spectrometer, a given material’s spectral emissivity requires more than one collection episode, as the sample under test and a standard must be measured separately. In the interval between episodes, changes in the environment degrade emissivity measurement accuracy. While repeating and averaging measurements of the standard and sample helps mitigate such effects, a simultaneous measurement of both ensures identical environmental conditions during the measurement process, reducing inaccuracies and delivering a temporally accurate determination of background, or ‘down-welling’, radiation. We report on a method for minimizing temporal inaccuracies in sample emissivity measurements. Using an LWIR hyperspectral imager, a Telops Hyper-Cam, we describe an approach permitting hundreds of simultaneous, calibrated spectral radiance measurements of the sample under test as well as a diffuse gold standard. In addition, we describe the data reduction technique used to exploit these measurements. Following development of the reported method, spectral reflectance data from 10 samples of various materials of interest were collected. These data are presented along with comments on how such data will enhance the fidelity of computer models of IR scenes.
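A common data-reduction step consistent with the simultaneous sample-and-gold measurement described above can be sketched as follows. This is not the paper's exact algorithm: the function names, the assumption that the sample and gold-panel temperatures are known, and the gold emissivity value (0.04) are all illustrative assumptions. The gold panel's measured radiance is split into emission plus reflected down-welling background, from which the down-welling term is recovered and used to invert the sample's radiance for emissivity:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wl_m, temp_k):
    """Blackbody spectral radiance at wavelength wl_m (m), W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wl_m**5
    return a / (math.exp(H * C / (wl_m * KB * temp_k)) - 1.0)

def emissivity(l_sample, l_gold, wl_m, t_sample, t_gold, eps_gold=0.04):
    # Gold panel: measured radiance = eps_g * B(T_g) + (1 - eps_g) * L_down,
    # so the down-welling radiance can be solved for directly.
    l_down = (l_gold - eps_gold * planck(wl_m, t_gold)) / (1.0 - eps_gold)
    # Sample: measured radiance = eps * B(T_s) + (1 - eps) * L_down;
    # solve for the sample emissivity eps at this wavelength.
    return (l_sample - l_down) / (planck(wl_m, t_sample) - l_down)
```

Because both radiances come from the same hyperspectral frame, the down-welling term inferred from the gold panel applies to the sample measurement at the same instant, which is the point of the simultaneous-measurement approach.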
Georgia Tech has investigated methods for the detection and tracking of personnel in a variety of acquisition environments. This research effort focused on a detailed phenomenological analysis of human physiology and signatures with the subsequent identification and characterization of potential observables. Both aspects are needed to support the development of personnel detection and tracking algorithms. As a fundamental part of this research effort, Georgia Tech collected motion capture data on an individual for a variety of walking speeds, carrying loads, and load distributions. These data formed the basis for deriving fundamental properties of the individual's motion, the derivation of motion-based observables, and changes in these fundamental properties arising from load variations. Analyses were conducted to characterize the motion properties of various body components such as leg swing, arm swing, head motion, and full body motion. This paper will describe the data acquisition process, extraction of motion characteristics, and analysis of these data. Video sequences illustrating the motion data and analysis results will also be presented.
Georgia Tech has investigated methods for the detection and tracking of personnel in a variety of acquisition environments. This research effort focused on a detailed phenomenological analysis of human physiology and signatures with the subsequent identification and characterization of potential observables. As a fundamental part of this research effort, Georgia Tech collected motion capture data on an individual for a variety of walking speeds, carrying loads, and load distributions. These data formed the basis for deriving fundamental properties of the individual's motion and supported the development of a physiologically-based human motion model. Subsequently, this model aided the derivation and analysis of motion-based observables, particularly changes in the motion of various body components resulting from load variations. This paper will describe the data acquisition process, development of the human motion model, and use of the model in the observable analysis. Video sequences illustrating the motion data and modeling results will also be presented.
Georgia Tech has been investigating methods for the detection of covert personnel in traditionally difficult environments (e.g., urban, caves). This program focuses on a detailed phenomenological analysis of human physiology and signatures with the subsequent identification and characterization of potential observables. Both aspects are needed to support the development of personnel detection and tracking algorithms. The difficult nature of these personnel-related problems dictates a multimodal sensing approach. Human signature data of sufficient quality and quantity do not exist, so an accurate signature model for a human is needed. This model should also simulate various human activities to allow motion-based observables to be exploited. This paper will describe a multimodal signature modeling approach that incorporates human physiological aspects, thermoregulation, and dynamics into the signature calculation. This approach permits both passive and active signatures to be modeled. The focus of the current effort was the computation of signatures in urban environments. This paper will discuss the development of a human motion model for use in simulating both electro-optical and radar-based signatures. Video sequences of humans in a simulated urban environment will also be presented, along with results from using these sequences for personnel tracking.