The Video National Imagery Interpretability Rating Scale (VNIIRS) is a useful standard for quantifying the interpretability of motion imagery. Automated, accurate assessment of VNIIRS would benefit operators by characterizing the potential utility of a video stream. For still, visible-light imagery, the general image quality equation (GIQE) provides a standard model for automatically estimating the NIIRS of an image from sensor parameters, namely the ground sample distance (GSD), the relative edge response (RER), and the signal-to-noise ratio (SNR). Typically, these parameters are associated with a specific sensor, and the metadata correspond to a specific image acquisition. For many tactical video sensors, however, these sensor metadata are not available, and it is necessary to estimate the parameters from information available in the imagery itself. We present methods for estimating the RER and SNR through analysis of the scene, i.e., the raw pixel data. By estimating the RER and SNR directly from the video data, we can compute accurate VNIIRS estimates for the video. We demonstrate the method on a set of video data.
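As context for how these parameters combine, the sketch below evaluates the GIQE version 4 form of the equation for a single still image. The coefficients and the RER = 0.9 switch point are the standard published GIQE 4 values; the overshoot and noise-gain defaults and the example input values are assumptions for illustration, and the RER and SNR inputs would come from scene-based estimators such as those the abstract describes. This is an illustrative sketch, not the authors' implementation.

```python
import math

def giqe4_niirs(gsd_inches: float, rer: float, snr: float,
                overshoot: float = 1.0, noise_gain: float = 1.0) -> float:
    """Estimate NIIRS via the General Image Quality Equation, version 4.

    gsd_inches : geometric-mean ground sample distance, in inches
    rer        : geometric-mean relative edge response (unitless)
    snr        : signal-to-noise ratio
    overshoot  : edge overshoot H from MTF compensation (1.0 = none)
    noise_gain : noise gain G from MTF compensation (1.0 = none)
    """
    # GIQE 4 switches coefficient values at RER = 0.9
    if rer >= 0.9:
        a, b = 3.32, 1.559
    else:
        a, b = 3.16, 2.817
    return (10.251
            - a * math.log10(gsd_inches)
            + b * math.log10(rer)
            - 0.656 * overshoot
            - 0.344 * noise_gain / snr)

# Example (assumed values): ~10 in GSD, moderate blur and noise
print(round(giqe4_niirs(gsd_inches=10.0, rer=0.85, snr=30.0), 2))
```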
KEYWORDS: Video, Video surveillance, Kinematics, Defense and security, Logic, Detection and tracking algorithms, Surveillance, Video processing, Data modeling, Computer security
The adversary in current threat situations can no longer be identified by what they are, but by what they are doing. This
has led to a large increase in the use of video surveillance systems for security and defense applications. With the
quantity of video surveillance data at the disposal of organizations responsible for protecting military and civilian lives
come issues regarding the storage and screening of the data for events and activities of interest.
Activity recognition from video for such applications seeks to develop automated screening of video based upon the
recognition of activities of interest, rather than merely the presence of specific persons or vehicle classes as developed
for the Cold War problem of "find the T-72 tank." This paper explores numerous approaches to activity recognition, all of
which apply heuristic, semantic, and syntactic methods to tokens derived from the video.
The proposed architecture uses a multi-level approach that divides the problem into three or more tiers of recognition,
each employing the technique best suited to that tier: heuristics, syntactic recognition, and HMMs over token strings
that form higher-level interpretations.
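To make the token-string idea concrete, the sketch below scores a sequence of activity tokens under a small hidden Markov model using the forward algorithm, the standard way to compute a sequence likelihood that a higher tier could compare across activity models. The states, token vocabulary, and probabilities are invented for illustration and are not the paper's actual models.

```python
import numpy as np

# Hypothetical two-state HMM over activity tokens; all names and
# probabilities here are illustrative, not taken from the paper.
tokens = {"loiter": 0, "approach": 1, "depart": 2}

start = np.array([0.8, 0.2])          # P(initial state): [benign, suspicious]
trans = np.array([[0.9, 0.1],         # P(next state | current state)
                  [0.3, 0.7]])
emit = np.array([[0.2, 0.3, 0.5],     # P(token | benign)
                 [0.6, 0.3, 0.1]])    # P(token | suspicious)

def sequence_likelihood(token_seq):
    """Forward algorithm: P(token sequence | model)."""
    alpha = start * emit[:, tokens[token_seq[0]]]
    for tok in token_seq[1:]:
        alpha = (alpha @ trans) * emit[:, tokens[tok]]
    return alpha.sum()

# A higher-level tier could compare this score across competing models.
print(sequence_likelihood(["loiter", "loiter", "approach"]))
```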
Unlike navigation for Earth operations, the precise navigation of a vehicle in a remote planetary environment
presents a challenging problem for either absolute or relative navigation. No GPS/INS solution exists due to the lack
of a GPS constellation, there are few or no accurately surveyed markers for use in terminal sensing measurements, and the
terrain elevation maps a TERCOM system would use are highly uncertain. These and other issues prompted the investigation
of a visual navigation aid to supplement the Inertial Navigation System (INS) and radar altimeter suite of a
planetary airplane, for the purpose of identifying the potential benefit of visual measurements to the overall
navigation solution.
The mission objective used in the study described herein requires precise relative navigation of the airplane over
uncertain terrain. Unlike the previously successful employment of vision-aided navigation on the MER1 landing vehicle,
the mission objectives require that the airplane traverse a precise flight pattern over the objective terrain at relatively
low altitudes for hundreds of kilometers; the problem is thus more akin to a velocity-correlator application than to a
terminal fix.
The results of the investigation indicate that good knowledge of aircraft altitude is required to obtain the
desired velocity estimate accuracy. However, it was determined that the direction of the velocity vector
can be obtained without a high-accuracy height estimate. Characterizing the dependency of velocity estimate
accuracy upon the variety of factors involved in the process is the primary focus of this report.
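The altitude dependence follows from pinhole-camera geometry: for a nadir-looking camera, ground velocity scales as v = (h/f) x optical flow, so an altitude error scales the magnitude of the recovered velocity but cancels out of its direction. The sketch below illustrates this relation under assumed camera parameters; it is a geometric illustration, not the study's navigation filter.

```python
import numpy as np

def ground_velocity(flow_px_per_s, altitude_m, focal_px):
    """Map image-plane optical flow to ground velocity for a nadir camera.

    Pinhole model: ground motion d maps to f*d/h pixels in the image,
    so v_ground = (h / f) * flow. Geometry is simplified/assumed.
    """
    return (altitude_m / focal_px) * np.asarray(flow_px_per_s)

flow = np.array([12.0, 5.0])       # measured flow, pixels/second (assumed)
f_px = 1000.0                      # assumed focal length, pixels

# Sweep a +/-5% altitude error: speed scales with h, heading does not.
for h in (950.0, 1000.0, 1050.0):
    v = ground_velocity(flow, h, f_px)
    speed = np.linalg.norm(v)
    heading = np.degrees(np.arctan2(v[1], v[0]))
    print(f"h={h:6.1f} m  speed={speed:5.2f} m/s  heading={heading:5.2f} deg")
```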
This report describes the approach taken in this investigation both to define a solution architecture with minimal
impact upon payload requirements and to analyze the potential gains to the overall navigation problem. Also
described as part of the problem definition are the initially assumed sources of visual measurement error
and some additional constraints that limit the choice of solutions.
Conference Committee Involvement (7)
Intelligent Robots and Computer Vision XXXII: Algorithms and Techniques
9 February 2015 | San Francisco, California, United States
Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques
5 February 2014 | San Francisco, California, United States
Intelligent Robots and Computer Vision XXX: Algorithms and Techniques
4 February 2013 | Burlingame, California, United States
Intelligent Robots and Computer Vision XXIX: Algorithms and Techniques
23 January 2012 | Burlingame, California, United States
Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques
19 January 2009 | San Jose, California, United States
Intelligent Robots and Computer Vision XXV: Algorithms, Techniques, and Active Vision
9 September 2007 | Boston, Massachusetts, United States
Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision
3 October 2006 | Boston, Massachusetts, United States