The current extent of publicly available space-based imagery and data products is unprecedented. Data from research
missions and operational environmental programs provide a wealth of information to global users, and in many cases,
the data are accessible in near real-time. The availability of such data provides a unique opportunity to investigate how
information can be cascaded through multiple spatial, spectral, radiometric, and temporal scales. A hierarchical image
classification approach is developed using multispectral data sources to rapidly produce large-area land-use identification
and change detection products. The approach derives training pixels from a coarser resolution classification product to
autonomously develop a classification map at improved resolution. The methodology also accommodates parallel
processing to facilitate analysis of large amounts of data.
Previous work successfully demonstrated this approach using a global MODIS 500 m land-use product to construct a
30 m Landsat-based classification map. This effort extends the previous approach to high resolution U.S. commercial
satellite imagery. An initial validation study is performed to document the performance of the algorithm and identify
limitations in the process. Results indicate this approach is scalable and broadly applicable to target and anomaly detection. In addition, discussion focuses on how information is preserved throughout the processing chain, as well as situations where data integrity could break down. This work is part of a larger effort to identify practical, innovative, and alternative ways to leverage and exploit the extensive low-resolution global data archives to address relevant civil, environmental, and defense objectives.
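To make the coarse-to-fine idea concrete, the sketch below samples training pixels from a coarser-resolution class map (e.g., the MODIS 500 m land-use product, nearest-neighbor resampled onto the fine grid), trains a classifier on the finer-resolution spectra, and classifies every fine pixel. It is a minimal illustration of the hierarchical approach under stated assumptions, not the paper's implementation; the random-forest classifier and all names are stand-ins.

```python
# Minimal coarse-to-fine classification sketch (illustrative names throughout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def coarse_to_fine_classify(fine_bands, coarse_labels, samples_per_class=500, seed=0):
    """fine_bands: (rows, cols, nbands) finer-resolution imagery (e.g., Landsat).
    coarse_labels: (rows, cols) class map from the coarser product, already
    resampled onto the fine grid."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for cls in np.unique(coarse_labels):
        r, c = np.nonzero(coarse_labels == cls)            # candidate training pixels
        pick = rng.choice(len(r), size=min(samples_per_class, len(r)), replace=False)
        X.append(fine_bands[r[pick], c[pick], :])
        y.append(np.full(len(pick), cls))
    clf = RandomForestClassifier(n_estimators=100)          # stand-in classifier
    clf.fit(np.vstack(X), np.concatenate(y))
    # Classify every fine pixel; image tiles could be dispatched to parallel workers.
    flat = fine_bands.reshape(-1, fine_bands.shape[-1])
    return clf.predict(flat).reshape(coarse_labels.shape)
```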
The LSpec site in Nevada was established in late 2006 to enable unattended vicarious calibration of visible and near-infrared remote sensing instruments. The site was selected because the large, relatively undisturbed, uniform, dry lakebed is suitable as a reflectance target.
is suitable as a reflectance target. LSpec data include autonomous measurements of the playa taken at near-nadir
viewing conditions every five minutes throughout the day. Thus, LSpec data can be retrieved for a calibration overflight time, accounting for non-zenith solar angles. However, if the sensor under test views the playa from an off-nadir
geometry, the LSpec reflectance data must be corrected for the playa's bi-directional reflectance factor. A ground-based
experiment was performed on July 23, 2008 to collect reflectance data from the playa over a series of solar zeniths, solar
azimuths, and off-nadir collection angles. The collection conditions were managed to richly sample the space of zenith
and azimuth differences between the sun and sensor. Data reduction shows that it is possible to experimentally derive a
bi-directional reflectance distribution from field measurements suitable to use for calibration corrections. Additionally,
analysis of the experimental data demonstrates results consistent with the data collected concurrently from the LSpec in-situ sensors.
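As a rough illustration of how such a field-derived bi-directional reflectance factor (BRF) could be applied, the sketch below interpolates a gridded BRF table over sun and view geometry and scales the near-nadir LSpec reflectance. The gridded table and its geometry axes are assumed inputs; the paper's actual data reduction is not reproduced here.

```python
# Illustrative BRF correction of near-nadir reflectance (assumed inputs).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def brf_corrected_reflectance(rho_nadir, brf_grid, sza_axis, vza_axis, raa_axis,
                              sza, vza, raa):
    """rho_nadir: LSpec reflectance measured at near-nadir viewing.
    brf_grid: BRF values on a (solar zenith, view zenith, relative azimuth)
    grid, normalized so BRF == 1 at nadir viewing."""
    interp = RegularGridInterpolator((sza_axis, vza_axis, raa_axis), brf_grid)
    return rho_nadir * interp([[sza, vza, raa]]).item()
```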
A new method of performing vicarious calibration of Visible-Near Infrared (VNIR) sensors has been developed which does not require the manual efforts of a field team to capture surface and atmospheric measurements. Instead, an array of unattended sensors captures the required data on a near-continuous basis and records it to a web-based retrieval system. The LSpec (LED Spectrometer) facility, located at Frenchman Flat at the Nevada Test Site, began initial operations in November 2006. The LSpec sensors measure surface reflectance in several VNIR bands, and the accompanying atmospheric measurements allow the production of top-of-atmosphere radiance estimates to calibrate space-borne sensor products. Data are distributed via the Internet and are available to the calibration community. This paper describes the test site and web access to the data, and uses these data to compute top-of-atmosphere (TOA) radiance for comparison with values derived from Multi-angle Imaging SpectroRadiometer (MISR) imagery.
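A simplified, single-band sketch of the surface-to-TOA propagation is shown below, assuming a Lambertian surface and ignoring adjacency and multiple-scattering terms; the irradiance, transmittance, and path-radiance inputs would come from the LSpec atmospheric measurements.

```python
# Simplified TOA radiance for one band (Lambertian surface assumed).
import numpy as np

def toa_radiance(rho, e0, sza_deg, t_down, t_up, l_path):
    """rho: surface reflectance from LSpec; e0: exoatmospheric solar
    irradiance [W m^-2 um^-1]; t_down/t_up: downward/upward transmittance;
    l_path: atmospheric path radiance [W m^-2 sr^-1 um^-1]."""
    mu_s = np.cos(np.radians(sza_deg))            # cosine of solar zenith
    return e0 * mu_s * t_down * rho * t_up / np.pi + l_path
```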
The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level, and thermal image inputs for broadband, multispectral, and hyperspectral exploitation algorithms.
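One of the capture processes such a model must include is band synthesis with sensor noise; a toy version under simple assumptions (uniform wavelength spacing, Poisson shot noise plus Gaussian read noise) is sketched below. It illustrates the general idea only and is not the model described in the paper.

```python
# Toy sensor-capture step: integrate spectral radiance against one band's
# relative spectral response and add shot/read noise (illustrative only).
import numpy as np

def capture_band(radiance_cube, rsr, dlam, gain=1.0, read_noise=2.0, seed=0):
    """radiance_cube: (rows, cols, nwl) at-aperture spectral radiance;
    rsr: (nwl,) relative spectral response; dlam: wavelength step [um]."""
    rng = np.random.default_rng(seed)
    band = (radiance_cube * rsr).sum(axis=-1) * dlam   # in-band radiance
    signal = gain * band                               # mean digital counts
    return rng.poisson(np.maximum(signal, 0)) + rng.normal(0, read_noise, signal.shape)
```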
Spatial resolution enhancement and spectral mixture analysis are two of the most extensively used image analysis algorithms. This paper presents an algorithm that merges the best aspects of these two techniques while trying to preserve the spectral and spatial radiometric integrity of the data. With spectral mixture analysis, the fraction of each material (endmember) in every pixel is determined from hyperspectral data. This paper describes an improved unmixing algorithm based upon stepwise regression. The result is a set of material maps in which the intensity corresponds to the percentage of a particular endmember within the pixel. The maps are constructed at the spatial resolution of the hyperspectral sensor. The spatial resolution of the material maps is then enhanced using one or more higher spatial resolution images. Similar to the unmixing approach, different endmember contributions to the pixel digital counts are distinguished by the endmember reflectances in the sharpening band(s). After unmixing, the fraction maps are sharpened with a constrained optimization algorithm. Quantifiable results are obtained through the use of synthetically generated imagery; without synthetic images, a large amount of ground truth would be required to measure the accuracy of the material maps. Multiple-band sharpening is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. The analysis includes an examination of the effects of constraints and texture variation on the material maps. The results show stepwise unmixing is an improvement over traditional unmixing algorithms. The results also indicate sharpening improves the material maps. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
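For illustration, a fully constrained fraction estimate for a single pixel can be posed as bounded least squares with a heavily weighted sum-to-one row, as in the sketch below; the stepwise endmember-selection step of the paper's algorithm is omitted, and all names are placeholders.

```python
# Fully constrained linear unmixing for one pixel: fractions bounded to [0, 1]
# and softly forced to sum to one via a heavily weighted constraint row.
import numpy as np
from scipy.optimize import lsq_linear

def unmix_pixel(pixel, endmembers, sum_weight=1e3):
    """pixel: (nbands,) spectrum; endmembers: (nbands, nend) signature matrix."""
    nend = endmembers.shape[1]
    A = np.vstack([endmembers, sum_weight * np.ones((1, nend))])
    b = np.append(pixel, sum_weight * 1.0)        # weighted sum-to-one row
    res = lsq_linear(A, b, bounds=(0.0, 1.0))     # non-negativity + upper bound
    return res.x                                  # endmember fractions
```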
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers with high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair the algorithm's performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
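One way to picture the sharpening step, under the assumptions of this sketch only (a single sharpening band and a linear data-fit model), is a bounded least-squares problem per coarse pixel whose unknowns are the fine-pixel fractions; weighted rows tie the mean fine fractions back to the coarse unmixing result and enforce sum-to-one, mirroring the fully constrained case.

```python
# Sketch of sharpening within one coarse pixel: data-fit rows reproduce the
# high-resolution sharpening band from endmember reflectances; weighted rows
# keep mean fine fractions consistent with the coarse fractions and force each
# fine pixel's fractions to sum to one. Names and weights are illustrative.
import numpy as np
from scipy.optimize import lsq_linear

def sharpen_block(sharp_vals, coarse_frac, end_refl, w=1e2):
    """sharp_vals: (nfine,) sharpening-band values of fine pixels in one coarse
    pixel; coarse_frac: (nend,) coarse fractions; end_refl: (nend,) endmember
    reflectances in the sharpening band. Returns (nfine, nend) fractions."""
    nfine, nend = len(sharp_vals), len(end_refl)
    rows, rhs = [], []
    for j in range(nfine):                      # data fit: sum_k f_jk * e_k = s_j
        r = np.zeros(nfine * nend); r[j*nend:(j+1)*nend] = end_refl
        rows.append(r); rhs.append(sharp_vals[j])
    for k in range(nend):                       # mean consistency with coarse fractions
        r = np.zeros(nfine * nend); r[k::nend] = w / nfine
        rows.append(r); rhs.append(w * coarse_frac[k])
    for j in range(nfine):                      # sum-to-one per fine pixel
        r = np.zeros(nfine * nend); r[j*nend:(j+1)*nend] = w
        rows.append(r); rhs.append(w)
    res = lsq_linear(np.vstack(rows), np.array(rhs), bounds=(0.0, 1.0))
    return res.x.reshape(nfine, nend)
```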