Hyperspectral imaging instruments measure hundreds of spectral bands (at different wavelength channels) for the same area of the Earth's surface. The data cube collected by these sensors typically comprises several gigabytes per flight, which has drawn attention to on-board compression techniques. These compression techniques are typically expensive from a computational point of view. For this reason, a number of compressive sensing and random projection techniques have emerged as an alternative for reducing the signal size on board the sensor. The measurement process of these techniques usually consists of computing dot products between the signal and random vectors. In compressive sensing, the measurement process is usually performed directly in the optical system; in this paper, however, we propose to perform the random projection measurement process on a low-power-consumption graphics processing unit (GPU). The experiments are conducted on a Jetson TX1 board, which is well suited to vector operations such as dot products. These experiments demonstrate the applicability, in terms of accuracy and processing time, of these methods for on-board processing. The results show that this low-power GPU makes real-time performance possible with a very limited power budget.
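To make the measurement step concrete, here is a minimal NumPy sketch of random projection by dot products. The sizes, the Gaussian measurement matrix, and the random cube are illustrative assumptions; the paper's actual implementation runs the equivalent dot products as CUDA kernels on the Jetson TX1.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_bands, n_measurements = 10000, 224, 20    # hypothetical sizes
X = rng.random((n_pixels, n_bands))                   # stand-in cube (pixels x bands)
Phi = rng.standard_normal((n_measurements, n_bands))  # random measurement matrix

# Each measurement is a dot product between a pixel's spectrum and a random vector:
Y = X @ Phi.T              # shape (n_pixels, n_measurements)
print(X.size / Y.size)     # compression factor of the measurement step (~11x here)
```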
Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated to be an effective and very lightweight way to reduce the number of measurements in hyperspectral data, and hence the volume of data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPUs) using the compute unified device architecture (CUDA).
Experimental results conducted using synthetic and real hyperspectral datasets on an NVIDIA GeForce GTX 980 GPU reveal that GPUs can provide real-time reconstruction. The achieved speedup is up to 22x compared with the processing time of SpeCA running on one core of an Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
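The low-dimensional subspace assumption is what makes recovery from few random measurements well-posed. The sketch below illustrates that point under the simplifying assumption that the subspace basis is known; SpeCA itself is blind and also estimates the subspace from the measurements, so this is an illustration of the underlying idea rather than the algorithm itself. All sizes and data are synthetic stand-ins.

```python
import numpy as np

# Why a low-dimensional subspace makes recovery well-posed (illustration only;
# the blind SpeCA algorithm also estimates the subspace from the measurements).
rng = np.random.default_rng(1)

n_bands, p, n_pixels, m = 224, 10, 5000, 25              # hypothetical sizes, m >= p
U, _ = np.linalg.qr(rng.standard_normal((n_bands, p)))   # known p-dim subspace basis
Z = rng.standard_normal((p, n_pixels))
X = U @ Z                                                # pixels living in span(U)

A = rng.standard_normal((m, n_bands))                    # random measurement matrix
Y = A @ X                                                # m measurements per pixel, m << n_bands

# Recovery: solve (A U) z = y per pixel, then lift back to the full band space.
Z_hat = np.linalg.lstsq(A @ U, Y, rcond=None)[0]
X_hat = U @ Z_hat
print(np.allclose(X, X_hat))                             # True in this noiseless sketch
```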
In recent years, hyperspectral analysis has been applied in many remote sensing applications. In particular, hyperspectral unmixing has been a challenging task in hyperspectral data exploitation. This process consists of three stages: (i) estimation of the number of pure spectral signatures, or endmembers; (ii) automatic identification of the estimated endmembers; and (iii) estimation of the fractional abundance of each endmember in each pixel of the scene. However, unmixing algorithms can be computationally very expensive, a fact that compromises their use in applications under real-time constraints. In recent years, several techniques have been proposed to address this problem, but until now most works have focused on the second and third stages. The execution cost of the first stage is usually lower than that of the other stages; indeed, it can be skipped altogether if the estimate is known a priori. Nevertheless, its acceleration on parallel architectures is still an interesting and open problem. In this paper we address this issue focusing on the GENE algorithm, a promising geometry-based proposal introduced in [1]. We evaluate our parallel implementation in terms of both accuracy and computational performance through Monte Carlo simulations on real and synthetic data. Performance results on a modern GPU show a satisfactory 16x speedup, which allows us to expect that this method could meet real-time requirements in a fully operational unmixing chain.
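Stage (i), estimating the number of endmembers, can be illustrated on synthetic data with a simple eigenvalue heuristic. To be clear, the sketch below is not GENE (which is geometry-based and relies on Neyman-Pearson hypothesis testing); it is only an assumption-labeled stand-in showing what the first stage computes.

```python
import numpy as np

# Stand-in for stage (i): count eigenvalues of the data correlation matrix that
# rise clearly above the noise floor. This is NOT the GENE algorithm.
rng = np.random.default_rng(2)

p_true, n_bands, n_pixels = 5, 100, 20000        # hypothetical sizes
E = rng.random((n_bands, p_true))                # stand-in endmember signatures
A = rng.dirichlet(np.ones(p_true), n_pixels).T   # abundances: non-negative, sum to one
X = E @ A + 1e-3 * rng.standard_normal((n_bands, n_pixels))   # noisy mixed pixels

eigvals = np.linalg.eigvalsh(X @ X.T / n_pixels)[::-1]        # descending order
p_est = int(np.sum(eigvals > 10 * np.median(eigvals)))        # crude noise-floor test
print(p_est)                                     # ~5 for this synthetic example
```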
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral data, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different NVIDIA GPU architectures, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20x compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
KEYWORDS: Hyperspectral imaging, Reconstruction algorithms, Compressed sensing, Image compression, Image restoration, Signal to noise ratio, Sensors, Monte Carlo methods, Coded apertures, Algorithm development
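The two data properties P-HYCA exploits can be checked numerically: a cube generated from a few endmembers is highly correlated across bands, so almost all of its energy concentrates in a handful of principal components, which is why far fewer measurements than bands suffice. A synthetic NumPy sketch (sizes and data are illustrative assumptions):

```python
import numpy as np

# Band correlation + few endmembers => energy concentrates in ~p components.
rng = np.random.default_rng(3)

p, n_bands, n_pixels = 8, 200, 50000
E = rng.random((n_bands, p))                     # stand-in endmembers
A = rng.dirichlet(np.ones(p), n_pixels).T        # abundances per pixel
X = E @ A + 1e-3 * rng.standard_normal((n_bands, n_pixels))

s = np.linalg.svd(X, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print(energy[p - 1])    # ~1.0: the first p components carry nearly all the energy
```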
In hyperspectral imaging, sensors measure the light reflected by the Earth's surface at different wavelengths; the number of measurements usually ranges from one hundred to several hundred per pixel. This generates huge amounts of data that must be transmitted to a ground station for subsequent processing. The real-time requirements of some applications mean that the bandwidth required between the sensor and the ground station is very large. The compressive sensing (CS) framework addresses this problem. Although hyperspectral images have hundreds of bands, most of those bands are usually highly correlated. CS exploits this property of hyperspectral images, making it possible to represent most of the information with a few measurements instead of hundreds of bands. This compressed version of the data can be sent to a ground station, which recovers the original image using the corresponding reconstruction algorithm. In this paper we describe a compressive sensing algorithm called hyperspectral coded aperture (HYCA), developed in previous works. This algorithm has a parameter that needs to be tuned empirically to obtain the best results. Here we present a novel way to reconstruct the compressed images under the HYCA framework that requires no manual tuning, because all parameters can be estimated automatically. The results show that this parameter-free reconstruction performs comparably to the original algorithm under its best parameter setting. The proposed approach has been tested using synthetic data as well as the dataset acquired by the AVIRIS sensor of NASA's Jet Propulsion Laboratory over the Cuprite mining district in Nevada.
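The role played by a reconstruction parameter of this kind can be illustrated with a generic regularized least-squares recovery. The sketch below is not HYCA's actual objective (which couples a data-fidelity term with spatial regularization); it only shows, under synthetic assumptions, how the reconstruction error depends on the regularization weight, which is what motivates estimating it automatically instead of tuning it by hand.

```python
import numpy as np

# Generic stand-in: recover a smooth spectrum from few noisy random measurements,
# sweeping the regularization weight to show its effect on the error.
rng = np.random.default_rng(4)

n_bands, m = 200, 40
x = np.sin(np.linspace(0, 6, n_bands))          # smooth stand-in spectrum
A = rng.standard_normal((m, n_bands))
y = A @ x + 0.05 * rng.standard_normal(m)       # noisy compressive measurements

D = np.diff(np.eye(n_bands), axis=0)            # finite-difference (smoothness) operator
for lam in (1e-4, 1e-1, 1e2):                   # sweep the regularization weight
    x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
    print(lam, np.linalg.norm(x - x_hat))       # error varies with lam; the best
                                                # value depends on the noise level
```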
Hyperspectral unmixing is a very important task for remotely sensed hyperspectral data exploitation. It amounts to estimating the abundance of pure spectral signatures (called endmembers) in each mixed pixel of the original hyperspectral image, where mixed pixels arise due to insufficient spatial resolution and other phenomena. A challenging problem in spectral unmixing is how to automatically derive endmembers from hyperspectral images, particularly due to the presence of mixed pixels, which generally prevents the localization of pure spectral signatures in transition areas between different land-cover classes. A possible strategy to address this problem is to guide the endmember extraction process toward spatially homogeneous areas. For this purpose, several preprocessing methods (intended to be applied prior to the endmember extraction stage) have been developed in the literature. However, most of these methods only include spatial information during the preprocessing and disregard spectral information until the subsequent endmember extraction stage. In this paper, we develop a new joint spatial and spectral preprocessing method which can be combined with any endmember extraction algorithm for hyperspectral images. The proposed method is intended to retain spectrally pure pixels which belong to spatially homogeneous areas. Our assumption is that spectrally pure signatures are more likely to be found in spatially homogeneous areas than in transition areas between different land-cover classes, which are expected to be dominated by mixed pixels. Our experimental results, conducted with a variety of hyperspectral images, reveal the robustness of the proposed method when compared to other similar preprocessing strategies.
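One plausible way to score spatial homogeneity is sketched below: for each pixel, measure the mean spectral angle to its neighbors, and keep pixels with low local angle dispersion as candidates for endmember extraction. This is a hypothetical illustration of the preprocessing concept, not the paper's exact method; the function name, threshold, and 4-neighbor scheme are all assumptions.

```python
import numpy as np

def homogeneity_mask(cube, angle_thresh=0.05):
    """cube: (rows, cols, bands) array; returns a boolean keep-mask.
    A pixel is kept when its mean spectral angle to its 4-neighbors is small,
    i.e. when it sits in a spatially homogeneous area. (Illustrative sketch;
    edges wrap around via np.roll for brevity.)"""
    rows, cols, _ = cube.shape
    unit = cube / (np.linalg.norm(cube, axis=2, keepdims=True) + 1e-12)
    score = np.zeros((rows, cols))
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nb = np.roll(unit, (dr, dc), axis=(0, 1))
        cosang = np.clip(np.sum(unit * nb, axis=2), -1.0, 1.0)
        score += np.arccos(cosang) / 4.0          # mean angle to the 4 neighbors
    return score < angle_thresh

# Usage on a random stand-in cube (real data would show meaningful structure):
mask = homogeneity_mask(np.random.default_rng(5).random((64, 64, 50)))
print(mask.mean())                                # fraction of pixels retained
```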
Spectral unmixing is an important task for remotely sensed hyperspectral data exploitation. The spectral signatures collected in natural environments are invariably a mixture of the pure signatures of the various materials found within the spatial extent of the ground instantaneous field of view of the imaging instrument. Spectral unmixing aims at inferring such pure spectral signatures, called endmembers, and the material fractions, called fractional abundances, at each pixel of the scene. A standard technique for spectral mixture analysis is linear spectral unmixing, which assumes that the spectra collected at the spectrometer can be expressed as a linear combination of endmembers weighted by their corresponding abundances, expected to obey two constraints: all abundances should be non-negative, and the abundances for a given pixel should sum to one. Several techniques have been developed in the literature for unconstrained, partially constrained and fully constrained linear spectral unmixing, which can be computationally expensive (in particular, for complex high-dimensional scenes with a high number of endmembers). In this paper, we develop new parallel implementations of unconstrained, partially constrained and fully constrained linear spectral unmixing algorithms. The implementations have been developed for programmable graphics processing units (GPUs), an exciting development in the field of commodity computing that fits very well the requirements of on-board data processing scenarios, in which low-weight and low-power integrated components are mandatory to reduce mission payload. Our experiments, conducted with a hyperspectral scene collected over the World Trade Center area in New York City, indicate that the proposed implementations provide relevant speedups over the corresponding serial versions on latest-generation Tesla C1060 GPU architectures.
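The unconstrained and fully constrained estimators discussed above can be written down compactly for a single pixel. The sketch below uses the classic sum-to-one augmentation combined with non-negative least squares, a standard serial formulation of fully constrained unmixing due to Heinz and Chang; the endmember matrix and pixel are synthetic stand-ins, and the paper's contribution is the GPU parallelization of such solvers, not this serial form.

```python
import numpy as np
from scipy.optimize import nnls

# Unconstrained vs. fully constrained abundance estimation for one pixel.
rng = np.random.default_rng(6)

n_bands, p = 100, 4
E = rng.random((n_bands, p))                    # endmember matrix (bands x p)
a_true = np.array([0.5, 0.3, 0.2, 0.0])         # true abundances, sum to one
x = E @ a_true + 0.01 * rng.standard_normal(n_bands)   # observed mixed pixel

a_ucls = np.linalg.lstsq(E, x, rcond=None)[0]   # unconstrained (may go negative)

# Fully constrained: append a row of ones weighted by delta, then solve NNLS,
# which enforces non-negativity exactly and sum-to-one approximately.
delta = 1e3
E_aug = np.vstack([E, delta * np.ones((1, p))])
x_aug = np.append(x, delta)
a_fcls, _ = nnls(E_aug, x_aug)

print(a_ucls, a_ucls.sum())
print(a_fcls, a_fcls.sum())                     # non-negative, sums to ~1
```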
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the abundance estimates derived from the endmembers produced by the different methods is also analyzed. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
One of the most important techniques for hyperspectral data exploitation is spectral unmixing, which aims at characterizing mixed pixels. When the spatial resolution of the sensor is not fine enough to separate different spectral constituents, these can jointly occupy a single pixel and the resulting spectral measurement will be a composite of the individual pure spectra. The N-FINDR algorithm is one of the most widely used and successfully applied methods for automatically determining endmembers (pure spectral signatures) in hyperspectral image data without using a priori information. The identification of such pure signatures is highly beneficial in order to 'unmix' the hyperspectral scene, i.e. to perform sub-pixel analysis by estimating the fractional abundance of endmembers in mixed pixels collected by a hyperspectral imaging spectrometer. The N-FINDR algorithm attempts to automatically find the simplex of maximum volume that can be inscribed within the hyperspectral data set. Due to the intrinsic complexity of remotely sensed scenes and their ever-increasing spatial and spectral resolution, the efficiency of the endmember search conducted by N-FINDR depends not only on the size and dimensionality of the scene, but also on its complexity (directly related to the number of endmembers). In this paper, we develop a new parallel version of N-FINDR which is shown to scale better as the dimensionality and complexity of the hyperspectral scene to be processed increase. The parallel algorithm has been implemented on two different parallel systems, in which two different types of commodity graphics processing units (GPUs) from NVIDIA are used to assist the CPU as co-processors. Commodity computing on GPUs is an exciting new development in remote sensing applications, since these systems offer the possibility of (onboard) high performance computing at very low cost. Our experimental results, obtained in the framework of a mineral mapping application using hyperspectral data collected by the NASA Jet Propulsion Laboratory's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS), reveal that the proposed parallel implementation compares favorably with the original version of N-FINDR not only in terms of computation time, but also in terms of the accuracy of the solutions it provides. The real-time processing capabilities of our GPU-based N-FINDR algorithms and other GPU algorithms for endmember extraction are also discussed.
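The maximum-volume search at the heart of N-FINDR fits in a few lines of serial NumPy, which helps clarify what a parallel implementation has to accelerate. In the sketch below, the data are synthetic, pixels are assumed already reduced to p-1 dimensions (e.g. by PCA), and the swap loop is the standard N-FINDR iteration, not the paper's optimized GPU formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def simplex_volume(pts):
    """pts: (p, p-1) candidate endmembers in reduced space. The simplex volume
    is proportional to |det| of the ones-augmented matrix."""
    p = pts.shape[0]
    M = np.vstack([np.ones(p), pts.T])           # (p, p) augmented matrix
    return abs(np.linalg.det(M))

def nfindr(Y, p, n_iters=3):
    """Y: (n_pixels, p-1) reduced pixels. Returns indices of the endmembers:
    each candidate is swapped for any pixel that enlarges the simplex volume."""
    idx = list(rng.choice(len(Y), p, replace=False))
    for _ in range(n_iters):
        for j in range(p):
            vols = np.array([simplex_volume(
                np.vstack([Y[idx[:j]], Y[k:k + 1], Y[idx[j + 1:]]]))
                for k in range(len(Y))])
            idx[j] = int(vols.argmax())
    return idx

# Usage on synthetic 2-D data whose convex hull is a known triangle:
verts = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)
A = rng.dirichlet(np.ones(3), 2000)
Y = A @ verts                                    # points inside the triangle
print(Y[nfindr(Y, p=3)])                         # ~ the three vertices
```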
Spectral unmixing is an important tool for interpreting remotely sensed hyperspectral scenes with sub-pixel precision. It relies on the identification of a set of spectrally pure components (called endmembers) and the estimation of the fractional abundance of each endmember in each pixel of the scene. Fractional abundance estimation is generally subject to two constraints: non-negativity of the estimated fractions, and sum-to-one for the abundance fractions of all endmembers in each single pixel. Over the last decade, several algorithms have been proposed for simultaneous and sequential extraction of image endmembers from hyperspectral scenes. In this paper, we develop a new sequential algorithm that automatically extracts endmembers by using an unconstrained linear mixture model. Our assumption is that fractional abundance estimation using a set of properly selected image endmembers should naturally satisfy the constraints mentioned above, while imposing the constraints on an inadequate set of spectral endmembers may introduce errors into the model. Our proposed approach first applies an unconstrained linear mixture model and then uses a new metric to measure the deviation of the unconstrained model from the ideal, fully constrained model. This metric is used to derive a set of spectral endmembers which are then used to unmix the original scene. The proposed algorithm is experimentally compared to other algorithms using both synthetic and real hyperspectral scenes collected by NASA/JPL's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS).
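The deviation idea can be made concrete with one plausible metric: unmix a pixel without constraints, then score how far the resulting abundances stray from non-negativity and sum-to-one. The paper defines its own metric, so the function below is a hypothetical, assumption-labeled illustration of the concept only.

```python
import numpy as np

def constraint_deviation(E, x):
    """E: (bands, p) endmembers; x: (bands,) pixel. Lower is better.
    Hypothetical metric: penalize negative abundance mass and the distance of
    the abundance sum from one, after an unconstrained least-squares unmix."""
    a = np.linalg.lstsq(E, x, rcond=None)[0]       # unconstrained abundances
    negativity = np.sum(np.minimum(a, 0.0) ** 2)   # mass below zero
    sum_to_one = (a.sum() - 1.0) ** 2              # distance from unit sum
    return negativity + sum_to_one

# A pixel the endmember set explains naturally scores ~0; a mismatched one does not:
rng = np.random.default_rng(8)
E = rng.random((50, 3))
good = E @ np.array([0.6, 0.4, 0.0])               # valid mixture of the endmembers
bad = rng.random(50)                               # unrelated spectrum
print(constraint_deviation(E, good), constraint_deviation(E, bad))
```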
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both spectral and spatial information (useful for incorporating contextual information into the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experiments are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.