An improved classified DCT-based compression algorithm for hyperspectral images is proposed. Because pixel
values within a single band of a hyperspectral image vary widely, the traditional DCT is not very efficient for spectral
decorrelation compared with the optimal KLT. The proposed algorithm is designed to address this problem. It begins
with a 2D wavelet transform in the spatial domain. The resulting spectral vectors are then clustered into subsets
according to their statistical characteristics, and a 1D DCT is applied to each subset. The classification consists of
three steps so that the statistical features are fully exploited. In step 1, a mean-based clustering produces the basic
subsets. Step 2 refines the clustering using the range of each spectral vector curve. In step 3, spectral vector curves
whose maximum and minimum values fall in different intervals are separated. Because the vectors within a subset are
close to each other in both value and statistical characteristics, i.e., highly correlated, the DCT performs nearly as well
as the KLT at much lower computational complexity. After the DWT and DCT in the spatial and spectral domains, a
3D-SPIHT coding scheme is applied to the transformed coefficients to produce a scalable bit-stream. Results show that
the proposed algorithm retains the desirable features of the state-of-the-art algorithms it is compared with while
remaining efficient, and outperforms the non-classified algorithms at the same bitrates.
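As a rough illustration of the classification-plus-transform idea described above (not the authors' implementation), the following minimal Python sketch clusters the spectral vectors by their mean value, as in step 1, and applies an orthonormal 1D DCT within each subset; the function name, the number of classes, and the equal-interval splitting of the mean range are assumptions for illustration only.

import numpy as np
from scipy.fft import dct

def classified_spectral_dct(cube, n_classes=4):
    # cube: (rows, cols, bands) array of spatially wavelet-transformed coefficients.
    rows, cols, bands = cube.shape
    vectors = cube.reshape(-1, bands).astype(float)   # one spectral vector per pixel
    means = vectors.mean(axis=1)
    # Step 1 (mean-based clustering): split the range of means into equal intervals.
    edges = np.linspace(means.min(), means.max(), n_classes + 1)
    labels = np.clip(np.digitize(means, edges[1:-1]), 0, n_classes - 1)
    coeffs = np.empty_like(vectors)
    for c in range(n_classes):
        idx = labels == c
        if idx.any():
            # 1D DCT along the spectral axis, applied subset by subset.
            coeffs[idx] = dct(vectors[idx], axis=1, norm='ortho')
    return coeffs.reshape(rows, cols, bands), labels.reshape(rows, cols)

Steps 2 and 3 (refinement by the range of the spectral curve and by the locations of its extrema) would further split these subsets before the transform; they are omitted from the sketch for brevity.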
Vector quantization is an optimal compression strategy for hyperspectral imagery, but the classical formulation cannot satisfy fixed-bitrate applications. In this paper, we propose a vector quantization algorithm for AVIRIS hyperspectral imagery compression at a fixed low bitrate. 2D-TCE lossless compression of the codebook image and the index image, codebook reordering, and removal of the water-absorption bands are introduced into the classical vector quantization scheme, and bitrate allocation is replaced by an algorithm that chooses an appropriate codebook size. Experimental results show that the proposed vector quantization outperforms traditional hyperspectral imagery lossy compression at a fixed low bitrate.
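The basic vector quantization step that the abstract builds on can be sketched as follows; this is a generic k-means codebook construction, not the paper's training or 2D-TCE coding method, and the function names and codebook size are hypothetical. Choosing the codebook size fixes the index bitrate at log2(codebook_size) bits per pixel plus the codebook overhead, which is the lever the proposed scheme adjusts instead of explicit bitrate allocation.

import numpy as np
from sklearn.cluster import KMeans

def vq_compress(cube, codebook_size=256):
    # cube: (rows, cols, bands) hyperspectral image; each pixel's spectrum is one vector.
    rows, cols, bands = cube.shape
    vectors = cube.reshape(-1, bands).astype(np.float64)
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vectors)
    codebook = km.cluster_centers_            # would be losslessly coded (e.g. 2D-TCE in the paper)
    index_image = km.labels_.reshape(rows, cols)
    return codebook, index_image

def vq_decompress(codebook, index_image):
    # Fancy indexing maps each index back to its codeword spectrum.
    return codebook[index_image]              # (rows, cols, bands) reconstruction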
JPEG-LS is an ISO/ITU lossless/near-lossless compression standard for continuous-tone images that combines low
complexity with good performance. However, the lack of rate control in JPEG-LS makes it unsuitable for applications
that require compression to a pre-specified size for effective storage management or effective bandwidth management.
This paper proposes an efficient rate control scheme for JPEG-LS at high bitrates. It is based on a relationship between
the optimal quantization steps of different slices and a relationship between the optimal target bitrates of different
slices. Compared with most previous JPEG-LS rate control schemes, whose performance is non-uniform across the
image, the proposed scheme performs uniformly over the whole image and is better suited to near-lossless
compression. Experimental results show that the proposed rate control scheme achieves better compression
performance for remote sensing images at high bitrates.
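The general slice-wise rate-control idea can be illustrated with the following minimal sketch; it is not the paper's scheme, and encode_slice is a hypothetical stand-in for a JPEG-LS near-lossless encoder that reports the coded size for a given quantization step (the NEAR parameter). Picking, per slice, the smallest step whose coded size meets the per-slice target is what keeps quality roughly uniform across the image.

def rate_controlled_steps(slices, encode_slice, target_bits_per_slice, max_step=16):
    # slices: iterable of image slices; encode_slice(slice_, step) -> coded size in bits (assumed).
    chosen = []
    for s in slices:
        step = 1
        # Increase the quantization step until the slice fits its bit budget.
        while step < max_step and encode_slice(s, step) > target_bits_per_slice:
            step += 1
        chosen.append(step)
    return chosen

In the proposed scheme, the relationships between the optimal steps and target bitrates of different slices would replace this brute-force search with a direct estimate, avoiding repeated trial encodings.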
Recent compressed sensing (CS) results show that it is possible to accurately reconstruct images from a small
number of linear measurements via convex optimization. In this paper, based on a correlation analysis of the linear
measurements of hyperspectral images, a joint sparsity reconstruction algorithm using interband prediction and joint
optimization is proposed. Linear prediction is first applied to remove the correlation among the measurement vectors
of successive spectral bands. The resulting residual measurement vectors are then recovered with the proposed
joint-optimization POCS (projections onto convex sets) algorithm combined with the steepest descent method. In
addition, a pixel-guided stopping criterion is introduced to terminate the iteration. Experimental results show that the
proposed algorithm outperforms other known CS reconstruction algorithms in the literature at the same measurement
rates, with faster convergence.
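The interband prediction step can be sketched as below, assuming the same measurement matrix is used for every band so that prediction carried out in the measurement domain mirrors prediction in the pixel domain; the scalar least-squares predictor and the function name are illustrative assumptions, and the POCS/steepest-descent recovery of each residual vector is not shown.

import numpy as np

def interband_residual_measurements(Y):
    # Y: (bands, m) array whose rows are the measurement vectors of successive bands.
    bands, m = Y.shape
    residuals = np.empty_like(Y, dtype=float)
    coeffs = np.ones(bands)
    residuals[0] = Y[0]                       # first band kept without prediction
    for i in range(1, bands):
        # Least-squares scalar coefficient predicting band i from band i-1.
        a = Y[i] @ Y[i - 1] / (Y[i - 1] @ Y[i - 1] + 1e-12)
        coeffs[i] = a
        residuals[i] = Y[i] - a * Y[i - 1]
    return residuals, coeffs

A CS solver then reconstructs each residual band from its residual measurements, and the predictions are added back to obtain the final spectral bands.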