An improved classified DCT-based compression algorithm for hyperspectral images is proposed. Because the variation of pixel values within a single band of a hyperspectral image is large, the traditional DCT is not very efficient for spectral decorrelation (compared with the optimal KLT). The proposed algorithm is designed to address this problem. Our algorithm begins with a 2D wavelet transform in the spatial domain. The resulting spectral vectors are then clustered into different subsets based on their statistical characteristics, and a 1D-DCT is performed on every subset. The classification consists of three steps so that the statistical features are fully exploited. In step 1, mean-based clustering produces the basic subsets. Step 2 refines the clustering by the range of each spectral vector curve. In step 3, spectral vector curves whose maximum and minimum values fall into different intervals are separated. Since the vectors within a subset are close to each other in both value and statistical characteristics, i.e., strongly correlated, the performance of the DCT approaches that of the KLT at a much lower computational complexity. After the DWT and DCT in the spatial and spectral domains, a 3D-SPIHT coding scheme is applied to the transformed coefficients to obtain a scalable bit-stream. Results show that the proposed algorithm retains all the desirable features of the compared state-of-the-art algorithms while remaining highly efficient, and also outperforms the non-classified schemes at the same bitrates.
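A minimal sketch (not the authors' implementation) of the core idea: cluster the spectral vectors by their mean value and apply a 1D-DCT along the spectral axis within each subset. The quantile-based thresholds and the number of clusters are illustrative assumptions; the range-based refinement of step 2 and the min/max-interval separation of step 3 are omitted.

```python
import numpy as np
from scipy.fft import dct

def classified_spectral_dct(cube, n_clusters=4):
    """cube: (rows, cols, bands) data after the spatial 2-D DWT."""
    rows, cols, bands = cube.shape
    vectors = cube.reshape(-1, bands).astype(np.float64)   # one spectral vector per pixel
    means = vectors.mean(axis=1)
    # Step 1: mean-based clustering (quantile thresholds stand in for the paper's rule).
    edges = np.quantile(means, np.linspace(0, 1, n_clusters + 1)[1:-1])
    labels = np.digitize(means, edges)
    coeffs = np.empty_like(vectors)
    for k in range(n_clusters):
        idx = labels == k
        if idx.any():
            # 1-D DCT along the spectral dimension of each subset.
            coeffs[idx] = dct(vectors[idx], axis=1, norm='ortho')
    return coeffs.reshape(rows, cols, bands), labels.reshape(rows, cols)
```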
The availability of hyperspectral images has increased in recent years; they are used in military and civilian applications such as target recognition, surveillance, geological mapping, and environmental monitoring. Because of their large data volume and particular importance, existing lossless compression methods for hyperspectral images mainly exploit the strong spatial or spectral correlation. C-DPCM-APL achieves the highest lossless compression ratio on the CCSDS hyperspectral images acquired in 2006, but it consumes the longest processing time among existing lossless compression methods because it determines the optimal prediction length for each band. C-DPCM-APL obtains its best compression performance mainly by using the optimal prediction length, while ignoring the correlation between the reference bands and the current band, which is a crucial factor influencing the precision of the prediction. Considering this, we propose a method that selects reference bands according to the atmospheric absorption characteristics of hyperspectral images. Experiments on the CCSDS 2006 image data set show that the proposed method greatly reduces the computational complexity without degrading lossless compression performance compared with C-DPCM-APL.
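A minimal sketch, under assumed interfaces, of interband linear prediction from a set of reference bands chosen to avoid atmospheric-absorption bands. The absorption-band indices and the plain least-squares predictor are illustrative; the actual C-DPCM-APL classification and prediction-length search are not reproduced here.

```python
import numpy as np

def predict_band(cube, current, reference_bands):
    """cube: (bands, rows, cols); predict band `current` from `reference_bands`."""
    y = cube[current].ravel().astype(np.float64)
    X = np.stack([cube[b].ravel() for b in reference_bands], axis=1).astype(np.float64)
    X = np.column_stack([X, np.ones(len(y))])      # affine term
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ w                           # residual is entropy coded
    return w, residual.reshape(cube[current].shape)

# Hypothetical example: skip bands in an assumed water-absorption region when
# choosing references for band 100.
absorption = set(range(104, 114))
refs = [b for b in range(95, 100) if b not in absorption]
```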
JPEG2000 is an important image compression technique that has been successfully used in many fields. Due to the increasing spatial, spectral, and temporal resolution of remotely sensed imagery, fast decompression of remote sensing data is becoming a very important and challenging objective. In this paper, we develop an implementation of JPEG2000 decompression on graphics processing units (GPUs) for fast decoding of a codeblock-based parallel compressed stream. We use one CUDA block to decode one frame. Tier-2 is still decoded serially, while Tier-1 and the IDWT are processed in parallel. Since our encoded stream is codeblock-based and each codeblock is independent of the others, we process each codeblock in Tier-1 with one thread. For the IDWT, we use one CUDA block to process one line and one CUDA thread to process one pixel. We investigate the speedups that can be gained by the GPU implementation with respect to CPU-based serial implementations. Experimental results reveal that our implementation achieves significant speedups over the serial implementations.
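A minimal Numba CUDA sketch (not the authors' implementation) of the block/thread mapping described above: one CUDA block per frame and one thread per independent codeblock for Tier-1 decoding. It requires a CUDA-capable GPU and Numba; `decode_codeblock` is a hypothetical placeholder for the EBCOT Tier-1 bitplane decoder.

```python
import numpy as np
from numba import cuda

@cuda.jit(device=True)
def decode_codeblock(frame_id, cb_id, out):
    # Placeholder: a real implementation would run the MQ/bitplane decoder here.
    out[frame_id, cb_id] = frame_id * 1000 + cb_id

@cuda.jit
def tier1_kernel(out):
    frame_id = cuda.blockIdx.x       # one CUDA block decodes one frame
    cb_id = cuda.threadIdx.x         # one thread decodes one independent codeblock
    if frame_id < out.shape[0] and cb_id < out.shape[1]:
        decode_codeblock(frame_id, cb_id, out)

n_frames, n_codeblocks = 8, 64
out = cuda.to_device(np.zeros((n_frames, n_codeblocks), dtype=np.float32))
tier1_kernel[(n_frames,), (n_codeblocks,)](out)
```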
Spectral unmixing is an important research topic in remote sensing hyperspectral image applications. The unmixing process comprises the extraction of spectrally pure signatures (also called endmembers) and the determination of the abundance fractions of those endmembers. Due to the inconspicuousness of pure spectral signatures and the challenge of inadequate spatial resolution, sparse regression (SR) techniques are adopted to solve the linear spectral unmixing problem. However, spatial information has not been fully utilized by state-of-the-art SR-based solutions. In this paper, we propose a new unmixing algorithm that incorporates more suitable spatial correlations into the sparse unmixing formulation for hyperspectral images. Our algorithm integrates spectral and spatial information using Adapting Markov Random Fields (AMRF), which are introduced to exploit spatial-contextual information. Experimental results show that, compared with other SR-based linear unmixing methods, the proposed method not only improves the characterization of mixed pixels but also achieves better accuracy in hyperspectral image unmixing.
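A minimal sketch of the sparse-regression (SR) step of linear unmixing: each pixel spectrum y is modelled as y = A x with a sparse, non-negative abundance vector x over a spectral library A. The AMRF spatial regularization described above is not included, and the Lasso penalty weight is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_unmix(y, A, lam=1e-3):
    """y: (bands,) pixel spectrum; A: (bands, library_size) spectral library."""
    model = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=10000)
    model.fit(A, y)
    return model.coef_   # sparse abundance estimates over the library signatures

# Usage with synthetic data: a pixel mixing two library signatures.
rng = np.random.default_rng(0)
A = rng.random((200, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [0.6, 0.4]
y = A @ x_true + 0.001 * rng.standard_normal(200)
x_hat = sparse_unmix(y, A)
```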
By fully exploiting the high correlation of the pixels along an edge, a new lossless compression algorithm for hyperspectral images using adaptive edge-based prediction is presented to improve compression performance. The proposed algorithm contains three prediction modes: intraband prediction, interband prediction, and no prediction. An improved median predictor (IMP) with diagonal edge detection is adopted in the intraband mode. In the interband mode, an adaptive edge-based predictor (AEP) is utilized to exploit the spectral redundancy. The AEP, which is driven by the strong interband structural similarity, first applies edge detection to the reference band, performs a local edge analysis to adaptively determine the optimal prediction context of the pixel to be predicted in the current band, and then calculates the prediction coefficients by least-squares optimization. After intra/inter prediction, all prediction residuals are entropy coded. For a band in the no-prediction mode, all pixels are directly entropy coded. Experimental results show that the proposed algorithm improves the lossless compression ratio for both the standard AVIRIS 1997 hyperspectral images and the newer CCSDS test images.
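A minimal sketch of the intraband baseline: the median (MED) predictor of LOCO-I/JPEG-LS, on which the improved median predictor (IMP) builds. The diagonal edge detection of the IMP and the AEP's least-squares coefficient estimation are omitted here.

```python
import numpy as np

def med_predict(band):
    """Return the MED prediction for every pixel of a 2-D band.
    The first row and column are left unpredicted (zero) for simplicity."""
    img = band.astype(np.int64)
    pred = np.zeros_like(img)
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            a, b, c = img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]  # W, N, NW
            if c >= max(a, b):
                pred[i, j] = min(a, b)
            elif c <= min(a, b):
                pred[i, j] = max(a, b)
            else:
                pred[i, j] = a + b - c
    return pred
```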
Typical distributed video coding architectures control the rate at the decoder via a feedback channel, which is not realistic in practical applications. To remove the feedback channel, an efficient encoder rate control method is proposed for unidirectional distributed video coding in this paper. First, a low-complexity motion estimation method is proposed to create estimated side information at the encoder, in which the motion consistency between frames is utilized to reduce the search range and obtain accurate motion vectors. Then the conditional entropy of each bitplane is computed based on the inter-bitplane correlation and curve-fitted to produce the estimated rate. Experimental results show that the proposed encoder rate control method can estimate the required rate accurately with much lower complexity.
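A minimal sketch of the rate-estimation idea: the rate needed for a bitplane is approximated by its conditional entropy given the co-located side-information bitplane. The curve-fitting step mentioned above is omitted; the two-array interface is an assumption.

```python
import numpy as np

def conditional_entropy(x_bits, y_bits):
    """H(X|Y) in bits per symbol for two equally sized binary arrays."""
    x = np.asarray(x_bits).ravel().astype(int)
    y = np.asarray(y_bits).ravel().astype(int)
    h = 0.0
    for yv in (0, 1):
        mask = y == yv
        p_y = mask.mean()
        if p_y == 0:
            continue
        p1 = x[mask].mean()
        for p in (p1, 1 - p1):
            if p > 0:
                h -= p_y * p * np.log2(p)
    return h
```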
Vector quantization is an optimal compression strategy for hyperspectral imagery, but it cannot satisfy fixed-bitrate applications by itself. In this paper, we propose a vector quantization algorithm for AVIRIS hyperspectral imagery compression at a fixed low bitrate. 2D-TCE lossless compression of the codebook image and the index image, codebook reordering, and removal of the water-absorption bands are introduced into classical vector quantization, and bitrate allocation is replaced by an algorithm that chooses an appropriate codebook size. Experimental results show that the proposed vector quantization performs better than traditional lossy compression of hyperspectral imagery at the same fixed low bitrate.
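A minimal sketch of the vector-quantization core: train a codebook on the spectral vectors with k-means and produce a codebook plus an index image. The 2D-TCE coding, codebook reordering, water-absorption-band removal, and codebook-size selection described above are not shown; the default codebook size is an illustrative assumption.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def vq_encode(cube, codebook_size=256):
    """cube: (rows, cols, bands) -> (codebook, index image)."""
    rows, cols, bands = cube.shape
    vectors = cube.reshape(-1, bands).astype(np.float64)
    codebook, labels = kmeans2(vectors, codebook_size, minit='points')
    return codebook, labels.reshape(rows, cols)

def vq_decode(codebook, index_image):
    return codebook[index_image]    # (rows, cols, bands) reconstruction
```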
Distributed video coding (DVC) reduces the complexity at the encoder by exploiting source statistics at the decoder. However, a performance gap still exists between DVC and hybrid video coders due to the inaccuracy of correlation noise (CN) modeling and the inefficient exploitation of side information. We propose to improve the performance of DVC in the following two aspects. First, a progressive refinement method is proposed to improve the accuracy of CN modeling for transform-domain Wyner-Ziv video coding, in which previously decoded bitplanes are exploited to progressively refine the estimated CN as bitplane decoding proceeds. Second, a maximum likelihood pre-decoding method is proposed to obtain a further reduction in bitrate. In our method, bitplanes are first pre-decoded using the conditional bit probability without syndrome bits, and the bitrate is reduced by avoiding requests for syndrome bits for bitplanes with strong temporal correlation. Experimental results show that the proposed methods provide bitrate savings of up to 8.26% and PSNR gains of up to 0.2 dB without a significant increase in decoding complexity.
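A minimal sketch of one ingredient of CN modeling: the residual between the Wyner-Ziv data and its side information is commonly modelled as Laplacian, whose scale parameter has the closed-form maximum-likelihood estimate below. The progressive bitplane-wise refinement and the ML pre-decoding rule described above are not reproduced here.

```python
import numpy as np

def laplacian_cn_parameter(wz_band, side_info):
    """ML estimate of the Laplacian scale alpha = 1 / E|residual|."""
    residual = wz_band.astype(np.float64) - side_info.astype(np.float64)
    return 1.0 / np.mean(np.abs(residual))
```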
Due to the constrained resources on board, compression methods with low complexity are desirable for hyperspectral images. A low-complexity scalar coset coding based distributed compression method (s-DSC) has been proposed for hyperspectral images. However, much redundancy remains, since the bitrate of the block to be encoded is determined by its maximum prediction error. In this paper, a classified coset coding based lossless compression method is proposed to further reduce the bitrate. The current block is classified so that pixels with similar spectral correlation are clustered together, and each class of pixels is then coset coded separately. Experimental results show that the classification reduces the bitrate effectively.
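A minimal sketch of scalar coset coding: an integer block is transmitted modulo 2**n_bits, where n_bits is derived from the maximum prediction error so that the decoder can recover each pixel from its side-information prediction. Classifying pixels with similar spectral correlation, as proposed above, simply means running this per class (with a smaller maximum error) instead of per block.

```python
import numpy as np

def coset_encode(block, max_pred_error):
    """block: integer array. Returns the cosets and the bits spent per pixel."""
    n_bits = int(np.ceil(np.log2(2 * max_pred_error + 1)))
    return block & ((1 << n_bits) - 1), n_bits

def coset_decode(cosets, prediction, n_bits):
    """Pick, in each coset, the value closest to the decoder-side integer prediction."""
    step = 1 << n_bits
    base = prediction - (prediction & (step - 1)) + cosets
    candidates = np.stack([base - step, base, base + step])   # coset members near the prediction
    best = np.argmin(np.abs(candidates - prediction), axis=0)
    return np.take_along_axis(candidates, best[None, ...], axis=0)[0]
```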
According to the data characteristics of remote sensing stereo image pairs, a novel adaptive compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are applied for texture classification. Second, an improved ABM is used in the flat area, while disparity estimation is used in the alpine area. Radiation compensation is applied to further improve the performance. Finally, the residual image and the reference image are compressed by JPEG2000 independently. The new algorithm provides a reasonable prediction in different areas according to the image textures, which improves the precision of the sensed image. Experimental results show that the PSNR of the proposed algorithm gains up to about 3 dB over the traditional algorithm at low and medium bitrates, and the DTM accuracy and subjective quality are also obviously enhanced.
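A minimal sketch of the feature-based matching (FBM) front end: SIFT keypoints are detected in both views and matched with a ratio test. OpenCV (a recent version with SIFT included) and 8-bit grayscale inputs are assumed; the texture classification, ABM, and residual coding stages appear in the sketch after the companion abstract further below.

```python
import cv2

def sift_matches(reference, sensed, ratio=0.75):
    """reference, sensed: 8-bit grayscale images; returns keypoints and good matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(reference, None)
    kp2, des2 = sift.detectAndCompute(sensed, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good
```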
An efficient compression algorithm for hyperspectral imagery based on compressive sensing and interband linear prediction is proposed, which has the advantages of high compression performance and low computational complexity by exploiting the strong spectral correlation. At the encoder, the random measurements of each frame are made, quantized, and transmitted to the decoder independently. The prediction parameters between adjacent bands are also estimated using the linear prediction algorithm and transmitted to the decoder. At the decoder, a new reconstruction algorithm with the proposed initialization and stopping criterion is employed to reconstruct the current frame with the assistance of the prediction frame, which is derived from the previously reconstructed neighboring frames and the received prediction parameters using the same prediction algorithm. Experimental results show that the proposed algorithm not only obtains gains of about 1.1 dB but also greatly decreases decoding complexity. Furthermore, our algorithm has the characteristics of low-complexity encoding and ease of hardware implementation.
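A minimal sketch, under assumed notation, of the encoder side: each band is sensed with the same random Gaussian measurement matrix, and an affine prediction (gain, offset) between adjacent bands is estimated by least squares. The decoder-side reconstruction with the proposed initialization and stopping criterion is not reproduced here; the 25% measurement rate is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(band, Phi):
    """Random measurements y = Phi @ x for one band (x is the vectorized band)."""
    return Phi @ band.ravel()

def estimate_prediction(prev_band, curr_band):
    """Least-squares (gain, offset) such that curr ~= gain * prev + offset."""
    A = np.column_stack([prev_band.ravel(), np.ones(prev_band.size)])
    (gain, offset), *_ = np.linalg.lstsq(A, curr_band.ravel(), rcond=None)
    return gain, offset

# Hypothetical usage: 25% measurement rate on 64x64 bands.
n, m = 64 * 64, 64 * 64 // 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
```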
In this paper, we propose a lifting architecture based on a basic lifting unit whose structure performs lifting operations in a repetitive way. By analyzing the computational processes of lifting in detail, a reusable Basic Lifting Element (BLE) is presented. The BLE structure is designed and optimized from the viewpoint of hardware implementation, and the proposed lifting processor is realized by arranging BLEs repeatedly. Experimental results show that the proposed architecture can transform tiles of any size with the 9/7 filter and the 5/3 filter for lossy and lossless compression, respectively. The lifting processor is designed in Verilog HDL and synthesized for a Xilinx FPGA, on which it runs at up to 130 MHz.
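A minimal software reference model (not the HDL) of the lifting steps that such a basic lifting element repeats, shown here for the integer 5/3 filter used in lossless compression: one predict step and one update step over a 1-D signal, with the JPEG2000-style symmetric border extension.

```python
import numpy as np

def lifting_53_forward(x):
    """Forward integer 5/3 lifting of a 1-D signal of even length."""
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()           # even (low) / odd (high) samples
    # Predict: d[n] -= floor((s[n] + s[n+1]) / 2), with symmetric extension at the end.
    d -= (s + np.append(s[1:], s[-1])) >> 1
    # Update: s[n] += floor((d[n-1] + d[n] + 2) / 4), with symmetric extension at the start.
    s += (np.insert(d[:-1], 0, d[0]) + d + 2) >> 2
    return s, d                                      # approximation and detail coefficients
```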
According to the data characteristics of remote sensing stereo image pairs, a novel compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are applied for texture classification. Second, an improved ABM is used in the area with flat terrain (flat area), while the disparity estimation, a combination of quadtree decomposition and FBM, is used in the area with alpine terrain (alpine area). Furthermore, radiation compensation is applied in every area. Finally, the disparities, the residual image, and the reference image are compressed by JPEG2000 together. The new algorithm provides a reasonable prediction in different areas according to the characteristics of the image textures, which improves the precision of the sensed image. Experimental results show that the PSNR of the proposed algorithm gains up to about 3 dB over the traditional algorithm at low and medium bitrates, and the subjective quality is obviously enhanced.
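A minimal sketch of the texture-classification and prediction idea: a Sobel gradient map splits the sensed image into flat and alpine areas, the flat area is predicted from the (already matched) reference image, and only the residual is kept for coding. SIFT matching (sketched after the companion abstract above), quadtree disparity estimation, radiation compensation, and the JPEG2000 stage are omitted; the gradient threshold is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def classify_texture(img, threshold=30.0):
    """True where the local gradient magnitude marks alpine (textured) terrain."""
    gx = ndimage.sobel(img.astype(np.float64), axis=1)
    gy = ndimage.sobel(img.astype(np.float64), axis=0)
    return np.hypot(gx, gy) > threshold

def predict_residual(sensed, reference, alpine_mask):
    """Residual of the flat area predicted directly from the reference image;
    the alpine area is left for disparity-based prediction (not shown)."""
    residual = sensed.astype(np.int64) - reference.astype(np.int64)
    residual[alpine_mask] = sensed.astype(np.int64)[alpine_mask]
    return residual
```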
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves, and these typical curves are then processed by the curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. Experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit-rates for lossy compression.
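A minimal sketch of the curve-fitting idea for a single interferential curve in the minor-interference region: fit a low-order polynomial and keep only the coefficients plus the approximation error, which is entropy coded. The adaptive region partition and the typical-curve prediction used in the major-interference region are not reproduced; the polynomial order is an illustrative assumption.

```python
import numpy as np

def fit_curve(curve, order=4):
    """Approximate one interferential curve; return (coefficients, integer residual)."""
    x = np.arange(len(curve))
    coeffs = np.polyfit(x, curve, order)
    approx = np.rint(np.polyval(coeffs, x))
    return coeffs, curve - approx      # residual is entropy coded
```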