Vector quantization (VQ) is an efficient technique for signal compression. However, it requires considerable encoding time to find the closest codeword for every input vector. We propose a fast encoding method to speed up the encoding. With the help of a table that is created off-line and can be reused for all images, the encoder searches only part of the entire codebook. The proposed method is implemented to encode Lena and other images to test its performance. Compared to full-search VQ (FS-VQ), although the encoder searches only about 20 codewords in the codebook for every input vector, on average more than 95% of the codewords it selects are identical to those found by FS-VQ. In addition, we also adopt partial distortion searching (PDS) and a lookup table (LUT) to decrease the arithmetic computation. This saves 98.44% of the encoding time and 98.07% of the arithmetic operations while encoding Lena, outperforming existing fast VQ encoding methods. While encoding 100 natural test images, the method saves more than 97% of the encoding time and arithmetic operations, while the PSNR decays by at most 0.19 dB, which is invisible to human eyes.
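The partial distortion searching (PDS) step mentioned above can be sketched as follows. This is a minimal illustration of the early-rejection idea only; it omits the paper's off-line candidate table and LUT, and the function and variable names are ours, not the paper's:

```python
def nearest_codeword_pds(x, codebook):
    """Find the index of the nearest codeword using partial
    distortion searching (PDS): abandon a candidate as soon as
    its accumulated squared error exceeds the best seen so far."""
    best_idx, best_dist = 0, float("inf")
    for i, c in enumerate(codebook):
        dist = 0.0
        for xj, cj in zip(x, c):
            dist += (xj - cj) ** 2
            if dist >= best_dist:   # early rejection: cannot beat current best
                break
        else:                       # loop finished: this codeword is the new best
            best_idx, best_dist = i, dist
    return best_idx
```

Most candidates are rejected after only a few dimensions, which is where the bulk of the multiplication savings comes from.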
With the rapid growth of multimedia technology, more and more multimedia content is disseminated over networks or stored in databases. Images are one of the multimedia types most often viewed or accessed by users on the Internet or in databases. Retrieving related images by querying image content helps the management and use of an image database, so research on image indexing techniques is an important topic. We propose an efficient content-based image retrieval (CBIR) system. Basically, the proposed method is a discrete cosine transform (DCT)-coefficient-based technique that extracts content features from a subset of DCT coefficients. In addition, our method uses entropy to classify the images in the database, reducing the search space and hence the processing time. The proposed system is robust to rotation, translation, cropping, noise corruption, etc. The indexing time is only about 4 to 10% of that reported in recently published results. According to our experimental results, the system is highly efficient in terms of robustness, precision, and processing speed.
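A DCT-coefficient feature vector of the kind described above can be sketched like this; the block size, the number of coefficients kept, and the names `dct_matrix`/`dct_features` are our illustrative assumptions, and the entropy-based classification stage is omitted:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def dct_features(block, keep=4):
    """2-D DCT of an N x N block; return the top-left keep x keep
    low-frequency coefficients (flattened) as a feature vector."""
    n = block.shape[0]
    d = dct_matrix(n)
    coeffs = d @ block @ d.T        # separable 2-D DCT
    return coeffs[:keep, :keep].ravel()
```

Keeping only the low-frequency corner concentrates the feature on coarse image content, which is what makes such descriptors tolerant of small geometric changes and noise.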
Fractal theory has been widely applied to image compression because of its resolution independence, fast decoding, and high compression ratio. However, it has a serious shortcoming: an intolerably long encoding time, spent finding the best-matched domain block for every range block. In this work, an algorithm is proposed to overcome this time-consuming encoding drawback using an adaptive search window, partial distortion elimination (PDE), and characteristic-exclusion algorithms. The proposed methods efficiently decrease the encoding time, and the reduced search window also raises the compression ratio. Whereas conventional full-search fractal encoding of a 512×512 image must examine 247,009 domain blocks for every range block, our experimental results show that the proposed method examines only 122 domain blocks per range block (only 0.04939% of the conventional fractal encoder) when encoding the 512×512 8-bit gray Lena image at a bit rate of 0.2706 bits per pixel (bpp), while maintaining almost the same visual decoded quality. In addition, the visual decoded quality of the proposed method is better than that of the widely used JPEG compressor.
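A minimal sketch of PDE applied to the domain-block search within a window follows. To stay short it deliberately omits the contrast/brightness fitting, domain-block downsampling, and the eight isometries of a real fractal encoder, and all names are illustrative rather than from the paper:

```python
import numpy as np

def best_domain_block(rng_blk, image, window, step=1):
    """For one range block, scan candidate domain blocks inside a
    search window, abandoning a candidate (PDE) as soon as its
    partial squared error exceeds the best found so far."""
    n = rng_blk.shape[0]
    (r0, r1), (c0, c1) = window
    best, best_err = None, float("inf")
    for r in range(r0, r1 - n + 1, step):
        for c in range(c0, c1 - n + 1, step):
            dom = image[r:r + n, c:c + n]
            err, rejected = 0.0, False
            for row in range(n):            # accumulate row-wise partial sums
                err += float(np.sum((dom[row] - rng_blk[row]) ** 2))
                if err >= best_err:         # PDE: cannot beat the current best
                    rejected = True
                    break
            if not rejected:
                best, best_err = (r, c), err
    return best, best_err
```

Shrinking the window (the paper's adaptive window) cuts the candidate count directly, while PDE cuts the cost per candidate; the two savings multiply.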
KEYWORDS: Computer programming, Image compression, Quantization, Distortion, Image quality, Binary data, Fluctuations and noise, Image processing, Signal processing, Signal to noise ratio
Vector quantization (VQ) is an effective technique for signal compression. In traditional VQ, most of the computation is spent searching for the nearest codeword in the codebook for each input vector. We propose a fast VQ algorithm to reduce the encoding time. Our algorithm has two main parts: a preprocessing process and the practical encoding process. In preprocessing, we generate the tables needed for practical encoding. Because these tables are used for all images, generating them adds no time to the practical encoding process. In the second part, the practical encoding process, we use the previously generated tables together with other techniques to speed up encoding. This paper provides an effective algorithm to accelerate encoding, with outstanding performance in terms of time saving and arithmetic operations. Compared to a full-search algorithm, it saves more than 95% of the search time.
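One way such an image-independent table could work is to index a short candidate list by the input vector's quantized mean, so the online search touches only a few codewords. This is an illustrative sketch under that assumption, not the paper's exact table construction; `build_candidate_table`, `encode_vector`, and the parameter `k` are assumed names:

```python
import numpy as np

def build_candidate_table(codebook, levels=256, k=8):
    """Offline: for each possible (quantized) input-vector mean,
    store the k codewords whose means are closest.  The table is
    image-independent, so it is built once and reused."""
    means = np.array([np.mean(c) for c in codebook])
    table = []
    for m in range(levels):
        order = np.argsort(np.abs(means - m))   # codewords by mean proximity
        table.append(order[:k].tolist())
    return table

def encode_vector(x, codebook, table):
    """Online: search only the tabulated candidates for x's mean."""
    m = int(round(float(np.mean(x))))
    cands = table[min(m, len(table) - 1)]
    dists = [float(np.sum((np.asarray(codebook[i]) - x) ** 2)) for i in cands]
    return cands[int(np.argmin(dists))]
```

The off-line cost is amortized over every image encoded, which is why it does not count against the practical encoding time.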
The Joint Photographic Experts Group (JPEG) developed an image compression standard that is among the most widely used for image compression. One of the factors that influences JPEG compression performance is the quantization table, which simultaneously determines both the bit rate and the decoded quality. Therefore, the design of the quantization table strongly influences the overall compression performance. The goal of this paper is to find better sets of quantization parameters that raise compression performance, i.e., achieve a lower bit rate while preserving higher decoded quality. In our study, we employ a genetic algorithm (GA) to find better compression parameters for medical images, seeking quantization tables that improve compression efficiency in terms of bit rate and decoded quality. Simulations were carried out on different kinds of medical images, such as sonograms, angiograms, and X-rays. The experimental data demonstrate that the GA-based search produces better performance than standard JPEG.
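The GA-based search can be sketched as a standard generational loop over 64-entry quantization tables. The fitness here is a caller-supplied placeholder, whereas the paper's fitness would actually encode and decode the image with the candidate table and combine bit rate with decoded quality; all names, operators, and rates below are our illustrative choices:

```python
import random

def evolve_qtable(fitness, pop_size=20, gens=50, seed=0):
    """Toy GA over 64-entry quantization tables (values 1..255):
    tournament selection, single-point crossover, point mutation.
    `fitness` maps a table to a score to be maximized."""
    rng = random.Random(seed)
    pop = [[rng.randint(1, 255) for _ in range(64)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)           # tournament of two
            p1 = a if fitness(a) >= fitness(b) else b
            a, b = rng.sample(pop, 2)
            p2 = a if fitness(a) >= fitness(b) else b
            cut = rng.randrange(1, 64)          # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:              # point mutation
                child[rng.randrange(64)] = rng.randint(1, 255)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

Because the search space of 64 integer entries is far too large to enumerate, a population-based search of this kind is a natural fit for tuning per-modality tables.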
Vector quantization (VQ) is an efficient technique for signal compression. In traditional VQ, the major computation lies in searching the codebook for the nearest codeword to every input vector. This paper presents an efficient search method to speed up the encoding process. The search algorithm is based on partial distance elimination (PDE), and binary search is used to determine the first search point. We sort the codebook by mean value in preprocessing, before any practical compression. The first search point is the codeword whose mean value is closest to that of the input vector. The best-matching codeword is then found by PDE to reduce the search time. The proposed algorithm demonstrates outstanding performance in terms of time saving and arithmetic operations. Compared to full-search algorithms, it saves more than 95% of the search time.
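The mean-based starting point can be found with an ordinary binary search over the sorted codeword means; a minimal sketch follows. The outward PDE scan around this index is omitted, and the function name is ours:

```python
from bisect import bisect_left

def first_search_point(sorted_means, x_mean):
    """Binary search in the mean-sorted codebook for the index of
    the codeword whose mean is closest to the input vector's mean.
    The PDE search then proceeds outward from this index."""
    i = bisect_left(sorted_means, x_mean)
    if i == 0:
        return 0
    if i == len(sorted_means):
        return len(sorted_means) - 1
    # pick the nearer of the two neighbouring means
    return i if sorted_means[i] - x_mean < x_mean - sorted_means[i - 1] else i - 1
```

Starting near the likely winner makes the running best distance small early, so PDE rejects subsequent candidates after very few dimensions.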