Reassembling fragmented image files plays a crucial role in recovering digital evidence from scattered image fragments. Existing algorithms are mainly graph based: they cast the reassembly problem as a K-vertex-disjoint-path problem in a complete directed graph, which is NP-complete. Based on the padding bytes in BMP files, we present a method to exclude most impossible paths, which improves the accuracy and decreases the time complexity of the existing graph-based methods. According to the alignment rule of the BMP format, padding bytes must be appended to the end of each row to bring the row length up to a multiple of 4 bytes. Hence a fragment that serves as a vertex on the path correctly reassembling a file has a distinctive property: its byte values at padding positions must equal the padding values. Only fragments with this property can be candidates for that vertex. On a test dataset constructed from 330 image files, taking eight classical methods as examples, we show that the proposed method improves accuracy by 32% to 55% and reduces the run time to between 1/6 and 1/237 of the original.
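As a rough illustration of the padding-based candidate test (not the authors' exact implementation), the check can be sketched in Python, assuming 24-bit pixels and zero-valued padding bytes:

```python
def bmp_row_stride(width_px: int, bytes_per_pixel: int = 3) -> int:
    """Row length in a BMP file, padded up to a multiple of 4 bytes."""
    return (width_px * bytes_per_pixel + 3) // 4 * 4

def padding_positions(width_px: int, bytes_per_pixel: int = 3) -> range:
    """Byte offsets within one row that must hold padding (zero) bytes."""
    row_bytes = width_px * bytes_per_pixel
    return range(row_bytes, bmp_row_stride(width_px, bytes_per_pixel))

def is_candidate_fragment(fragment: bytes, offset_in_file: int,
                          width_px: int, bytes_per_pixel: int = 3) -> bool:
    """Return False if any fragment byte that falls on a padding position
    is non-zero, so the fragment cannot belong at this offset."""
    stride = bmp_row_stride(width_px, bytes_per_pixel)
    pad = set(padding_positions(width_px, bytes_per_pixel))
    for i, b in enumerate(fragment):
        if (offset_in_file + i) % stride in pad and b != 0:
            return False
    return True
```

Fragments failing this test are pruned before the graph search, which is what shrinks the set of candidate paths.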
Copy-move is one of the most common methods of image manipulation. Several methods have been proposed to detect and locate the tampered regions, but many of them fail when the copied regions are rotated before being pasted. A rotation-invariant detection method using the Polar Complex Exponential Transform (PCET) is proposed in this paper. First, the original image is divided into overlapping circular blocks, and PCET is applied to each block to extract rotation-invariant robust features. Second, the approximate nearest neighbors (ANN) of each feature vector are collected by locality-sensitive hashing (LSH). Experimental results show that the proposed technique is robust to rotation.
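The block-division step in the first stage can be sketched as a sliding circular window; this is an illustrative decomposition only (the radius and step values are placeholders, and the PCET feature extraction itself is omitted):

```python
import numpy as np

def circular_blocks(img: np.ndarray, radius: int, step: int):
    """Slide a circular window over a grayscale image and yield
    (y, x, pixels), where pixels are the values inside the circle
    centered at (y, x). Adjacent windows overlap when step < 2*radius."""
    h, w = img.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = yy ** 2 + xx ** 2 <= radius ** 2   # circular footprint
    for y in range(radius, h - radius, step):
        for x in range(radius, w - radius, step):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            yield y, x, patch[mask]
```

Each yielded pixel set would then be fed to PCET to produce one rotation-invariant feature vector per block.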
The feature matching step plays a critical role during the copy-move forgery detection procedure. However, when several highly similar features simultaneously exist in the feature space, current feature matching methods will miss a considerable number of genuine matching feature pairs. To this end, we propose a clustering-based method to collect qualified matching features for the feature point-based methods. The proposed method can collect far more genuine matching features than existing methods do, and thus significantly improve the detection performance, especially for multiple pasting cases. Experimental results confirm the efficacy of the proposed method.
We address two critical issues in the design of a finger multibiometric system, i.e., fusion strategy and template security. First, three fusion strategies (feature-level, score-level, and decision-level fusions) with the corresponding template protection technique are proposed as the finger multibiometric cryptosystems to protect multiple finger biometric templates of fingerprint, finger vein, finger knuckle print, and finger shape modalities. Second, we theoretically analyze different fusion strategies for finger multibiometric cryptosystems with respect to their impact on security and recognition accuracy. Finally, the performance of finger multibiometric cryptosystems at different fusion levels is investigated on a merged finger multimodal biometric database. The comparative results suggest that the proposed finger multibiometric cryptosystem at feature-level fusion outperforms other approaches in terms of verification performance and template security.
This paper presents an efficient image encryption scheme for color images based on quantum chaotic systems. In this scheme, a new substitution/confusion stage is achieved via a toral automorphism in the integer wavelet transform domain, scrambling only the Y (luminance) component of the low-frequency subband. Then, a chaotic stream encryption stage is accomplished by generating an intermediate chaotic key-stream image with the help of a quantum chaotic system. Simulation results justify the feasibility of the proposed scheme for color image encryption.
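The stream-encryption stage works by XORing pixel data with a chaotic key stream. A minimal sketch follows, with the paper's quantum chaotic system replaced by the classical logistic map purely for illustration (the seed and control parameter here are arbitrary placeholders):

```python
def logistic_keystream(seed: float, r: float, n: int) -> bytes:
    """Generate n pseudo-random bytes by iterating the logistic map
    x -> r*x*(1-x) and quantizing each state to 8 bits."""
    x = seed
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Stream encryption/decryption: XOR is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, key))
```

Decryption simply regenerates the same key stream from the shared seed and XORs again.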
Image preprocessing plays an important role in a finger vein recognition system. However, previous preprocessing schemes retain weaknesses that must be resolved to achieve high finger vein recognition performance. In this paper, we propose a new finger vein preprocessing pipeline that includes finger region localization, alignment, finger vein ROI segmentation, and enhancement. The experimental results show that the proposed scheme enhances the quality of finger vein images effectively and reliably.
In recent years, verification based on thermal face images has been extensively studied because of its invariance to illumination and immunity to forgery. However, most existing methods have not given full consideration to both high verification performance and the singular within-class scatter matrix problem. We propose a novel thermal face verification algorithm, named two-directional two-dimensional modified Fisher principal component analysis. First, two-dimensional principal component analysis (2-DPCA) is utilized to extract the optimal projective vectors in the row direction. Then, 2-D modified Fisher linear discriminant analysis is applied in the column direction to overcome the singular within-class scatter matrix problem of the 2-DPCA space. Comparative experiments on the natural visible and infrared facial expression thermal face subdatabase demonstrate that the proposed approach outperforms state-of-the-art methods in terms of verification performance.
We present a novel salient region detection scheme combining spatial distribution and global contrast. Our scheme considers not only global contrast between different colors/regions, but also spatial relationships within the same color/region. The variance of the spatial position within the same region/color is used to measure spatial saliency. And we incorporate spatial saliency and global contrast saliency into the final saliency map through linear combinations. Experiments show that the proposed scheme not only performs better than five state-of-the-art methods on the publicly available data set, but it also is simple, easy to implement and efficient.
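A toy version of the combination described above can be sketched as follows; the color quantization level, the variance-to-saliency mapping, and the weight `alpha` are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def saliency_map(img: np.ndarray, levels: int = 8, alpha: float = 0.5) -> np.ndarray:
    """Combine per-color spatial-variance saliency with global color
    contrast by a linear weight. img is an HxW gray image in [0, 256)."""
    h, w = img.shape
    q = (img.astype(int) * levels) // 256          # quantize to a few colors
    ys, xs = np.mgrid[0:h, 0:w]
    spatial = np.zeros_like(q, dtype=float)
    contrast = np.zeros_like(q, dtype=float)
    hist = np.bincount(q.ravel(), minlength=levels) / (h * w)
    for c in range(levels):
        mask = q == c
        if not mask.any():
            continue
        # tightly clustered colors get high spatial saliency (low variance)
        var = ys[mask].var() + xs[mask].var()
        spatial[mask] = 1.0 / (1.0 + var)
        # global contrast: frequency-weighted distance to every other color
        contrast[mask] = sum(hist[d] * abs(c - d) for d in range(levels))
    s = alpha * spatial + (1 - alpha) * contrast
    return s / (s.max() + 1e-12)
```

A small, distinctly colored region scores high on both terms, while a widely spread background color scores low.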
Image encryption is combined with reversible data hiding in this paper, where the data to be hidden are modulated by different secret keys selected for encryption. To extract the hidden data from the ciphertext, the different tentative decryption results are tested against a typical random distribution in both the spatial and frequency domains, and the goodness-of-fit degrees are compared to extract one hidden bit. The encryption-based data hiding process is inherently reversible. Experiments demonstrate the proposed scheme's effectiveness on natural and textural images, in both gray-level and binary forms.
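The extraction idea can be illustrated with a chi-square goodness-of-fit test in the spatial domain: the tentative decryption that deviates most from randomness is taken to be the true plaintext, which identifies the key that was used and hence one hidden bit. This is a simplified sketch, not the paper's exact statistic or domain combination:

```python
def chi_square_uniform(data: bytes) -> float:
    """Chi-square statistic of the byte histogram against the uniform
    distribution; large values mean the data looks non-random."""
    n = len(data)
    expected = n / 256
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

def extract_hidden_bit(ciphertext: bytes, decrypt, key0, key1) -> int:
    """Tentatively decrypt with both candidate keys; the result that
    deviates more from randomness reveals which key (bit) was used."""
    s0 = chi_square_uniform(decrypt(ciphertext, key0))
    s1 = chi_square_uniform(decrypt(ciphertext, key1))
    return 0 if s0 > s1 else 1
```

`decrypt` is any keyed stream decryption; a wrong key yields noise-like output with a near-uniform histogram, while the right key recovers structured image data.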
KEYWORDS: Digital watermarking, Distortion, Wavelets, Image processing, Information security, Image restoration, Medical imaging, Error analysis, Image quality, Data storage
We investigate in this paper several possible methods to improve the performance of the bit-shifting-based reversible image watermarking algorithm in the integer DCT domain. In view of the large distortion caused by modifying high-amplitude coefficients in the integer DCT domain, several coefficient selection methods are proposed to give the coefficient modification process some adaptability to the amplitude status of different 8-by-8 DCT coefficient blocks. The proposed adaptive modification methods include global coefficient-group distortion sorting, zero-tree DCT prediction, and a low-frequency-based coefficient prediction method for block classification. All these methods are designed to optimize the bit-shifting-based coefficient modification process and thereby improve the watermarking performance in terms of the capacity/distortion ratio. Comparisons among these methods are presented with respect to capacity/distortion ratio, performance stability, performance scalability, algorithm complexity, and security. Compared with our previous integer-DCT-based scheme and other recently proposed reversible image watermarking algorithms, some of the proposed methods exhibit much improved performance; among them, the low-frequency-based coefficient prediction methods are the most efficient at predicting coefficient amplitude status, leading to distinctly improved watermarking performance in most respects. Detailed experimental results and performance analysis are also given for all the proposed algorithms and for several other reversible watermarking algorithms.
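The core bit-shifting operation that the selection strategies build on can be sketched as follows (a generic illustration of reversible bit-shifting on one integer coefficient; the adaptive coefficient selection itself is omitted):

```python
def embed_bit(coeff: int, bit: int) -> int:
    """Embed one watermark bit by left-shifting the coefficient magnitude
    and placing the bit in the vacated least significant bit."""
    sign = -1 if coeff < 0 else 1
    return sign * (abs(coeff) * 2 + bit)

def recover(coeff: int) -> tuple:
    """Invert the embedding: return (bit, original coefficient)."""
    sign = -1 if coeff < 0 else 1
    return abs(coeff) & 1, sign * (abs(coeff) // 2)
```

Because the magnitude doubles, the distortion grows with the coefficient's amplitude, which is exactly why the paper restricts embedding to carefully selected (low-amplitude) coefficients.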
Vector Quantization (VQ) is an efficient image compression technique. In this paper, a new VLSI architecture for VQ encoding in the Hadamard Transform (HT) domain with the partial distance search (PDS) technique is proposed. The PDS algorithm is simple and efficient: it allows early termination of the distortion calculation between an input vector and a codeword by introducing a premature exit condition into the search process. By using a codeword elimination criterion based on the MSE in the Hadamard transform domain, a presorted codebook, and a nearest-search method, a large number of codewords can be rejected before the MSE is computed, while the image quality remains unchanged compared to the full-search VQ encoder. The proposed fast codeword search algorithm reduces computation and is easier to implement in VLSI. Experimental results demonstrate the effectiveness of the proposed VLSI architecture.
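The PDS premature exit condition can be shown in a few lines of software (a reference sketch of the generic PDS idea, without the HT-domain elimination criterion or the hardware mapping):

```python
def partial_distance_search(vector, codebook):
    """Nearest-codeword search with PDS: abandon a codeword as soon as its
    accumulated squared distance exceeds the best distance found so far."""
    best_idx, best_dist = -1, float("inf")
    for idx, codeword in enumerate(codebook):
        dist = 0.0
        for v, c in zip(vector, codeword):
            dist += (v - c) ** 2
            if dist >= best_dist:      # premature exit condition
                break
        else:                          # loop completed: new best codeword
            best_idx, best_dist = idx, dist
    return best_idx, best_dist
```

The result is identical to a full search, since only codewords provably worse than the current best are abandoned.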