Non-destructive evaluation is widely used in the manufacturing industry for the detection and characterisation of defects. Typical techniques include visual, magnetic particle, fluorescent dye penetrant, ultrasonic, and eddy current inspection. This paper presents a multi-agent approach to combining such image data for quality control. The use of distributed agents allows the speed benefits of parallel processing to be realised, facilitating increased levels of detection through the use of high-resolution images. The integration of multi-sensor devices and the fusion of their multi-modal outputs has the potential to provide an increased level of certainty in defect detection and identification. It may also allow the detection and identification of defects that cannot be detected by an individual sensor. This would reduce uncertainty and provide a more complete picture of aesthetic and structural integrity than is possible from a single data source.
A blackboard architecture, DARBS (Distributed Algorithmic and Rule-based Blackboard System), has been used to manage the processing and interpretation of image data. Rules and image processing routines are allocated to intelligent agents that communicate with each other via the blackboard, where the current understanding of the problem evolves. Specialist agents register image segments into a common coordinate system. An intensity-based algorithm eliminates the landmark extraction that would be required by feature-based registration techniques. Once registered, pixel-level data fusion is utilised so that both complementary and redundant data can be exploited. The modular nature of the blackboard architecture allows additional sensor data to be processed by the addition or removal of specialised agents.
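The abstract does not spell out the registration and fusion algorithms, so the following is only a minimal sketch of one intensity-based approach: an exhaustive integer-translation search scored by normalised cross-correlation, followed by pixel-level fusion by simple averaging. The function names, the NumPy dependency, and the translation-only motion model are assumptions for illustration, not the agents' actual implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized grayscale patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def register_translation(reference, segment, max_shift=20):
    """Find the integer (dy, dx) shift of `segment` that best aligns it with
    `reference` (same shape), by exhaustive search over shifts and scoring the
    overlapping region with normalised cross-correlation."""
    best_score, best_offset = -np.inf, (0, 0)
    h, w = reference.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows of the reference and the shifted segment.
            r = reference[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            s = segment[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            score = ncc(r, s)
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset

def fuse_pixelwise(registered_images):
    """Pixel-level fusion of already registered images by simple averaging."""
    return np.mean(np.stack(registered_images, axis=0), axis=0)
```

In practice, averaging could be replaced by any pixel-level rule (maximum response, weighted combination, and so on) without changing the registration step.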
Standardised image databases, or rather the lack of them, are one of the main weaknesses in the field of content-based image retrieval (CBIR). Authors often use their own images or do not specify the source of their datasets; naturally, this makes comparison of results difficult. While a first approach towards a common colour image set has been taken by the MPEG-7 committee, their database does not cater for all strands of research in the CBIR community. In particular, as the MPEG-7 images only exist in compressed form, the set does not allow for an objective evaluation of image retrieval algorithms that operate in the compressed domain, nor can it be used to judge the influence image compression has on the performance of CBIR algorithms. In this paper we introduce a new dataset, UCID (pronounced "use it"), an Uncompressed Colour Image Dataset which tries to bridge this gap. The UCID dataset currently consists of 1338 uncompressed images together with a ground truth of a series of query images and the corresponding models that an ideal CBIR algorithm would retrieve. While its initial intention was to provide a dataset for the evaluation of compressed-domain algorithms, the UCID database also represents a good benchmark set for the evaluation of any kind of CBIR method, as well as an image set that can be used to evaluate image compression and colour quantisation algorithms.
Image retrieval and image compression are both very active fields of research. Unfortunately, in the past they were pursued independently, leading to image indexing methods that are efficient and effective but restricted to uncompressed images. In this paper we introduce an image retrieval technique that operates in the compressed domain of vector quantized images. Vector quantization (VQ) achieves compression by representing image blocks as indices into a codebook of prototype blocks. Realizing that, if images are coded with their own VQ codebooks, much of the image information is contained in the codebook itself, we propose the comparison of codebooks, based on a modified Hausdorff distance, as a novel method for compressed domain image retrieval. Experiments, based on an image database comprising many colorful pictures, show this technique to give excellent results, outperforming classical color indexing techniques.
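As a minimal sketch of the distance used here, the modified Hausdorff distance of Dubuisson and Jain between two codebooks can be computed as below. The array shapes, function names, and the example usage in the comments are assumptions for illustration only.

```python
import numpy as np

def modified_hausdorff(codebook_a, codebook_b):
    """Modified Hausdorff distance between two VQ codebooks.

    Each codebook is an (n_codewords, block_dim) array of prototype blocks.
    d(A, B) is the mean over a in A of the minimum over b in B of ||a - b||;
    the MHD is max(d(A, B), d(B, A))."""
    diff = codebook_a[:, None, :] - codebook_b[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))   # pairwise codeword distances
    d_ab = dist.min(axis=1).mean()            # A -> B
    d_ba = dist.min(axis=0).mean()            # B -> A
    return float(max(d_ab, d_ba))

# Hypothetical usage: rank database images by increasing distance to the query,
# each image represented by its own, e.g. 64-entry, codebook of 4x4 RGB blocks.
# ranked = sorted(database.items(),
#                 key=lambda kv: modified_hausdorff(query_codebook, kv[1]))
```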
Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two that allows image retrieval to be performed directly in the compressed domain of images, without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields such as medical imaging or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the observation that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
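The abstract does not name a specific predictor, so the sketch below uses the median edge detector (MED) predictor familiar from JPEG-LS and treats the histogram of prediction residuals as a textural retrieval signature. The predictor choice, function names, and the L1 histogram distance are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def med_predict(img):
    """Median edge detector (MED) prediction residuals for a 2-D grayscale array."""
    img = img.astype(int)
    a = np.roll(img, 1, axis=1)                        # left neighbour
    b = np.roll(img, 1, axis=0)                        # upper neighbour
    c = np.roll(np.roll(img, 1, axis=0), 1, axis=1)    # upper-left neighbour
    pred = np.where(c >= np.maximum(a, b), np.minimum(a, b),
            np.where(c <= np.minimum(a, b), np.maximum(a, b), a + b - c))
    residual = img - pred
    residual[0, :] = 0                                 # first row/column lack a
    residual[:, 0] = 0                                 # full causal neighbourhood
    return residual

def residual_signature(img, bins=64):
    """Histogram of prediction residuals used as a compact textural descriptor."""
    hist, _ = np.histogram(med_predict(img), bins=bins, range=(-255, 255), density=True)
    return hist

def l1_distance(h1, h2):
    """Compare two residual histograms; smaller means more similar texture."""
    return float(np.abs(h1 - h2).sum())
```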
A significant number of color images on the WWW and in many other applications are color quantized. This paper presents a new technique for content-based retrieval of this type of color image. We introduce a technique that compares the color content of images by comparing their color tables. An effective measure of the similarity of color tables, based on a modified Hausdorff distance, has been developed. Computer simulation results on retrieving three sets of colorful images are presented, which demonstrate the high effectiveness of the method.
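A compact sketch of this idea, assuming indexed (palette) images read with Pillow and pairwise RGB distances computed with SciPy; the function names, the fallback quantisation step, and the library choices are illustrative assumptions.

```python
import numpy as np
from PIL import Image
from scipy.spatial.distance import cdist

def color_table(path):
    """Colour table of an indexed (palette) image as an (n, 3) RGB array."""
    img = Image.open(path)
    if img.mode != "P":                        # hypothetical fallback for true-colour input
        img = img.quantize(colors=256)
    pal = np.asarray(img.getpalette(), dtype=float).reshape(-1, 3)
    return pal[np.unique(np.asarray(img))]     # keep only palette entries actually used

def palette_distance(p, q):
    """Modified Hausdorff distance between two colour tables."""
    d = cdist(p, q)                            # pairwise RGB distances
    return float(max(d.min(axis=1).mean(), d.min(axis=0).mean()))
```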
It has been argued that future image coding techniques should allow for 'midstream access', i.e. allow image query, retrieval, and modification to proceed on the compressed representation. In a recent work, we introduced the color visual pattern image coding (CVPIC) technique for color image compression. An image is divided into blocks and each block is coded locally by mapping it to one of a predefined, universal set of visually significant image patterns consisting of representations for both edge and uniform regions. The pattern and color information is then stored, following a color quantization algorithm and an entropy encoding stage. Compression ratios between 40:1 and 60:1 were achieved while maintaining high image quality on a variety of natural color images. It was also shown that CVPIC could achieve comparable performance to state-of-the-art techniques such as JPEG.
In this paper, we propose a color image processing method that combines modern signal processing techniques with knowledge about the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on preserving the overall visual quality of the image while simultaneously taking computational efficiency into account. A specific color image enhancement technique, termed Hybrid Vector Median Filtering, is presented. Computer simulations have been performed to demonstrate that the new approach is technically sound and yields results comparable to or better than those of traditional methods.
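For orientation, the sketch below implements a conventional vector median filter for RGB images; the hybrid variant described in the abstract additionally adapts its processing to the visual importance of the color signals, which is not reproduced here. Function names and the NumPy dependency are assumptions.

```python
import numpy as np

def vector_median_filter(img, size=3):
    """Plain vector median filter for an RGB image of shape (H, W, 3).

    For each neighbourhood, the output pixel is the colour vector in the window
    whose summed Euclidean distance to all other vectors in the window is minimal."""
    pad = size // 2
    padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape[:2]
    for y in range(h):
        for x in range(w):
            window = padded[y:y + size, x:x + size].reshape(-1, 3)
            # Sum of distances from each vector to every other vector in the window.
            d = np.sqrt(((window[:, None, :] - window[None, :, :]) ** 2).sum(-1)).sum(1)
            out[y, x] = window[d.argmin()]
    return out.astype(img.dtype)
```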
A novel color image coding technique based on visual patterns is presented. Visual patterns, a concept first introduced by Chen and Bovik, are image blocks representing visually meaningful information. A method has been developed to extend the concept of visual patterns (originally developed for grayscale images) to color image coding. A mapping criterion has been developed to map small image blocks to a set of predefined, universal visual patterns in a uniform color space. Source coding and color quantization are applied to achieve efficient coding. Compression ratios between 40:1 and 60:1 (0.6 - 0.4 bpp) have been achieved; subjective as well as objective measures show that the new method is comparable to state-of-the-art techniques such as JPEG.
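To make the block-to-pattern mapping concrete, the fragment below classifies a small luminance block as uniform or matches it against a set of predefined binary edge patterns by Hamming distance. This is illustrative only: the method described in the abstract performs the match on colour blocks in a uniform color space with its own mapping criterion, and all names and thresholds here are assumptions.

```python
import numpy as np

def block_to_pattern(block, patterns, uniform_threshold=10.0):
    """Map one 4x4 luminance block to a visual pattern index.

    `patterns` is an (n_patterns, 4, 4) array of predefined binary edge patterns.
    Returns -1 for a uniform block, otherwise the index of the closest pattern."""
    if block.std() < uniform_threshold:
        return -1                                   # uniform region, no edge pattern
    binary = (block > block.mean()).astype(int)     # binarise around the block mean
    # Hamming distance to every predefined pattern (and to its complement).
    d = np.minimum(np.abs(patterns - binary).sum(axis=(1, 2)),
                   np.abs((1 - patterns) - binary).sum(axis=(1, 2)))
    return int(d.argmin())
```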