This paper presents some of the results of a UK government research program into methods of improving the effectiveness of CCTV surveillance systems. The paper identifies the major components of video security systems and the primary causes of unsatisfactory images. A method is outlined for relating the picture detail limitations imposed by each system component to overall system performance. The paper also points out some possible difficulties arising from the use of emerging new technology.
We measured the degradation of recorded video images caused by repetitive use of a video cassette tape, using a computer-controlled video cassette recorder. The measurements cover up to 400 cycles of record and replay. We also measured the degradation due to repetitive time-lapse replay, up to 400 times, as a simulation of still replay of up to 80 seconds. The results show that multiple record and replay cycles induce little visible degradation in the recorded video images. Under multiple time-lapse replay, video cassette tapes were damaged by dropout of the SYNC signal; however, as long as the SYNC signal was present, little visible image degradation was observed.
An algorithm for histogram modification via image evolution equations is first presented in this paper. We show that the image histogram can be modified to achieve any given distribution as the steady state solution of this partial differential equation. We then prove that this equation corresponds to a gradient descent flow of a variational problem; that is, the proposed PDE is solving an energy minimization problem. This gives a new interpretation to histogram modification and contrast enhancement in general. This interpretation is formulated entirely in the image domain, in contrast with classical techniques for histogram modification, which are formulated in a probabilistic domain. From this, new algorithms for contrast enhancement, which include, for example, image modeling, can be derived. Based on the energy formulation and its corresponding PDE, we show that the proposed histogram modification algorithm can be combined with denoising schemes. This makes it possible to perform simultaneous contrast enhancement and denoising, avoiding the noise sharpening effects common in classical algorithms. The approach is extended to local contrast enhancement as well. Theoretical results regarding the existence of solutions of the proposed equations are presented.
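The fixed-point idea behind such an evolution can be sketched in a few lines. The following is a minimal discrete illustration: at the steady state, the empirical CDF F must satisfy F(u) = u/M, which is histogram equalization. It is not the authors' exact PDE nor its denoising-coupled variants, and the function name and step sizes are illustrative.

```python
import numpy as np

def equalize_by_evolution(u, n_steps=50, dt=0.5):
    """Evolve an image so its histogram approaches the uniform one.

    Each step pushes every gray value toward M * F(u), where F is the
    empirical CDF, so the steady state satisfies F(u) = u / M (an
    equalized histogram). A sketch of the fixed-point idea only."""
    u = u.astype(float)
    M = 255.0
    for _ in range(n_steps):
        # empirical CDF evaluated at every pixel value (via ranks)
        order = np.argsort(u, axis=None)
        F = np.empty(u.size)
        F[order] = (np.arange(u.size) + 1) / u.size
        F = F.reshape(u.shape)
        u += dt * (M * F - u)      # steady state: u = M * F(u)
    return np.clip(u, 0, M)

# usage: img_eq = equalize_by_evolution(img)   # img: 2-D grayscale array
```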
Adapted waveform analysis refers to a collection of FFT-like adapted transform algorithms. Given an image, these methods provide special matched collections of templates (orthonormal bases) enabling an efficient coding of the image. Perhaps the closest well-known example of such a coding method is provided by musical notation, where each segment of music is represented by a musical score made up of notes (templates) characterized by their duration, pitch, location and amplitude; our method corresponds to transcribing the music in as few notes as possible. The extension to images and video is straightforward: we describe the image by collections of oscillatory patterns (paintbrush strokes) of various sizes, locations and amplitudes using a variety of orthogonal bases. These basis functions are chosen from predefined libraries of localized oscillatory functions (trigonometric and wavelet-packet waveforms) so as to optimize the number of parameters needed to describe our object. These algorithms are of complexity N log N, opening the door to a large range of applications in signal and image processing, such as compression, feature extraction, denoising and enhancement. In particular, we describe a class of special-purpose compressions for fingerprint images, as well as denoising tools for texture and noise extraction. We start by relating traditional Fourier methods to wavelet and wavelet-packet based algorithms using a recent refinement of the windowed sine and cosine transforms. We then derive an adapted local sine transform, show its relation to wavelet and wavelet-packet analysis, and describe an analysis toolkit illustrating the merits of different adaptive and nonadaptive schemes.
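The best-basis search underlying such adapted transforms can be illustrated with a self-contained Haar wavelet-packet sketch. The paper's libraries also include local trigonometric waveforms; the Haar filter and per-node entropy cost below are a simplification, not the authors' toolkit.

```python
import numpy as np

def haar_step(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def cost(c):
    # Shannon-entropy cost: lower means fewer significant "notes"
    p = c**2 / (np.sum(c**2) + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_basis(x, level, max_level):
    """Return (cost, blocks): the cheapest representation of x as a list
    of coefficient blocks from the Haar wavelet-packet tree."""
    if level == max_level or len(x) % 2 == 1:
        return cost(x), [x]
    a, d = haar_step(x)
    ca, ba = best_basis(a, level + 1, max_level)
    cd, bd = best_basis(d, level + 1, max_level)
    if cost(x) <= ca + cd:
        return cost(x), [x]    # keeping this node beats splitting it
    return ca + cd, ba + bd

# toy signal: a burst of oscillation plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * 60 * t) * (t > 0.5) + 0.1 * rng.standard_normal(1024)
total, blocks = best_basis(sig, 0, max_level=6)
print(total, cost(sig), len(blocks))   # adapted cost is below the raw cost
```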
We present a study of some image restoration techniques based on partial differential equations. We study separately the denoising problem and the restoration of discontinuities, and we analyze the capability of differential operators to restore images. In particular, we analyze a number of models from the literature and present comparative results. Finally, we present a model based on the combination of the anisotropic diffusion of Alvarez, Lions, and Morel and the shock filters of Osher and Rudin.
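A minimal explicit scheme combining an edge-stopping diffusion with the Osher-Rudin shock filter conveys the flavor of such a combined model. The edge-stopping function g and the coupling weight w below are illustrative choices, not the exact Alvarez-Lions-Morel formulation (which evaluates g on a Gaussian-smoothed gradient).

```python
import numpy as np

def diffuse_and_sharpen(u, n_iter=30, dt=0.1, k=10.0, w=0.5):
    """One possible coupling of an edge-stopping diffusion with the
    Osher-Rudin shock filter u_t = -sign(lap u) * |grad u|. Sketch only."""
    u = u.astype(float)
    for _ in range(n_iter):
        uxf = np.roll(u, -1, 1) - u       # forward x difference
        uxb = u - np.roll(u, 1, 1)        # backward x difference
        uyf = np.roll(u, -1, 0) - u
        uyb = u - np.roll(u, 1, 0)
        lap = uxf - uxb + uyf - uyb       # discrete Laplacian
        # edge-stopping function on the (central) gradient magnitude
        g = 1.0 / (1.0 + ((uxf + uxb)**2 + (uyf + uyb)**2) / (4 * k * k))
        s = np.sign(lap)
        # upwind gradient magnitude for the advection (shock) term
        gx2 = np.where(s > 0,
                       np.maximum(uxb, 0)**2 + np.minimum(uxf, 0)**2,
                       np.minimum(uxb, 0)**2 + np.maximum(uxf, 0)**2)
        gy2 = np.where(s > 0,
                       np.maximum(uyb, 0)**2 + np.minimum(uyf, 0)**2,
                       np.minimum(uyb, 0)**2 + np.maximum(uyf, 0)**2)
        shock = -s * np.sqrt(gx2 + gy2)   # sharpens across zero-crossings
        u += dt * (g * lap + w * shock)   # diffusion + edge sharpening
    return u
```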
Accurate Video Scene Measurements and Motion Estimation for Criminal and Accident Investigations
Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle lenses, generate substantial nonlinear distortion, which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find a transformation of the video image such that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first performing edge extraction on a possibly distorted video sequence, then performing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and finally finding the parameters of our distortion model that best transform these edges into segments. Results are presented on real video images and compared with the distortion calibration obtained by a full camera calibration method using a calibration grid.
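A stripped-down version of the final optimization step, assuming a one-parameter radial model and a known distortion center, could look as follows; the actual method fits a richer model and obtains its edge chains from the polygonal approximation described above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def undistort(points, k1, center):
    # first-order radial model: p_undist = c + (p - c) * (1 + k1 * r^2)
    d = points - center
    r2 = np.sum(d * d, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2)

def straightness(points):
    # residual energy of the best-fit line = smallest eigenvalue of the
    # 2x2 scatter matrix of the centered points
    q = points - points.mean(axis=0)
    return np.linalg.eigvalsh(q.T @ q)[0]

def calibrate_k1(edge_chains, center):
    """edge_chains: list of (N_i, 2) pixel-coordinate arrays, each assumed
    to be the image of a straight scene line. Returns the k1 that makes
    them collectively straightest; the bounds assume pixel coordinates."""
    def total(k1):
        return sum(straightness(undistort(c, k1, center)) for c in edge_chains)
    return minimize_scalar(total, bounds=(-1e-6, 1e-6), method='bounded').x
```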
The development and use of Cognitech's Measure package is described. The Measure program allows the user to enter the locations and dimensions of known points in an image and, through the established principles of photogrammetry, recover the dimensions and positions of unknown objects. Several cases in which Measure was employed in the investigative process are discussed.
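For points lying in a common plane, this kind of photogrammetric recovery reduces to estimating a homography from known reference points. The sketch below (plain DLT with hypothetical variable names) illustrates that special case only; it is not Cognitech's implementation.

```python
import numpy as np

def homography_from_points(src, dst):
    """DLT estimate of the 3x3 homography mapping src points to dst points
    (needs 4 or more correspondences, no three of them collinear)."""
    rows = []
    for (x, y), (X, Y) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        rows.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 3)   # null vector = flattened homography

def to_plane(H, p):
    # map an image point into the world plane's coordinate frame
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# hypothetical usage: four image points with known floor-plane coordinates
# fix H; any two other points on that plane can then be measured:
#   H = homography_from_points(image_pts, world_pts)
#   dist = np.linalg.norm(to_plane(H, p1) - to_plane(H, p2))
```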
Processing entire movies and taking advantage of the interframe redundancy is the key to shape-from-motion analysis. Thus, recovering the depth of a fixed scene from an image sequence can be viewed as a movie processing problem: how to focus the redundant depth information of a noisy image sequence into a perfect depth-coherent movie? We present a natural set of axioms in agreement with depth recovery, in the simple case of a straight movement of the camera parallel to the focal plane. According to these axioms, we show that there is a unique depth-coherent way of processing movies, described by a nonlinear partial differential equation. The corresponding multiscale analysis has the property of smoothing the motion field of a movie, leading naturally to a perfect motion field compatible with a depth interpretation. Moreover, in the case of an ideal movie, i.e. one coherent with the observation of a fixed 3D scene, this analysis can be viewed as a simple filtering of the camera movement preserving the depth interpretation given by the movie, and is thereby perspective invariant. Finally, we study a numerical scheme compatible with the theoretical axioms and present experiments on synthetic noisy movies.
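For the camera motion considered here (straight translation parallel to the focal plane), depth and image motion are related as in stereo, which a toy function makes explicit. This is only the underlying geometry, not the axiomatic multiscale analysis itself.

```python
import numpy as np

def depth_from_lateral_motion(flow_x, f_pixels, v_camera, dt):
    """For a camera translating parallel to the image plane with speed
    v_camera, a point at depth Z produces horizontal image motion
    u = f * (v_camera * dt) / Z pixels per frame, hence Z = f * B / u.
    flow_x: horizontal flow field (pixels/frame); f_pixels: focal length
    in pixels. Toy geometry only; real flow fields are noisy, which is
    exactly what the depth-coherent smoothing above addresses."""
    baseline = v_camera * dt                   # camera travel per frame
    with np.errstate(divide='ignore'):
        return f_pixels * baseline / flow_x    # infinite depth where flow is 0
```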
This paper presents results on an approach to optical flow estimation and image segmentation based on treating the flow of image level sets rather than individual points. This allows the accurate estimation of object velocity even from low-quality video sequences and has the advantage of simplifying the analysis of classically ill-conditioned problems in optical flow estimation, such as the aperture effect. The procedure has been tailored to motion estimation for small to intermediate sized objects and can be applied to the problem of estimating human locomotion from image sequences. Under reasonable assumptions, it is shown analytically that the condition number of the aggregate velocity equations from optical flow is related in a natural way to the curvature of the image level set at the point of velocity estimation. This provides a link with affine invariant image processing and opens the door to curvature-based chaining methods for estimating the flow velocity of moving targets. Numerical examples are presented illustrating the advantages of this approach over competing methods.
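The link between conditioning and level-set curvature is easy to see numerically: stacking the brightness-constancy constraints collected along one level set gives a two-unknown least-squares system whose condition number explodes for straight (aperture-limited) contours. A sketch, with Ix, Iy, It denoting precomputed image derivatives and mask selecting the level-set pixels (all names illustrative):

```python
import numpy as np

def level_set_velocity(Ix, Iy, It, mask):
    """Solve the stacked brightness-constancy equations
    Ix*vx + Iy*vy = -It over all pixels of one level set (boolean mask)
    in the least-squares sense, and report how well-posed it is."""
    A = np.column_stack([Ix[mask], Iy[mask]])
    b = -It[mask]
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    # a nearly straight level curve gives nearly parallel gradient rows,
    # hence a large condition number: the classical aperture problem
    return v, np.linalg.cond(A)
```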
The use of some new techniques, such as self-calibration and calibration from partial a priori information about the observed scene (angles, distances), together with some classical techniques of video sequence processing, such as feature extraction and token tracking, makes the computation of 3D measurements of the observed scene from video sequences possible. Depending upon the kind of calibration that has been obtained for the camera, these measurements are projective, affine or metric, and can be obtained directly from the images without any reconstruction process. Experimental results on real video sequences are presented.
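As a concrete instance of a measurement available with no calibration at all, the cross-ratio of four collinear points is already projectively invariant. The snippet below illustrates that one case, not the paper's full pipeline; with affine or metric calibration, simpler ratios and true distances become available.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear image points (2-D coordinates, given
    in order along the line). Invariant under any projective camera, so
    it can be compared directly against the same ratio in the scene."""
    def seg(p, q):   # length along the common line
        return np.linalg.norm(np.asarray(q, float) - np.asarray(p, float))
    return (seg(a, c) * seg(b, d)) / (seg(b, c) * seg(a, d))
```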
Recent advances in millimeter-wave (MMW), microwave, and infrared (IR) technologies provide the means to detect concealed weapons remotely through clothing and, in some cases, through walls. Since the development of forward-looking infrared instruments, work has been ongoing in attempting to use these devices for concealed weapon detection based on temperature differences between metallic weapons and the human body; the limited penetration of clothing in the infrared has led to the development of techniques based on lower frequencies. Focal plane arrays operating at MMW frequencies are becoming available, eliminating the need for a costly and slow mechanical scanner for generating images. These radiometric sensors also detect temperature differences between weapons and the human body background. Holographic imaging systems operating at both microwave and MMW frequencies have been developed which generate images of near photographic quality through clothing and through thin, nonmetallic walls. Finally, a real-aperture radar is useful for observing people and detecting weapons through walls and in the field under reduced visibility conditions. This paper reviews all of these technologies and gives examples of images generated by each type of sensor. An assessment of the future of this technology with regard to law enforcement applications is also given.
Image Processing Algorithms for Extracting, Enhancing, and Identification of Marks, Prints, and Patterns from the Crime Scene
One of the image processing functions used for the enhancement of latent fingermark images is the Fourier transform. This paper describes some effects of spatial resolution, zero-filling and windowing on fingermark data in the Fourier domain. It is shown that, with an understanding of fingermark structure, it is possible to determine the approximate position in the Fourier domain of the frequency data corresponding to the fingermark image detail. The effect of attenuating frequency data on a zero-filled image is shown to differ from the same attenuation on a non-zero-filled image. The effects of windowing spatial data on the frequency data are also highlighted and compared with the same data after the application of a Hanning window.
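The effects discussed are easy to reproduce on synthetic data. The sketch below builds a toy sinusoidal "fingermark" and compares its raw spectrum with zero-filled and Hanning-windowed versions; the ridge period of 8 pixels is an arbitrary illustrative choice.

```python
import numpy as np

N = 256
x = np.arange(N)
# synthetic "fingermark": vertical ridges with a period of 8 pixels, so
# the spectral energy concentrates near +/- N/8 cycles per image width
ridges = np.sin(2 * np.pi * x / 8.0) * np.ones((N, 1))

spec_raw = np.abs(np.fft.fftshift(np.fft.fft2(ridges)))

# zero-filling: pad to twice the size; the ridge peak lands on a finer
# frequency grid, so attenuating a fixed band hits different data
padded = np.pad(ridges, ((0, N), (0, N)))
spec_pad = np.abs(np.fft.fftshift(np.fft.fft2(padded)))

# Hanning window: suppresses the leakage caused by the image borders
win = np.outer(np.hanning(N), np.hanning(N))
spec_win = np.abs(np.fft.fftshift(np.fft.fft2(ridges * win)))
```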
This paper proposes new methods for fingerprint matching using orientation information extracted from the fingerprint image. Two intrinsic images of the fingerprint are obtained: the angle image, which describes the ridge orientation at each point, and the coherence image, which measures the accuracy of the angle at that point. We use the phase portrait of a two-dimensional ordinary differential equation to describe the orientation field of the angle image. The Lagrange multiplier rule is used to minimize the objective function in order to estimate the optimal parameter matrix of the ordinary differential equation. We prove that the Jordan form of this matrix, which describes the fingerprint ridge orientation, is invariant under rotation and translation of the image, so we use the Jordan forms of the parameter matrices corresponding to the core and delta regions to match two fingerprint images. Algorithms and rules are proposed to judge the similarity of fingerprint images. We also reconstruct the orientation information of some regions in the fingerprint image based on their neighborhood orientation information.
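The two intrinsic images can be estimated with standard structure-tensor formulas, sketched below; the paper's contribution (phase-portrait fitting and Jordan-form matching) operates on these fields afterwards and is not shown. The window size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def orientation_and_coherence(img, win=15):
    """Estimate the angle image (ridge direction per pixel) and the
    coherence image (reliability of that angle) from local gradients."""
    gy, gx = np.gradient(img.astype(float))
    gxx = uniform_filter(gx * gx, win)        # smoothed structure tensor
    gyy = uniform_filter(gy * gy, win)
    gxy = uniform_filter(gx * gy, win)
    # dominant gradient orientation, rotated 90 deg to the ridge direction
    angle = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
    # coherence in [0, 1]: 1 = perfectly parallel ridges, 0 = isotropic
    coherence = np.sqrt((gxx - gyy)**2 + 4 * gxy**2) / (gxx + gyy + 1e-9)
    return angle, coherence
```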
The computerized toolmark comparison system is based on a cross-correlation between a striation mark left by a tool on a lock and a test mark made with a suspect tool or taken from the database. The cross-correlation is applied in the frequency domain to save time. The area to be correlated is defined by the toolmark expert, and a profile line is calculated and displayed based on the defined area. The two compared images may appear shifted relative to one another, or only part of one toolmark may appear in the other. Profiles of the same length are chosen from the two samples for entry into the correlation process, and all possible correlations are checked by cutting and shifting through all combinations. The database contains the defined images and the profiles calculated from them. The system consists of a 486 PC with a frame grabber and a video camera attached to a microscope. Results show that if the striation marks are clear and wider than a minimum pixel limit, a correlation result higher than 0.6 indicates a possible match that has to be checked by the expert for a final decision. Future plans are to implement a 2D correlation; this method will enable us to deal with combinations of striations, which are found frequently in real casework.
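The core matching step, normalized cross-correlation of two equal-length profiles over all shifts computed via the FFT, can be sketched as follows. The 0.6 screening threshold is the figure reported above; the circular formulation and function name are illustrative simplifications.

```python
import numpy as np

def circular_ncc(a, b):
    """Normalized cross-correlation of two equal-length striation profiles
    at every circular shift, computed in the frequency domain."""
    a = (a - a.mean()) / (a.std() + 1e-12)   # z-score both profiles
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), len(a))
    return corr / len(a)   # corr[k] = Pearson correlation at shift k

# usage sketch: a score above the screening threshold flags a candidate
# match for expert review:
#   if circular_ncc(profile_mark, profile_test).max() > 0.6: ...
```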
A database for toolmarks (named TRAX) and a database for footwear outsole designs (named REBEZO) have been developed on a PC. The databases are filled with video images and administrative data about the toolmarks and footwear designs. For TRAX, an algorithm for the automatic comparison of digitized striation patterns has been developed. The algorithm appears to work well for deep and complete striation marks and will be implemented in TRAX. For REBEZO, some efforts have been made toward the automatic classification of outsole patterns. The algorithm first segments the shoe profile; Fourier features are then extracted for the separate elements and classified with a neural network. Future developments will include information on invariant moments of the shape and on rotation angle in the neural network.
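One common way to obtain Fourier features from a segmented outsole element is via Fourier descriptors of its contour, sketched below. The abstract does not specify REBEZO's exact feature set, so treat this as a representative choice rather than the system's algorithm.

```python
import numpy as np

def fourier_features(contour, n_coeffs=10):
    """Translation/rotation/scale-tolerant Fourier descriptors of a closed
    contour, given as an (N, 2) array of boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    Z = np.fft.fft(z)
    Z[0] = 0                                 # drop DC: translation invariance
    mags = np.abs(Z)                         # drop phase: rotation invariance
    mags = mags / (mags[1] + 1e-12)          # normalize: scale invariance
    return mags[1:n_coeffs + 1]              # compact feature vector
```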
This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well over 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps: the first step is a fast, direct color identification of objects in the analyzed video images, and the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
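The fast first pass amounts to a per-pixel hue test. Below is a sketch using HSV as a stand-in for the paper's HSI system; the band limits and saturation/intensity thresholds are hypothetical.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def find_fiber_pixels(rgb_frame, hue_band, min_sat=0.3, min_val=0.2):
    """Fast first-pass screening: flag pixels whose hue lies in the target
    fiber color band. rgb_frame: float RGB array in [0, 1], shape (h, w, 3);
    hue_band: (low, high) hue limits in [0, 1]."""
    hsv = rgb_to_hsv(rgb_frame)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # reject dark or gray pixels, then test the hue band
    return ((hue_band[0] <= h) & (h <= hue_band[1])
            & (s >= min_sat) & (v >= min_val))
```

Connected regions of the resulting mask would then be passed to the slower second-stage analysis described above.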
We present a quantization technique based on the partial differential equation ∂u/∂t = g(||∇(Gσ * u)||) |∇u| div(∇u/|∇u|) + f(u, t), where |∇u| div(∇u/|∇u|) is the second derivative of u in the direction orthogonal to the gradient, Gσ is a linear convolution kernel, g is a decreasing function, and f(s, t) is a Lipschitz function. We assume that as t tends to +∞, f(s, t) tends uniformly to a function f∞(s) which has a finite number of zeros with negative derivative; these zeros act as attractors of the system and represent the quantization levels. The locations of the zero-crossings of f∞(s) depend on the histogram of the initial image u0. We introduce a new energy, based on the Lloyd model, to compute the quantizer levels. We develop a numerical scheme to discretize the above equation and present some experimental results.
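The role of the reaction term can be isolated in a toy version: evolving each pixel under a function whose attracting zeros sit at the quantization levels pulls every gray value to its nearest level. The f chosen below is illustrative; the full model couples it with the curvature term and Lloyd-optimal levels.

```python
import numpy as np

def quantize_by_attractors(u, n_levels=4, n_steps=200, dt=0.2):
    """Evolve u_t = f(u), with f(u) = -sin(2*pi*L*u) / (2*pi*L) on u
    scaled to [0, 1]. f has zeros with negative derivative at k / L
    (attractors = quantization levels) and repelling zeros halfway
    between them, so each pixel settles at its nearest level."""
    u = u.astype(float)
    u = (u - u.min()) / (u.max() - u.min())   # scale to [0, 1]
    L = n_levels - 1
    for _ in range(n_steps):
        u += dt * (-np.sin(2 * np.pi * L * u) / (2 * np.pi * L))
    return u   # values cluster at 0, 1/L, 2/L, ..., 1
```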
The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, their use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations of data-acquisition problems in performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.
Techniques and Systems for Searching, Matching, and Recognition of Forensic Image Databases
The recently proved existence of affine invariant scale spaces for shapes opens new possibilities for shape recognition. While affine invariant shape recognition is easily performed when shapes are complete, partially occluded or incomplete shapes must be recognized by dividing them into intrinsic parts. The characteristic point method, for instance, focuses on configurations of points with maximal curvature of the shape (in a Euclidean invariant framework). Using the affine invariant scale space, we define affine invariant characteristic points and affine invariant parts of a shape. We prove that compatibility relations between scales make the matching of scale spaces feasible, and we show experiments with noisy, affine-distorted and occluded shapes.
Image Processing Algorithms for Extracting, Enhancing, and Identification of Marks, Prints, and Patterns from the Crime Scene
Over the last two years we have been testing FISH in Netherlands police practice. FISH is a system for automatic recognition of handwriting originally developed by the BKA in Germany. Some of the most important classification algorithms used compute a feature vector from a digital binary image showing handwriting without background patterns. This image is currently obtained in three steps: scanning the document (200 dpi, 64 bits), binary segmentation using an interactively selected threshold value, and interactive removal of background patterns. However, the resulting image sometimes shows black structures or white spaces that are not visible in the original document. In this study we investigated the use of color transformations for obtaining better binary images from digital 3 × 8 bit RGB color images. In each distinctive area of the image (the ink lines, the background, and the background patterns) a number of pixels are selected interactively and their RGB values are sorted. Analysis of their distributions showed that separation of ink line and background using R, G and B threshold values can often be improved by performing, prior to the segmentation, a 3D rotation of the RGB values of all pixels. Methods and results are presented.
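The idea of rotating RGB space before thresholding can be sketched with a one-axis projection, which is equivalent to rotating the color space and keeping the first component. The separating direction below (difference of class means over the interactively sampled pixels) is a simple stand-in for the paper's rotation criterion.

```python
import numpy as np

def ink_axis(ink_rgb, background_rgb):
    """Direction in RGB space separating sampled ink pixels from sampled
    background pixels; ink_rgb and background_rgb are (n, 3) arrays of
    interactively selected samples."""
    d = ink_rgb.mean(axis=0) - background_rgb.mean(axis=0)
    return d / np.linalg.norm(d)

def binarize(rgb_image, axis, threshold):
    """Project every pixel onto the separating axis and threshold.
    Projecting onto one rotated axis is equivalent to a 3D rotation of
    the RGB values followed by thresholding the first component."""
    proj = rgb_image.reshape(-1, 3).astype(float) @ axis
    return (proj > threshold).reshape(rgb_image.shape[:2])   # True = ink
```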
Techniques and Systems for Searching, Matching, and Recognition of Forensic Image Databases
Current police tools of identification are flawed. The lineup suffers from being too small and from having no adequate solution to the criterion of similar foils; the mugshot search from too many photos and inadequate procedures for choosing an appropriate subset; the composite from inappropriate methods of selecting facial features. Our new mugshot search provides solutions to some of these problems. Witnesses choose photos similar to the offender, and the computer, using a similarity network, helps witnesses reach the offender faster. Replacing mugshots with foils, we can use the same technique as a lineup, one that is significantly enlarged and leaves the choice of similar foils to the witnesses. When the mugshot search fails, a composite can be composed as a superimposition of the photos most similar to the offender. Several algorithms were tested for implementing the similarity network. They involve both the establishment of a meaningful distance criterion within 'face space' and the use of efficient search strategies. Here, we discuss their benefits and drawbacks. A system comprising the full local album of the Haifa district (more than 10,000 photos) is presently undergoing a field test by the Israeli police. We briefly describe the system and its user interface.
We have constructed a proof-of-principle system called the repository for patterned injury data (RPID) for supporting collaborative forensic medicine. The early RPID prototype is built on ABC/DGS, a graph-server and collaborative hypermedia system built in the UNC Collaboratory. ABC provides collaboration services for work groups via shared artifacts, giving common views of the information and allowing conferencing over the data. A second prototype is underway that has more flexible control of multiperson creation of, and access to, the shared patient data and pathology artifacts. We conclude by describing a planned third prototype, to be built not on ABC, but on a modification of the WWW httpd distribution data server.
Video Investigation: Quality/Degradations Analysis, Enhancement, Frame Fusion, and Accurate Restoration of Evidence Video Tape
We study a signal and image restoration method proposed by Rudin and Osher, namely constrained total variation (TV) minimization. This very powerful method gives excellent results on nearly piecewise constant signals but fails on more complicated data. We propose a very general way of combining convex potentials like the TV, and in this setting we introduce a variant that performs better on piecewise regular, but nonconstant, signals.
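For reference, a minimal explicit gradient-descent scheme for the TV-regularized denoising model (with the usual regularized gradient magnitude) is sketched below; the paper's variant replaces the TV potential itself and is not reproduced here.

```python
import numpy as np

def tv_denoise(f, lam=0.1, n_iter=200, dt=0.2, eps=1e-6):
    """Gradient descent on  min_u  TV(u) + (lam/2) * ||u - f||^2,
    i.e. the evolution  u_t = div(grad u / |grad u|) + lam * (f - u)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, 1) - u                # forward differences
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux * ux + uy * uy + eps)    # regularized |grad u|
        px, py = ux / mag, uy / mag
        # backward-difference divergence of the unit gradient field
        div = px - np.roll(px, 1, 1) + py - np.roll(py, 1, 0)
        u += dt * (div + lam * (f - u))           # curvature + data fidelity
    return u
```

On nearly piecewise constant data this flattens noise while keeping edges; on smooth ramps it produces the staircasing that motivates the variant studied above.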
Accurate Video Scene Measurements and Motion Estimation for Criminal and Accident Investigations
The UK Home Office has held a long-term interest in facial recognition. Work has concentrated on providing the UK police with facilities to improve the use that can be made of the memory of victims and witnesses, rather than on automatically matching images. During the 1970s a psychological coding scheme and a search method were developed by Aberdeen University and the Home Office. These have been incorporated into systems for searching prisoner photographs, both experimentally and operationally. The coding scheme has also been incorporated in a facial likeness composition system. The Home Office is currently implementing a national criminal record system (Phoenix), and work has been conducted to define and demonstrate standards for image-enabled terminals for this application. Users have been consulted to establish suitable picture quality for the purpose, and a study of compression methods is in hand. Recently, increased use has been made by UK courts of expert testimony based upon the measurement of facial images. We are currently working with a group of practitioners to examine and improve the quality of such evidence and to develop a national standard.
Techniques and Systems for Searching, Matching, and Recognition of Forensic Image Databases
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested; the genetic search method does not require the witness to verbalize a description of the target, but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is needed; a genetic search algorithm is being tested for this purpose.
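A generic form of such a genetic search could look like the sketch below, where the witness' similarity judgments play the role of the fitness function. Population size, mutation scale, and the snap-to-nearest-photo step are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

def genetic_search(album, rate_similarity, pop_size=12, n_rounds=20):
    """album: (N, d) array of facial descriptor vectors; rate_similarity(i)
    returns the witness' similarity score in [0, 1] for album photo i."""
    rng = np.random.default_rng()
    shown = rng.choice(len(album), pop_size, replace=False)
    for _ in range(n_rounds):
        fitness = np.array([rate_similarity(i) for i in shown]) + 1e-9
        # selection: parents drawn in proportion to the witness' ratings
        parents = rng.choice(shown, 2 * pop_size, p=fitness / fitness.sum())
        # crossover in descriptor space, plus a little mutation noise
        children = (album[parents[::2]] + album[parents[1::2]]) / 2
        children += 0.05 * rng.standard_normal(children.shape)
        # snap each child back to the nearest real photo in the album
        dists = np.linalg.norm(album[None, :, :] - children[:, None, :], axis=2)
        shown = np.unique(dists.argmin(axis=1))
    return shown   # indices of the photos the search converged on
```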
In suspect identification, witnesses examine photos of known offenders in mugshot albums. The probability of correct identification deteriorates rapidly, however, as the number of mugshots examined increases. Feature approaches, where mugshots are displayed in order of similarity to witness descriptions of suspects, increase identification success by reducing this number. In our computerized feature system, both police raters and witnesses describe the facial features of suspects on rating scales such as nose size: small 1 2 3 4 5 large. Feature users consistently identify more target suspects correctly than do album users. Previous experimental tests have failed, however, to examine the effects on feature system performance of the use of live targets as suspects rather than photos, the use of realistic crime scenarios, the number of police raters per mugshot, and differences among raters. In three experiments we investigated these four issues. The first experiment used photos as target suspects but with multiple distractors, the second tested live suspects, and the third tested live suspects in a realistic crime scenario. The database contained the official mugshots of 1,000 offenders. Across the three experiments, a second and sometimes a third rater per mugshot significantly reduced the number of photos examined; more raters per mugshot did not affect performance further. Raters differed significantly in their effect on system performance. Significantly, our feature system performed well both with target suspects seen live and with live suspects in realistic crime scenarios (performance was comparable to that in previous experiments with photos of target suspects). These results strongly support our contention that feature systems are superior to album systems.