In camera identification using sensor noise, the camera that took a given image can be determined with high certainty
by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal
counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto
an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim.
We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available
to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were
used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called
"triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of
conditions. This test is then extended to the case when none of the images that the attacker used to create the fake
fingerprint are available to the victim but the victim has at least two forged images to analyze. We demonstrate the test's
performance experimentally and investigate its limitations. The conclusion of this study is that planting a sensor
fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
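For orientation, the fingerprint detector this line of work builds on can be sketched in a few lines: average the noise residuals of several images to estimate the camera fingerprint, then test a questioned image's residual against it with normalized correlation. The sketch below uses synthetic data and hypothetical helper names (`estimate_fingerprint`, `normalized_corr`); it illustrates the detection principle only, not the paper's triangle test itself.

```python
import numpy as np

def normalized_corr(a, b):
    """Normalized cross-correlation between two equal-size arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def estimate_fingerprint(residuals):
    """Average noise residuals (image minus denoised image) to
    approximate the camera's PRNU fingerprint."""
    return np.mean(residuals, axis=0)

# Synthetic demo: a 'camera' fingerprint buried in per-image noise.
rng = np.random.default_rng(0)
fingerprint = rng.normal(0, 1, (64, 64))
residuals = [fingerprint + rng.normal(0, 4, (64, 64)) for _ in range(20)]
K_hat = estimate_fingerprint(residuals)

# A residual from the same camera correlates with K_hat; a foreign one does not.
same_cam = fingerprint + rng.normal(0, 4, (64, 64))
other_cam = rng.normal(0, 4, (64, 64))
print(normalized_corr(K_hat, same_cam))   # noticeably positive
print(normalized_corr(K_hat, other_cam))  # near zero
```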
We present a new compressed domain method for tracking objects in airborne videos. In the
proposed scheme, a statistical snake is used for object segmentation in I-frames, and motion
vectors extracted from P-frames are used for tracking the object detected in I-frames. It is shown
that the energy function of the statistical snake can be obtained directly from the compressed
DCT coefficients without the need for full decompression. The number of snake deformation
iterations can also be significantly reduced in the compressed-domain implementation.
Compressed-domain processing thus greatly lowers the computational cost while achieving
performance competitive with that of pixel-domain processing. The proposed method is tested
using several UAV video sequences, and experiments show that the tracking results are
satisfactory.
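A minimal sketch of the compressed-domain idea: the DC coefficient of each 8x8 DCT block is proportional to the block mean, so a low-resolution image for the snake's region energy can be formed without full decompression. The helper names and the Chan-Vese-style region energy below are illustrative stand-ins, not the paper's exact energy function.

```python
import numpy as np

def dc_map(frame, block=8):
    """Approximate the 8x8-block DCT DC coefficients by block means
    (the DC term is proportional to the block mean), yielding a
    low-resolution image without full decompression."""
    h, w = frame.shape
    return frame[:h//block*block, :w//block*block] \
        .reshape(h//block, block, w//block, block).mean(axis=(1, 3))

def region_energy(img, mask):
    """Chan-Vese-style statistical region energy: summed squared
    deviations from the inside/outside means. Lower is better."""
    inside, outside = img[mask], img[~mask]
    e = 0.0
    if inside.size:
        e += ((inside - inside.mean()) ** 2).sum()
    if outside.size:
        e += ((outside - outside.mean()) ** 2).sum()
    return float(e)

# Synthetic frame: bright object on a dark background.
frame = np.zeros((64, 64)); frame[24:40, 24:40] = 200.0
low = dc_map(frame)
mask = low > low.mean()          # crude initial segmentation
print(low.shape, region_energy(low, mask))
```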
KEYWORDS: Video, Video compression, Sensors, Optical sensors, Video surveillance, Digital imaging, Image compression, Internet, Video processing, Digital cameras
Photo-response non-uniformity (PRNU) of digital sensors was recently proposed [1] as a unique identification fingerprint
for digital cameras. The PRNU extracted from a specific image can be used to link it to the digital camera that took the
image. Because digital camcorders use the same imaging sensors, in this paper we extend this technique to the
identification of digital camcorders from video clips. We also investigate whether two video clips came from the
same camcorder and whether two differently transcoded versions of one movie came
from the same camcorder. The identification technique is a joint estimation and detection procedure consisting of two
steps: (1) estimation of PRNUs from video clips using the Maximum Likelihood Estimator and (2) detecting the presence
of the PRNU using normalized cross-correlation. We anticipate that this technology will become an essential tool for fighting piracy of
motion pictures. Experimental results demonstrate the reliability and generality of our approach.
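The ML estimator in step (1) has a closed form in this line of work: K = sum(W_i * I_i) / sum(I_i^2), where I_i are frames and W_i their noise residuals. A minimal sketch on synthetic data follows; the wavelet-based denoising that produces the residuals in practice is abstracted away.

```python
import numpy as np

def ml_prnu(frames, residuals):
    """ML estimate of the PRNU factor K from frames I_i and their noise
    residuals W_i = I_i - denoise(I_i):  K = sum(W_i*I_i) / sum(I_i^2)."""
    num = sum(W * I for I, W in zip(frames, residuals))
    den = sum(I * I for I in frames)
    return num / (den + 1e-12)

# Synthetic demo: frames carry a multiplicative PRNU pattern plus noise.
rng = np.random.default_rng(0)
K = rng.normal(0, 0.02, (32, 32))
frames, residuals = [], []
for _ in range(50):
    I = rng.uniform(50, 200, (32, 32))                     # scene content
    frames.append(I)
    residuals.append(I * K + rng.normal(0, 2, (32, 32)))   # W = I*K + noise
K_hat = ml_prnu(frames, residuals)
print(np.corrcoef(K.ravel(), K_hat.ravel())[0, 1])         # close to 1
```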
KEYWORDS: Cameras, Sensors, Optical filters, Error analysis, Denoising, Image compression, Statistical analysis, Signal detection, Digital imaging, Signal to noise ratio
In this paper, we revisit the problem of digital camera sensor identification using photo-response non-uniformity noise
(PRNU). Considering the identification task as a joint estimation and detection problem, we use a simplified model for
the sensor output and then derive a Maximum Likelihood estimator of the PRNU. The model is also used to design
optimal test statistics for detection of PRNU in a specific image. To estimate unknown shaping factors and determine
the distribution of the test statistics for the image-camera match, we construct a predictor of the test statistics on small
image blocks. This enables us to obtain conservative estimates of false rejection rates for each image under Neyman-
Pearson testing. We also point out a few pitfalls in camera identification using PRNU and ways to overcome them by
preprocessing the estimated PRNU before identification.
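A toy version of the predictor idea: regress the per-block test statistic on simple block features, then use the fitted model to predict the statistic expected under a true image-camera match. The linear model and the two-feature setup below are stand-ins for the paper's predictor, shown only to make the construction concrete.

```python
import numpy as np

def fit_predictor(features, corrs):
    """Least-squares predictor of the per-block correlation from simple
    block features (e.g., mean intensity, a texture measure)."""
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, corrs, rcond=None)
    return coef

def predict(coef, features):
    X = np.column_stack([np.ones(len(features)), features])
    return X @ coef

# Hypothetical demo: brighter, more textured blocks correlate better.
rng = np.random.default_rng(1)
feats = rng.uniform(0, 1, (200, 2))
true_corr = 0.05 + 0.2 * feats[:, 0] + 0.1 * feats[:, 1] + rng.normal(0, 0.01, 200)
coef = fit_predictor(feats, true_corr)
print(predict(coef, feats[:3]))   # predicted statistics for three blocks
print(true_corr[:3])              # observed values to compare against
```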
KEYWORDS: Sensors, Sensor networks, Data compression, Data centers, Data communications, Data fusion, Algorithm development, Data analysis, Head, Quantization
Data compression methods have mostly focused on achieving a desired perceptual quality for multimedia data for a given number of bits. However, there has been interest over the last several decades in compression for communicating data to a remote location where the data is used to compute estimates. This paper traces the perspectives in the research literature on compression-for-estimation. We discuss how these perspectives can all be cast in the following form: the source emits a signal, possibly dependent on some unknown parameter(s); the i-th sensor receives the signal and compresses it for transmission to a central processing center, where it is used to make the estimate(s). The previous perspectives can be grouped as optimizing compression for the purpose of estimating either (i) the source signal or (ii) the source parameter. Early results restricted the encoder to a scalar quantizer designed according to some optimization criterion. Later results considered more general compression structures, although most of them focus on establishing information-theoretic results and bounds. Recent results by the authors use operational rate-distortion methods to develop task-driven compression algorithms that allow trade-offs between the multiple estimation tasks at a given rate.
Data compression ideas can be extended to assess data quality across multiple sensors and to manage the sensor network so as to optimize location accuracy subject to communication constraints. From an unconstrained-resources viewpoint it is desirable to use the complete set of deployed sensors; however, that generally results in an excessive data volume. Selecting a subset of sensors to participate in a sensing task is therefore crucial to satisfying trade-offs between accuracy and time-line requirements. For emitter location it is well known that the geometry between the sensors and the target plays a key role in determining the location accuracy. Furthermore, the deployed sensors differ in data quality. Given these two factors, selecting the optimal subset of sensors is no trivial matter. We attack this problem through a data quality measure based on the Fisher information for a set of sensors, which we optimize via sensor selection and data compression.
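As a sketch of the selection step, one common formulation greedily adds the sensor that most increases the log-determinant of the accumulated Fisher information (a D-optimality criterion). The range-only measurement model below is an assumption made for illustration, not the paper's specific data quality measure.

```python
import numpy as np

def range_fim(sensor, target, sigma):
    """2x2 Fisher information of one range measurement: (1/sigma^2) u u^T,
    with u the unit vector from sensor to target."""
    u = target - sensor
    u = u / np.linalg.norm(u)
    return np.outer(u, u) / sigma**2

def greedy_select(sensors, sigmas, target, k):
    """Greedily pick k sensors maximizing log-det of the total FIM."""
    chosen, J = [], 1e-9 * np.eye(2)   # tiny prior keeps the det well-defined
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(sensors)):
            if i in chosen:
                continue
            gain = np.linalg.slogdet(J + range_fim(sensors[i], target, sigmas[i]))[1]
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        J = J + range_fim(sensors[best], target, sigmas[best])
    return chosen, J

rng = np.random.default_rng(2)
sensors = rng.uniform(-10, 10, (8, 2))
sigmas = rng.uniform(0.5, 2.0, 8)       # per-sensor data quality
sel, J = greedy_select(sensors, sigmas, np.array([0.0, 0.0]), k=3)
print(sel, np.linalg.inv(J).trace())    # chosen subset and CRLB trace
```

Note how the criterion automatically trades off geometry (through the unit vectors) against per-sensor quality (through the sigmas), the two factors highlighted above.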
Target detection and tracking in real-time video are important yet difficult problems in many applications. Numerous detection and tracking techniques have been proposed, typically by imposing constraints on the motion and the image to simplify the problem, depending on the application and environment. This paper focuses on target detection and tracking in airborne videos, where few such simplifications can be made. We recently proposed a combined/switching detection and tracking method based on the combination of a spatio-temporal segmentation and a statistical snake model. This paper improves the statistical snake model by incorporating both edge and region information and enhancing the snake contour deformation. A more complex motion model is used to improve the accuracy of object detection and size classification. Mean-shift is integrated into the proposed combined method to track small point objects and to handle object disappearance and reappearance. Testing results using real UAV videos are provided.
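The mean-shift component can be sketched as iterating a window toward the centroid of a likelihood (weight) image until it stops moving; the histogram back-projection that produces that weight image in a real tracker is omitted here.

```python
import numpy as np

def mean_shift(weight, center, win=7, iters=20):
    """Mean-shift iterations on a weight (likelihood) image: move a
    window to the centroid of the weights inside it until convergence.
    A minimal stand-in for a histogram-based mean-shift tracker."""
    h, w = weight.shape
    cy, cx = center
    for _ in range(iters):
        y0, y1 = max(0, cy - win), min(h, cy + win + 1)
        x0, x1 = max(0, cx - win), min(w, cx + win + 1)
        patch = weight[y0:y1, x0:x1]
        if patch.sum() == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / patch.sum()))
        nx = int(round((xs * patch).sum() / patch.sum()))
        if (ny, nx) == (cy, cx):
            break
        cy, cx = ny, nx
    return cy, cx

# Synthetic: a small bright blob the tracker should lock onto.
img = np.zeros((64, 64)); img[40:46, 50:56] = 1.0
print(mean_shift(img, (32, 32), win=20))  # converges near (42, 52)
```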
Digital hologram compression has recently received increasing attention due to easy acquisition and new applications in three-dimensional information processing. Standard compression algorithms perform poorly on complex-valued holographic data. This paper studies quantization techniques for lossy compression of digital holographic images, comparing three commonly used quantizers. Our observations show that the real and imaginary components of holograms exhibit a Laplacian distribution, while those of their corresponding Fourier transform coefficients exhibit a Gaussian distribution. It is therefore possible to design an optimal quantizer for holographic data compression. To further increase the compression ratio, preprocessing techniques to extract the region of interest are presented, including Fourier plane filtering and statistical snake image segmentation.
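Given the observed Laplacian statistics, an MSE-optimal scalar quantizer can be designed with Lloyd's algorithm, which alternates nearest-codeword assignment with centroid updates. A minimal sketch on synthetic Laplacian samples (the hologram data pipeline itself is not modeled):

```python
import numpy as np

def lloyd_max(samples, levels=8, iters=50):
    """Lloyd's algorithm: alternate nearest-codeword assignment and
    centroid updates to obtain an (approximately) MSE-optimal scalar
    quantizer for the empirical distribution of `samples`."""
    codebook = np.quantile(samples, np.linspace(0.05, 0.95, levels))
    for _ in range(iters):
        idx = np.abs(samples[:, None] - codebook[None, :]).argmin(axis=1)
        for j in range(levels):
            if np.any(idx == j):
                codebook[j] = samples[idx == j].mean()
    return np.sort(codebook)

# Laplacian data (like the hologram components observed in the paper)
# benefits from non-uniform levels clustered near zero.
rng = np.random.default_rng(3)
lap = rng.laplace(0, 1, 50_000)
print(lloyd_max(lap, levels=8))
```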
KEYWORDS: Sensor networks, Data compression, Sensors, Distortion, Error analysis, Data communications, Signal to noise ratio, Algorithm development, Energy efficiency, Image compression
This paper first discusses the need for data compression within sensor networks and argues that data compression is a fundamental tool for achieving trade-offs among three important sensor network parameters: energy efficiency, accuracy, and latency. It then describes how to use Fisher information to design data compression algorithms that address the trade-offs inherent in accomplishing multiple estimation tasks within sensor networks. Results for specific examples demonstrate that such trade-offs can be made using optimization frameworks for the data compression algorithms.
Standard image compression algorithms may not perform well in compressing images for pattern recognition applications, since they aim at retaining image fidelity in terms of perceptual quality rather than preserving spectrally significant information for pattern recognition. New compression algorithms for pattern recognition are therefore investigated, based on modifying the standard algorithms to simultaneously achieve a higher compression ratio and improved pattern recognition performance. This is done by emphasizing middle and high frequencies and discarding low frequencies according to a new distortion measure for compression. The operations of denoising, edge enhancement, and compression can be integrated into the same encoding process in the proposed algorithms. Simulation results show the effectiveness of the proposed compression algorithms.
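A rough sketch of the frequency-emphasis step: suppress the lowest spatial frequencies before coding so the retained bits favor the mid/high-frequency structure that pattern recognition keys on. The FFT-based radial mask below is an illustrative stand-in for the paper's DCT-based modification and its distortion measure.

```python
import numpy as np

def frequency_emphasis(img, low_cut=0.1):
    """Zero out the lowest spatial frequencies, keeping the middle/high
    frequencies that carry discriminative structure. Uses an FFT here;
    the codecs modified in the paper are DCT-based."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h//2:h - h//2, -w//2:w - w//2]
    r = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    F[r < low_cut] = 0                       # discard low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

rng = np.random.default_rng(4)
img = rng.normal(0, 1, (64, 64)) + 10.0      # strong DC/low-frequency content
out = frequency_emphasis(img)
print(img.mean(), out.mean())                # DC removed, detail survives
```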
KEYWORDS: Signal to noise ratio, Radar, Data compression, Distortion, Signal processing, Monte Carlo methods, Error analysis, Radar signal processing, Signal detection, Computer simulations
This paper ties together and extends several recent results we have presented. We previously showed: (i) the usefulness of non-MSE distortion criteria in data compression for time-difference-of-arrival (TDOA) emitter location (SPIE 2001 & 2002), and (ii) the ability to exploit redundancy between radar pulses in a joint TDOA/FDOA (frequency-difference-of-arrival) location scheme (SPIE 2001 & 2002). In (ii) we showed how to compress radar signals by gating around the detected pulses and then putting the pulses into the rows of a matrix which is then compressed through use of the SVD; this approach employed a purely MSE distortion criterion. An open question in this approach was: Is it possible to eliminate some of the pulses from the pulse matrix to increase the compression ratio without significantly sacrificing location accuracy?
We resolve this question by applying our proposed non-MSE distortion measure to the FDOA accuracy and finding the optimal set of pulses to remove from the pulse matrix. The removal of pulses is shown to have negligible impact on the FDOA accuracy but does degrade the TDOA accuracy relative to SVD-based compression without pulse elimination. However, we demonstrate that the SVD method includes an inherent de-noising effect (common in SVD-based signal processing) that improves TDOA accuracy over the case of no compression processing; thus, the overall impact on TDOA/FDOA accuracy is negligible while providing compression ratios on the order of 100:1 for typical radar signals.
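The SVD compression step admits a compact sketch: stack the gated pulses as matrix rows and keep only the top singular components; the truncation both compresses and de-noises, since pulse-to-pulse redundancy concentrates in a few components. The rank-1 synthetic example below is illustrative; real pulse trains would retain more components.

```python
import numpy as np

def svd_compress(pulse_matrix, rank):
    """Keep the top `rank` singular components of a matrix whose rows
    are gated radar pulses; truncation compresses and de-noises."""
    U, s, Vt = np.linalg.svd(pulse_matrix, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Synthetic: 100 noisy repeats of one pulse shape -> near rank one.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 256)
pulse = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)
P = np.tile(pulse, (100, 1)) + rng.normal(0, 0.3, (100, 256))
P1 = svd_compress(P, rank=1)
print(np.abs(P1[0] - pulse).max())   # reconstruction error after truncation
print(np.abs(P[0] - pulse).max())    # raw noisy pulse error (larger)
```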
We show that standard image compression algorithms are not suitable for compressing images in correlation pattern recognition, since they aim at retaining image fidelity in terms of perceptual quality rather than preserving spectrally significant information for pattern recognition. New compression algorithms for pattern recognition are therefore developed, based on modifying the standard algorithms to simultaneously achieve a higher compression ratio and enhanced pattern recognition performance. This is done by emphasizing middle and high frequency components and discarding low frequency components according to a newly developed distortion measure for compression. The operations of denoising, edge enhancement, and compression can be integrated into the encoding process in the proposed algorithms. Simulation results show the effectiveness of the proposed compression algorithms.
The location of an emitter is estimated by intercepting its signal and sharing the data among several platforms to measure the time-difference-of-arrival (TDOA) and the frequency-difference-of-arrival (FDOA). A common compression approach is to use a rate-distortion criterion where distortion is taken to be the mean-square error (MSE) between the original and compressed versions of the signal. However, we show that this MSE-only approach is inappropriate for TDOA/FDOA estimation and then define a more appropriate, non-MSE distortion measure. This measure is based on the fact that in addition to the dependence on MSE, the TDOA accuracy also depends inversely on the signal's RMS (or Gabor) bandwidth and the FDOA accuracy also depends inversely on the signal's RMS (or Gabor) duration.
The form of this new measure must be optimized under the constraint of a specified budget on the total number of bits available for coding. We show that this optimization requires a selection of DFT cells to retain that must be jointly chosen with an appropriate allocation of bits to the selected DFT cells. This joint selection/allocation is a challenging integer optimization problem that still has not been solved. However, we consider three possible sub-optimal approaches and compare their performance.
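For concreteness, the quantities the distortion measure is built on, the RMS (Gabor) duration and bandwidth, together with one simple sub-optimal strategy (retain the largest-magnitude DFT cells), can be sketched as follows. The joint selection/bit-allocation optimization described above is not modeled; bit allocation across the kept cells is omitted.

```python
import numpy as np

def gabor_measures(x, fs=1.0):
    """RMS (Gabor) duration and bandwidth of a signal; TDOA accuracy
    improves with bandwidth, FDOA accuracy with duration, per the
    CRLB dependencies the distortion measure builds on."""
    n = len(x)
    t = np.arange(n) / fs
    p_t = np.abs(x) ** 2; p_t /= p_t.sum()
    dur = np.sqrt((p_t * (t - (p_t * t).sum()) ** 2).sum())
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, 1 / fs)
    p_f = np.abs(X) ** 2; p_f /= p_f.sum()
    bw = np.sqrt((p_f * (f - (p_f * f).sum()) ** 2).sum())
    return dur, bw

def keep_largest_cells(x, keep):
    """One simple sub-optimal approach: retain the `keep` DFT cells of
    largest magnitude and zero the rest."""
    X = np.fft.fft(x)
    X[np.argsort(np.abs(X))[:-keep]] = 0
    return np.fft.ifft(X)

rng = np.random.default_rng(6)
x = np.exp(-((np.arange(512) - 256) / 60.0) ** 2) * np.cos(0.3 * np.arange(512))
print(gabor_measures(x))                                  # before compression
print(gabor_measures(keep_largest_cells(x, keep=32).real))  # after selection
```

Comparing the Gabor measures before and after cell selection shows how a given retention strategy trades TDOA-relevant bandwidth against FDOA-relevant duration, which is exactly what the non-MSE measure scores.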