KEYWORDS: Digital watermarking, Visibility, RGB color model, Sensors, Detection and tracking algorithms, Visual system, Printing, Packaging, CMYK color model, Signal detection
To watermark spot color packaging images, one modulates the available spot color inks to create a watermark signal. By perturbing different combinations of inks, one can change the color direction of the watermark signal. In this paper we describe how to calculate the optimal color direction: the one that embeds the maximum signal while keeping the visibility below a specified acceptable value. The optimal color direction depends on the starting color of the image region, the ink density constraints, and the definition of the watermark signal. After describing the general problem for N spot color inks, we describe two-ink embedding methods and find the optimal direction that maximizes robustness at a given visibility. The optimal color direction is usually a chrominance direction, and the resulting ink perturbations change the luminosity very little. We compare the optimal color embedder to a single-color embedder.
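The direction search described above can be sketched as a grid search over two-ink perturbation directions. Everything numeric here is an illustrative assumption, not a value from the paper: the local Jacobian J (ink-density changes to CIELab changes at the starting color), the visibility weights W, and the detector weighting w are hypothetical.

```python
import numpy as np

# Hypothetical linearization around the starting color: small perturbations of
# the two inks map to CIELab changes via J. All values are illustrative.
J = np.array([[0.2,  0.1],   # dL per unit change of ink 1, ink 2
              [1.0, -0.6],   # da
              [0.3,  0.9]])  # db
W = np.diag([1.0, 0.4, 0.2])    # visibility weights: L changes most visible
w = np.array([0.0, 1.0, 0.5])   # detector responds mainly to chrominance

best = None
for theta in np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False):
    d = np.array([np.cos(theta), np.sin(theta)])  # unit ink direction
    lab = J @ d                                   # resulting Lab change
    vis = np.linalg.norm(W @ lab)                 # visibility per unit ink
    if vis < 1e-12:
        continue
    score = abs(w @ lab) / vis                    # signal per unit visibility
    if best is None or score > best[0]:
        best = (score, d)
```

With a visibility model that penalizes L most heavily, the winning direction tends to perturb the inks so the Lab change is mostly chrominance, consistent with the paper's observation.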
KEYWORDS: Digital watermarking, Visibility, RGB color model, Sensors, Packaging, CMYK color model, Acquisition tracking and pointing, Composites, Contrast sensitivity
Most packaging is printed using spot colors to reduce cost, produce consistent colors, and achieve a wide color gamut on the package. Most watermarking techniques are designed to embed a watermark in cyan, magenta, yellow, and black for printed images, or in red, green, and blue for displayed digital images. Our method addresses the problem of watermarking spot color images: an image containing two or more spot colors is embedded with a watermark in two of those colors at the maximum signal strength that satisfies a user-selectable visibility constraint. The method has been applied to the case of two spot colors, producing images that are more than twice as robust to Gaussian noise as a single-color image embedded with a luminance-only watermark under the same visibility constraint.
A watermark embedding scheme has been developed to insert a watermark with the maximum signal strength for a user-selectable visibility constraint. By altering the watermark strength and direction to meet the visibility constraint, the maximum watermark signal for a particular image is inserted. The method consists of iterative embedding software together with a full-color human visibility model and a watermark signal strength metric.
The iterative approach is based on the intersections between hyperplanes, which represent the visibility and signal models, and the edges of a hyper-volume, which represents the output device's visibility and gamut constraints. The signal metric is based on the specific watermark modulation and detection methods and can be adapted to other modulation approaches. The visibility model takes into account the different contrast sensitivity functions of the human eye to L, a, and b, as well as masking due to image content.
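The iterative embedding loop can be sketched as a bisection on a global strength parameter that is pushed up until the visibility budget is just met. The visibility function below is a plain RMS placeholder, not the paper's full L, a, b contrast-sensitivity and masking model, and the image, pattern, and budget are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, (64, 64))              # stand-in host image
pattern = rng.choice([-1.0, 1.0], size=image.shape)  # watermark pattern

V_MAX = 0.01  # user-selected visibility budget (arbitrary units)

def visibility(delta):
    # Placeholder model: RMS of the change. The paper's model instead weights
    # L, a, and b by their contrast sensitivity functions and accounts for
    # masking by image content.
    return float(np.sqrt(np.mean(delta ** 2)))

# Bisect a global strength s so the embedded signal just meets the budget,
# clipping each trial to the device gamut ([0, 1] here).
lo, hi = 0.0, 1.0
for _ in range(40):
    s = 0.5 * (lo + hi)
    delta = np.clip(image + s * pattern, 0.0, 1.0) - image
    if visibility(delta) < V_MAX:
        lo = s      # still under budget: push strength up
    else:
        hi = s      # over budget: back off
marked = np.clip(image + lo * pattern, 0.0, 1.0)
```

The gamut clipping inside the loop plays the role of the hyper-volume edge constraints: strength that would push a pixel out of gamut contributes no signal, so the search accounts for it directly.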
KEYWORDS: Digital watermarking, Signal to noise ratio, Video, Video compression, Image compression, Sensors, Visibility, Image processing, Error analysis
A persistent challenge with imagery captured from Unmanned Aerial Systems (UAS) is the loss of critical information, such as associated sensor and geospatial data and prioritized routing information (i.e., metadata), required to use the imagery effectively. Often, there is a loss of synchronization between data and imagery. The losses usually arise from the use of separate channels for metadata, or from multiple imagery formats employed in the processing and distribution workflows that do not preserve the data. To contend with these issues and provide another layer of authentication, digital watermarks were inserted at the point of capture within a tactical UAS. Implementation challenges included traditional requirements surrounding image fidelity, performance, payload size, and robustness; application requirements such as power consumption, digital-to-analog conversion, and a fixed-bandwidth downlink; and a standards-based approach to geospatial exploitation through a service-oriented architecture (SOA) for extracting and mapping mission-critical metadata from the video stream. The authors capture the application requirements, the implementation trade-offs, and ultimately an analysis of the selected algorithms. A brief summary of results is provided from multiple test flights on board the SkySeer test UAS in support of Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance applications within Network Centric Warfare and Future Combat Systems doctrine.
In the realm of digital watermarks applied to analog media, publications have mostly focused on applications such as document authentication, security, and linking, where synchronization is merely used to read the payload. In recent papers, we described issues associated with the use of inexpensive cameras to read digital watermarks [5], and we have discussed product development issues associated with the use of watermarks for several applications [3,4,6]. However, the applications presented in these papers have also been focused on the detection and use of the watermark payload as the critical technology. In this paper, we extend those ideas by examining a wider range of analog media, such as objects and surfaces, and by examining machine vision applications where the watermark synchronization method (i.e., synchronizing the watermark orientation so that a payload can be extracted) and the design characteristics of the watermark itself are as critical to the application as recovering the watermark payload. Some examples of machine vision applications that could benefit from digital watermarking technology are autonomous navigation, device and robotic control, assembly and parts handling, and inspection and calibration systems for nondestructive testing and analysis. In this paper, we review some of these applications and show how combining synchronization and payload data can significantly enhance and broaden many machine vision applications.
A high-capacity data-hiding algorithm that lets the user restore the original host image after retrieving the hidden data is presented in this paper. The proposed algorithm can be used for watermarking valuable or sensitive images such as original art works or military and medical images. The proposed algorithm is based on a generalized, reversible, integer transform, which calculates the average and pairwise differences between the elements of a vector extracted from the pixels of the image. The watermark is embedded into a set of carefully selected coefficients by replacing the least significant bit (LSB) of every selected coefficient with a watermark bit. Most of these coefficients are shifted left by one bit before replacing their LSBs. Several conditions are derived and used in selecting the appropriate coefficients to ensure that they remain identifiable after embedding. In addition, the selection of coefficients ensures that the embedding process does not cause any overflow or underflow when the inverse of the transform is computed. To ensure reversibility, the locations of the shifted coefficients and the original LSBs are embedded in the selected coefficients before embedding the desired payload. Simulation results of the algorithm and its performance are also presented and discussed in the paper.
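The simplest two-element instance of such an average/difference transform is the classic difference-expansion pair, which the paper generalizes to longer vectors and to carrying the location map and original LSBs inside the payload. A sketch of the per-pair shift, LSB replacement, and overflow/underflow check for 8-bit pixels:

```python
# Minimal two-element reversible average/difference transform (difference
# expansion). Illustrative sketch only; the paper's algorithm operates on
# longer vectors with additional coefficient-selection conditions.
def embed_pair(x, y, bit):
    l = (x + y) // 2          # integer average (preserved by the transform)
    h = x - y                 # pairwise difference
    h2 = 2 * h + bit          # shift left one bit, payload bit in the LSB
    x2 = l + (h2 + 1) // 2    # inverse transform with the modified difference
    y2 = l - h2 // 2
    if 0 <= x2 <= 255 and 0 <= y2 <= 255:
        return x2, y2, True   # pair is expandable
    return x, y, False        # would overflow/underflow: leave unchanged

def extract_pair(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1              # recover the payload bit from the LSB
    h = h2 >> 1               # undo the shift to restore the difference
    return l + (h + 1) // 2, l - h // 2, bit   # original x, y, and the bit
```

Because the integer average survives the embedding and the shift is undone exactly, extraction restores the original pixel pair bit-for-bit, which is the reversibility property the abstract describes.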
The many recent publications that focus upon watermarking with side information at the embedder emphasize the fact that this side information can be used to improve practical capacity. Many of the proposed algorithms use quantization to carry out the embedding process. Although both powerful and simple, recovering the original quantization levels, and hence the embedded data, can be difficult if the image amplitude is modified. In our paper, we present a method that is similar to the existing class of quantization-based techniques, but is different in the sense that we first apply a projection to the image data that is invariant to a class of amplitude modifications that can be described as order preserving. Watermark reading and embedding are done with respect to the projected data rather than the original. Not surprisingly, by requiring invariance to amplitude modifications we increase our vulnerability to other types of distortions. Uniform quantization of the projected data generally leads to non-uniform quantization of the original data, which in turn can cause greater susceptibility to additive noise. Later in the paper we describe a strategy that results in an effective compromise between invariance to amplitude modification and noise susceptibility.
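As a generic illustration of the quantization-based class the paper starts from, the sketch below embeds one bit by quantizing a value to one of two interleaved lattices (basic quantization index modulation), and separately shows ranks as one example of a statistic invariant to order-preserving amplitude maps. The paper's actual projection is not reproduced here; this is only the flavor of the two ingredients.

```python
import numpy as np

DELTA = 8.0  # quantization step (illustrative)

def qim_embed(value, bit):
    # Embed one bit by quantizing to one of two interleaved lattices:
    # multiples of DELTA carry 0; multiples offset by DELTA/2 carry 1.
    offset = (DELTA / 2.0) * bit
    return DELTA * np.round((value - offset) / DELTA) + offset

def qim_decode(value):
    # The nearest half-step index is even for bit 0, odd for bit 1.
    return int(np.mod(np.round(value / (DELTA / 2.0)), 2))

# One statistic invariant to order-preserving amplitude modifications:
# the ranks of the samples. Any monotone map (gamma correction, contrast
# stretch) leaves the ranks, and anything computed from them, unchanged.
x = np.array([3.0, 7.0, 1.0, 9.0, 4.5])
ranks = np.argsort(np.argsort(x))
stretched = 2.0 * np.power(x / 10.0, 0.7)   # order-preserving map
assert np.array_equal(ranks, np.argsort(np.argsort(stretched)))
```

Decoding plain QIM fails once amplitudes are rescaled, which motivates quantizing an order-invariant projection instead, at the cost of the non-uniform quantization of the original data that the abstract notes.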
KEYWORDS: Digital watermarking, Cameras, Point spread functions, CCD cameras, Sensors, Digital cameras, Signal to noise ratio, CMOS cameras, Image processing, Amplifiers
Many articles covering novel techniques, theoretical studies, attacks, and analyses have been published recently in the field of digital watermarking. In the interest of expanding commercial markets and applications of watermarking, this paper is part of a series of papers from Digimarc on practical issues associated with commercial watermarking applications. In this paper we address several practical issues associated with the use of web cameras for watermark detection. In addition to the obvious issues of resolution and sensitivity, we explore issues related to the tradeoff between gain and integration time to improve sensitivity, and the effects of fixed pattern noise, time-variant noise, and lens and Bayer pattern distortions. Furthermore, the ability to control (or at least determine) camera characteristics, including white balance, interpolation, and gain, has proven to be critical to successful application of watermark readers based on web cameras. These issues and tradeoffs are examined with respect to typical spatial-domain and transform-domain watermarking approaches.
In many ATR implementations, the treatment of peaks, shadows, and regions is handled very differently, both in terms of the extraction process and in terms of the attributes of those features that are used for discrimination. An alternative approach is to derive a generalized filter for each feature that transforms a SAR image into a likelihood image, where the height of each pixel represents the likelihood of that feature. An image relational graph (IRG) is an efficient and useful method by which the image can be segmented and from which features can be extracted. In this paper, we will describe IRG construction and processing techniques for segmentation and feature extraction.