Contrast in image processing is typically scaled using a power function (gamma), whose exponent specifies the amount of physical contrast change. While the exponent is normally constant for the whole image, we observe that such scaling leads to perceptual nonuniformity in the context of high dynamic range (HDR) images. This effect is mostly due to the lower contrast sensitivity of the human eye at low luminance levels. Such levels can be reproduced by an HDR display but cannot be reproduced by standard display technology. We conduct two perceptual experiments on a complex image, contrast scaling and contrast discrimination threshold, and we derive a model that relates changes of physical and perceived contrast at different luminance levels. We use the model to adjust the exponent value so that we obtain better perceptual uniformity of global and local contrast scaling in complex images.
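The power-function scaling described above can be made concrete with a minimal sketch (our illustration, not the paper's code): raising luminance to an exponent g multiplies log-luminance contrast by g, which is why the exponent controls the amount of physical contrast change. The luminance values below are arbitrary example data.

```python
import numpy as np

def scale_contrast(lum, g):
    """Apply a global gamma exponent to a luminance map."""
    return lum ** g

lum = np.array([0.1, 1.0, 10.0, 100.0])  # example luminances (cd/m^2)
g = 1.5
scaled = scale_contrast(lum, g)

# Log contrast between the darkest and brightest patch, before and
# after scaling: the exponent multiplies log contrast exactly.
c_before = np.log10(lum[3] / lum[0])        # 3.0
c_after = np.log10(scaled[3] / scaled[0])   # 4.5 = g * 3.0
```

The paper's contribution is precisely that a single global g produces perceptually nonuniform results on HDR content, motivating a luminance-dependent adjustment of the exponent.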
The advances in high dynamic range (HDR) imaging, especially in display and camera technology, have a significant impact on existing imaging systems. The assumptions of traditional low-dynamic-range imaging, designed for paper print as the major output medium, are ill suited for the range of visual material shown on modern displays. For example, the common assumption that the brightest color in an image is white can hardly be justified for high-contrast LCD displays, not to mention next-generation HDR displays, which can easily create bright highlights and the impression of self-luminous colors. We argue that a high dynamic range representation can encode images regardless of the technology used to create and display them, with accuracy constrained only by the limitations of the human eye and not by a particular output medium. To facilitate research on high dynamic range imaging, we have created a software package (http://pfstools.sourceforge.net/) capable of handling HDR data at all stages of image and video processing. The software package is available as open source under the General Public License and includes solutions for high-quality image acquisition from multiple exposures, a range of tone mapping algorithms, and a visual difference predictor for HDR images. Examples of shell scripts demonstrate how the software can be used for processing single images as well as video sequences.
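One of the package's listed capabilities, acquisition from multiple exposures, can be sketched as follows. This is a generic illustration of the technique, not pfstools' actual algorithm: assuming a linear sensor response, each pixel value is divided by its exposure time to get a radiance estimate, and the estimates are combined with a hat weight that downweights under- and over-exposed pixels.

```python
import numpy as np

def merge_exposures(images, times):
    """Merge linear-response exposures into an HDR radiance map.

    images: sequence of arrays of shape (H, W) with values in [0, 1]
    times:  exposure time (s) of each image
    """
    images = np.asarray(images, dtype=float)      # shape (n, H, W)
    w = 1.0 - np.abs(2.0 * images - 1.0)          # hat weight, peak at mid-gray
    w = np.maximum(w, 1e-6)                       # avoid division by zero
    radiance = images / np.asarray(times)[:, None, None]
    return (w * radiance).sum(axis=0) / w.sum(axis=0)

# Two exposures of the same 1x2 scene, 1/4 s and 1 s (made-up data):
imgs = [np.array([[0.1, 0.9]]), np.array([[0.4, 1.0]])]
hdr = merge_exposures(imgs, [0.25, 1.0])
# Both exposures agree that the first pixel has radiance 0.4;
# the saturated second pixel is recovered mostly from the short exposure.
```

Real merging pipelines additionally recover the camera response curve when the sensor is not linear; that step is omitted here.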
The anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Following the Gestalt theorists, such areas are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of global lightness. In this paper we provide a computational model for the automatic decomposition of HDR images into frameworks. We derive a tone mapping operator that predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.
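The anchoring step within a single framework can be sketched with a deliberate simplification (ours, not the paper's model): lightness is estimated relative to the luminance perceived as white, which we approximate here by the framework's maximum luminance.

```python
import numpy as np

def framework_lightness(lum):
    """Estimate lightness within one framework (region of common illumination).

    Simplified anchoring: the framework's maximum luminance is taken
    as the anchor perceived as white; lightness is log luminance
    relative to that anchor (0 for white, negative for darker patches).
    """
    anchor = lum.max()
    return np.log10(lum / anchor)

# Example luminances (cd/m^2) of patches inside one framework:
region = np.array([5.0, 20.0, 80.0, 100.0])
lightness = framework_lightness(region)  # anchor = 100.0
```

The full theory, and the paper's model, go further: the image is first decomposed into frameworks, each framework is anchored separately, and the per-framework estimates are combined into global lightness.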