The most recent High Dynamic Range (HDR) standard, HDR10+, achieves high picture quality by incorporating dynamic metadata that carry frame-by-frame tone mapping information, whereas most HDR standards apply a single static tone mapping curve across the entire video. Because hand-crafting a best-fitting tone mapping curve for every frame is laborious, there have been attempts to derive the curves automatically from the input images. This paper proposes a neural network framework that generates tone mapping curves on a frame-by-frame basis. Although a number of successful tone mapping operators (TMOs) have been proposed over the years, the evaluation of tone-mapped images remains a challenging topic; we therefore define an objective measure for evaluating tone mapping based on No-Reference Image Quality Assessment (NR-IQA). Experiments show that the framework produces well-fitting tone mapping curves and renders the video more vivid and colorful.
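To make the frame-by-frame idea concrete, the following is a minimal sketch of per-frame tone-curve generation. It is not the paper's actual model: the histogram features, layer sizes, number of control points, and the softplus-plus-cumulative-sum parameterization (used here only to guarantee a monotonic curve) are all illustrative assumptions, and the randomly initialized weights stand in for a trained network.

```python
# Minimal sketch (assumptions throughout, not the paper's architecture):
# a tiny MLP maps a per-frame luminance histogram to monotonic tone-curve
# control points, illustrating frame-by-frame curve generation.
import numpy as np

rng = np.random.default_rng(0)

HIST_BINS = 32    # assumed input feature size: luminance histogram bins
NUM_POINTS = 10   # assumed number of tone-curve control points

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(scale=0.1, size=(HIST_BINS, 64))
b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, NUM_POINTS))
b2 = np.zeros(NUM_POINTS)

def predict_curve(frame_luma):
    """Map one frame's normalized luminance in [0, 1] to a monotonically
    increasing tone curve sampled at NUM_POINTS ordinates."""
    hist, _ = np.histogram(frame_luma, bins=HIST_BINS, range=(0.0, 1.0),
                           density=True)
    h = np.maximum(hist @ W1 + b1, 0.0)      # ReLU hidden layer
    raw = h @ W2 + b2
    # Softplus then cumulative sum guarantees monotonicity; normalizing
    # by the last value makes the curve map [0, 1] onto [0, 1].
    steps = np.logaddexp(0.0, raw)
    curve = np.cumsum(steps)
    return curve / curve[-1]

def tone_map(frame_luma, curve):
    """Apply the predicted curve via piecewise-linear interpolation."""
    xs = np.linspace(0.0, 1.0, len(curve))
    return np.interp(frame_luma, xs, curve)

# Per-frame usage: each frame gets its own curve, mimicking the role of
# dynamic metadata in HDR10+.
frame = rng.random((1080, 1920))             # stand-in normalized luminance
mapped = tone_map(frame, predict_curve(frame))
```

In a full system along the lines the abstract describes, such a network would be optimized against an objective quality measure (e.g., an NR-IQA score) rather than left at random initialization.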