IBC2023 Tech Papers: Content-Dependent power saving model for HDR display devices

Visual analysis of the application of skin detection and changes to luminous adaptation.

IBC2023: This Technical Paper proposes a just noticeable difference model comprising regions of interest detection, luminous adaptation and spatial correction techniques to conserve power.

Abstract

Widespread adoption of High Dynamic Range (HDR) video elevates the in-home experience of video consumption. However, displaying HDR content can escalate the power consumption of a TV to over 300W, a figure that is content-dependent. Moreover, existing solutions adversely impact visual fidelity in their attempt to reduce power consumption. In response, this paper proposes a just noticeable difference (JND) model comprising regions of interest detection, luminous adaptation and spatial correction techniques to conserve power. The model also incorporates skin detection and visual information fidelity based optimization techniques to reduce the loss of visual fidelity. Extensive experiments conducted across multiple modes of LCD and OLED TVs demonstrate significant savings, with average power reductions of 1-18%. The best-performing variant of the proposed JND model achieves an average power reduction of 41W, and up to 69W, in LCD cinema home mode.

Introduction

End-user Quality of Experience (QoE) has increased dramatically in recent years with the introduction of High Dynamic Range (HDR) alongside high-resolution video formats such as Ultra High Definition (UHD) [1] [2]. As such, video streaming providers have shown great interest in delivering HDR services to customers. Consequently, the world has seen a rapid proliferation of advanced display devices, such as televisions (TVs) and mobile phones, supporting the new video technologies that elevate the in-home experience of video consumption. Amid this trend, Light Emitting Diode (LED) display technologies have flourished in the past decade, superseding traditional Liquid Crystal Displays (LCDs) with fluorescent backlights owing to improvements in brightness, visual fidelity and power savings. Organic LED (OLED) displays, a variant of LED technology, are widely used in high-end consumer devices to provide enhanced image quality [3].

HDR imaging delivers an increased range of luminance, a wider colour gamut and higher contrast, showing significant improvements over Standard Dynamic Range (SDR) formats [4] [5]. However, displaying HDR videos comes at an increased cost in power consumption, despite the mitigation that OLED displays provide. Although average power consumption is typically expected to be around 120W, this figure does not hold for HDR videos. Power usage for displaying HDR videos is content-dependent and varies greatly from one video to another; consequently, certain HDR video sequences can escalate power consumption to over 300W.
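To make the content dependence concrete, the following minimal sketch estimates frame-level power for an emissive (OLED-style) panel with a simple linear per-subpixel model; the coefficients and the static overhead term are illustrative assumptions rather than measured values from this work.

```python
import numpy as np

def estimate_oled_frame_power(frame_rgb, w_r=1.0, w_g=0.8, w_b=1.5, p_static=20.0):
    """Rough content-dependent power estimate for an emissive display.

    frame_rgb: float array of shape (H, W, 3), linear-light values in [0, 1].
    w_r, w_g, w_b, p_static: illustrative coefficients (per-channel watts at
    full drive and static panel overhead); real values must be calibrated
    per device and per picture mode.
    """
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    # Emissive-panel power scales roughly with the average drive of each subpixel.
    dynamic = w_r * r.mean() + w_g * g.mean() + w_b * b.mean()
    return p_static + dynamic

# Under this model, a bright HDR frame draws noticeably more than a dark one.
bright = np.full((1080, 1920, 3), 0.9)
dark = np.full((1080, 1920, 3), 0.1)
print(estimate_oled_frame_power(bright), estimate_oled_frame_power(dark))
```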

The video compression literature frequently describes the deployment of Just Noticeable Difference (JND) models [6] [7]. Most often, they are used in applications that exploit perceptual redundancy. In general, a JND model injects distortions into images and videos up to the limit at which the perceptual difference between the original and the distorted content remains unnoticeable. The decomposition of the image plays a major role in a JND model, as it is used to identify perceptual redundancy, and various algorithms have been proposed for this in the past [8] [6] [9]. Importantly, JND models have also been explored for reducing the power required to display images and videos on OLED displays [10] [11].
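As a concrete illustration of the luminous adaptation component common to such JND models, the sketch below derives a per-pixel visibility threshold from local background luminance using the widely cited piecewise luminance-masking curve of Chou and Li; the constants are the standard ones from that formulation, not the model proposed in this paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminance_adaptation_jnd(luma, window=5):
    """Per-pixel JND threshold from local background luminance.

    luma: luminance plane as a float array on an 8-bit scale, i.e. [0, 255].
    Uses the classic piecewise luminance-masking curve (Chou & Li style);
    the constants below are the commonly cited ones, not this paper's model.
    """
    bg = uniform_filter(luma, size=window)           # local background luminance
    dark = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0  # higher threshold in dark regions
    bright = 3.0 / 128.0 * (bg - 127.0) + 3.0        # rises gently in bright regions
    return np.where(bg <= 127.0, dark, bright)

# Dimming each pixel by no more than its threshold should remain imperceptible.
```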

The major drawback of existing solutions is their detrimental effect on visual fidelity. Moreover, the literature has not explored power reduction for HDR video content. To this end, this paper presents a JND model capable of reducing the power required to display HDR video with minimal loss of visual fidelity. The proposed JND model leverages deep-learning-based Regions Of Interest (ROI) detection, luminous adaptation and spatial correction to generate a mask used to contaminate the source video. The major contributions of this research are: 1. an ROI-based JND model that can reduce the power requirements of HDR videos at minimal fidelity loss; and 2. a power and visual quality analysis in the context of HDR videos on OLED and LCD displays. It is anticipated that the proposed technology would operate at the decoder side (i.e., in TVs, set-top boxes and mobile phones), with a separate mode allowing it to be enabled or disabled at the user's discretion.
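To show how these components could fit together, the following sketch dims non-ROI pixels by no more than their JND threshold while leaving ROI pixels (e.g., detected skin) untouched; the ROI detector is left abstract, and the strength and blending choices are assumptions for illustration rather than the paper's actual design.

```python
import numpy as np

def apply_power_saving_mask(frame_rgb, roi_mask, jnd_threshold, strength=1.0):
    """Attenuate non-ROI pixels within a JND bound to save display power.

    frame_rgb:     (H, W, 3) linear-light RGB in [0, 1].
    roi_mask:      (H, W) float mask in [0, 1]; 1 marks regions of interest
                   (e.g., skin) that should not be dimmed.
    jnd_threshold: (H, W) per-pixel visibility threshold on the same 0-255
                   luminance scale used to derive it.
    strength:      fraction of the JND budget to spend (illustrative knob).
    """
    # Convert the JND budget to the [0, 1] range of the frame and protect the ROI.
    budget = (jnd_threshold / 255.0) * strength * (1.0 - roi_mask)
    # Dim each channel by at most the per-pixel budget, clamping at black.
    return np.clip(frame_rgb - budget[..., None], 0.0, 1.0)
```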

The rest of the paper is organized as follows. Firstly, existing works are discussed. Next, the overview of the proposed model and individual components are elaborated in the methodology section. Then, the experimental procedure and the results are reported and discussed before presenting the concluding remarks in the final section.
