IBC2022: This Technical Paper describes the method used to accurately measure the energy usage of a mobile device playing video content.

Abstract

High-end mobile devices now support displaying video in High Dynamic Range (HDR), delivering a significantly enhanced viewing experience over Standard Dynamic Range (SDR). However, playing HDR content may require more energy, shortening device battery life and reducing the overall quality of experience.

We present a new methodology for predicting the real-time energy usage of a mobile device playing video content. Thirty-seven video clips were encoded using 12 different combinations of resolution, frame rate, bit-rate, and dynamic range. An external power monitor was used to measure the voltage and current drawn by the device while playing the content. These measurements were used to train a neural network to predict the energy requirements of playing any clip.
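To make the energy figure concrete: the total energy of a playback session follows from integrating instantaneous power (voltage times current) over the playback period. The sketch below illustrates this computation under the assumption that the power monitor logs time-stamped voltage and current samples; the sampling rate, nominal voltage, and current trace shown are illustrative and not taken from the paper.

```python
import numpy as np

def playback_energy_joules(t_s, volts, amps):
    """Integrate instantaneous power (V * I) over time to obtain energy in joules.

    t_s   : 1-D array of sample timestamps in seconds
    volts : voltage samples (V) from the external power monitor
    amps  : current samples (A) taken at the same instants
    """
    power_w = np.asarray(volts) * np.asarray(amps)    # instantaneous power, watts
    return float(np.trapz(power_w, np.asarray(t_s)))  # trapezoidal integration -> joules

# Illustrative use: 60 s of playback sampled at 1 kHz (all values are made up)
t = np.linspace(0.0, 60.0, 60_000)
v = np.full_like(t, 3.85)                 # nominal battery voltage
i = 0.45 + 0.05 * np.sin(2 * np.pi * t)   # fluctuating current draw
print(f"{playback_energy_joules(t, v, i):.1f} J")
```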

We show that our model can predict the energy usage of videos with an RMS error of 4.88%, a substantial improvement over existing methods based on linear regression, symbolic regression, or trust-region optimisation.

Introduction

Video content producers and platform owners strive to deliver the best possible viewing experience to customers. Steady improvements to hardware have resulted in mobile devices holding the largest share of the video streaming market. One of those improvements is the availability of High Dynamic Range (HDR) video on phones, an imaging technology that represents real-world lighting more faithfully than traditional Low or Standard Dynamic Range (SDR) imaging. Further hardware enhancements, including higher screen resolutions and faster wireless streaming, are outpacing improvements in Lithium-ion battery technology, which is now reaching its theoretical limits. Unfortunately, portable battery technologies that could replace Li-ion are still in development and are not yet ready for commercial deployment.

In this paper, we describe the method we used to accurately measure the energy usage of a mobile device playing video content. These measurements were used to train a neural network to predict the energy usage of a video clip based on its intrinsic properties: the dynamic range, frame rate, resolution, bit-rate, and pixel luminance (brightness) distribution. 
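As an illustration of how such a predictor might be structured, the sketch below trains a small fully connected network on per-clip features of the kind listed above. The feature vector, network size, and use of scikit-learn are assumptions made for the example; the paper's own architecture, training procedure, and data are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical per-clip feature vector: [is_hdr, frame_rate, width, height,
# bitrate_kbps, mean_luma, luma_p90] -- one row per measured encoding.
# Placeholders stand in for the 37 clips x 12 encodings described above.
X_train = np.random.rand(444, 7)
y_train = np.random.rand(444) * 500.0     # placeholder energy labels (joules)

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# Predict the energy needed to play an unseen clip from its properties alone.
clip = np.array([[1, 50.0, 1920, 1080, 8000.0, 0.42, 0.81]])
predicted_energy = model.predict(scaler.transform(clip))[0]
print(f"predicted energy: {predicted_energy:.1f} J")
```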

Our model can be combined with additional information, such as the remaining video duration and the device's battery capacity, to create a system that balances viewing quality against the need to conserve battery. With the pervasive use of mobile devices in modern life, battery life is a concern of which users are increasingly aware. Unfortunately, video streaming consumes a large amount of energy, primarily because of the screen, which is “the dominant power consumer in battery-operated devices”.
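One hypothetical way such a balancing system could use the model is to predict the energy each available representation would need for the remaining playback time and select the highest-quality one that fits within the energy the battery can still supply. Everything below (the function names, the nominal-voltage conversion, the safety reserve) is illustrative and not taken from the paper.

```python
def choose_representation(representations, predict_energy_j, remaining_s,
                          battery_mah_left, nominal_v=3.7, reserve_frac=0.2):
    """Pick the highest-quality representation whose predicted energy for the
    remaining playback fits within the usable battery energy.

    representations  : clip-property dicts, ordered from highest to lowest quality
    predict_energy_j : callable(properties, duration_s) -> predicted joules
    """
    # Convert remaining battery charge (mAh) to energy (J), keeping a reserve:
    # mAh / 1000 -> Ah, * 3600 -> coulombs, * V -> joules.
    usable_j = battery_mah_left / 1000.0 * 3600.0 * nominal_v * (1.0 - reserve_frac)
    for rep in representations:
        if predict_energy_j(rep, remaining_s) <= usable_j:
            return rep
    return representations[-1]  # fall back to the lowest-energy option
```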

The key contribution of this paper is a neural network-based method for predicting the energy usage of a mobile device playing video content, relying only on the properties of the video content itself. This model can be seen on the left-hand side of Figure 1. Previous work in this area has focused solely on SDR content. This paper describes a model that applies to both SDR and HDR content and is more accurate than previous methods.
