Abstract

With the advent of immersive media applications, the requirements for representing and consuming such content have increased dramatically.
The ever-increasing size of media assets, combined with the stringent motion-to-photon latency requirement, makes delivering a high quality of experience for XR streaming services difficult.

The MPEG-I standards aim to facilitate the wide deployment of immersive applications. This paper describes two parts of MPEG-I: part 13, Video Decoding Interface, and part 14, Scene Description for MPEG Media, which address decoder management and virtual scene composition, respectively.

These new parts hide complex media rendering operations and hardware resource management from the application, lowering the barrier for XR applications to become mainstream and making them more accessible to XR experience developers and designers. Both parts are expected to be published by ISO at the end of 2021.

Introduction

Extended Reality (XR) is an umbrella term for immersive experiences that include Virtual Reality (VR), Mixed Reality (MR) and Augmented Reality (AR).

These applications rely on computationally demanding technologies such as 360-degree video, spatial audio and 3D graphics. To combine these media technologies successfully on power-constrained end devices, strict synchronization of the representations and careful resource allocation are paramount.

Currently, this is achieved with tailored, application-specific solutions. In many cases, application developers do not have access to advanced hardware resources, especially when it comes to real-time media in immersive experiences. To enable interoperability and efficient use of device resources, the Moving Picture Experts Group (MPEG) is working on part 13 and part 14 of the ISO/IEC 23090 MPEG Immersive (MPEG-I) standard, both expected to be published by ISO at the end of 2021.