IBC2022: This Technical Paper presents the 5G Edge-XR project. 


There is an increasing appetite for extended reality (XR) enhanced broadcast experiences that offer augmented graphics, volumetric video, and immersive spatial content. However, the computational requirements for such experiences are high, so they can only be enjoyed by those with the most powerful consumer hardware, which limits audience reach. In this paper we present the work of the 5G Edge-XR project, a DCMS-funded collaboration of UK organisations that set out to leverage cloud GPU compute and high-speed 5G connectivity to widen access to next-generation XR experiences across a broad range of use cases.


In this paper we present work undertaken in the 5G Edge-XR project, funded by the UK Department for Digital, Culture, Media and Sport (DCMS) and delivered as a collaboration between BT, Salsa Sound, Condense, The Grid Factory, DanceEast and the University of Bristol. The project explored how high-quality augmented and virtual reality immersive experiences could be broadcast to audiences with consumer AR/VR headsets, smartphones and tablets, using cloud GPU compute to render XR presentations delivered over 5G networks. The goal was to democratise access to XR experiences by reducing the need for heavy processing on end-user devices.

The project has shown that cloud-based GPU clusters can deliver complex, real-time rendered, free-viewpoint XR experiences over high-speed, low-latency 5G networks, with a fidelity that cannot easily be matched by traditional client-side rendering approaches. The use cases that the technology facilitates range from medical data imaging and retail to enhanced and immersive sports broadcast, in-stadium experiences, and dance education and performance. The primary focus of this paper is the sports broadcast use cases, which include AR volumetric boxing, AR MotoGP, AR in-stadium rugby and VR 360º football.

The paper is structured as follows. We begin with a description of the overall architecture of the 5G Edge-XR system, with particular focus on the cloud GPU technologies that underpin the system and its delivery network. We then provide an overview of the key scene-capture technologies, including volumetric video, 360º video, spatial audio and tracking data. Finally, we describe the featured use cases in more detail and present the results and evaluation of the project.
