Abstract

Recent advances in volumetric capture technology have started to enable the creation of high-quality 3D video content for free-viewpoint rendering on VR and AR glasses. These technologies allow highly immersive viewing experiences, which are, however, currently limited to the playback of pre-recorded content.

However, for an immersive experience, interaction with virtual humans plays an important role. In this paper, we address interactive applications of free-viewpoint volumetric video and present a new framework for the creation of interactive volumetric video content of humans as well as real-time rendering and streaming.

Re-animation and alteration of an actor’s performance captured in a volumetric studio become possible through semantic enrichment of the captured data and new hybrid geometry- and video-based animation methods. These methods allow direct animation of the high-quality captured data itself instead of creating an animatable model that merely resembles it. As interactive content places new demands on real-time rendering, we have developed a cloud-based rendering system that reduces the high processing requirements on the client side.

Introduction

Volumetric videos capture 3D spaces with a high degree of accuracy and enable services with six degrees of freedom (6DoF) that give the viewers the freedom to change both their position and orientation in virtual space.
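To make the notion of 6DoF concrete, a viewer pose combines three translational and three rotational degrees of freedom. The following minimal sketch (the class and method names are illustrative, not from the paper) stores the rotation as a unit quaternion and maps a point from the pose's local frame into world space:

```python
import numpy as np

class Pose6DoF:
    """A 6DoF viewer pose: position (3 DoF) plus orientation (3 DoF)."""

    def __init__(self, position, quaternion):
        self.position = np.asarray(position, dtype=float)       # (x, y, z)
        self.quaternion = np.asarray(quaternion, dtype=float)   # (w, x, y, z)

    def rotation_matrix(self):
        # Standard unit-quaternion to 3x3 rotation matrix conversion.
        w, x, y, z = self.quaternion / np.linalg.norm(self.quaternion)
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def transform(self, point):
        # World-space position of a point given in the pose's local frame.
        return self.rotation_matrix() @ np.asarray(point, dtype=float) + self.position
```

A 6DoF renderer re-evaluates such a pose every frame, so the user can both move (change `position`) and look around (change `quaternion`) in the captured scene.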

Volumetric video is expected to enable novel use cases in the entertainment domain (e.g. gaming, sports replay) as well as in cultural heritage, education, health and e-commerce [Schre19a]. While volumetric video enables highly photorealistic free-viewpoint rendering, it is usually limited to the playback of recorded scenes.

Alteration and animation of the content itself are not possible. If interactive and animatable content is envisioned, the classical approach is to build upon traditional hand-crafted computer graphics (CG) models, which lack photorealism. Recent works have proposed to combine the photorealism of volumetric video data with the flexibility of CG models in order to make volumetric video animatable or to create new animations from volumetric video data [Hils20].

In this approach, re-animation and alteration of an actor’s performance captured in a volumetric studio become possible through semantic enrichment of the captured data and new hybrid geometry- and video-based animation methods. These methods allow direct animation of the high-quality captured data itself instead of creating an animatable CG model that merely resembles it.
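Driving a captured mesh directly from an enriched skeleton typically relies on a skinning step. As a minimal sketch of one common building block, linear blend skinning (this is a generic technique, not necessarily the paper's exact method; all names here are illustrative), each vertex is deformed by a weighted blend of bone transforms:

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Deform rest-pose vertices by a weighted blend of bone transforms.

    vertices:        (V, 3) rest-pose vertex positions
    weights:         (V, B) per-vertex skinning weights (rows sum to 1)
    bone_transforms: (B, 4, 4) homogeneous transforms, one per bone
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])                 # (V, 4)
    # Per-vertex blended transform: sum over bones of w[v, b] * T[b].
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)  # (V, 4, 4)
    deformed = np.einsum('vij,vj->vi', blended, homo)             # (V, 4)
    return deformed[:, :3]
```

With identity bone transforms the mesh stays in its rest pose; posing the skeleton (i.e. supplying non-identity transforms) moves the captured vertices directly, without replacing them by a synthetic CG model.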

