The Media Entertainment and Gaming industries are converging, driven by the emergence of Virtual Reality (VR).
VR immerses users in an experience that gives them the illusion of reality, and it opens the door to numerous challenges and opportunities in the definition of next-generation immersive media.
On the one hand, immersive media formats are not standardized and are not interoperable across players and devices.
On the other hand, professional workflows for movie creation, originally designed for the screen experience, cannot be applied directly to these new immersive experiences.
At the same time, VR offers the opportunity to add other multi-modal feedback channels, such as haptics (the sense of touch), for which entirely new workflows need to be defined.
In this paper, we present a workflow to create, distribute and render immersive content, involving immersive videos, haptic feedback, and interactivity. We focus on the obstacles and opportunities, and discuss real production examples.
Virtual Reality (VR), video, and video games are converging: movies and games are getting closer, sharing techniques and content.
Immersive 360° videos have been produced for decades, and today anyone can shoot their own 360° video with a smartphone and a mirror kit.
Similarly, Head-Mounted Displays (HMDs) have existed for a very long time, going back to the video game consoles of the 1990s (e.g., the Nintendo Virtual Boy).
Still, only now are market and technology simultaneously ready, thanks to the emergence of new devices (the Oculus Rift, the Sony Morpheus, the HTC Vive, or the Samsung Gear VR), their quality of experience, their price, and the limits of the TV-screen experience.
Finally, while "4D cinema" (which stimulates additional human senses) has long been available in dedicated theatres, the home haptics market is now growing strongly.
The next-generation immersive medium can be defined as a set of multi-modal immersive experiences.
It includes the stimulation of all human senses: vision, sound, haptics, smell, proprioception, etc. It targets perfect immersion in the experience, making it feel as real as life: a perfect illusion for the user.
Beyond the passive experience of watching a screen, the user is able to look around and to interact with the artistic content.
It is not only about two-hour 360° videos; it may also be a 10-minute immersive artistic experience with adaptations and interactions involving characters or objects.
We are at the very beginning of this evolution: current tools and workflows are adapted either to videos, videos plus VFX, VR, or video games. Mixing everything is a huge challenge.
The current tendency is to start with the audiovisual (AV) evolution, making the experience visually immersive. The first step therefore consists in integrating 360° omnistereo videos into the workflows, together with spatialized audio.
But the following step, which will provide a true Virtual Reality approach to media, involving real-time rendering, adaptation, and interaction, requires deeper changes.
The typical workflow for video streaming comprises three stages: production, distribution and rendering. Making it applicable to next-generation immersive media raises a number of issues.
The following sections describe these three stages and illustrate them with examples. Haptics serves as a running example across the whole workflow.
We then present media produced by Technicolor and MPC that take this new workflow into account. We finally conclude and provide perspectives.
The creation phase encompasses the techniques and tools to produce the new immersive content. We detail below the specific challenges for this phase.
The vast majority of cameras are designed to acquire a planar image, in 2D or 3D. One of the challenges ahead is to move towards new forms of video acquisition.
For example, the capture of spherical images, producing immersive 360° videos, has been known for decades.
Solutions are now available for capturing omnistereo content, and workflows are starting to evolve in that direction.
But to go further towards a true VR approach, we will have to address the still-open problem of free-viewpoint video. This calls for convenient, usable capture setups of reasonable size, compatible with limited processing power.
These solutions should provide content that allows the viewer to move inside the video, around the acquired objects.
This is the true intersection of video games, VR, and videos, where real objects are modeled in 3D with their video texture. Approaches such as light fields could be involved here.
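As a concrete illustration of the spherical representation underlying 360° video, the sketch below shows the standard equirectangular mapping, used by most current 360° players, from a pixel of the stored frame to a unit 3D viewing direction on the sphere. The function name and the axis conventions (y up, z forward) are our own illustrative choices, not part of any particular production workflow.

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit 3D viewing direction.

    u in [0, width) and v in [0, height) are pixel coordinates.
    Longitude spans [-pi, pi] left to right; latitude spans
    [pi/2, -pi/2] top to bottom. Axis convention (illustrative):
    y points up, z points forward at the image centre.
    """
    lon = (u / width - 0.5) * 2.0 * math.pi   # horizontal viewing angle
    lat = (0.5 - v / height) * math.pi        # vertical viewing angle
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

For instance, the centre pixel of a 1920x1080 frame maps to the forward direction, and moving the pixel horizontally rotates the direction around the vertical axis, which is exactly what an HMD renderer inverts when it samples the video for the user's current head orientation.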