The potential of augmented video for immersive experiences is unparalleled, with futuristic innovations mixing realities, but what’s next for adoption?

The arrival of YouTube in 2005 and the launch of Apple’s first iPhone in 2007, with Google’s Android platform entering the market in 2008, kicked off what has become a dynamic and monumental change for the global video industry.


Mixed Reality is becoming a reality

The two-screen catalyst for video has shifted the way content is created, consumed and delivered.

Today the power of immersive technologies to overlay and enhance content experiences is an unparalleled opportunity, but one that has arguably yet to mature.

With the likes of InterDigital deep-diving into some of the most advanced technologies at its immersive innovation lab in Rennes, France, IBC365 saw first-hand the capabilities of CGI video rendering, digital avatars, mixed reality (MR), light field and volumetric video, both as the ultimate user experience and as virtual production opportunities.

The site is home to research, development and innovation labs with teams focussing on immersive experiences, home and imaging science as well as work on wireless, including research into 6G connectivity.

The company develops patents based on industry standards and, since its acquisition of Technicolor’s research unit last February, it has been pushing into the video space, applying AI and computer vision to link the physical and digital worlds.

InterDigital chief communication officer Patrick Van de Wille explains the firm’s expansion into video: “Video is new to us; standards are mature from a licensing standpoint, but video is more fragmented than the wireless side.

“The world has united around standards. However, it’s a little more complicated because we are a transparent organisation and a publicly-traded organisation.”

As a major standards participant on the wireless side, the business has seen great success. Video delivery is similarly dominated by standards, which marries perfectly with the firm’s deep, long-term research into immersive technologies.

It has 350 engineers employed in R&D centres across New York, London, Palo Alto, Rennes, Montreal, San Diego and Philadelphia, with a further 200 employed as support staff, including 40 who work on patents across the converging video and wireless industries.

Today it has four dedicated R&I labs: the home lab, immersive lab, imaging science lab and experimentation lab.

Van de Wille adds: “We think AI is very important and we are spending some resources on that large-scale AI which in theory could be applied to anything.”

Home lab director Laurent Depersin explains that today’s standards cannot cope with broadcast demands: while optimisation has occurred on the wireless side, the physical layer and spectrum efficiency of broadcast standards are now being examined with the likes of 5G.

A member of 17 research consortiums, InterDigital is looking towards visual technologies with the same long-term research approach, including a focus on gaming.

“There is no longer a frontier between video and wireless,” Van de Wille explains, adding that the research it is working on now is intended for use decades from now, in line with the standards being developed.

He adds: “It is about delivering an experience, not a technical standard because people don’t care about how they’re connected.”

InterDigital’s philosophy is straightforward, favouring plain English. After the launch of a section of its website dedicated to licensing transparency, the company tour began with the immersive lab.

Digital and physical worlds collide

The integration of Technicolor’s R&I labs meant that its work on volumetric video encoding, metadata capture, multi-view camera capture, 3D video displays, virtual reality (VR) and holographic displays was showcased during the tour.


Volumetric video: gives viewers a sense of depth, as if looking at the real world.

One stand-out feature was the volumetric video displays, which created an immersive experience by giving viewers a sense of depth, as if looking at the real world. The content was rendered in real time and the developments were certainly impressive.

A unique 16-camera rig, built with 4K cameras, was created to collect images for volumetric video display, and the lab scientists are testing content that they hinted could be showcased at IBC this year.
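InterDigital did not describe the rig’s capture software, but as a rough illustration of what synchronised multi-camera capture involves, here is a minimal sketch using OpenCV, assuming the cameras appear as ordinary video devices (the device indices are placeholders):

```python
import cv2

NUM_CAMERAS = 16  # the rig described uses 16 cameras; indices here are placeholders

# Open each camera as a standard video device (a real rig would typically use
# genlocked machine-vision cameras driven by a vendor SDK, not webcams).
caps = [cv2.VideoCapture(i) for i in range(NUM_CAMERAS)]

def grab_frame_set(caps):
    """Grab from every camera first, then retrieve, to keep the frames as
    close together in time as this simple API allows."""
    for cap in caps:
        cap.grab()
    frames = []
    for cap in caps:
        ok, frame = cap.retrieve()
        frames.append(frame if ok else None)
    return frames

frames = grab_frame_set(caps)
for cap in caps:
    cap.release()
```

Grabbing from every camera before retrieving keeps the views roughly aligned in time, which matters when they are later fused into a single volumetric model.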

Along with the evolution towards MR and immersive, realistic experiences, the future of video will let the user experience a sense of depth and parallax. Scientists explained how this implies evolution across the full value chain: innovative light field capture and content creation tools; new distribution formats and video rendering technologies; and future light field and holographic displays and AR devices.
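As a back-of-the-envelope illustration of what “parallax” means here, this minimal Python sketch (the function, image and numbers are hypothetical) shifts each pixel of a single view according to its depth, mimicking the horizontal parallax a light field display would deliver natively:

```python
import numpy as np

def reproject_horizontal(image, depth, baseline, focal_length):
    """Very rough novel-view synthesis: shift each pixel horizontally by its
    disparity (baseline * focal / depth) to mimic the parallax a viewer would
    see from a slightly different position. Real light field pipelines are far
    more sophisticated; this only illustrates the geometry."""
    h, w = depth.shape
    disparity = (baseline * focal_length / depth).astype(int)  # pixels to shift
    novel = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                novel[y, nx] = image[y, x]  # nearer pixels move further: parallax
    return novel

# Toy example: a 4x4 image where the left half is close and the right half far.
img = np.arange(16, dtype=float).reshape(4, 4)
depth = np.where(np.arange(4) < 2, 1.0, 4.0) * np.ones((4, 4))
print(reproject_horizontal(img, depth, baseline=0.1, focal_length=20))
```

Real pipelines work from many views and fill the occlusions and holes this naive shift leaves behind; the point is only that per-pixel depth is what allows the viewpoint to move.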

Digital avatar

InterDigital has a vision that every human will have a digital double as their avatar for the e-Society, which will become the interface for communication with the emerging world.

The aim is to develop this through the exploration of new technologies, formats, services and standards, with a digital human applied to animation, communication, social interaction, e-health, learning, sports, entertainment and gaming.


Facet: Digital avatar

In a complex cove of screens and wires, the company demonstrated how quickly a person’s head can be captured volumetrically and rendered in real time, taking about 20 minutes to achieve a result similar to an animated character.

InterDigital immersive lab director Gaël Seydoux explains that the aim is to have a digital double to interact on a digital layer of society.

Using AI, the team is training the avatar to gain authentic expressions and, eventually, voices. The project is in its early phase and will likely take another few years of experimentation, with digital interaction occurring as a complement to the physical world.

This is not without privacy concerns; however, the technology is also being looked at as a solution for secondary characters in films, on a different scale.

This was set to be demonstrated at MWC this year before the telecoms showcase was cancelled; it may instead be showcased during IBC2020.

Mega-mixed realities

Foraying into the world of mixed realities, InterDigital is exploring various ways that augmented reality (AR) can be used on the second screen to enhance the viewing experience for both advertisers and content creators.

The demonstration focused on sports but could also be applied to gaming and other forms of entertainment. The team was confident that MR will drive the future, with consolidation occurring across the VR markets and the emergence of a large number of MR applications.


Mixed reality: Living room scanned with mobile XR

Initially, these consolidated MR experiences will be professional and industry-based, moving to public events and locations and eventually making their way into the home.

With the ability to “change society in unprecedented ways,” Seydoux adds, building MR technology using computer graphics and vision will be key to changing the applications and consumption of entertainment content.

Adjacent to the MR demo was a real-time application of AR to create and remove objects in a physical setting. Its working title is ‘diminished reality’: objects such as a lamp or a flower vase can be added, moved and removed with what looked like ease.

The lab scientists explained that it is still in the early stages of development, but they foresee this technology becoming an important element of post-production, as well as of concept building in design.
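InterDigital did not detail how its demo works; as a rough illustration of the “remove an object” half of the idea, here is a minimal sketch using OpenCV inpainting, assuming a binary mask marking the object’s pixels is already available (the file names are placeholders):

```python
import cv2

# Placeholder file names: a photo of the room and a binary mask that is
# white where the object to remove (e.g. the lamp) sits.
room = cv2.imread("living_room.jpg")
mask = cv2.imread("lamp_mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the masked region from the surrounding pixels (radius 5, Telea method).
# Research-grade "diminished reality" relies on 3D scene understanding and
# temporal consistency across frames; single-image inpainting only hints at it.
cleaned = cv2.inpaint(room, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("living_room_without_lamp.jpg", cleaned)
```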

Past to present

The company was founded, with a focus on wireless research, two years before the first mobile phone was made. It has since developed many core technologies that made the mobile revolution possible.

Among its 9,800 patents, the business primarily focusses on standards, but it is not limited to them: ongoing research tracks technologies as standards develop, spanning next-generation codecs and advances including artificial intelligence (AI), light field video distribution, digital human doubles and 6G connectivity.

The sustainability of any technology is now a major parameter alongside performance, whereas historically engineering has been driven by performance alone.

Among its technology innovations and research, the company is focussed on sustainability and on what Van de Wille calls “doing the responsible thing.”

He explains: “Launching in the first half of this year is a comprehensive effort at looking at sustainability on mobile network and as it intersects with mobile video.”

The aim is to foster a discussion around this, with an understanding of the key areas: smaller and leaner AI applications, and a focus on specific tasks rather than more general ones, in a bid to help reduce carbon emissions.