Nvidia thinks it has the fabric for building the metaverse on a larger-than-planet scale.

Avatars, stargates, wormholes, teleportation, holodecks, digital twins. Nvidia CEO and co-founder Jensen Huang uses the language of science fiction to describe updates to his company’s computing technology. They are colourful metaphors for the actions and objects that Nvidia believes will be part of the fabric of the next-generation internet known as the metaverse.

Nvidia: Laid out its vision for the ‘Omniverse’

“That the internet changed everything is surely an understatement,” Huang said in his keynote at Nvidia’s latest GPU Technology Conference.

“The internet is essentially a digital overlay on the world. It’s largely 2D information: text, voice, image and video. That is about to change. We now have the technology to create new 3D virtual worlds or to model our physical world.”

Nvidia is laying the groundwork for virtual worlds that it says will enable the next era of innovation, whether that’s visualisation and layouts for cities, Earth simulation for weather patterns, digital production or synthetic data generation for autonomous vehicles.

Foundational to this enterprise is what Nvidia calls the ‘Omniverse’, best thought of as a set of Lego bricks with which other companies can design and build 3D worlds, and which crucially connect into a series of shared virtual worlds.

“In the metaverse we will jump from one world to another – like we do on the web with hypertext,” Huang said.

“This new world will be much larger than the physical world. We will buy and own 3D things. We will buy and sell homes, furniture, cars, luxury goods and art in this world. Creators will create more things in the virtual world than they do in the physical world.”

And Nvidia is pitching Omniverse as the building material of these virtual worlds.

Jensen Huang: Nvidia co-founder

“Omniverse is very different than a game engine,” Huang said.

“Omniverse is designed to be data centre scale and hopefully, one day, planetary scale. Some worlds will be built for games but a great many will be built by scientists, creators and companies. Virtual worlds will crop up like websites.”

The ability to interchange tools, protocols, formats and services to enable persistent and ubiquitous virtual simulations is perhaps the most important aspect of the entire metaverse project.

“Things created in Adobe can be connected to those in an Autodesk world, enabling designers to collaborate in a shared space.

“Changes by a designer in one world are updated for all connected designers – essentially like a cloud-shared document for 3D design.”

It can do this because Nvidia based Omniverse on Universal Scene Description (USD), originally developed by Pixar and open-sourced by the Disney-owned animation house in 2015.

“We took notice of it and felt that this was the first time there was a chance for a common standard to describe 3D virtual worlds,” explained Rev Lebaredian, VP of simulation technology at Nvidia.

There had been several attempts at this in computer graphics before, going back to VRML, developed by SGI in 1996/7, and its successor, X3D. Both are ISO standards that “never quite worked,” says Lebaredian, “either because they were never quite good enough or were too specific to certain industries, and largely because 3D is just really hard and the field of computer graphics and simulation software is nascent.”

He continued, “If you think about it, the one company in the world that has been building large virtual worlds at high fidelity, with hundreds or thousands of people collaborating using a heterogeneous set of tools, is Pixar.”

Ubiquitous like HTML

USD was Pixar’s fourth iteration of such a standard and it is far from complete. Nvidia likens it to HTML, the ubiquitous language of the web.

“HTML was in its infancy in 1994/5. It’s taken the industry almost two decades to get to HTML5. We believe USD is at an analogous stage: USD 1.0. We need a lot of things to happen to turn it into a dynamic, interactive standard that has everything you need.”
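To make the analogy concrete, here is a minimal sketch of what authoring USD looks like today, using Pixar’s open source Python bindings (the pxr module); the file it writes should open in any USD-aware tool, Omniverse included.

    # Minimal USD authoring sketch using Pixar's open source Python
    # bindings (the pxr module, available via the usd-core pip package).
    from pxr import Usd, UsdGeom

    # A stage is the root container for a USD scene description.
    stage = Usd.Stage.CreateNew("hello_world.usda")

    # Define a transform prim with a sphere beneath it.
    UsdGeom.Xform.Define(stage, "/World")
    sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
    sphere.GetRadiusAttr().Set(2.0)

    # Save the human-readable .usda file to disk.
    stage.GetRootLayer().Save()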

Nvidia has been working with Pixar and others to extend USD to describe more elements of virtual worlds that have never been standardised before. For instance, it worked with Pixar and Apple to create a new way to describe rigid body physics, which was added to the USD standard late last year. It is continuing to do the same for materials, behaviour and character rigs.
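As a rough illustration of what that standardisation looks like, the sketch below applies the rigid body and collision schemas from the UsdPhysics extension to a prim, assuming a recent USD build that ships the UsdPhysics module.

    # Sketch: marking a prim as a simulated rigid body via the UsdPhysics
    # schemas added to the USD standard (requires a build with UsdPhysics).
    from pxr import Usd, UsdGeom, UsdPhysics

    stage = Usd.Stage.CreateNew("physics_scene.usda")
    UsdPhysics.Scene.Define(stage, "/PhysicsScene")  # gravity, timing, etc.

    # A cube that any compliant simulator should treat as a rigid body.
    cube = UsdGeom.Cube.Define(stage, "/World/Cube")
    UsdPhysics.RigidBodyAPI.Apply(cube.GetPrim())
    UsdPhysics.CollisionAPI.Apply(cube.GetPrim())

    stage.GetRootLayer().Save()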

“Our hope is that we’ll end up taking it for granted that, just as you can load any web page in any web browser, you should be able to load any virtual world described by USD in any tool or simulator at some point in the future,” says Lebaredian.

“You can think of USD as HTML for 3D. It helps unify all these different software products. The beauty is that once connected to the Omniverse they also have the ability to interoperate with each other. Most of the applications in the real world don’t really talk well with one another.”

In Huang’s phrasing, USD is the portal into the Omniverse. “USD is essentially a digital wormhole that connects people and computers to Omniverse. USD is to Omniverse what HTML is to websites.”

In a bit more detail, Omniverse is composed of three main parts. The first, Omniverse Nucleus, is a database engine that connects users and enables the interchange of 3D assets and scene descriptions. Once connected, designers doing modelling, shading, layout, animation, VFX or rendering collaborate to create a scene, which is described in USD, the interchange framework.
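In practice, connecting to a Nucleus server looks much like opening any other USD stage. The sketch below assumes Nvidia’s Omniverse client libraries are installed to resolve the omniverse:// URL scheme; the server address and file path are hypothetical placeholders.

    # Hypothetical sketch: opening a shared scene hosted on a Nucleus
    # server. Resolving omniverse:// URLs requires Nvidia's Omniverse
    # client libraries; the server name and path here are placeholders.
    from pxr import Usd, UsdGeom

    stage = Usd.Stage.Open("omniverse://my-nucleus-server/Projects/factory.usd")

    # Edits land in a layer that Nucleus syncs to every connected user,
    # like a cloud-shared document for 3D design.
    UsdGeom.Xform.Define(stage, "/World/RobotArm")
    stage.Save()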

The second part of Omniverse is a composition, rendering and animation engine for simulating the virtual world. An important aspect here is that it is based on real-world physics, in order to accurately simulate how objects, waves, particles and materials behave in a simulated environment.

The third part of Omniverse is Nvidia CloudXR, “a stargate if you will,” says Huang. “You can teleport into Omniverse with VR and AIs can teleport out of Omniverse with AR.”

ILM is among the companies testing Omniverse to unite internal and external tool pipelines from multiple studios, to render final shots in real time and to create massive virtual sets like holodecks.

WPP is another user. The advertising group says it is creating photoreal virtual locations composed from 10 billion points captured in the real world, using that data to create a giant mesh in Omniverse from which to generate virtual locations for its media productions.

“A collaborative platform means multiple artists in multiple locations at multiple points in the pipeline can collaborate,” said Perry Nightingale, SVP, Creative AI at WPP, in a video testimonial. “We can create realtime CGI and a sustainable production. This is the future of film at WPP.”

Omniverse Avatar

Nvidia is also offering a suite of technologies for companies to populate the metaverse with artificially intelligent avatars. It is targeting industries with customer service interactions: restaurant orders, banking transactions, and making personal appointments and reservations.

Its speech recognition is based on Nvidia Riva, a software development kit that recognises speech across multiple languages. Riva is also used to generate human-like speech responses using text-to-speech capabilities. Natural language understanding is handled by Nvidia’s Megatron 530B large language model, which can recognise, understand and generate human language. The suite also contains a recommendation engine to make smarter suggestions, plus perception capabilities enabled by Nvidia Metropolis, a computer vision framework for video analytics.
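Conceptually, one avatar turn chains these services together: speech in, language model in the middle, speech back out. The sketch below is purely illustrative; every helper function is a hypothetical stand-in for the services named above, not Nvidia’s actual APIs.

    # Illustrative avatar interaction loop. Each helper is a hypothetical
    # stand-in for Riva ASR/TTS and the Megatron 530B model -- not real APIs.

    def riva_speech_to_text(audio: bytes) -> str:
        """Stand-in for Riva's multi-language speech recognition."""
        return "a table for two at eight, please"

    def megatron_generate_reply(text: str) -> str:
        """Stand-in for the Megatron 530B large language model."""
        return "Certainly -- booking a table for two at 8pm. Anything else?"

    def riva_text_to_speech(text: str) -> bytes:
        """Stand-in for Riva's human-like text-to-speech."""
        return text.encode("utf-8")  # placeholder for synthesised audio

    def avatar_turn(audio_in: bytes) -> bytes:
        """One conversational turn: ASR, then NLU/generation, then TTS."""
        text = riva_speech_to_text(audio_in)
        reply = megatron_generate_reply(text)
        return riva_text_to_speech(reply)

    print(avatar_turn(b"<caller audio>"))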

“It’s not enough just to interact with an AI [assistant] only with audio and speech,” says Lebaredian. “Humans are really attuned to communicate in different modes – of which speech is one of them. We communicate with our hands, our eyes, our facial expression, in the things we don’t say. It’s critical that the AIs we create that are intended to interact with humans reproduce all of those modes of communication.”

In one demonstration, Huang showed colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself, conversing on topics such as exoplanet formation and climate science.

Since its open beta launch in December, Omniverse has been downloaded by more than 70,000 individual creators. It’s also being used at over 700 companies, including BMW, Lockheed Martin and Sony Pictures Animation.

Digital Twins

Nvidia started out in computer graphics, which at its core is about using maths and computer science to simulate the interaction of light and materials and the physics of objects, particles and waves, and which now extends to simulating intelligence in animation.

Among the new use cases touted for this technology is the creation of digital twins: exact replicas of real objects built within the virtual world. A twin can be a single object like a car, a whole system like a factory, or even an entire city.

“Once you have an accurate representation of the real world inside the virtual and you have a simulator that can accurately simulate how that world behaves you can teleport,” says Richard Kerris, VP of the Omniverse development platform. “If you have a representation of your factory or a model of a city inside the virtual world you can jump to any point in that world and you can feel it and see it as if you were there.”

With this power comes the potential for time travel, virtually of course.

“We can reconstruct what happened in the past and rewind to play back what happened, for example, in a factory or in a traffic incident, based on data from sensors onboard vehicles, street furniture and buildings,” Kerris said.

If you can accurately simulate the past, you can fast-forward as well.

“Not only can we go into the future but we can model alternative futures by changing the parameters inside the virtual world. That allows us to plan much better futures to optimise business and create better applications.”
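The idea is easy to see in miniature: log the state of the twin at each tick, replay the log to rewind, then re-run the simulation from any logged state with changed parameters to branch an alternative future. A toy sketch, not Nvidia’s implementation:

    # Toy digital-twin time travel: record states, rewind by indexing the
    # log, then branch an alternative future by changing a parameter.

    def step(state: float, speed: float) -> float:
        """One simulation tick, e.g. a conveyor position advancing at speed."""
        return state + speed

    history = [0.0]                      # logged sensor states ("the past")
    for _ in range(10):
        history.append(step(history[-1], speed=1.0))

    rewound = history[5]                 # rewind: jump back to tick 5

    branch = rewound                     # alternative future from tick 5
    for _ in range(5):
        branch = step(branch, speed=2.0) # changed parameter

    print(history[-1], branch)           # recorded past vs simulated future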

Ericsson has built a city-scale digital twin to simulate the placement of 5G cell towers in order to optimise signal coverage.

Kerris said, “Generally speaking, virtual worlds have only been created by a small number of experts working in industries like VFX, when constructing worlds for movies like The Lion King or Avengers: Endgame, or by video game developers. These virtual worlds often require thousands of artists to painstakingly create and render them. That type of skill does not exist in any other industry. Everything we’re doing with Omniverse is to try to democratise [it] for everyone to participate in constructing these virtual worlds, so it’s not only a select few artists at big studios that can do this.”

Omniverse enables engineers and designers to build physically accurate digital twins of buildings and products, or create massive, true-to-reality simulation environments for training robots or autonomous vehicles before they’re deployed in the physical world.

The logic, according to Huang, is this headscratcher: “Omniverse digital twins are where we will design, train and continuously monitor cars, warehouses and factories of the future. The virtual factories are the digital twins of their physical replicas… and the physical version is the replica of the digital, since they are produced from the digital original.”

The goal is to reduce the gap between reality and sim so that artificially intelligent robots won’t know which realm they are operating in.

To cap it all, Nvidia says it is building a digital twin of the entire Earth to simulate and predict climate change.
