Nvidia is going all-in on the metaverse. At this year’s SIGGRAPH, an annual conference for computer graphics, Nvidia announced a range of new metaverse initiatives. These include the launch of Omniverse Avatar Cloud Engine (ACE), a “suite of cloud-native AI models and services” to build 3D avatars; new neural graphics SDKs, such as NeuralVDB; plans to evolve Universal Scene Description (USD), an open source file format for 3D scene representation; and various other updates to its Omniverse platform.
This year’s SIGGRAPH will “probably go down in history,” said Rev Lebaredian, vice president of Omniverse and Simulation Technology, in a press briefing. He thinks 2022 will be the biggest inflection point for the computer graphics industry since 1993, when the movie “Jurassic Park” came out. The World Wide Web and Nvidia were both launched in 1993 too, he added.
“What we’re seeing is the start of a new era of the internet,” continued Lebaredian. “One that is generally being called ‘metaverse.’ It’s a 3D overlay of the existing internet — the existing two-dimensional web — and it turns out that the foundational technologies that are necessary to power this new era of the internet are all the things that people at SIGGRAPH have been working towards for decades now.”
Yes indeed, 1993 was a huge inflection point for computing and digital graphics. But will the metaverse — still little more than a concept in 2022 — ever match the impact of the web? It’s impossible to say, because we have only seen the “foundational technologies” (like USD) emerge so far. There is no actual “metaverse” currently — just a lot of talk about building one.
Lebaredian admitted later in the briefing that Nvidia is a “tools company, ultimately,” and so it will be up to others to do the work required to develop the metaverse. That said, the tooling it announced looks promising.
Neural Graphics
Nvidia is primarily known for its graphics processing units (GPUs), but underpinning most of the metaverse announcements today is AI, or what the company refers to as “neural graphics.”
“Graphics is really reinventing itself with AI, leading to significant advances in this field,” said Sanja Fidler, vice president of AI Research, in the briefing.
Nvidia defines neural graphics as “a new field intertwining AI and graphics to create an accelerated graphics pipeline that learns from data.” The pipeline is illustrated in the diagram below, which Fidler said will be used “for simulating and rendering a dynamic virtual world.”
[Diagram: Nvidia neural graphics pipeline]
Developers can access this functionality through various Neural Graphics SDKs, including the newly released NeuralVDB (an update to the industry-standard OpenVDB) and Kaolin Wisp (a PyTorch library that aims to be a framework for neural fields research).
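To make the "neural fields" idea concrete, here is a minimal sketch in plain PyTorch. This is a generic illustration of the concept, not the Kaolin Wisp API: a small multilayer perceptron learns to map continuous 3D coordinates to a value (here, a scalar density), standing in for an explicit voxel grid.

```python
import torch
import torch.nn as nn

# A neural field: an MLP that maps continuous 3D coordinates to a value,
# replacing an explicit grid of stored samples.
class NeuralField(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar density at (x, y, z)
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)

# Toy example: fit the field to the density of a sphere of radius 0.5.
field = NeuralField()
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)
for step in range(200):
    xyz = torch.rand(1024, 3) * 2 - 1  # random points in [-1, 1]^3
    target = (xyz.norm(dim=-1, keepdim=True) < 0.5).float()
    loss = nn.functional.mse_loss(field(xyz), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The appeal over a stored grid is that the scene lives in the network's weights and can be queried at any continuous coordinate, which is the property that frameworks like Kaolin Wisp are built to exploit.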
Fidler explained that 3D content creation will be a critical part of users adopting the metaverse. “We need to put stuff in the virtual world,” she said, “and we’re going to have many, many virtual worlds. Maybe each of us wants to create our own virtual world [and] we want to make them interesting, diverse, realistic content — or maybe even not so realistic, but interesting content.”
The idea, then, is that neural graphics will help content creators produce this "interesting content" for the metaverse.
“We believe that AI is existential for 3D content creation, especially for the metaverse,” said Fidler. “We just don’t have enough experts to populate all the content we need for the metaverse.”
One example application is turning sets of 2D photographs into 3D scenes. While this was already possible, Fidler said that “it was somewhat cumbersome for artists — they had to use many different tools and it was rather slow.” Nvidia’s new “neural reconstruction” process, she said, turns it into “a single unified framework.” She pointed to a tool called Instant NeRF, which does exactly this (NeRF stands for “neural radiance fields”).
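The core of a NeRF is compact enough to sketch: a network predicts a color and density at sampled points along each camera ray, and classic volume rendering composites those samples into a pixel. The snippet below shows that compositing step in PyTorch, following the published NeRF formulation rather than Nvidia's Instant NeRF code.

```python
import torch

def composite(rgb: torch.Tensor, sigma: torch.Tensor, deltas: torch.Tensor) -> torch.Tensor:
    """Alpha-composite per-sample colors along one camera ray.

    rgb:    (n, 3) colors predicted by the network at each sample
    sigma:  (n,)   volume densities predicted by the network
    deltas: (n,)   distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)           # opacity of each sample
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)  # accumulated transparency
    trans = torch.cat([torch.ones(1), trans[:-1]])     # light reaching sample i
    weights = alpha * trans
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)    # final pixel color

# Example: 64 evenly spaced samples along a ray of length 4.
n = 64
pixel = composite(torch.rand(n, 3), torch.rand(n) * 5.0, torch.full((n,), 4.0 / n))
```

Instant NeRF's contribution is largely speed: using a multiresolution hash encoding, it trains such a field in seconds rather than hours.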
Fidler even hinted that neural graphics would make it easy for social media users — and not just artists — to create 3D content based on photographs. Certainly, if the metaverse is to take off like the web did in the early 2000s, then ordinary users will need the ability to “write” as well as “read” 3D content.
Avatar Cloud Engine
Perhaps the most intriguing tool is Omniverse Avatar Cloud Engine (ACE), a new AI-assisted 3D avatar builder that will become available “early next year” — including on “all major cloud services.”
If ordinary people are going to use the metaverse as much as they use the web today, they will need easy ways to create personalized avatars. Not only that, Nvidia claims that ACE will be able to create autonomous “virtual assistants and digital humans.”
“ACE combines many sophisticated AI technologies, allowing developers to create digital assistants that are on a path to pass the Turing test,” said Lebaredian.
[Image: Avatar Cloud Engine (ACE)]
ACE is built on top of Nvidia’s Unified Compute Framework, which will become available to developers in “late 2022.”
Lebaredian added that ACE is “graphics engine agnostic,” meaning it can “connect to virtually any engine you choose to represent the avatars.”
Modern Tools for the Metaverse
In addition to neural graphics and ACE, Nvidia released a new version of Omniverse at SIGGRAPH, which CEO Jensen Huang described as “a USD platform, a toolkit for building metaverse applications, and a compute engine to run virtual worlds.”
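Since USD sits at the center of that pitch, it is worth seeing how small a scene can be. The sketch below uses the open source usd-core Python bindings (installable via pip install usd-core) rather than Omniverse itself; the file and prim names are just examples.

```python
from pxr import Usd, UsdGeom

# Create a new USD stage and author a simple sphere prim into it.
stage = Usd.Stage.CreateNew("hello_metaverse.usda")
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)          # authored attribute: radius = 2
stage.SetDefaultPrim(world.GetPrim())    # what consumers open by default
stage.GetRootLayer().Save()              # writes human-readable .usda text
```

The resulting .usda file is plain text, which is part of USD's appeal as an interchange format: any USD-aware tool, Omniverse included, can open and layer it.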
It remains to be seen how many 3D artists and developers — not to mention consumers — take up Nvidia’s latest collection of 3D graphics and AI tools. But just as the web needed graphical tool companies (like Adobe and Macromedia) to arise in the 1990s, the metaverse will need tool suppliers too. Nvidia is attempting to take up that mantle.