There is an increasing overlap between computer graphics, the metaverse and AI and that overlap is exactly what is on display this week at the SIGGRAPH 2022 conference, where Nvidia is revealing its latest set of software innovations for computer graphics.
Today at the conference, Nvidia announced a series of technology innovations that bring the metaverse and AI closer together than ever before. Among the announcements is the Nvidia Omniverse Avatar Cloud Engine, which is a set of tools and services designed to create AI-powered virtual assistants.
The company also announced multiple technology efforts to advance its computer graphics generation capabilities for metaverse applications. One of the efforts is the new NeuralVDB library, the next generation of the OpenVDB open-source library for sparse volume data. Additionally, Nvidia is working on enhancing the open-source Universal Scene Description (USD) format to help further enable metaverse applications.
“3D content is especially critical for the Metaverse as we need to put stuff in the virtual world,” Sanja Fidler, VP of AI research at Nvidia said in a press briefing. “We believe that AI is existential for 3D content creation, especially for the metaverse.”
Neural graphics AI-powered innovations are key to the future of the metaverse
Computer graphics are no longer simply rendered images; with the concept of neural graphics, they can be much more.
Fidler explained that neural graphics aim to insert AI capabilities into various parts of the graphics pipeline. The addition of AI can accelerate graphics in any number of different types of applications including gaming, digital twins and the metaverse.
At SIGGRAPH 2022, Nvidia announced a pair of new software development kits (SDKs), Kaolin WISP and NeuralVDB, that apply the power of neural graphics to the creation and presentation of animation and 3D objects. Kaolin WISP is an extension to an existing PyTorch machine learning library designed to enable fast 3D deep learning. Fidler explained that Kaolin WISP is all about neural fields, a subset of neural graphics that focuses on 3D image representation and content creation using neural techniques.
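To make the idea of a neural field concrete, here is a minimal sketch in NumPy (this is not Kaolin WISP's actual API): a scene, here a simple 1D signal, is represented as a function of coordinates using random Fourier features and a linear head fitted by least squares. Real neural fields such as NeRF use MLPs trained by gradient descent, but the core idea of "coordinates in, content out" is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "scene" to encode: a simple 1D signal sampled at 256 coordinates.
x = np.linspace(0.0, 1.0, 256)[:, None]
y = np.sin(2 * np.pi * 3 * x).ravel()

# Random Fourier features: project coordinates through random frequencies.
B = rng.normal(scale=10.0, size=(1, 64))
feats = np.concatenate([np.sin(x @ B), np.cos(x @ B)], axis=1)

# Fit the field: find weights so that feats @ w reconstructs the signal.
w, *_ = np.linalg.lstsq(feats, y, rcond=None)
recon = feats @ w
print("max reconstruction error:", float(np.abs(recon - y).max()))
```

The point of the exercise is that the signal is no longer stored as samples but as a compact function of its coordinates, which can then be queried at any resolution.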
While Kaolin WISP is about speed, NeuralVDB is a project designed to help compact 3D images.
“Using machine learning, NeuralVDB introduces really compact neural representations that dramatically reduce the memory footprint, which basically means that we can now represent much higher resolution of 3D data,” Fidler said.
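A back-of-the-envelope comparison shows why a neural representation can shrink the memory footprint so dramatically. The numbers below are illustrative, not NeuralVDB's actual figures: a dense float32 volume is compared with a small, hypothetical coordinate MLP that maps (x, y, z) to a density value.

```python
# Dense grid: 1024 voxels per axis, 4 bytes per float32 value.
res = 1024
dense_bytes = res ** 3 * 4
print(f"dense grid: {dense_bytes / 2**30:.1f} GiB")

# Hypothetical coordinate MLP: 3 -> 64 -> 64 -> 1, float32 weights and biases.
layers = [(3, 64), (64, 64), (64, 1)]
params = sum(n_in * n_out + n_out for n_in, n_out in layers)
mlp_bytes = params * 4
print(f"MLP weights: {mlp_bytes / 1024:.1f} KiB")
```

Even this toy network stores the volume in kilobytes rather than gigabytes, which is why, as Fidler notes, the same memory budget can represent much higher-resolution 3D data.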
According to Rev Lebaredian, VP of Omniverse and simulation technology at Nvidia, one of the most important but probably least understood aspects of creating the metaverse is the core technology needed to represent all the things inside the metaverse.
For Nvidia, that technology is the open-source Universal Scene Description (USD) technology, originally developed by the animation studio Pixar. Nvidia’s Omniverse platform is built on top of USD.
“We’ve been hard at work advancing Universal Scene Description, extending it and making it viable as the core pillar and foundation of the metaverse, so that it will be analogous to the metaverse just like HTML is to the web,” Lebaredian said.
At SIGGRAPH 2022, Nvidia is announcing its plans for extending USD, which includes new compatibility suites with graphics tools as well as tools to help users learn how to use USD.
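For a sense of what USD actually looks like, here is a minimal scene in USD's human-readable .usda form (a generic illustration, not one of Nvidia's Omniverse assets): prims such as transforms and geometry are nested hierarchically, much as elements are nested in HTML.

```
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 0.5
    }
}
```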
Chatbots and virtual avatars are not a new phenomenon, but to date they haven't been particularly lifelike. That could soon change thanks to the new Nvidia Omniverse Avatar Cloud Engine.
Lebaredian explained that the Avatar Cloud Engine is a framework with the core technologies necessary to create avatars. The avatars are AI-driven agents able to converse, perceive and behave inside virtual worlds and the metaverse.
“The metaverse without human-like representations or artificial intelligence inside it will be a very dull and sad place,” Lebaredian said. “We are providing the toolkit of technologies necessary to construct avatars of different forms so that others can take these technologies and build their specific ideas around what avatars should look, feel and behave like in those worlds.”