By Kyle Chua

Nvidia Bringing Generative AI, Neural Graphics Advancements To SIGGRAPH 2023

Updated: Dec 19, 2023

SIGGRAPH 2023, the annual conference devoted to computer graphics, is still a few months out, but Nvidia already appears to be gearing up for the event.

Credit: Reuters

The Santa Clara, California-based tech giant announced it would be sharing around 20 research papers that advance generative artificial intelligence (AI) and neural graphics at the conference, which will be running from 6-10 August in Los Angeles.


The papers the company plans to share are said to help artists bring their ideas to life, covering a range of topics that include "AI models that turn text into personalised images, inverse rendering tools that transform still images into 3D objects, neural physics models that use AI to simulate complex 3D elements with stunning realism and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details".


Nvidia explains that while not all of these technologies are new, the existing versions are still somewhat limited in what they can do, tending to struggle with the specificity and context of their outputs. If a creative director for a toy company, for example, wants to visualise a teddy bear in different situations for an ad campaign, such as a teddy bear tea party, the existing tools might not be able to handle it, or the quality of the output may fall short.


That's why, in one of the papers, Nvidia's researchers came up with a technique that allows generative AI models to use a single sample image to deliver new output in very specific ways. A single one of the company's A100 Tensor Core GPUs can accelerate the process from minutes to about 11 seconds.
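
To make the general idea concrete, here is a minimal, purely illustrative sketch of single-image personalisation, not a reconstruction of Nvidia's actual method: a stand-in for a pretrained denoiser is kept frozen while only a tiny concept embedding is fine-tuned against one example image. Every name, shape and setting below is hypothetical.

```python
# Illustrative sketch only (not Nvidia's method): personalise a frozen, toy
# text-to-image-style denoiser from a single example image by training just
# one small concept embedding. All shapes and settings are hypothetical.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a pretrained diffusion denoiser conditioned on an embedding."""
    def __init__(self, img_dim=3 * 32 * 32, cond_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim),
        )

    def forward(self, noisy_img, cond):
        return self.net(torch.cat([noisy_img, cond], dim=-1))

denoiser = TinyDenoiser()
for p in denoiser.parameters():          # freeze the "pretrained" model
    p.requires_grad_(False)

# The only trainable parameters: a single learned concept embedding.
concept = nn.Parameter(torch.randn(1, 64) * 0.01)
optimizer = torch.optim.Adam([concept], lr=1e-2)

example_image = torch.rand(1, 3 * 32 * 32)   # the single sample image, flattened

for step in range(200):
    noise = torch.randn_like(example_image)
    noisy = example_image + 0.3 * noise       # simplified forward noising
    pred_noise = denoiser(noisy, concept)     # denoiser guesses the added noise
    loss = torch.mean((pred_noise - noise) ** 2)
    optimizer.zero_grad()
    loss.backward()                           # gradients flow only into `concept`
    optimizer.step()

# After training, `concept` can be combined with prompts or other conditions so
# the model reproduces the personalised subject in new, specific settings.
```

Because only the small embedding is optimised while the large model stays frozen, the tuning loop stays lightweight, which is the kind of workload an accelerator such as the A100 can get through in seconds.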

Credit: Nvidia

Another paper, meanwhile, discusses Perfusion, a highly compact model which the company says "takes a handful of concept images to allow users to combine multiple personalised elements – such as a specific teddy bear and teapot – into a single AI-generated visual".


Apart from these, Nvidia is also sharing advancements in 2D-to-3D rendering. The company touted technology that can generate a photorealistic 3D head-and-shoulders model from a single 2D portrait, which has the potential to streamline the creation of avatars for video conferencing and virtual reality applications.


In a separate paper, Nvidia discussed how creators can feed an AI model video from, say, a tennis match, and have it transfer that lifelike motion to a 3D model. The process would allow a virtual tennis player to mimic the movement of an actual player without the need for motion capture technology.

Credit: Nvidia

Nvidia's new research also brings improvements to its neural rendering techniques, tapping AI models for textures, materials and volumes to create more photorealistic visuals in virtual worlds. Its new neural rendering compression techniques make objects look more realistic by bringing out fine detail and keeping them sharp.
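
As a rough illustration of the underlying idea, and not of Nvidia's actual compression scheme, a small coordinate network can stand in for a stored texture: it learns to map texture coordinates to colour values, and its weights become the compressed representation that is sampled at render time. The toy 64x64 texture and the network sizes below are hypothetical.

```python
# Illustrative sketch only (not Nvidia's scheme): replace a stored texture with
# a small network that maps (u, v) texture coordinates to RGB values.
import torch
import torch.nn as nn

# Hypothetical "texture": a procedurally generated 64x64 RGB image.
H = W = 64
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)               # (H*W, 2) uv coords
texture = torch.stack([xs, ys, xs * ys], dim=-1).reshape(-1, 3)     # (H*W, 3) rgb targets

# A tiny coordinate network: its weights are the "compressed" representation.
net = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    pred = net(coords)                        # reconstruct all texels from uv coords
    loss = torch.mean((pred - texture) ** 2)  # how far off the reconstruction is
    opt.zero_grad()
    loss.backward()
    opt.step()

params = sum(p.numel() for p in net.parameters())
print(f"reconstruction MSE: {loss.item():.4f}, parameters: {params}")
```

The appeal of this kind of representation is that it can be queried at arbitrary coordinates, so detail is reconstructed on demand rather than being limited to a fixed texel grid.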


Nvidia collaborated with over a dozen universities in Europe, Israel and the U.S. for the different research papers. The company hopes that after it presents the papers, developers and enterprises can leverage the advancements for their own purposes in fields like architecture, graphic design, game development or film.


The Nvidia Research organisation typically shares its full research papers with the public and uploads the associated code to GitHub for developers to access.

 
  • Nvidia announced it would be sharing around 20 research papers that advance generative artificial intelligence (AI) and neural graphics at the conference, which will be running from 6-10 August in Los Angeles.

  • The papers the company plans to share are said to help artists bring their ideas to life, covering a range of topics that include "AI models that turn text into personalised images" and "inverse rendering tools that transform still images into 3D objects", among others.

  • The company hopes that after it presents the papers, developers and enterprises can leverage the advancements for their own purposes in fields like architecture, graphic design, game development or film.

