If Meta's research into new tech, including the metaverse, continues to advance, you might one day be able to see and interact with photorealistic versions of people in the virtual world.
According to UploadVR, the company formerly known as Facebook can already generate close-to-lifelike 3D models of you using only an iPhone. Previously, doing this required a special studio setup equipped with unique camera and lighting systems.
The breakthrough is part of the social media giant's Codec Avatar system that was first showcased in 2019. The system leverages the capabilities of neural networks, a type of self-learning artificial intelligence system, to generate detailed avatars without sophisticated equipment.
You essentially only need a smartphone with a front-facing depth sensor, such as any modern iPhone, which you use to scan your face while mimicking a wide range of facial expressions.
The scanning process takes around three and a half minutes, but generating the avatar itself can take up to six hours on a rig with four high-end graphics cards. The goal is likely to eventually have the rendering process run on cloud-based graphics units, given that not everyone will have access to powerful computers.
A lot of the heavy lifting in generating the details and nuances of a person's face is said to be handled by the Universal Prior Model "hypernetwork," a special neural network trained to generate person-specific Codec Avatars. The researchers behind the project said they trained the AI by having it scan the faces of over 255 diverse individuals using a studio-like capture rig equipped with 90 cameras.
You can then control the generated avatar in real time using a VR headset with five cameras: one tracking each eye and three capturing the lower face.
While the technology itself isn't new, Meta touts that no other company has generated avatars with this level of detail. The system can still be improved, however, with the researchers noting that it struggles to render glasses and long hair. It's also limited to the head, as the bodies of subjects cannot be scanned.
Then again, there still seems to be time for the developers to iron out these limitations, since the system isn't being released to the public just yet. The idea, supposedly, is for Codec Avatars to be a separate option from the cartoony avatars used in Meta's Horizon Worlds VR platform rather than a replacement. A realistic avatar could, for instance, be used in work meetings, while a cartoony avatar could be used for games.
Rendering detailed avatars can also be demanding for the devices or platforms handling them, so the system's implementation, at least in its early stages, could be limited in some form. But Meta said that at this point in development, it's much closer to its vision for this technology than it was before.