Meta to Label AI-Generated Images from Companies like OpenAI, Google
Meta Platforms will detect and label AI-generated images from other companies' services. The labels will be applied to content posted on Facebook, Instagram, and Threads. Meta already labels content generated using its own AI tools.
In a significant move, Meta Platforms has announced plans to detect and label images generated by other companies' artificial intelligence (AI) services, using a set of invisible markers embedded in the files. The labels are intended to inform users that such images, which often resemble real photographs, are in fact digital creations. Nick Clegg, Meta's President of Global Affairs, shared the news in a blog post.
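Meta has not published the exact markers it will look for, but one widely used scheme is metadata written into the file itself: the IPTC standard, for instance, defines a `DigitalSourceType` value of `trainedAlgorithmicMedia` that image generators can embed. The sketch below is purely illustrative of that general idea, not Meta's actual detection pipeline; it naively scans a file's raw bytes for the IPTC marker string.

```python
# Illustrative sketch only: detects one hypothetical "invisible marker",
# the IPTC DigitalSourceType value for AI-generated media, by scanning
# the file's raw bytes. Real detectors parse the XMP/C2PA metadata
# properly and check cryptographic provenance signatures.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC value for AI-generated media

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC AI-media marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()
```

A production system would also verify that the metadata has not been stripped or forged, which is why industry efforts such as C2PA pair the marker with signed provenance data.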
Meta already labels content generated using its own AI tools. Under the new system, the company will extend this labelling practice to images created on platforms operated by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet's Google.
The announcement offers a glimpse into the emerging standards technology companies are developing to address the risks of generative AI, which can produce fake but highly realistic content in response to simple prompts.
The approach taken by Meta builds upon a template established over the past decade by several companies to coordinate the removal of banned content across platforms. This includes content depicting mass violence and child exploitation.
In an interview with Reuters, Clegg expressed confidence in the companies' ability to reliably label AI-generated images. However, he acknowledged that marking audio and video content is more complex and still under development.
Clegg stated, "Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow."
In the meantime, Meta plans to require individuals to label their own altered audio and video content. Failure to comply with this requirement may result in penalties, although Clegg did not provide specific details about the penalties.
However, Clegg mentioned that there is currently no viable mechanism to label written text generated by AI tools such as ChatGPT. He stated, "That ship has sailed."
It remains unclear whether Meta will apply labels to generative AI content shared on its encrypted messaging service, WhatsApp. A Meta spokesperson declined to comment on this matter.
Recently, Meta's independent oversight board criticised the company's policy on misleadingly doctored videos, suggesting that such content should be labelled rather than removed. Clegg agreed with the board's critique, saying Meta's existing policy is inadequate in an environment where synthetic and hybrid content is becoming more prevalent.
Clegg emphasised that the new labelling partnership demonstrates Meta's commitment to moving in the direction proposed by the oversight board.
Source: REUTERS