
Top AI Companies Join Forces to Protect Children from Deepfake Scandals

  • Leading AI companies, including Meta, Google, Microsoft, and Amazon, have joined forces to combat the spread of AI-generated child sexual abuse material (CSAM) and deepfakes.

  • The companies have committed to integrating "Safety by Design" principles into their technologies, including the development of AI detection technology and excluding CSAM from training datasets.

  • The prevalence of AI-generated CSAM and deepfakes has raised concerns in Congress and society, with reports of teenage girls being victimized by explicit images featuring their own likenesses.

In response to a wave of deepfake scandals and the spread of child sexual abuse material (CSAM), leading artificial intelligence (AI) companies have united to combat this alarming issue.


Thorn, a nonprofit organization dedicated to fighting child sexual abuse, announced on Tuesday that industry giants including Meta, Google, Microsoft, Civitai, Stability AI, Amazon, and OpenAI, among others, have committed to a new set of standards for tackling the problem head-on. The collaborative effort aims to curb the creation and dissemination of AI-generated CSAM.


The prevalence of AI-generated CSAM and deepfakes has become a pressing concern not only within the AI industry but also in Congress and society at large. Disturbing reports have emerged of teenage girls being victimized by AI-generated sexually explicit images featuring their own likenesses.


NBC News previously shed light on the issue, revealing that sexually explicit deepfakes featuring real children's faces appeared among the top search results for terms like "fake nudes" on Microsoft's Bing search engine. Similar results surfaced on Google for searches combining specific female celebrities' names with the term "deepfakes." The outlet also uncovered an ad campaign that ran on Meta platforms in March 2024, promoting a deepfake app that offered to "undress" a picture of a 16-year-old actress.


To address these concerns, the participating companies have pledged to integrate the "Safety by Design" principles into their technologies and products. These principles include the development of technology capable of detecting AI-generated images, as well as ensuring that CSAM is not included in training datasets for AI models. However, implementing these principles may prove challenging, as early iterations of AI detection technology, such as watermarks, are often easily removable.


The need for action is evident: Stanford researchers discovered over 1,000 images of child sexual abuse in a widely used open-source image dataset that had been used to train Stability AI's Stable Diffusion 1.5, one of the most popular AI image generators. Stability AI clarified that the dataset was neither created nor managed by the company and that it was promptly taken down. The company added that its models were trained on a "filtered subset" of the dataset in question, and that efforts were made to mitigate any residual behaviors.


It is worth noting that several of the companies behind the pledge have previously faced scandals involving child sexual abuse material and AI. Their participation in this joint effort, however, signals a collective willingness to confront the issue and put the necessary safeguards in place.

 
Source: NBC News

