
OpenAI Prevents Attempts to Misuse AI for Deceptive Activity

OpenAI has thwarted five attempts by threat actors to misuse its AI models for deceptive activity. The campaigns covered a range of topics and regions in an effort to sway public opinion and influence political outcomes. OpenAI's report raises concerns about the potential misuse of powerful AI technology.

OpenAI logo. Credit: REUTERS

Over the past three months, threat actors from Russia, China, Iran, and Israel used OpenAI's AI models to generate short comments, longer articles in multiple languages, and fabricated names and biographies for social media accounts.


These campaigns covered a range of topics, including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States. OpenAI said these deceptive operations were intended to manipulate public opinion and influence political outcomes.


The report from the San Francisco-based company raises concerns about the potential misuse of its advanced AI technology, which can generate text, images, and audio that closely resemble human-created content. In response to these concerns, OpenAI formed a Safety and Security Committee, led by CEO Sam Altman and other board members, to ensure the responsible development and deployment of its next AI model.


Notably, the deceptive campaigns did not achieve significant audience engagement or reach through the use of OpenAI's services. The company emphasised that these operations combined AI-generated content with manually written text and memes copied from various online platforms.


In a separate development, Meta Platforms disclosed in its quarterly security report that it had found "likely AI-generated" content used deceptively on its Facebook and Instagram platforms. This included comments praising Israel's handling of the conflict in Gaza, posted beneath posts from global news organisations and US lawmakers.

 
  • OpenAI has thwarted five attempts by threat actors to misuse its AI models for deceptive activity.

  • The campaigns targeted various issues and regions, aiming to manipulate public opinion and influence political outcomes.

  • OpenAI's report highlights concerns about the potential misuse of advanced AI technology.


Source: REUTERS

