OpenAI Prevents Attempts to Misuse AI for Deceptive Activity
OpenAI has disrupted five attempts by threat actors to use its AI models for deceptive activity. The campaigns covered a range of topics and regions and sought to sway public opinion and influence political outcomes. OpenAI's report raises concerns about the potential misuse of powerful AI technologies.
Over the past three months, threat actors from Russia, China, Iran, and Israel used OpenAI's AI models to generate short comments, longer articles in multiple languages, and fabricated names and bios for social media accounts.
These campaigns addressed a variety of themes, including Russia's invasion of Ukraine, the conflict in Gaza, the elections in India, and politics in Europe and the United States. OpenAI said these deceptive operations were intended to manipulate public opinion and influence political outcomes.
The report from the San Francisco-based company raises concerns about the possible misuse of its advanced AI technology, which can generate text, images, and audio that closely resemble human-created content. In response to these concerns, OpenAI formed a Safety and Security Committee, led by CEO Sam Altman and other board members, to oversee the responsible development and deployment of its next AI model.
Notably, the deceptive campaigns did not achieve significant audience engagement or reach through their use of OpenAI's tools. The company emphasised that these operations combined AI-generated content with manually produced text and memes gathered from across the internet.
In a separate development, Meta Platforms disclosed in its quarterly security report that it had found "likely AI-generated" content being used deceptively on its Facebook and Instagram platforms. This included comments praising Israel's handling of the Gaza conflict, posted beneath content from global news organisations and US lawmakers.
Source: REUTERS