Meta Exposes AI-Generated Content Campaign Praising Israel
Meta has identified "likely AI-generated" content being used deceptively on Facebook and Instagram. The content includes comments praising Israel's handling of the war in Gaza, placed beneath posts from global news organisations and US lawmakers.
According to the social media giant's quarterly security report, the accounts behind the comments posed as Jewish students, African Americans, and concerned citizens in order to reach audiences in the United States and Canada. Meta attributed the campaign to STOIC, a Tel Aviv-based political marketing firm, which has yet to respond to the allegations.
The discovery is significant because, while Meta has previously found simple AI-generated profile photos in influence operations, this report is the first to reveal the use of text-based generative AI technology, which emerged in late 2022. The technology has raised concern among researchers because it can produce human-like text, images, and audio quickly and cheaply, potentially enabling more effective disinformation campaigns and attempts to sway elections.
During a press call, Meta's security executives said they had removed the Israeli campaign early and did not believe that new AI technology had hampered their ability to disrupt influence networks, which are coordinated efforts to spread specific messages. The executives also said they had not encountered networks using AI-generated imagery of politicians realistic enough to be mistaken for authentic photos.
Meta's quarterly security report identified six covert influence operations disrupted during the first quarter. In addition to the STOIC network, Meta shut down an Iran-based network focused on the Israel-Hamas conflict, although no generative AI was detected in that campaign.
The misuse of new AI technologies, particularly around elections, has become a pressing concern for Meta and other industry giants. Researchers have found instances of image generators from firms such as OpenAI and Microsoft producing images containing voting-related disinformation, despite the companies' policies prohibiting such content. In response, these companies have emphasised digital labelling systems that mark AI-generated material at the point of creation. However, these systems do not work on text, and researchers remain sceptical of their overall effectiveness.
Meta's defences face a critical test in the coming months, with elections in the European Union in early June and in the United States in November.
Source: REUTERS