
Militant Groups Turn to AI for Recruiting and Deepfakes, Experts Warn

  • Writer: tech360.tv
  • 3 hours ago
  • 3 min read

Militant groups are currently experimenting with AI, and the associated risks are widely expected to grow. While these groups may not yet be certain exactly what to do with the technology, AI presents a powerful new tool for extremist organisations. National security experts and spy agencies have warned that AI could be used effectively for recruiting new members, refining cyberattacks, and generating realistic deepfake images.


Credit: Unsplash

The Associated Press has reported that the adoption of AI by groups such as the Islamic State is unsurprising, given that the organisation recognised years ago that social media was a potent instrument for disinformation and recruitment. Supporters of the Islamic State are being urged to integrate AI into their operations, with one user on a website supporting the group noting that one of the best things about AI is its ease of use. The same user encouraged fellow supporters to make the use of AI for recruiting a reality.


AI makes it significantly easier for any adversary to execute its plans. For individual bad actors or loose-knit extremist groups that lack resources, AI enables the large-scale production of deepfakes and propaganda, increasing their influence and widening their reach. A former vulnerability researcher at the National Security Agency said that AI allows even a small group without much money to make an impact. Militant groups began incorporating AI as soon as programs like ChatGPT became easily accessible to the public.


Extremist groups are increasingly using generative AI programs to create realistic-looking video and photos. When paired with social media algorithms, this fake content is used to spread propaganda at a scale previously unimaginable, frighten enemies, confuse opponents, and recruit new believers. For example, violent groups in the Middle East, along with antisemitic hate groups in the United States and elsewhere, spread fake images of the Israel-Hamas war two years ago depicting abandoned, bloodied babies in bombed-out buildings. These images were used to stoke polarisation and outrage while obscuring the conflict’s actual horrors, and to recruit new members. Similarly, after an attack claimed by an Islamic State affiliate killed nearly 140 people at a Russian concert venue last year, AI-crafted propaganda videos seeking new recruits circulated widely on social media and discussion boards.


The Islamic State has also advanced its use of the technology by creating deepfake audio recordings of its own leaders reciting scripture and by using AI to translate messages quickly into multiple languages. In the cyber domain, hackers are already using synthetic video and audio in phishing campaigns, impersonating senior business or government leaders to gain unauthorised access to sensitive networks. AI can also help hackers automate parts of cyberattacks or write malicious code. Furthermore, the Islamic State and Al-Qaeda have held training workshops aimed at helping supporters learn how to use AI.


While militant groups currently view the more sophisticated uses of AI as “aspirational” and lag behind state actors like China, Iran, or Russia, the risks presented by the growing use of cheap, powerful AI are too high to be ignored. Organisations like ISIS are always looking for the next technological advantage to add to their arsenal, just as they quickly adopted Twitter. Of greater concern is the possibility that militant groups might attempt to use AI to overcome their lack of technical expertise in producing chemical or biological weapons. This risk was included in the US Department of Homeland Security’s updated Homeland Threat Assessment released earlier this year. Given the urgent need to address these evolving threats, lawmakers have noted that policies and capabilities must keep pace with the threats of tomorrow.

