AI: A New Weapon for Cybercriminals - Canadian Cybersecurity Official Warns
Canada's top cybersecurity official reveals cybercriminals are exploiting AI to create dangerous malware, craft deceptive phishing emails and spread disinformation. The rise of AI in the wrong hands sparks global concern.
Hackers and propagandists have embraced the power of artificial intelligence (AI) to wreak havoc in cyberspace, according to Sami Khoury, head of the Canadian Centre for Cyber Security.
In a recent interview, Khoury expressed alarm over AI's role in crafting malicious code, spreading misinformation and powering phishing attacks, and issued a stark warning about the urgent need to address cybercriminals' adoption of this emerging technology.
Several cyber watchdog groups have issued similar cautionary reports flagging the risks posed by large language models (LLMs). These advanced language-processing programs can generate realistic dialogue, documents and more, making it easier for criminals to convincingly impersonate individuals or organisations. The European police organisation Europol highlighted the risks posed by models such as OpenAI's ChatGPT, while Britain's National Cyber Security Centre warned that criminals could exploit LLMs to enhance their cyber-attack capabilities.
Some cybersecurity researchers have already reported encountering suspected AI-generated content in the wild. Recently, a former hacker reported discovering an LLM trained on malicious material, which produced a craftily composed three-paragraph email designed to deceive a target into making a fraudulent cash transfer.
While the use of AI to draft malicious code is still in its early stages, Khoury warned that AI models are evolving so rapidly that it is difficult to gauge their full malicious potential before they are released. He emphasised the uncertainty surrounding what this technology will mean in the hands of cybercriminals.
- Canadian cybersecurity head Sami Khoury warns that cybercriminals are leveraging AI for malicious purposes.
- Watchdog groups highlight the risk of advanced language models convincingly impersonating individuals or organisations.
- Suspected AI-generated content is already being observed in the wild, raising global concern.