The rise of artificial intelligence (AI) has sparked a global race, with nations and organizations rushing to harness its immense power. However, amidst this technological revolution, a disturbing trend has emerged: extremist groups, including the Islamic State (IS), are also experimenting with AI, presenting a unique and complex challenge to global security.
Despite their limited resources and expertise, these groups are finding ways to exploit AI for their own purposes.
National security experts and spy agencies have issued stark warnings about the potential consequences. AI, they say, could become a potent tool for extremist organizations to recruit new members, create realistic deepfake images, and refine their cyberattacks.
For these groups, AI is not a futuristic concept but a present-day reality. A user posting on a pro-IS website last month urged others to incorporate AI into their operations, highlighting its ease of use.
IS, with its decentralized structure and violent ideology, has long recognized the power of social media for recruitment and disinformation. So, it's no surprise that they're now testing the waters of AI.
For loosely organized extremist groups or even lone actors, AI offers a powerful means to amplify their reach and influence. With AI, they can generate propaganda and deepfakes at scale, spreading their message further and faster than ever before.
"AI makes it much easier for any adversary to do things," says John Laliberte, a former researcher at the National Security Agency. "Even a small group with limited resources can still make an impact."
These groups have already begun experimenting with generative AI programs such as ChatGPT to create realistic-looking photos and videos. Combined with social media algorithms that reward engagement, this fake content can become a potent tool for recruitment, propaganda, and sowing confusion and fear.
Two years ago, during the Israel-Hamas war, extremist groups spread fake images of bloodied babies in bombed-out buildings, sparking outrage and polarization. These images, created using AI, were used by violent groups in the Middle East and antisemitic hate groups in the U.S. and elsewhere to recruit new members.
A similar incident occurred last year after an attack in Russia claimed by an IS affiliate killed nearly 140 people. AI-crafted propaganda videos circulated widely, seeking new recruits.
IS has also created deepfake audio recordings of its leaders and used AI for quick translations, showcasing their evolving use of this technology.
However, these groups still lag well behind nations like China, Russia, and Iran in AI sophistication. According to Marcus Fowler, a former CIA officer, they view more advanced AI applications as "aspirational."
But the risks are real and growing. Hackers are already using synthetic audio and video for phishing campaigns, and AI can be used to write malicious code and automate cyberattacks.
More worryingly, there's a risk that militant groups may attempt to use AI to develop biological or chemical weapons, compensating for their lack of technical expertise. This threat was highlighted in the Department of Homeland Security's updated Homeland Threat Assessment.
"ISIS was an early adopter of Twitter and found ways to exploit social media," says Fowler. "They're always looking for the next tool to add to their arsenal."
Lawmakers are aware of the urgency and have proposed legislation to address this growing threat. Sen. Mark Warner of Virginia, for example, believes the U.S. must facilitate information sharing between AI developers and security agencies to track how their products are being misused by bad actors.
During a recent hearing, House lawmakers learned that IS and al-Qaida have held training workshops to teach their supporters how to use AI.
A bill passed by the U.S. House last month mandates an annual assessment by homeland security officials of the AI risks posed by extremist groups.
"Guarding against the malicious use of AI is no different from preparing for more conventional attacks," says Rep. August Pfluger, R-Texas, the bill's sponsor. "Our policies and capabilities must evolve to match the threats of tomorrow."
The challenge of countering AI-empowered extremist groups is complex and multifaceted, requiring a coordinated response from governments, security agencies, and technology developers. The question remains: Can we stay one step ahead of these evolving threats?