The digital era has transformed the battlefield for extremist groups. Organisations such as the Islamic State of Iraq and Syria (ISIS), Islamic State Khorasan Province (ISKP), and far-right militant factions are leveraging Artificial Intelligence (AI) to recruit, radicalise, and disseminate propaganda on an unprecedented scale. These groups exploit the convergence of generative AI, social media, and encrypted communication platforms to reach vulnerable individuals worldwide, circumvent monitoring, and tailor extremist messaging with surgical precision.
Extremist recruitment has undergone a profound transformation over the past two decades. Traditional methods like leaflets, sermons, and local networks have given way to online strategies that exploit the global reach of social media platforms such as Facebook, Instagram, X (formerly Twitter), and YouTube. Generative AI (GenAI) enables the creation of highly personalised content, including deepfake videos, images, and interactive chatbots. These tools allow recruiters to deliver messages that resonate with specific psychological and cultural vulnerabilities, making extremist propaganda more persuasive and harder to detect than ever before.
AI provides extremist groups with several advantages: scalability, continuous operation, and the ability to bypass traditional counterterrorism monitoring. Between 18 and 20 May 2024, a group calling itself Khurasan TV circulated AI-generated videos amplifying an attack claimed by ISKP. These videos, shared primarily on the encrypted platform Teleguard, a Swiss-made alternative to Telegram that claims stronger privacy and encryption, used AI presenters in newsroom-like settings and synthesised voices to narrate attacks in Pashto, including the 17 May 2024 Bamyan incident that killed three Spanish tourists.
Research conducted by Afghan Witness (AW), a research and monitoring platform that tracks extremist and militant activity in Afghanistan and the surrounding region, indicates that the AI avatars were generated using Virbo, machine-learning-based software from Wondershare that animates presenters and produces voice narration from scripts; AW matched the avatars used by Khurasan TV to those available in Virbo. Khurasan TV appears to operate independently of other ISKP media outlets, and its content remains relatively isolated. Other pro-ISKP groups, such as Hisad, have posted similar videos on Rocket.Chat, though internal debates persist over whether using AI-generated avatars and voices is religiously permissible.
ISIS has similarly leveraged GenAI to create multilingual propaganda, mass-produce content at low cost, and micro-target potential recruits. For example, a pro-ISIS user employed AI-based speech recognition and translation tools to render Arabic propaganda against Russia into other languages after the Crocus City Hall attack in Moscow, enabling wider international dissemination without human translators. ISIS media outlets including Halummu (which produces English-language content), Al-Azaim (targeting Central Asian audiences), and Al-Murhafat and Al-Adiyat (Arabic content producers) have adopted GenAI for visual and linguistic propaganda, while also using it to profile individuals likely to be susceptible to radicalisation. The United Nations has expressed concern over the use of AI for micro-profiling and micro-targeting in extremist recruitment campaigns. Voice-generation software such as ElevenLabs.io has been used to create AI-narrated propaganda, while image and video generation platforms, including Stable Diffusion and various online video synthesis tools, allow extremists to produce realistic visuals at low cost.
These capabilities allow ISIS and ISKP to manipulate social media algorithms, bypass content moderation, and flood platforms with coordinated messages. In February 2024, researchers highlighted that ISIS used AI-generated blurred images of its flags and weapons to evade filters on Instagram and Facebook, complicating counterterrorism monitoring. The groups also exploit encrypted and niche platforms such as Teleguard, Telegram, and decentralised messaging applications, moving away from mainstream networks as detection improves.
While AI provides significant advantages, extremist organisations also recognise its inherent risks. In April 2025, ISIS's Qimam Electronic Foundation (QEF) released a bilingual English and Arabic guide titled "A Guide to AI Tools and Their Dangers", detailing privacy and security concerns associated with AI-enabled data collection. Generative AI tools, if mishandled, can expose operational tactics or recruits' data to the authorities. Legal loopholes and regulatory gaps in major countries further enable these groups to exploit GenAI for propaganda and recruitment. A 2021 UN report highlighted AI's potential for misuse in online terrorism while noting its limited use in counter-radicalisation initiatives. Democratic governments face a delicate balance between implementing surveillance and respecting citizens' privacy and freedom of expression.
The integration of AI into extremist operations is not limited to text or imagery. Interactive AI chatbots have been used to engage potential recruits personally, simulating conversations with senior militants. Platforms like Replika and Character.ai have been cited in experiments where chatbots mimicked extremist leaders to radicalise users. Virtual reality environments and immersive platforms provide spaces to plan attacks, train recruits, and simulate operations remotely, adding another layer of sophistication to radicalisation strategies. Generative AI also allows for advanced translation, enabling propaganda to reach diverse linguistic audiences globally.
The convergence of AI, social media, and extremist ideology marks a new phase in recruitment and propaganda. Groups like ISKP and ISIS are leveraging these tools to globalise their reach, personalise messaging, and maintain operational relevance despite territorial losses. Deepfake videos and AI-generated media make content appear authentic, persuasive, and psychologically compelling. Far-right extremists have similarly weaponised AI to enhance recruitment and spread ideology.
Countering AI-enabled extremist content requires a multifaceted approach. Automated detection systems can flag content across platforms, while algorithmic counter-narratives redirect vulnerable users. Legal frameworks must balance privacy, freedom of expression, and security. Collaboration among tech companies, governments, academia, and civil society is crucial to share intelligence and develop effective responses. Digital literacy programmes can empower users to recognise and resist radicalising content.
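One concrete building block behind such automated detection, used in practice through the hash-sharing database operated by the Global Internet Forum to Counter Terrorism (GIFCT), is matching uploads against digests of already-identified terrorist content. The Python sketch below is a deliberately minimal illustration of that idea, not any platform's actual pipeline: the KNOWN_HASHES set, the uploads directory, and the function names are hypothetical stand-ins, and real deployments use perceptual rather than exact hashing.

```python
import hashlib
from pathlib import Path

# Hypothetical local mirror of an industry hash-sharing list of known
# extremist media. The single entry is a placeholder, not a real digest.
KNOWN_HASHES: set[str] = {"0" * 64}

def file_digest(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_flag(path: Path) -> bool:
    """Queue an upload for human review if it matches known content.

    Exact hashing only catches byte-identical copies; production systems
    use perceptual hashes (e.g. PhotoDNA or PDQ) so that cropped, blurred,
    or re-encoded variants of flagged imagery still match.
    """
    return file_digest(path) in KNOWN_HASHES

if __name__ == "__main__":
    # "uploads" is a hypothetical directory of incoming user media.
    for upload in Path("uploads").glob("*"):
        if should_flag(upload):
            print(f"queued for review: {upload}")
```

A virtue of this design is that platforms exchange only hashes, never the underlying material, which limits both redistribution risk and privacy exposure. Its weakness, as the February 2024 example of deliberately blurred ISIS imagery shows, is that small perturbations defeat exact matching, which is why perceptual hashing and classifier-based detection are layered on top.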
The digital battlefield has expanded, and extremist groups' adoption of AI presents a global security threat. Policymakers, tech companies, and researchers must act swiftly to mitigate risks while harnessing AI responsibly for counterterrorism, education, and public safety. If unchecked, AI will remain a force multiplier for extremist organisations, amplifying radicalisation and undermining stability worldwide.