In a concerning development, militant groups in Pakistan—including the banned Tehreek-e-Taliban Pakistan (TTP), Islamic State-Khorasan (ISKP), and other extremist factions—are reportedly turning to artificial intelligence (AI) tools to enhance their recruitment strategies and propaganda campaigns. Security analysts and law enforcement officials describe this trend as a dangerous evolution in their operational tactics, warning that these groups are leveraging advanced AI software to expand their reach, evade detection, and target vulnerable communities with alarming precision.
Experts note that these networks have shifted from relying on leaflets, amateur videos, and sporadic internet posts to employing meticulously calculated strategies involving AI-generated content, digitally manipulated footage, and falsified documents. For instance, in 2023, a deepfake video purportedly showing a prominent religious leader endorsing sectarian violence went viral in Khyber Pakhtunkhwa, inciting clashes before it was debunked. The accessibility of AI-driven tools like ChatGPT, voice cloning software, deepfake generators, and AI-powered video synthesis—many available for minimal fees or open-source—has enabled extremist groups to produce highly realistic propaganda at an unprecedented scale.
These organisations use AI-driven chat programs to craft sophisticated messages that mimic human conversation, generating responses in multiple tones and styles to manipulate audiences. They issue fabricated statements in the names of political leaders, clerics, and community figures, and create convincing deepfake images and videos depicting staged atrocities, rallies, and endorsements from influential personalities.
Automated chatbots engage potential recruits in one-on-one encrypted conversations on platforms like WhatsApp, Telegram, and Signal, systematically radicalising individuals through AI-generated discussions. Voice cloning technology replicates the voices of high-profile figures, spreading misinformation and inciting unrest. AI-powered translation and text-to-speech software allow these groups to produce extremist content in multiple languages, including Pashto, Urdu, Balochi, Sindhi, Dari, English and Arabic, enabling them to infiltrate diverse communities across Pakistan and beyond. Subtitles and voice-overs are used to repurpose foreign extremist content for local audiences, adapting narratives to resonate with specific ideological and regional concerns.
Social media platforms like Facebook, X (formerly Twitter), YouTube, and TikTok, alongside encrypted messaging services such as Telegram and WhatsApp, have amplified the reach of extremist propaganda. AI-enhanced algorithms show users similar content after they engage with extremist material, reinforcing echo chambers and accelerating radicalisation. In some cases, AI-generated recruitment ads masquerading as religious study groups or political activism pages lure unsuspecting users into extremist networks with tailored messaging. The ease of generating, refining, and distributing sophisticated disinformation has created unprecedented challenges for counter-terrorism efforts, as the volume, speed, and realism of AI-powered propaganda outpace traditional monitoring capabilities.
Recent social media posts reveal that counter-terrorism operations in regions like Khyber Pakhtunkhwa have uncovered electronic devices containing AI-based data-sifting programs, advanced chatbots, and editing software capable of producing highly realistic audio and video. Examples of these manipulations appear in short clips shared via X and Telegram, as well as in statements and press releases distributed in Pashto and Urdu. Factions linked to the TTP and the Baloch Liberation Army (BLA) demonstrate a similar pattern of AI-enabled outreach. Unverified reports claim that over 10,000 pieces of AI-generated extremist content were removed from social media platforms in the past year, a 40% increase from the previous year, though publicly accessible data confirming these figures is scarce.
Specialists emphasise AI’s central role in identifying and recruiting new members, particularly among young people facing unemployment, social isolation, political dissatisfaction, or a desire for adventure. By analysing users’ online behaviour, these groups systematically expose targets to extremist messages, leading them to adopt violent ideologies. When individuals express interest or sympathy, recruiters redirect them to private channels for deeper indoctrination. Encryption complicates investigations, as a small number of administrators can oversee extensive discussions while concealing their identities. The widespread use of virtual private networks (VPNs) in Pakistan—often employed to bypass social media restrictions—has inadvertently facilitated contact between militant factions and otherwise unreachable audiences.
AI-driven disinformation has exacerbated the problem, with doctored videos of political or religious figures making inflammatory remarks spreading rapidly online. Although eventually exposed as forgeries, these clips sow seeds of mistrust. Experts and community leaders in Khyber Pakhtunkhwa warn that once such material gains traction, reversing its impact is challenging, especially in regions where poverty, low education levels, and longstanding grievances create fertile ground for sensationalist claims.
Analysts highlight the increasing quality and volume of AI-generated content, compounded by recommendation algorithms that push similar material to users who engage with extremist posts. Coupled with AI-driven targeting campaigns by militant groups, this cycle can deepen radical convictions quickly. Counter-terrorism strategists argue that merely blocking accounts or removing videos is insufficient; instead, they advocate a combination of policy reforms, enhanced technological solutions, public awareness campaigns, and social initiatives such as job creation and improved education to address the root causes of radicalisation, though few believe any single approach will fully resolve the issue.
Pakistan’s reliance on large-scale military offensives and field intelligence has faced criticism as extremist networks shift their operations online. Critics argue that security agencies’ disproportionate focus on conventional tactics has allowed online extremist activity to flourish unchecked. Analysts warn that without significant policy reforms—including enhanced digital surveillance, stronger technical expertise, and updated legislation—militant factions will continue to recruit, spread propaganda, and operate with minimal resistance. While some advocate advanced digital monitoring, community outreach, and open dialogue as the most effective response, little progress has been made. Many experts stress that unless the government acknowledges the gravity of this online threat and allocates resources to preventative measures rather than conventional counter-terrorism tactics, it risks losing critical ground to groups adept at leveraging AI for extremist purposes.
Author
Muhammad Irfan is a researcher at the University of Limerick, specialising in counter-narratives, de-radicalisation strategies, artificial intelligence and media communication.