
On 12 February, the world marks the International Day for the Prevention of Violent Extremism as and when Conducive to Terrorism. It is a timely reminder that prevention is not a slogan. It is sustained work to strengthen the social, civic, and institutional conditions that reduce the appeal of violence. In 2026, that work faces a new reality: AI-generated influence at scale.
Violent extremist movements have always adapted to the dominant communication tools of their era. What changes with today’s AI, especially generative AI, is how dramatically it lowers the effort required to influence audiences. Content that once required teams, time, and resources can now be produced and iterated rapidly: text, imagery, audio, video, translations, and stylistic variants tailored for specific audiences. Recent security reporting, including Europol’s TE-SAT 2025, has warned that AI is accelerating the production and spread of extremist propaganda and hate by making content creation faster, cheaper, and more frequent.
Yet the most important lesson from prevention research remains clear: people do not radicalize because of content alone. Exposure matters, but it is usually amplified by deeper drivers such as social isolation, identity crises, perceived injustice, exclusion, and offline networks. Europol’s assessments of youth radicalization, for example, underline how vulnerability often intersects with disconnection, mental health pressures, and digital dependency. In other words, AI can intensify the information environment, but it does not replace the human and societal conditions that make extremist narratives feel compelling.
In my research, published in Pathways to Violent Extremism in Lebanon, I found empirical evidence that reinforces this point. I interviewed 156 individuals who had joined violent extremist groups or participated in terrorist acts, 114 inside Romieh and Jezzine prisons and 42 outside prison following amnesty. The findings complicate the assumption that online content alone creates radicalization. While 75 percent of respondents reported using the internet, and 91 percent of those used it daily, only 4 percent reported being recruited online. Recruitment remained intensely social and local. Thirty-four percent were introduced through a friend or family member, and 60 percent had a family member already in the group. What is striking is the pace of mobilization. Forty-five percent joined after the first encounter, suggesting that when grievances, identity threats, and broken trust accumulate, mobilization can become rapid once an opening appears. In this context, generative AI matters less as a cause and more as an accelerant. It can make false messages appear credible, increase the number of first contacts, and amplify identity-based fears.
This is where prevention has to be both modern and disciplined. A debate that swings between panic, the fear that AI will cause mass radicalization, and techno-fix fantasy, the idea that AI will detect and stop extremism on its own, is not useful. AI can be misused. It can also support prevention, if it is governed ethically and embedded within non-coercive, rights-respecting approaches.
A practical and promising direction is AI-assisted analysis of narratives and ecosystems rather than the targeting of individuals. Tools that monitor broad online discourse can help identify emerging grievance frames, spikes in dehumanizing language, or coordinated amplification patterns. These are signals that allow educators, civil society, municipalities, and policymakers to respond early with credible engagement, community support, and prevention messaging. The prevention value lies in understanding information dynamics and strengthening protective environments, while avoiding automated suspicion, profiling, or surveillance practices that erode trust and ultimately undermine prevention.
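To make this concrete, here is a deliberately minimal sketch of what narrative-level monitoring can look like. It is purely illustrative: the lexicon, the synthetic data, and the spike threshold are hypothetical assumptions invented for this example, not a description of any deployed system, and it operates only on aggregated discourse, never on individuals.

```python
# Illustrative sketch: flag week-over-week spikes in the share of aggregated
# public posts containing dehumanizing language. Lexicon, data, and threshold
# are hypothetical placeholders; the unit of analysis is the discourse stream.
import statistics

DEHUMANIZING_LEXICON = {"vermin", "parasites", "invaders"}  # hypothetical terms

def weekly_rate(posts: list[str]) -> float:
    """Share of one week's posts containing at least one lexicon term."""
    hits = sum(
        any(term in post.lower() for term in DEHUMANIZING_LEXICON)
        for post in posts
    )
    return hits / max(len(posts), 1)

def detect_spike(history: list[float], current: float,
                 z_threshold: float = 2.0) -> bool:
    """Flag the current week if its rate exceeds the historical mean by
    z_threshold standard deviations (a simple rolling-baseline heuristic)."""
    if len(history) < 4:  # require a minimal baseline before flagging
        return False
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # avoid division by zero
    return (current - mean) / stdev > z_threshold

# Toy usage with synthetic baseline rates and a small batch of posts
past_weeks = [0.010, 0.012, 0.009, 0.011, 0.010]
this_week = weekly_rate(["they are vermin", "ordinary post", "another post"])
if detect_spike(past_weeks, this_week):
    print("Elevated dehumanizing language detected; alert prevention partners.")
```

The design choice that matters is visible even in this toy: the signal is a property of the conversation as a whole, not of any person in it, which is what keeps such tooling on the prevention side of the line drawn above.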
Lebanon’s experience is relevant here. Eight years ago, the Lebanese government endorsed a National Strategy for Preventing Violent Extremism designed around a whole-of-government and whole-of-society approach. Its architecture recognizes that prevention is not only security work. It extends across dialogue and conflict prevention, governance, the rule of law and human rights, balanced development, gender equality, education and skills, economic opportunity, strategic communications and social media, and youth empowerment. In practice, this framework aims to tackle the conditions extremist actors exploit, including marginalization, mistrust, and fragility, while creating space for prevention through schools, communities, and institutions.
In the age of generative AI, that strategy remains directionally right, but it needs a sharper emphasis on societal resilience to manipulated media. Prevention now must include media and information literacy, rapid verification habits, educator training, and institutional readiness to respond when manipulated content inflames tensions or accelerates polarization. It also requires safeguarding trust through transparency, human oversight, and clear red lines that protect privacy and civic space, because prevention cannot succeed if communities feel watched rather than supported.
On this 12 February, the challenge is not to win a technology race. It is to ensure that prevention keeps pace, so that AI does not widen fractures but instead strengthens the conditions that make violence less thinkable: belonging, dignity, opportunity, and credible institutions.
Dr. Rubina Abou Zeinab is Executive Director of the Hariri Foundation for Sustainable Human Development and National Coordinator for Preventing Violent Extremism in Lebanon.

