Content Warning: This article discusses image-based sexual abuse, deepfakes, and online harassment. If you need support, contact the 24/7 Domestic & Sexual Abuse Helpline on 0808 802 1414.
A message from her friend read: “Is this really you?” Attached was a sexually explicit video with her face seamlessly superimposed onto someone else’s body. Within hours, the video had been shared across multiple platforms, her face and her identity weaponised without her consent. This is the reality of deepfake abuse, and it’s happening to women and girls at an alarming rate.
Artificial intelligence (AI) has brought remarkable technological possibilities, but it has also created new tools for harassment and abuse. From deepfake pornography to AI-generated misinformation campaigns, we’re witnessing a dangerous evolution in digital violence.
The Deepfake Threat
Deepfakes are hyper-realistic digital forgeries created using AI. The most common is the creation of non-consensual intimate imagery: deepfake pornography featuring real people’s faces on fabricated explicit content.
Women in public life, including journalists, politicians, activists, and content creators, are particularly vulnerable. A single photo scraped from social media can be enough to generate convincing deepfake pornography. But it’s not just public figures at risk. Everyday women are targeted by ex-partners, online harassers, or complete strangers who view this technology as a new form of sexual violence.
The psychological impact is devastating. Victims describe feeling violated; many withdraw from public life, delete social media accounts, or change their appearance to escape the abuse. Some face professional consequences when employers or colleagues encounter the deepfakes, even when those who see them know they’re fake.
The technology is also being used for financial exploitation. Perpetrators create deepfake content and threaten to distribute it unless victims pay, or they use deepfakes in romance scams and financial fraud. In domestic abuse contexts, abusers threaten to create and share deepfakes as a means of control or do so as punishment for leaving the relationship.
Harmful Influencers
Social media has enabled the rise of harmful influencers: individuals who use their platforms to spread misogyny, advocate for violence against women, or promote extremist ideologies. What makes them particularly dangerous is their ability to package harmful content in seemingly palatable, even aspirational, formats.
These influencers often target young men and boys, promoting toxic masculinity, celebrating the exploitation of women, and encouraging harassment. They frame abuse as empowerment, manipulation as strategy, and violence as justified. Their content normalises attitudes that directly contribute to real-world violence against women and girls.
Some harmful influencers specifically instruct followers on how to use technology to abuse, stalk, or control women. They share tactics for digital surveillance, recommend stalkerware applications, or encourage coordinated harassment campaigns. Many operate in the manosphere, online communities built around anti-feminist ideology, where they radicalise vulnerable young people into viewing women as adversaries rather than equals.
Astroturfing: Manufacturing Consent for Abuse
Astroturfing, the creation of fake grassroots movements to manipulate public opinion, has become a sophisticated tool in digital abuse campaigns. Using AI-generated accounts, bots, and coordinated networks, abusers and their sympathisers can manufacture the appearance of widespread support for harmful narratives or opposition to victims speaking out.
When a woman reports abuse or speaks publicly about violence, astroturfing campaigns may flood social media with seemingly organic criticism. Hundreds of accounts—many AI-generated or bot-controlled—post coordinated messages questioning her credibility, claiming she’s lying, or arguing that she deserved the abuse. To casual observers, it appears that public opinion is against the victim.
These campaigns silence victims by creating hostile online environments. They manipulate public perception, making abuse seem more acceptable or victims less credible. They influence policy discussions by creating false impressions of public opinion on issues like domestic violence legislation or online safety regulations.
AI has made astroturfing frighteningly easy. Automated tools can generate convincing social media profiles complete with AI-generated profile photos, fabricated posting histories, and human-like interaction patterns.
AI-enabled abuse is especially dangerous because it exploits existing systemic weaknesses. Technology is currently evolving faster than platform policies can adapt, leaving gaps in protection. Public safety authorities often lack the specialised skills and resources needed to investigate these cases properly. Meanwhile, laws built for physical-world offences often fail to capture the complexity of digital forms of harm.
At the same time, the technology is becoming ever more accessible and sophisticated. Deepfake creation tools that once required technical expertise are now available as simple apps, and AI writing tools allow harassing content to be created faster and more easily than ever before, making digital abuse more accessible and more damaging.
Fighting Back
Addressing AI-facilitated abuse requires action on multiple fronts. We need stronger legislation specifically addressing deepfake creation and distribution, supported by clear accountability mechanisms for perpetrators. Platforms must prioritise investment in detection technology and respond swiftly to reports of AI-generated abuse. Education about digital literacy, critical thinking, and recognising manipulation must start early, with age-appropriate awareness of the dangers of AI-facilitated abuse.
For individuals, protection strategies include limiting publicly available photos, using reverse image search to monitor for misuse of your images, reporting deepfake content immediately to platforms and law enforcement, and seeking support from organisations experienced in image-based abuse.
If you’ve been targeted with deepfakes or astroturfing, or harmed by the content of harmful influencers, remember: the abuse is not your fault. The technology may be new, but the underlying intention to control, harm, and silence is still violence.
Get Help Now
Nexus NI
Nexusni.org
24/7 Domestic & Sexual Abuse Helpline: 0808 802 1414
PSNI Non-Emergency: 101
Emergency: 999
Safety Notice: If someone is monitoring your device, use private browsing or access support from a safe location.
Published by Nexus NI as part of our commitment to addressing emerging forms of technology-facilitated violence against women and girls.

