We previously examined how AI-powered detection capabilities can protect election integrity at operational scale. That piece focused on the technology side — the detection algorithms, forensic analysis, and deployment models. This article examines the other side: the threat actors themselves, and how their tactics evolved dramatically throughout 2025.
After tracking deepfake activity across election campaigns in Romania, Poland, Ireland, Ecuador, Germany, the Netherlands, Singapore, South Korea, Australia, and beyond, we have identified five distinct attack vectors that represent a meaningful escalation from what agencies faced in 2024.
1. Deepfake Financial Scams — Exploiting Political Trust for Profit
The most commercially motivated vector: scammers using deepfakes of political leaders to promote fraudulent investment schemes. The logic is straightforward — during election season, politicians are highly visible, their public statements carry weight, and voters are primed to pay attention to anything they say.
Ahead of Romania’s May 2025 presidential election, scammers distributed deepfake videos on Facebook showing several presidential candidates promoting a non-existent government investment opportunity. In the Czech Republic’s October 2025 parliamentary campaign, deepfakes of politicians appeared advertising fake investment platforms designed to harvest bank credentials. In Canada, fraudsters promoted a cryptocurrency scam using a deepfake interview with Liberal leader Mark Carney in the lead-up to the April 2025 federal election.
These are not isolated incidents. Deepfake-driven financial fraud generated over $200 million in losses in Q1 2025 alone. Election seasons expand the attack surface by creating a high-attention environment in which political endorsements carry outsized credibility.
Singapore: The Non-Election Case That Proves the Point
You do not need an active election to exploit political trust, and Singapore demonstrates this clearly. In late 2023, a deepfake video of then-Prime Minister Lee Hsien Loong circulated widely online, built from a real CGTN interview. The scammers altered his mouth movements and replaced his voice with an AI-generated version, making him appear to endorse a cryptocurrency investment platform. PM Lee publicly warned citizens to “ignore audio deepfake videos of me purporting to promote crypto scams.”
Current Prime Minister Lawrence Wong faced the same treatment in March 2025, with deepfake videos using his image to endorse cryptocurrency schemes, get-rich-quick investments, and even fraudulent Permanent Resident application services. The scams were promoted through verified Google Ads, fake news websites, and deepfake video — a multi-layered operation designed to look legitimate at every touchpoint.
The Singapore case went further still. In late 2024, more than 100 public servants, including five sitting ministers, were targeted in a deepfake extortion campaign. The attackers used publicly available photos, including LinkedIn profile images, to superimpose the targets’ faces onto explicit content and produce fabricated compromising images. The demand: US$50,000 in cryptocurrency.
Singapore’s Cyber Security Agency responded with a formal advisory on detecting and responding to deepfake scams. But the episode underscores a critical point: if this is happening to one of the world’s most digitally mature governments outside of election season, every agency should assume they are already a target.
2. Polling-Day Timing Attacks — Maximising Damage, Minimising Debunk Time
The most tactically sophisticated vector: releasing synthetic content so close to the vote that there is no time for effective debunking before ballots are cast.
In Buenos Aires, two deepfake videos were released hours before polls opened for the May 2025 city elections, falsely claiming that a political candidate had withdrawn from the race. In Ireland, less than four days before the October presidential election, a candidate was targeted with a nearly identical deepfake — an AI-generated video designed to look like a bulletin from national news service RTÉ News, complete with deepfake versions of two well-known TV presenters validating the story. The candidate described it as a “disgraceful attempt to mislead voters.”
In Poland, after the first round of voting in the May 2025 presidential election, AI-generated images appeared in four of the 23 viral videos spreading voter-fraud disinformation. None carried labels disclosing AI generation.
In South Korea, the electoral authorities filed complaints against three YouTubers who uploaded deepfake smears of political candidates just days before the June 2025 presidential election — including images of a candidate in a prisoner’s uniform and AI-generated news anchors declaring premature victories or defeats.
The challenge for election security officials is acute. During polling periods, there may be legal restrictions on public communications. Debunking itself risks amplifying the content. And the compressed timeline makes verification nearly impossible before the damage is done.
3. News-Format Forgeries — Weaponising Institutional Credibility
This is the vector that concerns us most for 2026: deepfakes that do not just impersonate politicians but impersonate the institutions that verify truth.
In the lead-up to Ecuador’s February 2025 election, AI-generated content mimicked CNN and France 24 newscasts — complete with fake logos, studio backgrounds, and synthetic anchors — to falsely implicate political candidates in scandals and spread allegations of election fraud. In Ireland, the deepfake announcing a candidate’s withdrawal was styled as an RTÉ News broadcast so convincingly that some users believed it was genuine.
Germany’s February 2025 federal election saw an even more unusual variant: an AI-edited announcement purporting to come from MI6, the British intelligence service. The video incorporated genuine MI6 communications, overlaid with AI-generated narration containing false claims about bomb threats, poisoned ballots, and imminent attacks on German polling stations. The intent was clear: deter voters from showing up.
Separate deepfake videos fabricated testimonies from apparent witnesses and whistleblowers, accusing a German minister of child abuse. These were linked to the Russian disinformation campaign tracked as Storm-1516, and were distributed through websites designed to resemble legitimate news platforms.
The dilemma for targeted institutions is severe: publicly debunk the content and risk amplifying it, or stay silent and allow some percentage of the public to accept it as real.
4. Political Parties as Threat Actors — Normalising Deceptive AI Use
Perhaps the most corrosive long-term trend: political parties themselves deploying AI-generated content as a routine campaign tool, without disclosure, and often designed to mislead.
Alternative for Germany (AfD) ran a series of AI-generated political advertisements ahead of the February 2025 federal election — synthetic music videos and idealised imagery glorifying traditional German values, contrasted with dystopian depictions of life under other parties. Researchers described these as “nostalgia machines” designed to consolidate the party’s base through emotional manipulation.
The Dutch Party for Freedom (PVV) launched its general election campaign with an AI-generated video depicting a future Netherlands under Sharia law. Candidates from the same party shared deepfake images of rival politicians being led away by police in handcuffs. Crucially, most of this content lacked any disclosure labels indicating AI generation.
When political parties normalise the use of undisclosed synthetic content in campaigns, they erode the very foundation of informed democratic participation. Voters cannot make sound judgements if they cannot distinguish between authentic and fabricated evidence. And the more mainstream this practice becomes, the harder it is to draw credible red lines against genuinely malicious deepfakes from external threat actors.
5. AI Chatbot Data Poisoning — Corrupting the Information Supply Chain
The newest and potentially most scalable vector: rather than targeting individual voters with specific deepfakes, poisoning the data sources that AI chatbots rely on to generate answers for millions of users simultaneously.
Ahead of Australia’s May 2025 federal election, a Russian-linked influence network published thousands of fake news articles through a network including a site called Pravda Australia. The articles were not primarily written for human readers — they were designed to attract the search engine crawlers that build the training and retrieval datasets for AI chatbots.
The objective: distort the underlying data so that when voters asked chatbots about candidates, policies, or current events, the responses would reflect Kremlin-aligned narratives. During controlled testing of 300 prompts covering ten false narratives across leading chatbots, nearly 17% of responses amplified the false narratives.
Roughly one in eight UK voters used AI chatbots for election information in 2024. As that proportion grows, data-poisoning attacks represent a force multiplier — a relatively cheap technique that can influence far more people than any individual deepfake video.
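The kind of controlled prompt testing described above can be approximated with a simple audit harness that queries a chatbot and checks its responses against a locally maintained list of known false narratives. The sketch below is illustrative only: `query_chatbot` is a hypothetical stand-in for whatever chatbot API a team is testing, and the narratives and keyword lists are invented examples, not drawn from any real dataset.

```python
# Minimal sketch of a chatbot narrative-audit harness.
# query_chatbot() is a placeholder for a real chatbot API call;
# FALSE_NARRATIVES maps narrative IDs to illustrative keyword cues.

def query_chatbot(prompt: str) -> str:
    """Placeholder for a real chatbot API call."""
    raise NotImplementedError

FALSE_NARRATIVES = {
    "ballot-fraud-claim": ["ballots were destroyed", "votes were switched"],
    "candidate-withdrawal": ["has withdrawn from the race"],
}

def audit(prompts, query=query_chatbot):
    """Return the share of responses that repeat a known false narrative."""
    flagged = 0
    for prompt in prompts:
        response = query(prompt).lower()
        if any(kw in response
               for keywords in FALSE_NARRATIVES.values()
               for kw in keywords):
            flagged += 1
    return flagged / len(prompts) if prompts else 0.0
```

A real deployment would replace the keyword matching with semantic similarity against verified fact-checks, but the structure — a fixed prompt set, repeated queries, and a flagged-response rate — mirrors the 300-prompt methodology cited above.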
The Shadow Economy Behind It All
Underpinning these five vectors is an emerging infrastructure. In the buildup to the Irish presidential election, a library of over 120 deepfake images of Irish politicians was uploaded to a marketplace for AI-generated content — ready for anyone to purchase and deploy. Ahead of Moldova’s September parliamentary election, a Russian-funded network used ChatGPT to optimise pro-Kremlin propaganda for social media engagement, while an AI-generated platform called Restmedia paid engagement farms in Africa to amplify content through verified social media accounts.
This is no longer a story about lone actors with access to AI tools. It is an industrialised supply chain for election interference, with specialised roles for content generation, platform mimicry, distribution, and amplification.
What This Means for 2026
With major elections scheduled in the US, UK, Hungary, Brazil, Bangladesh, and elsewhere, the five vectors described here should inform immediate preparation. Our assessment:
- Detection alone is insufficient. The speed, variety, and institutional mimicry of current deepfakes mean agencies need real-time monitoring across video, image, audio, social media, and AI chatbot outputs simultaneously. BlackVidINT addresses the video and image forensics layer; BlackWebINT covers the social media, web, and underground monitoring layer.
- Timing attacks require pre-positioned capability. Deploying detection systems after a polling-day attack has begun is too late. Agencies need always-on monitoring with automated alerting configured specifically for election-related synthetic content.
- Data-poisoning requires a new monitoring paradigm. Tracking deepfake videos and images is necessary but no longer sufficient. Intelligence teams must also monitor the information supply chain — the web crawlers, the fake news networks, and the chatbot outputs themselves.
- Multi-source intelligence fusion is the only viable defence. No single tool can cover video forensics, social media monitoring, dark web tracking, financial transaction analysis, and chatbot output verification. BlackFusion was designed precisely for this kind of multi-source correlation.
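The core mechanic of multi-source fusion is correlation: an alert from one feed is noise, but the same entity flagged by independent feeds inside a short window is an incident. The sketch below illustrates that rule in its simplest form; the field names, source labels, and one-hour window are assumptions for illustration, not any product’s actual schema or logic.

```python
# Illustrative multi-source alert correlation: flag an entity when two
# or more independent monitoring feeds (e.g. video forensics, social
# media monitoring, chatbot audits) reference it within a time window.

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Alert:
    source: str      # e.g. "video-forensics", "social-monitoring"
    entity: str      # the politician or institution referenced
    timestamp: int   # epoch seconds

def correlate(alerts, window_s=3600):
    """Return entities seen by >= 2 distinct sources within window_s."""
    by_entity = defaultdict(list)
    for a in alerts:
        by_entity[a.entity].append(a)
    incidents = []
    for entity, group in by_entity.items():
        for anchor in group:
            sources = {b.source for b in group
                       if abs(b.timestamp - anchor.timestamp) <= window_s}
            if len(sources) >= 2:
                incidents.append(entity)
                break
    return incidents
```

The design choice worth noting is that correlation happens on the entity, not the artefact: a deepfake video and a poisoned chatbot answer never share a file hash, but they do share a target, which is what makes cross-layer detection possible at all.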
The 2025 election cycle proved that deepfake threats are no longer experimental. They are operational, industrialised, and evolving faster than most agencies’ defensive capabilities. The question for 2026 is not whether these attacks will occur, but whether the agencies responsible for election security will be ready when they do.