Deepfakes, Disinformation, and a Network of Bots Work to Influence Voters Ahead of the UK General Election

With deepfakes going viral and an AI avatar running for Parliament, the UK general election demonstrates the impact of Generative AI tools on democratic processes.

Emily Kohlman and Keira Lowden

With more than 50 elections occurring globally this year, voters worldwide have been subjected to deceptive AI-generated audio and visual deepfakes. As the UK general election approaches on July 4, deepfakes and disinformation narratives have already been working to influence the outcome. Following Prime Minister Rishi Sunak’s surprise announcement on a rainy day in May that the election would be held on July 4, a network of bots has been helping to fuel the disinformation fire. Pro-Russian disinformation campaigns and misleading deepfakes have impacted Sunak’s campaign and his Conservative Party, while the leader of the opposition Labour Party, Keir Starmer, has also been the target of narrative attacks stemming from AI-generated content.

Analyzing over a million social media conversations taking place around the election, Blackbird.AI’s Constellation Narrative Intelligence Platform helped identify multiple narratives emerging from AI-generated content and disinformation, and Compass by Blackbird.AI checked viral claims and deepfakes, offering critical context for informed decision-making.

LEARN MORE: What Is A Narrative Attack?

Narrative 1: Audio and Visual Deepfakes Go Viral, Seeking to Deter Voters

The increasing accessibility of Generative AI tools has paved the way for AI-generated content to circulate online and reach thousands of potential voters quickly. During the Labour Party conference last October, an audio clip emerged purporting to portray Starmer verbally abusing aides. Telltale flaws such as choppy background noise and repetitive phrasing led the clip to be dismissed as a deepfake, but not before it had circulated widely online.

Months later, hundreds of thousands were exposed to over a hundred deepfake video advertisements impersonating Sunak. In February, a deepfake audio clip targeted London Mayor Sadiq Khan, depicting him making inflammatory remarks disparaging Armistice Day and calling for pro-Palestinian marches to take precedence. The clip of Khan was widely shared online – particularly in far-right groups – and had the potential to cause severe public disorder.

After Sunak announced that the general election would take place on July 4th, a presenter for GB News – a British news channel that has been criticized for violating broadcasting impartiality rules – posted multiple AI-generated images of Starmer, combining elements of LGBTQ pride and cultural diversity, likely intending to play into far-right tropes.

In another attempt to stir controversy and contribute to Islamophobic and anti-LGBTQ narratives, an image circulated across multiple social media platforms and allegedly showed an op-ed attributed to Starmer titled “Islam has embraced the LGBTQ+ community. My aim is to get Britain to embrace Islam.” Compass by Blackbird.AI context-checked the claims made in the image and confirmed that the image was fabricated and the op-ed never existed.

This claim was context-checked by Compass by Blackbird.AI.

Narrative 2: There is an AI Candidate on the Ballot Running for Parliament 

Deepfakes are not the only type of AI-generated content to surface ahead of the UK general election. In the seaside constituency of Brighton Pavilion, an AI candidate – “AI Steve” – is on the ballot. AI Steve is an avatar of businessman Steve Endacott, who would serve as the in-person representative if AI Steve wins the election.

Seeking to reconnect politicians with their constituents using AI technology, Endacott created the SmarterUK Party. Its platform includes a four-day work week, increased affordable housing, and reduced university fees. If elected, AI Steve would be the world’s first AI Member of Parliament (MP), offering constituents 24/7 access to the AI chatbot so their input can better inform policies. According to voters’ interactions with AI Steve so far, top concerns ahead of the election include safety for Palestinians, trash bins, bicycle lanes, immigration, and abortion.

This claim was context-checked by Compass by Blackbird.AI.

Narrative 3: Bot-Like Social Media Amplification Attempts to Sway Voters

On various social media platforms, hashtags like #VoteReform, #ReformUKWinning, and #LabourLosing have supported Brexit proponent Nigel Farage’s Reform UK Party while attacking the Labour Party. This cross-platform messaging has been aided by inauthentic, bot-like amplification, exhibiting characteristics such as a lack of personal identifiers and repetitive posting patterns. This unusual behavior has raised concerns about foreign interference. Fake social media accounts propagated pro-Kremlin narratives ahead of the European Parliament elections held June 6-9, prompting concerns that the UK must be better prepared to fight foreign interference attempts.

Bot-like digital manipulation is a tactic used to fabricate or exaggerate support for specific candidates or political parties. Much like deepfakes, bot-like amplification can widely spread disinformation that can be difficult to combat.

This network graph from Blackbird.AI’s Constellation Narrative Intelligence Platform visualizes networked interactions between posts urging support for Reform UK and attacking Labour, colorized on a white-to-red gradient based on bot-like activity.

Narrative 4: Claims of Postal Voting Fraud and Security Concerns

Another narrative benefitting from bot-like amplification emerged, stemming from fears around the lack of security and integrity of postal voting. Social media conversations expressed concerns that postal voting is more susceptible to fraud and manipulation than in-person voting, highlighting distrust of verification processes and fears of voter impersonation and ballot harvesting. While claims of postal voting fraud have circulated in previous elections, the evidence does not support them.

This narrative has become more pronounced in the run-up to the July 4th election, as political parties and public figures with significant online reach express growing apprehensions about the potential for fraud. Conversations in right-wing communities in the UK have also circulated specific claims accusing the Labour Party of cheating through mail-in ballots despite a lack of evidence supporting these allegations. These disinformation campaigns rely on fears around election rigging and voter fraud to rapidly spread false narratives across social media platforms. Even though these claims of postal voting fraud are unproven, misinformation is still widely circulated and used to stoke skepticism among voters.

This claim was context-checked by Compass by Blackbird.AI.

The UK government has taken steps to enhance the security of postal voting, implementing measures such as stricter ID requirements and improved verification procedures. Public information campaigns have also been launched to reassure voters about the safety and reliability of postal voting. Ongoing misinformation targeting the Labour Party underscores the need for continued vigilance and clarity in electoral processes.

This graph from Blackbird.AI’s Constellation Narrative Intelligence Platform visualizes networked interactions in discussions concerning the integrity of postal voting, colorized on a white-to-red gradient based on bot-like activity.

Narrative 5: The Impact of Climate Change Misinformation

Sunak and Starmer went head-to-head on June 4 in the first debate before the election. In addition to sparking misinformation narratives related to taxes, the debate accentuated a larger divide over climate concerns. After the debate’s presenter mentioned climate change in the opening, social media conversations emerged questioning the legitimacy of climate change and objecting to its mention in a political debate. The narrative gained further traction after an audience member used the phrase “climate catastrophe,” sparking the circulation of the words “fraud,” “hoax,” and “scam” in connection with climate change.

Media figures further fueled this skepticism by making disparaging remarks about climate change. Whether disseminated by political figures, social media, or traditional media, false narratives distort public understanding of climate issues. Inflammatory statements lend themselves to being more widely and quickly disseminated across social media, impacting public perception and potentially swaying voter opinions. These narratives come in various forms: denying climate change outright, downplaying its impacts, promoting ineffective solutions, and citing misleading statistics.

The Way Forward

Misinformation and disinformation powered by deepfakes and bot-like amplification undermine trust in democratic processes and add a layer of uncertainty to elections. Online conversations indicate a desire for more comprehensive regulation to combat deepfakes and detect deceptive content. The UK government is working to fight these deceptive narratives through policy initiatives, collaboration with tech companies, and public awareness campaigns. UK organizations have initiated efforts to better equip young voters with a media literacy toolkit to identify and fight election-related misinformation. 

While there is a desire to mitigate the spread of disinformation and related damages, this complex issue makes effective regulatory efforts challenging. This is where Blackbird.AI can help. Blackbird.AI’s Constellation Narrative Intelligence Platform and Compass by Blackbird.AI provide crucial analysis and context-checking to aid in identifying harmful narratives stemming from deepfakes and disinformation. 

To learn more about how Blackbird.AI can help you with election integrity, book a demo.

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Our AI-driven Narrative Intelligence Platform identifies key narratives that impact your organization or industry, the influence behind them, the networks they touch, the anomalous behavior that scales them, and the cohorts and communities that connect them. This information enables organizations to proactively understand narrative threats as they scale and become harmful, supporting better strategic decision-making. A diverse team of AI experts, threat intelligence analysts, and national security professionals founded Blackbird.AI to defend information integrity and fight a new class of narrative threats. Learn more at Blackbird.AI.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.