Vote 2024: The Top Disinformation Attacks Disrupting Global Elections

By Emily Kohlman and Dan Patterson

AI-powered narrative attacks, including disinformation, deepfakes, and bot swarms, shaped elections globally in 2024, undermining trust and disrupting democratic processes in more than 60 nations.

With more than half the world’s population, across over 60 countries, voting in national elections, 2024 is a historic election year. From the UK to Bangladesh to Pakistan and the United States, Blackbird.AI’s RAV3N Narrative Intelligence team spent this election cycle analyzing in depth how narrative manipulation impacts voting integrity and public trust.

Our team, leveraging our AI-based Narrative Intelligence Platform Constellation, analyzed millions of election-related posts across social media, messaging apps, the dark web, and news media. Notably, our research reveals that disparate groups of agenda-driven actors and nation-states worldwide are adopting similar AI-enabled disinformation tactics, indicating that emerging technology enables, enhances, and amplifies disinformation targeting voters. These tactics are increasingly sophisticated, blending social media posts, deepfakes, bots, state media stories, and targeted misinformation to sway voters and deepen social divisions.

LEARN MORE: What Is Narrative Intelligence?

These are the top disinformation techniques used to disrupt worldwide elections:

  1. Narrative Attacks

Narrative attacks involve creating and disseminating false or misleading stories to shape public opinion, often by targeting the credibility of candidates, electoral processes, or democratic institutions. They can be particularly effective because they tap into existing beliefs or biases, making them more likely to be accepted and shared.

Key characteristics:

  • Often based on a kernel of truth, then distorted
  • Designed to evoke emotional responses
  • Spread rapidly through social media, news, and chat networks
  2. Deepfakes

Deepfakes, AI-generated audio and visual content, represent a significant advancement in disinformation technology. The technology can produce highly convincing fake videos, images, or audio of politicians or public figures saying or doing things they never actually did. The political implications of such media were seen in multiple elections this year – most notably in Mexico, where a deepfake attempted to undermine the leading presidential candidate; in Indonesia, where deceased former leaders were resurrected with deepfake technology to endorse a candidate; and in India, where more than 75% of the population was exposed to political deepfakes during the general election, making it challenging for voters to identify authentic content.

Potential impacts:

  • Eroding trust in visual evidence
  • Creating false endorsements or damaging statements
  • Influencing public opinion just before an election, when there’s little time for fact-checking
This graph from Blackbird.AI’s Constellation Narrative Intelligence Platform visualizes networked interactions between narratives, hashtags, and URLs circulating allegations that Mexico’s governing party – Morena – is corrupt, colorized on a white-to-red gradient based on the amount of anger expressed, with deeper red indicating more anger detected in narratives.
  3. AI Flooding Tactics

These tactics aim to overwhelm the public, fact-checkers, and newsrooms with false claims, deepfakes, and other forms of disinformation. The goal is to deplete fact-checking resources and hinder debunking efforts, allowing some disinformation to slip through undisputed.

Effects of flooding:

  • Exhaustion of fact-checking resources
  • Delayed responses to critical disinformation
  • Potential for some false claims to go unchallenged
  4. Bot Networks

Bot networks, consisting of automated accounts that can rapidly amplify disinformation across social media platforms, can create the illusion of widespread support for a particular candidate or viewpoint and potentially influence real voters. During the UK elections, these networks sought to amplify support for Brexit proponent Nigel Farage’s Reform UK Party while attacking the Labour Party.

Tactics used by bot networks:

  • Coordinated posting of similar content
  • Rapid sharing and liking of specific posts
  • Creating trending topics through mass engagement
This network graph from Blackbird.AI’s Constellation Narrative Intelligence Platform visualizes networked interactions between posts urging support for Reform UK and attacking Labour, colorized on a white-to-red gradient based on bot-like activity.
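The coordinated posting and rapid amplification described above can be approximated with a simple heuristic: flag groups of accounts that publish near-identical text within a short time window. This is a rough illustrative sketch, not Blackbird.AI's actual detection method; the account names, sample posts, and thresholds are all hypothetical:

```python
from datetime import datetime, timedelta
from difflib import SequenceMatcher

# Hypothetical sample posts: (account, timestamp, text). Real inputs would
# come from a platform's API; this data is illustrative only.
posts = [
    ("acct_a", datetime(2024, 6, 1, 10, 0, 5), "Vote Reform UK to fix Britain!"),
    ("acct_b", datetime(2024, 6, 1, 10, 0, 9), "Vote Reform UK to fix Britain!!"),
    ("acct_c", datetime(2024, 6, 1, 10, 0, 12), "vote reform uk to fix britain"),
    ("acct_d", datetime(2024, 6, 2, 14, 30, 0), "Lovely weather on the coast today."),
]

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Near-duplicate check on case-normalized text."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_coordinated(posts, window=timedelta(minutes=5), min_accounts=3):
    """Return account clusters posting near-identical text within `window`."""
    flagged = []
    for i, (acct_i, t_i, text_i) in enumerate(posts):
        cluster = {acct_i}
        for acct_j, t_j, text_j in posts[i + 1:]:
            if abs(t_j - t_i) <= window and similar(text_i, text_j):
                cluster.add(acct_j)
        if len(cluster) >= min_accounts:
            flagged.append(cluster)
    return flagged

print(flag_coordinated(posts))
```

Real bot networks vary their wording and timing to evade exactly this kind of check, which is why production systems weigh many more signals (account age, posting cadence, network structure) than this sketch does.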
  5. Foreign Influence Operations

These are coordinated efforts by foreign actors or governments to manipulate public opinion and influence election outcomes in other countries. They often involve a combination of other tactics, including narrative attacks, bot networks, and AI-generated content. Pro-Russian influence operations were evident in multiple elections, including in Chad and the UK. In Bangladesh, following the ousting of the former prime minister, narrative attacks from both Indian and Russian State Supporters amplified disinformation seeking to destabilize the country during its transition and delegitimize the new interim leader.

Common objectives:

  • Sowing discord and polarization
  • Undermining trust in democratic processes
  • Promoting favorable candidates or policies
This line graph from Blackbird.AI’s Narrative Intelligence Platform displays engagements on posts from August 5-29, with posts by Indian State Supporters in green, Russian State Supporters in blue, and posts exhibiting anomalous activity in purple. Anomalous refers to unusual patterns of content propagation that would suggest the presence of a coordinated campaign. The highest spike of engagements for both Indian State Supporters and anomalous posts occurred on August 9, just after the interim government was sworn in, indicating a coordinated effort among Indian State Supporters to amplify the narrative that the US orchestrated a coup to get Yunus in power. Smaller yet notable peaks in Russian State Supporter engagements before and after Yunus was sworn in indicate that these users sought to bring this pro-India narrative into pro-Russia circles.
  6. AI-Generated Content

Advanced AI models can now create persuasive fake news articles, images, and videos. This content can be tailored to specific audiences and distributed at scale, making it a powerful tool for disinformation campaigns. In Indonesia, a doctored video made it appear that the then-president delivered a speech in Mandarin when he had actually given it in English. Deepfakes were not the only type of AI-generated content affecting the elections in Indonesia. The winner of the presidential election also used the technology openly, creating content that was clearly AI-generated, unlike deceptive deepfakes, depicting him as a cartoon character in an effort to better resonate with younger voters.

Challenges posed by AI-generated content:

  • Difficulty in distinguishing from genuine content
  • Ability to produce large volumes of unique disinformation
  • Potential for personalized disinformation targeting individual voters
  7. Impersonation

This tactic involves creating fake social media profiles or cloning websites of legitimate news sources in order to spread disinformation. Impersonation efforts can be particularly effective because they exploit the trust people have in established sources. A 15-year pro-India disinformation campaign known as the “Indian Chronicles” used this tactic to destabilize Pakistan and influence international organizations, operating at least 750 fake news outlets across 119 countries.

Forms of impersonation:

  • Fake social media accounts of politicians or journalists
  • Cloned websites with slightly altered URLs
  • Impersonation of government agencies or election officials

  8. Exploiting Societal Vulnerabilities

Disinformation campaigns often identify and target existing social divisions to amplify discord. This can involve exploiting racial tensions, economic inequalities, or political polarization. In the aftermath of the ousting of Bangladesh’s former prime minister, Indian State Supporters sought to sow discord across Bangladesh by spreading disinformation that targeted these vulnerabilities. These actors used various tactics, including misrepresenting videos and images that depicted something else entirely, launching hashtag campaigns, and coordinating posting activity, to amplify the narrative around the alleged targeting of Hindus in the country.

Strategies employed:

  • Tailoring messages to specific demographic groups
  • Amplifying extreme viewpoints on both sides of an issue
  • Creating false narratives around sensitive social topics
This graph from Blackbird.AI’s Constellation Narrative Intelligence Platform visualizes networked interactions in social media conversations using hashtags and specific phrases to advocate support for Hindus in Bangladesh who were allegedly being targeted, with abnormal activity colorized in red. An abundance of red nodes indicates a coordinated effort to amplify this narrative.
  9. The “Big Lie” Strategy

This strategy involves repeatedly propagating a significant falsehood until it seems credible: if a lie is bold enough and repeated often enough, people will start to believe it.

Characteristics of the “Big Lie”:

  • Often simple and easy to remember
  • Repeated consistently across multiple channels
  • Supported by smaller, related falsehoods
  10. Deny, Distract, Distort

When caught spreading disinformation, bad actors often employ this three-pronged approach: denying accusations, distracting from facts, and distorting the narrative.

Components of this strategy:

  • Deny: Outright rejection of any wrongdoing
  • Distract: Shifting focus to unrelated issues
  • Distort: Twisting facts to create a different narrative

LEARN MORE: 8 Ways for Security Leaders to Protect Their Organizations from Mis/Disinformation Attacks

The Way Forward

Here are five key takeaways for organization leaders about narrative attacks and disinformation tactics disrupting elections around the world in 2024:

  • AI-powered disinformation campaigns pose an unprecedented threat to elections and democratic processes globally. Organizations must stay vigilant and proactively monitor for potential narrative attacks, deepfakes, bot networks, and other sophisticated tactics that could impact their brand, executives, or stakeholders.
  • Disinformation tactics are becoming increasingly complex, combining multiple techniques, such as AI-generated content, impersonation, content amalgamation, and leveraging unwitting individuals to spread false narratives. Leaders should invest in advanced threat detection solutions and educate their teams about these evolving risks.
  • Bad actors often exploit societal vulnerabilities, such as racial tensions, economic inequalities, or political polarization, to amplify discord and undermine trust. Organizations must be prepared to respond quickly and transparently to disinformation targeting their brand or industry while promoting unity and fact-based discourse.
  • The “deny, distract, distort” strategy employed by bad actors when caught spreading disinformation highlights the importance of robust crisis communication plans. Leaders should have clear protocols to address false accusations, focus on facts, and control the narrative to minimize reputational damage.
  • Collaboration between organizations, government agencies, and tech platforms will be crucial in combating disinformation. Leaders should actively participate in industry initiatives, share best practices, and support research and development efforts to counter AI-powered narrative attacks and protect the integrity of information ecosystems.

Democracies worldwide are grappling with this new wave of AI-powered narrative attacks and disinformation. From deepfakes flooding social media feeds in India to bot networks amplifying extremist messages in the UK, this new threat vector not only undermines voter confidence but is often used to target brands, organizations, and high-profile individuals like influencers, celebrities, CEOs, and CISOs.

Need help protecting your organization? Book a demo to learn how Blackbird.AI detects narrative attacks.