Vote 2024: The Top Narrative Attacks Disrupting Global Elections
By Emily Kohlman and Dan Patterson
AI-powered narrative attacks, including deepfakes, bot swarms, and false information, shaped elections globally in 2024, undermining trust and disrupting democratic processes in more than 60 nations.
With more than half the world’s population, across over 60 countries, voting in national elections, 2024 is a historic election year. From the UK to Bangladesh to Pakistan and the United States, Blackbird.AI’s RAV3N Narrative Intelligence team spent this election cycle analyzing in depth how narrative manipulation impacts voting integrity and public trust.
Using Constellation, our AI-based Narrative Intelligence Platform, the team analyzed millions of election-related posts across social media, messaging apps, the dark web, and news media. Notably, our research reveals that disparate agenda-driven actors and nation-states worldwide are adopting similar AI-enabled tactics, indicating that emerging technology enables, enhances, and amplifies narrative attacks targeting voters. These manipulation campaigns are increasingly sophisticated, blending social media posts, deepfakes, bots, state media stories, and targeted misinformation to influence voters and deepen social divisions.
LEARN MORE: What Is Narrative Intelligence?
These are the top narrative attack techniques used to disrupt worldwide elections:
Narrative Attacks
Narrative attacks involve creating and disseminating false or misleading stories to shape public opinion, often by targeting the credibility of candidates, electoral processes, or democratic institutions. They can be particularly effective because they tap into existing beliefs or biases, making them more likely to be accepted and shared.
Key characteristics:
- Often based on a kernel of truth, then distorted
- Designed to evoke emotional responses
- Spread rapidly through social media, news, and chat networks
Deepfakes
Deepfakes, meaning AI-generated audio and visual content, represent a significant advancement in narrative attack technology. The technique can produce highly convincing fake videos, images, or audio of politicians or public figures saying or doing things they never actually did. The political implications of such media were seen in multiple elections this year: in Mexico, where a deepfake attempted to undermine the leading presidential candidate; in Indonesia, where deceased former leaders were resurrected with deepfake technology to endorse a candidate; and in India, where more than 75% of the population was exposed to political deepfakes during the general election, making it challenging for voters to identify authentic content. One narrow but reliable countermeasure, provenance checking, is sketched after the list below.
Potential impacts:
- Eroding trust in visual evidence
- Creating false endorsements or damaging statements
- Influencing public opinion just before an election, when there’s little time for fact-checking
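Because detection models for synthetic media remain unreliable, one practical defense is provenance checking: comparing a clip’s cryptographic hash against hashes the original publisher released. The following is a minimal sketch of that idea in Python; the VERIFIED_MEDIA registry is a hypothetical stand-in for a publisher-maintained hash list, and the check only catches exact copies, not re-encoded or cropped variants.

```python
import hashlib

# Hypothetical registry: SHA-256 digests of media files the original
# publisher has verified, mapped to human-readable labels. The digest
# below is a placeholder, not a real published hash.
VERIFIED_MEDIA = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": "official_rally_speech.mp4",
}

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large videos never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_verified(path: str) -> bool:
    """True only if the file is a byte-for-byte copy of a registered original."""
    return sha256_of(path) in VERIFIED_MEDIA
```

Standards efforts such as C2PA content credentials aim to build this kind of provenance signal directly into cameras and publishing tools.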
Flooding
Flooding tactics aim to overwhelm the public, fact-checkers, and newsrooms with false claims, deepfakes, and other forms of narrative attack. The goal is to deplete fact-checking resources and hinder debunking efforts, allowing some narrative attacks to slip through undisputed. The toy model after the list below shows why the arithmetic favors the attacker.
Effects of flooding:
- Exhaustion of fact-checking resources
- Delayed responses to critical narrative attacks
- Potential for some false claims to go unchallenged
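The mechanic is simple arithmetic: whenever claims arrive faster than a team can review them, the backlog grows without bound. Here is a toy Python model with invented rates, purely for illustration:

```python
# Toy model of flooding: claims arrive faster than a fact-checking team
# can clear them, so the backlog grows without bound. Rates are invented
# for illustration.
CLAIMS_PER_DAY = 120   # assumed inbound rate of checkable claims
CHECKS_PER_DAY = 40    # assumed fact-checking throughput

backlog = 0
for day in range(1, 8):
    backlog += CLAIMS_PER_DAY - CHECKS_PER_DAY
    print(f"day {day}: {backlog} unreviewed claims")

# After one week the team is 560 claims behind; anything never triaged
# effectively circulates unchallenged.
```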
Bot Networks
Bot networks are groups of automated accounts that rapidly amplify narrative attacks across social media platforms. They can create the illusion of widespread support for a particular candidate or viewpoint and potentially influence real voters. During the UK elections, these networks sought to amplify support for Brexit proponent Nigel Farage’s Reform UK Party while attacking the Labour Party. A simple detection heuristic is sketched after the list below.
Tactics used by bot networks:
- Coordinated posting of similar content
- Rapid sharing and liking of specific posts
- Creating trending topics through mass engagement
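Coordinated copy-paste posting leaves a detectable footprint: many accounts publishing near-identical text. This hedged sketch flags account pairs whose posts have high token overlap (Jaccard similarity); production systems would add timing, follower-graph, and engagement signals, and the 0.8 threshold here is an arbitrary illustration.

```python
import re
from itertools import combinations

def tokens(text: str) -> set[str]:
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two posts, 0.0 to 1.0."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def flag_coordinated(posts: list[tuple[str, str]], threshold: float = 0.8) -> set[str]:
    """posts: (account_id, text) pairs. Returns accounts whose posts
    are near-duplicates of another account's post."""
    flagged: set[str] = set()
    for (acct_a, text_a), (acct_b, text_b) in combinations(posts, 2):
        if acct_a != acct_b and jaccard(text_a, text_b) >= threshold:
            flagged.update({acct_a, acct_b})
    return flagged

# Sample data with two accounts posting the same template text.
posts = [
    ("acct1", "Vote for candidate X, the only honest choice"),
    ("acct2", "Vote for candidate X -- the only honest choice!"),
    ("acct3", "Lovely weather at the rally today"),
]
print(flag_coordinated(posts))  # {'acct1', 'acct2'}
```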
Foreign Influence Operations
These are coordinated efforts by foreign actors or governments to manipulate public opinion and influence election outcomes in other countries. They often combine other tactics, including narrative attacks, bot networks, and AI-generated content. Pro-Russian influence operations were evident in multiple elections, including in Chad and the UK. In Bangladesh, following the ousting of the former prime minister, both Indian and Russian State Supporters amplified narrative attacks seeking to destabilize the country during its transition and delegitimize the new interim leader.
Common objectives:
- Sowing discord and polarization
- Undermining trust in democratic processes
- Promoting favorable candidates or policies
AI-Generated Content
Advanced AI models can now create persuasive fake news articles, images, and videos. This content can be tailored to specific audiences and distributed at scale, making it a powerful tool for narrative attack campaigns. In Indonesia, a doctored video of a speech by the then-president made it appear that he delivered it in Mandarin when he had actually spoken in English. Deepfakes were not the only AI-generated content affecting the Indonesian elections: the winner of the presidential election used the technology openly, portraying himself as a cartoon character in obviously synthetic content designed to resonate with younger voters.
Challenges posed by AI-generated content:
- Difficulty in distinguishing from genuine content
- Ability to produce large volumes of unique narrative attacks
- Potential for personalized narrative attacks targeting individual voters
Impersonation
This tactic involves creating fake social media profiles or cloning websites of legitimate news sources in order to spread narrative attacks. Impersonation can be particularly effective because it exploits the trust people place in established sources. A 15-year campaign known as the “Indian Chronicles” used this tactic to serve pro-India interests, seeking to destabilize Pakistan and influence international organizations through at least 750 fake news outlets across 119 countries. A lookalike-domain check is sketched after the list below.
Forms of impersonation:
- Fake social media accounts of politicians or journalists
- Cloned websites with slightly altered URLs
- Impersonation of government agencies or election officials
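Cloned sites typically live on domains one edit away from the real one. As a hedged illustration, this sketch compares an incoming domain against a hypothetical allow-list using Python’s standard-library SequenceMatcher; real defenses would also normalize Unicode homoglyphs and check certificate and registration data.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of domains an organization actually trusts.
LEGIT_DOMAINS = {"example-news.com", "electionboard.gov"}

def lookalike_of(domain: str, threshold: float = 0.85) -> str | None:
    """Return the legitimate domain this one appears to imitate, if any.
    Catches near-miss spellings like 'examp1e-news.com'; it does not
    catch homoglyphs from other scripts without Unicode normalization."""
    for legit in LEGIT_DOMAINS:
        if domain != legit and SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return legit
    return None

print(lookalike_of("examp1e-news.com"))  # -> example-news.com
```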
Exploiting Social Divisions
Narrative attack campaigns often identify and target existing social divisions to amplify discord, exploiting racial tensions, economic inequalities, or political polarization. In the aftermath of the ousting of Bangladesh’s former prime minister, Indian State Supporters sought to sow discord across Bangladesh by spreading narrative attacks that specifically exploited societal vulnerabilities. These actors used various tactics, including misrepresenting videos and images that depicted something else entirely, initiating hashtag campaigns, and coordinating posting, to amplify the narrative around the alleged targeting of Hindus in the country.
Strategies employed:
- Tailoring messages to specific demographic groups
- Amplifying extreme viewpoints on both sides of an issue
- Creating false narratives around sensitive social topics
The “Big Lie”
This strategy involves repeatedly propagating a significant falsehood until it seems credible: if a lie is bold enough and repeated often enough, people start to believe it.
Characteristics of the “Big Lie”:
- Often simple and easy to remember
- Repeated consistently across multiple channels
- Supported by smaller, related falsehoods
Deny, Distract, Distort
When caught spreading narrative attacks, bad actors often employ this three-pronged approach: denying accusations, distracting from facts, and distorting the narrative.
Components of this strategy:
- Deny: Outright rejection of any wrongdoing
- Distract: Shifting focus to unrelated issues
- Distort: Twisting facts to create a different narrative
LEARN MORE: 8 Ways for Security Leaders to Protect Their Organizations from Narrative Attacks
The Way Forward
Here are five key takeaways for organizational leaders about the narrative attack tactics disrupting elections around the world in 2024:
- AI-powered narrative attack campaigns pose an unprecedented threat to elections and democratic processes globally. Organizations must stay vigilant and proactively monitor for potential narrative attacks, deepfakes, bot networks, and other sophisticated tactics that could impact their brand, executives, or stakeholders.
- Narrative attack tactics are becoming increasingly complex, combining multiple techniques such as AI-generated content, impersonation, content amalgamation, and the use of unwitting individuals to spread false narratives. Leaders should invest in advanced threat detection solutions and educate their teams about these evolving risks.
- Bad actors often exploit societal vulnerabilities, such as racial tensions, economic inequalities, or political polarization, to amplify discord and undermine trust. Organizations must be prepared to respond quickly and transparently to narrative attacks targeting their brand or industry while promoting unity and fact-based discourse.
- The “deny, distract, distort” strategy employed by bad actors when caught spreading narrative attacks highlights the importance of robust crisis communication plans. Leaders should have clear protocols to address false accusations, focus on facts, and control the narrative to minimize reputational damage.
- Collaboration between organizations, government agencies, and tech platforms will be crucial in combating narrative attacks. Leaders should actively participate in industry initiatives, share best practices, and support research and development efforts to counter AI-powered narrative attacks and protect the integrity of information ecosystems.
Democracies worldwide are grappling with this new wave of AI-powered narrative attacks. From deepfakes flooding social media feeds in India to bot networks amplifying extremist messages in the UK, this threat vector not only undermines voter confidence but is also often used to target brands, organizations, and high-profile individuals such as influencers, celebrities, CEOs, and CISOs.
Book a demo to learn how Blackbird.AI detects narrative attacks.