Black Hat 2024: Foreign Influence Operations Evolve as Narrative Attacks Become More Sophisticated
By Dan Patterson
At Black Hat 2024, former NATO analyst Franky Saegerman reveals the intricate strategies and real-world impact of state-sponsored narrative attack campaigns that seek to manipulate global narratives.
The landscape of foreign influence operations is undergoing a rapid evolution as state actors, capitalizing on the rapid advancement of technology and the inherent vulnerabilities of an increasingly interconnected world, deploy sophisticated narrative attack tactics with unprecedented precision and scale. These campaigns, crafted to exploit societal fault lines and manipulate public discourse, pose a growing threat to the integrity of democratic processes and the stability of nations across the globe.
DOWNLOAD PRESENTATION: Franky Saegerman – Foreign Information Manipulation and Interference
In a presentation on Wednesday at the annual Black Hat cybersecurity conference in Las Vegas, Franky Saegerman, a former NATO analyst specializing in information warfare, exposed the evolving nature of these campaigns, revealing the complex strategies, real-world impact, and emerging threats posed by foreign information manipulation and interference (FIMI). By dissecting the tactics employed by state actors and examining case studies from around the world, Saegerman underscored the urgent need for a comprehensive, multi-faceted approach to counter this growing menace.
What are Narrative Attacks and FIMI?
Narrative attacks involve the deliberate spread of false information to deceive. FIMI, on the other hand, includes a range of mostly non-illegal activities aimed at manipulating information environments and influencing political processes. Saegerman emphasizes that while all FIMI involves narrative attacks, not all narrative attacks qualify as FIMI. The defining characteristics of FIMI are its intentional, manipulative, and coordinated nature.
The ABC(DE) Model
The ABC(DE) model, developed by James Pamment from Lund University in Sweden, provides a comprehensive framework for understanding narrative attack campaigns. This model helps analysts effectively dissect and counteract narrative attack efforts:
- Actors: Who is behind the narrative attack?
- Behavior: What patterns and tactics are being used?
- Content: What is the narrative attack about?
- Distribution: How does a narrative attack spread?
- Effects: What impact does the narrative attack have?
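As a minimal sketch of how the model might be applied in practice, the five dimensions can be captured as a structured record that an analyst fills in per campaign. The class and the example field values below are illustrative assumptions drawn from the Doppelganger case discussed later in this article, not part of Saegerman's presentation or an authoritative attribution:

```python
from dataclasses import dataclass

@dataclass
class FIMICampaign:
    """One narrative attack campaign, structured along the ABC(DE) model."""
    actors: list[str]        # A: who is behind the narrative attack
    behavior: list[str]      # B: recurring patterns and tactics
    content: str             # C: what the narrative attack is about
    distribution: list[str]  # D: how the narrative attack spreads
    effects: list[str]       # E: observed impact

# Illustrative record based on the Doppelganger case study;
# values are for demonstration only.
doppelganger = FIMICampaign(
    actors=["Russia-aligned operators"],
    behavior=["cloned media websites", "fake social media profiles"],
    content="pro-Russian narratives discrediting Ukraine",
    distribution=["spoofed news domains", "inauthentic social accounts"],
    effects=["EU sanctions", "reach far larger than first estimated"],
)
```

Structuring observations this way makes campaigns comparable across the five dimensions, which is the point of the framework: two operations with different content can still share actors, behavior, or distribution infrastructure.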
Narrative attack campaigns often follow predictable patterns and employ several recurring tactics:
- Exploiting Cracks: Identifying and exploiting societal vulnerabilities.
- Creating a Big Lie: Spreading a significant falsehood.
- Kernel of Truth: Wrapping lies around a small element of truth.
- Concealing the Source: Hiding the origin of the narrative attack.
- Using “Useful Idiots”: Leveraging individuals who unwittingly spread false narratives.
- Deny, Distract, Distort: When caught, deny the accusations, distract from the facts, and distort the narrative.
- Playing the Long Game: Sustaining narrative attack efforts over a prolonged period.
Case Studies
Doppelganger
The Doppelganger operation is a good example of a sophisticated narrative attack operation characterized by its innovative tactics and wide-reaching impact. First exposed in 2022 and later sanctioned by the European Union, this Russian campaign targeted various European audiences. Doppelganger used cloned websites of legitimate media outlets, fake social media profiles, and manipulated content to spread pro-Russian narratives and discredit Ukraine. By mimicking the appearance of credible news sources, Doppelganger effectively deceived the public, promoting false information about the Ukraine war and other geopolitical issues.
Although the operation was initially believed to have a limited scope, research by several private firms revealed that its reach was significantly larger than previously estimated, impacting five to ten times more individuals than researchers first thought. The campaign’s persistence and adaptability, including the ability to quickly respond to current events and evade sanctions, underscore the challenges in combating such operations.
Despite efforts from various stakeholders, including legal actions and platform takedowns, the Doppelganger operation is still active. The European Union and the United States have imposed sanctions on individuals and entities involved, but the operation continues to evolve. This ongoing activity highlights the need for more robust and coordinated international efforts to counteract these narrative attack campaigns. Measures such as better regulation of domain names, enhanced platform accountability, and improved data access for researchers are crucial steps towards mitigating the impact of operations like Doppelganger.
The Rise of “Pink Slime” Websites
The rise of “pink slime” websites—fake websites, often AI-generated, designed to appear authentic—reflects a troubling trend in narrative attacks. These websites are frequently backed by agenda-driven groups or foreign entities, aiming to spread false narratives by leveraging the credibility typically associated with traditional local news. As of mid-2024, more than 1,265 such sites operate in the U.S., surpassing the 1,213 genuine local newspapers that remain. This shift exacerbates the existing decline in local journalism.
The proliferation of these sites is particularly concerning as they fill the void left by the closure of many local newspapers. More than 2,900 newspapers have shut down since 2005, creating “news deserts” where communities have limited access to reliable local news. This environment is ripe for exploitation by entities seeking to erode trust in democratic institutions and manipulate public opinion. The use of AI to generate content for these pink slime sites further enhances their ability to produce vast amounts of false content quickly and efficiently, posing a significant threat to the integrity of information ecosystems.
Operation Paperwall
Operation Paperwall was a sophisticated narrative attack campaign orchestrated by Chinese entities posing as local news outlets in various countries to disseminate pro-Beijing narratives. The operation involved at least 123 websites that appeared to be local news sources in over 30 countries, including Turkey, Brazil, South Korea, Japan, and Russia. These sites often republished content from Chinese state media alongside local news stories, creating a veneer of legitimacy while promoting Beijing’s geopolitical interests and discrediting critics of the Chinese government.
One of Operation Paperwall’s critical tactics was to blend false narratives with legitimate news, making it difficult for readers to distinguish between the two. These websites carried out targeted attacks on Beijing’s critics and spread conspiracy theories, such as unfounded claims about the U.S. government conducting human experiments. The content was often syndicated across multiple sites simultaneously, amplifying its reach. Despite the relatively low traffic to these sites, the concern is that their growing number and localized content may eventually attract unsuspecting readers, further spreading false narratives and influencing public opinion globally.
Operation Overload
Operation Overload is a sophisticated narrative attack campaign orchestrated by pro-Russian actors to overwhelm fact-checkers, newsrooms, and researchers. The primary tactic involves flooding these media organizations with anonymous emails containing links to fabricated content, often focused on anti-Ukraine narratives. This strategy aims to deplete the resources of credible information ecosystems, forcing journalists and fact-checkers to spend excessive time and effort verifying and debunking these false claims, thus reducing their ability to focus on genuine news stories.
The operation is highly coordinated, utilizing networks of messenger app channels, inauthentic social media accounts, and Russia-aligned websites. This multi-layered approach, termed “content amalgamation,” blends various manipulated content into cohesive, fabricated narratives. These narratives are then strategically amplified across different platforms, creating a false sense of urgency and legitimacy. For instance, fake emails and videos are disseminated, often linking false narratives to real-world events to enhance their credibility and impact.
Operation Overload’s scale and sophistication are evident in its extensive reach, targeting over 800 organizations across Europe, particularly in France and Germany. The campaign exploits significant events like the Paris Olympics to maximize its disruptive potential. Despite efforts to curb these activities, social media platforms have struggled to effectively manage and dismantle the inauthentic networks driving these narrative attacks. The operation serves the Kremlin’s agenda and aims to create societal divisions by spreading misleading information about politically sensitive topics.
Additional Real-World Examples of Narrative Attack Campaigns
- Support for Ukraine: Narrative attack efforts have targeted Western support for Ukraine, attempting to influence political decisions.
- Attack on Kyiv Children’s Hospital: False information about a missile strike on a children’s hospital in Kyiv gained significant traction, highlighting the emotional manipulation often employed in narrative attack campaigns.
- UK Elections: Deepfakes, bot-like social media amplification, and false narratives about postal voting fraud and climate change have sought to influence voters and disrupt electoral processes.
- Alexei Navalny’s Death: Hyper-agenda-driven communities spread false narratives about Navalny’s death.
- The 2024 Summer Olympics in Paris: Since June 2023, several prominent Russian influence actors, identified by Microsoft as Storm-1679 and Storm-1099, have shifted their operations to concentrate on the Olympics.
LEARN MORE: What Is Narrative Intelligence?
The Threat of AI-Generated Deepfakes
AI-generated deepfakes represent a new frontier in narrative attacks. Audio deepfakes, in particular, are easier and cheaper to produce than video deepfakes, making them a potent tool for spreading false information. Saegerman points to OpenAI’s exposure of covert influence campaigns utilizing AI to generate and disseminate content across multiple languages and platforms, underscoring the potential for AI to turbocharge narrative attacks.
For example, a minute of someone’s voice is enough to generate a convincing audio deepfake using off-the-shelf generative AI tools. These audio deepfakes are especially dangerous because they are difficult to identify, are often designed to provoke an emotional response, and frequently travel quickly across the social web and messaging apps.
The dissemination of deepfakes can undermine public trust in authentic information sources and democratic institutions. Social media platforms are often slow to detect and remove such content, allowing deepfakes to spread widely before being addressed. This delay in response can result in significant damage, especially if the deepfake is released close to critical events like elections, leaving little time for debunking. Some states in the U.S. have enacted laws to criminalize the creation and distribution of politically motivated deepfakes during election seasons. However, the effectiveness of these laws is still being determined, and they are unlikely to deter foreign entities intent on using deepfakes for narrative attack campaigns.
To combat the threat of AI-generated deepfakes, there are ongoing efforts to develop detection tools and regulatory frameworks. The World Economic Forum’s Digital Trust Initiative aims to counter harmful online content, including deepfakes, by promoting the responsible design and deployment of AI systems.
LEARN MORE: What is Cognitive Security?
Hybrid Warfare: Blurring the Lines
Hybrid warfare is an evolving form of conflict that strategically blends conventional military tactics with cyber and information operations. This approach aims to achieve strategic objectives by exploiting the gray areas between peace and war, making it challenging to identify explicit acts of aggression. Narrative attacks play a critical role in hybrid warfare, complicating attribution and response efforts by blurring the lines between state and non-state actors. By creating ambiguity, hybrid warfare undermines the target state’s ability to respond effectively, leveraging techniques such as cyberattacks, economic coercion, and the spread of false information to erode public trust and destabilize institutions.
A comprehensive strategy to counter hybrid threats involves integrating cyber defense, information security, and public awareness. Experts emphasize the need for robust measures to detect, deter, and respond to hybrid attacks. NATO, for instance, has developed strategies to improve the resilience of its member states against such threats, focusing on enhanced situational awareness, strategic communications, and joint civil-military responses. This multifaceted approach includes training and exercises to prepare for hybrid scenarios and cooperation with international partners to share knowledge and best practices. By building public trust and fortifying critical infrastructure, nations can better defend against hybrid warfare’s pervasive and covert nature.
Saegerman’s Black Hat speech underscores the high stakes of the battle against narrative attacks. As state actors refine their tactics and exploit new technologies, governments, businesses, media organizations, and the public must build strategies to protect the integrity of information and safeguard democratic processes.
To learn more about how Blackbird.AI can help you with election integrity, book a demo.