Black Hat 2024: Foreign Influence Operations Evolve as Narrative Attacks Become More Sophisticated

At Black Hat 2024, former NATO analyst Franky Saegerman reveals the intricate strategies and real-world impact of state-sponsored disinformation campaigns that seek to manipulate global narratives.

Dan Patterson

The landscape of foreign influence operations is undergoing a rapid evolution as state actors, capitalizing on advances in technology and the inherent vulnerabilities of an increasingly interconnected world, deploy sophisticated disinformation tactics with unprecedented precision and scale. These campaigns, crafted to exploit societal fault lines and manipulate public discourse, pose a growing threat to the integrity of democratic processes and the stability of nations across the globe.

DOWNLOAD PRESENTATION: Franky Saegerman – Foreign Information Manipulation and Interference

In a presentation on Wednesday at the annual Black Hat cybersecurity conference in Las Vegas, Franky Saegerman, a former NATO analyst specializing in information warfare, exposed the evolving nature of these campaigns, detailing the complex strategies, real-world impact, and emerging threats posed by foreign information manipulation and interference (FIMI). By dissecting the tactics employed by state actors and examining case studies from around the world, Saegerman underscored the urgent need for a comprehensive, multi-faceted approach to counter this growing menace.

Former NATO analyst Franky Saegerman discusses the convergence of cyber attacks and narrative attacks at Black Hat 2024.

What is Disinformation and FIMI?

Disinformation involves the deliberate spread of false information to deceive. FIMI, on the other hand, covers a range of mostly non-illegal activities aimed at manipulating information environments and influencing political processes. Saegerman emphasizes that while all FIMI involves disinformation, not all disinformation qualifies as FIMI. The defining characteristics of FIMI are its intentional, manipulative, and coordinated nature.

The ABC(DE) Model

The ABC(DE) model, developed by James Pamment from Lund University in Sweden, provides a comprehensive framework for understanding disinformation campaigns. This model helps analysts effectively dissect and counteract disinformation efforts:

  • Actors: Who is behind the disinformation?
  • Behavior: What patterns and tactics are being used?
  • Content: What is the disinformation about?
  • Distribution: How does disinformation spread?
  • Effects: What impact does the disinformation have?
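
To make the framework concrete, the sketch below shows one way an analyst might record a campaign along the five ABC(DE) dimensions. This is a minimal, hypothetical Python example; the class, field names, and sample values are illustrative assumptions, not part of Pamment's model or Saegerman's presentation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FIMICampaign:
    """Hypothetical record of a disinformation campaign, organized along the ABC(DE) dimensions."""
    actors: List[str]                                  # Who is behind the disinformation?
    behavior: List[str]                                # Patterns and tactics being used
    content: str                                       # What the disinformation is about
    distribution: List[str]                            # How the disinformation spreads
    effects: List[str] = field(default_factory=list)   # Observed impact, if any

# Illustrative entry; the values are assumptions for demonstration only.
example = FIMICampaign(
    actors=["state-aligned operators"],
    behavior=["cloned media websites", "fake social media profiles"],
    content="narratives discrediting a targeted government",
    distribution=["social media", "spoofed news domains"],
    effects=["reach larger than initially estimated"],
)
print(example)
```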

Disinformation campaigns often follow predictable patterns and employ several recurring tactics:

  • Exploiting Cracks: Identifying and exploiting societal vulnerabilities.
  • Creating a Big Lie: Spreading a significant falsehood.
  • Kernel of Truth: Wrapping lies around a small element of truth.
  • Concealing the Source: Hiding the origin of the disinformation.
  • Using “Useful Idiots”: Leveraging individuals who unwittingly spread disinformation.
  • Deny, Distract, Distort: When caught, deny the accusations, distract from the facts, and distort the narrative.
  • Playing the Long Game: Sustaining disinformation efforts over a prolonged period.

Case Studies

Doppelganger

The Doppelganger operation exemplifies a sophisticated disinformation campaign characterized by innovative tactics and wide-reaching impact. First exposed in 2022, this Russian campaign targets various European audiences and has been sanctioned by the European Union. Doppelganger uses cloned websites of legitimate media outlets, fake social media profiles, and manipulated content to spread pro-Russian narratives and discredit Ukraine. By mimicking the appearance of credible news sources, the operation effectively deceives the public, promoting false information about the war in Ukraine and other geopolitical issues.

Although the operation was initially believed to have a limited scope, research by several private firms revealed that Doppelganger’s reach was significantly larger than previously estimated, impacting five to ten times more individuals than researchers first thought. The campaign’s persistence and adaptability, including its ability to respond quickly to current events and evade sanctions, underscore the challenges in combating such operations.

Despite efforts from various stakeholders, including legal actions and platform takedowns, the Doppelganger operation is still active. The European Union and the United States have imposed sanctions on individuals and entities involved, but the operation continues to evolve. This ongoing activity highlights the need for more robust and coordinated international efforts to counteract these disinformation campaigns. Measures such as better regulation of domain names, enhanced platform accountability, and improved data access for researchers are crucial steps towards mitigating the impact of operations like Doppelganger.

The Rise of “Pink Slime” Websites

The rise of “pink slime” websites, fake news sites often built with AI-generated content and designed to appear authentic, reflects a troubling trend in misinformation and disinformation. These websites are frequently backed by agenda-driven groups or foreign entities that exploit the credibility typically associated with traditional local news. As of mid-2024, at least 1,265 such sites operate in the U.S., surpassing the number of genuine local newspapers, which stands at 1,213. This shift exacerbates the existing decline in local journalism.

The proliferation of these sites is particularly concerning as they fill the void left by the closure of many local newspapers. More than 2,900 newspapers have shut down since 2005, creating “news deserts” where communities have limited access to reliable local news. This environment is ripe for exploitation by entities seeking to erode trust in democratic institutions and manipulate public opinion. The use of AI to generate content for these pink slime sites further enhances their ability to produce vast amounts of disinformation quickly and efficiently, posing a significant threat to the integrity of information ecosystems.

Operation Paperwall

Operation Paperwall was a sophisticated disinformation campaign in which Chinese entities posed as local news outlets in various countries to disseminate pro-Beijing narratives. The operation involved at least 123 websites masquerading as local news sources in over 30 countries, including Turkey, Brazil, South Korea, Japan, and Russia. These sites often republished content from Chinese state media alongside local news stories, creating a veneer of legitimacy while promoting Beijing’s geopolitical interests and discrediting critics of the Chinese government.

One of the critical tactics of Operation Paperwall was to blend disinformation with legitimate news, making it difficult for readers to distinguish between the two. These websites carried out targeted attacks on Beijing’s critics and spread conspiracy theories, such as unfounded claims about the U.S. government conducting human experiments. The content was often syndicated across multiple sites simultaneously, amplifying its reach. Despite the relatively low traffic to these sites, the concern is that their growing number and localized content may eventually attract unsuspecting readers, further spreading disinformation and influencing public opinion globally.

Operation Overload

Operation Overload is a sophisticated disinformation campaign orchestrated by pro-Russian actors to overwhelm fact-checkers, newsrooms, and researchers. The primary tactic involves flooding these organizations with anonymous emails containing links to fabricated content, often focused on anti-Ukraine narratives. This strategy aims to deplete the resources of credible information ecosystems, forcing journalists and fact-checkers to spend excessive time and effort verifying and debunking false claims, thus reducing their ability to focus on genuine news stories.

The operation is highly coordinated, utilizing networks of messenger app channels, inauthentic social media accounts, and Russia-aligned websites. This multi-layered approach, termed “content amalgamation,” blends various manipulated content into cohesive, fabricated narratives. These narratives are then strategically amplified across different platforms, creating a false sense of urgency and legitimacy. For instance, fake emails and videos are disseminated, often linking false narratives to real-world events to enhance their credibility and impact.

Operation Overload’s scale and sophistication are evident in its extensive reach, targeting over 800 organizations across Europe, particularly in France and Germany. The campaign exploits significant events like the Paris Olympics to maximize its disruptive potential. Despite efforts to curb these activities, social media platforms have struggled to effectively manage and dismantle the inauthentic networks driving this disinformation. The operation serves the Kremlin’s agenda and aims to create societal divisions by spreading misleading information about politically sensitive topics.

Additional Real-World Examples of Disinformation Campaigns

  • Support for Ukraine: Disinformation efforts have targeted Western support for Ukraine, attempting to influence political decisions.
  • Attack on Kyiv Children’s Hospital: False information about a missile strike on a children’s hospital in Kyiv gained significant traction, highlighting the emotional manipulation often employed in disinformation campaigns.
  • UK Elections: Deepfakes, bot-like social media amplification, and false narratives about postal voting fraud and climate change misinformation have sought to influence voters and disrupt electoral processes.
  • Alexei Navalny’s Death: Hyper-agenda-driven communities spread false narratives about Navalny’s death.
  • The 2024 Summer Olympics in Paris: Since June 2023, several prominent Russian influence actors, identified by Microsoft as Storm-1679 and Storm-1099, have shifted their operations to concentrate on the Olympics.

LEARN MORE: What Is Narrative Intelligence?

The Threat of AI-Generated Deepfakes

AI-generated deepfakes represent a new frontier in disinformation. Audio deepfakes, in particular, are easier and cheaper to produce than video deepfakes, making them a potent tool for spreading false information. Saegerman points to OpenAI’s exposure of covert influence campaigns utilizing AI to generate and disseminate content across multiple languages and platforms, underscoring the potential for AI to turbocharge disinformation.

For example, a minute of someone’s recorded voice is enough to generate a convincing audio deepfake using off-the-shelf generative AI tools. These deepfakes are especially dangerous because they are difficult to identify, are often designed to provoke an emotional response, and frequently travel quickly across the social web and messaging apps.

The dissemination of deepfakes can undermine public trust in authentic information sources and democratic institutions. Social media platforms are often slow to detect and remove such content, allowing deepfakes to spread widely before being addressed. This delay can cause significant damage, especially when a deepfake is released close to critical events like elections, leaving little time for debunking. Some U.S. states have enacted laws criminalizing the creation and distribution of politically motivated deepfakes during election seasons. However, the effectiveness of these laws remains unproven, and they are unlikely to deter foreign entities intent on using deepfakes for disinformation campaigns.

To combat the threat of AI-generated deepfakes, there are ongoing efforts to develop detection tools and regulatory frameworks. The World Economic Forum’s Digital Trust Initiative aims to counter harmful online content, including deepfakes, by promoting the responsible design and deployment of AI systems.

LEARN MORE: What is Cognitive Security?


Hybrid Warfare: Blurring the Lines

Hybrid warfare is an evolving form of conflict that strategically blends conventional military tactics with cyber and information operations. This approach aims to achieve strategic objectives by exploiting the gray areas between peace and war, making it difficult to identify explicit acts of aggression. Disinformation plays a critical role in hybrid warfare, blurring the lines between state and non-state actors and complicating attribution and response efforts. By creating ambiguity, hybrid warfare undermines the target state’s ability to respond effectively, leveraging techniques such as cyberattacks, economic coercion, and the spread of false information to erode public trust and destabilize institutions.

A comprehensive strategy to counter hybrid threats involves integrating cyber defense, information security, and public awareness. Experts emphasize the need for robust measures to detect, deter, and respond to hybrid attacks. NATO, for instance, has developed strategies to improve the resilience of its member states against such threats, focusing on enhanced situational awareness, strategic communications, and joint civil-military responses. This multifaceted approach includes training and exercises to prepare for hybrid scenarios and cooperation with international partners to share knowledge and best practices. By building public trust and fortifying critical infrastructure, nations can better defend against hybrid warfare’s pervasive and covert nature.

Saegerman’s Black Hat presentation underscores the high stakes of the battle against disinformation. As state actors refine their tactics and exploit new technologies, governments, businesses, media organizations, and the public must build strategies to protect the integrity of information and safeguard democratic processes.

To learn more about how Blackbird.AI can help you with election integrity, book a demo.

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Our AI-driven Narrative Intelligence Platform identifies key narratives that impact your organization or industry, the influence behind them, the networks they touch, the anomalous behavior that scales them, and the cohorts and communities that connect them. This information enables organizations to proactively understand narrative threats as they scale and become harmful, supporting better strategic decision-making. A diverse team of AI experts, threat intelligence analysts, and national security professionals founded Blackbird.AI to defend information integrity and fight a new class of narrative threats. Learn more at Blackbird.AI.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.