The Path of Disinformation Intervention

Richard Clarke

Even before a world flush with bots and Bitcoin, disinformation was wielded constantly. I first became familiar with its potency as Deputy Assistant Secretary of State for Intelligence during the Reagan administration. The KGB was ramping up a plot to convince the world that HIV/AIDS was a bioweapon developed by the Pentagon to erode trust in the US government abroad and from within its borders. Operation INFEKTION, as the KGB called it, was a massive effort involving fake scientific studies and journal articles, the sponsorship of foreign news outlets, and the recruitment of local agents to substantiate and disseminate the lie.

Some of these tactics, like false stories and trusted personas, likely sound familiar. They are now being leveraged by a far wider range of state-backed and non-state actors to achieve political, financial, or geopolitical gain. More importantly, engagement-driving algorithms and infinitely scalable computing resources have made what were once enormously expensive campaigns like INFEKTION far more accessible and faster to spin up. Those technological developments, which manifest in phenomena like bot networks and automated story generation, have changed both who is being attacked and who is attacking.

Individual bits of misinformation are not meaningful in their own right. A piece of misinformation persists in proportion to the number of nodes in a narrative constellation to which it can connect. This is why conspiracy theories continuously recast new individuals, groups, organizations, and institutions as linked adversaries. The 2016 and 2020 elections, the COVID-19 pandemic, and the Russia-Ukraine conflict have shed light on the groups within our society that are most likely to spread misinformation. On the other hand, we have very few ways of detecting when, or at whose hands, deliberate disinformation activity takes place. We have still fewer tools to keep it from happening.

Blackbird.AI’s capability to analyze patterns in where disinformation originates, who amplifies it, and which channels disseminate it is essential for real-time mitigation and longer-term prevention. Instead of merely examining the fidelity of a narrative, its Constellation platform tracks the audiences, pathways, and inflection points (e.g., “going viral”) in a disinformation narrative’s lifecycle. The platform gives would-be target organizations the power to anticipate future reputational and material risks by tracking groups who collectively manipulate information and by correlating open-source insights to criminal activity on the Dark Web.

We are increasingly seeing a convergence of interests between groups facilitating disruptive (or even destructive) cyber attacks and those facilitating the spread of disinformation. Both types of actors rely on the same information infrastructure (e.g., the Domain Name System) to reach their targets, enlist hijacked and bot-driven social media accounts to gain trust and access, and often seek to malign and extort the same kinds of commercial or government targets.

The total cost of cybercrime in 2021 amounted to nearly $6 trillion worldwide, including an explosion in ransomware attacks that wreaked havoc on critical supply chains. Having warned about the likelihood of those disruptions in the nation’s first National Strategy to Secure Cyberspace back in 2003, I was hardly surprised that some entities proved vulnerable to attack. What was more surprising was how quickly attackers adopted new strategies. It took cyber extortionists very little time to realize there was no need to slap locks on a victim organization’s network. They could just as easily exfiltrate sensitive data from a target’s network, publicly threaten to disclose it, and expect a similar payday. The next step, requiring even less effort from cybercriminals, is to inflict reputational damage until companies pay them to stop.

The same tactics can just as easily target commercial entities’ reputations to move markets, to distract from other malicious activity like cyber attacks, or simply as part of conspiracy theories (as in the cases of Wayfair, Nike, and Dominion Voting Systems). With diminishing barriers to entry and a proliferating range of actors, we need a tool to mitigate disinformation’s spread once it is out there. The capability of Blackbird’s AI-driven engine to scale its analysis at the speed at which disinformation spreads shows that such an intervention has already arrived.

To learn more about how Blackbird.AI can help you with election integrity, book a demo.

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Our AI-driven Narrative Intelligence Platform identifies key narratives that impact your organization/industry, the influence behind them, the networks they touch, the anomalous behavior that scales them, and the cohorts and communities that connect them. This information enables organizations to proactively understand narrative threats as they scale and become harmful, supporting better strategic decision-making. A diverse team of AI experts, threat intelligence analysts, and national security professionals founded Blackbird.AI to defend information integrity and fight a new class of narrative threats. Learn more at Blackbird.AI.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.