WEBINAR: Cognitive Warfare and the Gray Zone: Defending Perception in the Age of Narrative Attacks

Blackbird.AI hosted a private webinar for senior industry leaders, bringing together strategic perspectives on a critical challenge: how organizations can defend against cognitive warfare as the boundary between real and manufactured information rapidly erodes.


Larysa Lacko, Director of National Security at Blackbird.AI, moderated the discussion with Dr. Andrea Hickerson, Dean of the School of Journalism and New Media and Director of the Center for Information Advantage and Effectiveness at the University of Mississippi, and Dr. Pablo Breuer, co-founder of the DISARM Foundation.

The Center for Information Advantage and Effectiveness is an interdisciplinary research center dedicated to understanding the digital information environment. The DISARM Foundation is an independent, open-source organization focused on helping governments, institutions, and civil society detect and counter influence operations and disinformation campaigns. DISARM maintains a shared global framework that maps how information manipulation campaigns operate and what countermeasures are effective.

The conversation explored the transformation of the internet and information environment, and practical strategies for defending against narrative manipulation across sectors. As coordinated narrative attacks become more sophisticated and automated by AI, organizations need frameworks for early detection to protect their operations and reputations.

WATCH: Cognitive Warfare and the Gray Zone: Defending Perception in the Age of Narrative Attacks

The Current Information Environment

The internet was supposed to democratize information and give everyone a voice to break down power structures. But the digital reality differs dramatically from what early internet pioneers envisioned.

Dr. Hickerson opened with a reflection on how wrong early researchers were about the future of the internet and what it would become. “When I started doing academic research, a lot of the attitude about the internet and online communication was celebratory,” she explained. “People are going to be able to write their own narratives and break power structures. I look back on that, and it makes me cringe a little bit.”

What actually happened? Traditional media organizations were slow to adapt, essentially just shoveling their print content onto websites without embracing what digital communication could actually do. That opened the door for big tech to take over the narrative in ways nobody anticipated.

The result is an information environment characterized by chaos from the consumer perspective. “You see things in a stream of news where you might see something credible next to something that’s not credible,” Dr. Hickerson noted. “It’s delivered to you by a kind of black box AI. So you don’t know what choices are being embedded and how that content gets to you.”

For many people, the response to this overwhelming chaos is simple: they give up. They tune out. Dr. Hickerson noted that this apathy is sometimes exactly what hostile actors are trying to achieve.

How Echo Chambers Eroded Common Ground

Dr. Breuer drew a stark contrast between the information environment of the 1980s and today. “You would listen to the broadcast, and you would agree with it or disagree with it,” he said. “But when you went to talk to your neighbor about what was going on in the world, you had this shared view of what was presented.”

Dr. Breuer explained that this shared reality enabled civil discourse. People could debate policy because they at least agreed on the basic facts.

Social media shattered that foundation. Now people get siloed into echo chambers where algorithms feed them content that confirms their existing beliefs. Dr. Breuer pointed out that if you compared how some of today’s top news organizations cover the same events, and how their respective viewers understand them, you would find no overlap at all. “There’s no common ground for them to have that civil discourse,” he said. “And therefore, we can’t solve problems in a democracy.”

Dr. Breuer pointed out what many miss: the real damage didn’t start with generative AI. It happened years earlier, when algorithms first began deciding what content people would see. “The bigger harm was caused way before gen AI,” Dr. Breuer explained. “It was when the algorithms decided what you were going to see on social media. And so it sticks you in this echo chamber where now we hyper-radicalize because literally everything you see is driven by your own echo chamber and your own biases.”

The Overton Window: A Tool for Self-Awareness

One of the most practical ideas Dr. Breuer introduced was the concept of applying the Overton window to social media consumption. The Overton window, a political science concept, describes the range of ideas considered acceptable in public discourse at any given time.

On any given topic, there’s a spectrum of viewpoints that fall within mainstream discussion and others that sit far outside it. The challenge is that when you’re deep in an echo chamber, you lose perspective on where your views sit relative to broader society.

“One of the things you could do is if you were on social media, as you’re reading things, it would be really great if social media would provide you an Overton window,” Dr. Breuer suggested. The platform would essentially tell you: the things you’re viewing right now are either within the normal range of discussion or way outside the norms of what most people are seeing.

“Just that awareness of, most people don’t believe what I do. I am way outside the standard deviation for normal views on this. Just that little bit of awareness might lead people to go, well, maybe I need to do some more reading. So a little bit of self-awareness goes a very long way.”

It’s a simple idea with profound implications for helping people recognize when they’ve been captured by an echo chamber. It is not intended to suppress diverse viewpoints, but to provide context about where those viewpoints sit in the broader conversation.
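To make Dr. Breuer’s standard-deviation framing concrete, here is a minimal sketch of how such a signal might be computed, assuming a platform could assign each piece of content a stance score on a topic and sample the population’s distribution of views. The function, scoring scale, and threshold are hypothetical illustrations, not a feature of any existing platform.

```python
from statistics import mean, stdev

def overton_signal(user_feed_scores, population_scores, threshold=2.0):
    """Tell a user how far their feed sits from the mainstream on one topic.

    user_feed_scores: stance scores (here -1.0 to 1.0) for items in one
    user's feed; population_scores: stance scores for a broad sample of
    content on the same topic. Both scales are hypothetical.
    """
    pop_std = stdev(population_scores)
    if pop_std == 0:
        return "Not enough variation in population data to compare."
    # How many standard deviations the user's feed average sits from the
    # population average -- Dr. Breuer's "outside the standard deviation".
    z = (mean(user_feed_scores) - mean(population_scores)) / pop_std
    if abs(z) > threshold:
        return (f"Your feed is {abs(z):.1f} standard deviations "
                "outside typical views on this topic.")
    return "Your feed falls within the typical range of views on this topic."

# A feed clustered at one extreme of a broadly distributed population:
print(overton_signal([0.9, 0.95, 0.85],
                     [-0.8, -0.3, 0.0, 0.2, 0.5, -0.1, 0.3]))
```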

The DISARM Framework and Understanding How Disinformation Operations Work

Dr. Breuer clarified an important distinction about the DISARM Framework: it doesn’t tell you what is or isn’t misinformation. “The DISARM framework explains how misinformation attacks happen. It doesn’t tell you what misinformation is,” he explained. “That’s really left to the analyst and the consumer.”

He is often challenged about what qualifies him to decide what’s true. His answer? “I don’t. Here’s the framework. You get to decide for yourself.”

The framework maps disinformation operations across four phases: planning, preparation, execution, and evaluation. AI can be incredibly useful in three of those phases, particularly for defenders trying to keep track of the massive amount of information crossing the internet, identifying emerging themes, and analyzing sentiment at scale.
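To make the phase model easier to picture, here is a minimal sketch of how a defender might log observed activity against those four phases. The data structures and example entries are hypothetical illustrations, not the actual DISARM schema, which is far richer.

```python
from dataclasses import dataclass, field

# The four phases named in the discussion; the example entries below are
# hypothetical illustrations, not items from the actual DISARM framework.
PHASES = ("planning", "preparation", "execution", "evaluation")

@dataclass
class Observation:
    description: str
    phase: str

    def __post_init__(self):
        if self.phase not in PHASES:
            raise ValueError(f"unknown phase: {self.phase}")

@dataclass
class IncidentLog:
    observations: list[Observation] = field(default_factory=list)

    def add(self, description: str, phase: str) -> None:
        self.observations.append(Observation(description, phase))

    def by_phase(self, phase: str) -> list[str]:
        return [o.description for o in self.observations if o.phase == phase]

log = IncidentLog()
log.add("Burst of new accounts with near-identical naming patterns", "preparation")
log.add("Identical claim posted across fringe forums within minutes", "execution")
print(log.by_phase("execution"))
```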

But Dr. Breuer was clear about where AI shouldn’t be used. “If you’re going to put yourself out there as the shining knight in spotless armor, the minute that you use AI to create content, people are going to go, well, that’s fake content,” he cautioned. “Using bots to proliferate information, even if it’s true, is not a good use. It seems disingenuous.”

The Problem of Attribution: When Anonymity Enables Deception

Dr. Breuer highlighted a recurring challenge throughout the discussion: attribution. Online anonymity creates massive vulnerabilities for defenders trying to assess threats.

“When you see disinformation—information that is intentionally false in either content or context—almost invariably, what you see is the poster of that information, the originator of that information misrepresents who they are,” Dr. Breuer explained.

Dr. Breuer noted that this distinction matters. In the physical world, accountability mechanisms exist. Online, those guardrails disappear. The account spreading false information about a data breach could be operated by a bot network, a foreign intelligence service, or an individual using a synthetic identity carefully cultivated over months or years, he explained.

Understanding who sits behind an account, or, more precisely, identifying patterns of inauthentic behavior, is essential for distinguishing genuine public sentiment from coordinated manipulation campaigns.
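As one concrete example of what a pattern of inauthentic behavior can look like, here is a minimal sketch of a common heuristic: flagging identical text posted by several distinct accounts within a narrow time window. The data shapes and thresholds are illustrative assumptions; real detection systems weigh many more signals.

```python
from collections import defaultdict

def find_coordinated_posts(posts, window_seconds=60, min_accounts=3):
    """Flag identical text posted by several accounts in a short window.

    posts: iterable of (account_id, text, timestamp_seconds) tuples.
    Thresholds and shapes are illustrative, not a production detector.
    """
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    flagged = []
    for text, events in by_text.items():
        timestamps = sorted(ts for _, ts in events)
        accounts = {acct for acct, _ in events}
        # Identical text from several distinct accounts within a minute is
        # unlikely to be organic sharing.
        if (len(accounts) >= min_accounts
                and timestamps[-1] - timestamps[0] <= window_seconds):
            flagged.append((text, sorted(accounts)))
    return flagged

posts = [
    ("acct_a", "Breaking: the company hid the breach!", 0),
    ("acct_b", "Breaking: the company hid the breach!", 12),
    ("acct_c", "Breaking: the company hid the breach!", 40),
    ("acct_d", "Lovely weather today", 15),
]
print(find_coordinated_posts(posts))
```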

Determining Response Thresholds

How do you know when a narrative crosses the threshold from background noise to genuine threat?

Dr. Hickerson offered a straightforward starting point: “I would start asking myself, what is at stake if this is out there?” Volume alone isn’t a good indicator. What determines risk is the mechanics behind the spread: who is amplifying it, whether it is organic or coordinated, and whether it is jumping from fringe platforms to mainstream ones.
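As a thought experiment, those spread signals could be folded into a simple triage score. The field names and weights below are hypothetical placeholders, not a validated model; the point is only that coordination and platform crossover should outweigh raw volume.

```python
def narrative_risk(signals: dict) -> int:
    """Toy triage score combining the spread signals described above.

    The weights are illustrative placeholders, not a validated model.
    """
    weights = {
        "coordinated_amplification": 3,  # inorganic boosting observed
        "crossed_to_mainstream": 3,      # jumped from fringe to major platforms
        "high_stakes_topic": 2,          # touches core operations or reputation
        "high_volume": 1,                # volume alone is a weak signal
    }
    return sum(w for key, w in weights.items() if signals.get(key))

# A quiet but coordinated narrative crossing to mainstream platforms (6)
# outscores a loud but organic one (1), matching the panel's point.
print(narrative_risk({"coordinated_amplification": True,
                      "crossed_to_mainstream": True}))
print(narrative_risk({"high_volume": True}))
```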

Dr. Breuer emphasized that the answer depends on who you are as an organization. For government institutions, certain types of narratives pose fundamental threats to core functions. For global corporations, false narratives about labor practices in countries where they sell products could be devastating. For local businesses, those same narratives might be irrelevant. “You really have to decide which of the battles are worth fighting, which of the battles are not worth fighting,” he said. “Tabletop exercises go a long way.”

Vulnerabilities Across Sectors

The discussion explored how different sectors face unique vulnerabilities to narrative attacks. Dr. Hickerson pointed out that local news represents a significant security concern: while the public still trusts local outlets, those outlets lack the resources to investigate sophisticated manipulation campaigns. She explained that adversaries exploit this gap by targeting smaller, under-resourced publications first.

Dr. Breuer illustrated this pattern with a historical example of how disinformation operations work. False narratives are often planted in smaller, less scrutinized publications, then cited by progressively larger outlets until they reach mainstream coverage. “You target something small, and then you use that as a reference in ever more trusted pieces of information,” Dr. Breuer said.

Corporate sectors face similar threats. Last year’s World Economic Forum threat report found that misinformation and disinformation ranked among the top three threats across sectors, including academia, manufacturing, and agriculture. Dr. Breuer recommends that organizations communicate their practices proactively rather than waiting for false narratives to emerge. “If you push those things out ahead of time and you pre-bunk, when those disinformation narratives come out, you can just point back to your website,” Dr. Breuer explained. “That hits very differently than if you’re trying to play catch-up and do damage control after the fact.”

Building Resilience Through Individual Accountability

Building resilience also requires individual responsibility. Dr. Hickerson emphasized that people have agency in what they share: “We need to think about the consequences that we as individuals have in this, and we’re not passive actors in that.”

Dr. Breuer offered practical advice: “If you read something or you see something and your first reaction is emotional, you’re 100% being manipulated.” Understanding that manipulation is the first step toward making informed decisions about how to respond.

Despite the challenges in the current information environment, Dr. Breuer ended on an optimistic note. When he started working on defense against misinformation in 2017, nobody was talking about it. “Now we’re talking about it in webinars. Now you can turn on nightly news and hear news stories about misinformation and disinformation.”

The first step to solving a problem is recognizing it exists. Society is having that conversation now in ways it wasn’t even five years ago. “We just as a society need to decide what is acceptable and what is not acceptable within our own society. What are acceptable controls, if any. And accountability goes a long way,” Dr. Breuer said.

The Way Forward: Key Takeaways for Organizational Leaders

As organizations navigate an increasingly complex information environment in 2026, several imperatives emerged from the discussion:

Build cross-sector partnerships. No single organization can defend against cognitive warfare alone. Effective defense requires trusted relationships across government, media, and private sectors.

Invest in proactive communication. Organizations that wait until they’re under attack will always be on defense. Building trust and communicating practices before narratives emerge creates a foundation for effective crisis response.

Develop shared frameworks and language. Tools like the DISARM Framework enable different communities to communicate effectively about threats. A common language makes collaboration possible.

Prioritize detection over reaction. Understanding patterns of narrative manipulation and spotting emerging threats while they’re still small provides the time and space to respond effectively rather than merely react to damage.

Recognize this is a societal challenge. While organizations need capabilities to defend themselves, building true resilience requires addressing information literacy, platform accountability, and individual responsibility across society.

The threat landscape will continue evolving. But organizations that start building their defenses now, invest in narrative intelligence, forge partnerships across sectors, and move from reactive crisis management to proactive defense will be far better positioned to navigate whatever comes next.

Perception has become a critical attack surface. It’s time to defend it with the same rigor organizations apply to their networks, systems, and data.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.