How Compass by Blackbird.AI Uses Generative AI to Help Organizations Fight Narrative Attacks

I helped build Compass by Blackbird.AI to fight the growing threat of misinformation and disinformation in the generative AI age.

Posted by Vanya Cohen on May 1, 2024

For any organization, the difference between success and crisis can come down to a single social media post going viral. As bad actors exploit social media and artificial intelligence to spread misinformation and disinformation at an unprecedented scale, how can businesses, nonprofits, and especially governments stay ahead of emerging narrative risks?

To help address that problem, I’ve been building Compass by Blackbird.AI – a groundbreaking narrative intelligence platform that combines cutting-edge natural language processing with human-AI collaboration – to help organizations understand and navigate today’s complex information landscape. By automatically analyzing content across social media, news sites, blogs, and more to provide essential context, not just simplistic true/false labels, Compass by Blackbird.AI empowers users to think critically about the narratives impacting their brands, industries, and stakeholders. I believe Compass by Blackbird.AI represents our best path forward – and a capability every organization now needs.

Learn More: Tag Infosphere Report: How Misinformation and Disinformation Represent a New Threat Vector

I’m excited about Compass by Blackbird.AI because, for the first time, there is a scalable path toward automatically analyzing and debunking the increasing volume of misinformation and disinformation online. As a PhD student at The University of Texas at Austin, I saw that AI methods which leverage autonomy, as Compass by Blackbird.AI does, are often the most general and the best able to handle changing information environments. Bad actors are shaping the online information landscape and using GenAI tools to push narrative attacks; I think we need to give the good guys some powerful GenAI tools too.

Before my PhD, I was an undergraduate and graduate student at Brown University working on natural language understanding for robotics. In addition to published papers, I’ve released GenAI open-source datasets that have been downloaded millions of times. In 2019, I also co-authored the open-source replication of OpenAI’s GPT-2, which became the first publicly released open-source large language model (LLM).

I’m excited to say that as of this month I’m now working at Blackbird.AI full-time on Compass by Blackbird.AI. We launched a beta on Valentine’s Day 2024 to show our love of democracy. Really. That might sound trite, but I care because I’d like to live in a world with a lot less of the polarization that’s crippling our political process. This goal is at the core of Blackbird.AI’s mission, and Compass is the result of years of work at Blackbird on automatically identifying and debunking online misinformation.

Much like a human OSINT analyst, Compass by Blackbird.AI analyzes input social posts, images, videos, articles, and other content. It automatically performs background research and finds reliable sources to corroborate claims made in the content. Finally, it outputs a short summary of its findings along with the information sources it found. Navigating the online misinformation landscape is challenging, and I believe there is room for a variety of perspectives on issues. Instead of providing simple true/false labels for content, Compass by Blackbird.AI emphasizes adding context to online information over moderation or labeling. Rather than traditional automated fact-checking, which labels claims as true or false, we provide reliable information from a variety of reputable sources and thereby inform users instead of dictating what’s true to them. We call this context checking.
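To make the shape of that workflow concrete, here is a minimal sketch of a context-checking pipeline. This is not Compass’s actual implementation; the function names, the `SOURCE_INDEX` lookup table, and the naive sentence-splitting claim extractor are all stand-ins I invented for illustration. A real system would use NLP models for claim extraction and live search over news and web sources.

```python
from dataclasses import dataclass

@dataclass
class ContextReport:
    claim: str
    sources: list
    summary: str

# Hypothetical reference index; a real system would query live sources.
SOURCE_INDEX = {
    "vaccine": ["who.int/vaccine-safety", "cdc.gov/vaccines"],
}

def extract_claims(text: str) -> list:
    # Naive sentence split stands in for an NLP claim-extraction model.
    return [s.strip() for s in text.split(".") if s.strip()]

def gather_sources(claim: str) -> list:
    # Keyword lookup stands in for automated background research.
    return [url for key, urls in SOURCE_INDEX.items()
            if key in claim.lower() for url in urls]

def context_check(text: str) -> list:
    # For each claim: research it, then summarize what context was found,
    # rather than emitting a true/false verdict.
    reports = []
    for claim in extract_claims(text):
        sources = gather_sources(claim)
        summary = (f"Found {len(sources)} reputable source(s) for context"
                   if sources else "No corroborating sources found")
        reports.append(ContextReport(claim, sources, summary))
    return reports
```

The key design choice mirrored here is that the output is a per-claim report with sources attached, leaving the judgment to the human reader.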


I started working on the problem of automated context checking with Blackbird.AI in January 2019 when the company was in its very early stages. At that time the natural language processing tools available for context checking were not sufficiently advanced to create a reliable system. The company moved on and focused on other efforts to mitigate the spread of harmful content online.

Around the same time, OpenAI announced GPT-2, the first widely known large language model (LLM). However, they declined to release the model publicly, so in the interest of advancing open science, I decided to replicate it as OpenGPT-2, the first open-source release of an LLM. In the weeks following our release of OpenGPT-2, many companies, including OpenAI, decided to publicly release their LLMs to support open-source research. By the time I was hired by Blackbird.AI in 2020, the LLM revolution was underway. But the NLP models and tools available still did not enable a sufficiently reliable solution to automated context checking. It’s only in the last couple of years that NLP methods have advanced to the point of creating something like this beta version of Compass by Blackbird.AI.

Learn More: The Evolution of Misinformation and Disinformation Attacks and Their Organizational Risk

In the summer of 2023 at Blackbird, I created a proof-of-concept for what would become Compass by Blackbird.AI. The initial version only ran locally and, unlike the current version, could not scale to handle thousands of simultaneous users. But after years of trying, we finally had a version of automated context checking that worked and proved useful to its human users.

Automated context checking is a challenging problem and surfaces many weaknesses in existing AI techniques. Context checking is hard even for expert humans, and I think it forms an important proving ground for state-of-the-art AI techniques. Context checking in Compass by Blackbird.AI requires enabling AI to use tools, reason under uncertainty, and handle out-of-domain and novel information, touching on many important problems in AI.

First, the problem is inherently real-time. We want systems that can analyze new content without waiting for humans to re-train models or databases of debunked conspiracy theories. Language models trained on the web quickly fall out of date, as they do not know about events after their training data cutoff. We need methods that can stay up-to-date with current events without human intervention. 
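One common way to handle the recency problem is to route claims about post-cutoff events to live retrieval instead of relying on the model’s parametric knowledge. The sketch below is an illustrative assumption, not Compass’s design; the cutoff date and function names are hypothetical.

```python
from datetime import date

# Hypothetical training-data cutoff: events after this date cannot be
# stored in the model's parameters and must be researched live.
MODEL_CUTOFF = date(2023, 9, 1)

def needs_live_research(event_date: date) -> bool:
    # A post-cutoff event is, by definition, unknown to the model.
    return event_date > MODEL_CUTOFF

def route_claim(claim: str, event_date: date) -> str:
    # Route post-cutoff claims to retrieval; pre-cutoff claims can start
    # from model knowledge but should still be corroborated with sources.
    if needs_live_research(event_date):
        return f"retrieve current sources for: {claim}"
    return f"draft from model knowledge, then corroborate: {claim}"
```

The point of the sketch is that staying current becomes a retrieval problem rather than a retraining problem, which is what makes a real-time system feasible without human intervention.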

Second, language is inherently ambiguous and we often need background information to understand what is being said. It might not be immediately clear from a post what the author is talking about and whether it represents a harmful claim.

Third, it requires handling complex sequences of research problems. Sometimes in the process of researching a claim we’ll find other information that needs to be verified. Context checking requires solving all of these problems and many others, like understanding whether sources are broadly seen as reputable.
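The third challenge, where researching one claim surfaces further claims to verify, is naturally modeled as a worklist over a growing claim graph. The sketch below is my own illustration of that idea, not Compass’s implementation; the `SUBCLAIMS` table is a hypothetical stand-in for live research that discovers sub-claims.

```python
from collections import deque

# Hypothetical claim graph: researching a claim can surface sub-claims
# that themselves need verification.
SUBCLAIMS = {
    "study X shows Y": ["study X exists", "study X is peer reviewed"],
    "study X exists": [],
    "study X is peer reviewed": [],
}

def research(claim: str) -> list:
    # Stub: a real system would perform live background research here.
    return SUBCLAIMS.get(claim, [])

def verify_all(root_claim: str) -> list:
    """Breadth-first worklist over claims discovered during research."""
    seen, order = set(), []
    queue = deque([root_claim])
    while queue:
        claim = queue.popleft()
        if claim in seen:
            continue  # avoid re-researching claims found via multiple paths
        seen.add(claim)
        order.append(claim)
        queue.extend(research(claim))
    return order
```

The `seen` set matters in practice: claims about a topic often cross-reference each other, and without deduplication the research process would loop forever.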

Learn More: Social Media Misinformation and Disinformation Attack Readiness and Response Checklist

In 2021 I started a PhD in Computer Science at the University of Texas at Austin. At UT I’ve worked on AI methods to enable machines to understand complex long-form instructions and how to follow instructions to solve problems in the real world with robots, for example, getting a robot to follow a direction like “put away my groceries.” It might not seem like it at first, but in many ways, the technical problems that Compass by Blackbird.AI solves are similar to my research, albeit in a virtual setting. Both require techniques for following detailed instructions and planning and executing procedures in complex and uncertain environments. We also want to learn from humans how to do these tasks. Instead of hand-specifying a rigid process for context checking, it’s more scalable and general to simply observe how humans complete this task and then mimic them.

Beyond solving the technical challenges of Compass by Blackbird.AI, I am also eager to contribute to solving the pressing social challenges posed by misinformation on social media. As we enter an important presidential election year in the United States, I believe it’s more important than ever that we have access to reputable information and tools to understand what we’re seeing and reading online. Rather than focusing on approaches that use AI to limit speech, I think AI has an important role to play in educating and improving the quality of the information we see.

For the future, I am a strong believer in human-AI collaboration. I don’t want to see humans replaced wholesale by AI (unless people really don’t want to do certain jobs!) and I think human-AI collaboration research deserves focus. Blackbird is an example of a company already taking an integrated human-AI approach. Blackbird combines world-class human intelligence with state-of-the-art AI to tackle online misinformation and I think that makes the company a great environment for working on human-AI collaboration problems. 

Compass by Blackbird.AI is already a powerful tool for collaboration. It provides qualitative outputs designed for humans to augment their own research, unlike other systems that merely provide true/false classifications. AI and humans have complementary strengths and weaknesses, and I believe there’s tremendous value to be unlocked by combining our unique talents. Especially for socially fraught domains like misinformation research, I think humans have a lot to bring to the table. I would like to see humans and AI collaborate to solve problems, and I think we’ll find better solutions through this collaboration.

The Way Forward

In the coming months, we will continue rapidly expanding Compass by Blackbird.AI’s capabilities, enabling it to analyze information across more languages, media formats, and subject areas. But our north star remains to empower human-AI collaboration. By combining the unmatched pattern recognition and information synthesis abilities of large language models with human judgment and domain expertise, we can tackle misinformation at an unprecedented scale without sacrificing nuance.

Every organization today faces narrative risk. What is being said about your brand, executives, and industry online? How might misinformation impact your employees, customers, and other stakeholders? These are not hypothetical concerns—narrative attacks can emerge suddenly and spiral out of control if you’re not equipped to detect and respond quickly to them. That’s why adopting a narrative intelligence platform is essential.


To learn more about how Blackbird.AI can help you in these situations, contact us here.

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Powered by our AI-driven proprietary technology, including the Constellation narrative intelligence platform, RAV3N Risk LMM, Narrative Feed, and our RAV3N Narrative Intelligence and Research Team, Blackbird.AI provides a disruptive shift in how organizations can protect themselves from what the World Economic Forum called the #1 global risk in 2024.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.