Cognitive Hacking: The New Social Engineering Threat

By The RAV3N Research Team

Forget fake emails from the CEO. Social engineering has leveled up, leveraging communication networks to exploit psychological vulnerabilities.

“Only amateurs attack machines; professionals target people.” —Bruce Schneier

Social engineering is a well-known tactic – who hasn’t received, for instance, a phishing email? But imagine this on a mass scale, with bad actors using narrative attacks and psychology to manipulate perceptions, potentially causing real-world chaos.

LEARN MORE: What Is A Narrative Attack?

Welcome to cognitive hacking. 

Cognitive hacking is a tactic that goes beyond phishing emails and malware, manipulating people’s thought processes and behaviors to trigger real-life actions. While cognitive hacking is a somewhat broad term that can apply to various instances of social engineering, this article focuses on its relationship with narrative attacks and their impact.



WHAT IS COGNITIVE HACKING?

Cognitive hacking is a cyberattack that targets people rather than corporate infrastructure or internal networks. Cognitive hackers leverage information systems (such as social media and other platforms) to manipulate people’s psychological vulnerabilities, perceptions, and behaviors. They accomplish this by tapping into (and fueling) existing biases, prejudices, and political/social allegiances, prompting people to take corresponding actions.

HOW DOES COGNITIVE HACKING WORK?

“Misinformation is by no means new — from the beginning of time, it is a key tactic by people trying to achieve major goals with limited means.” —Rodney Joffe

Cognitive hackers exploit biases by creating and disseminating narrative attacks designed to play on psychological vulnerabilities. For example, hackers might create fake news articles or posts from a fake celebrity account designed to confirm existing beliefs about a particular group or issue. They might also use social media bots to spread narrative attacks, making them appear more popular and credible than they are.

To disseminate narrative attacks, hackers rely on social media platforms, online discussion forums, troll farms, and fake news sites. Using these platforms, and sometimes spinning up bot farms, is a low-cost (and potentially free) approach that can amplify false narratives much further and faster than ever before.
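
To make that scale advantage concrete, below is a purely illustrative sketch in Python. Every figure in it is an assumption invented for this example rather than a measurement from any platform; it simply shows how a modest bot network that reposts content immediately, and nudges up the reshare rate that recommendation systems reward, can inflate a narrative’s apparent reach compared with organic sharing.

    # Toy model: how a small bot network inflates a narrative's apparent reach.
    # All figures are illustrative assumptions, not platform data.

    def simulated_reach(initial_posts: int, reshare_rate: float, hours: int) -> int:
        """Rough compounding model of how many accounts have seen the content."""
        reach = float(initial_posts)
        for _ in range(hours):
            reach *= 1.0 + reshare_rate
        return round(reach)

    # One organic poster vs. the same post seeded by 200 automated accounts.
    organic = simulated_reach(initial_posts=1, reshare_rate=0.5, hours=12)
    boosted = simulated_reach(initial_posts=201, reshare_rate=0.8, hours=12)

    print(f"Organic reach after 12 hours:     ~{organic:,}")
    print(f"Bot-boosted reach after 12 hours: ~{boosted:,}")

The specific numbers matter less than the shape of the curve: seeding plus automated resharing compounds hour by hour, which is why early detection of amplification matters so much.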

WHAT ARE COGNITIVE HACKING TECHNIQUES?

There are various cognitive hacking techniques; below are some of the most common ways that cognitive hackers use technology to launch and amplify attacks:

Creating fake accounts: A common example is a fake celebrity social media account. This is typically done by impersonating a real person or brand, though the featured celebrity doesn’t even have to exist. The tactic involves four steps:

  • Building a strong following 
  • Running and reposting polls  
  • Building a regional follower base 
  • Expanding the account’s popularity 

Discrediting a journalist: Legitimate news organizations work to debunk fake news and correct misinformation, so discrediting their journalists is a popular element of cognitive hacking attacks. Actors can purchase packages that pair a fake news article with tens of thousands of retweets and visits; after that initial build-up, the journalist’s accounts are poisoned with thousands of malicious comments.

Exploiting trending topics: Cognitive hackers capitalize on current events, spreading misinformation and narrative attacks disguised as news articles or social media posts to gain attention and mislead people.

Using bot networks: Actors (including nation-state actors) create automated accounts to artificially amplify specific messages or drown out opposing voices. This tactic produces a false sense of widespread consensus or urgency, which can lead to online doxxing and real-life altercations. (A minimal detection sketch follows this list.)

Instigating a street protest: Another goal of cognitive hackers is to incite a physical, in-person response – a behavior that matches the manipulated perception. Techslang states, “Such a campaign can be bought on the dark web. Doing so involves creating 20 social media groups with 1,000+ members each, obtaining 50,000 retweets and 100,000 likes, and publishing ten fake news stories and 50 related videos.”
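
The bot-network technique above also leaves a detectable footprint: many accounts pushing near-identical text within minutes of one another. The sketch below is a minimal, hypothetical heuristic in Python for spotting that pattern; the data format, thresholds, and normalization are assumptions made for illustration, and real detection systems layer on far richer behavioral signals (account age, posting cadence, network structure, language).

    # Flag groups of posts where many distinct accounts publish near-identical
    # text inside a short time window. Inputs and thresholds are invented.
    import re
    from collections import defaultdict

    def normalize(text: str) -> str:
        """Lowercase, drop URLs and punctuation, and collapse whitespace."""
        text = re.sub(r"https?://\S+", "", text.lower())
        text = re.sub(r"[^a-z0-9 ]+", " ", text)
        return re.sub(r"\s+", " ", text).strip()

    def flag_coordination(posts, min_accounts=20, window_minutes=30):
        """posts: iterable of (account_id, minutes_since_start, text) tuples."""
        groups = defaultdict(list)
        for account, minute, text in posts:
            groups[normalize(text)].append((minute, account))

        flagged = []
        for text, items in groups.items():
            items.sort()
            accounts = {account for _, account in items}
            bursty = items[-1][0] - items[0][0] <= window_minutes
            if len(accounts) >= min_accounts and bursty:
                flagged.append((text, len(accounts)))
        return flagged

A burst of twenty or more accounts posting the same normalized text inside half an hour is exactly the kind of false consensus described above.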

WHAT IS AN EXAMPLE OF COGNITIVE HACKING?

There are many examples of cognitive hacking; below are several related to the 2016 and 2020 U.S. elections:

In 2016, Russian hackers established an “elaborately organized and surprisingly low-cost ‘troll farm’ set up to launch an ‘information warfare’ operation to impact U.S. political elections from Russian soil using social media platforms.” Instead of hacking into election infrastructure, these operatives influenced how American voters thought and how they voted. Their actions deepened political and social polarization in the U.S. – in essence, waging a new kind of international warfare without firing a physical shot.

January 6th was an example of cognitive hacking in which online outrage and narrative attacks translated into real-world action. Right-wing activists amplified the false narrative that Joe Biden had stolen the 2020 election and organized on social media and other platforms, culminating in a violent riot at the U.S. Capitol. After the attack, activist groups continued to spread narrative attacks to sow confusion and discord, including the claim that protesters were invited into the building by Capitol Police.

The United States Postal Service (USPS) has also fallen victim to cognitive hacking. During the 2020 election, activist groups launched narrative attack campaigns built on the fabricated claim that postal employees were throwing away mail-in ballots for a specific political party. These narratives led to mail carriers being harassed on the street and doxxed online.

In each case, activists capitalized on current events. They took to online communication platforms, including social media, the dark web, and niche websites, to spread narrative attacks, shape perceptions, and manipulate corresponding behaviors by tapping into inherent biases and prejudices. 

WHAT ARE THE POTENTIAL IMPACTS OF COGNITIVE HACKING?

Cognitive hacking is an incredibly destructive form of attack, bringing potential psychological harm to individuals as well as reputational, financial, and physical harm to companies, institutions, and groups. Potential impacts include:

  • Reputational harm for corporations
  • Financial losses 
  • Boycotts
  • Election interference
  • Doxxing and harassment
  • Societal harm
  • Riots
  • Revolution
  • Physical harm, including vigilante killings 

HOW CAN ORGANIZATIONS PREVENT COGNITIVE HACKING?

Despite its prevalence, the risk of cognitive hacking can be mitigated. Below are a few steps organizational leaders can take:

Be prepared. Monitor the full information landscape to evaluate conversations about your brand; doing so makes it easier to detect and track narrative attacks.

Go beyond social listening. Social listening tools rely on keyword searches and cannot monitor the most common problem areas of the communications landscape, such as niche websites and the dark web, so they capture only a fraction of the conversation around your brand online. (A short sketch after these steps illustrates the keyword gap.)

Educate employees. Organizations spend resources training employees to spot potential malware and social engineering tactics such as phishing. The same rigor should apply to identifying instances of cognitive hacking.

Form an executive alliance. C-suite executives, particularly Chief Communications Officers and Chief Information Security Officers, should collaborate on response and mitigation strategies before, during, and after a cognitive hack. 

Partner with a narrative intelligence expert. These experts have the technology to go beyond social listening, monitoring the entire information landscape to detect attacks in their early stages while mapping the actors and the information flow. This information enables leaders to fully evaluate the information landscape (including the “blind spots”) and make the right strategic decisions.
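
To see why keyword searches alone fall short, here is a small hypothetical sketch in Python. The brand name, keywords, and posts are all invented; the point is that a paraphrased version of the same narrative slips straight past a simple keyword filter, which is exactly the blind spot the steps above are meant to cover.

    # Keyword-based social listening catches literal mentions but misses
    # paraphrased narratives. Brand, keywords, and posts are invented examples.
    KEYWORDS = {"acme recall", "acme lawsuit", "acme data breach"}

    posts = [
        "BREAKING: Acme data breach exposes customer records!",                # caught
        "Heard the company behind the A-series quietly lost everyone's data",  # missed
        "They are shredding the safety reports. Pass it on.",                  # missed
    ]

    def keyword_match(post: str) -> bool:
        """Return True if the post contains any tracked keyword phrase."""
        text = post.lower()
        return any(keyword in text for keyword in KEYWORDS)

    for post in posts:
        status = "caught" if keyword_match(post) else "missed"
        print(f"[{status}] {post}")

Closing that gap requires analyzing how narratives spread and mutate across platforms, not just matching strings.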

SUMMARY AND KEY TAKEAWAYS

We live in a time of accelerated communication, advancing AI, polarization, and an expanding social media surface, making cognitive hacking more effective than ever. But there are ways to mitigate the risk. Below are a few takeaways:

  • Cognitive hacking is a method of social engineering that targets people rather than computer networks or IT infrastructure. 
  • Cognitive hacking uses technological systems to target people, changing perceptions and corresponding behaviors by exploiting psychological vulnerabilities, existing biases, prejudices, and political/social allegiances.
  • Tools often used to disseminate narrative attacks in cognitive hacking include social media platforms, troll farms, and fake news sites.
  • Techniques used in cognitive hacking include creating fake accounts, discrediting journalists, exploiting current events, and using bot networks.
  • Examples of cognitive hacking include Russian interference in the 2016 U.S. election, the January 6th attack, and harassment of postal workers. 
  • Impacts of cognitive hacking include electoral disruptions, riots, violence, reputational and financial harm, and harassment. 
  • Organizations can take various steps to protect themselves from cognitive hacking, including educating employees, monitoring the landscape, forming executive alliances, and partnering with a narrative intelligence expert.

BLACKBIRD.AI AS A PARTNER TO MITIGATE COGNITIVE HACKING

The Blackbird.AI Constellation Platform is a purpose-built Narrative Intelligence platform that enables organizations across industries to protect against emergent narrative attacks, created by misinformation and disinformation, that cause financial and reputational harm.

The Platform enables organizations to detect, measure, contextualize, and prioritize risk by understanding the narratives, the influence behind them, the networks they touch, the anomalous campaigns that scale them, and the cohorts that connect them. It analyzes risk across the dark web, social media, and news platforms in 25 languages, redefining how organizations detect, measure, and prioritize narrative risk for critical decision-making.

A few key platform features:

  • Narrative intelligence surfaces the emerging narratives that drive online conversations, analyzing how they spread and grow.
  • Actor intelligence identifies and maps the most influential actors, cohorts, and networks, along with their intent and motives.
  • Threat intelligence via the Blackbird.AI Risk Index scores threats related to toxicity, polarization, automated networked activity, sexism, hate speech, misinformation, and other dangerous characteristics.      
  • Impact intelligence allows organizations to understand better and predict the impact of ongoing harmful activity across the information landscape while measuring the success of their mitigative efforts.

Learn about Blackbird.AI and how we can protect your organization’s reputation by booking your personalized demo today.
