Blackbird.AI Cyber Attack Glossary and Narrative Attack Terminology
NARRATIVE ATTACK: We define narrative as ‘any assertion that shapes perception about a person, place or thing in the information ecosystem.’ An attack occurs when misinformation and disinformation tactics enable these narratives to scale and become harmful, manipulating public perception about organizations, events, groups, or individuals.
NARRATIVE INTELLIGENCE: Narrative Intelligence is the ability to understand and interpret the complex interplay of storylines, information networks, community dynamics, and influential actors that shape public perception and discourse around specific topics or events. It involves examining the evolution and impact of narratives, identifying patterns of information flow and connections between users, segmenting like-minded communities based on shared characteristics, detecting irregular or manipulated behavior that amplifies or suppresses a narrative’s visibility, and measuring the engagement and influence of various actors in driving public opinion.
MISINFORMATION: the sharing of inaccurate or outdated information, usually without the intent to harm. An example is sharing outdated information on vaccines.
DISINFORMATION: is the sharing of fabricated information intended to deceive. An example of this is falsely accusing a company of greenwashing.
MALINFORMATION: is the malicious sharing of factual information intended to sabotage. One example is hackers sharing sensitive company information after a cyber-attack.
COGNITIVE HACKING: a cyberattack that targets people rather than a corporate infrastructure or internal network. Cognitive hackers leverage information systems (such as social media and other platforms) to manipulate people’s psychological vulnerabilities, perceptions, and behaviors.
ECHO CHAMBER: An echo chamber is “an environment where a person only encounters information or opinions that reflect and reinforce their own.”
DEEPFAKE: Synthetic media in which a person's face, body, or voice is generated or manipulated with artificial intelligence. Generative AI has already played a major role in the spread of disinformation, especially in the U.S. political arena. This includes voice cloning, fake videos impersonating candidates, and a fabricated image of an explosion near the Pentagon that went viral in May 2023. Generative AI has also been used to defraud financial institutions and other businesses, with bad actors tricking employees into joining voice and video calls while impersonating executives. This is done with voice-cloning technology, and it works.
SHALLOWFAKE: Unlike a deepfake, which uses artificial intelligence to manipulate visual content, a shallowfake (or “cheapfake”) is created with simpler, more traditional tools and techniques. Examples include slowing down or speeding up a video, splicing video clips together, or removing portions of a video to alter how the viewer perceives the subject.
CHEAP FAKE: altered media that has been changed through conventional, affordable technology. Social media examples of cheap-fake techniques include photoshopping (including face swapping), lookalikes, and speeding or slowing video.
CATFISHING: refers to the creation of a fictitious online persona, or fake identity (typically on social networking platforms), with the intent of deception, usually to mislead a victim into an online romantic relationship or to commit financial fraud. Perpetrators, usually referred to as catfish, generally use fake photos and lie about their personal lives to present themselves as more attractive for financial gain, personal satisfaction, evasion of legal consequences, or to troll.
CYBORGS: Hybrid accounts that combine bot automation with occasional human input.
SEALIONING: A form of trolling meant to exhaust the other debate participant, with no intention of real discourse. Sealioning is the disingenuous act of making an ostensible effort to engage in sincere, serious, civil debate, usually by asking persistent questions of another commenter. The questions are phrased to look like an effort to learn about and engage with the subject at hand, but are intended to erode the goodwill of the person being asked until they grow impatient or lash out, and therefore come off as unreasonable (Merriam-Webster).
ASTROTURFING: Political, religious, or corporate propaganda engineered to appear as though it comes from a legitimate grassroots effort, disguising the real sponsors of the messaging and their motives. The deception lends credibility to the masked organizations by hiding information about their motives, including financial connections. One of the most famous examples is Working Families for Walmart, an organization funded by the company to promote its interests but camouflaged as an independent grassroots organization.
SOCKPUPPET: Sockpuppets are misleading identities used online for deception and spreading disinformation, such as defending or supporting a person or organization and manipulating public opinion. Sockpuppets are also formed as a workaround for website and forum bans or any other restrictions. This might be confused with a pseudonym; the difference here is that the sockpuppet poses as an independent participant when they are not.
TROLL: A user who intentionally antagonizes others online by posting inflammatory, insulting, or disruptive content to get attention, upset, or provoke. Community members who fight back against trolls have started calling themselves “elves”.
TROLL ARMY or TROLL FACTORY: An institutionalized group of Internet trolls that seeks to interfere in political opinions and decision-making. According to the Computational Propaganda Research Project, Philippine President Duterte spent the equivalent of hundreds of thousands of euros during the 2016 election campaign to fund troll armies that spread favorable propaganda and targeted his opponents.
BOT-LIKE ACTIVITY: Automated activity that does not come from a human, programmed to do harm and to accelerate the trajectory of harmful narratives.
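For illustration only (this is not Blackbird.AI's detection method), bot-like activity is often screened with simple behavioral heuristics, such as an inhuman posting rate or machine-regular gaps between posts. A minimal sketch in Python, with hypothetical timestamps and thresholds:

```python
from datetime import datetime
from statistics import pstdev

def looks_bot_like(timestamps, max_posts_per_hour=30, min_gap_stdev=2.0):
    """Heuristic screen: flag an account that posts at an inhuman rate
    or with suspiciously regular (machine-like) gaps between posts.
    Both thresholds are illustrative, not calibrated."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    if len(times) < 3:
        return False  # too little history to judge
    span_hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-6)
    rate = len(times) / span_hours
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    # Near-zero variance in posting gaps suggests scheduled automation.
    return rate > max_posts_per_hour or pstdev(gaps) < min_gap_stdev

# Four posts exactly five seconds apart -> flagged
print(looks_bot_like(["2024-05-01T10:00:00", "2024-05-01T10:00:05",
                      "2024-05-01T10:00:10", "2024-05-01T10:00:15"]))  # True
```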
MANIPULATION: The share of activity around a narrative that comes from inauthentic actors such as bots and trolls, usually expressed as a percentage.
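As a worked example (the labels below are hypothetical inputs, not the output of a real classifier), the metric reduces to simple arithmetic over posts already attributed to bots or trolls:

```python
def manipulation_pct(post_labels):
    """Share of posts attributed to inauthentic actors (bots or trolls),
    as a percentage of all posts in the dataset."""
    if not post_labels:
        return 0.0
    inauthentic = sum(1 for label in post_labels if label in {"bot", "troll"})
    return 100.0 * inauthentic / len(post_labels)

# 3 of 8 posts flagged as inauthentic -> 37.5% manipulation
print(manipulation_pct(["bot", "human", "troll", "human",
                        "human", "bot", "human", "human"]))
```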
HACK-AND-LEAK: A tactic that combines a cyber-attack and an information operation, also falling into the category of malinformation. The leaked contents may be authentic, fabricated, doctored or a mix of all these (for an example, see “MacronLeaks”).
HATE SPEECH: Discourse that expresses hate or encourages violence towards a person or group based on inherited characteristics such as race, religion, sex, or sexual orientation. On 9 December 2021, the European Commission adopted a Communication prompting a Council decision to extend the current list of ‘EU crimes’ in Article 83(1) TFEU to hate crimes and hate speech.
LAWFUL BUT AWFUL: A speech or action that cannot be prohibited by law (including the terms of service of platforms) but that profoundly violates many people’s sense of decency, morality, or justice.
WHATABOUTISM: The tactic of responding to an accusation or difficult question by making a counter-accusation or raising a different issue. Since the outbreak of the war in Ukraine, the solidarity of Western populations has often been criticized in this way, by pointing to the lack of similar responses during conflicts outside Europe.
FAKE NEWS: Disinformation presented as news and optimized for online sharing.
FILTER BUBBLE: The limited perspective that can result from personalized search algorithms.
GROUP POLARIZATION: A group’s tendency to make more extreme decisions than its members would typically be inclined to make.
VISHING: A combination of ‘voice’ and ‘phishing’, it denotes a phone scam that uses social engineering tactics to persuade victims to provide personal information, typically with the aim of accessing financial accounts. Users are often tricked into believing that their bank account has been compromised or that they have received an unmissable offer.
TERRORGRAM: A network of Telegram channels and accounts that subscribe to and promote militant accelerationism (see definition). Ideologically neo-fascist, these channels regularly share instructions and manuals on how to carry out acts of racially motivated violence and anti-government, anti-authority terrorism.
TFGBV (Technology-Facilitated Gender-Based Violence): The United Nations Population Fund defines it as digital violence that is “committed and amplified through the use of information and communication technologies or digital spaces against a person based on gender. It is facilitated through the design and use of existing and new and emerging technologies (both hardware and software). It is always evolving”. It often overlaps with gender-based disinformation.
DARK PATTERN: First coined by user experience specialist Harry Brignull, the term “dark pattern” refers to an intentionally deceptive design interface that tricks users into doing things they didn’t mean to do and obscures certain information to make it difficult for users to find. According to TechCrunch, the dark pattern technique “often feeds off and exploits the fact that content-overloaded consumers skim-read stuff they’re presented with, especially if it looks dull and they’re trying to do something else — like sign up to a service, complete a purchase, get to something they want to look at or find out what their friends have sent them.”
BRAND PERCEPTION: reflects how audiences understand and view your brand, usually based on social media presence, user-generated content, engagements, and influencer mentions.
BRAND REPUTATION: refers to the public’s overall opinion of your brand or organization, reflecting perceived credibility, trustworthiness, and value.
COHORTS: are like-minded communities segmented to understand how online groups interact with key narratives across a sector, industry, or topic. Measure the proportion or volume of cohorts in your narrative or dataset. Cohorts include special interest groups, political groups, nation-states, cybercriminals, competitors, etc.
HIGH-RISK COHORTS: The percentage of narrative activity involving high-risk cohorts, such as partisan political groups and foreign state actors.
AUTHOR GROUPS: Sets of authors segmented by a shared role or behavior, such as influencers.
ANOMALOUS ACTIVITY: includes inorganic behavior and manipulation that amplify or dampen a narrative and its spread.
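One common way to surface this kind of inorganic amplification, sketched here with illustrative numbers and thresholds rather than any production method, is to flag time windows whose post volume deviates sharply from the series average:

```python
from statistics import mean, pstdev

def anomalous_hours(hourly_counts, z_threshold=2.5):
    """Flag hours whose post volume is a statistical outlier versus the
    whole series. A spike can indicate coordinated amplification; a
    trough can indicate suppression. The threshold is illustrative."""
    mu, sigma = mean(hourly_counts), pstdev(hourly_counts)
    if sigma == 0:
        return []  # perfectly flat series, nothing to flag
    return [i for i, count in enumerate(hourly_counts)
            if abs(count - mu) / sigma > z_threshold]

counts = [12, 15, 11, 14, 13, 12, 10, 14, 240, 13, 11, 12]
print(anomalous_hours(counts))  # -> [8], the 240-post spike
```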
ENGAGEMENT measures the impact of harmful actors and trusted voices that drive narratives.
REACH is the number of unique viewers of your posts and the potential size of your audience. Compare reach with other engagement metrics to determine campaign participation relative to your core audience.
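As a minimal worked example (the figures are hypothetical), comparing reach with engagement usually reduces to an engagement-rate ratio:

```python
def engagement_rate(engagements, reach):
    """Total engagements (likes, shares, comments) per unique viewer,
    expressed as a percentage of reach."""
    return 100.0 * engagements / reach if reach else 0.0

# 1,800 engagements against a reach of 45,000 unique viewers -> 4.0%
print(f"{engagement_rate(1_800, 45_000):.1f}%")
```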
VIRAL REACH is the total number of times a unique user sees your page’s content appear on their feed due to viral activity, such as connections liking, following, or engaging.
SENTIMENT represents the overall attitude toward a brand, product, or other entity, typically expressed in the comment sections of social media platforms.
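A minimal sketch of sentiment scoring using NLTK's off-the-shelf VADER analyzer, one of many possible approaches and not necessarily what any given platform uses; the comments are toy examples:

```python
# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
comments = [
    "Love this product, the team really listens!",
    "Terrible support experience, never buying again.",
]
for text in comments:
    # VADER's 'compound' score runs from -1 (most negative) to +1 (most positive).
    score = sia.polarity_scores(text)["compound"]
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:8} {score:+.2f}  {text}")
```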
TOXIC LANGUAGE (or toxicity): language that is offensive, hateful, or otherwise abusive.
VOLUME measures the number of people discussing your brand/organization and how many posts mention it.
LANGUAGE CLASSIFIERS: classify narrative content by language, such as the percentage of narratives across 25 languages. This is a great way to understand your active audience and any increase or decrease in specific demographics.
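A minimal sketch of per-language proportions using the off-the-shelf langdetect library (one option among many; the posts are toy examples):

```python
# Requires: pip install langdetect
from collections import Counter
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make detection deterministic across runs

posts = [
    "The product launch went smoothly.",
    "Le lancement du produit s'est bien passé.",
    "El lanzamiento del producto salió bien.",
    "The reviews have been very positive.",
]
# Tally detected ISO language codes and report each as a share of posts.
langs = Counter(detect(p) for p in posts)
for lang, count in langs.most_common():
    print(f"{lang}: {100 * count / len(posts):.0f}% of posts")
```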
DISTRIBUTED DENIAL OF TRUST: a coordinated effort to flood the information ecosystem with subversive or nonsensical content in order to degrade trust and the organic flow of information.
THOUGHTNET: A group of highly curated automated accounts that have been integrated into politically or ideologically charged communities online to foment malicious narratives.
SUPERNODE: the central driving user of a given narrative.
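One way to approximate a supernode, sketched here with a hypothetical amplification graph and the networkx library rather than any production method, is to find the most central account in the interactions around a narrative:

```python
# Requires: pip install networkx
import networkx as nx

# Hypothetical amplification graph: an edge (a, b) means account a
# retweeted, replied to, or otherwise amplified account b.
g = nx.DiGraph([
    ("user2", "user1"), ("user3", "user1"), ("user4", "user1"),
    ("user5", "user1"), ("user3", "user2"), ("user5", "user4"),
])
# The account most amplified by others (highest in-degree centrality)
# is a candidate supernode for this narrative.
centrality = nx.in_degree_centrality(g)
supernode = max(centrality, key=centrality.get)
print(supernode, round(centrality[supernode], 2))  # -> user1 0.8
```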
HASHTAG HIJACKING: the tactic of using a viral hashtag to redirect public discourse away from the hashtag’s intended use and toward a new, often adverse, area of conversation.
NARRATIVE DIFFUSION: describes the process by which a narrative moves from one online community into another (organically or otherwise), often changing in tone and scope in the process.
FOREIGN INFORMATION MANIPULATION AND INTERFERENCE (FIMI): a pattern of behavior that threatens or has the potential to negatively impact values, procedures, and political processes. Such activity is manipulative in character and conducted in an intentional and coordinated manner. The actors can be state or non-state actors, including their proxies inside and outside their own territory.
CONVERSATION DRIVER: the items garnering the highest engagement for a narrative, such as online authors, hashtags, URLs, or specific events.
CONVERSATION TRIGGER: what prompted the narrative to enter circulation in the first place.