The Global Narrative Attack Cycle: Surveillance Capitalism and Reality

The tech industry has a problem, and its name is Surveillance Capitalism. So goes the premise of Netflix’s recent documentary The Social Dilemma, which delivers a compelling account of how the business model of a handful of corporations is transforming our minds and societies through the exploitation and sale of users’ private digital data. Blackbird.AI considers how narrative attacks fit into this process. We reveal how “fake news” acts as a critical force multiplier, deliberately manipulating our cognitive biases in the pursuit of private corporate profit. The result is a growth cycle of narrative attacks in which the concepts of Truth and Fact are ceded to the affirmation of personal emotion, entrenching growing socio-political polarization within our communities. We believe the pursuit of information integrity is thus more relevant than ever. Upheld by callous digital infrastructures indifferent to their human cost, narrative attacks have the potential to alter the trajectory of our collective future on a mass scale.
LEARN MORE: What Is A Narrative Attack?
What is surveillance capitalism?
“Great predictions begin with one imperative – you need much data.”
Shoshana Zuboff, The Social Dilemma
Professor Emerita, Harvard Business School
Author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019)
To understand the challenges of narrative attacks, we first need to understand the mechanics of ‘surveillance capitalism’ – a concept proposed by academic (and Social Dilemma talking head) Shoshana Zuboff. In brief, major technology corporations such as Google, Facebook, and Amazon have expanded their digital data collection and analysis capacities as more human activity moves online. Bolstered by an absence of regulatory oversight, these corporations now generate billions of dollars in revenue from the sale of metadata extracted from their platform users to increasingly diverse third parties. These range from advertising firms, credit companies, and insurance brokers to governments, national security agencies, and political campaigners.
Empowered by new algorithmic technologies, this metadata provides an unprecedented level of insight into users’ personal lives, experiences, and personalities. This insight can be used to anticipate, encourage, and modify certain future actions on an individually calibrated level through psychographic profiling. This is Zuboff’s surveillance capitalism: a new logic of capitalist accumulation built upon the commodification of and trade in human futures, enabled and upheld by digital technological infrastructures.

How do digital technology platforms encourage human engagement?
“Social media isn’t a tool just waiting to be used. It has its own goals and means of pursuing them by using your psychology against you.”
Tristan Harris, The Social Dilemma
Founder of the Center for Humane Technology
Former design ethicist, Google
For the tech companies who stand to profit the most from surveillance capitalism, their needs are straightforward – the more digital data that can be collected from platform users and their online activities, the more revenue generated by its sale. Engagement is, therefore, the fuel that drives this cycle, from every Google search, YouTube clip watched, meme re-tweeted, Amazon item purchased, or Facebook message sent. Creating an environment where individuals willingly divulge their personal information through these digitally networked activities is vital to maintaining a steady stream of lucrative income for technology corporations.
For instance, psychological tricks deliberately built into the architecture of online platforms successfully exploit our reflexive impulses to engage with our devices. Push notifications deliver dopamine hits that keep us absentmindedly checking our cell phones or scrolling social media feeds in a state of perma-engagement, ensuring a ready source of data to be harvested for profit.
More concerning, however, is algorithmic content recommendation. The premise is simple: whether online or offline, humans enjoy engaging in activities or interacting with people that reflect their pre-established interests and ideals. Data mining builds up a profile of an individual’s personal habits, beliefs, and personality, which algorithms use to predict and identify what online content—be it a news article, product advert, or Facebook group suggestion—a user should be exposed to next, to maximize the likelihood of their engagement. If successful, the algorithm will continue to recommend similar or related material, prioritizing content that has proved popular with similar demographics.
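To make this loop concrete, below is a minimal sketch in Python of what an engagement-optimized ranking function could look like. Every name, feature, and weighting here is a hypothetical illustration – no platform publishes its actual scoring model – but the structure mirrors the paragraph above: the objective rewards affinity, novelty, and emotional charge, and contains no term for accuracy or balance.

```python
# Hypothetical sketch of engagement-optimized recommendation.
# All names and weights are invented for illustration; they are
# not any platform's actual model or API.
from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    novelty: float           # 0..1: how unusual the content is
    emotional_charge: float  # 0..1: how provocative it is

@dataclass
class UserProfile:
    # Inferred from past clicks, likes, shares, and watch time.
    topic_affinity: dict  # topic -> interest score, 0..1

def predicted_engagement(user: UserProfile, item: Item) -> float:
    """Score how likely the user is to engage with the item.

    Note what is absent: nothing rewards accuracy or balance.
    The only objective is engagement."""
    affinity = user.topic_affinity.get(item.topic, 0.0)
    return affinity * (0.5 + 0.3 * item.novelty + 0.2 * item.emotional_charge)

def recommend(user: UserProfile, candidates: list, k: int = 3) -> list:
    # Rank all candidates by predicted engagement; surface the top k.
    return sorted(candidates,
                  key=lambda i: predicted_engagement(user, i),
                  reverse=True)[:k]

user = UserProfile(topic_affinity={"politics": 0.9, "gardening": 0.2})
items = [Item("politics", 0.9, 0.8), Item("gardening", 0.1, 0.1)]
print([i.topic for i in recommend(user, items, k=1)])  # ['politics']
```

Whatever the user already prefers is precisely what gets boosted; each click then feeds the next round of profiling, closing the loop described in the next section.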

What happens when our engagement is consistently manipulated in this way?
“We all simply are operating on a different set of facts. When that happens at scale, you can no longer reckon with or even consume information that contradicts the worldview you’ve created.”
Rashida Richardson, The Social Dilemma
Director of Policy Research, AI Now Institute
As time passes, algorithmic content recommendations that only promote self-referential material can saturate our online ecosystems. Our newsfeeds begin to function as highly personalized echo chambers of validation, which display only a narrow range of interests and ideals. The result is a powerful mechanism for confirmation bias: the tendency for our brains to seek the path of least resistance by selectively interpreting information that resonates with prior beliefs and values. In the offline world, we still encounter conflict, contradiction, and disappointment through the messiness and friction of everyday human existence; in the algorithmically determined online vacuum of homepages and newsfeeds, we do not. Surveillance capitalism does not need balanced political representation in its content recommendations; it seeks only data.
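The narrowing effect can be illustrated with a toy simulation of the recommend–engage–re-profile loop. The two topics, learning rates, and decay are our invented assumptions, not measurements of any real system, but the dynamic is the point: a user who starts with perfectly balanced interests ends up seeing only one topic.

```python
# Toy simulation of an echo-chamber feedback loop.
# Topics, update rates, and decay are invented for illustration.
import random

random.seed(42)
topics = ["A", "B"]
affinity = {"A": 0.5, "B": 0.5}  # the user starts perfectly balanced

for step in range(50):
    # The feed shows whichever topic the model predicts will engage most.
    shown = max(topics, key=affinity.get)
    # Engagement is more likely for content matching current affinity.
    if random.random() < affinity[shown]:
        affinity[shown] = min(1.0, affinity[shown] + 0.05)
    # Topics the user never sees quietly decay out of the profile.
    other = "B" if shown == "A" else "A"
    affinity[other] = max(0.0, affinity[other] - 0.01)

# Affinity collapses toward a single topic, e.g. {'A': 1.0, 'B': 0.0}
print(affinity)
```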
This has severe consequences for both individual psychology and the broader socio-political landscape. Without exposure to alternative ideas and the opportunity for debate, the political viewpoints of the echo chamber are gradually established as normative realities, upheld by the mutual reinforcement and camaraderie that digital connectivity permits. As communities crystallize, groupthink risks emerging: the collective validation of the group produces an illusion of infallibility, making it more difficult for individuals to question its shared values. Anyone on the outside becomes the Other, separated by an algorithmic chasm that artificially vilifies other points of view. Indeed, political polarization is currently at an all-time high in the United States.

This holds the potential to go beyond mere political rivalries. Throughout history, instances of groupthink have enabled dysfunctional norms of conformity and decision-making processes instrumental in acts of violence, such as torture at Abu Ghraib, mass suicide in Jonestown, and genocide in Rwanda. Online echo chambers may not necessarily lead to massacres. Still, the idea that seeds of resentment towards those who are different from us can be sown from within our own homes, changing how communities view even their neighbors, is a sobering thought.
How do narrative attacks fit into these processes, and what is their human impact?
“We’ve created a system that biases towards false information – not because we want to, but because false information makes companies more money than the truth. The truth is boring.”
Sandy Parakilas, The Social Dilemma
Senior Product Marketing Manager, Apple
Former Chief Strategy Officer, Center for Humane Technology
When it comes to narrative attacks, surveillance capitalism has hit the motherlode. A 2018 MIT study on narrative attacks and social media found that fake news spreads six times faster than real news online, chiefly because manipulated content’s heightened novelty and emotional value drive higher levels of human engagement than its truthful counterparts. Unsurprisingly, given its affective and subjective properties, falsified political news tends to diffuse significantly farther, faster, and deeper than falsified information on any other subject. And against the authors’ expectations, humans, not bots, were likely the major agents of its enhanced dissemination. In short, humans instinctively prefer fake news over the truth due to our susceptibility to emotive external manipulation.
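The scale of these findings follows from simple compounding. The toy branching-process sketch below is not the MIT study’s methodology, and its probabilities are invented, but it shows how a modest per-viewer edge in re-share probability – the kind novelty and emotion confer – produces a disproportionately larger cascade.

```python
# Toy branching-process model of content diffusion.
# Fanout, depth, and share probabilities are invented for illustration;
# this is not the MIT study's model.
import random

random.seed(0)

def cascade_size(share_prob: float, fanout: int = 5, max_depth: int = 8) -> int:
    """Users reached when each sharer exposes `fanout` followers,
    each of whom re-shares with probability `share_prob`."""
    reached, frontier = 1, 1
    for _ in range(max_depth):
        exposed = frontier * fanout
        reached += exposed
        # Each exposed user independently decides whether to re-share.
        frontier = sum(random.random() < share_prob for _ in range(exposed))
        if frontier == 0:
            break
    return reached

trials = 500
truthful = sum(cascade_size(0.15) for _ in range(trials)) / trials
emotive  = sum(cascade_size(0.25) for _ in range(trials)) / trials
print(f"avg reach. truthful: {truthful:.0f}, novel/emotive: {emotive:.0f}")
```

At a re-share probability of 0.15, each sharer seeds fewer than one new sharer on average and cascades die out; at 0.25, the chain becomes self-sustaining, which is why small differences in emotional pull can yield large differences in reach.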

For an algorithm that favors content generating high engagement, narrative attack content is Big Tech’s moneymaker. The attention economy’s business model cannot accommodate both honest reporting and unfettered profit. Blackbird.AI has identified several concerning issues that arise as a result, ranging from detrimental effects on the human ability to understand and relate to others, to ramifications for public safety and national security.
1. Algorithmic content recommendation purposefully deploys narrative attacks into digital environments optimized for them to succeed.
The very act of content recommendation functions as an act of legitimization, giving the appearance that its suggestions have been sanctioned by a platform as worthy of attention. Furthermore, manipulated messaging is purposely amplified towards users and demographics assessed to be predisposed to accept it at face value. For example, the Stanford Internet Observatory’s Renée DiResta explains how, in 2016, Facebook recommended groups devoted to the debunked Pizzagate hoax to users identified as interested in (and thus susceptible to) conspiracy theories. That Pizzagate’s claims—including Hillary Clinton and the Democratic Party’s control of an international child sex trafficking ring—held no basis was unimportant. In this way, the most vulnerable in our societies are deliberately and disproportionately targeted by tech corporations for profit. The United States already maintains laws prohibiting analogous activities such as false advertising, aggressive marketing tactics, and exploitation – why should the targeted dissemination of false information online not be subject to the same legal restrictions?
2. Algorithmic content recommendation may function in unexpected ways, making the spread of narrative attacks difficult to comprehend and predict.
For example, a recent media report reveals how elements of the far-right QAnon conspiracy theory have recently been rebranded as an aesthetic lifestyle trend among wellness and spiritualist communities – a seemingly unlikely home for a hoax based on sensationalist calls to arms against a global deep-state of cannibalistic pedophiles, steeped in racist and anti-Semitic rhetoric. Or, social media users may share outlandish conspiracy theory content without believing it to be true, perhaps as a way of raising awareness of or disparaging its baseless claims. The algorithm sees only a piece of content that has provoked engagement and will continue to promote it to other lookalike users on this basis.
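A hypothetical sketch of this ‘lookalike’ logic appears below. The interest features and similarity threshold are invented for illustration, but the blind spot is structural: content one user engaged with is promoted to anyone whose profile is sufficiently similar, with no record of whether the original engagement was belief, mockery, or alarm.

```python
# Hypothetical sketch of lookalike targeting via vector similarity.
# Features and threshold are invented; real systems use richer embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Interest vectors: [wellness, spirituality, politics, conspiracy]
engager = [0.9, 0.8, 0.1, 0.7]  # someone who engaged with the content
candidates = {
    "user_a": [0.8, 0.9, 0.0, 0.2],  # similar lifestyle interests
    "user_b": [0.1, 0.0, 0.9, 0.1],  # dissimilar profile
}

# Anyone sufficiently similar to a past engager sees the content next;
# the reason for the original engagement is never part of the signal.
for name, vec in candidates.items():
    score = cosine(engager, vec)
    if score > 0.7:
        print(f"promote to {name} (similarity {score:.2f})")
```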

3. Narrative attacks can dangerously radicalize spaces on the Internet and the users that frequent them.
As established above, online echo chambers are easily aggravated into aggressive forms of socio-political polarization based on repeated self and group validation. Since narrative attacks often tend towards heightened novelty and dynamic content, this only shifts the poles even further apart in a shorter amount of time. The ability of the QAnon conspiracy to galvanize extremist elements through its exploitation of extant socio-political fissures is illustrative in this regard. Within the US, its calls for true patriots to defend their rights have seen growing numbers of QAnon supporters among far-right groups that participated in acts of violent civil unrest against anti-racism protesters during the summer of 2020. As QAnon expands its international base, these tactics are being replicated elsewhere; pre-existing radical political fringe groups, conspiracy theorists, and disaffected communities worldwide have proved fertile recruiting grounds for the hoax in recent months.
4. Repeated exposure to the echo chamber of narrative attacks materially shifts how we value Truth.
Fundamentally, the descent into the echo chamber is a process that provides digital content consumers with new epistemological frameworks through which to mediate the external world. We build new worlds from our cell phones outwards, and in doing so, new identities, communities, and ideologies form around specific views of reality, each convinced of its self-evident authenticity. Emotional validation begins to take precedence over causal inquiry and rational logic. This produces truths answerable to no one, even when confronted with reasonable evidence to the contrary.
Conclusion: Where do we go from here?
Typical advice on addressing the problems posed by surveillance capitalism often includes limiting our children’s screen time, fact-checking our sources, or following people we disagree with on Twitter. The most worrying thing? Anyone who takes this on board—or is reading this now—already has some interest in how digital technology and narrative attacks can negatively impact human lives and is likely to have already adjusted their online habits. Accordingly, it is less probable that tech corporations will target them with fake news and incendiary content. This is reserved for vulnerable users who are least likely to recognize their manipulation or the growing influence of the echo chamber on their psyche, inhabiting corners of the Internet that the rest of us do not see. That is, at least, until machine learning—on its constant trajectory of improvement—hits upon the exact personalized formula for narrative attacks that resonates even with the most cautious users.
Blackbird.AI thus posits that encouraging digital platform users to diagnose their cognitive biases is a flawed solution. Conceptualizing the fight against narrative attacks as only a binary choice between True and False is similarly limited; the comfort of the digital echo chamber reconstructs Truth as a relative concept. Indeed, any solution that lays the onus on the user to outthink their way out of online manipulation risks underestimating the issue at hand: despite our human faults, the mechanics of surveillance capitalism have created the conditions for narrative attacks to thrive online.
Fundamentally, the spread of online narrative attacks is directly aggravated by unchecked surveillance capitalism. As it stands, there is no financial incentive for tech corporations to implement regulations that fact-check or remove narrative attacks from their platforms, or to cease content recommendation practices, given their proven ability to spur engagement among users. The fact that real-life harm may occur as a result of promoting manipulated information pales in the face of corporate gain. Blackbird.AI proposes that the cycle of ‘targeted content recommendation – engagement – data collection – profit’ must be broken to comprehensively displace narrative attacks from our digital ecosystems. This will only occur when technology corporations no longer amplify profit using narrative attacks as a tool for growth and are effectively held to legal account for their mass exploitation of human futures and well-being.
To learn more about how Blackbird.AI can help you with election integrity, book a demo.

Roberta Duffield • Director of Intelligence
Roberta is the Director of Intelligence at Blackbird.AI. She brings a strong interdisciplinary background to her role, drawing on her previous career experiences in the military, post-conflict humanitarian development, journalism, and corporate risk intelligence, working in the UK and the Middle East.
Need help protecting your organization?
Book a demo today to learn more about Blackbird.AI.