The Bot Myth: Busted
By Roberta Duffield
Sooner or later, any conversation around information security is bound to turn to bots.
From botnet attacks capable of taking whole countries offline, to hostile foreign info ops eroding election integrity, bots appear to have infiltrated every corner of the internet as the digital weapon of choice. Most recently, bots became a topic of speculation following Elon Musk’s claim that one-third of all visible Twitter users are “false or spam accounts”, underpinning his attempted withdrawal from a $44 billion deal to purchase the social media platform.
Most readers already know what a bot is: a software application that performs simple automated tasks far faster than a human can manage. Bots can be programmed to artificially boost the visibility of online content by synchronizing thousands of posts, or to game search rankings by clicking links and advertisements. Networks of malware-infected devices, known as botnets, can launch high-traffic Distributed Denial of Service (DDoS) attacks that disrupt access to websites or servers. Bots can also impersonate humans in phishing attempts that trick victims into handing over sensitive information.
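To make that speed advantage concrete, here is a minimal sketch of what an amplification bot amounts to in practice. It is illustrative only: post_to_platform is a hypothetical stand-in for whatever platform API a real operator would abuse, and the accounts, pacing, and talking points are invented for the example.

```python
import random
import time

# Hypothetical stand-in for a real platform API call; for illustration only.
def post_to_platform(account_id: str, text: str) -> None:
    print(f"[{account_id}] {text}")

# Invented talking points around a made-up hashtag.
TALKING_POINTS = [
    "Everyone is talking about #ExampleTopic",
    "You won't believe what #ExampleTopic means for you",
    "#ExampleTopic is trending for a reason",
]

def run_amplification_bot(account_ids: list[str], posts_per_account: int) -> None:
    """Post variations of the same talking points from many accounts,
    far faster than any human operator could manage by hand."""
    for account in account_ids:
        for _ in range(posts_per_account):
            post_to_platform(account, random.choice(TALKING_POINTS))
            time.sleep(0.1)  # machine-speed pacing; a human needs minutes per post

if __name__ == "__main__":
    fake_accounts = [f"bot_{i:04d}" for i in range(5)]
    run_amplification_bot(fake_accounts, posts_per_account=3)
```

Scaled from five accounts to fifty thousand, the same loop is the raw material of the coordinated campaigns described above.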
In short, bots are bad news.
It’s no wonder, therefore, that becoming the target of an online bot attack is a frequent concern for companies and organizations.
Just ask PayPal, which faced serious financial and reputational loss after admitting that 4.5 million bot accounts had exploited monetary reward schemes for new users in 2022. Indeed, a recent report by cybersecurity firm Kasada found that of surveyed companies that had experienced at least one bot attack in the past year, 39% reported losing 10% or more of their revenue as a direct result of the incident.
However, bots are not the only risk that’s out there. And by over-indexing on the threats posed by bots, companies and organizations run the risk of ignoring other, equally harmful and disruptive forms of digital or information manipulation attacks that might come their way.
Bots are often perceived as dangerous because they can undertake large-scale tasks that humans cannot. What tends to be forgotten, however, is that the things humans, not machines, do best deserve just as much of our concern.
Take narrative attacks, for example, which the European Union Agency for Cybersecurity (ENISA) recently named one of its top 10 emerging cybersecurity threats for 2030.
Humans might not be as fast as bots at spreading content online, but our strength lies in authentic, meaningful, and contextually relevant communication that can inform and persuade. When it comes to narrative attacks or extremist rhetoric, connecting with other seemingly like-minded people whose messaging resonates with your own beliefs or fears is often a compelling route by which alternative ideas and narratives are introduced. Although the actual on-the-ground impact has been debated, bots were deployed by the Russian state to influence the outcome of the 2016 US presidential election alongside so-called ‘web brigades’: thousands of humans paid to spread narratives of political and social discord on social media. These real-life disinformation-as-a-service (DaaS) operators were described as “fluent in American trolling culture” in a report commissioned by the US Senate Intelligence Committee, indicative of their success in infiltrating and engaging with the domestic audiences they targeted according to political affiliation, socio-economic background, age, and race.
Malign actors often seek to co-opt this human credibility of communication and connection in a way that bots may struggle to replicate. In 2021, for example, several European social media influencers reported receiving financial offers from a Russian state-linked communications firm to tell their online followers that Pfizer’s COVID-19 vaccine was unsafe. For information manipulation actors, these fan communities represent a convenient shortcut to pre-established, highly responsive audiences: human influencer ‘clout’ is far easier to purchase than to synthetically recreate from scratch.
Furthermore, human users seeking to manipulate social media conversations are often much harder to detect, given the authenticity of their behavior. Bots tend to operate irregularly and deviate from real-life discourse patterns. They can be detected with the right toolkit or, sometimes, just by glancing at a Twitter profile full of spam posts, with a randomly generated handle and no profile picture. And although deepfakes are a frequent topic of discussion as the technology evolves to become more realistic (and dangerous), synthetic conversation has not yet caught up. Anyone who has tried to converse with a chatbot the way they would with a person, be it an innocuous virtual assistant or a dating-app phishing attempt, will be aware of their general inability to pass the Turing Test.
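As a rough illustration of how shallow those surface signals are, here is a minimal scoring sketch. The field names and thresholds are assumptions invented for this example; real detection toolkits combine far richer behavioral and network features.

```python
import re
from dataclasses import dataclass

@dataclass
class Profile:
    handle: str
    has_profile_picture: bool
    posts_per_day: float
    spam_post_ratio: float  # fraction of recent posts that look like spam

def looks_automated(profile: Profile) -> bool:
    """Score a profile against simple surface signals; thresholds are illustrative."""
    score = 0
    # Randomly generated handles often end in long digit strings, e.g. "user84736251".
    if re.search(r"\d{6,}$", profile.handle):
        score += 1
    if not profile.has_profile_picture:
        score += 1
    # Posting at a rate no human could sustain is a classic automation tell.
    if profile.posts_per_day > 100:
        score += 1
    if profile.spam_post_ratio > 0.8:
        score += 1
    return score >= 3

print(looks_automated(Profile("user84736251", False, 240.0, 0.95)))   # True
print(looks_automated(Profile("roberta_duffield", True, 4.0, 0.0)))   # False
```

The point of the sketch is how crude these cues are: a paid human operator with a real photo, a plausible handle, and a normal posting rhythm sails straight past every check.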
The result is a situation where it is often easy to create volume using bots (via the rapid publication of posts around a certain topic), but generally much harder to generate legitimate human engagement with that content. That is not to say it never happens, but rather that the meaningful presence, reach, and impact of what bots say online can sometimes be drastically overestimated, and must be read in concert with various other signals and indicators.
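One simple way to picture the gap between manufactured volume and legitimate engagement is an interactions-per-post ratio. The numbers below are invented for illustration; in practice this would be only one signal read alongside the others discussed in this article.

```python
def engagement_per_post(posts: int, likes: int, replies: int, reshares: int) -> float:
    """Crude ratio of interaction to raw posting volume. A flood of posts with
    almost no interaction suggests manufactured volume rather than organic reach."""
    if posts == 0:
        return 0.0
    return (likes + replies + reshares) / posts

# Hypothetical bot-amplified hashtag: huge volume, almost no interaction.
print(engagement_per_post(posts=50_000, likes=1_200, replies=300, reshares=800))     # 0.046
# Hypothetical organic conversation: modest volume, lively interaction.
print(engagement_per_post(posts=2_000, likes=9_000, replies=4_500, reshares=2_500))  # 8.0
```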
Behind all of this is the algorithmic infrastructure that drives social media platforms. Bad actors, whether bots or humans, can easily exploit content recommendation features, trending aggregators, or targeted advertisements to steer online discourse. The problem is compounded by regulatory provisions that often struggle to enforce the removal of antagonistic operators and content. To meet these challenges, we must understand the online and offline anatomy of our social media ecosystem.
None of this is to say that bots pose less risk than expected. As the examples cited at the beginning of this article show, that is certainly not the case for many companies, organizations, and national governments. And as hyper-realistic generative AI continues to advance, the threat posed by digital technologies deployed to disrupt and manipulate is growing exponentially.
Instead, the message is that bots are only one part of a complex landscape of information manipulation activity and risk. Bots might grab the headlines, but human threat actors skilled at narrative manipulation pose just as significant a threat to digital security, albeit in a different form. Understanding the full range of digital threat actors, along with their strengths, weaknesses, and motivations, is paramount to guarding against them.
To learn more about how Blackbird.AI can help you with election integrity, book a demo.