Combating AI-Generated Deepfakes and Disinformation: Strategies to Restore Trust in Public Institutions

If left unchecked, AI-enabled deepfakes and narrative attacks created by misinformation and disinformation can erode public trust in institutions, fragment society, and empower authoritarian regimes—but there are solutions.

Dan Patterson

In October 2023, Dan Patterson delivered the keynote address at the European Broadcasting Union’s annual cybersecurity seminar about the dangers of AI-enabled deepfakes and disinformation. This blog post is, in part, a text version of that speech. The original can be viewed here.

Policymakers, leaders of NGOs and nonprofits, CISOs, and media professionals, in particular, must confront the perils of AI-enabled narrative attacks caused by misinformation and disinformation. Left unchecked in the hands of agenda-driven bad actors, these new AI systems create schisms in our shared reality and threaten to shake the pillars of liberal democracy.

LEARN MORE: Use Case: Why Government Leaders and Policymakers Need Narrative Risk Intelligence

The World Economic Forum ranks AI-enabled misinformation and disinformation as the top short-term global risk. Narrative attacks can create parallel realities and fracture societies by exploiting human biases, sowing confusion, and eroding trust in shared sources of truth. When false narratives spread unchecked, they can take root in echo chambers where they are reinforced and amplified, leading different population segments to believe in contradictory versions of reality. This splintering of the information landscape undermines the common ground necessary for constructive dialogue, compromise, and effective governance. As a result, societies can become increasingly polarized, with deepening divisions along political, ideological, and cultural lines. In this environment of distrust and disagreement over basic facts, the social fabric frays, leaving communities vulnerable to manipulation by bad actors seeking to further their agendas at the expense of the greater good.

Today, many CISOs and cybersecurity experts agree that cyberattacks are now inextricably linked to narrative attacks, misinformation, and disinformation, and that mitigating false narratives can be a more significant and complex challenge than recovering from a cyberattack. Advanced AI systems like OpenAI’s GPT-4 can now generate human-like text on demand for any topic. While this aids creators and lowers barriers to content production, it also means propagandists and bad actors can potentially “mass-produce” fake news articles, harmful social media posts, comments, and more to advance their agendas. Coupled with the hyper-personalization enabled by big data, this makes it possible to micro-target groups with custom-tailored disinformation at scale.

LEARN MORE: Government Leader Narrative Intelligence Datasheet

If used irresponsibly, these systems could drown the online information space in a tsunami of false narratives and misleading distortions, overpowering the voices of credible journalists and expert institutions. As 404 Media recently noted, this risks undermining public trust in real news and dividing societies into camps polarized around different sets of AI-generated “facts.”

And it’s not just text. Generative models that produce images, video, and audio have advanced rapidly in recent years, with tools like DALL-E 3, Midjourney, Suno, and ElevenLabs, among other generative AI apps, improving quickly. Like “cheap fakes,” these systems don’t need to be perfect; they only need to be good enough to shape first impressions and beliefs, which are often hard to change even when proven false.

The result may be mass confusion about what’s real, an inability to have shared truths and solve collective problems, and fertile ground for authoritarians to seize power. This is the dangerous future we may face if AI disinformation goes unchecked.

LEARN MORE: Tag Infosphere Report: How Misinformation and Disinformation Represent a New Threat Vector

Deepfake Disinformation Danger

Deepfakes represent an especially alarming AI disinformation threat: convincing forged video or audio of high-profile people saying or doing things they never actually did. Enabled by generative AI, deepfakes have rapidly improved from blurry face swaps to lifelike forgeries that are extremely difficult to detect.

Imagine a video of a politician accepting a bribe, contemptuous remarks from a celebrity, or false orders from a general, all synthesized by AI with no evidence that the event ever happened. Even when proven wrong after the fact, deepfakes can ruin careers and reputations through initial believability and sensationalism.

Bad actors also produce shallow or “cheap” fakes — simple edits like slowing down or splicing clips to remove context. These don’t require sophisticated AI but can similarly devastate targets.

The ability to fabricate events while maintaining plausibility puts dangerous power in the hands of the corrupt, who face no limits in destroying opponents. As with fabricated text, flooding the media with deepfakes could paralyze the public’s ability to discern truth from fiction.

In early 2022, for example, several mainstream social networks removed a deepfake video that purported to show Ukrainian President Volodymyr Zelensky calling on his soldiers to surrender, highlighting the growing threat posed by artificial intelligence tools that can manipulate faces and voices to create fake media. While Zelensky’s government debunked the video quickly, experts warn that deepfakes are becoming more sophisticated and could be used to spread misinformation and sow public discord. Social media platforms still struggle to detect deepfakes and contain their impact. Combating the malicious use of deepfakes and cheap fakes will require new laws, improved forensic tools, greater public awareness, and a more critical assessment of media authenticity by individuals.

Last month, political consultant Steve Kramer was indicted on 26 charges in New Hampshire and fined $6 million by the Federal Communications Commission for orchestrating a fake robocall impersonating President Biden ahead of New Hampshire’s Democratic primary in 2024. The call, created using deepfake voice technology, urged Democratic voters to stay home and save their vote for the November general election. Kramer claims he intended to highlight the need for regulation of AI in politics, but the incident has raised concerns about the threat generative AI poses to future elections, prompting calls for Congress to take immediate action against the malicious use of this technology.

LEARN MORE: How deepfakes, softfakes, and an influential social media scene shaped the Indonesian presidential election

Trust Erosion

Trust erosion poses an existential threat to liberal democracies around the world. If the public cannot believe their eyes and ears, determining truth from fiction becomes nearly impossible. This breakdown of shared reality undermines constructive debate, problem-solving, and accountability. As more voices produce AI-synthesized content, divisions emerge that are ripe for exploitation by bad actors seeking power, and real journalism risks becoming irrelevant, unable to resonate amid the distorted noise. The institutions charged with transparency in the public interest, like quality news outlets, stand to lose authority and influence.

These effects will only snowball over time as fake content acquires plausibility through repetition across media channels, and even debunking can backfire by driving more attention to false claims. The institutions charged with upholding truth and transparency in the public interest, journalism included, face the prospect of becoming irrelevant, distrusted relics in the minds of a misled populace.


LEARN MORE: How Compass by Blackbird.AI Uses Generative AI to Help Organizations Fight Narrative Attacks

The Way Forward: Solutions to Maintain Public Trust

Inoculation mechanisms against misinformation and disinformation can help protect the public. We should treat AI and social technology safety the way other industries treat safety, with regulations in place to ensure products are safe before they are released to the public. Before launching a product, companies should consider whether it might harm people.

To prepare for the EBU speech and this blog post, I interviewed dozens of AI subject matter experts and public policy officials about solutions to regain public trust in institutions, community, and liberal democracy:

  • Invest in media literacy education: Incorporate critical thinking and media literacy skills into school curricula at all levels. Teach students to identify reliable sources, fact-check claims, and recognize manipulative tactics used in disinformation campaigns. This will help create a more discerning and resilient public.
  • Encourage responsible AI research and development: Foster a culture of ethics and accountability within the AI research community. Develop best practices and guidelines for building AI systems prioritizing transparency, fairness, and robustness against misuse. Encourage researchers to consider the potential societal implications of their work.
  • Develop technology that checks the veracity of online claims: Tools like Compass by Blackbird.AI add essential context to online claims. The product functions as a “context layer for the internet,” providing clarity and context for claims, articles, social media posts, and videos. Compass checks claims and analyzes the results using Blackbird.AI’s Constellation Narrative Intelligence Platform, delivering clear, evidence-based responses by processing data from many sources in real time. The goal is to help users discern the nuances of debated topics and make informed decisions amid the flood of information online. (A simplified sketch of this kind of claim-context pipeline appears after this list.)
  • Promote cross-sector collaboration: Encourage partnerships between government, industry, academia, and civil society to address the challenges posed by AI-enabled disinformation. Share knowledge, best practices, and resources to develop comprehensive strategies and solutions.
  • Enhance platform accountability: Hold social media platforms accountable for the spread of disinformation on their networks. Encourage them to invest in content moderation, fact-checking, and algorithmic transparency. Consider legislation that requires platforms to take proactive measures against malicious actors and harmful content.
  • Protect and support journalists: Provide resources and training for journalists to help them navigate the challenges posed by AI-generated content. Fund and support investigative journalism and fact-checking initiatives that can help counter disinformation campaigns. Ensure legal protections for journalists who face harassment or threats for their work. Underwrite the cost of providing universal access to high-quality media.
  • Foster public dialogue and engagement: Encourage open, inclusive conversations about AI’s impact on society. Create forums and events where diverse stakeholders can discuss concerns, share perspectives, and explore solutions. Engage the public in these discussions to build awareness and trust.
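To make the “context layer” idea concrete, here is a minimal, hypothetical sketch of how a claim-context pipeline can work in general: gather evidence snippets from trusted sources, rank how relevant each is to the claim, and return an evidence-based summary rather than a bare true/false verdict. This is an illustration only; it is not Blackbird.AI’s Compass or Constellation code, and the keyword-overlap scoring is a toy stand-in for real retrieval and model-based analysis.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str   # publisher or outlet name
    snippet: str  # passage retrieved for the claim
    stance: str   # "supports", "contradicts", or "context"

def score_overlap(claim: str, snippet: str) -> float:
    """Toy relevance score: fraction of claim words that appear in the snippet."""
    claim_words = {w.lower().strip(".,") for w in claim.split()}
    snippet_words = {w.lower().strip(".,") for w in snippet.split()}
    return len(claim_words & snippet_words) / max(len(claim_words), 1)

def build_context(claim: str, corpus: list[Evidence], top_k: int = 3) -> str:
    """Rank evidence by relevance and assemble an evidence-based summary."""
    ranked = sorted(corpus, key=lambda e: score_overlap(claim, e.snippet), reverse=True)
    lines = [f"Claim: {claim}", "Context from retrieved sources:"]
    for ev in ranked[:top_k]:
        lines.append(f"- [{ev.stance}] {ev.source}: {ev.snippet}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Illustrative, made-up sources and snippets standing in for a real corpus.
    corpus = [
        Evidence("Example Wire", "The official transcript shows no such statement was made.", "contradicts"),
        Evidence("Example Daily", "The video circulating online was edited to remove surrounding remarks.", "context"),
        Evidence("Example Blog", "Unrelated commentary about the election schedule.", "context"),
    ]
    print(build_context("The official made the statement in the video", corpus))
```

In a production system, retrieval, stance detection, and summarization would be handled by dedicated models and vetted data sources; the value of the pattern is that users receive context and evidence alongside the claim instead of a single verdict.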

LEARN MORE: How AI-Generated Speeches And Deep Fakes Shaped The 2024 General Election In Pakistan

Almost half the world’s population will vote in over 60 elections in 2024. Once again, emerging technologies, including AI-enabled synthetic media, represent a turning point in politics, media, and culture. We can anticipate that hyper-agenda-driven bad actors will always exploit emerging technologies. However, we also have the technology and expertise to develop intelligent solutions that enable technological innovation, enhance safety, and preserve liberal democratic ideals.

To learn more about how Blackbird.AI can help you with election integrity, book a demo.

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Our AI-driven Narrative Intelligence Platform identifies key narratives that impact your organization or industry, the influence behind them, the networks they touch, the anomalous behavior that scales them, and the cohorts and communities that connect them. This information enables organizations to proactively understand narrative threats as they scale and become harmful, supporting better strategic decision-making. A diverse team of AI experts, threat intelligence analysts, and national security professionals founded Blackbird.AI to defend information integrity and fight a new class of narrative threats. Learn more at Blackbird.AI.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.