Navigating the Warped Realities of Generative AI

Sarah Boutboul

Deep fakes – a combination of “deep learning” and “fake” – refer to manipulated media generated using artificial intelligence. The term first appeared in November 2017, when a Reddit user introduced r/deepfakes, a subreddit dedicated explicitly to creating and spreading AI-edited videos that inserted female celebrity faces into pornographic content. Users relied on an algorithm that combined one person’s facial features with another’s body, creating a realistic-looking video. According to a 2019 study by Deeptrace Labs, non-consensual deep fake pornography accounted for 96% of all deep fake content available online, overwhelmingly targeting and damaging the reputation and credibility of women.

Although the subreddit was banned in February 2018, deep fake technology has continued to evolve and is increasingly used in political contexts. For example, deep fake videos of prominent activists and politicians supporting Hong Kong protesters circulated widely on social media to discredit the movement. Around the same time, a video of House Speaker Nancy Pelosi, slowed down to make her appear intoxicated, was retweeted by former President Donald Trump.

LEARN MORE: What Is A Narrative Attack?

THE NEXT EVOLUTION OF DEEP FAKES

The danger posed by deep fakes has increased considerably with the development of Generative AI, which uses machine learning models to autonomously create new data such as images, videos, and text. These models consume large datasets, learn to recognize the patterns within them, and then apply those patterns to produce new content that looks realistic but is entirely fabricated.
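To illustrate how little effort this now takes, here is a minimal text-to-image sketch. It assumes the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint, neither of which is referenced in this article; the model name and prompt are illustrative only.

```python
# Minimal text-to-image sketch (assumes the Hugging Face `diffusers` library
# and a public Stable Diffusion checkpoint; both are illustrative choices).
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained model that has already learned the visual patterns
# of millions of real photographs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A single sentence is enough to produce a photorealistic but fabricated image.
image = pipe("photorealistic portrait of a person who does not exist").images[0]
image.save("synthetic_portrait.png")
```

A few lines like these, run on consumer hardware or a free cloud notebook, are all that separates a written idea from a convincing synthetic photograph.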

“Traditional” deep fake imagery was often limited in scope, technically crude, difficult to access, and not widely available to the average user. In comparison, Generative AI offers its users endless possibilities, generating visual content that is instant, affordable, accessible, and realistic. As the technology becomes increasingly accurate, generated images can fabricate their own context, a significant asset for malign actors. From this perspective, the tool is the next evolution of the traditional deep fake threat.

Online communities have been a driving force in spreading deep fakes since they first took hold in the digital ecosystem, serving as an entry point and collaborative space for interested users. These communities can now leverage Generative AI to create and distribute convincing fake images and quickly disseminate them on social media platforms, potentially causing significant harm to individuals and organizations.

As the technology evolves, the number of individuals using it to create malicious content will likely increase. Generative AI imagery is not yet subject to strict regulatory requirements, such as a label explicitly highlighting its synthetic nature. At the same time, users are not always aware of the latest developments in AI capabilities, making them easier to deceive. ChatGPT can also help write prompts, generating text that guides image models toward more accurate, realistic, and therefore harder-to-detect imagery. In short, Generative AI’s mainstream accessibility amplifies the once-theoretical deep fake threat, enabling rapid, low-cost, easy-to-use content creation without traditional attribution trails – a significant challenge to digital security and trust.

NO MORE UNCANNY VALLEY

The “uncanny valley” describes the discomfort people feel when confronted with artificial representations, such as robots or computer-generated images, that closely resemble humans but retain slight imperfections. The sensation occurs when a representation is almost, but not quite, human, making it unsettling. The most significant danger posed by Midjourney v5, one of the most prominent AI image-generation models, is the difficulty of distinguishing Generative AI-created content from human-made content without proper verification. According to a Syzygy Group survey of public perception of Generative AI in Germany, 94% of respondents believe AI-generated images are so sophisticated that they are difficult to distinguish from human-made content. Similarly, only 8% could identify a photo of a real person among AI-generated images of fake individuals. This concerning trend shows the influence of Generative AI on our perception of available information, as users may be unable to distinguish authentic content from manipulated content.

Efforts are underway to build parallel technology that can reliably recognize Generative AI content, but such initiatives lag behind the speed of the tools’ development. Writer offers a free service that estimates the percentage of human-generated content in a text. Image forensics can analyze an image’s metadata for signs of manipulation, and machine learning classifiers trained on large datasets of Generative AI output can flag newly generated imagery. However, no technology can yet reliably recognize all instances of Generative AI content.
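As a concrete example of the metadata-forensics approach mentioned above, the sketch below inspects an image file for simple provenance signals. It is only an illustrative heuristic built on the Pillow imaging library (an assumption, not a tool named in the article): missing camera EXIF data or embedded generator text are weak hints, not proof, of synthetic origin.

```python
# Illustrative metadata check (assumes the Pillow library); these signals are
# heuristics only and can be stripped or forged trivially.
from PIL import Image, ExifTags

def inspect_image(path: str) -> dict:
    img = Image.open(path)
    # Map numeric EXIF tag IDs to readable names.
    exif = {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in img.getexif().items()}
    return {
        # Genuine photos usually carry camera make/model and a capture time.
        "has_camera_exif": any(k in exif for k in ("Make", "Model", "DateTimeOriginal")),
        "software_tag": exif.get("Software"),
        # Some generators embed prompts or settings as text chunks.
        "text_chunks": {k: v for k, v in img.info.items() if isinstance(v, str)},
    }

if __name__ == "__main__":
    print(inspect_image("suspect_image.png"))
```

In practice such checks are easily defeated by re-saving or re-encoding a file, which is one reason classifier-based detection is pursued in parallel.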

Difficulties differentiating authentic images from Generative AI-generated ones could pose a significant challenge for the upcoming 2024 presidential elections, as the technology will likely be deployed as a political tool to spread digital disinformation. Deep fake imagery that is difficult to debunk could be exploited to drive a narrative and deceive voters while creating an opportunity for conspiracy theories to take root. Malicious actors might allege that credible media content is itself a deep fake – now that the line between genuine and fabricated is blurred – which could erode trust in all media, already at a historically low level. In a similar vein, AI-generated content could be used to deceive journalists, leading to the spread of fake news and increased reputational harm. In addition, deep fakes created for satirical purposes may be exploited to mislead users. For example, a Generative AI deep fake video created for entertainment could be taken out of context and disseminated as real information, leading to confusion, misinterpretation, and increased societal polarization.

FROM POLARIZING POLITICS TO ALTERNATIVE HISTORY AND FABRICATED NATURAL DISASTERS

Online factions compete to concoct the most outlandish and high-risk scenarios using advanced technologies, pushing the boundaries of AI-generated content and reshaping our perception of reality. There are many examples of fake Generative AI images created for different use cases, as seen on platforms such as Reddit and its online communities.

Some of the most viral fake Generative AI images depict the arrest of former US President Donald Trump, Melania Trump and Stormy Daniels laughing together, and Trump playing guitar in prison, all in the context of his much-anticipated arraignment.

A Midjourney image of former US President Donald Trump playing guitar in prison
(By u/Pashini90 on r/midjourney).

Other fake images show US President Biden’s administration partying at the White House, Trump on vacation in China, or even Biden and Russian President Vladimir Putin shaking hands.

A Midjourney image of US President Joe Biden and Russian President Vladimir Putin shaking hands
(By u/wtfmanuuu on r/midjourney)

The popularity of Generative AI imagery has spread worldwide, as evidenced by the viral photorealistic deep fake of French President Emmanuel Macron’s arrest amid ongoing protests in the country. While these fake images are often created for satirical purposes, they can fuel public panic and create confusion or distrust.

A Midjourney image of French President Emmanuel Macron getting arrested
(By @HologramBlues on Twitter)

The creation of fake Generative AI imagery is not limited to the political sphere. Other examples include content depicting UFOs and aliens in various contexts: on Mars, meeting the Pope, or even President Joe Biden appearing as an alien himself. The latter iteration, which features manipulated video and voice technologies, could be used to spread false information about the President and undermine public confidence in the government.

An AI generated image of Pope Francis wearing a puffy jacket
(by Pablo Xavier)

A Midjourney image of the Pope looking at a UFO flying over the Vatican
(By u/charismactivist on r/midjourney)

In addition to Generative AI deep fake images of political figures, reimagined historical events, and UFOs, imagery of fake earthquakes and tsunamis has garnered significant attention. Hyper-realistic deep fakes of natural hazards could breed distrust and skepticism toward genuine disaster reporting, hampering response efforts as real events occur.

Midjourney images depicting the historic Moon landing as staged
(By u/FineWithIX on r/midjourney)

A Midjourney image of infrastructure destroyed by an earthquake along the US and Canadian Pacific Coast
(By u/Arctic_Chilean on r/midjourney)

A Midjourney image of disaster relief personnel and infrastructure destroyed by an earthquake along the US and Canadian Pacific Coast
(By u/Arctic_Chilean on r/midjourney)

To showcase how easy it is to create deep fake imagery by combining readily available, interconnected AI technologies, Blackbird built its own narrative around a disaster that never happened. Our team first asked ChatGPT for a detailed summary of an oil spill scenario and then fed it to the popular Generative AI image creator Midjourney. The prompt, “Devastated news anchor reporting live from Hawaii oil spill, a massive slick of black oil spread across the sea, dead animals, real-life photography,” generated four incredibly detailed and realistic images.
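The first step of that workflow can itself be scripted. The sketch below, which assumes the openai Python client and an illustrative model name, asks a chat model to draft an image prompt from a scenario description; because Midjourney exposes no official public API, the resulting prompt would then be pasted into the image generator by hand.

```python
# Sketch of the prompt-drafting step (assumes the `openai` Python client >= 1.0;
# the model name is illustrative). OPENAI_API_KEY is read from the environment.
from openai import OpenAI

client = OpenAI()

scenario = "A fictional oil spill off the coast of Hawaii, reported live on television."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You write short, vivid prompts for a text-to-image model."},
        {"role": "user",
         "content": f"Write a one-sentence photorealistic image prompt for: {scenario}"},
    ],
)

# Paste the generated prompt into an image generator to complete the workflow.
print(response.choices[0].message.content)
```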

Midjourney images displaying journalists reporting on an oil spill in Hawaii
(Generated by the Blackbird team)

The ease with which both the prompts and images can be created reinforces concerns about misuse of the technology. In our example, the generated content could be used to spread false claims that an oil spill had occurred in Hawaii, triggering outrage, boycotts, and more. And even once such images are debunked, they can erode public trust in reports of similar incidents in the future. We have entered an information environment in which Generative AI enables threat actors to effortlessly create countless variations of events, turning violence, war crimes, and disasters into media-rich, distorted narratives with nothing more than a well-crafted prompt.

The proliferation of Generative AI raises various legal and security concerns, from intellectual property to legal liability. This section provides a brief overview of the implications of Generative AI for defamation and gendered harm, copyright law, legal evidence and admissibility, criminally obscene imagery, corporate brand risk, and extremist propaganda.

Defamation & Libel
Generative AI may have significant implications for defamation law, raising urgent questions about who can or should be held liable for creating and disseminating defamatory content. Defamation occurs when a false statement of fact is communicated to a third party, causing harm to the reputation of the person or entity being defamed. With the rise of Generative AI, creating and disseminating false and defamatory content like deep fakes and synthetic text is becoming increasingly accessible. This creates significant risks for individuals and organizations seeking to protect their reputations.

For example, Brian Hood, the mayor of Hepburn Shire, Australia, is considering a defamation lawsuit against OpenAI after its chatbot, ChatGPT, allegedly shared false claims about his criminal record. If the case goes to court, ChatGPT would become the first Generative AI product targeted by a defamation suit. Hood’s lawyers sent OpenAI a letter of concern, giving the company 28 days to fix the errors or face a defamation suit. The case could extend defamation law into the new territory of artificial intelligence and automated publication.

Copyright Infringement

Screenshots from “Zarya of the Dawn,” a comic written by a human author but illustrated by Midjourney
(Comic by Kris Kashtanova)

Using Generative AI to create images and videos also raises significant copyright questions. Copyright law grants the owner of a copyrighted work exclusive rights to control its reproduction, distribution, and display. When Generative AI is used to create an image or video, questions arise about who holds the copyright in the resulting work. In some cases, the copyright might belong to the person who created the Generative AI algorithm; in others, it may belong to the person who provided the input data to the algorithm.

Another copyright issue arises when Generative AI is used to create works substantially similar to existing works, raising the question of whether the new work infringes the existing work. Courts have held that a work can infringe on a copyrighted work if it is substantially similar to the original work and if the defendant had access to it. This raises the question of whether a Generative AI algorithm trained on existing works can create new works substantially similar to those existing works and, if so, whether such works would be infringing.

In February 2023, the US Copyright Office determined that illustrations created by the diffusion model Midjourney for a comic book (pictured above) were not protected by copyright law. While the author retained rights to the human-written text, the AI-generated illustrations received no intellectual property protection. The decision will strongly influence how artists and companies approach the creative use of AI-generated images going forward.

Legal Evidence & Admissibility Implications
Attorneys Matthew Ferraro and Brent Gurney suggest that defendants are increasingly likely to invoke deep fakes and other forms of AI-manipulated media to cast doubt on the reliability of video evidence at trial. As deep fakes become more realistic and harder to detect, the risk grows that falsified evidence will find its way into the legal record and produce unjust results. The existence of convincing deep fakes also makes it more likely that a defendant will challenge the integrity of genuine evidence, even without reasonable suspicion of inauthenticity. This “liar’s dividend,” in which the mere possibility of fabrication allows authentic evidence to be dismissed as fake, severely threatens the judicial process.

As the visual products of Generative AI become increasingly indistinguishable from authentic images, the legal system will have to rely on “expert” witnesses to judge the authenticity of photographic and videographic evidence in the absence of robust detection technology. According to University of Waterloo professor Maura Grossman, “Neither a judge nor a jury will be in any position to believe their eyes and ears anymore when they look at evidence.” Grossman also notes the potential for inequitable treatment: “Only defendants who can afford to pay for expert analysis of the evidence will be able to get a fair trial when trying to refute deep fakes.”

Corporate Brand Risks

An AI-generated image of a nonexistent Coca-Cola product, imagining how the product
might look if the Coca-Cola Company had been founded in Japan
(By u/mrgalexey in r/midjourney)

Because brand reputation and public image are paramount to success, deep fakes and Generative AI pose significant risks to companies. These technologies allow the creation of convincing videos, photos, and audio recordings that can be used to manipulate or deceive consumers, opening up many possibilities for bad actors to damage a brand’s reputation by disseminating false information or creating content that portrays a company negatively.

One of the main concerns is that deep fakes and Generative AI can produce fake endorsements or testimonials that deceive customers and damage a brand’s reputation. These technologies can also generate fake news or other malicious content that harms a company’s image. Companies therefore need to monitor their brand reputation and take swift action to address any issues. This may involve working with legal experts, developing crisis communication plans, and investing in technologies that can help detect and mitigate the impact of deep fakes and Generative AI on their brand.

One risk is the creation of deep fake images that falsely associate a brand with a product or service it does not endorse. For example, a deep fake image could show a celebrity using a product that the brand does not manufacture or endorse. Such images can cause significant harm by associating the brand with products or services that are of poor quality or manufactured unethically.

An AI-generated mock-up of a nonexistent Bentley pick-up truck
(From imgur)

Brands also risk losing control of their product image. One popular thread in Reddit’s Midjourney forum showcases one user’s attempt to create internationally inspired Coca-Cola product designs. Another user created a gallery of McDonald’s burgers set against dirty or repulsive backgrounds, and yet another produced mockups of hypothetical vehicles from various automobile companies.

Extremist Propaganda & Imagery
As the use of Generative AI continues to expand, experts are increasingly warning of the risks posed by its use by extremist groups. With the ability to create highly realistic images, videos, and audio recordings, Generative AI has the potential to be a powerful tool for propaganda creation and distribution and narrative manipulation.

Extremist ideologies require new membership to survive. Generative AI allows extremist propagandists to produce, at scale, a broader range of materials that are more sophisticated, persuasive, and targeted than ever before. By analyzing social media data, Generative AI algorithms can create visual narratives tailored to the interests and preferences of vulnerable users whom extremist groups may seek to recruit. The generated materials may also serve to solidify and rally an existing base. On Reddit, one user in the r/midjourney forum generated a painting-like image of Hillary Clinton as a vampire holding a small child, a play on the QAnon conspiracy theory that claims the Clintons and other elite families drink the blood of children. While the user is likely not a member of an extremist group, such imagery has spurred violent incidents in the past. The user joked, “Making a start on my QAnon comic book.”

A painting-like AI-generated image of Hillary Clinton as a vampire holding a crying child,
referencing popular QAnon conspiracy theories
(By u/asjarra in r/midjourney)

Extremists are also likely to exploit Generative AI to create deep fake videos containing content that may be damaging to social and political opponents.

MITIGATING WARPED REALITY RISKS

We are witnessing rapid advancements in Generative AI and AGI development at breakneck speed, with little thought for the implications for society. Social media platforms have been around for over a decade yet have failed to moderate even human-speed discourse and its related harms. Now, with readily available technology that can generate unlimited narratives and media, we risk warping reality in previously unimaginable ways.

From a cybersecurity perspective, Generative AI poses unprecedented societal and national security risks. Threat actors now have a potent tool enabling everything from disinformation generation to malicious code creation. Furthermore, the popularity of these tools across all modes of work creates massive exposure, as enterprise strategy, intellectual property, and other confidential data are fed into large language models with unknown or ever-changing privacy policies.

If we continue to overlook the influence of AI programming on decision-making processes, or the risk of centralized technologies compromising personal data privacy, the unchecked use of Generative AI tools could dramatically alter our perception of reality. To avoid a distorted reality, we must evaluate multiple scenarios and weigh the costs and benefits of AI disruption.

Most notably, the difficulty of distinguishing between content created by Generative AI and human-made products, as well as the ability of deep fakes to reinforce erroneous belief systems, are major concerns. This specific issue has the potential to compromise the integrity of elections, fuel false narratives, and contribute to an increased lack of trust in the media, public discourse, and, ultimately, democracy.

Threat mitigation measures include, but are not limited to, careful vigilance in sourcing and identifying deep fake images, solid cross-industry collaboration, and research investments to better monitor and understand these new tools. While current technology may not be able to detect Generative AI images systematically, companies must equip themselves with technologies that can monitor and analyze the spread of specific campaigns using AI-generated content to mitigate the impact of disinformation. By tracking how far and how fast narratives containing manipulated images spread, stakeholders can effectively tackle this growing class of threats and stay ahead of a fast-evolving tradecraft designed to shift human perception.

To learn more about how Blackbird.AI can help you with election integrity, book a demo.

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Our AI-driven Narrative Intelligence Platform identifies key narratives that impact your organization or industry, the influence behind them, the networks they touch, the anomalous behavior that scales them, and the cohorts and communities that connect them. This information enables organizations to proactively understand narrative threats as they scale and become harmful, supporting better strategic decision-making. A diverse team of AI experts, threat intelligence analysts, and national security professionals founded Blackbird.AI to defend information integrity and fight a new class of narrative threats. Learn more at Blackbird.AI.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.