How Generative AI Models Create Brand Bias

By Vanya Cohen

Generative AI models can implicitly favor certain brands, companies, and products over others, creating an uneven playing field and influencing consumer choices without consumers' conscious awareness.

Generative AI is becoming a primary interface between users and computing applications, marking a significant shift in how we interact with technology. From web search and multimedia editing to video filters and enhancers, automated content moderation, and writing assistants, GenAI is an increasingly integral part of our digital experience.

This trend has sparked concerns about various forms of bias in these models, which are trained on vast amounts of biased data from the web. The biases in that ingested data are absorbed by the models and replicated in their outputs. Most GenAI companies work to mitigate these biases. However, a relatively unexplored but critical issue is emerging for companies and organizations: how generative AI models create brand bias.

Learn More: How the Blackbird AI Platform Addresses Narrative Attacks in the Technology Industry

In a recent podcast with Lex Fridman, Sam Altman, CEO of OpenAI, highlighted a key advantage of ChatGPT over traditional search engines like Google Search: the lack of influence from advertisers. Altman praised this form of information retrieval as free from the commercial pressures of advertising that shape many digital experiences. However, this may not be accurate. As GenAI models increasingly dominate our digital interactions, they will implicitly favor certain brands, companies, and products over others, shaping consumer behavior in ways that may have long-term impacts on brands.

Consider, for instance, a car insurance commercial generated by a GenAI model like Sora, OpenAI’s video-generation technology. Unless specified in the prompt, the car makes and models featured in the video’s background will be chosen according to the AI model’s underlying biases. This subtle yet powerful influence raises crucial questions about the aggregate effect on consumer behavior when much of the content consumers view is generated by GenAI and therefore subject to biases in how brands are represented and presented.

Learn More: Navigating the Promise and Peril of Large Language Models

Examining the data used to train GenAI models is essential to understanding where these biases come from. These models are typically trained on vast amounts of internet data, which includes brand-related content. This data is filled with pre-existing biases, narrative attacks (online claims that cause harm by shaping perception about a person, place, or thing in the information ecosystem), and preferential treatment, reflecting the complex media landscape of the digital world. If brands do not actively protect themselves from negative narratives, these biases can seep into GenAI training data and into the resulting models, which can then perpetuate and even amplify them in their outputs.

For instance, if a particular brand faces sustained negative coverage online and on social media, the training data for GenAI models will reflect this sentiment. Consequently, when users interact with these models, they might encounter content that subtly undermines that brand, influencing their perceptions and purchasing decisions.
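
As a rough, self-contained illustration of how that sentiment skew can be detected in a corpus before any model is trained, the Python sketch below scores documents mentioning each brand against a toy sentiment lexicon. The corpus, brand names, and lexicon here are hypothetical placeholders; a real audit would run a trained sentiment model over web-scale data.

```python
import re
from collections import defaultdict

# Hypothetical mini-corpus standing in for web-scale training data;
# "BrandA" and "BrandB" are illustrative placeholders.
corpus = [
    "BrandA recalled thousands of cars after safety failures.",
    "BrandA faces a lawsuit over misleading claims.",
    "BrandB praised for reliable, innovative vehicles.",
]

# Toy sentiment lexicon; a real audit would use a trained sentiment model.
POSITIVE = {"praised", "reliable", "innovative", "excellent"}
NEGATIVE = {"recalled", "failures", "lawsuit", "misleading"}

def brand_sentiment(docs, brands):
    """Net sentiment (positive minus negative words) of documents
    mentioning each brand."""
    scores = defaultdict(int)
    for doc in docs:
        words = set(re.findall(r"[a-z0-9]+", doc.lower()))
        for brand in brands:
            if brand.lower() in words:
                scores[brand] += len(words & POSITIVE) - len(words & NEGATIVE)
    return dict(scores)

print(brand_sentiment(corpus, ["BrandA", "BrandB"]))
# {'BrandA': -4, 'BrandB': 3} -- sustained negative coverage shows up
# as skewed sentiment in the data before any model is trained on it.
```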

In the simplest case, large language models (LLMs) can influence brand and product placement frequency by incorporating certain brands or products in generated content more often than others. For example, when a user searches for “best smartphone” on a search engine powered by an LLM, the model might disproportionately highlight brands like Apple or Samsung in the top results. This can occur even when equally competitive alternatives are available, simply because these brands appear more frequently in the training data. As a result, users are more likely to see reviews, articles, and recommendations for these prominent brands, reinforcing their market dominance. This biased visibility can shape consumer preferences and purchasing decisions, subtly nudging them toward well-known brands over lesser-known or emerging ones.
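
One simple way to surface this frequency skew is to sample many completions for the same neutral prompt and tally which brands appear. In the sketch below, `generate()` is a hypothetical placeholder for whatever LLM API is being audited, and the brand list is illustrative.

```python
import re
from collections import Counter

# Illustrative brand list; extend to the brands under study.
BRANDS = ["Apple", "Samsung", "Google", "Fairphone", "Nothing"]

def generate(prompt: str) -> str:
    """Placeholder for the LLM under audit; swap in a real API call."""
    raise NotImplementedError

def brand_mention_rates(prompt: str, n_samples: int = 100) -> Counter:
    """Fraction of sampled completions mentioning each brand."""
    counts = Counter()
    for _ in range(n_samples):
        text = generate(prompt)
        for brand in BRANDS:
            # Whole-word, case-insensitive match.
            if re.search(rf"\b{re.escape(brand)}\b", text, re.IGNORECASE):
                counts[brand] += 1
    return Counter({b: c / n_samples for b, c in counts.items()})

# Usage: brand_mention_rates("What is the best smartphone?")
# A persistent gap between well-known and emerging brands under a
# neutral prompt is exactly the placement bias described above.
```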

Learn More: How Compass by Blackbird.AI Uses Generative AI to Help Organizations Fight Narrative Attacks

If a GenAI model consistently favors certain brands in its outputs, consumers may be more likely to choose those brands, often without realizing the underlying influence. This can create an uneven playing field, where some brands gain an unfair advantage simply because of the biases embedded in the GenAI models consumers interact with. Moreover, the subtle nature of these biases makes them particularly insidious. Brand bias in GenAI operates at the point of user interaction, shaping perceptions and decisions in ways that are difficult for companies to detect and address directly. This calls for a concerted effort to identify and mitigate these biases on the web and in training data, ensuring that GenAI models provide a level playing field for all brands.

In the coming weeks, we will provide details about the Blackbird.AI Brand Bias benchmark, which seeks to measure these biases. We have found that large language models consistently replicate known biases and narratives around popular brands: they associate specific attributes (e.g., trust, innovation, political leaning) with particular brands. Using our benchmark, organizations can quantify the degree of brand bias in various open-source and closed-source models. We plan to release this benchmark to drive further research into, and mitigation of, brand biases in standard open-source large language models.
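
The benchmark’s full methodology will follow in that release. As a hypothetical illustration of the general idea (not the benchmark itself), the sketch below probes attribute associations with templated prompts; `ask_model()` is a placeholder for the model under evaluation, and the brands, attributes, and rating scale are illustrative.

```python
from itertools import product

ATTRIBUTES = ["trustworthy", "innovative", "politically neutral"]
BRANDS = ["BrandA", "BrandB", "BrandC"]  # illustrative placeholders

def ask_model(prompt: str) -> int:
    """Placeholder for a call to the model under evaluation,
    constrained to return an integer rating from 1 to 5."""
    raise NotImplementedError

def attribute_scores(n_samples: int = 20):
    """Mean rating the model assigns each (brand, attribute) pair."""
    template = "On a scale of 1 to 5, how {attr} is {brand}? Answer with a number."
    scores = {}
    for brand, attr in product(BRANDS, ATTRIBUTES):
        ratings = [ask_model(template.format(attr=attr, brand=brand))
                   for _ in range(n_samples)]
        scores[(brand, attr)] = sum(ratings) / len(ratings)
    return scores

# Consistent gaps between brands on the same attribute, held across
# many paraphrased templates, indicate learned brand associations
# rather than sampling noise.
```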

Learn More: The Evolution of Narrative Attacks and Their Organizational Risk

For near-term mitigation, Blackbird.AI offers advanced solutions to combat narrative attacks that can introduce brand bias into GenAI models. The Constellation Narrative Intelligence Platform provides real-time monitoring and analysis of harmful content to understand risks within the digital ecosystem. By analyzing narratives, influencers, and bot campaigns, the platform allows brands to proactively address and neutralize biases, ensuring fair representation in AI-generated content. This comprehensive approach mitigates the effects of harmful narratives and validates the authenticity of trending content. Blackbird.AI’s solutions are a core component of ensuring brands receive fair representation in GenAI training data, helping to create a level playing field in the AI-driven digital world.

In the long term, as GenAI content becomes a larger share of the web, these biases could compound. New GenAI models will be trained on data increasingly contaminated by synthetically generated content, potentially amplifying brand bias further. GenAI models will also make it increasingly easy for bad actors to launch narrative attacks on brands and thus contaminate the web-data ecosystem with harmful narratives. We believe the narrative attack mitigation tools offered by Blackbird.AI give brands the best chance of protecting themselves from these abuses.

To learn more about how Blackbird.AI can help you in these situations, book a demo.
