Indian Voters Inundated with Deepfakes During the Largest Democratic Exercise in the World

Politicians exploited AI tools to create propaganda and further campaign efforts in the 2024 Indian general election, and this was not India’s first election targeted by deceptive technology.

Emily Kohlman

Editorial Note: Every link in this story was checked by Compass by Blackbird.AI.

Deepfakes are making waves this year, aiming to influence voters casting ballots in over 50 elections, and India’s general election – the largest in the world – was no exception. In India, where nearly a billion people are eligible to vote, creating political deepfakes is a booming business. Creators of AI-generated audio and visual content worked tirelessly throughout India’s seven-phase election from April 19 to June 1 to produce material for politicians across political parties. For synthetic media creators in India, political deepfakes are a lucrative source of income, wielding significant power due to their extensive reach. More than 75% of Indians were exposed to political deepfakes during the general election, and nearly one in four believed AI-generated content was real.

These statistics underscore the harm that AI-generated deepfakes can cause the general populace by adding a layer of complexity to spotting and combatting election-related disinformation. Blackbird.AI’s Constellation Narrative Intelligence Platform analyzed millions of social media conversations around the Indian general election, uncovering the influence of deepfakes, and Compass by Blackbird.AI provided necessary context by checking claims.

Generative AI has been a notable factor in many elections this year, but the widespread use of deepfakes during the general election in the world’s largest democracy stood out. With over 460 million social media users, India has a vast online landscape that is vulnerable to manipulation. In such a highly connected community, false claims and deepfakes spread rapidly, making it even more challenging to discern what is real and why certain narratives are being propagated.

LEARN MORE: Use Case: Why Government Leaders and Policymakers Need Narrative Risk Intelligence

Leveraging Deepfakes as Campaign Tools

During India’s general election, politicians in incumbent Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP) and opposition parties leaned on audio and visual deepfakes to boost their campaigns and reach voters. In a country with over 120 major languages and thousands of regional dialects, deepfake clones offered Indian politicians a way to communicate with voters in languages they do not speak. Politicians also used cloned pictures and videos to create holographic avatars that delivered personalized messages to voters.

This election was not India’s first experience using innovative technology as a campaign tool. In 2014, when the BJP first came to power, the party projected 3D holograms of Modi at 100 campaign rallies simultaneously. During the Delhi legislative assembly elections in 2020, the BJP used deepfake technology to create videos of their president speaking in different languages to share on social media, reaching around 15 million people.

Political campaigns also employed deepfakes to bring back former leaders and political figures who have died in order to rally supporters and persuade voters. The Dravida Munnetra Kazhagam (DMK) party used deepfake technology to resurrect former party leader Muthuvel Karunanidhi – who passed away in 2018 – to deliver campaign messages in support of his son, M.K. Stalin.

LEARN MORE: How Compass by Blackbird.AI Uses Generative AI to Help Organizations Fight Narrative Attacks


AI-Driven Misinformation Narratives

As deepfakes circulated to millions of voters, AI-powered misinformation narratives gained traction. AI-generated content had seemingly few limits in India – at least until AI tools insinuated an association between Modi and fascism. In response, the Indian Ministry of Electronics and Information Technology issued an advisory requiring tech companies to seek government approval before launching new generative AI tools.

Modi, who was just re-elected to his third term as prime minister, has been in power for a decade. Around 640 million people voted in the election, which was held in seven phases over the course of six weeks, with final results declared on June 4. Modi’s BJP emerged as the single largest party but fell short of a majority on its own, winning the election through the National Democratic Alliance (NDA). The opposition INDIA alliance, led by the Indian National Congress (INC), performed better than exit polls had projected. Even though the BJP secured top cabinet positions, Modi’s re-election is tempered by the fact that he now remains in power only through the support of a coalition.

The BJP’s IT cell – a department that manages social media campaigns for the party and its members – relied on various social media platforms to spread misinformation to voters, including false claims about party achievements, in an attempt to influence the election outcome. The BJP, along with other political parties, also leaned on AI-powered chatbots to call constituents using the cloned voices of political leaders.

Deepfakes of prominent Bollywood actors, including Aamir Khan and Ranveer Singh, were used to further misinformation narratives during the election by falsely depicting them as making political endorsements and criticizing Modi. The false narratives arising from the deepfakes continued to spread even though the actors submitted police complaints stating that the viral videos were made without their consent.

LEARN MORE: Misinformation and Disinformation Attack Readiness Assessment

The Decline of Press Freedom and the Rise of False Claims

After Russia invaded Ukraine in 2022, Modi made calls to Russian President Putin and Ukrainian President Zelensky to ensure the safe evacuation of Indian students. This spurred narratives claiming that Modi “paused” the war to facilitate the evacuation of his country’s citizens. Attempting to boost Modi’s image, the BJP has widely circulated this narrative during election campaigns, even after the Indian Ministry of External Affairs denied assertions that Russia stopped the war specifically for the evacuation of Indian students. As part of its 2024 general election campaign, the BJP furthered the narrative by sharing a video depicting a woman crying tears of joy and exclaiming that Modi had stopped the war in Ukraine.

In the ten years Modi has been in power, he has rarely addressed the public outside of campaign rallies. In 2019, he held his only press conference in India and deferred all questions to a colleague. Since Modi came to power in 2014, India has slipped in press freedom rankings. According to the 2024 World Press Freedom Index, India now ranks 159 out of 180 countries. The continuing decline of press freedom in India further increases people’s susceptibility to AI-generated content used to spread disinformation and deepens an already splintered information environment.

Religion has long been a divisive issue in Indian politics. While nearly 80% of the population practices Hinduism, India has the third-largest Muslim population in the world. One of India’s most prominent Muslim politicians, Asaduddin Owaisi, was depicted in a deepfake singing devotional Hindu songs, illustrating how AI-generated content was used as a tactic to mock political opponents.

Modi’s BJP is often associated with Hindu nationalism. The BJP’s IT cell has spread posts aimed at exploiting the fears of India’s Hindu majority across social media platforms and has been accused of disseminating misogyny, Islamophobia, and hatred. During one of Modi’s campaign speeches for this election, he referred to Muslims as “infiltrators” and suggested that his opposition would redistribute wealth to Muslims if elected, likely a strategy to consolidate the Hindu vote by portraying Muslims as a threat. Social media narratives carrying substantial negative sentiment and anger were detected in response to these statements.

LEARN MORE: How to Combat Misinformation and Disinformation: Lessons from a Social Media Trust and Safety Expert

This graph from Blackbird.AI’s Constellation Narrative Intelligence Platform visualizes networked interactions between narratives, hashtags, and URLs discussing hate speech related to the BJP or Modi, colorized on a white-to-red gradient based on negative sentiment.

The Way Forward

Deepfakes have been used in general elections around the world to support candidates as well as to harm them – but always to try to influence voters. The convenience of AI tools and the lack of comprehensive regulation in countries like India make it challenging to control the proliferation of deepfakes. When left unchecked, AI-generated content contributes to a complex media environment where distinguishing real from fake becomes increasingly difficult. With the power to deceive and manipulate, deepfakes pose a real threat to democratic processes.

AI-powered disinformation narratives have the potential to spread rapidly, especially in countries like India that have significant social media communities. Blackbird.AI’s Constellation Narrative Intelligence Platform and Compass by Blackbird.AI can track these damaging disinformation narratives arising from deepfakes and provide crucial context for informed decision-making.

To learn more about how Blackbird.AI can help you with election integrity, book a demo.

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Our AI-driven Narrative Intelligence Platform identifies key narratives that impact your organization and industry, the influence behind them, the networks they touch, the anomalous behavior that scales them, and the cohorts and communities that connect them. This information enables organizations to proactively understand narrative threats as they scale and become harmful, supporting better strategic decision-making. A diverse team of AI experts, threat intelligence analysts, and national security professionals founded Blackbird.AI to defend information integrity and fight a new class of narrative threats. Learn more at Blackbird.AI.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.