Election Safety in the Age of GenAI
More than four billion people exercised their right to vote in 2024, marking an unprecedented moment in the history of democracy. Elections unfolded across the globe, including in the United States, the European Union, India, Taiwan, Pakistan, and Turkey.
The massive turnouts in these elections point to a larger narrative unfolding—one that interrogates the robustness of democratic institutions at a time when the future of democracy globally seems uncertain. The decline in the number of democracies worldwide and increasing threats to the peaceful transfer of power, as seen in the US in 2021, cast a shadow over these elections.
This democratic test coincides with another crucial juncture: the emergence of Generative Artificial Intelligence (GenAI). GenAI's applications in political campaigns and electoral processes have been both innovative and controversial. From creating hyper-realistic deepfakes to automating the production of targeted propaganda, GenAI tools have the potential to significantly influence both public opinion and election outcomes.
The dichotomy of GenAI’s impact is palpable. On one hand, it offers unprecedented opportunities for enhancing political engagement, making information dissemination more efficient, and even enabling suppressed voices to circumvent censorship. On the other hand, it presents a formidable challenge to electoral integrity, with the proliferation of dis- and mis-information and the creation of deceptive media content threatening to undermine public trust in democratic processes.
The rise of enemies of democracy paired with the seemingly unstoppable acceleration of AI leaves us with a tough and discomforting question: Are democracies able to defend themselves against these challenges, or will the age of democracy remain a short-lived intermezzo in the long history of human societies?
To provide a more robust empirical foundation for answering this question, I led a research project at the Harvard Kennedy School analyzing the impact of AI on three recent elections in Argentina, Pakistan, and Taiwan. This commentary summarizes the findings and recommendations from this research, which were also published as an ebook entitled Election Safety in the Age of Generative AI in July 2024.[1]
Argentina's 2023 election became a milestone as one of the world’s first campaigns heavily influenced by AI. The competing candidates—Javier Milei and Sergio Massa—both used GenAI to shape public perception, with varying results. Massa’s team leaned on AI-generated visual content to revive Peronism for younger audiences, creating campaign images that placed Massa in heroic roles such as Indiana Jones and a Soviet-style leader. AI’s power to quickly generate engaging political narratives helped candidates mobilize their bases more efficiently than ever.
However, the election also saw AI cross ethical lines. Deepfakes circulated by both camps aimed to discredit the opposing candidate, such as a video falsely portraying Milei discussing the sale of human organs. While the campaign that circulated the footage acknowledged it was AI-generated, it was still convincing enough to stir controversy and erode public trust. This election exemplifies the ambiguity of AI: while it enhanced political messaging, it also amplified misinformation, leaving voters unsure of what to believe.
In Pakistan’s 2024 election, GenAI took on a different role. With Imran Khan imprisoned and barred from conventional campaigning, his supporters turned to AI to circumvent government censorship. Using AI-generated video clips, they recreated Khan’s speeches, enabling him to reach millions of supporters despite being physically absent. These videos, labeled as synthetic, connected with a young, tech-savvy electorate and underscored the value of AI as a democratizing force.
However, not all AI-driven content was transparent. Rival actors created fake AI messages urging voters to boycott the election, highlighting how AI can also be weaponized to confuse voters. Despite these challenges, many observers viewed Khan’s PTI Party’s use of AI as a legitimate way to counter repressive government tactics. This case underscores how GenAI can bolster democracy in authoritarian settings by offering political actors tools to bypass censorship.
Taiwan’s 2024 presidential election showed the dark side of AI when wielded by foreign actors. Chinese-backed campaigns flooded social media with GenAI-generated videos, memes, and deepfakes aimed at swaying public opinion. Some videos impersonated American officials endorsing candidates, while others criticized U.S. influence in Taiwan, sowing distrust among voters.
China’s computational propaganda campaigns blended real and fake narratives so seamlessly that even tech-savvy Taiwanese voters struggled to distinguish them. These efforts, aimed at destabilizing Taiwan’s democratic institutions, highlight GenAI’s potential as a weapon for cognitive warfare. Taiwan’s robust fact-checking organizations and media literacy efforts served as a crucial line of defense, demonstrating how public awareness can mitigate some of GenAI’s harmful effects.
The case studies from Argentina, Pakistan, and Taiwan reveal the urgent need for comprehensive but appropriate regulatory safeguards to manage AI’s impact on democracy. The thoughts below are meant to be a starting point, not an exhaustive list:
The regulation of GenAI in elections should follow a risk-based approach, similar to the EU AI Act. AI tools should be classified based on their potential impact on democratic processes. Low-risk applications, such as using AI to summarize policy positions, could remain lightly regulated. High-risk applications, like deepfakes designed to undermine public trust, should face strict oversight or even outright bans. This tiered framework ensures that regulation is proportionate, balancing innovation with electoral integrity.
An effective regulatory framework must address every stakeholder in the AI value chain to mitigate election risks:
AI technology companies: Providers of large foundation models, such as OpenAI and Stability AI, should be required to establish Know Your Customer (KYC) processes and integrate safeguards (e.g., watermarking tools) into their platforms to increase traceability.
Campaign staff: Candidates and their teams creating GenAI content should be required to reveal how they are using (Gen)AI for their campaigns. Another potential step would be for government authorities to develop a GenAI code of conduct that candidates need to abide by to partake in elections.
Social media platforms: At the distribution stage, platforms should be required to develop and use mechanisms to detect and verify watermarks. If distributors cannot verify proper labeling, the content should be removed from their platforms. Collaboration with fact-checking organizations will further enhance content integrity.
Voters and Civil Society: Media literacy campaigns should equip voters to recognize manipulated content, while watchdog organizations monitor AI’s use in elections and alert the public to potential abuses. A public registry should be made available, which would also allow whistleblowers to report anonymously.
This comprehensive, multi-stakeholder approach ensures that the regulation of GenAI addresses risks at every point in the electoral process—from content creation to distribution and voter engagement.
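To make the label-and-verify flow described above concrete, the sketch below illustrates the idea in miniature. It is purely hypothetical and not any provider's or platform's actual implementation: real provenance schemes (such as C2PA content credentials or statistical watermarks embedded in generated media) rely on public-key signatures and robust watermarking rather than the shared-secret tag used here for brevity. The provider name and key are invented for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret between an AI provider and a verification
# service. Real provenance standards use public-key signatures instead.
SECRET = b"provider-signing-key"

def label_content(content: bytes, provider: str) -> dict:
    """Creation stage: the provider attaches a provenance label
    identifying who generated the content, plus a tamper-evident tag."""
    payload = {"provider": provider,
               "sha256": hashlib.sha256(content).hexdigest()}
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["tag"] = hmac.new(SECRET, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_label(content: bytes, label: dict) -> bool:
    """Distribution stage: a platform recomputes the expected tag from
    the content it received; altered or unlabeled content fails."""
    payload = {"provider": label.get("provider"),
               "sha256": hashlib.sha256(content).hexdigest()}
    serialized = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET, serialized, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label.get("tag", ""))

clip = b"synthetic campaign video bytes"
label = label_content(clip, provider="example-genai-provider")
print(verify_label(clip, label))          # authentic, properly labeled
print(verify_label(b"tampered", label))   # altered content fails the check
```

The point of the sketch is the division of responsibility the section proposes: the creator attaches a verifiable label, and the distributor checks it before serving the content, removing anything whose labeling cannot be verified.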
Given the global nature of AI risks, international cooperation is essential. The eBook referenced above advocates for the formation of an Intergovernmental Panel on Election Safety (IPES), modeled after the IPCC for climate change and the Global Electoral Risk Management (GERM) framework. This panel would facilitate knowledge-sharing and coordination among countries to address cross-border AI-driven election risks, such as foreign interference and computational propaganda. The IPES could also set global standards for the ethical use of AI in elections, helping nations align their regulatory efforts and respond to emerging threats swiftly.
Copyright ©2024 by [Insert Copyright Holder Name Here]
This article was published in the Journal of Business and Artificial Intelligence under the "gold" open access model, where authors retain the copyright of their articles. The author grants us a license to publish the article under a Creative Commons (CC) license, which allows the work to be freely accessed, shared, and used under certain conditions. This model encourages wider dissemination and use of the work while allowing the author to maintain control over their intellectual property.
Patrick Tammer is a Harvard Kennedy School of Government alum (MPA 2022-24) whose research focused on AI and tech policy. His research topics included, among others: Employee Ownership Models to Mitigate the Economic Impacts of AI, Historical Precedents to Inform the Regulation of AI in Canada, and a Comparative Analysis of AI Regulatory Frameworks in the US, EU, and Canada.
Currently, Patrick Tammer is a Senior Investment Director at Scale AI, the Canadian AI Innovation Cluster, where he oversees a $125M+ portfolio of AI innovation projects. As an advisor to business executives and the Canadian Government, he works directly on the transformative impact of AI on our economy and society. He is also an Editor at the Journal of Business and Artificial Intelligence.
The Journal of Business and Artificial Intelligence (ISSN: 2995-5971) is the leading publication at the nexus of artificial intelligence (AI) and business practices. Our primary goal is to serve as a premier forum for the dissemination of practical, case-study-based insights into how AI can be effectively applied to various business problems. The journal focuses on a wide array of topics, including product development, market research, discovery, sales & marketing, compliance, and manufacturing & supply chain. By providing in-depth analyses and showcasing innovative applications of AI, we seek to guide businesses in harnessing AI's potential to optimize their operations and strategies.
In addition to these areas, the journal places a significant emphasis on how AI can aid in scaling organizations, enhancing revenue growth, financial forecasting, and all facets of sales, sales operations, and business operations. We cater to a diverse readership that ranges from AI professionals and business executives to academic researchers and policymakers. By presenting well-researched case studies and empirical data, The Journal of Business and Artificial Intelligence is an invaluable resource that not only informs but also inspires new, transformative approaches in the rapidly evolving landscape of business and technology. Our overarching aim is to bridge the gap between theoretical AI advancements and their practical, profitable applications in the business world.