How AI companies are reckoning with elections

The US is heading into its first presidential election since generative AI tools went mainstream. And the companies offering these tools — like Google, OpenAI, and Microsoft — have each made announcements about how they plan to handle the months leading up to it.


This election season, we’ve already seen AI-generated images in ads and attempts to mislead voters with voice cloning. The potential harms from AI chatbots aren’t as visible in the public eye — yet, anyway. But chatbots are known to confidently provide made-up facts, including in responses to good-faith questions about basic voting information. In a high-stakes election, that could be disastrous.
One plausible solution is to try to avoid election-related queries altogether. In December, Google announced that Gemini would simply refuse to answer election-related questions in the US, referring users to Google Search instead. Google spokesperson Christa Muldoon confirmed to The Verge via email the change is now rolling out globally. (Of course, the quality of Google Search’s own results presents its own set of issues.) Muldoon said Google has “no plans” to lift these restrictions, which she said also “apply to all queries and outputs” generated by Gemini, not just text.
Earlier this year, OpenAI said that ChatGPT would start referring users to CanIVote.org, generally considered one of the best online resources for local voting information. The company’s updated policy now forbids impersonating candidates or local governments using ChatGPT, and it likewise prohibits using its tools for campaigning, lobbying, discouraging voting, or otherwise misrepresenting the voting process.
In a statement emailed to The Verge, Aravind Srinivas, CEO of the AI search company Perplexity, said Perplexity’s algorithms prioritize “reliable and reputable sources like news outlets” and that it always provides links so users can verify its output.
Microsoft said it’s working on improving the accuracy of its chatbot’s responses after a December report found that Bing, now Copilot, regularly gave false information about elections. Microsoft didn’t respond to a request for more information about its policies.
All of these companies’ responses (maybe Google’s most of all) are very different from how they’ve tended to approach elections with their other products. In the past, Google has used Associated Press partnerships to bring factual election information to the top of search results and has tried to counter false claims about mail-in voting by using labels on YouTube. Other companies have made similar efforts — see Facebook’s voter registration links and Twitter’s anti-misinformation banner.
Yet major events like the US presidential election seem like a real opportunity to prove whether AI chatbots are actually a useful shortcut to legitimate information. I asked some chatbots a couple of Texas voting questions to get an idea of their usefulness. OpenAI’s ChatGPT 4 was able to correctly list the seven different forms of valid ID for voters, and it also identified that the next significant election is the primary runoff election on May 28th. Perplexity AI answered those questions correctly as well, linking multiple sources at the top. Copilot got its answers right and even went one better by telling me what my options were if I didn’t have any of the seven forms of ID. (ChatGPT also coughed up this addendum on a second try.)
Gemini just referred me to Google Search, which got me the right answers about ID, but when I asked for the date of the next election, an out-of-date box at the top referred me to the March 5th primary.
Many of the companies working on AI have made various commitments to prevent or mitigate the intentional misuse of their products. Microsoft says it will work with candidates and political parties to curtail election misinformation. The company has also started releasing what it says will be regular reports on foreign influences in key elections — its first such threat analysis came in November.
Google says it will digitally watermark images created with its products using DeepMind’s SynthID. OpenAI and Microsoft have both announced that they would use the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials to denote AI-generated images with a CR symbol. But each company has said that these approaches aren’t enough. One way Microsoft plans to account for that is through its website that lets political candidates report deepfakes.
Stability AI, which owns the Stable Diffusion image generator, updated its policies recently to ban using its product for “fraud or the creation or promotion of disinformation.” Midjourney told Reuters last week that “updates related specifically to the upcoming U.S. election are coming soon.” Its image generator performed the worst when it came to making misleading images, according to a Center for Countering Digital Hate report published last week.
Meta announced in November of last year that it would require political advertisers to disclose if they used “AI or other digital techniques” to create ads published on its platforms. The company has also banned the use of its generative AI tools by political campaigns and groups.
Several companies, including all of the ones above, signed an accord last month, promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principle goals,” like research and deployment of prevention methods, giving provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.
In January, two companies in Texas cloned President Biden’s voice to discourage voting in the New Hampshire primary. It won’t be the last time generative AI makes an unwanted appearance in this election cycle. As the 2024 race heats up, we’ll surely see these companies tested on the safeguards they’ve built and the commitments they’ve made.