Depending on whom you ask in politics, the sudden advances in artificial intelligence will either transform American democracy for the better or bring about its ruin. At the moment, the doomsayers are louder. Voice-impersonation technology and deep-fake videos are scaring campaign strategists, who fear that their deployment in the days before the 2024 election could decide the winner. Even some AI developers are worried about what they’ve unleashed: Last week the CEO of the company behind ChatGPT practically begged Congress to regulate his industry. (Whether that was genuine civic-mindedness or self-serving performance remains to be seen.)
Amid the growing panic, however, a new generation of tech entrepreneurs is selling a more optimistic future for the merger of AI and politics. In their telling, the awesome automating power of AI has the potential to achieve in a few years what decades of attempted campaign-finance reform have failed to do—dramatically reduce the cost of running for election in the United States. With AI’s ability to handle a campaign’s most mundane and time-consuming tasks—think churning out press releases or identifying and targeting supporters—candidates would have less need to hire high-priced consultants. The result could be a more open and accessible democracy, in which small, bare-bones campaigns can compete with well-funded juggernauts.
Martin Kurucz, the founder of a Democratic fundraising company that is betting big on AI, calls the technology “a great equalizer.” “You will see a lot more representation,” he told me, “because people who didn’t have access to running for elected office now will have that. That in and of itself is huge.”
Kurucz told me that his firm, Sterling Data Company, has used AI to help more than 1,000 Democratic campaigns and committees, including the Democratic Congressional Campaign Committee and now-Senator John Fetterman, identify potential donors. The speed with which AI can sort through donor files meant that Sterling was able to cut its prices last year by nearly half, Kurucz said, allowing even small campaigns to afford its services. “I don’t think there have ever been this many down-ballot candidates with some level of digital fundraising operation,” Kurucz said. “These candidates now have access to a proper campaign infrastructure.”
Campaigns big and small have begun using generative-AI software such as ChatGPT and DALL-E to create digital ads, proofread, and even write press releases and fundraising pitches. A handful of consultants told me they were mostly just experimenting with AI, but Kurucz said that its influence is more pervasive. “Almost half of the first drafts of fundraising emails are being produced by ChatGPT,” he claimed. “Not many [campaigns] will publicly admit it.”
The adoption of AI may not be such welcome news, however, for voters who are already sick of being bombarded with ads, canned emails, and fundraising requests during election season. Advertising will become even more hyper-targeted, Tom Newhouse, a GOP strategist, told me, because campaigns can use AI to sort through voter data, run performance tests, and then create dozens of highly specific ads with far fewer staff. The shift, he said, could narrow the gap between small campaigns and their richer rivals.
But several political consultants I spoke with were skeptical that the technology would democratize campaigning anytime soon. For one, AI won’t aid only the scrappy, underfunded campaigns. Deeper-pocketed organizations could use it to expand their capacity exponentially, whether to test and quickly produce hundreds of highly specific ads or to pinpoint their canvassing efforts in ways that widen their advantage.
Amanda Litman, the founder of Run for Something, an organization that recruits first-time progressive candidates, told me that the office seekers she works with aren’t focused on AI. Hyperlocal races are still won by the candidates who knock on the most doors; robots haven’t taken up that task, and even if they could, who would want them to? “The most important thing for a candidate is the relationship with a voter,” Litman said. “AI can’t replicate that. At least not yet.”
Although campaigns have started using AI, its impact—even to people in politics—is not always apparent. Fetterman’s Pennsylvania campaign worked with Kurucz’s AI-first firm, but two former advisers to Fetterman scoffed at the suggestion that the technology contributed meaningfully to his victory. “I don’t remember anyone using AI for anything on that campaign,” Kenneth Pennington, a digital consultant and one of the Fetterman campaign’s earliest hires, told me. Pennington is a partner at a progressive consulting firm called Middle Seat, which he said had not adopted the use of generative AI in any significant way and had no immediate plans to. “Part of what our approach and selling point is as a team, and as a firm, is authenticity and creativity, which I think is not a strong suit of a tool like ChatGPT,” Pennington said. “It’s robotic. I don’t think it’s ready for prime time in politics.”
If AI optimists and pessimists agree on anything, it’s that the technology will allow more people to participate in the political process. Whether that’s a good thing is another question.
Just as AI platforms could allow, say, a schoolteacher running for city council to draft press releases in between grading papers, so too can they help a far-right activist with millions of followers create a semi-believable deep-fake video of President Joe Biden announcing a military draft.
“We’ve democratized access to the ability to create sophisticated fakes,” Hany Farid, a digital-forensics expert at UC Berkeley, told me.
Fears over deep-fakes have escalated in the past month. In response to Biden’s formal declaration of his reelection bid, the Republican National Committee released a video that used AI-generated images to depict a dystopian future. Within days, Democratic Representative Yvette Clarke of New York introduced legislation to require political ads to disclose any use of generative AI (a disclosure the RNC ad did include). Early this month, the bipartisan American Association of Political Consultants issued a statement condemning the use of “deep-fake generative AI content” as a violation of its code of ethics.
Nearly everyone I interviewed for this story expressed some degree of concern over the role that deep-fakes could play in the 2024 election. One scenario that came up repeatedly was the possibility that a compelling deep-fake could be released on the eve of the election, leaving too little time for it to be widely debunked. Clarke told me she worried specifically about a bad actor suppressing the vote by releasing invented audio or video of a trusted voice in a particular community announcing a change or closure of polling sites.
But the true nightmare scenario is what Farid called “death by a thousand cuts”—a slow bleed of deep-fakes that destroys trust in authentic sound bites and videos. “If we enter this world where anything could be fake, you can deny reality. Nothing has to be real,” Farid said.
This alarm extends well beyond politics. A consortium of media and tech companies is advocating for a global set of standards for the use of AI, including efforts to authenticate images and videos as well as to identify, through watermarks or other digital fingerprints, content that has been generated or manipulated by AI. The group is led by Adobe, whose Photoshop helped popularize computer-image editing. “We believe that this is an existential threat to democracy if we don’t solve the deep-fake problem,” Dana Rao, Adobe’s general counsel, told me. “If people don’t have a way to believe the truth, we’re not going to be able to decide policy, laws, government issues.”
Not everyone is so concerned. As vice president of the American Association of Political Consultants, Larry Hyuhn helped draft the statement that the organization put out denouncing deep-fakes and warning its members against using them. But he’s relatively untroubled by the threats they pose. “Frankly, in my experience, it’s harder than everyone thinks it is,” said Hyuhn, whose day job is providing digital strategy to Democratic clients who include Senate Majority Leader Chuck Schumer. “Am I afraid of it? No,” Hyuhn told me. “Does it concern me that there are always going to be bad actors doing bad things? That’s just life.”
Betsy Hoover, a former Obama-campaign organizer who now runs a venture-capital fund that invests in campaign tech, argued that voters are more discerning than people give them credit for. In her view, decades of steadily more sophisticated disinformation campaigns have conditioned the electorate to question what they see on the internet. “Voters have had to decide what to listen to and where to get their information for a really long time,” she told me. “And at the end of the day, for the most part, they’ve figured it out.”
Deep-fake videos are sure to get more convincing, but for the time being, many are pretty easy to spot. Those that impersonate Biden, for example, do a decent job of capturing his voice and appearance. But they make him sound slightly, well, younger than he is. His speech is smoother, without the verbal stumbles and stuttering that have become more pronounced in recent years. The technology “does require someone with some real skill to make use of,” Hyuhn said. “You can give me a football; I still can’t throw it 50 yards.”
The same limitations apply to AI’s potential for revolutionizing campaigns, as anyone who’s played around with ChatGPT can attest. When I asked ChatGPT to write a press release from the Trump campaign announcing a hypothetical endorsement of the former president by his current Republican rival, Nikki Haley, within seconds the bot delivered a serviceable first draft that accurately captured the format of a press release and made up believable, if generic, quotes from Trump and Haley. But it omitted key background information that any junior-level staffer would have known to include—that Haley was the governor of South Carolina, for example, and then served as Trump’s ambassador to the United Nations.
Still, anyone confident enough to predict AI’s impact on an election nearly a year and a half away is making a risky bet. ChatGPT didn’t even exist six months ago. Uncertainty pervaded my conversations with the technology’s boosters and skeptics alike. Pennington told me to take everything he said about AI, both its promise and its peril, “with a grain of salt” because he could be proved wrong. “I think some people are overhyping it. I think some people are not thinking about it who should be,” Hoover said. “There’s a really wide spectrum because all of this is just evolving so much day to day.”
That constant and rapid evolution is what sets AI apart from other technologies that have been touted as democratic disrupters. “This is one of the few technologies in the history of planet Earth that is continuously and exponentially bettering itself,” Kurucz, Sterling’s founder, said. Of all the predictions I heard about AI’s impact on campaigns, his were the most assured. (Because AI forms the basis of his sales pitch to clients, perhaps his prognostication, too, should be taken with a grain of salt.) Although he was unsure exactly how fast AI could transform campaigns, he was certain it would.
“You no longer need average people and average consultants and average anything,” Kurucz said. “Because AI can do average.” He compared the skeptics in his field to executives at Blockbuster who passed on the chance to buy Netflix before the start-up eventually destroyed the video-rental giant. “The old guard,” Kurucz concluded, “is just not ready to be replaced.”
Hoover offered no such bravado, but she said Democrats in particular shouldn’t let their fears of AI stop them from trying to harness its potential. “The genie is out of the bottle,” she said. “We have a choice, then, as campaigners: to take the good from it and allow it to make our work better and more effective, or to hide under a rock and pretend it’s not here, because we’re afraid of it.”
“I don’t think we can afford to do the latter,” she added.