
2024: The Year of AI Elections

A novel factor in the coming year’s elections is how AI, which has improved with startling speed over the past 12 months, will upend political campaigns and voting processes.

Major elections around the world in 2024 will determine the next American president, Indian prime minister and European Parliament, among other political bodies of international consequence. It would be an unusually important year for democracy in normal times. But the barrage of crises over the past decade has led several commentators to pronounce 2024 as critical to democracy’s very survival. 

A novel factor in the coming elections is AI, and how the technology, which has improved with startling speed over the past 12 months, might, or more accurately will, upend political campaigns and voting processes. Worries about AI initially concerned its threat to human labor. Those preoccupations persist. But fears about the technology have branched out well beyond the employment sector, and contributed to the recent, dizzying chaos at OpenAI.

Talk of AI as a broad, existential threat is no longer relegated to the crackpot fringes, and you’ll be hearing a lot of it next year as elections abroad lead up to the US races on November 5. 

There is cause for serious concern, but, in the OpenAI mess of all places, glimmers of hope. 

I’ll start with the bad news. The persuasive powers of AI, and specifically of generative AI, will likely make the bots and troll farms of 2016 and 2020 look like child’s play. The technology’s ability to produce audio and video imitations of candidates is uncanny. And the production is cheap, fast and scalable, churning out viral content in minutes that expert teams of graphic designers might take weeks to complete.

Such content has already been deployed this month in elections in Argentina and the Netherlands, whose outcomes proved the durable appeal of populist right-wingers courting the fed-up and disenfranchised, whether through promised immigration crackdowns or vast economic overhauls. The Republican National Committee has used generative AI to play to voter fears, with one AI-generated video conjuring a dystopian San Francisco right out of science fiction and an explosive Chinese invasion of Taiwan. (The center-left candidate in Argentina’s elections also took advantage of generative AI, as have Democrats in the US, highlighting its bipartisan appeal.)

Beyond its use by campaigns to promote their candidates or tarnish foes, AI could undermine the voting process itself. Election officials, the targets in 2020 of several high-profile coercion attempts, will be vulnerable to phishing attacks aided by the enormous troves of public data that AI can find and synthesize, putting their confidential data at risk of exposure. Election researchers have rung the alarm about AI helping bad actors purge voter rolls. The technology might also be used to keep less-informed voters away from the polls by, for instance, spreading falsehoods about the threat of violence at polling stations.

The light-speed progress of AI cannot be undone, and further advances will surely be made before next year’s pivotal elections. Fears of the dark side of AI innovation are held by many of the people responsible for its recent strides.

Those fears were one reported factor in the ousting of OpenAI founder Sam Altman by the company’s board earlier this month. Altman was reinstated five days later after a whiplash-inducing ride that included his hiring by Microsoft and a threatened mutiny by all but 20 of OpenAI’s 770 employees if the board didn’t reverse course.

The board’s reasons for giving Altman the boot are still murky, but the industry scuttlebutt is that the move had less to do with board members’ reservations about the too-fast growth of AI capabilities than with their eroded trust in Altman as he grew less “candid” about the company’s future.

What the soap opera made clear is that the white-hot demand for AI talent has given employees of top-notch outfits like Altman’s the power to effectively hobble the boards that oversee them. It also exposed as delusional the thought that a single entity like the OpenAI board, with its stated responsibility to “humanity,” could avoid collateral damage while overseeing epochal innovation.

The OpenAI mess does offer a lesson about AI governance, one that could mitigate the damage the technology does to elections next year and beyond: that governance should fall to governments, not corporate boards.

Boards, no matter how high-minded, appear enfeebled. Even once-idealistic outfits like OpenAI, with its bizarre hybrid of for-profit and non-profit structures, are unlikely to turn down calls from Microsoft, its biggest investor. And giant platforms like Meta and X have hollowed out their content moderation teams.

The US government has lagged behind those in Australia, Canada and Europe when it comes to regulating Big Tech. The threat AI poses to a pillar of democracy, fair elections, ought to inspire a firmer hand among our elected leaders.