
Artificial Information

The 2024 presidential election season is upon us, and the media coverage of it is off to a bumpy start. Ron DeSantis’ glitch-filled campaign announcement on Twitter, presided over by that platform’s glitchy owner, made clear that the cycle set to play out over the next 18 months will, like the two that preceded it, depart from norms we can now safely relegate to the dustbin of history.

What worries me more than the season’s social media snafus, and even more than its crude culture war battles, is the destabilizing and misleading role that Artificial Intelligence is likely to play in the long, muddy march to the White House. Fake news and flagrant lies were a sad inevitability of the process even before the ground-shaking advancements in AI capabilities over recent months. 

The threat of vastly more sophisticated disinformation campaigns is already here. And given the blitzkrieg pace of AI improvements, it’s bound to grow. 

Two recent reports, by NewsGuard and ShadowDragon, make this clear. NewsGuard found 125 websites with content written mostly or entirely by bots. They include purported news and medical information websites rife with inaccuracies (in a basic overview of bipolar disorder, for instance) that, says NewsGuard Co-CEO Steven Brill, “further trust erosion” in news platforms among the public. The ShadowDragon research focused on AI-generated content on Instagram and other social media platforms, as well as on Amazon.

Some of these lies and hoaxes are produced entirely by bots, others by people using chatbot technology to advance their own nefarious purposes. Given the already precarious state of factual news dissemination and the fraught political landscape (in the US and abroad), this further facilitation of fake news delivery requires immediate action.

The problem is that even the experts at the vanguard of AI, themselves worried about the potential threats it poses, don’t know what form that action should take. 

“I think if this technology goes wrong, it can go quite wrong,” Sam Altman, the CEO of OpenAI, recently said in a Senate subcommittee hearing. “It’s one of my areas of greatest concern. The more general ability of these models to manipulate, to persuade, to provide one-on-one interactive disinformation.”

But his recommendations were vague. “For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities,” he said. 

Even leaving aside legislators’ lack of cutting-edge tech savvy, this proposal only hints at the difficulty posed by the breakneck advancement of those capabilities. Who would determine the threshold at which to regulate? How quickly, and how often, would that bar be raised?

“I think it’s complicated,” Altman said at last month’s Sohn Conference in a session about AI’s potential dangers to children, who are more vulnerable than adults to mistaking bots for trustworthy friends. (Fifty-two percent of children between 11 and 17 feel as “comfortable” interacting with a bot or influencer as with a friend, according to a report by Spain’s Fundación Mapfre.)

Complicated is an understatement. And AI regulation already in the works reveals the murkiness of the endeavor. Next month, New York City will begin enforcing rules on the role that AI can play in hiring and promotion decisions. Among other measures, companies will have to alert candidates that they’re being screened by automated systems, and those systems will have to be checked annually for bias by independent auditors. Even these rather modest rules have stoked criticism: public interest advocates say they’re riddled with loopholes, while business groups say the “nascent” state of the technology, and of the practice of auditing it, renders them impractical.

The difficulty in pinning down such a fresh technology in the midst of a dramatic growth spurt must not deter attempts at controlling it. On the contrary, the inevitability of improved technology enabling improved deception makes the task at hand all the more urgent. 

I’m heartened to see local, state and federal governments taking action, however imperfect it may be. The EU, so often ahead of the curve when it comes to such regulation, recently published a pioneering, 108-page draft of rules meant to temper the effects of the technology. California, New Jersey, New York, Vermont and Washington, DC, are, like New York City, working to limit AI’s impact on jobs. And the issue arose during the recent Group of 7 Summit in Japan.

While legislation is hammered out, I’d strongly urge influencers, journalists and, especially, AI CEOs themselves to continue ringing the alarm bells. 

A statement released last week by top AI executives, saying that mitigating the damage of AI should be a “global priority alongside other societal-scale risks such as pandemics and nuclear war,” will encourage a form of regulation absolutely critical to keeping AI in check: self-regulation by, and awareness among, individual human users of how easily bots can mislead and deceive them.
