
The European AI Act, Explained

The act has been met with the stubborn question of whether regulation strangles innovation.

The news this week that the EU Commission had hit Apple with a nearly $2 billion fine for unfairly stifling competition among music streaming services was the latest instance of Europe setting the tone of tech regulation worldwide. A bigger story on that front is developing in Brussels as lawmakers fine-tune legislation that would put Europe at the international vanguard of regulating artificial intelligence.

The EU announced these regulations in December 2023, and optimistic politicians have said they could be adopted as soon as May. Here’s what to know.

The Problem 

It’s been 15 months since OpenAI’s ChatGPT became the talk of Silicon Valley and the tech media. The ease with which users could employ its potent generative text capabilities made it one of the biggest, fastest tech launches ever.

But industry elation mixed with serious concern over the staggering speed of AI innovation and its potential consequences. These fears touched on, among other things, AI’s ability to replace human workers at an unprecedented level, spread misinformation at a previously unfathomable scale and pose new, grave threats to people’s privacy.

The Proposed Fix 

The measures announced in December make Europe the first continent to take a full-throated approach to AI regulation. The guiding principle was to have “humanity’s interest” at the heart of the proposals, which took a two-tier approach to curbing AI’s harmful excesses: transparency requirements for all general-purpose AI models and stronger requirements for powerful models with systemic impacts. 

These included strict restrictions on the use of facial recognition tech outside of narrowly defined law enforcement purposes; on the exploitation of people made vulnerable by age, disability or poverty; on predictive policing; and on “social scoring,” or measuring someone’s general upstandingness.

Violators would face fines of up to €35 million or 7% of global revenue, whichever is higher.

EU lawmakers stressed during the announcement that all of these goals could be achieved without imposing an “excessive burden” on companies. 

The Background

The EU has long been the most aggressive large political body when it comes to tech regulation, a position solidified when the General Data Protection Regulation took effect in 2018 and reinforced by a steady run of massive fines since.

The Debate

The AI Act can’t be faulted for its intention of putting citizens before corporations. And many of its measures are admirable, especially the constraints on facial recognition, the curbs on predictive policing straight out of a sci-fi movie, and the rigorous tagging of AI-generated content, which could be a bulwark against misinformation.

But the persistent question of whether regulation strangles innovation has put the act in the crosshairs of tech executives and engineers, as well as of European leaders who bemoan the continent’s laggard status on innovation compared to the US and China.

The saying goes that America innovates, China emulates and Europe regulates. Data on AI innovation bears this out. Fifty-four percent of the makers of AI models in 2022 were American, according to The Economist, compared to just 3% from Germany, the top European country. Private investment in AI in America totaled $249 billion between 2013 and 2022, while Germany attracted only $7 billion.

Critics have decried the overreach of the AI Act, saying that rather than curtailing nefarious uses of the technology, certain measures impede the technology itself. Bolstering their argument is a finding by Stanford that not a single one of the 10 leading AI models came close to meeting standards that the European Parliament proposed last June. 

French President Emmanuel Macron echoed these sentiments late last year, saying, “We [the EU] will regulate things we no longer produce or invent. This is never a good idea.”

All of this speaks to broader concerns about what experts dub the European Valley of Death, the gap between world-class research and development and successfully bringing the innovations it produces to market. This gap between the EU and the US is estimated to total $100 billion annually.

Another Solution

People in favor of protecting citizens from the pernicious byproducts of AI, including many working in the AI sector, are urging EU regulators to support the region’s players by regulating foundational tech innovation with a lighter hand and focusing instead on privacy, legal and misinformation protections.

The Brussels Effect, the laudable tendency for regulations created in the EU to be emulated in other parts of the world, may repeat itself in the AI sphere. But the intense competition between nations in the AI race may make that less likely than it was for measures concerning, for instance, data protection.

Experts also contend that Europe, which is unlikely to become an AI power player anytime soon, ought to be a cheerleader for the technology by encouraging major companies to adopt it.

A taller order is mastering the tricky art of regulatory timing. Regulate too early and innovation is stifled; regulate too late and devastating consequences may well await.
