Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU policymakers reach an agreement on the AI Act, published by tlevin on December 15, 2023 on LessWrong.
On December 8, EU policymakers announced an agreement on the AI Act. This post aims to briefly explain the context and implications for the governance of global catastrophic risks from advanced AI. My portfolio on Open Philanthropy's AI Governance and Policy Team includes EU matters (among other jurisdictions), but I am not an expert on EU policy or politics and could be getting some things in this post wrong, so please feel free to correct it or add more context or opinions in the comments!
If you have useful skills, networks, or other resources that you might like to direct toward an impactful implementation of the AI Act, you can indicate your interest in doing so via this short Google form.
Context
The AI Act has been in the works since 2018, and for the last ~8 months, it has been in the "trilogue" stage. The EU Commission, which is roughly analogous to the executive branch (White House or 10 Downing Street), drafted the bill; then, the European Parliament (analogous to the U.S. House of Representatives, with population-proportional membership from each country) and the Council of the EU (analogous to the U.S. Senate, with each member state's national government represented) each adopted their own amended versions. The trilogue negotiations reconcile the three versions (roughly analogous to conference committees in the US Congress).
In my understanding, AI policy folks who are worried about catastrophic risk were hoping that the Act would include regulations on all sufficiently capable GPAI (general-purpose AI) systems, with no exemptions for open-source models (at least for the most important regulations from a safety perspective), and ideally additional restrictions on "very capable foundation models" (those above a certain compute threshold), an idea floated by some negotiators in October. The hoped-for requirements included threat assessments/dangerous capabilities evaluations and cybersecurity measures, overseen by a new AI Office within the Commission, with a lot of the details to be figured out later by that Office and by standard-setting bodies like CEN-CENELEC's JTC-21.
GPAI regulations appeared in danger of being excluded after Mistral, Aleph Alpha, and the national governments of France, Germany, and Italy objected to what they perceived as regulatory overreach and threatened to derail the Act in November. There was also some reporting that the Act would totally exempt open-source models from regulation.
What's in it?
Sabrina Küspert, an AI policy expert working at the EU Commission, summarized the results on some of these questions in a thread on X:
The agreement does indeed include regulations on "general-purpose AI," or GPAI.
There does appear to be a version of the "very capable foundation models" idea in the form of "GPAI models with systemic risks," classified based on capabilities and "reach," which I think means how widely deployed they are.
It looks like GPAI models are presumed to have these capabilities if they're trained on more than 10^25 FLOP, which is one order of magnitude smaller than the October 30 Biden executive order's 10^26 FLOP cutoff for reporting requirements (and which would probably include GPT-4 and maybe Gemini, but no other current models as far as I know).
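For intuition on the threshold arithmetic, here's a rough sketch using the common ~6 × parameters × training-tokens approximation for training compute; the model size and token count below are illustrative assumptions, not reported figures for any real model:

```python
# Rough check of which training runs cross the two compute thresholds.
# Uses the common approximation: training FLOP ~= 6 * parameters * tokens.
# The model figures below are illustrative assumptions, not official numbers.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # AI Act: presumption of systemic risk
US_EO_REPORTING_THRESHOLD = 1e26   # Oct 30 executive order: reporting cutoff

def training_flop(params: float, tokens: float) -> float:
    """Approximate training compute via the ~6ND rule of thumb."""
    return 6 * params * tokens

# Hypothetical frontier run: 1e12 parameters trained on 5e12 tokens.
flop = training_flop(1e12, 5e12)  # = 3e25 FLOP
print(f"{flop:.1e} FLOP")
print("Presumed systemic risk (EU AI Act):", flop >= EU_SYSTEMIC_RISK_THRESHOLD)  # True
print("Reporting required (US EO):", flop >= US_EO_REPORTING_THRESHOLD)           # False
```

A run like this illustrates the band the Act newly covers: above the EU's 10^25 presumption but below the US executive order's 10^26 reporting cutoff.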
Küspert also says "no exemptions," which I interpret to mean "no exemptions to the systemic-risk rules for open-source systems."
Other reporting suggests there are wide exemptions for open-source models, but the requirements kick back in if the models pose systemic risks. However, Yann LeCun is celebrating based on this part of a Washington Post article: "The legislation ultimately included restrictions for foundation models but gave broad exemptions to 'open-source models,' which are developed using code that's freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France's Mistral and Germany's Aleph Alpha, as well as Meta, which relea...