Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RTFB: On the New Proposed CAIP AI Bill, published by Zvi on April 10, 2024 on LessWrong.
A New Bill Offer Has Arrived
Center for AI Policy proposes a concrete actual model bill for us to look at.
Here was their announcement:
WASHINGTON - April 9, 2024 - To ensure a future where artificial intelligence (AI) is safe for society, the Center for AI Policy (CAIP) today announced its proposal for the "Responsible Advanced Artificial Intelligence Act of 2024." This sweeping model legislation establishes a comprehensive framework for regulating advanced AI systems, championing public safety, and fostering technological innovation with a strong sense of ethical responsibility.
"This model legislation is creating a safety net for the digital age," said Jason Green-Lowe, Executive Director of CAIP, "to ensure that exciting advancements in AI are not overwhelmed by the risks they pose."
The "Responsible Advanced Artificial Intelligence Act of 2024" is model legislation that contains provisions for requiring that AI be developed safely, as well as requirements on permitting, hardware monitoring, civil liability reform, the formation of a dedicated federal government office, and instructions for emergency powers.
The key provisions of the model legislation include:
1. Establishment of the Frontier Artificial Intelligence Systems Administration to regulate AI systems posing potential risks.
2. Definitions of critical terms such as "frontier AI system," "general-purpose AI," and risk classification levels.
3. Provisions for hardware monitoring, analysis, and reporting of AI systems.
4. Civil and criminal liability measures for non-compliance or misuse of AI systems.
5. Emergency powers for the administration to address imminent AI threats.
6. Whistleblower protection measures for reporting concerns or violations.
The model legislation intends to provide a regulatory framework for the responsible development and deployment of advanced AI systems, mitigating potential risks to public safety, national security, and ethical considerations.
"As leading AI developers have acknowledged, private AI companies lack the right incentives to address this risk fully," said Jason Green-Lowe, Executive Director of CAIP. "Therefore, for advanced AI development to be safe, federal legislation must be passed to monitor and regulate the use of the modern capabilities of frontier AI and, where necessary, the government must be prepared to intervene rapidly in an AI-related emergency."
Green-Lowe envisions a world where "AI is safe enough that we can enjoy its benefits without undermining humanity's future." The model legislation will mitigate potential risks while fostering an environment where technological innovation can flourish without compromising national security, public safety, or ethical standards. "CAIP is committed to collaborating with responsible stakeholders to develop effective legislation that governs the development and deployment of advanced AI systems. Our door is open."
I discovered this via Cato's Will Duffield, whose statement was:
Will Duffield: I know these AI folks are pretty new to policy, but this proposal is an outlandish, unprecedented, and abjectly unconstitutional system of prior restraint.
To which my response was essentially:
I bet he's from Cato or Reason.
Yep, Cato.
Sir, this is a Wendy's.
Wolf.
We need people who will warn us when bills are unconstitutional, unworkable, unreasonable or simply deeply unwise, and who are well calibrated in their judgment and their speech on these questions.
I want someone who will tell me 'Bill 1001 is unconstitutional and would get laughed out of court, Bill 1002 has questionable constitutional muster in practice and is unconstitutional in theory, we would throw out Bill 1003 but it will stand up these days because SCOTUS thinks the commerc...