Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Proposed California SB 1047, published by Zvi on February 14, 2024 on LessWrong.
California Senator Scott Wiener of San Francisco introduces SB 1047 to regulate AI. I have put up a market on how likely it is to become law.
"If Congress at some point is able to pass a strong pro-innovation, pro-safety AI law, I'll be the first to cheer that, but I'm not holding my breath," Wiener said in an interview. "We need to get ahead of this so we maintain public trust in AI."
Congress is certainly highly dysfunctional. I am still generally against California trying to act like it is the federal government, even when the cause is good, but I understand.
Can California effectively impose its will here?
On the biggest players, for now, presumably yes.
In the longer run, when things get actively dangerous, then my presumption is no.
There is a potential trap here: if we put our rules in a place where someone with enough upside can ignore them, and we then never pass anything in Congress, we end up with no effective rules at all.
So what does it do, according to the bill's author?
California Senator Scott Wiener: SB 1047 does a few things:
Establishes clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. These standards apply only to the largest models, not startups.
Establishes CalCompute, a public AI cloud compute cluster. CalCompute will be a resource for researchers, startups, & community groups to fuel innovation in CA, bring diverse perspectives to bear on AI development, & secure our continued dominance in AI.
Prevents price discrimination & anticompetitive behavior.
Institutes know-your-customer requirements.
Protects whistleblowers at large AI companies.
@geoffreyhinton called SB 1047 "a very sensible approach" to balancing these needs. Leaders representing a broad swathe of the AI community have expressed support.
People are rightfully concerned that the immense power of AI models could present serious risks. For these models to succeed the way we need them to, users must trust that AI models are safe and aligned w/ core values. Fulfilling basic safety duties is a good place to start.
With AI, we have the opportunity to apply the hard lessons learned over the past two decades. Allowing social media to grow unchecked without first understanding the risks has had disastrous consequences, and we should take reasonable precautions this time around.
As usual, RTFC (Read the Card, or here the bill) applies.
Close Reading of the Bill
Section 1 names the bill.
Section 2 says California is winning in AI (see this song), AI has great potential but could do harm. A missed opportunity to mention existential risks.
Section 3 (22602) offers definitions. I have some notes.
Usual concerns with the broad definition of AI.
Odd that 'a model autonomously engaging in a sustained sequence of unsafe behavior' only counts as an 'AI safety incident' if it is not 'at the request of a user.' If a user requests that, aren't you supposed to ensure the model doesn't do it? Sounds to me like a safety incident.
Covered model is defined primarily via compute; I'm not sure why this isn't called a 'foundation' model. I like the secondary extension clause: "The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations OR The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability."
Critical harm is either mass casualties or $500 million in damage, or comparable.
Full shutdown means full s...