Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #71: Farewell to Chevron, published by Zvi on July 5, 2024 on LessWrong.
Chevron deference is no more. How will this impact AI regulation?
The obvious answer is it is now much harder for us to 'muddle through via existing laws and regulations until we learn more,' because the court narrowed our affordances to do that. And similarly, if and when Congress does pass bills regulating AI, they are going to need to 'lock in' more decisions and grant more explicit authority, to avoid court challenges. The argument against state regulations is similarly weaker now.
Similar logic also applies outside of AI. I am overall happy about overturning Chevron and I believe it was the right decision, but 'Congress decides to step up and do its job now' is not in the cards. We should be very careful what we have wished for, and perhaps a bit burdened by what has been.
The AI world continues to otherwise be quiet. I am sure you will find other news.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. How will word get out?
4. Language Models Don't Offer Mundane Utility. Ask not what you cannot do.
5. Man in the Arena. Why is Claude Sonnet 3.5 not at the top of the Arena ratings?
6. Fun With Image Generation. A map of your options.
7. Deepfaketown and Botpocalypse Soon. How often do you need to catch them?
8. They Took Our Jobs. The torture of office culture is now available for LLMs.
9. The Art of the Jailbreak. Rather than getting harder, it might be getting easier.
10. Get Involved. NYC space, Vienna happy hour, work with Bengio, evals, 80k hours.
11. Introducing. Mixture of experts becomes mixture of model sizes.
12. In Other AI News. Pixel screenshots as the true opt-in Microsoft Recall.
13. Quiet Speculations. People are hard to impress.
14. The Quest for Sane Regulation. SB 1047 bad faith attacks continue.
15. Chevron Overturned. A nation of laws. Whatever shall we do?
16. The Week in Audio. Carl Shulman on 80k hours and several others.
17. Oh Anthropic. You also get a nondisparagement agreement.
18. Open Weights Are Unsafe and Nothing Can Fix This. Says Lawrence Lessig.
19. Rhetorical Innovation. You are here.
20. Aligning a Smarter Than Human Intelligence is Difficult. Fix your own mistakes?
21. People Are Worried About AI Killing Everyone. The path of increased risks.
22. Other People Are Not As Worried About AI Killing Everyone. Feel no AGI.
23. The Lighter Side. Don't. I said don't.
Language Models Offer Mundane Utility
Guys. Guys.
Ouail Kitouni: if you don't know what claude is im afraid you're not going to get what this ad even is :/
Ben Smith: Claude finds this very confusing.
I get it, because I already get it. But who is the customer here? I would have spent a few extra words to ensure people knew this was an AI and LLM thing?
Anthropic's marketing problem is that no one knows about Claude or Anthropic. They do not even know Claude is a large language model, and many do not know what a large language model is in general.
I realize this is SFO. Claude anticipates only 5%-10% of people will understand what it means, and while some will be intrigued and look it up, most won't. So you are getting very vague brand awareness and targeting the cognoscenti who run the tech companies, I suppose? Claude calls it a 'bold move that reflects confidence.'
Language Models Don't Offer Mundane Utility
David Althus reports that Claude does not work for him because of its refusals around discussions of violence.
Once again, where are all our cool AI games?
Summarize everything your users did yesterday?
Steve Krouse: As a product owner it'd be nice to have an llm summary of everything my users did yesterday. Calling out cool success stories or troublesome error states I should reach out to debug. Has anyone tried such a thing? I am th...