Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #73: Openly Evil AI, published by Zvi on July 18, 2024 on LessWrong.
What do you call a clause explicitly saying that you waive the right to whistleblower compensation, and that you need to get permission before sharing information with government regulators like the SEC?
I have many answers.
I also know that OpenAI, having f***ed around, seems poised to find out, because that is the claim made by whistleblowers to the SEC. Given that the SEC fines you for merely not making an explicit exception to your NDA for whistleblowers, what will they do once aware of explicit clauses going the other way?
(Unless, of course, the complaint is factually wrong, but that seems unlikely.)
We also have rather a lot of tech people coming out in support of Trump. I go into the reasons why, which I do think are worth considering. There is a mix of explanations, and at least one very good reason.
Then I also got suckered into responding to a few new (well, not really new, but renewed) disingenuous attacks on SB 1047. The entire strategy is to be loud and hyperbolic, especially on Twitter, and either hallucinate or fabricate a different bill with different consequences to attack, or simply misrepresent how the law works, then use that to create the illusion the bill is disliked or harmful.
Few others respond to correct such claims, and I constantly worry that the strategy might actually work. But that does not mean you, my reader who already knows, need to read all that.
Also a bunch of fun smaller developments. Karpathy is in the AI education business.
Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Fight the insurance company.
4. Language Models Don't Offer Mundane Utility. Have you tried using it?
5. Clauding Along. Not that many people are switching over.
6. Fun With Image Generation. Amazon Music and K-Pop start to embrace AI.
7. Deepfaketown and Botpocalypse Soon. FoxVox, turn Fox into Vox or Vox into Fox.
8. They Took Our Jobs. Take away one haggling job, create another haggling job.
9. Get Involved. OpenPhil request for proposals. Job openings elsewhere.
10. Introducing. Karpathy goes into AI education.
11. In Other AI News. OpenAI's Q* is now named Strawberry. Is it happening?
12. Denying the Future. Projections of the future that assume AI will never improve again.
13. Quiet Speculations. How to think about stages of AI capabilities.
14. The Quest for Sane Regulations. EU, UK, The Public.
15. The Other Quest Regarding Regulations. Many in tech embrace The Donald.
16. SB 1047 Opposition Watch (1). I'm sorry. You don't have to read this.
17. SB 1047 Opposition Watch (2). I'm sorry. You don't have to read this.
18. Open Weights are Unsafe and Nothing Can Fix This. What to do about it?
19. The Week in Audio. Joe Rogan talked to Sam Altman and I'd missed it.
20. Rhetorical Innovation. Supervillains, oh no.
21. Oh Anthropic. More details available, things not as bad as they look.
22. Openly Evil AI. Other things, in other places, on the other hand, look worse.
23. Aligning a Smarter Than Human Intelligence is Difficult. Noble attempts.
24. People Are Worried About AI Killing Everyone. Scott Adams? Kind of?
25. Other People Are Not As Worried About AI Killing Everyone. All glory to it.
26. The Lighter Side. A different kind of mental gymnastics.
Language Models Offer Mundane Utility
Let Claude write your prompts for you. Sully, quoted below, suggests using the Claude prompt improver.
Sully: convinced that we are all really bad at writing prompts
I'm personally never writing prompts by hand again
Claude is just too good - managed to feed it evals and it just optimized for me
Probably a crude version of dspy but insane how much prompting can make a difference.
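For the curious, here is a minimal sketch of the "feed Claude your evals and let it optimize the prompt" loop Sully is describing. This is not Sully's actual setup and not the Claude prompt improver tool itself; it assumes the anthropic Python SDK with an ANTHROPIC_API_KEY set, and the eval set, scoring rule, and number of rounds are illustrative placeholders.

```python
# Sketch: let Claude rewrite a prompt, score candidates on a tiny eval set, keep the best.
# Assumes the anthropic Python SDK; EVALS and the scoring rule are hypothetical.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

EVALS = [
    {"input": "2+2", "expected": "4"},
    {"input": "What is the capital of France?", "expected": "Paris"},
]

def run_prompt(prompt: str, item: dict) -> str:
    """Run a candidate prompt against one eval item and return Claude's reply."""
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=200,
        messages=[{"role": "user", "content": f"{prompt}\n\n{item['input']}"}],
    )
    return msg.content[0].text

def score(prompt: str) -> float:
    """Fraction of eval items whose expected answer shows up in the reply."""
    hits = sum(item["expected"] in run_prompt(prompt, item) for item in EVALS)
    return hits / len(EVALS)

def improve(prompt: str) -> str:
    """Ask Claude to rewrite the prompt, showing it the evals it will be scored on."""
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                "Rewrite this prompt so a model answers these evals correctly. "
                f"Return only the new prompt.\n\nPrompt:\n{prompt}\n\nEvals:\n{EVALS}"
            ),
        }],
    )
    return msg.content[0].text.strip()

best = "Answer the question."
for _ in range(3):  # a few rounds of rewrite-and-score
    candidate = improve(best)
    if score(candidate) >= score(best):
        best = candidate
print(best)
```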
Predict who will be the shooting victim. A machine learning model did this for citizens of Chicago (a ...