Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #61: Meta Trouble, published by Zvi on May 5, 2024 on LessWrong.
Note by habryka: This post failed to import automatically from RSS for some reason, so it's a week late. Sorry for the hassle.
The week's big news was supposed to be Meta's release of two versions of Llama-3.
Everyone was impressed. These were definitely strong models.
Investors felt differently. After yesterday's earnings showed strong revenue but also heavy investment in AI, they took Meta stock down 15%.
DeepMind and Anthropic also shipped, but in their cases it was multiple papers on AI alignment and threat mitigation. They get their own sections.
We also did identify someone who wants to do what people claim the worried want to do, who is indeed reasonably identified as a 'doomer.'
Because the universe has a sense of humor, that person's name is Tucker Carlson.
Also we have a robot dog with a flamethrower.
Table of Contents
Previous post: On Llama-3 and Dwarkesh Patel's Podcast with Zuckerberg.
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Take the XML. Leave the hypnosis.
4. Language Models Don't Offer Mundane Utility. I have to praise you. It's my job.
5. Llama We Doing This Again. Investors are having none of it.
6. Fun With Image Generation. Everything is fun if you are William Shatner.
7. Deepfaketown and Botpocalypse Soon. How to protect your image model?
8. They Took Our Jobs. Well, they took some particular jobs.
9. Get Involved. OMB, DeepMind and CivAI are hiring.
10. Introducing. A robot dog with a flamethrower. You in?
11. In Other AI News. Mission first. Lots of other things after.
12. Quiet Speculations. Will it work? And if so, when?
13. Rhetorical Innovation. Sadly predictable.
14. Wouldn't You Prefer a Nice Game of Chess. Game theory in action.
15. The Battle of the Board. Reproducing an exchange on it for posterity.
16. New Anthropic Papers. Sleeper agents, detected and undetected.
17. New DeepMind Papers. Problems with agents, problems with manipulation.
18. Aligning a Smarter Than Human Intelligence is Difficult. Listen to the prompt.
19. People Are Worried About AI Killing Everyone. Tucker Carlson. I know.
20. Other People Are Not As Worried About AI Killing Everyone. Roon.
21. The Lighter Side. Click here.
Language Models Offer Mundane Utility
I too love XML for this and realize I keep forgetting to use it. Even among humans, every time I see or use it I think 'this is great, this is exceptionally clear.'
Hamel Husain: At first when I saw xml for Claude I was like "WTF Why XML". Now I LOVE xml so much, can't prompt without it.
Never going back.
Example from the docs:
User: Hey Claude. Here is an email: <email>{{EMAIL}}</email>. Make this email more {{ADJECTIVE}}. Write the new version in <{{ADJECTIVE}}_email> XML tags.
Assistant: <{{ADJECTIVE}}_email>
Also notice the "prefill" for the answer (a nice thing to use w/xml).
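The pattern above is easy to sketch in plain Python: wrap each input in XML tags so the model can't confuse instructions with content, and prefill the assistant turn with the opening tag so the answer lands inside it. A minimal illustration, assuming hypothetical tag names (<email>, <improved_email>) rather than any official template:

```python
def build_prompt(email: str, adjective: str) -> tuple[str, str]:
    """Return (user_message, assistant_prefill) using XML-delimited inputs.

    The tag names here are illustrative; the model only ever sees the
    final prompt string, so any consistent tags work.
    """
    user = (
        f"Here is an email: <email>{email}</email>. "
        f"Make this email more {adjective}. "
        f"Write the new version in <improved_email> XML tags."
    )
    # Prefilling the assistant turn with the opening tag nudges the model
    # to answer inside the tags, which makes the output trivial to parse.
    prefill = "<improved_email>"
    return user, prefill

user_msg, prefill = build_prompt("Hi team, meeting moved to 3pm.", "formal")
print(user_msg)
print(prefill)
```

The payoff is on the parsing side: the reply can be extracted with a single search for the closing tag instead of heuristics about where the rewritten email starts and ends.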
Imbue's CEO suggests that agents are not 'empowering' to individuals or 'democratizing' unless the individuals can code their own agent. The problem is of course that almost everyone wants to do zero setup work, let alone write code. People do not even want to toggle a handful of settings, and you want them creating their own agents?
And of course, when we say 'set up your own agent' what we actually mean is 'type into a chat box what you want and someone else's agent creates your agent.' Not only is this not empowering to individuals, it seems like a good way to start disempowering humanity in general.
Claude can hypnotize a willing user. [EDIT: It has been pointed out to me that I misinterpreted this, and Janus was not actually hypnotized. I apologize for the error. I do still strongly believe that Claude could do it to a willing user, but we no longer have the example.]
The variable names it chose are… somethi...