Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #51: Altman's Ambition, published by Zvi on February 21, 2024 on LessWrong.
[Editor's note: I forgot to post this to WordPress on Thursday. I'm posting it here now. Sorry about that.]
Sam Altman is not playing around.
He wants to build new chip factories in the decidedly unsafe and unfriendly UAE. He wants to build up the world's supply of energy so we can run those chips.
What does he say these projects will cost?
Oh, up to seven trillion dollars. Not a typo.
Even scaling back the misunderstandings, this is what ambition looks like.
It is not what safety looks like. It is not what OpenAI's non-profit mission looks like. It is not what it looks like to have concerns about a hardware overhang, and use that as a reason why one must build AGI soon before someone else does. The entire justification for OpenAI's strategy is invalidated by this move.
I have spun off reactions to Gemini Ultra to their own post.
Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Can't go home? Declare victory.
Language Models Don't Offer Mundane Utility. Is AlphaGeometry even AI?
The Third Gemini. Its own post, link goes there. Reactions are mixed.
GPT-4 Real This Time. Do you remember when ChatGPT got memory?
Deepfaketown and Botpocalypse Soon. Bot versus bot, potential for AI hacking.
They Took Our Jobs. The question is, will they also take the replacement jobs?
Get Involved. A new database of surprising AI actions.
Introducing. Several new competitors.
Altman's Ambition. Does he actually seek seven trillion dollars?
Yoto. You only train once. Good luck! I don't know why. Perhaps you'll die.
In Other AI News. Andrej Karpathy leaves OpenAI, self-discover algorithm.
Quiet Speculations. Does every country need their own AI model?
The Quest for Sane Regulation. A standalone post on California's SB 1047.
Washington D.C. Still Does Not Get It. No, we are not confused about this.
Many People are Saying. New Yorkers do not care for AI, want regulations.
China Watch. Not going great over there, one might say.
Roon Watch. If you can.
How to Get Ahead in Advertising. Anthropic super bowl ad.
The Week in Audio. Sam Altman at the World Government Summit.
Rhetorical Innovation. Several excellent new posts, and a protest.
Please Speak Directly Into this Microphone. AI killer drones now?
Aligning a Smarter Than Human Intelligence is Difficult. Oh Goody.
Other People Are Not As Worried About AI Killing Everyone. Timothy Lee.
The Lighter Side. So, what you're saying is…
Language Models Offer Mundane Utility
Washington D.C. government exploring using AI for mundane utility.
Deliver your Pakistani presidential election victory speech while you are in prison.
Terence Tao suggests a possible application for AlphaGeometry.
Help rescue your Factorio save from incompatible mods written in Lua.
Shira Ovide says you should use it to summarize documents, find the exact right word, get a head start on writing something difficult, dull, or unfamiliar, or make cool images you imagine, but not to get information about an image, define words, identify synonyms, get personalized recommendations, or produce a final text. Her position is mostly that this second set of uses is unreliable.
Which is true, and you do not want to exclusively or non-skeptically rely on the outputs, but so what? Still seems highly useful.
Language Models Don't Offer Mundane Utility
AlphaGeometry is not about AI? It seems that what AlphaGeometry mostly does is combine DD+AR, essentially labeling everything you can label and hoping the solution pops out. The linked post claims that doing this without AI is good enough for 21 of the 25 problems it solved, although a commenter notes the paper seems to claim somewhat fewer than that.
If it was indeed 21, and to some extent even if it wasn't...