Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 in AI predictions, published by jessicata on January 1, 2024 on LessWrong.
Lots of people made AI predictions in 2023. Here I compile a subset. When I see an AI prediction, I have a habit of setting an email reminder for its target date, so that when it resolves I can point out its accuracy or inaccuracy. I have compiled most of the email reminders from 2023 below, in chronological order of target date (earliest to latest). I'm planning to make these posts yearly, checking in on predictions whose dates have expired. Feel free to add more references to predictions made in 2023 in the comments.
In some cases, people refer to the predictions of others in a way that could be taken to imply agreement. That interpretation isn't certain, but I'm including these for the sake of completeness.
March 2024
the gears to ascension: "Hard problem of alignment is going to hit us like a train in 3 to 12 months at the same time some specific capabilities breakthroughs people have been working on for the entire history of ML finally start working now that they have a weak AGI to apply to, and suddenly critch's stuff becomes super duper important to understand."
October 2024
John Pressman: "6-12 month prediction (80%): The alignment problem as the core of AI X-Risk will become a historical artifact as it's largely solved or on track to being solved in the eyes of most parties and arguments increasingly become about competition and misuse. Few switch sides."
July 2025
Jessica Taylor: "Wouldn't be surprised if this exact prompt got solved, but probably something nearby that's easy for humans won't be solved?"
The prompt: "Find a sequence of words that is: - 20 words long - contains exactly 2 repetitions of the same word twice in a row - contains exactly 2 repetitions of the same word thrice in a row"
(note: thread contains variations and a harder problem.)
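The prompt is ambiguous about how "repetitions" are counted (a thrice-run also contains twice-runs). Under one natural reading, which counts maximal runs of identical adjacent words, a checker can be sketched as follows; the function name and the reading itself are my own, not from the original thread:

```python
from itertools import groupby

def check(seq):
    """One reading of the prompt: exactly 20 words, with exactly two
    maximal runs of the same word twice in a row, and exactly two
    maximal runs of the same word thrice in a row."""
    if len(seq) != 20:
        return False
    # Lengths of maximal runs of identical adjacent words.
    run_lengths = [len(list(group)) for _, group in groupby(seq)]
    return run_lengths.count(2) == 2 and run_lengths.count(3) == 2

# A sequence satisfying this reading: two double-runs, two triple-runs,
# and ten distinct filler words (2 + 2 + 3 + 3 + 10 = 20 words).
example = (["a", "a"] + ["b", "b"] + ["c", "c", "c"] + ["d", "d", "d"]
           + list("efghijklmn"))
print(check(example))  # True under this interpretation
```

Whether a model's answer "counts" still depends on which interpretation the grader adopts, which is part of what makes the prediction tricky to resolve.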
November 2026
Max Tegmark: "It's crazy how the time left to weak AGI has plummeted from 20 years to 3 in just 18 months on http://metaculus.com. So you better stop calling AGI a 'long-term' possibility, or someone might call you a dinosaur stuck in the past"
The Metaculus question.
Siqi Chen: "what it means is within 3 years you will either be dead or have a god as a servant".
Elon Musk: "If you say 'smarter than the smartest human at anything'? It may not quite smarter than all humans - or machine-augmented humans, because, you know, we have computers and stuff, so there's a higher bar... but if you mean, it can write a novel as good as JK Rowling, or discover new physics, invent new technology? I would say we are less than 3 years from that point."
December 2026
Jai Bhavnani: "Baseline expectation: 90%+ of smart contracts will get exploited in the next 3 years. These exploits will be found by AIs. We need solutions."
October 2028
Stuart Russell: "Everyone has gone from 30-50 years, to 3-5 years."
November 2028
Tammy: "when i say 'we have approximately between 0 and 5 years' people keep thinking that i'm saying 'we have approximately 5 years'. we do not have approximately 5 years. i fucking wish. we have approximately between 0 and 5 years. we could actually all die of AI next month."
December 2028
Tyler John: "Yep. If discontinuous leaps in AI capabilities are 3-5 years away we should probably start to think a little bit about how to prepare for that. The EU AI Act has been in development for 5 years and still isn't passed yet. We just can't take the wait and see approach any longer."
Mustafa Suleyman: "[Current models have already] ... arguably passed the Turing Test.
I've proposed a test which involves [AIs] going off and taking $100,000 investment, and over the course of three months, try to set about creating a new product, researching the market, seeing what consumers might like, gen...