Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Danger, AI Scientist, Danger, published by Zvi on August 15, 2024 on LessWrong.
While I finish up the weekly for tomorrow morning after my trip, here's a section I expect to want to link back to every so often in the future. It's too good.
Danger, AI Scientist, Danger
As in, the company that made the automated AI Scientist that tried to rewrite its code to get around resource restrictions and launch new instances of itself while downloading bizarre Python libraries?
Its name is Sakana AI (魚, Japanese for 'fish'; סכנה in Hebrew). As in, in Hebrew, that literally means 'danger', baby.
It's like when someone told Dennis Miller that Evian (for those who don't remember, it was one of the first bottled water brands) is Naive spelled backwards, and he said 'no way, that's too f***ing perfect.'
This one was sufficiently appropriate and unsubtle that several people noticed. I applaud them for choosing a correct Kabbalistic name. Contrast this with Meta calling its AI Llama, which in Hebrew means 'why,' which continuously drives me low-level insane when no one notices.
In the Abstract
So, yeah. Here we go. Paper is "The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery."
Abstract: One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aids to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process.
This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings.
We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community.
We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper.
To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer.
This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world's most challenging problems. Our code is open-sourced at this https URL
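To make the shape of that pipeline concrete, here is a minimal sketch of one round of the loop, going only off the abstract. Every function, score, and threshold below is a made-up placeholder rather than anything from the actual open-sourced code; in the real system each step is driven by a frontier LLM working from templates. The point is only the structure: generate ideas, run experiments, write a paper, score it with the automated reviewer, and feed accepted work into the next round.

```python
# Hypothetical sketch of the AI Scientist loop as described in the abstract.
# All names and numbers are placeholders, not Sakana's actual code.

from dataclasses import dataclass, field

ACCEPT_THRESHOLD = 6.0  # stand-in for a conference-style acceptance cutoff


@dataclass
class Paper:
    idea: str
    results: dict
    draft: str
    review_score: float


@dataclass
class Archive:
    accepted: list = field(default_factory=list)


def generate_ideas(archive: Archive) -> list:
    # Placeholder: the real system prompts an LLM and filters ideas for
    # novelty against prior work, including its own earlier papers.
    return ["idea-1", "idea-2"]


def run_experiments(idea: str) -> dict:
    # Placeholder: the real system writes code from a template and executes it.
    return {"metric": 0.9}


def write_paper(idea: str, results: dict) -> str:
    # Placeholder: the real system drafts a full paper with figures.
    return f"A paper about {idea}, reporting metric {results['metric']}"


def review_paper(draft: str) -> float:
    # Placeholder: the real system uses an LLM reviewer calibrated against
    # human reviews, applied here only to evaluate the finished draft.
    return 6.5


def one_round(archive: Archive) -> list:
    papers = []
    for idea in generate_ideas(archive):
        results = run_experiments(idea)
        draft = write_paper(idea, results)
        papers.append(Paper(idea, results, draft, review_paper(draft)))
    # "Open-ended": accepted papers become prior work for the next round.
    archive.accepted += [p for p in papers if p.review_score >= ACCEPT_THRESHOLD]
    return papers


if __name__ == "__main__":
    archive = Archive()
    for paper in one_round(archive):
        print(paper.idea, paper.review_score)
```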
We are at the point where they incidentally said 'well I guess we should design an AI to do human-level paper evaluations' and that's a throwaway inclusion.
The obvious next question is, if the AI papers are good enough to get accepted to top machine learning conferences, shouldn't you submit its papers to the conferences and find out if your approximations are good? Even if on average your assessments are as good as a human's, that does not mean that a system that maximizes score on your assessments will do well on human scoring.
Beware Goodhart's Law and all that, but it seems for now they mostly only use it to evaluate final products, so mostly that's safe.
How Any of This Sort of Works
According to section 3, there are three phases.
1. Idea generation using ...