Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Introducing the AI Alignment Forum (FAQ), published by Oliver Habryka, Ben Pace, Raymond Arnold, and Jim Babcock on the AI Alignment Forum.
After a few months of open beta, the AI Alignment Forum is ready to launch. It is a new website built by the team behind LessWrong 2.0, to help create a new hub for technical AI Alignment research and discussion. This is an in-progress FAQ about the new Forum.
What are the most important highlights about the AI Alignment Forum in this FAQ?
The vision for the forum is of a single online hub for alignment researchers to have conversations about all ideas in the field, while also providing a better onboarding experience for people getting involved with alignment research than currently exists.
There are three new sequences focusing on some of the major approaches to alignment, which will update daily over the coming 6-8 weeks:
Embedded Agency, written by Scott Garrabrant and Abram Demski of MIRI
Iterated Amplification, written and compiled by Paul Christiano of OpenAI
Value Learning, written and compiled by Rohin Shah of CHAI
For non-members and future researchers, the place to interact with the content is LessWrong.com, where all Forum content will be crossposted.
The site will continue to be improved in the long-term, as the team comes to better understand the needs and goals of researchers.
What is the purpose of the AI Alignment Forum?
Our first priority is obviously to avert catastrophic outcomes from unaligned Artificial Intelligence. We think the best way to achieve this at the margin is to build an online hub for AI Alignment research, one that both allows the existing top researchers in the field to talk about cutting-edge ideas and approaches, and supports the onboarding of new researchers and contributors.
We think that to solve the AI Alignment problem, the field of AI Alignment research needs to be able to effectively coordinate a large number of researchers from a large number of organisations with significantly different approaches. Two decades ago we might have invested heavily in the development of a conference or a journal, but with the advent of the internet, an online forum, with its ability to support much faster and more comprehensive forms of peer review, seemed to us a more promising way to help the field form a good set of standards and methodologies.
Who is the AI Alignment Forum for?
There exists an interconnected community of Alignment researchers in industry, academia, and elsewhere, who have spent many years thinking carefully about a variety of approaches to alignment. Such research receives institutional support from organisations including FHI, CHAI, DeepMind, OpenAI, MIRI, and Open Philanthropy. The Forum membership currently consists of researchers at these organisations and their respective collaborators.
The Forum is also intended to be a way for people not connected to these institutions, either professionally or socially, to interact with and contribute to cutting-edge research. There have been many such individuals on LessWrong, and it is currently the best place for them to start contributing, receive feedback, and skill up in this domain.
There are about 50-100 members of the Forum, who will be able to post and comment there; this group will not grow in size quickly.
Why do we need another website for alignment research?
There are many places online that host research on the alignment problem, such as the OpenAI blog, the DeepMind Safety Research blog, the Intelligent Agent Foundations Forum, AI-Alignment.com, and of course LessWrong.com.
But none of these spaces are set up to host discussion amongst the 50-100 people working in the field. And those that do host discussion have unclear assumptions about what’s common knowledge.
What type of content is ap...