Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025), published by Linda Linsefors on August 23, 2024 on The AI Alignment Forum.
Do you have AI Safety research ideas that you would like to work on with others? Is there a project you want to do and you want help finding a team? AI Safety Camp could be the solution for you!
Summary
AI Safety Camp Virtual is a 3-month long online research program from January to April 2025, where participants form teams to work on pre-selected projects. We want you to suggest the projects!
If you have an AI Safety project idea and some research experience, apply to be a Research Lead. If accepted, we offer some assistance to develop your idea into a plan suitable for AI Safety Camp. When project plans are ready, we open up team member applications. You get to review applications for your team, and select who joins as a team member. From there, it's your job to guide work on your project.
Who is qualified?
We require that you have some previous research experience. If you are at least 1 year into a PhD, have completed an AI Safety research program (such as a previous AI Safety Camp, PIBBSS, MATS, or similar), or have done a research internship with an AI Safety org, then you are qualified already. Other research experience can count, too.
More senior researchers are of course also welcome, as long as you think our format of leading an online team inquiring into your research questions suits you and your research.
We require that all Research Leads are active participants in their projects and spend at least 10h/week on AISC.
Apply here
If you are unsure, or have any questions, you are welcome to:
Book a call with Robert
Send an email to Robert
Choosing project idea(s)
AI Safety Camp is about ensuring future AIs are either reasonably safe or not built at all.
We welcome many types of projects, including projects aimed at stopping or pausing AI development, aligning AI, deconfusion research, or anything else you think will help make the world safer from AI. If you like, you can read more of our perspectives on AI safety, or look at past projects.
If you already have an idea for a project you would like to lead, that's great. Apply with that one!
However, you don't need to come up with an original idea. What matters is that you understand the idea you want to work on, and why. If you base your proposal on someone else's idea, make sure to cite them.
1. For ideas on stopping harmful AI, see here and/or email Remmelt.
2. For some mech-interp ideas, see here.
3. We don't have specific recommendations for where to find other types of project ideas, so just take inspiration wherever you find it.
You can submit as many project proposals as you want. However, you are only allowed to lead one project.
Use this template to describe each of your project proposals. We want one document per proposal.
We'll help you improve your project
As part of the Research Lead application process, we'll help you improve your project. The organiser whose ideas match best with yours will work with you to create the best version of your project.
We will also ask for assistance from previous Research Leads, and up to a handful of other trusted people, to give you additional feedback.
Timeline
Research Lead applications
September 22 (Sunday): Application deadline for Research Leads.
October 20 (Sunday): Deadline for refined proposals.
Team member applications
October 25 (Friday): Accepted proposals are posted on the AISC website. Application to join teams open.
November 17 (Sunday): Application to join teams closes.
December 22 (Sunday): Deadline for Research Leads to choose their team.
Program
Jan 11 - 12: Opening weekend.
Jan 13 - Apr 25: Research is happening.
Teams meet weekly, and plan their own work hours.
April 26 - 27 (preliminary dates):...