Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding case: AI Safety Camp, published by Remmelt on December 12, 2023 on LessWrong.
Project summary
AI Safety Camp is a program with a 5-year track record of enabling people to find careers in AI Safety.
We support up-and-coming researchers outside the Bay Area and London hubs.
We are out of funding. To make the 10th edition happen, fund our stipends and salaries.
What are this project's goals and how will you achieve them?
AI Safety Camp is a program for inquiring into how to ensure future AI is safe, and for trying concrete work on that in a team.
For the 9th edition of AI Safety Camp we opened applications for 29 projects.
We are the first to host a special area supporting "Pause AI" work. With funding, we can scale from 4 projects for restricting corporate AI development to 15 projects next edition.
We are excited about our new research lead format, since it combines:
Hands-on guidance: We guide research leads (RLs) to carefully consider and scope their project. Research leads in turn onboard teammates and guide their teammates through the process of doing new research.
Streamlined applications: Team applications were the most time-intensive portion of running AI Safety Camp. Reviewers were often unsure how to evaluate an applicant's fit for a project that required specific skills and understanding. RLs usually have a clear sense of who they would want to work with for three months. So we instead guide RLs to prepare project-specific questions and interview their potential teammates.
Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who could at some point become well-recognized researchers. The virtual format also cuts overhead - instead of sinking funds into venues and plane tickets, the money goes directly to funding people to focus on their work in AI safety.
Flexible hours: Participants can work remotely from their timezone alongside their degree or day job - to test their fit for an AI Safety career.
How will this funding be used?
We are fundraising to pay for:
Salaries for the organisers for the current AISC
Funding future camps (see budget section)
Whether we run the tenth edition or put AISC on hold indefinitely depends on your donation.
Last June, we had to freeze a year's worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers.
AISC previously received grants paid with FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends - but nothing for salaries, and nothing for future AISCs.
If we have enough money, we might also restart the in-person version of AISC. This decision will also depend on an ongoing external evaluation of AISC, which is, among other things, comparing the impact of the virtual and in-person AISCs.
By default we'll decide what to prioritise with the funding we get. But if you want to have a say, we can discuss that. We can earmark your money for whatever you want.
Potential budgets for various versions of AISC
These are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we'll do something in between.
Virtual AISC - Budget version
Software etc: $2K
Organiser salaries, 2 ppl, 4 months: $56K
Stipends for participants: $0
Total: $58K
In the Budget version, the organisers do the minimum job required to get the program started, but provide no continuous support to AISC teams during their projects and have no time to evaluate and improve future versions of the program.
Salaries are calculated based on $7K per person per month.
Virtual AISC - Normal version
Software etc: $2K
Organiser salaries, 3 ...