Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MATS Summer 2023 Postmortem, published by Rocket on December 2, 2023 on LessWrong.
The ML Alignment & Theory Scholars program (MATS, formerly SERI MATS) is an education and research mentorship program for emerging AI safety researchers. This summer, we held the fourth iteration of the MATS program, in which 60 scholars received mentorship from 15 research mentors. In this post, we explain the elements of the program, lay out some of the thinking behind them, and evaluate our impact.
Summary
Key details about the Summer 2023 Program:
Educational attainment of MATS scholars:
30% of scholars are students.
88% have at least a Bachelor's degree.
10% are in a Master's program.
10% are in a PhD program.
13% have a PhD.
If not for MATS, scholars might have worked at a tech company (41%), upskilled independently (46%), or conducted research independently over the summer (50%). (Note: this was a multiple-choice response.)
Key takeaways from our impact evaluation:
MATS scholars are highly likely to recommend MATS to a friend or colleague. Average likelihood: 8.9/10.
Mentors rated their enthusiasm for their scholars to continue with their research at 7/10 or greater for 94% of scholars.
MATS scholars rate their mentors highly. Average rating: 8.0/10.
61% of scholars report that at least half the value of MATS came from their mentor.
After MATS, scholars reported facing fewer obstacles to a successful alignment career than they did at the start of the program.
Most scholars (75%) still reported their publication record as an obstacle to a successful alignment career at the conclusion of the program.
Evals/demos and mechanistic interpretability together accounted for a large proportion of final projects, reflecting the cohort's research interests.
Scholars self-reported improvements to their research ability on average:
Slight increases to the breadth of their AI safety knowledge (+1.75 on a 10-point scale over the program).
Moderate strengthening of technical skills compared to counterfactual summer (7.2/10, where 10/10 is "significant improvement compared to counterfactual summer").
Moderate improvements to ability to independently iterate on research direction (7.0/10, where 10/10 is "significant improvement") and ability to develop a theory of change for their research (5.9/10, where 10/10 is "substantially developed").
On average, scholars reported making 4.5 professional connections (std. dev. = 6.2) and meeting 5 potential research collaborators (std. dev. = 6.8).
MATS scholars are likely to recommend Scholar Support, our research/productivity coaching service. Average response: 7.9/10.
49 of the 60 scholars in the Research Phase met with a Scholar Support Specialist at least once.
The average scholar who met with Scholar Support at least once spent 3.4 hours meeting with Scholar Support throughout the program.
The average and median scholar report that they value the Scholar Support they received at $3705 and $750, respectively.
The average scholar reports gaining 22 productive hours over the summer due to Scholar Support.
Key changes we plan to make to MATS for the Winter 2023-24 cohort:
Filtering better during the application process;
Pivoting Scholar Support to additionally focus on research management;
Providing additional forms of support to scholars, particularly technical support and professional development.
Note that it is too early to evaluate any career benefits that MATS provided the most recent cohort; a comprehensive post assessing career outcomes for MATS alumni 6-12 months after their program experience is forthcoming.
Theory of Change
MATS helps expand the talent pipeline for AI safety research by equipping scholars to work on AI safety at existing organizations, found new organizations, or pursue independent research. To this end, MATS provides fu...