Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.
In this episode, we talk about:
- The basic case for working on existential risk from AI
- How to begin figuring out what to do to reduce the risks
- Threat models for the risks of advanced AI
- 'Theories of victory' for how the world mitigates the risks
- 'Intermediate goals' in AI governance
- What useful (and less useful) research looks like for reducing AI x-risk
- Practical advice for usefully contributing to efforts to reduce existential risk from AI
- Resources for getting started and finding job openings
Key links:
- Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023)
- Rethink Priorities' survey on intermediate goals in AI governance
- The Rethink Priorities newsletter
- The Rethink Priorities tab on the Effective Altruism Forum
- Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier
- Strategic Perspectives on Long-term AI Governance by Matthijs Maas
- Michael's posts on the Effective Altruism Forum (under the username "MichaelA")
- The 80,000 Hours job board