Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Effective Altruism Foundation: Plans for 2020, published by Jonas Vollmer on the AI Alignment Forum.
Summary
Our mission. We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks).
Our plans for 2020
Research. We aim to investigate the questions listed in our research agenda, “Cooperation, Conflict, and Transformative Artificial Intelligence,” as well as questions in other areas.
Research community. We plan to host research workshops, make grants to support work relevant to our priorities, present our work to other research groups, and advise people who are interested in reducing s-risks in their careers and research priorities.
Rebranding. We plan to rebrand from “Effective Altruism Foundation” to a name that better fits our new strategy.
2019 review
Research. In 2019, we mainly worked on s-risks resulting from conflicts involving advanced AI systems.
Research workshops. We ran research workshops on s-risks from AI in Berlin, the San Francisco Bay Area, and near London. The participants gave positive feedback.
Location. We moved to London (Primrose Hill) to better attract and retain staff and to collaborate with other researchers in London and Oxford.
Fundraising target. We aim to raise $185,000 (stretch goal: $700,000). If you prioritize reducing s-risks, there is a strong case for supporting us. Make a donation.
About us
We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks). (Read more about us and our values.)
We are a London-based nonprofit. Previously, we were located in Switzerland (Basel) and Germany (Berlin). Before shifting our focus to s-risks from artificial intelligence (AI), we implemented projects in global health and development, farm animal welfare, wild animal welfare, and effective altruism (EA) community building and fundraising.
Background on our strategy
For an overview of our strategic thinking, see the following pieces:
Gloor: Cause prioritization for downside-focused value systems
Althaus & Gloor: Reducing Risks of Astronomical Suffering: A Neglected Priority
Gloor: Altruists Should Prioritize Artificial Intelligence (somewhat dated)
The best work on reducing s-risks cuts across a broad range of academic disciplines and interventions. Our recent research agenda, for instance, draws from computer science, economics, political science, and philosophy. That means we must (a) work across many different disciplines and (b) find people who can bridge disciplinary boundaries. The longtermism community brings together people with diverse backgrounds who understand our prioritization and share it to some extent. For this reason, we focus on making reducing s-risks a well-established priority in that community.
Strategic goals
Inspired by GiveWell’s self-evaluations, we are tracking our progress with a set of deliberately vague performance questions:
Building long-term capacity. Have we made progress towards becoming a research group that will have an outsized impact on the research landscape and relevant actors shaping the future?
Research progress. Has our work resulted in research progress that helps reduce s-risks (both in-house and elsewhere)?
Research dissemination. Have we communicated our research to our target audience, and has the target audience engaged with our ideas?
Organizational health. Are we a healthy organization with an effective board, staff in appropriate roles, appropriate evaluation of our work, reliable policies and procedures, adequate financial reserves and reporting, and so forth?
Our team will answer these questions at the end of 2020.
Plans for 2020
Research
Note: We currently carry out some of our research as part of the Foundational Research Institute (FRI). We plan to consolidate our activities r...