Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Open Philanthropy's AI governance and policy RFP, published by JulianHazell on July 17, 2024 on The Effective Altruism Forum.
AI has enormous beneficial potential if it is governed well. However, in line with a growing contingent of AI (and other) experts from academia, industry, government, and civil society, we also think that AI systems could soon (e.g. in the next 15 years) cause catastrophic harm. For example, this could happen if malicious human actors deliberately misuse advanced AI systems, or if we lose control of future powerful systems designed to take autonomous actions.[1]
To improve the odds that humanity successfully navigates these risks, we are soliciting short expressions of interest (EOIs) for funding for work across six subject areas, described below.
Strong applications might be funded by Good Ventures (Open Philanthropy's partner organization), or by any of more than 20 (and growing) other philanthropists who have told us they are concerned about these risks and are interested in hearing about grant opportunities we recommend.[2] (You can indicate in your application whether we have permission to share your materials with other potential funders.)
As this is a new initiative, we are uncertain about the volume of interest we will receive. Our goal is to keep this form open indefinitely; however, we may need to temporarily pause accepting EOIs if we lack the staff capacity to properly evaluate them. We will post any updates or changes to the application process on this page.
Anyone is eligible to apply, including those working in academia, nonprofits, industry, or independently.[3] We will evaluate EOIs on a rolling basis. See below for more details.
If you have any questions, please email us. If you have any feedback about this page or program, please let us know (anonymously, if you want) via this short feedback form.
1. Eligible proposal subject areas
We are primarily seeking EOIs in the following subject areas, but will consider exceptional proposals outside of these areas, as long as they are relevant to mitigating catastrophic risks from AI:
Technical AI governance: Developing and vetting technical mechanisms that improve the efficacy or feasibility of AI governance interventions, or answering technical questions that can inform governance decisions. Examples include compute governance, model evaluations, technical safety and security standards for AI developers, cybersecurity for model weights, and privacy-preserving transparency mechanisms.
Policy development: Developing and vetting government policy proposals in enough detail that they can be debated and implemented by policymakers. Examples of policies that seem like they might be valuable (but which typically need more development and debate) include some of those mentioned e.g. here, here, and here.
Frontier company policy: Developing and vetting policies and practices that frontier AI companies could volunteer or be required to implement to reduce risks, such as model evaluations, model scaling "red lines" and "if-then commitments," incident reporting protocols, and third-party audits. See e.g. here, here, and here.
International AI governance: Developing and vetting paths to effective, broad, and multilateral AI governance, and working to improve coordination and cooperation among key state actors. See e.g. here.
Law: Developing and vetting legal frameworks for AI governance, exploring relevant legal issues such as liability and antitrust, identifying concrete legal tools for implementing high-level AI governance solutions, encouraging sound legal drafting of impactful AI policies, and understanding the legal aspects of various AI policy proposals. See e.g. here.
Strategic analysis and threat modeling: Improving society's understanding of the strategic landscape around transformative ...