Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Long-Term Future Fund: November 2020 grant recommendations, published by Habryka on the AI Alignment Forum.
Introduction
As part of its November 2020 grant application round, the Long-Term Future Fund supported ten projects, totaling up to $355,000, that we expect to positively influence the long-term trajectory of civilization. We also made an additional off-cycle grant of up to $150,000.
Compared to our previous round, we received three times as many applications overall and twice as many high-quality applications (as measured by average fund manager score). To understand why, we've added an optional referral question to our application form. Our current guess is that the increase is largely a result of better outreach, particularly through the 80,000 Hours job board and newsletter.
To help improve the fund's transparency, we've written a document describing our overall process for making grants. We'll also be running an 'Ask Me Anything' session on the Effective Altruism Forum from December 4th to 7th, where we'll answer any questions people might have about the fund.
We received feedback this round that our payout reports might discourage individuals from applying if they don't want their grant described in detail. We encourage applicants in this position to apply anyway. We are very sympathetic to circumstances in which a grantee might be uncomfortable with a detailed public description of their grant. We run all of our grant reports by grantees and think carefully about what information to include, aiming to be as transparent as we can while still respecting grantees' preferences. If considerations around reporting make it difficult for us to fund an application, we can refer it to private donors who don't publish payout reports. We might also be able to make an anonymous grant, as we did in this round.
Highlights
Our grants include:
An up-to-$150,000 grant to Richard Ngo for a PhD at the University of Cambridge on understanding the analogy between the development of human intelligence and artificial general intelligence (AGI). This grant is part of our efforts to reduce potential risks from transformative artificial intelligence. Richard has a strong background for this work: he previously worked at DeepMind, completed a Bachelor's degree in computer science and philosophy at the University of Oxford, and earned a Master's degree with distinction in computer science at the University of Cambridge. He has also published impressive work in this area. Human-AGI analogies form the foundation of many researchers' current beliefs about future AI systems; further clarifying them is likely to bring major benefits to the field of AI safety research.
A $3,579 grant to Maximilian Negele to investigate the historical longevity of institutions, in order to better understand the feasibility of setting up charitable foundations that last for hundreds of years. This grant is part of our efforts to set up institutions that protect future generations. Existing work on patient philanthropy relies on the ability to transfer wealth and resources into the future; understanding how likely institutions are to succeed at this will be hugely informative for deciding whether to spend long-termist resources now or later.
Grant recipients
See below for a list of grantees' names, grant amounts, and project descriptions. Most of the grants have been accepted, but in some cases, the final grant amount is still uncertain.
Grants made during our most recent application round:
Anonymous (up to $40,000): Supporting a PhD student's career in technical AI safety.
David Bernard (up to $55,000): Testing how the accuracy of impact forecasting varies with the timeframe of prediction.
Lee Sharkey ($44,668): Researching methods to continuously monitor and analyse artificial agents...