Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'The AI Dilemma: Growth vs Existential Risk': An Extension for EAs and a Summary for Non-economists, published by TomHoulden on April 22, 2024 on The Effective Altruism Forum.
In this post I first summarize a recent paper by Chad Jones on the decision to deploy AI that has the potential to increase both economic growth and existential risk (section 1). Jones offers some simple insights that I think could be interesting for effective altruists and may be influential in how policy-makers think about trade-offs related to existential risk. I then consider some extensions that make the problem more realistic, but also more complicated (section 2).
These extensions include the possibility of pausing deployment of advanced AI to work on AI safety, as well as allowing for the possibility of economic growth outside of the deployment of AI (I show this weakens the case for accepting high levels of risk from AI).
At times, I have slightly adjusted notation used by Jones where I thought it would be helpful to further simplify some of the key points.[1]
I. Summary
AI may boost economic growth to a degree never seen before. Davidson (2021), for example, suggests a tentative 30% probability of greater than 30% annual growth lasting at least ten years before 2100. As many in the effective altruism community are acutely aware, advanced AI may also pose risks, perhaps even a risk of human extinction.
The decision problem that Jones introduces is: given the potential for unusually high economic growth from AI, how much existential risk should we be willing to tolerate to deploy this AI? In his simple framework, Jones demonstrates that this tolerance is mainly determined by three factors: the growth benefits that AI may bring, the threat that AI poses, and the curvature parameter that governs how utility depends on consumption levels.
Here, I will talk in the language of a 'social planner' who applies some discount to future welfare; a discount rate in the range of 2%-4% seems to be roughly in line with the rates applied in the US and UK,[2] though longtermists may generally choose to calibrate with a lower discount rate (e.g.
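To make the trade-off concrete, here is a minimal sketch of the planner's calculation in the log-utility special case. This is an illustration of the structure of the problem, not Jones's own calibration: the extinction-utility normalization to zero, the initial consumption level, and all the growth and discount numbers below are assumptions I have chosen for the example.

```python
import math

def lifetime_log_utility(c0, g, rho):
    """Discounted lifetime utility under log utility and steady growth:
    integral_0^inf e^{-rho t} * ln(c0 * e^{g t}) dt = ln(c0)/rho + g/rho^2.
    Assumes rho > 0 so the integral converges."""
    return math.log(c0) / rho + g / rho**2

def risk_cutoff(c0, g_baseline, g_ai, rho):
    """Largest one-time extinction probability delta the planner accepts
    when deploying AI: solves (1 - delta) * V(g_ai) = V(g_baseline),
    with the utility of extinction normalized to zero (an assumption
    made for this sketch, and one the answer is sensitive to)."""
    v_ai = lifetime_log_utility(c0, g_ai, rho)
    v_base = lifetime_log_utility(c0, g_baseline, rho)
    return 1 - v_base / v_ai

# Illustrative (assumed) numbers: 2% baseline growth, 10% AI-driven
# growth, a 2% discount rate, and units chosen so ln(c0) = 1.
print(round(risk_cutoff(math.e, 0.02, 0.10, 0.02), 3))  # → 0.667
```

Even this toy version shows the qualitative point: the acceptable risk rises with the growth gap between the AI and no-AI paths and falls as the discount rate or baseline welfare level rises, and it depends heavily on how utility is specified (with more curvature than log utility, the cutoff shrinks sharply).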