Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'The AI Dilemma: Growth vs Existential Risk': An Extension for EAs and a Summary for Non-economists, published by TomHoulden on April 22, 2024 on The Effective Altruism Forum.
In this post I first summarize a recent paper by Chad Jones focused on the decision to deploy AI that has the potential to increase both economic growth and existential risk (section 1). Jones offers some simple insights which I think could be interesting for effective altruists and may be influential for how policy-makers think about trade-offs related to existential risk. I then consider some extensions which make the problem more realistic, but more complicated (section 2).
These extensions include the possibility of pausing deployment of advanced AI to work on AI safety, as well as allowing for the possibility of economic growth outside of the deployment of AI (I show this weakens the case for accepting high levels of risk from AI).
At times, I have slightly adjusted notation used by Jones where I thought it would be helpful to further simplify some of the key points.[1]
I. Summary
AI may boost economic growth to a degree never seen before. Davidson (2021), for example, suggests a tentative 30% probability of greater than 30% annual growth lasting at least ten years before 2100. As many in the effective altruism community are acutely aware, advanced AI may also pose risks, perhaps even a risk of human extinction.
The decision problem that Jones introduces is: given the potential for unusually high economic growth from AI, how much existential risk should we be willing to tolerate to deploy this AI? In his simple framework, Jones demonstrates that this tolerance is mainly determined by three factors: the growth benefits that AI may bring, the threat that AI poses, and the parameter that underlies how utility is influenced by consumption levels.
Here, I will talk in the language of a 'social planner' who applies some discount to future welfare; a discount rate in the range of 2%-4% seems to be roughly in line with the rates applied in the US and UK,[2] though longtermists may generally choose to calibrate with a lower discount rate (eg.
In the rest of this post, when I say 'it is optimal to...' or something to this effect, this is just shorthand for: 'for a social planner who gets to make decisions about AI deployment with discount rate X, it is optimal to...'.
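To make the discount-rate choice concrete, here is a minimal sketch (my own illustration, not from Jones's paper) of how strongly the planner's discount rate shapes the present value of future welfare. The function name and the horizon of 500 years are assumptions chosen purely for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's model): present value
# of a constant utility stream of one util per year under exponential
# discounting at rate `discount_rate`.
def present_value(discount_rate, horizon_years):
    """Sum of discounted utility, one util per year, starting at t = 0."""
    beta = 1 / (1 + discount_rate)
    return sum(beta ** t for t in range(horizon_years))

# A 2% planner values the next 500 years at roughly twice what a 4%
# planner does (~51 vs ~26 utils), so the calibration matters a lot.
print(round(present_value(0.02, 500), 1))
print(round(present_value(0.04, 500), 1))
```

Lower discount rates put far more weight on the distant future, which is why longtermist calibrations tend toward larger risk aversion about permanent losses.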
The Basic Economic Framework
Utility functions (Bounded and unbounded)
A utility function is an expression which assigns some value to particular states of the world for, let's say, individual people. Here, Jones (and macroeconomics more generally) assumes that utility for an individual is just a function of their consumption. The so-called 'constant relative risk aversion' (CRRA) utility function assumes utility is given by

u(c) = ū + c^(1−γ)/(1−γ)

where c is consumption, and the parameters γ (>0) and ū will be helpful to calibrate this utility function for real-world applications: γ adjusts the curvature and ū scales utility up or down.[3] There is a key difference between these two cases (more specifically, when γ>1 vs γ<1): for γ>1 utility is bounded above, while for γ<1 it is not. A utility function is bounded above if, as consumption increases to infinity, utility rises toward an upper bound that isn't infinite. A utility function is unbounded above if, as consumption increases to infinity, utility does too.
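The boundedness distinction can be checked numerically. The sketch below is my own illustration: it evaluates the CRRA form u(c) = ū + c^(1−γ)/(1−γ) with arbitrary parameter values, not Jones's calibration.

```python
# CRRA utility as described above; parameter values below are arbitrary
# choices for illustration, not a calibration from the paper.
def crra_utility(c, gamma, ubar=0.0):
    """u(c) = ubar + c**(1 - gamma) / (1 - gamma), for gamma != 1."""
    return ubar + c ** (1 - gamma) / (1 - gamma)

# gamma > 1: the consumption term is negative and shrinks toward zero,
# so utility approaches the upper bound ubar as c grows.
print(crra_utility(10, gamma=2.0))     # -0.1
print(crra_utility(1000, gamma=2.0))   # -0.001

# gamma < 1: the consumption term grows without bound.
print(crra_utility(10, gamma=0.5))     # ~6.32
print(crra_utility(1000, gamma=0.5))   # ~63.25
```

Multiplying consumption by 100 barely moves utility when γ=2, but increases it tenfold when γ=0.5, which is exactly the bounded-vs-unbounded contrast described above.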
The distinction between bounded and unbounded utility functions becomes particularly important when considering the growth benefits of AI, since prolonged periods of high growth can move us quite far along the consumption axis of these utility functions. In the most extreme case, Jones considers what happens to our willingness to deploy AI when that AI is guaranteed to deliver an economic singularity (infinite growth in finite time).
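To see why the singularity case hinges on boundedness, the toy calculation below (my own assumed numbers, not a result from the paper) shows that with γ>1 even infinite consumption delivers at most ū, so the utility gain from deploying AI is capped.

```python
# Toy illustration with assumed numbers: bounded utility (gamma > 1) caps
# the value of even a guaranteed economic singularity (c -> infinity).
def crra_utility(c, gamma, ubar):
    return ubar + c ** (1 - gamma) / (1 - gamma)

ubar, gamma = 5.0, 2.0
u_today = crra_utility(1.0, gamma, ubar)   # utility at current consumption c = 1
u_singularity = ubar                       # limit of u(c) as c -> infinity
print(u_singularity - u_today)             # 1.0 -- a finite utility gain
```

Because the upside is finite under bounded utility, the planner's willingness to accept existential risk in exchange for it is also finite; with unbounded utility (γ<1), the singularity's upside is infinite and the calculus changes dramatically.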
In this case we can see that if uti...