Making AI Welfare an EA priority requires justifications that have not been given, published by JWS on July 8, 2024 on The Effective Altruism Forum.
Author's Note: Written in a slightly combative tone[1] as I have found the arguments for the proposition this week insufficiently compelling for the debate statement at hand. Also, I'm very rushed getting this out in time, so with more time I would probably have focused more on the ideas and added more nuance and caveats. I apologise in advance for my shortcomings, and hope you can take the good parts and overlook the bad.
Parsing the Debate statement correctly means that supporting it entails supporting radical changes to EA
The statement for AI Welfare Debate Week (hereafter AWDW) is "AI welfare should be an EA priority". However, expanding this with the clarifications provided by the Forum team leads to the expanded statement: "5%+ of unrestricted EA talent and funding should be focused on the potential well-being of future artificial intelligence systems".
Furthermore, I'm interpreting this as a "right now course of action" claim and not an "in an ideal world wouldn't it be nice if" claim. A second interpretation I had about AWDW was that posters were meant to argue directly for the proposition instead of providing information to help voters make up their minds. I think, in either case, though especially the first, the argument for the proposition has been severely underargued.
To get even more concrete, I estimate the following:
As a rough estimate for the number of EAs, I take the number of GWWC Pledgers even if they'd consider themselves 'EA-Adjacent'.[2] At my last check, the lifetime members page stated there were 8,983 members, so 5% of that would be ~449 EAs working specifically or primarily on the potential well-being of future artificial intelligence systems.
For funding, I indexed on Tyler Maule's 2023 estimates of EA funding. That stood at $980.8M in estimated funding, so 5% of that would be ~$49.04M in yearly funding spent on AI Welfare.
This is obviously a quick and dirty method, but given the time constraints I hope it's in the rough order of magnitude of the claims that we're talking about.
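The arithmetic behind those two figures can be sketched in a few lines of Python. The inputs are the post's own rough numbers (GWWC lifetime pledgers and Tyler Maule's 2023 funding estimate), so this is just a transcription of the back-of-the-envelope calculation, not an independent estimate:

```python
# Quick-and-dirty sketch of the 5% figures above.
# Inputs are the post's own rough numbers:
#   ~8,983 GWWC lifetime pledgers (proxy for "number of EAs")
#   ~$980.8M estimated 2023 EA funding (Tyler Maule's estimate)
gwwc_pledgers = 8_983
ea_funding_2023_usd = 980.8e6
priority_share = 0.05  # "5%+ of unrestricted EA talent and funding"

talent_on_ai_welfare = priority_share * gwwc_pledgers
funding_on_ai_welfare = priority_share * ea_funding_2023_usd

print(f"~{talent_on_ai_welfare:.0f} people")        # ~449 people
print(f"~${funding_on_ai_welfare / 1e6:.2f}M/year")  # ~$49.04M/year
```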
Furthermore, I think the amount of money and talent spent on AI Welfare in EA is already quite low, so unless one thinks there can be an influx of new talent and donors to EA specifically to work on AI Welfare, this re-prioritisation must necessarily come at the cost of other causes that EA cares about.[3]
These changes can only be justified if the case to do so is strongly justified
This counterfactual impact on other EA causes cannot, therefore, be separated from arguments for AI Welfare. In my opinion, one of the Forum's best ever posts is Holly Elmore's We are in triage every second of every day. Engaging with Effective Altruism should help make us all more deeply realise that the counterfactual costs of our actions can be large.
To me, making such a dramatic and sudden shift to EA priorities would require strong justifications, especially given the likely high counterfactual costs of the change.[4]
As an example, Tyler estimated that 2023 EA funding for Animal Welfare was around ~$54M. In a world where AI Welfare was made a priority per the statement's definition, it would likely gain some resources at the expense of Animal Welfare, and plausibly become a higher EA priority by money and talent.
This is a result I would prima facie think that many or most EAs would not support, and so I wonder if all of those who voted strongly or relatively in favour of AWDW's proposition fully grasped the practical implications of their view.
Most posts on AI Welfare Debate Week have failed to make this case
The burden of proof for prioritising AI Welfare requires stronger arguments...