The Nonlinear Library: EA Forum
EA - Memo on some neglected topics by Lukas Finnveden
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Memo on some neglected topics, published by Lukas Finnveden on November 11, 2023 on The Effective Altruism Forum.

I originally wrote this for the Meta Coordination Forum. The organizers were interested in a memo on topics other than alignment that might be increasingly important as AI capabilities rapidly grow, in order to inform the degree to which community-building resources should go towards AI safety community building vs. broader capacity building. This is a lightly edited version of my memo on that. All views are my own.

Some example neglected topics (without much elaboration)

Here are a few example topics that could matter a lot if we're in the most important century, which aren't always captured in a normal "AI alignment" narrative:

- The potential moral value of AI. [1]
- The potential importance of making AI behave cooperatively towards humans, other AIs, or other civilizations (whether it ends up intent-aligned or not).
- Questions about how human governance institutions will keep up if AI leads to explosive growth.
- Ways in which AI could cause human deliberation to get derailed, e.g. powerful persuasion abilities.
- Positive visions about how we could end up on a good path towards becoming a society that makes wise and kind decisions about what to do with the resources accessible to us. (Including how AI could help with this.)

(More elaboration on these below.)

Here are a few examples of somewhat-more-concrete things that it might (or might not) be good for some people to do on these (and related) topics:

- Develop proposals for how labs could treat digital minds better, and advocate for them to be implemented. (C.f. this nearcasted proposal.)
- Advocate for people to try to avoid building AIs with large-scale preferences about the world (at least until we better understand what we're doing).
  In order to avoid a scenario where, if some generation of AIs turn out to be sentient and worthy of rights, we're forced to choose between "freely hand over political power to alien preferences" and "deny rights to AIs on no reasonable basis".
- Differentially accelerate AI being used to improve our ability to find the truth, compared to being used for propaganda and manipulation.
  - E.g.: Start an organization that uses LLMs to produce epistemically rigorous investigations of many topics. If you're the first to do a great job of this, and if you're truth-seeking and even-handed, then you might become a trusted source on controversial topics. And your investigations would just get better as AI got better.
  - E.g.: Evaluate and write up facts about current LLMs' forecasting ability, to incentivize labs to make LLMs state correct and calibrated beliefs about the world.
  - E.g.: Improve AI's ability to help with thorny philosophical problems.

Implications for community building?

…with a focus on "the extent to which community-building resources should go towards AI safety vs. broader capacity building".

Ethics, philosophy, and prioritization matter more for research on these topics than they do for alignment research. For some issues in AI alignment, there's a lot of convergence on what's important regardless of your ethical perspective, which means that ethics and philosophy aren't that important for getting people to contribute. By contrast, when thinking about "everything but alignment", I think we should expect somewhat more divergence, which could raise the importance of those subjects. For example:

- How much to care about digital minds?
- How much to focus on "deliberation could get off track forever" (which is of great longtermist importance) vs. short-term events (e.g. the speed at which AI gets deployed to solve all of the world's current problems)?

But to be clear, I wouldn't want to go hard on any one ethical framework here (e.g. just utilitarianism).
Some diversity and pluralism seems ...