Clearer Thinking with Spencer Greenberg
Concrete actions anyone can take to help improve AI safety (with Kat Woods)
Read the full transcript here.
Why should we consider slowing AI development? Could we slow down AI development even if we wanted to? What is a "minimum viable x-risk"? What are some of the more plausible, less Hollywood-esque risks from AI? Even if an AI could destroy us all, why would it want to do so? What are some analogous cases where we slowed the development of a specific technology? And how did they turn out? What are some reasonable, feasible regulations that could be implemented to slow AI development? If an AI becomes smarter than humans, wouldn't it also be wiser than humans and therefore more likely to know what we need and want and less likely to destroy us? Is it easier to control a more intelligent AI or a less intelligent one? Why do we struggle so much to define utopia? What can the average person do to encourage safe and ethical development of AI?
Kat Woods is a serial charity entrepreneur who's founded four effective altruist charities. She runs Nonlinear, an AI safety charity. Prior to starting Nonlinear, she co-founded Charity Entrepreneurship, a charity incubator that has launched dozens of charities in global poverty and animal rights. Prior to that, she co-founded Charity Science Health, which helped vaccinate 200,000+ children in India and, according to GiveWell's estimates at the time, was comparable in cost-effectiveness to the Against Malaria Foundation (AMF). You can follow her on Twitter at @kat__woods; you can read her EA writing here and here; and you can read her personal blog here.