AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human Level?
Top Scientists Warn of Extinction Risks from AI
Last week, hundreds of AI scientists and notable public figures signed a public statement on AI risks written by the Center for AI Safety. The statement reads:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement was signed by a broad and historic coalition of AI experts — along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists — establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems.
The international community is [...]
---
Outline:
(00:10) Top Scientists Warn of Extinction Risks from AI
(03:35) Competitive Pressures in AI Development
(07:22) When Will AI Reach Human Level?
(12:47) Links
---
First published:
June 6th, 2023
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-9
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.