AISN #11: An Overview of Catastrophic AI Risks.
An Overview of Catastrophic AI Risks
Global leaders are concerned that artificial intelligence could pose catastrophic risks. 42% of CEOs polled at the Yale CEO Summit agreed that AI could destroy humanity within five to ten years, and the Secretary General of the United Nations said we “must take these warnings seriously.” Amid all these alarming polls and public statements, a simple question is worth asking: why exactly is AI such a risk?
The Center for AI Safety has released a new paper to provide a clear and comprehensive answer to this question. We detail the precise risks posed by AI, the structural dynamics making these problems so difficult to solve, and the technical, social, and political responses required to overcome this [...]
---
Outline:
(00:08) An Overview of Catastrophic AI Risks
(00:56) Malicious actors can use AIs to cause harm.
(02:18) Racing towards an AI disaster.
(04:05) Safety should be a goal, not a constraint.
(05:46) The challenge of AI control.
(07:53) Positive visions for the future of AI.
(09:02) Links
---
First published:
June 22nd, 2023
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-11
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.