AISN #8: Why AI could go rogue, how to screen for AI risks, and grants for research on democratic governance of AI.
Yoshua Bengio makes the case for rogue AI
AI systems pose a variety of risks. Renowned AI scientist Yoshua Bengio recently argued for one particularly concerning possibility: that advanced AI agents could pursue goals in conflict with human values.
Human intelligence has accomplished impressive feats, from flying to the moon to building nuclear weapons. But Bengio argues that across a range of important intellectual, economic, and social activities, human intelligence could be matched and even surpassed by AI.
How would advanced AIs change our world? Many technologies, such as toasters and calculators, are tools that humans use to accomplish our goals. AIs are different, Bengio says. [...]
---
Outline:
(00:11) Yoshua Bengio makes the case for rogue AI
(05:11) How to screen AIs for extreme risks
(09:12) Funding for Work on Democratic Inputs to AI
(10:43) Links
---
First published:
May 30th, 2023
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-8
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.