AISN #12: Policy Proposals from NTIA’s Request for Comment, and Reconsidering Instrumental Convergence.
Policy Proposals from NTIA’s Request for Comment
The National Telecommunications and Information Administration (NTIA) publicly requested comments on AI accountability from academics, think tanks, industry leaders, and concerned citizens. It asked 34 questions and received more than 1,400 responses on how to govern AI for the public benefit. This week, we cover some of the most promising proposals found in the NTIA submissions.
Technical Proposals for Evaluating AI Safety
Several NTIA submissions focused on the technical question of how to evaluate the safety of an AI system. We review two areas of active research: red-teaming and transparency.
Red Teaming: Acting like an Adversary
Several submissions proposed government support for evaluating AIs via red teaming. In this evaluation method, a [...]
---
Outline:
(00:11) Policy Proposals from NTIA’s Request for Comment
(00:48) Technical Proposals for Evaluating AI Safety
(01:04) Red Teaming: Acting like an Adversary
(02:24) Transparency: Understanding AIs From the Inside
(03:51) Governance Proposals for Improving Safety Processes
(04:25) Requiring a License for Frontier AI Systems
(06:29) Unifying Sector-Specific Expertise and General AI Oversight
(07:51) Does Antitrust Prevent Cooperation Between AI Labs?
(08:40) Reconsidering Instrumental Convergence
(10:39) Links
---
First published:
June 27th, 2023
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-12
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.