AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety.
Challenges of Reinforcement Learning from Human Feedback
If you’ve used ChatGPT, you might’ve noticed the “thumbs up” and “thumbs down” buttons next to each of its answers. Pressing these buttons provides data that OpenAI uses to improve their models through a technique called reinforcement learning from human feedback (RLHF).
RLHF is popular for teaching models about human preferences, but it faces fundamental limitations. Different people have different preferences, but instead of modeling the diversity of human values, RLHF trains models to earn the approval of whoever happens to give feedback. Furthermore, as AI systems become more capable, they can learn to deceive human evaluators into giving undue approval.
Here we discuss a new [...]
---
Outline:
(00:13) Challenges of Reinforcement Learning from Human Feedback
(05:26) Microsoft’s Security Breach
(06:59) Conceptual Research on AI Safety
(09:25) Links
---
First published:
August 8th, 2023
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-18
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.