Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work, published by Rohin Shah on August 20, 2024 on LessWrong.
We wanted to share a recap of our recent outputs with the AF community. Below, we fill in some details about what we have been working on, what motivated us to do it, and how we thought about its importance. We hope that this will help people build off things we have done and see how their work fits with ours.
Who are we?
We're the main team at Google DeepMind working on technical approaches to existential risk from AI systems. Since our last post, we've evolved into the AGI Safety & Alignment team, which we think of as comprising AGI Alignment (with subteams such as mechanistic interpretability and scalable oversight) and Frontier Safety (working on the Frontier Safety Framework, including developing and running dangerous capability evaluations). We've also been growing since our last post: by 39% last year, and by 37% so far this year. The leadership team is Anca Dragan, Rohin Shah, Allan Dafoe, and Dave Orr, with Shane Legg as executive sponsor.
We're part of the overall AI Safety and Alignment org led by Anca, which also includes Gemini Safety (focusing on safety training for the current Gemini models) and Voices of All in Alignment, which focuses on alignment techniques for value and viewpoint pluralism.
What have we been up to?
It's been a while since our last update, so below we list out some key work published in 2023 and the first part of 2024, grouped by topic / sub-team.
Our big bets for the past 1.5 years have been 1) amplified oversight, to enable the right learning signal for aligning models so that they don't pose catastrophic risks, 2) frontier safety, to analyze whether models are capable of posing catastrophic risks in the first place, and 3) (mechanistic) interpretability, as a potential enabler for both frontier safety and alignment goals. Beyond these bets, we experimented with promising areas and ideas that help us identify new bets we should make.
Frontier Safety
The mission of the Frontier Safety team is to ensure safety from extreme harms by anticipating, evaluating, and helping Google prepare for powerful capabilities in frontier models. While the focus so far has been primarily around misuse threat models, we are also working on misalignment threat models.
FSF
We recently published our Frontier Safety Framework, which, in broad strokes, follows the approach of responsible capability scaling, similar to Anthropic's Responsible Scaling Policy and OpenAI's Preparedness Framework. The key difference is that the FSF applies to Google: there are many different frontier LLM deployments across Google, rather than just a single chatbot and API (this in turn affects stakeholder engagement, policy implementation, mitigation plans, etc.).
We're excited that our small team led the Google-wide strategy in this space, and demonstrated that responsible capability scaling can work for large tech companies in addition to small startups.
A key area of the FSF we're focusing on as we pilot the Framework is how to map between the critical capability levels (CCLs) and the mitigations we would take. This is high on our list of priorities as we iterate on future versions.
Some commentary (e.g. here) also highlighted (accurately) that the FSF doesn't include commitments. This is because the science is in its early stages and best practices will need to evolve. But ultimately, what we care about is whether the work is actually done. In practice, we did run and report dangerous capability evaluations for Gemini 1.5 that we think are sufficient to rule out extreme risk with high confidence.
Dangerous Capability Evaluations
Our paper on Evaluating Frontier Models for Dangerous Capabilities presents the broadest suite of dangerous capability evaluations published...