Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable oversight as a quantitative rather than qualitative problem, published by Buck Shlegeris on July 6, 2024 on The AI Alignment Forum.
[Many of these ideas were developed in conversation with Ryan Greenblatt and Ansh Radhakrishnan; a lot of this isn't original but I haven't seen it written up]
A lot of the time when people talk about scalable oversight, which I'll define for the moment as "the problem of developing techniques that allow you to assess whether an action taken or proposed by an AI is good (oversee it), in a way that works even if the AI is very smart and knowledgeable", the main application they describe is the following situation: you have an extremely powerful AI that's taking actions that might have subtle properties that humans are unable to understand, and you want to be able to oversee them (e.g. to train it, or to determine whether the actions will have catastrophic consequences).
I think that this is indeed part of the value proposition for scalable oversight. But in my opinion, it's missing the more central application of these techniques: situations where the AIs are taking many actions, where humans would eventually understand any particular action if they spent a whole lot of time investigating it, but where that amount of time taken to oversee any action is prohibitively large.
In such cases, the point of scalable oversight is to allow humans to oversee actions at a much lower cost in terms of human time--to push out the Pareto frontier of oversight quality vs cost.
Of course, when you're thinking about arbitrarily subtle actions, this distinction doesn't matter: there are actions subtle enough that overseeing them would take more than the whole budget of the AI project, and for those actions these definitions are equivalent.
But I think that this distinction will be very important in practice, and I often think that people make reasoning errors when thinking about scalable oversight due to neglecting it. The basic reason to think that it's important is that we probably need to oversee many model actions, and so we probably need to decide on a per-action oversight budget (e.g. the amount of human labor spent on overseeing each action) that will determine what subtlety of actions we will or won't be able to resolve, but we probably won't choose a per-action oversight budget that is close to the whole AI project budget.
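To make the gap concrete, here is a minimal back-of-the-envelope sketch in Python; all of the numbers (total project labor, fraction allocated to oversight, number of actions to oversee) are hypothetical placeholders, not claims from the post.

```python
# Illustrative only: every number below is a made-up assumption, chosen just to show
# how a per-action oversight budget falls out of a total budget and an action count.

total_project_budget_hours = 10_000_000   # hypothetical: total human labor available to the AI project
fraction_spent_on_oversight = 0.05        # hypothetical: share of that labor allocated to oversight
actions_needing_oversight = 1_000_000     # hypothetical: number of model actions you want to oversee

per_action_budget_hours = (
    total_project_budget_hours * fraction_spent_on_oversight / actions_needing_oversight
)
print(f"Per-action oversight budget: {per_action_budget_hours:.2f} human-hours")
# -> 0.50 human-hours per action, versus the millions of hours you could in principle
#    spend on a single maximally subtle action. The space between those two numbers is
#    the regime where the quantitative framing of scalable oversight matters.
```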
Another reason this feels important to me is that, as discussed here, I think a lot of AI risk comes from early transformative models that aren't that much smarter than humans; when thinking through the risk associated with them, it feels pretty implausible that a dangerous action of theirs would be unoverseeable if the whole AI project's budget was spent on overseeing it, but it feels very plausible that with the practical per-action oversight budget, the actions will be too subtle to oversee properly.
What implications does this have for research?
One question that seems extremely important to me as a result of all this is: what sample efficiency should we expect when training very powerful models? This is relevant because it's a key part of forecasting the per-action oversight budget.
In particular, if the sample efficiency of RL is higher for larger models, it might turn out that the optimal strategy for RLing early transformative models is to produce many fewer and much more expensive labels than people use when training current systems; I think people often neglect this possibility when thinking about the future of scalable oversight. I'd love to see some careful research analyzing the tradeoffs between label quality and quantity, and how these change with model scale.
(Ansh has done a bit of investigation along these lines.)
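As a toy illustration of the quality-vs-quantity tradeoff under a fixed labeling budget, here is a small Python sketch. The functional forms (logarithmic returns to effort per label, a "sample efficiency" exponent on label count) are invented assumptions for illustration, not claims about real scaling behavior or about the research mentioned above.

```python
# A toy sketch (made-up functional forms) of how the optimal split between label
# quality and label quantity could shift as models become more sample efficient.
import math

LABEL_BUDGET_HOURS = 10_000  # hypothetical: total human-hours available for labeling

def training_signal(hours_per_label: float, sample_efficiency: float) -> float:
    """Toy objective: value of the label set given a quality/quantity split."""
    num_labels = LABEL_BUDGET_HOURS / hours_per_label
    quality = math.log1p(hours_per_label)            # diminishing returns to effort per label
    return (num_labels ** (1 - sample_efficiency)) * quality

for sample_efficiency in (0.0, 0.5, 0.9):            # higher = model needs fewer labels
    best = max((0.1, 1, 10, 100, 1000), key=lambda h: training_signal(h, sample_efficiency))
    print(f"sample_efficiency={sample_efficiency}: best hours per label = {best}")
# In this toy model, as sample efficiency rises the optimum shifts from many cheap
# labels (0.1 hours each) toward few, very expensive labels (1000 hours each).
```

The point of the sketch is only directional: if sample efficiency improves enough with scale, the affordable per-label oversight budget could rise dramatically, which changes what kinds of oversight techniques are worth developing.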
When imagining scalable oversight in the future, I don't think people should entirely be imagining cases where the research...