Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Soft Nationalization: How the US Government Will Control AI Labs, published by Deric Cheng on August 27, 2024 on LessWrong.
We have yet to see anyone describe a critical element of effective AI safety planning: a realistic model of the upcoming role the US government will play in controlling frontier AI.
The rapid development of AI will lead to increasing national security concerns, which will in turn pressure the US to progressively take action to control frontier AI development. This process has already begun,[1] and it will only escalate as frontier capabilities advance.
However, we argue that existing descriptions of nationalization[2] along the lines of a new Manhattan Project[3] are unrealistic and reductive. The state of the frontier AI industry - with more than $1 trillion[4] in private funding, tens of thousands of participants, and pervasive economic impacts - is unlike nuclear research or any previously nationalized industry.
The traditional interpretation of nationalization, which entails bringing private assets under the ownership of a state government,[5] is not the only option available. Government consolidation of frontier AI development is legally, politically, and practically unlikely.
We expect that AI nationalization won't look like a consolidated government-led "Project", but rather like an evolving application of US government control over frontier AI labs. The US government can select from many different policy levers to gain influence over these labs, and will progressively pull these levers as geopolitical circumstances, particularly around national security, seem to demand it.
Government control of AI labs will likely escalate as concerns over national security grow. The boundary between "regulation" and "nationalization" will become hazy. In particular, we believe the US government can and will satisfy its national security concerns in nearly all scenarios by combining sets of these policy levers, and would only turn to total nationalization as a last resort.
We're calling the process of progressively increasing government control over frontier AI labs via iterative policy levers soft nationalization.
It's important to clarify that we are not advocating for a national security approach to AI governance, nor are we endorsing any individual policy actions. Instead, we are describing a model of US government behavior that we believe is likely to be accurate, in order to improve the effectiveness of AI safety agendas.
Part 1: What is Soft Nationalization?
Our Model of US Control Over AI Labs
We'd like to define a couple of terms used in this article:
Total nationalization: The traditional meaning of "nationalization", where a government transforms private industry or organizations into a public asset, taking over full ownership and control.
Soft nationalization: In contrast to total nationalization, soft nationalization encompasses a wide-ranging set of policy levers governments can use to increase control over the direction, impact, and applications of a private industry or organization. These levers may allow governments to achieve their high-level goals without taking full ownership of said entity.
We argue that soft nationalization is a useful model for characterizing the upcoming involvement of the US government in frontier AI labs, based on the following observations:
1. Private US AI labs are currently the leading organizations pushing the frontier of AI development, and will be among the first to develop AI with transformative capabilities.
2. Advanced AI will have significant impacts on national security and the balance of global power.
3. A key priority for the US government is to ensure global military and technological superiority - in particular, relative to geopolitical rivals such as China.
4. Hence, the US government will begin to exert greater control and ...