This week we talk about regulatory capture, OpenAI, and Biden’s executive order.
We also discuss the UK’s AI safety summit, open source AI models, and flogging fear.
Recommended Book: The Resisters by Gish Jen
Transcript
Regulatory capture refers to the corruption of a regulatory body by the entities to which the regulations that body creates and enforces apply.
So an organization that wants to see less funding for public schools and more for private and home schooling options getting one of their people into a position at the Department of Education, or someone from Goldman Sachs or a similar financial institution being shoehorned into a position at the Federal Reserve, could, through some lenses at least, and depending on how many connections those people retain to those other, affiliated, ideological and commercial institutions, be construed as engaging in regulatory capture, because they're now able to control the levers of regulation that apply to their own business or industry, or their peers, the folks they previously worked with and people to whom they maybe owe favors, or vice versa; and that could lead to regulations that are more favorable to them and their preferred causes, and those of their fellow travelers.
This is in contrast to regulatory bodies that apply limits to such businesses and organizations, figuring out where they might overstep or lock in their own power at the expense of the industry in which they operate, and slowly, over time, plugging loopholes, finding instances of not-quite-illegal misdeeds that nonetheless lead to negative outcomes, and generally being the entity in charge in spaces that might otherwise be dominated by just one or two businesses that can kill off all their competition and make things worse for consumers and workers.
Often, rather than regulatory capture being a matter of one person from a group insinuating themselves into the relevant regulatory body, the regulatory body itself will ask representatives from the industry it regulates to help make the rules, because, ostensibly at least, those regulatees should know the business better than anyone else, and in helping to create their own constraints (again, ostensibly) they should be more willing to play by the rules, because they helped develop the rules by which they're meant to abide, and probably helped develop rules they can live with and thrive under; most regulators aren't trying to kill ambition or innovation or profit, they're just trying to prevent abuses and monopolistic hoarding.
This sort of capture has taken many shapes over the years, and has occurred at many scales.
In the late 19th century, for instance, railroad tycoons petitioned the US government for regulation to help them bypass a clutter of state-level regulations that were making it difficult and expensive for them to do business, and in doing so (in asking to be regulated and helping the federal government develop the applicable regulations) they were able to make their own lives easier, while also creating what was effectively a cartel for themselves with the blessing of the government that regulated their power; the industry as it existed when those regulations were signed into law was basically locked into place, in such a way that no new competitors could practically arise.
Similar efforts have been launched, at times quite successfully, by entities in the energy space, across various aspects of the financial world, and in just about every other industry you can imagine, from motorcyclists' protective clothing to cheerleading competitions to aviation and its many facets. All have, to some degree and at some point, allegedly been regulatorily captured, so that those being regulated control, to some extent, the regulations under which they operate; that has at times allowed them to create constraints that benefit them and entrench their own power, rather than opening their industry up and increasing competition, safety, and the treatment and benefits afforded to customers and workers, which is generally the intended outcome of such regulations.
What I'd like to talk about today is the burgeoning world of artificial intelligence and why some players in this space are being accused of attempting the time-tested tactic of regulatory capture at a pivotal moment of AI development and deployment.
—
At the tail end of October 2023, US President Biden announced that he was signing a fairly expansive executive order on AI: the first of its kind, and reportedly the first step toward still greater and more concrete regulation.
A poll conducted by the AI Policy Institute suggests that Americans are generally in favor of this sort of regulatory move, with 68% supporting the initiative, which is a really solid number, especially at a moment as politically divided as this one. Most of the companies working in this space, at least at a large enough scale to show up on the AI map at this point, seem to be in favor of this executive order as well, with some caveats that I'll get to in a bit.
That indicates the government probably got things pretty close to where they need to be, in terms of folks actually adhering to these rules. It's important to note, though, that part of why there's such broad acceptance of the tenets of this order is that these rules don't have any real teeth: it's largely voluntary stuff, and it mostly applies only to the anticipated next generation of AI. The current generation isn't powerful enough to fall under its auspices, in most cases, so AI companies don't need to do much of anything yet to adhere to these standards, and when they eventually do need to do something to remain in accordance with them, it'll mostly be providing reports to government employees so they can keep tabs on developments in this space, including those happening behind closed doors.
Now that is not nothing: at the moment, this industry is essentially a black box as far as would-be regulators are concerned, so simply providing a process by which companies working on advanced AI and AI applications can keep the government informed of their efforts is a big step, one that raises visibility from zero to some meaningful level.
It also provides mechanisms through which such entities can get funding from the government, and pathways through which international AI experts can come to the United States with less friction than would be the case for folks without that expertise.
So AI industry entities generally like all this because it's easy for them to work with and flexible enough not to punish them if they fail in some regard, while also providing them with more resources, both monetary and human, and setting the US up, in many ways, to maintain its current purported AI dominance well into the future, despite essentially everyone (especially but not exclusively China) investing a whole lot to catch up with and surpass the US in the coming years.
Another response to this order, though, and the regulatory infrastructure it creates, was voiced by the founder of Google Brain, Andrew Ng, who has been working on AI systems and applications for a long time. He basically says that some of the biggest players in AI today are playing up the idea that artificial intelligence systems might be dangerous, even to the point of being world-ending, because they hope to see exactly this kind of regulatory framework created at this exact moment: right now they are the kings of the AI ecosystem, and they're hoping to lock that influence in, denying easy access to any future competitors.
This theory is predicated on that concept I mentioned in the intro, regulatory capture, and history is rich with examples of folks in positions of power in various spaces telling their governments to put their industry on lockdown, and making the case for why this is necessary, because they know that, in doing so, their position at the top will probably be locked in: challenging them will become more difficult and expensive, and thus out of reach, for any newer, smaller, not-yet-influential competitor moving forward.
One way this might manifest in the AI space, according to Ng, is through the licensing of powerful AI models: essentially saying that if you want to use the more powerful AI systems for your product or research, you need to register with the government, and you need to buy access, basically, from one of these government-sanctioned providers; only then would you be allowed to play in this potentially dangerous space, with these highest-end AI models.
This, in turn, would substantially reduce innovation, as other entities wouldn't be able to legally evolve their AI in different directions, at least not at a high level, and it would make today's behemoths, the OpenAIs and Metas of the world, all but invulnerable to future challenges, because their models would be the ones made available for everyone else to use; no one else could compete, not practically, at least.
This would be not great for smaller, upstart AI companies, but it would be especially detrimental to open source large language models: versions of the most popular LLM-based AI systems that are open to the public to mess around with and use however they see fit, rather than being controlled and sold by a single company.
The communities behind these open source models would be unlikely to have the resources or governing body necessary to step into the position of regulator-approved moderator of potentially dangerous AI systems, and the open source credo doesn't really play well with that kind of setup to begin with: the idea is that all the code is open and available to take and use and change, so locking it down at all would violate those principles. And this sort of regulatory approach would be all about the lockdown, built on fears of bad actors getting their hands on high-end AI systems; fears that have been flogged by entities like OpenAI.
So that collection of fears is potentially fueling the relatively fast-moving regulatory developments related to AI in the US right now; regulation, by the way, typically moves a lot slower in the US, which is part of why this is so notable.
This is not a US-exclusive concern, though, nor is this executive order the only big, new regulatory effort in this space.
At a summit in the UK just days after the US executive order was announced, AI companies from around the world, and those who govern such entities, met up to discuss the potential national security risks inherent in artificial intelligence tools, and to sign a legally non-binding agreement to let their governments test their newest, most powerful models for risks before they're released to the public.
The US participated in this summit, as well, and a lot of these new rules overlap with each other, as the executive order shares a lot of tenets with the agreement signed at that meeting in the UK—though the EO was US-specific and included non-security elements, as well, and that will be the case for laws and orders passed in the many different countries to which these sorts of global concerns apply, each with their own approach to implementing those more broadly agreed-upon specifics at the national level.
This summit also announced the creation of an international panel of experts who will publish an annual report on the state of the art within the AI space, especially as it applies to national security risks like misinformation and cybersecurity issues. When questioned about whether the UK should take things a step further, locking some of these ideas and rules into place and making them legal requirements rather than things corporations agree to do but aren't punished for not doing, Prime Minister Rishi Sunak said, in essence, that this sort of thing takes time; that's a sentiment that's been echoed by many other lawmakers, and by people within this industry, as well.
In other words: we know there need to be stricter and more enforceable regulations in this space, but because of where we are with this collection of technologies, and the culture and rules and applications surrounding them right now, we don't really know which laws would make the most sense.
No nation wants to tie its own hands in developing increasingly useful and powerful AI tools, and moving too fast on concrete versions of these sorts of agreements could end up doing exactly that; there's no way to know what the best rules and regulations will be, yet, because we're standing at the precipice of what looks like a long journey toward a bunch of new discoveries and applications.
That's why the US executive order is set up the way it is, too: Biden and his advisors don't want to slow down development in this space within the US, they want to amplify it, while also providing some foundational structure for whatever they decide needs to be built next; those next-step decisions will be shaped by how these technologies and industries evolve over the next few years.
The US and other countries are also setting up agencies and institutes and all sorts of safety precautions related to this space, but most of them lack substance at this point, and as with the aforementioned regulations, these agency setups are primarily just first-draft guardrails, if that.
Notably, the EU seems to be orienting around somewhat sterner regulations, but its members haven't been able to agree on anything concrete quite yet, so despite the EU typically taking the lead on this sort of thing, the US is a little bit ahead of it in terms of AI regulation right now; though it's likely that when the EU does finally put something into place, it'll be harder-core than what the US currently has.
A few analysts in this space have argued that these new regulations, lightweight as they are, both at the global and US level, will by definition hobble innovation, because regulations tend to do that: they're opinionated about what's important and what's not, and that then shapes the direction makers in the regulated space will tend to go.
There's also a chance, as I mentioned before, that this set of regulations, laid out in this way, will lock the power of incumbent AI companies into place, protecting them from future competitors, and in doing so also killing off a lot of the forces of innovation that would otherwise lead to unpredictable sorts of outcomes.
One big question, then, is how light a touch these initial regulations will actually end up having, and how the AI and adjacent industries will reshape themselves to account for these and predicted future regulations. Another is to what degree open source alternatives, and other third-party alternatives beyond the current incumbents, will be able to step in and take market share, nudging things in different directions, and then potentially either being incorporated into and shaping those future, more toothy regulations, or halting the deployment of those regulations by showing that the current direction of regulatory development no longer makes sense.
We'll also see how burdensome the testing and other security-related requirements in these initial rules end up being, as there's a chance more attention and resources will shift toward lighter-weight, less technically powerful, but more useful and deployable versions of these current AI tools. That's already something many entities are experimenting with, because it comes with other benefits, like being able to run AI on devices like a smartphone, without needing to connect over the internet to a huge server somewhere.
Refocusing on smaller models could also allow some developers and companies to move a lot faster than their more powerful but plodding and regulatorily hobbled kin, rewiring the industry in their favor, rather than toward those who are currently expected to dominate this space for the foreseeable future.
Show Notes
On the EO
https://www.aijobstracker.com/ai-executive-order
Reactions to EO
https://archive.ph/RdpLh
https://theaipi.org/poll-biden-ai-executive-order-10-30/
https://www.nytimes.com/2023/10/30/us/politics/biden-ai-regulation.html
https://qz.com/does-anyone-not-like-bidens-new-guidelines-on-ai-1850974346
https://archive.ph/wwRXj
https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz
https://twitter.com/ylecun/status/1718670073391378694
https://stratechery.com/2023/attenuating-innovation-ai/
First take on EO
What EO means for openness in AI
Biden’s regulation plans
https://www.reuters.com/technology/eu-lawmakers-face-struggle-reach-agreement-ai-rules-sources-2023-10-23/
https://archive.ph/IwLZu
https://techcrunch.com/2023/11/01/politicians-commit-to-collaborate-to-tackle-ai-safety-us-launches-safety-institute/
https://indianexpress.com/article/explained/explained-sci-tech/on-ai-regulation-the-us-steals-a-march-over-europe-amid-the-uks-showpiece-summit-9015032/