Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk and the US Presidential Candidates, published by Zane on January 7, 2024 on LessWrong.
It's the new year, and the 2024 primaries are approaching, starting with the Iowa Republican caucus on January 15. For a lot of people here on LessWrong, the issue of AI risk will likely be an important factor in making a decision. AI hasn't been mentioned much during any of the candidates' campaigns, but I'm attempting to analyze what information there is, and determine which candidate is most likely to bring about a good outcome.
A few background facts about my own position - if these statements do not apply to you, my recommendation may not apply to you either:
I believe that, barring some sort of action to prevent this, the default result of creating artificial superintelligence is human extinction.
I believe that our planet is very far behind in alignment research compared to capabilities, and that this means we will likely need extensive international legislation to slow/pause/stop the advance of AI systems in order to survive.
I believe that preventing ASI from killing humanity is so much more important than any[1] other issue in American politics that I intend to vote solely on the basis of AI risk, even if this requires voting for candidates I would otherwise not have wanted to vote for.[2]
I believe that no mainstream politicians are currently suggesting any plans that would be sufficient for survival, nor do they even realize the problem exists. Most mainstream discourse on AI safety is focused on comparatively harmless risks, like misinformation and bias. The question I am asking is "which of these candidates seems most likely to end up promoting a somewhat helpful AI policy" rather than "which of these candidates has already noticed the problem and proposed the ideal solution," since the answer to the second question is none of them.
(Justification for these beliefs is not the subject of this particular post.)
And a few other background facts about the election, just in case you haven't been following American politics:
As the incumbent president, Joe Biden is essentially guaranteed to be the Democratic nominee, unless he dies or is otherwise incapacitated.
Donald Trump is leading in the polls for Republican nominee by very wide margins, followed by Nikki Haley, Ron DeSantis, Vivek Ramaswamy, and Chris Christie. Manifold[3] currently gives him an 88% chance of winning the nomination.
However, Trump is facing criminal charges regarding the Capitol attack on January 6, 2021, and the Colorado Supreme Court and Maine's Secretary of State have attempted to disqualify him from the ballot.
As usual, candidates from outside the Democratic and Republican parties are not getting much support, although Robert F. Kennedy Jr. is polling unusually well for an independent candidate.
Joe Biden
Biden's most notable action regarding AI was Executive Order 14110[4]. The executive order was intended to limit various risks from AI... none of which were at all related to human extinction, except maybe bioweapons. The order covers risks from misinformation, cybersecurity, algorithmic discrimination, and job loss, while also focusing on trying to reap potential benefits of AI.
But the measures contained in the order, while limited in scope, seem to be a step in the right direction. Most importantly, anyone training a model with 10^26 floating point operations or more must report their actions and safety precautions to the government. That's a necessary piece of any future regulation on such large models.
Biden has spoken with the UN about international cooperation on AI, and frequently speaks of AI and other new technologies as both a source of "enormous potential and enormous peril," or other similar phrasings. "We need to make sure they're used as tools of opportunity, not wea...