Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "TESCREAL" Bungle, published by ozymandias on June 4, 2024 on The Effective Altruism Forum.
A specter is haunting Silicon Valley - the specter of TESCREALism.
"TESCREALism" is a term coined by philosopher Émile Torres and AI ethicist Timnit Gebru to refer to a loosely connected group of beliefs popular in Silicon Valley. The acronym unpacks to:
Transhumanism - the belief that we should develop and use "human enhancement" technologies that would give people everything from indefinitely long lives and new senses like echolocation to math skills that rival John von Neumann's.
Extropianism - the belief that we should settle outer space and create or become innumerable kinds of "posthuman" minds very different from present humanity.
Singularitarianism - the belief that humans are going to create a superhuman intelligence in the medium-term future.
Cosmism - a near-synonym of extropianism.
Rationalism - a community founded by AI researcher Eliezer Yudkowsky, which focuses on figuring out how to improve people's ability to make good decisions and come to true beliefs.
Effective altruism - a community focused on using reason and evidence to improve the world as much as possible.
Longtermism - the belief that one of the most important considerations in ethics is the effects of our actions on the long-term future.[1]
TESCREALism is a personal issue for Torres,[2] who used to be a longtermist philosopher before becoming convinced that the ideology was deeply harmful. But the concept is beginning to go mainstream, with endorsements in publications like Scientific American and the Financial Times.
The concept of TESCREALism is at its best when it points out the philosophical underpinnings of many conversations occurring in Silicon Valley - principally about artificial intelligence but also about everything from gene-selection technologies to biosecurity. Eliezer Yudkowsky and Marc Andreessen - two influential thinkers Torres and Gebru have identified as TESCREAList - don't agree on much.
Eliezer Yudkowsky believes that with our current understanding of AI we're unable to program an artificial general intelligence that won't wipe out humanity; therefore, he argues, we should pause AI research indefinitely. Marc Andreessen believes that artificial intelligence will be the most beneficial invention in human history: people who push for delay have on their hands the blood of the starving people and sick children whom AI could have helped.
But their very disagreement depends on a number of common assumptions: that human minds aren't special or unique, that the future is going to get very strange very quickly, that artificial intelligence is one of the most important technologies determining the trajectory of the future, that intelligences descended from humanity can and should spread across the stars.[3]
As an analogy, Republicans and Democrats don't seem to agree about much. But if you were explaining American politics to a medieval peasant, the peasant would notice a number of commonalities: that citizens should choose their political leaders through voting, that people have a right to criticize those in charge, that the same laws ought to apply to everyone.
To explain what was going on, you'd call this "liberal democracy." Similarly, many people in Silicon Valley share a worldview that is unspoken and, all too often, invisible to them. When you mostly talk to people who share your perspective, it's easy not to notice the controversial assumptions behind it. We learn about liberal democracy in school, but the philosophical underpinnings of some common debates in Silicon Valley can be unclear.
It's easy to stumble across Andreessen's or Yudkowsky's writing without knowing anything about transhumanism. The TESCREALism concept can clarify what's going on for confused outsiders.
How...