Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: Against the singularity hypothesis, published by Global Priorities Institute on May 22, 2024 on The Effective Altruism Forum.
This is a summary of the GPI Working Paper "Against the singularity hypothesis" by David Thorstad (published in Philosophical Studies). The summary was written by Riley Harris.
The singularity is a hypothetical future event in which machines rapidly become significantly smarter than humans. The idea is that we might invent an artificial intelligence (AI) system that can improve itself. After a single round of self-improvement, that system would be better equipped to improve itself than before. This process might repeat many times, and each time the AI system would become more capable and better equipped to improve itself even further.
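To make the feedback loop concrete, here is a minimal sketch (an illustrative toy model, not taken from the paper; the growth factor and starting values are assumptions) of why repeated self-improvement could produce explosive growth if each round multiplies capability by a constant factor:

```python
# Toy model of recursive self-improvement (illustrative only, not from the paper).
# Assumption: each round of self-improvement multiplies capability by a
# constant factor r > 1, so capability grows exponentially with no friction.

def recursive_self_improvement(initial_capability: float, r: float, rounds: int) -> list[float]:
    """Return the capability level after each round of self-improvement."""
    levels = [initial_capability]
    for _ in range(rounds):
        levels.append(levels[-1] * r)
    return levels

# With r = 1.5, ten rounds take capability from 1x to roughly 57x.
print(recursive_self_improvement(1.0, 1.5, 10))
```

The arguments summarised below are, in effect, reasons to doubt that such a growth factor would stay high for long: diminishing returns, bottlenecks, physical constraints and sublinear returns to hardware all push against it.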
At the end of this (perhaps very rapid) process, the AI system could be much smarter than the average human. Philosophers and computer scientists have argued that we should take the possibility of a singularity seriously (Solomonoff 1985, Good 1996, Chalmers 2010, Bostrom 2014, Russell 2019).
It is characteristic of the singularity hypothesis that AI will take at most years, and perhaps only months, to become many times more intelligent than even the most intelligent human.[1] Such extraordinary claims require extraordinary evidence. In the paper "Against the singularity hypothesis", David Thorstad argues that we do not have enough evidence to justify belief in the singularity hypothesis, and that we should consider it unlikely unless stronger evidence emerges.
Reasons to think the singularity is unlikely
Thorstad is sceptical that machine intelligence can grow quickly enough to justify the singularity hypothesis. He gives several reasons for this.
Low-hanging fruit. Innovative ideas and technological improvements tend to become harder to find over time. For example, consider "Moore's law", which is (roughly) the observation that the number of transistors that fit on a chip doubles every two years. Between 1971 and 2014, Moore's law was maintained only with an astronomical increase in the amount of capital and labour invested in semiconductor research (Bloom et al. 2020).
In fact, according to one leading estimate, there was an eighteen-fold drop in research productivity over this period. While some features of future AI systems may allow them to make progress faster than human scientists and engineers can, they are still likely to face diminishing returns, because the easiest discoveries will already have been made and only harder problems will remain.
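As a rough illustration of what an "eighteen-fold drop in productivity" means (the numbers below are placeholders chosen only to match that ratio, not figures from Bloom et al.), research productivity can be read as progress achieved per unit of research effort:

```python
# Illustrative arithmetic for declining research productivity.
# Assumption: the rate of progress (doublings of chip density per year)
# stays roughly constant, while effective research effort grows ~18x.

progress_per_year_1971 = 0.5          # doublings per year (placeholder)
progress_per_year_2014 = 0.5          # roughly unchanged under Moore's law
effort_1971 = 1.0                     # normalised research effort
effort_2014 = 18.0                    # ~18x more effort (placeholder scale)

productivity_1971 = progress_per_year_1971 / effort_1971
productivity_2014 = progress_per_year_2014 / effort_2014

print(productivity_1971 / productivity_2014)  # -> 18.0: an eighteen-fold drop
```

In other words, holding the rate of progress steady required vastly more effort, which is exactly what diminishing returns look like.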
Bottlenecks. AI progress relies on improvements in search, computation, storage and so on (and each of these areas breaks down into many subcomponents). Progress could be slowed by any one of these subcomponents: if any of them is difficult to speed up, then AI progress will be much slower than we would naively expect. The classic metaphor is the rate at which liquid can leave a bottle, which is limited by the narrow neck near the opening.
AI systems may run into bottlenecks if any essential component cannot be improved quickly (see Aghion et al. 2019).
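A toy way to see the worry (a sketch under the assumption that the components are strict complements and cannot substitute for one another, not the formal model in Aghion et al. 2019): if overall progress is capped by the slowest essential component, speeding up the others helps very little.

```python
# Weakest-link illustration of bottlenecks (illustrative only).
# Assumption: overall system improvement is capped by the least-improved
# essential component (perfect complements, no substitution between them).

def effective_speedup(component_speedups: list[float]) -> float:
    """Overall speedup when every component is essential."""
    return min(component_speedups)

# Search and computation improve 100x, but storage improves only 2x:
print(effective_speedup([100.0, 100.0, 2.0]))  # -> 2.0
```

Under this assumption, a 100x improvement in two components yields only a 2x overall gain, because the third component gates everything else.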
Constraints. Resource and physical constraints may also limit the rate of progress. To take an analogy, Moore's law becomes harder to maintain because it is expensive, physically difficult and energy-intensive to cram ever more transistors into the same space. We might therefore expect progress to slow as physical and financial constraints pose ever greater barriers.
Sublinear growth. How do improvements in hardware translate into intelligence growth? Thompson and colleagues (2022) find that exponential hardware improvements translate into only linear gains in performance on problems such as chess, Go, protein folding, weather prediction and the modelling of underground oil reservoirs. Over the past 50 years,...
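To illustrate the shape of that relationship (a sketch assuming performance grows with the logarithm of compute; the constants are placeholders, not estimates from Thompson et al. 2022): when performance is logarithmic in compute, exponential growth in hardware yields only a constant performance gain per doubling.

```python
import math

# Illustrative sublinear returns: performance ~ log2(compute).
# Assumption: each doubling of compute adds a constant performance increment.

def performance(compute: float, base: float = 1000.0, gain_per_doubling: float = 50.0) -> float:
    """Toy performance score as a function of compute (placeholder constants)."""
    return base + gain_per_doubling * math.log2(compute)

# Compute grows exponentially (1x, 10x, 100x, 1000x) ...
for compute in [1, 10, 100, 1000]:
    # ... but performance rises by a roughly constant amount each step.
    print(compute, round(performance(compute), 1))
```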