Summary
LLMs may be fundamentally incapable of fully general reasoning, and if so, short timelines are less plausible.
Longer summary
There is ML research suggesting that LLMs fail badly at general reasoning tasks such as planning, scheduling, and solving novel visual puzzles. This post provides a brief introduction to that research, and asks:
- Whether this limitation is illusory or actually exists.
- If it exists, whether it will be solved by scaling or is a problem fundamental to LLMs.
- If fundamental, whether it can be overcome by scaffolding and tooling.
If this is a real and fundamental limitation that can't be fully overcome by scaffolding, we should be skeptical of arguments like Leopold Aschenbrenner's (in his recent 'Situational Awareness') that we can simply 'follow straight lines on graphs' and expect AGI in the next few years.
Introduction
Leopold Aschenbrenner's [...]
The original text contained 9 footnotes which were omitted from this narration.
---
First published: June 24th, 2024
Source: https://www.lesswrong.com/posts/k38sJNLk7YbJA72ST/llm-generality-is-a-timeline-crux
---
Narrated by TYPE III AUDIO.