Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 (and All Time) Posts by Pingback Count, published by Raemon on December 17, 2023 on LessWrong.
For the past couple of years I've wished LessWrong had a "sort posts by number of pingbacks, or, ideally, by total karma of pingbacks" feature. I particularly wished for this during the Annual Review, where "which posts got cited the most?" seemed like a useful thing to track for surfacing potential hidden gems.
We still haven't built a full-fledged feature for this, but I just ran a query against the database and turned the results into a spreadsheet, which you can view here:
LessWrong 2022 Posts by Pingbacks
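The columns follow directly from the pingback data: Total Pingback Karma is the sum of the karma of every post that links back to a given post, and Avg Pingback Karma is that total divided by the Pingback Count (e.g. 12,484 / 158 ≈ 79 for the top entry). The post doesn't include the actual query, but here is a minimal Python sketch of the aggregation it describes, using hypothetical `karma` and `pingbacks` stand-ins rather than LessWrong's real schema:

```python
# Minimal sketch of the aggregation behind the spreadsheet.
# `karma` and `pingbacks` are hypothetical stand-ins for the real
# LessWrong database, not the site's actual schema.

karma = {
    "AGI Ruin: A List of Lethalities": 870,
    "Citing post A": 95,
    "Citing post B": 63,
}

# post -> list of posts that link back to it
pingbacks = {
    "AGI Ruin: A List of Lethalities": ["Citing post A", "Citing post B"],
}

rows = []
for post, citing_posts in pingbacks.items():
    total = sum(karma[p] for p in citing_posts)  # Total Pingback Karma
    count = len(citing_posts)                    # Pingback Count
    rows.append({
        "title": post,
        "post_karma": karma[post],
        "pingback_count": count,
        "total_pingback_karma": total,
        "avg_pingback_karma": round(total / count) if count else 0,
    })

# Sort the way the spreadsheet is sorted: by Total Pingback Karma, descending.
rows.sort(key=lambda r: r["total_pingback_karma"], reverse=True)
```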
Here are the top 100 posts, sorted by Total Pingback Karma:
| Title/Link | Post Karma | Pingback Count | Total Pingback Karma | Avg Pingback Karma |
|---|---:|---:|---:|---:|
| AGI Ruin: A List of Lethalities | 870 | 158 | 12,484 | 79 |
| MIRI announces new "Death With Dignity" strategy | 334 | 73 | 8,134 | 111 |
| A central AI alignment problem: capabilities generalization, and the sharp left turn | 273 | 96 | 7,704 | 80 |
| Simulators | 612 | 127 | 7,699 | 61 |
| Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | 367 | 83 | 5,123 | 62 |
| Reward is not the optimization target | 341 | 62 | 4,493 | 72 |
| A Mechanistic Interpretability Analysis of Grokking | 367 | 48 | 3,450 | 72 |
| How To Go From Interpretability To Alignment: Just Retarget The Search | 167 | 45 | 3,374 | 75 |
| On how various plans miss the hard bits of the alignment challenge | 292 | 40 | 3,288 | 82 |
| [Intro to brain-like-AGI safety] 3. Two subsystems: Learning & Steering | 79 | 36 | 3,023 | 84 |
| How likely is deceptive alignment? | 101 | 47 | 2,907 | 62 |
| The shard theory of human values | 238 | 42 | 2,843 | 68 |
| Mysteries of mode collapse | 279 | 32 | 2,842 | 89 |
| [Intro to brain-like-AGI safety] 2. "Learning from scratch" in the brain | 57 | 30 | 2,731 | 91 |
| Why Agent Foundations? An Overly Abstract Explanation | 285 | 42 | 2,730 | 65 |
| A Longlist of Theories of Impact for Interpretability | 124 | 26 | 2,589 | 100 |
| How might we align transformative AI if it's developed very soon? | 136 | 32 | 2,351 | 73 |
| A transparency and interpretability tech tree | 148 | 31 | 2,343 | 76 |
| Discovering Language Model Behaviors with Model-Written Evaluations | 100 | 19 | 2,336 | 123 |
| A note about differential technological development | 185 | 20 | 2,270 | 114 |
| Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] | 195 | 35 | 2,267 | 65 |
| Supervise Process, not Outcomes | 132 | 25 | 2,262 | 90 |
| Shard Theory: An Overview | 157 | 28 | 2,019 | 72 |
| Epistemological Vigilance for Alignment | 61 | 21 | 2,008 | 96 |
| A shot at the diamond-alignment problem | 92 | 23 | 1,848 | 80 |
| Where I agree and disagree with Eliezer | 862 | 27 | 1,836 | 68 |
| Brain Efficiency: Much More than You Wanted to Know | 201 | 27 | 1,807 | 67 |
| Refine: An Incubator for Conceptual Alignment Research Bets | 143 | 21 | 1,793 | 85 |
| Externalized reasoning oversight: a research direction for language model alignment | 117 | 28 | 1,788 | 64 |
| Humans provide an untapped wealth of evidence about alignment | 186 | 19 | 1,647 | 87 |
| Six Dimensions of Operational Adequacy in AGI Projects | 298 | 20 | 1,607 | 80 |
| How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | 240 | 16 | 1,575 | 98 |
| Godzilla Strategies | 137 | 17 | 1,573 | 93 |
| (My understanding of) What Everyone in Technical Alignment is Doing and Why | 411 | 23 | 1,530 | 67 |
| Two-year update on my personal AI timelines | 287 | 18 | 1,530 | 85 |
| [Intro to brain-like-AGI safety] 15. Conclusion: Open problems, how to help, AMA | 90 | 16 | 1,482 | 93 |
| [Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL | 66 | 25 | 1,460 | 58 |
| Human values & biases are inaccessible to the genome | 90 | 14 | 1,450 | 104 |
| You Are Not Measuring What You Think You Are Measuring | 350 | 21 | 1,449 | 69 |
| Open Problems in AI X-Risk [PAIS #5] | 59 | 14 | 1,446 | 103 |
| [Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? | 146 | 25 | 1,407 | 56 |
| Conditioning Generative Models | 24 | 11 | 1,362 | 124 |
| Conjecture: Internal Infohazard Policy | 132 | 14 | 1,340 | 96 |
| A challenge for AGI organizations, and a ch... | | | | |