Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio.
This is: Frank Feedback Given To Very Junior Researchers, published by NunoSempere on the Effective Altruism Forum.
Over the last year, I have found myself giving feedback on various drafts, something that I'm generally quite happy to do. Recently, I got to give two variations of this same feedback in quick succession, so I noticed the commonalities, and then realized that these commonalities were also present in past pieces of feedback. I thought I'd write the general template up, in case others might find it valuable.
High level comments
You are working at the wrong level of abstraction and depth / you are biting off more than you can chew / being too ambitious.
In particular, the questions that you analyze are likely to have many cruxes, i.e., factors that might change the conclusion completely. But you only identify a few such cruxes, and thus your analysis doesn't seem likely to be that robust.
I guess that the opposite error is possible—focus too much on one specific scenario which isn't that likely to happen. I just haven't seen it as much, and it doesn't seem as crippling when it happens.
Because you're being too ambitious, you don't have the tools necessary to analyze what you want to analyze, and to some extent those tools may not exist.
Compare with: Forecasting transformative AI timelines using biological anchors, Report on Semi-informative Priors on Transformative AI, or Invertebrate Sentience: Summary of findings, which are much more constrained and have specific technical/semi-technical intellectual tools suited to the job (comparison with biological systems, variations on Laplace's law and other priors, markers of consciousness like reaction to harmful stimuli). You don't have an equivalent technical tool.
There is a missing link between the individual facts you outline, and the conclusions you reach (e.g., about [redacted] and [redacted]). I think that the correct thing to do here is to sit with the uncertainty, or to consider a range of scenarios, rather than to reach one specific conclusion. Alternatively, you could highlight that different dynamics could still be possible, but that on the balance of probabilities, you personally think that your favored hypothesis is more likely.
But in that case, it'd be great if you more clearly defined your concepts and then expressed your certainty in terms of probabilities, because those are easier to criticize, to put to the test, or even to use to notice that there is a disagreement to be had.
Judgment calls
I get the impression that you rely too much on secondary sources, rather than on deeply understanding what you're talking about.
You are making the wrong tradeoff between formality and ¿clarity of thought?
Your report was difficult to read because of the trappings of scholarship (formal tone, long sentences and paragraphs, etc.). An index would have helped.
Your classification scheme is not exhaustive, and thus less useful.
This seems particularly important when considering intelligent adversaries.
I get the impression that you are not deeply familiar with the topic you are talking about. For example, when giving your overview, you don't consider [redacted], which is really the company working in this space.
In particular, I expect that the funders or decision-makers (for instance, Open Philanthropy) whom you might be attempting to influence or inform will be more familiar with the topic than you, and would thus not outsource their intellectual labor to your report.
I don't really know whether you are characterizing the literature faithfully, whether you're just citing the top few most salient experts that you found, or whether there are other factors at play. For instance, maybe the people who [redacted] don't want to be talking about it. Even if you are representing the academic consensus fairly, I don't know how much to trust it....