The Nonlinear Library: EA Forum
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I worry about EA leadership, explained through two completely made-up LinkedIn profiles, published by Yanni Kyriacos on March 16, 2024 on The Effective Altruism Forum.

The following story is fictional and does not depict any actual person or event... da da. (You better believe this is a Draft Amnesty thingy.)

Epistemic status: very low confidence, but a niggling worry. Would LOVE for people to tell me this isn't something to worry about.

I've been around EA for about six years, and every now and then I have a sneaky peek at the old LinkedIn profile. Something I've noticed is that there seem to be a lot of people in leadership positions whose LinkedIn looks a lot like Profile #1, and not a lot who look like Profile #2. Allow me to spell out some of the important distinctions:

Profile #1:
- Immediately jumped into the EA ecosystem as an individual contributor
- Worked their way up through the ranks through good old-fashioned hard work
- Has approximately zero experience in the non-EA workforce, and definitely none managing non-EAs. Now they manage people

Profile #2:
- Like Profile #1, went to a prestigious uni, maybe did postgrad; doesn't matter, not the major point of this post
- Got some grad gig in Mega Large Corporation and got exposure to normal people; was probably crushed by the bureaucracy and politics at some point
- Most importantly, Fucked Around And Found Out (FAAFO) for the next five years. Did lots of different things across multiple industries. Gained a bunch of skills in the commercial world. Had their heart broken. Was not fossilized by EA norms. But NOW THEY'RE BACK BAYBEEE....

If I had more time and energy I'd probably make some more evidenced claims about meta issues, and how things like SBF, the sexual misconduct cases, or Nonlinear could have been helped with more of #2 than #1, but I don't have the time or energy (I'm also less sure about this claim).

I also expect people in group 1 to downvote this and people in group 2 to upvote it, but most importantly, I want feedback on whether people think this is a thing, and if it is, whether it's bad.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
EA - Effective Aspersions: How the Nonlinear Investigation Went Wrong by TracingWoodgrains
EA - 80,000 Hours spin out announcement and fundraising by 80000 Hours
EA - Summary: The scope of longtermism by Global Priorities Institute
EA - Bringing about animal-inclusive AI by Max Taylor
EA - OpenAI's Superalignment team has opened Fast Grants by Yadav
EA - Launching Asimov Press by xander balwit
EA - EA for Christians 2024 Conference in D.C. | May 18-19 by JDBauman
EA - The Global Fight Against Lead Poisoning, Explained (A Happier World video) by Jeroen Willems
EA - What is the current most representative EA AI x-risk argument? by Matthew Barnett
EA - #175 - Preventing lead poisoning for $1.66 per child (Lucia Coulter on the 80,000 Hours Podcast) by 80000 Hours
EA - My quick thoughts on donating to EA Funds' Global Health and Development Fund and what it should do by Vasco Grilo
EA - Announcing Surveys on Community Health, Causes, and Harassment by David Moss
EA - On-Ramps for Biosecurity - A Model by Sofya Lebedeva
EA - Risk Aversion in Wild Animal Welfare by Rethink Priorities
EA - Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023 by JorgeTorresC
EA - Will AI Avoid Exploitation? (Adam Bales) by Global Priorities Institute
EA - Faunalytics' Plans & Priorities For 2024 by JLRiedi
EA - GWWC is spinning out of EV by Luke Freeman
EA - EV updates: FTX settlement and the future of EV by Zachary Robinson
EA - Center on Long-Term Risk: Annual review and fundraiser 2023 by Center on Long-Term Risk