Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My first EAG: a mix of feelings, published by Lovkush on June 12, 2024 on The Effective Altruism Forum.
TLDR
I had a mix of feelings before and throughout EAG London 2024. Overall, the experience was excellent and I am more motivated and excited about my next steps in EA and AI safety. However, I am actually unsure if I will attend EAG next year, because I am yet to exhaust other means of networking, especially since I live in London.
Why might this be useful for you?
This is a narrative that differs from most others you will read about EAG.
Depending on your background and personality, this may reduce the pressure to optimise every aspect of your time at EAG. I am not saying to do no optimisation, but that the right balance differs from person to person.
If you have not been to an EAG, this provides a flavour of the interactions and feelings - both positive and negative - that are possible.
My background
I did pure maths from undergraduate to PhD, then lectured maths for foundation year students for a few years, then moved to industry and have been a data scientist at Shell for three years. I took the GWWC pledge in 2014, but I had not actively engaged with the community or chosen a career based on EA principles.
A few years ago I made an effort to apply EA principles to my career. I worked through the 80,000 Hours career template with AI safety being the obvious top choice, took the AI Safety Fundamentals course, applied to EAG London (and did not get accepted, which was reasonable), and also tried volunteering for SoGive for a couple of months.
Ultimately the arguments for AI doom overwhelmed me and put me into a defeatist mindset ('How can you out-think a god-like superintelligence?'), so I just put my head in the sand instead of contributing.
In 2023, with ChatGPT and the prominence of AI, my motivation to contribute came back. I did take several actions, but spread out over several months:
I finally learned enough PyTorch to train my first CNN and RNN.
I attended an EA hackathon for software engineers and contributed to Stampy. The contributions were minimal though: shock-horror, the coding one does as a data scientist is not the same as what software engineers do!
I applied to some AI safety roles (Epoch AI Analyst, Quantum Leap founding learning engineer, Cohere AI Data Trainer).
I joined a Mech Interp Discord and within that a reading group for Mathematics for Machine Learning.
I go into these details to illustrate a key way I differ from the prototypical EA: I am not particularly agentic! Somebody more rational would have created concrete plans and accountability systems, and explored the available options and actions more thoroughly. Despite being familiar with rationality/EA for several years, I had not absorbed the ideas enough to apply them in my life. I was a Bob who waits for opportunities to arise, and thus ends up making little progress.
The breakthrough came when I got accepted into ML4Good. I have written my thoughts on that experience, but the relevant thing is it gave me a huge boost in motivation and confidence to work on AI safety.
Preparing for EAG
I actually did not plan to attend EAG London! My next steps in AI Safety were clear (primarily upskilling by getting hands-on experience on projects) and I was unsure what I could bring to the table for other participants. However, three weeks before EAG, somebody in my ML4Good group chat asked who was going, so I figured I may as well apply and see what happens.
Given I am writing this, I was accepted! When reading the recommended EA Forum posts for EAG first-timers, I was taken aback by how practical and strategic these people were. This had a two-sided effect for me: it was intimidating and made me question how valuable I could be to other EAG participants, but it did also help me be more agentic and help me push mys...