Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda, published by Cameron Berg on December 18, 2023 on LessWrong.
Many thanks to Samuel Hammond, Cate Hall, Beren Millidge, Steve Byrnes, Lucius Bushnaq, Joar Skalse, Kyle Gracey, Gunnar Zarncke, Ross Nordby, David Lambert, Simeon Campos, Bogdan Ionut-Cirstea, Ryan Kidd, Eric Ho, and Ashwin Acharya for critical comments and suggestions on earlier drafts of this agenda, as well as Philip Gubbins, Diogo de Lucena, Rob Luke, and Mason Seale from AE Studio for their support and feedback throughout.
TL;DR
Our initial theory of change at AE Studio was a 'neglected approach' that involved rerouting profits from our consulting business towards the development of brain-computer interface (BCI) technology to dramatically enhance human agency, better enabling us to do things like solve alignment. Now, given shortening timelines, we're updating our theory of change to scale up our technical alignment efforts.
With a solid technical foundation in BCI, neuroscience, and machine learning, we are optimistic that we'll be able to contribute meaningfully to AI safety. We are particularly keen on pursuing neglected technical alignment agendas that seem most creative, promising, and plausible. We are currently onboarding promising researchers and kickstarting our internal alignment team.
As we forge ahead, we're actively soliciting expert insights from the broader alignment community and are in search of data scientists and alignment researchers who resonate with our vision of enhancing human agency and helping to solve alignment.
About us
Hi! We are AE Studio, a bootstrapped software and data science consulting business. Our mission has always been to reroute our profits directly into building technologies that promise to dramatically enhance human agency, like Brain-Computer Interfaces (BCI). We also donate 5% of our revenue directly to effective charities. Today, we are ~150 programmers, product designers, and ML engineers; we are profitable and growing. We also have a team of top neuroscientists and data scientists with significant experience developing ML solutions for leading BCI companies, and we are now leveraging that technical experience to assemble an alignment team dedicated to exploring neglected alignment research directions that draw on our expertise in BCI, data science, and machine learning.
As we become more public with our AI alignment efforts, we thought it would be helpful to share our strategy and vision: how we at AE prioritize which problems to work on, and how we make the best use of our comparative advantage.
Why and how we think we can help solve alignment
We can probably do with alignment what we already did with BCI
You might think that AE has no business getting involved in alignment - and we agree.
AE's initial theory of change sought to realize a highly "neglected approach" to doing good in the world: bootstrap a profitable software consultancy, incubate our own startups on the side, sell them, and reinvest the profits in BCI in order to dramatically increase human agency, mitigate BCI-related s-risks, and make humans sufficiently intelligent, wise, and capable to do things like solve alignment. While the vision of BCI-mediated cognitive enhancement for doing good in the world is increasingly common today, it was viewed as highly idiosyncratic when we first set out in 2016.
Initially, many said that AE had no business getting involved in the BCI space (and we also agreed at the time) - but after hiring leading experts in the field and taking increasingly ambitious A/B-tested steps in the right direction, we emerged as a respected player in the space (see here, here, here, and here for some examples).
Now, ...