Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speaking to Congressional staffers about AI risk, published by Akash on December 5, 2023 on LessWrong.
In May and June of 2023, I (Akash) had about 50-70 meetings about AI risks with congressional staffers. I had been meaning to write a post reflecting on the experience and some of my takeaways, and I figured it could be a good topic for a LessWrong dialogue. I saw that hath had offered to do LW dialogues with folks, and I reached out.
In this dialogue, we discuss how I decided to chat with staffers, my initial observations in DC, some context about how Congressional offices work, what my meetings looked like, lessons I learned, and some miscellaneous takes about my experience.
Context
Hey! In your message, you mentioned a few topics that relate to your time in DC.
I figured we should start with your experience talking to congressional offices about AI risk. I'm quite interested in learning more; there don't seem to be many public resources on what that kind of outreach looks like.
How'd that start? What made you want to do that?
In March of 2023, I started working on some AI governance projects at the Center for AI Safety. One of my projects involved helping CAIS respond to a Request for Comments about AI Accountability that was released by the NTIA.
As part of that work, I started thinking a lot about what a good regulatory framework for frontier AI would look like. For instance: if I could set up a licensing regime for frontier AI systems, what would it look like? Where in the US government would it be housed? What information would I want it to assess?
I began to wonder how actual policymakers would react to these ideas. I was also curious to know more about how policymakers were thinking about AI extinction risks and catastrophic risks.
I started asking other folks in AI Governance. The vast majority had not talked to congressional staffers (at all). A few had experience talking to staffers but had not talked to them about AI risk. A lot of people told me that they thought engagement with policymakers was really important but very neglected. And of course, there are downside risks, so you don't want someone doing it poorly.
After consulting something like 10-20 AI governance folks, I asked CAIS if I could go to DC and start talking to congressional offices. The goals were to (a) raise awareness about AI risks, (b) get a better sense of how congressional offices were thinking about AI risks, (c) get a better sense of what kinds of AI-related priorities people at congressional offices had, and (d) get feedback on my NTIA request for comment ideas.
CAIS approved, and I went to DC in May-June 2023. And just to be clear, this wasn't something CAIS told me to do; it was more of an "Akash thing" that CAIS was aware was happening.
Whoa, that's really interesting. A couple random questions:
And of course, there are downside risks, so you don't want someone doing it poorly.
How does one go about doing it non-poorly? How does one learn to interact with policymakers?
Also, what's your background? Did you do policy stuff before this?
Yeah, great question. I'm not sure what the best way to learn is, but here are some things I tried:
Talk to people who have experience interacting with policymakers. Ask them what they say, what they found surprising, what mistakes they made, what downside risks they've noticed, etc.
Read books. I found Master of the Senate and Act of Congress to be especially helpful. I'm currently reading The Devil's Chessboard to better understand the CIA & intelligence agencies, and I'm finding it informative so far.
Do roleplays with policymakers you already know and ask them for blunt feedback.
Get practice in lower-stakes meetings, and use those experiences to iterate.
I hadn't done much policy stuff before this. In college, I wrote for the Harvard Poli...