Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing The Midas Project - and our first campaign!, published by Tyler Johnston on June 13, 2024 on The Effective Altruism Forum.
Summary
The Midas Project is a new AI safety organization. We use public advocacy to incentivize stronger self-governance from the companies developing and deploying high-risk AI products.
This week, we're launching our first major campaign, targeting the AI company Cognition. Cognition is a rapidly growing startup [1] developing autonomous coding agents. Unfortunately, they've told the public virtually nothing about how, or even if, they will conduct risk evaluations to prevent misuse and other unintended outcomes. In fact, they've said virtually nothing about safety at all.
We're calling on Cognition to release an industry-standard evaluation-based safety policy. We need your help to make this campaign a success. Here are five ways you can help, sorted by level of effort:
1. Keep in the loop about our campaigns by following us on Twitter and joining our mailing list.
2. Offer feedback and suggestions by commenting on this post or by reaching out at info@themidasproject.com.
3. Share our Cognition campaign on social media, sign the petition, or engage with our campaigns directly on our action hub.
4. Donate to support our future campaigns (tax-exempt status pending).
5. Sign up to volunteer, or express interest in joining our team full-time.
Background
The risks posed by AI are, at least partially, the result of a market failure.
Tech companies are locked in an arms race that is forcing everyone (even the most safety-concerned) to move fast and cut corners. Meanwhile, consumers broadly agree that AI risks are serious and that the industry should move slower. However, this belief is disconnected from their everyday experience with AI products, and there isn't a clear Schelling point allowing consumers to express their preference via the market.
Usually, the answer to a market failure like this is regulation. When it comes to AI safety, this is certainly the solution I find most promising. But such regulation isn't happening quickly enough. And even if governments were moving faster, AI safety as a field is pre-paradigmatic. Nobody knows exactly what guardrails will be most useful, and new innovations are needed.
So companies are largely being left to voluntarily implement safety measures. In an ideal world, AI companies would be in a race to the top, competing against each other to earn the trust of the public through comprehensive voluntary safety measures while minimally stifling innovation and the benefits of near-term applications. But the incentives aren't clearly pointing in that direction - at least not yet.
However, EA-supported organizations have succeeded at shifting corporate incentives before. Take the case of cage-free campaigns.
By engaging in advocacy that threatens to expose specific food companies for falling short of customers' basic expectations regarding animal welfare, groups like The Humane League and Mercy For Animals have been able to create a race to the top for chicken welfare, leading virtually all US food companies to commit to going cage-free. [2]
Creating this change was as simple as making the connection in the consumer's mind between their pre-existing disapproval of inhumane battery cages and the eggs being served at their local fast food chain.
I believe this sort of public advocacy can be extremely effective. In fact, in the case of previous emerging technologies, I would go so far as to say it's been too effective. Public advocacy played a major role in preventing the widespread adoption of GM crops and nuclear power in the twentieth century, despite huge financial incentives to develop these technologies. [3]
We haven't seen this sort of activism leveraged to demand meaningful...