Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ilya Sutskever created a new AGI startup, published by harfe on June 19, 2024 on LessWrong.
[A copy of the full text of the announcement at ssi.inc; not an endorsement.]
Safe Superintelligence Inc.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It's called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else.
If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org