The Nonlinear Library: EA Forum
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing StakeOut.AI, published by Harry Luk on February 17, 2024 on The Effective Altruism Forum.

We are excited to announce the launch of a new advocacy nonprofit, StakeOut.AI.

The mission statement of our nonprofit

StakeOut.AI fights to safeguard humanity from AI-driven risks. We use evidence-based outreach to inform people of the threats that advanced AI poses to their economic livelihoods and personal safety. Our mission is to create a united front for humanity, driving national and international coordination on robust solutions to AI-driven disempowerment.

We pursue this mission via partnerships (e.g., with other nonprofits, content creators, and AI-threatened professional associations) and media-based awareness campaigns (e.g., traditional media, social media, and webinars).

Our modus operandi is to tell the stories of the AI industry's powerless victims, such as:
- people worldwide, especially women and girls, who have been victimized by nonconsensual deepfake pornography of their likenesses
- unemployed artists whose copyrighted hard work was essentially stolen by AI companies without their consent, in order to train their economic AI replacements
- parents who fear that their children will be economically replaced, and perhaps even replaced as a species, by "highly autonomous systems that outperform humans at most economically valuable work" (OpenAI's mission)

We connect these victims' stories to powerful people who can protect them. Who are the powerful people? The media, the governments, and most importantly: the grassroots public.

StakeOut.AI's motto

The Right AI Laws, to Right Our Future.

We believe AI has great potential to help humanity. But like all other industries that put the public at risk, AI must be regulated.
We must unite, as humans have done historically, to work towards ensuring that AI helps humanity flourish rather than causing our devastation. By uniting globally with a single voice to express our concerns, we can push governments to pass the right AI laws that can right our future.

However, StakeOut.AI's Safer AI Global Grassroots United Front movement isn't for everybody.
- It's not for those who don't mind being enslaved by robot overlords.
- It's not for those whose first instincts are to avoid making waves, rather than to help the powerless victims tell their stories to the people who can protect them.
- It's not for those who say they 'miss the days' when only intellectual elites talked about AI safety.
- It's not for those who insist, even after years of trying, that attempting to solve technical AI alignment while continuing to advance AI capabilities is the only way to prevent the threat of AI-driven human extinction.
- It's not for those who think the public is too stupid to handle the truth about AI. No matter how much certain groups say they are trying to 'shield' regular folks for their 'own good,' the regular folks are learning about AI one way or another.
- It's also not for those who are indifferent to the AI industry's role in invading privacy, exploiting victims, and replacing humans.

So to help save your time, please stop reading this post if any of the above statements reflect your views. But if you do want transparency and accountability from the AI industry, and you desire a moral and safe AI environment for your family and for future generations, then the United Front may be for you.

By prioritizing high-impact projects over fundraising in our early months, we at StakeOut.AI were able to achieve five publicly known milestones for AI safety:
- researched a 'scorecard' evaluating various AI governance proposals, which was presented by Professor Max Tegmark at the first-ever international AI Safety Summit in the U.K. (as part of The Future of Life Institute's governance proposal for the Summit),
- raised awareness, such as by holding a ...