Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AISC 2024 - Project Summaries, published by NickyP on November 29, 2023 on LessWrong.
Apply to AI Safety Camp 2024 by 1st December 2023. All mistakes here are my own.
Below are some summaries for each project proposal, listed in order of how they appear on the website. These are edited by me, and most have not yet been reviewed by the project leads. I think having a list like this makes it easier for people to navigate all the different projects, and the original post/website did not have one, so I made this.
If a project catches your interest, click on the title to read more about it.
Note that the summaries here are lossy. The desired skills as summarised here may be misrepresented; if you are interested, check the original project for more details. In particular, many of the "desired skills" lists are written such that having only a few of the skills would be helpful, but this isn't stated consistently.
List of AISC Projects
To not build uncontrollable AI
1. Towards realistic ODDs for foundation model based AI offerings
Project Lead: Igor Krawczuk
Goal: Current alignment methods applied to language models are akin to "blacklisting" bad behaviours.
An Operational Design Domain (ODD) is instead akin to more exact "whitelisting" design principles: specifying what the system is designed to do and not allowing deviations from it. The project wants to build a proof of concept and show that this approach is feasible, economical and effective.
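The blacklisting/whitelisting contrast can be illustrated with a toy request filter. This is only a sketch of the general idea, not anything from the project; all names and topics below are hypothetical:

```python
# Toy contrast between blacklist-style and whitelist (ODD-style) filtering.
# All topic names here are made up for illustration.

BLOCKED_TOPICS = {"weapons", "malware"}              # blacklist: enumerate the bad
ALLOWED_TOPICS = {"billing", "shipping", "returns"}  # whitelist: enumerate the ODD

def blacklist_allows(topic: str) -> bool:
    # Blacklisting permits anything not explicitly forbidden,
    # so novel harmful requests slip through by default.
    return topic not in BLOCKED_TOPICS

def whitelist_allows(topic: str) -> bool:
    # An ODD-style whitelist permits only topics inside the
    # operational design domain; everything else is refused.
    return topic in ALLOWED_TOPICS

# A novel, unanticipated topic:
print(blacklist_allows("phishing"))  # True  -- the blacklist misses it
print(whitelist_allows("phishing"))  # False -- the whitelist refuses by default
```

The point is the default: a blacklist fails open on anything unanticipated, while a whitelisted ODD fails closed.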
Team (Looking for 4-6 people):
"Spec Researcher": Draft the spec for guidelines, and publish a request for comments. Should have experience in safety settings
"Mining Researcher": Look for use cases, and draft the "slicing" of the ODD.
"User Access Researcher": Write drafts on feasibility of KYC and user access levels.
"Lit Review Researcher(s)": Reading recent relevant literature on high-assurance methods for ML.
"Proof of Concept Researcher": build a proof of concept. Should have knowledge of OpenAI and interfacing with/architecting APIs.
2. Luddite Pro: information for the refined luddite
Project Lead: Brian Penny
Goal: Develop a news website filled with stories, information, and resources related to the development of artificial intelligence in society. Cover specific stories related to the industry and of widespread interest (e.g. Adobe's Firefly payouts, the start of Midjourney, the proliferation of undress and deepfake apps). Provide valuable resources (e.g. a list of AI experts, book lists, and pre-made letters/comments to the USCO and Congress). The goal is to spread via social media and rank in search engines while sparking group actions to ensure a narrative of ethical and safe AI is prominent in everybody's eyes.
Desired Skills (any of the below):
Art, design, and photography - Develop visual content to use as header images for every story. Any visual design skills are very much needed.
Journalism - A journalistic or research background, capable of interviewing subject-matter experts and writing long-form stories about AI companies.
Technical Writing - Write tutorials for technical tools like Glaze and Nightshade. Requires experience in technical writing and familiarity with these applications.
Wordpress/Web Development - Refine pages to be more user-friendly as well as help setting up templates for people to fill out for calls to action. Currently, the site is running a default WordPress template.
Marketing/PR - The website is filled with content, but it requires a lot of marketing and PR efforts to reach the target audience. If you have any experience working in an agency or in-house marketing/comms, we would love to hear from you.
3. Lawyers (and coders) for restricting AI data laundering
Project Lead: Remmelt Ellen
Goal: Generative AI relies on laundering large amounts of data. Legal injunctions on companies laundering copyrighted data puts their training and deploymen...