Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Essay competition on the Automation of Wisdom and Philosophy - $25k in prizes, published by Owen Cotton-Barratt on April 16, 2024 on The Effective Altruism Forum.
With AI Impacts, we're pleased to announce an essay competition on the automation of wisdom and philosophy. Submissions are due by July 14th. The first prize is $10,000, and there is a total of $25,000 in prizes available.
The full announcement text is reproduced here:
Background
AI is likely to automate more and more categories of thinking with time.
By default, the direction the world goes in will be a result of the choices people make, and these choices will be informed by the best thinking available to them. People systematically make better, wiser choices when they understand more about issues, and when they are advised by deep and wise thinking.
Advanced AI will reshape the world, and create many new situations with potentially high-stakes decisions for people to make. To what degree people will understand these situations well enough to make wise choices remains to be seen. To some extent this will depend on how much good human thinking is devoted to these questions; but at some point it will probably depend crucially on how advanced, reliable, and widespread the automation of high-quality thinking about novel situations is.
We believe[1] that this area could be a crucial target for differential technological development, but is at present poorly understood and receives little attention. This competition aims to encourage and to highlight good thinking on the topics of what would be needed for such automation, and how it might (or might not) arise in the world.
For more information about what we have in mind, see some of the suggested essay prompts or the FAQ below.
Scope
To enter, please submit a link to a piece of writing not published before 2024. The piece may be published or unpublished; however, if it is selected for a prize, we will require publication (at least in pre-print form, optionally on the AI Impacts website) in order to pay out the prize.
There are no constraints on the format - we will accept essays, blog posts, papers[2], websites, or other written artefacts[3] of any length. However, we primarily have in mind essays of 500-5,000 words. AI assistance is welcome but its nature and extent should be disclosed. As part of your submission you will be asked to provide a summary of 100-200 words.
Your writing should aim to make progress on a question related to the automation of wisdom and philosophy. A non-exhaustive set of questions of interest, in four broad categories:
Automation of wisdom
What is the nature of the sort of good thinking we want to be able to automate? How can we distinguish the type of thinking it's important to automate well and early from types of thinking where that's less important?
What are the key features or components of this good thinking?
How do we come to recognise new ones?
What are traps in thinking that is smart but not wise?
How can this be identified in automatable ways?
How could we build metrics for any of these things?
Automation of philosophy
What types of philosophy are language models well-equipped to produce, and what do they struggle with?
What would it look like to develop a "science of philosophy", testing models' abilities to think through new questions, with ground truth held back, and seeing empirically what is effective?
What have the trend lines for automating philosophy looked like, compared to other tasks performed by language models?
What types of training/finetuning/prompting/scaffolding help with the automation of wisdom/philosophy?
How much do they help, especially compared to how much they help other types of reasoning?
Thinking ahead
Considering the research agenda that will (presumably) eventually be needed to automate high-quality wisdom...