This is the full version of my special livestreamed event on Artificial General Intelligence (AGI), held on July 18 and 19, 2024.
You can listen to the edited Livestream here https://soundcloud.com/gleonhard/futurists-david-wood-and-gerd-leonhard-discuss-artificial-general-intelligence-livestream-editmp3?utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing&si=01bb0bd3eee445b7b6dbb8ffed22f381
Watch the full version on YouTube here https://www.youtube.com/watch?v=W3dRQ7QZ_wc
Watch the edited (only Q&A) version on YouTube here https://www.youtube.com/watch?v=yYyTIky2MLc&t=0s
I outline my argument that while IA (Intelligent Assistance) and some forms of narrow AI may well be quite beneficial to humanity, the idea of building AGIs, i.e. 'generally intelligent digital entities' (as set forth by Sam Altman / #openai and others), represents an existential risk. In my view, such an undertaking should not be pursued or self-governed by private enterprises, multinational corporations, or venture-capital-funded startups.
I believe we need an AGI Non-Proliferation Agreement. I outline the difference between IA/AI and AGI or ASI (artificial superintelligence), why it matters, and how we could go about establishing such an agreement.
IA/AI: yes, but with clear rules, standards, and guardrails. AGI: no, unless we're all on the same page.
Who will be Mission Control for humanity?