How Should the World Regulate Artificial Intelligence?
From products like ChatGPT to resource allocation and cancer diagnosis, artificial intelligence will impact nearly every part of our lives. The potential benefits of AI are enormous, but so are the risks, including chemical and biological weapons attacks, more effective disinformation campaigns, AI-enabled cyber-attacks, and lethal autonomous weapons systems.
Policymakers have taken steps to address these risks, but industry and civil society leaders are warning that these efforts still fall short.
Last year saw a flurry of efforts to regulate AI. In October, the Biden administration issued an executive order to encourage “responsible” AI development; in November, the U.K. hosted the world’s first global AI Safety Summit to explore how best to mitigate some of the greatest risks facing humanity; and in December, European Union policymakers reached a deal imposing new transparency requirements on AI systems.
Are efforts to regulate AI working? What else needs to be done? That’s the focus of our show today.
It’s clear we are at an inflection point in AI governance, with innovation outpacing regulation. But while states face a common problem in regulating AI, their approaches differ, and prospects for global cooperation appear limited.
There is no better expert to navigate this terrain than Robert Trager, Senior Research Fellow at Oxford University’s Blavatnik School of Government, Co-Director of the Oxford Martin AI Governance Initiative, and International Governance Lead at the Centre for the Governance of AI.