Artificial Intelligence and You
This and all episodes at: https://aiandyou.net/.
An AI trained to render accurate decisions on important questions can be useless, even dangerous, if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers, yet critical to the successful future of AI in high-stakes applications.
Michael Hind is a Distinguished Research Staff Member in the IBM Research AI department in Yorktown Heights, New York. His current research passion is the area of Trusted AI, focusing on governance, transparency, explainability, and fairness of AI systems. He helped launch several successful open source projects, such as AI Fairness 360 and AI Explainability 360.
In part 2, we talk about the Teaching Explainable Decisions project, some of Michael’s experience with Watson, the difference between transparency and explainability, and a lot more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.