In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Smith, a senior research scientist in human-machine interaction, and Jonathan Spring, a senior vulnerability researcher, discuss the hidden sources of bias in artificial intelligence (AI) systems and how systems developers can raise their awareness of bias, mitigate consequences, and reduce risks.
Deep Learning in Depth: IARPA's Functional Map of the World Challenge
Deep Learning in Depth: Deep Learning versus Machine Learning
How to Be a Network Traffic Analyst
Workplace Violence and Insider Threat
Why Does Software Cost So Much?
Cybersecurity Engineering & Software Assurance: Opportunities & Risks
Software Sustainment and Product Lines
Best Practices in Cyber Intelligence
Deep Learning in Depth: The Good, the Bad, and the Future
The Evolving Role of the Chief Risk Officer
Obsidian: A Safer Blockchain Programming Language
Agile DevOps
Kicking Butt in Computer Science: Women in Computing at Carnegie Mellon University
Is Software Spoiling Us? Technical Innovations in the Department of Defense
Is Software Spoiling Us? Innovations in Daily Life from Software
How Risk Management Fits into Agile & DevOps in Government
5 Best Practices for Preventing and Responding to Insider Threat
Pharos Binary Static Analysis: An Update
Positive Incentives for Reducing Insider Threat
Mission-Practical Biometrics