In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Smith, a senior research scientist in human-machine interaction, and Jonathan Spring, a senior vulnerability researcher, discuss the hidden sources of bias in artificial intelligence (AI) systems and how systems developers can raise their awareness of bias, mitigate consequences, and reduce risks.
NIST Catalog of Security and Privacy Controls, Including Insider Threat
Cisco's Adoption of CERT Secure Coding Standards
How to Become a Cyber Warrior
Considering Security and Privacy in the Move to Electronic Health Records
Measuring Operational Resilience
Why Organizations Need a Secure Domain Name System
Controls for Monitoring the Security of Cloud Services
Building a Malware Analysis Capability
Using the Smart Grid Maturity Model (SGMM)
Integrated, Enterprise-Wide Risk Management: NIST 800-39 and CERT-RMM
Conducting Cyber Exercises at the National Level
Indicators and Controls for Mitigating Insider Threat
How Resilient Is My Organization?
Public-Private Partnerships: Essential for National Cyber Security
Software Assurance: A Master's Level Curriculum
How to Develop More Secure Software - Practices from Thirty Organizations
Mobile Device Security: Threats, Risks, and Actions to Take
Establishing a National Computer Security Incident Response Team (CSIRT)
Securing Industrial Control Systems
The Power of Fuzz Testing to Reduce Security Vulnerabilities