In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses the quantification of uncertainty in machine learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of the uncertainty of those predictions, and it is often difficult to anticipate when they will be wrong. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances of the ML research community and the practitioners who use ML systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
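As a minimal illustration of what "quantifying uncertainty" can mean for an ML classifier (a generic sketch, not a technique from the episode), the entropy of a model's predicted class distribution is one common uncertainty measure: a confident prediction concentrates probability on one class and has low entropy, while an uncertain prediction spreads probability across classes and has high entropy.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def predictive_entropy(probs):
    # Shannon entropy (in nats) of the predicted class distribution.
    # Higher entropy indicates a less certain prediction.
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

# Hypothetical logits for a confident vs. an uncertain 3-class prediction.
confident = softmax(np.array([8.0, 0.5, 0.2]))
uncertain = softmax(np.array([1.1, 1.0, 0.9]))

print(predictive_entropy(confident))  # near 0: probability mass on one class
print(predictive_entropy(uncertain))  # near log(3): mass spread over classes
```

Note that this kind of softmax-based confidence is exactly what can be miscalibrated in practice, which is why the research Heim describes goes further than a single entropy score.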