In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses quantifying uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of the uncertainty of those predictions, and it is often difficult to know in advance when a prediction will be wrong. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances of the ML research community and the practitioners who use ML systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
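The podcast itself does not include code, but to make the idea of quantifying prediction uncertainty concrete, here is a minimal sketch of one common baseline measure: the entropy of a classifier's softmax output. This is an illustrative example, not a technique attributed to Heim or the SEI; the function names are hypothetical. A nearly uniform predictive distribution yields high entropy (the model is uncertain), while a peaked one yields low entropy (the model is confident).

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(probs):
    """Shannon entropy (in nats) of the predictive distribution.

    Higher entropy means the model spreads probability across
    classes, i.e., it is less certain about its prediction.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

# A confident prediction: one class clearly dominates.
confident = softmax([8.0, 0.5, 0.2])
# An uncertain prediction: the classes are nearly tied.
uncertain = softmax([1.0, 0.9, 1.1])

print(predictive_entropy(confident) < predictive_entropy(uncertain))
```

Entropy over softmax outputs is only a starting point; a known limitation, which motivates the research discussed in the episode, is that modern neural networks can be confidently wrong, so richer uncertainty estimates (e.g., from ensembles or Bayesian approximations) are often needed.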