In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses the quantification of uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of how uncertain those predictions are, and it is often difficult to anticipate when they will be wrong. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances of the ML research community and the practitioners who use these systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
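The conversation stays at a conceptual level, but to make "quantifying uncertainty" concrete, below is a minimal sketch of one widely used technique, Monte Carlo dropout, written in PyTorch. The model, its architecture, and all names here are illustrative assumptions for this sketch, not a method described by Heim in the episode.

```python
# A minimal sketch of Monte Carlo dropout, one common way to quantify
# predictive uncertainty in a neural classifier. All names and the
# architecture are hypothetical, chosen only for illustration.
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    def __init__(self, n_features: int = 20, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),  # kept active at inference for MC dropout
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run n_samples stochastic forward passes with dropout enabled.

    Returns the mean class probabilities and the predictive entropy,
    a simple per-input scalar measure of the model's uncertainty.
    """
    model.train()  # train mode keeps dropout layers active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(4, 20)  # four hypothetical inputs
    mean_probs, entropy = mc_dropout_predict(model, x)
    print("mean class probabilities:", mean_probs)
    print("predictive entropy (higher = less certain):", entropy)
```

Averaging over many stochastic forward passes approximates a distribution over predictions; the entropy of the averaged probabilities then gives a single number a practitioner can threshold, for example to flag inputs on which the model's prediction should not be trusted.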
AADL and Edgewater
Security and Wireless Emergency Alerts
Safety and Behavior Specification Using the Architecture Analysis and Design Language
Characterizing and Prioritizing Malicious Code
Applying Agile in the DoD: Sixth Principle
Using Quality Attributes to Improve Acquisition
Best Practices for Trust in the Wireless Emergency Alerts Service
Three Variations on the V Model for System and Software Testing
Adapting the PSP to Incorporate Verified Design by Contract
Comparing IT Risk Assessment and Analysis Methods
AADL and Aerospace
Assuring Open Source Software
Security Pattern Assurance through Roundtrip Engineering
The Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2)
Applying Agile in the DoD: Fifth Principle
Software Assurance Cases
Raising the Bar - Mainstreaming CERT C Secure Coding Rules
AADL and Télécom Paris Tech
From Process to Performance-Based Improvement
An Approach to Managing the Software Engineering Challenges of Big Data