In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses the quantification of uncertainty in machine learning (ML) systems. ML systems can make wrong predictions and can give inaccurate estimates of how uncertain those predictions are, and it is often hard to anticipate when a prediction will be wrong. Heim also discusses new techniques to quantify uncertainty, identify the causes of uncertainty, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances of the ML research community and the practitioners who use these systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
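To make the idea of quantifying uncertainty concrete, here is a minimal sketch of one common, generic approach (not the specific techniques discussed in the episode): average the softmax outputs of an ensemble of models and use the entropy of the averaged prediction as an uncertainty score. All names and numbers below are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(member_logits):
    """Entropy of the ensemble-averaged prediction.

    member_logits: array of shape (n_members, n_classes), one row of
    logits per ensemble member for a single input. Higher entropy
    means the ensemble is less certain about the prediction.
    """
    mean_probs = softmax(member_logits).mean(axis=0)
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum()

rng = np.random.default_rng(0)

# Members agree on one class -> sharp averaged prediction, low entropy.
agree = np.tile([4.0, 0.5, 0.2], (5, 1)) + 0.1 * rng.standard_normal((5, 3))

# Members disagree -> diffuse averaged prediction, high entropy.
disagree = 3.0 * rng.standard_normal((5, 3))

print("agreeing ensemble entropy:   ", predictive_entropy(agree))
print("disagreeing ensemble entropy:", predictive_entropy(disagree))
```

An ML system could use such a score to flag low-confidence predictions for human review, which is one practical motivation for the uncertainty-quantification work described above.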
Supply Chain Risk Management: Managing Third Party and External Dependency Risk
Introduction to the Mission Thread Workshop
Applying Agile in the DoD: Eleventh Principle
A Workshop on Measuring What Matters
Applying Agile in the DoD: Tenth Principle
Predicting Software Assurance Using Quality and Reliability Measures
Applying Agile in the DoD: Ninth Principle
Cyber Insurance and Its Role in Mitigating Cybersecurity Risk
AADL and Dassault Aviation
Tactical Cloudlets
Agile Software Teams and How They Engage with Systems Engineering on DoD Acquisition Programs
Coding with AADL
The State of Agile
Applying Agile in the DoD: Eighth Principle
A Taxonomy of Operational Risks for Cyber Security
Agile Metrics
Four Principles for Engineering Scalable, Big Data Systems
An Appraisal of Systems Engineering: Defense v. Non-Defense
HTML5 for Mobile Apps at the Edge
Applying Agile in the DoD: Seventh Principle