The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcomes promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as the questions organizations should ask before deciding whether to adopt a new AI technology.
ML-Driven Decision Making in Realistic Cyber Exercises
A Roadmap for Creating and Using Virtual Prototyping Software
Software Architecture Patterns for Robustness
A Platform-Independent Model for DevSecOps
Using the Quantum Approximate Optimization Algorithm (QAOA) to Solve Binary-Variable Optimization Problems
Trust and AI Systems
A Dive into Deepfakes
Challenges and Metrics in Digital Engineering
The 4 Phases of the Zero Trust Journey
DevSecOps for AI Engineering
Undiscovered Vulnerabilities: Not Just for Critical Software
Explainable AI Explained
Model-Based Systems Engineering Meets DevSecOps
Incorporating Supply-Chain Risk and DevSecOps into a Cybersecurity Strategy
Software and Systems Collaboration in the Era of Smart Systems
Securing the Supply Chain for the Defense Industrial Base
Building on Ghidra: Tools for Automating Reverse Engineering and Malware Analysis
Envisioning the Future of Software Engineering
Implementing the DoD's Ethical AI Principles
Walking Fast Into the Future: Evolvable Technical Reference Frameworks for Mixed-Criticality Systems