The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as questions that organizations should ask before deciding whether to employ a new AI technology.
Predicting Software Assurance Using Quality and Reliability Measures
Applying Agile in the DoD: Ninth Principle
Cyber Insurance and Its Role in Mitigating Cybersecurity Risk
AADL and Dassault Aviation
Tactical Cloudlets
Agile Software Teams and How They Engage with Systems Engineering on DoD Acquisition Programs
Coding with AADL
The State of Agile
Applying Agile in the DoD: Eighth Principle
A Taxonomy of Operational Risks for Cyber Security
Agile Metrics
Four Principles for Engineering Scalable, Big Data Systems
An Appraisal of Systems Engineering: Defense v. Non-Defense
HTML5 for Mobile Apps at the Edge
Applying Agile in the DoD: Seventh Principle
AADL and Edgewater
Security and Wireless Emergency Alerts
Safety and Behavior Specification Using the Architecture Analysis and Design Language
Characterizing and Prioritizing Malicious Code
Applying Agile in the DoD: Sixth Principle