The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as questions that organizations should ask before determining whether they want to employ a new AI technology.
Using Quality Attributes to Improve Acquisition
Best Practices for Trust in the Wireless Emergency Alerts Service
Three Variations on the V Model for System and Software Testing
Adapting the PSP to Incorporate Verified Design by Contract
Comparing IT Risk Assessment and Analysis Methods
AADL and Aerospace
Assuring Open Source Software
Security Pattern Assurance through Roundtrip Engineering
The Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2)
Applying Agile in the DoD: Fifth Principle
Software Assurance Cases
Raising the Bar - Mainstreaming CERT C Secure Coding Rules
AADL and Télécom Paris Tech
From Process to Performance-Based Improvement
An Approach to Managing the Software Engineering Challenges of Big Data
Using the Cyber Resilience Review to Help Critical Infrastructures Better Manage Operational Resilience
Situational Awareness Mashups
Applying Agile in the DoD: Fourth Principle
Architecting Systems of the Future
Acquisition Archetypes