Algorithms make millions of decisions about us every day. They determine our insurance premiums, whether we qualify for a mortgage, and how our performance on the job is judged.
More alarming still, data scientists also write the code that fires good teachers, drives up the cost of a college degree, and lets criminals evade detection. Their mathematical models are biased in ways that wreak deep and lasting havoc on people, especially the poor.
Cathy O’Neil explains all this and more in her book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Cathy earned a Ph.D. in mathematics from Harvard, taught at Barnard College, and worked in the private sector as a data scientist. She shares her ideas on the blog mathbabe.org and appears weekly on the Slate Money podcast.
Here are some of the things that came up in our conversation:
The shame she felt as a data scientist working for a hedge fund during the financial crisis
How most of us trust and fear math to the point where we stop asking questions
How a faulty algorithm cost a high-performing teacher her job
How value-added models of evaluation miss the mark
How a mathematical model is nothing more than an automated set of rules (see the short sketch after this list)
The fact that every mathematical model has built-in blindspots
What is hard to measure typically does not get included in an algorithm
The cost to colleges and applicants of leaving price out of college ranking algorithms
How crime prediction models can fail because of incomplete data
The big error in the findings of the A Nation at Risk report and how we still pay for it
How poverty lies at the heart of the achievement gap
What allows big data to profile people efficiently and effectively
Where we may be headed with individual insurance costs because of big data
Why we need rules to ensure fairness when it comes to health insurance algorithms
How data scientists have become de facto policy makers, and why that is a problem
The set of questions all data scientists should be asking
The fact that Facebook serves up an echo chamber of emotional content to hook us
How data is just a tool to automate a system that we, as humans, must weigh in on
Why healthy algorithms need feedback loops
Why we have a problem when we cannot improve a model or reveal it as flawed
Why we need to stop blindly trusting algorithms
Questions we should be asking to demand accountability of algorithm designers
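Two of the ideas above lend themselves to a concrete illustration: a mathematical model is just an automated set of rules, and it stays healthy only when a feedback loop checks its output against reality. The minimal Python sketch below is hypothetical; the loan rule, thresholds, and data are invented for illustration and are not from the book or the episode.

```python
# A hypothetical "model": nothing more than an automated set of rules.
# The rule and every number below are invented for illustration only.

def loan_decision(credit_score: int, income: float) -> bool:
    """Approve a loan when the applicant clears two hard-coded thresholds."""
    return credit_score >= 650 and income >= 40_000


# A healthy model needs a feedback loop: compare its decisions with what
# actually happened, so the rules can be questioned and improved.
def error_rate(decisions: list[bool], outcomes: list[bool]) -> float:
    """Fraction of decisions that disagreed with the real-world outcomes."""
    errors = sum(d != o for d, o in zip(decisions, outcomes))
    return errors / len(decisions)


if __name__ == "__main__":
    applicants = [(700, 50_000), (600, 80_000), (680, 30_000)]
    decisions = [loan_decision(score, income) for score, income in applicants]
    outcomes = [True, True, False]  # made-up ground truth: did each repay?
    print("Decisions:", decisions)
    print(f"Error rate vs. reality: {error_rate(decisions, outcomes):.0%}")
```

Without that second step, the rules simply keep running unchecked, which is the failure mode the conversation keeps returning to.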
Episode Links
@mathbabedotorg
https://mathbabe.org/
Sarah Wysocki
U.S. News & World Report college ranking system
PredPol
Rise of the Robots by Martin Ford
A Nation at Risk
The Achievement Gap
If you enjoy the podcast, please rate and review it on iTunes - your ratings make all the difference. For automatic delivery of new episodes, be sure to subscribe. As always, thanks for listening!