325. Privacy Meets Security: Keeping Our Data Safe feat. Daniel J. Solove
When it comes to data privacy laws these days, it’s still sort of like the Wild West out there. There’s no federal agency holding software makers responsible for security holes, consumers don’t understand how much risk there is, and the laws that are on the books are inadequate.
Daniel J. Solove is a leading authority on privacy law and a professor at the George Washington University Law School. He has written numerous books and articles on data security and privacy law, including his most recent book, *Breached!: Why Data Security Law Fails and How to Improve It*, and his textbook, *Information Privacy Law*.
Daniel and Greg discuss why current privacy laws are counterproductive, what a useful federal law regulating data security could look like, and why being forced to change your password regularly is actually bad advice.
*unSILOed Podcast is produced by University FM.*
The need for a human element when it comes to security
38:32: Security does need to think about the human element. And that's a different kind of thinking than what might be best for security. And that's what makes security so tricky. There are good technologies and weaker technologies for security. I think two-factor authentication is good. There are a lot of things that people can do that will make for very effective security. But there's also this human dimension, and that's a dimension that a lot of security experts are not trained in. They're just not experts in human psychology, human cognitive abilities, and what humans are likely or unlikely to do. But we need that expertise involved if we're going to create the right security framework for a company.
Is the law focusing on data breaches so much that it's making them worse?
13:25: The law, unfortunately, has focused way too obsessively on breach and failed to focus on things that could actually address this problem in a much more effective way.
The role that companies play in data breaches
32:51: If we had companies devise ways that they authenticated themselves to people, then we would be a lot safer, and fewer people would be falling for hacker tricks. And if a company is doing some practice that is miseducating people, it should be penalized.
Do we make exceptions for technology when it comes to security?
17:40: There's a bit of exceptionalism when it comes to technology, where we accept the risks and dangers of technology and don't hold the makers of it accountable in ways we would never do with any other product.