Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual data in order to deal with natural language, there too with promising results. In this episode I explain why deep learning performs the way it does, and what some of the most common causes of failure are.
Episode 40: Deep learning and image compression
Episode 39: What is L1-norm and L2-norm?
Episode 38: Collective intelligence (Part 2)
Episode 38: Collective intelligence (Part 1)
Episode 37: Predicting the weather with deep learning
Episode 36: The dangers of machine learning and medicine
Episode 35: Attacking deep learning models
Episode 34: Get ready for AI winter
Episode 33: Decentralized Machine Learning and the proof-of-train
Episode 32: I am back. I have been building fitchain
Founder Interview – Francesco Gadaleta of Fitchain
Episode 31: The End of Privacy
Episode 30: Neural networks and genetic evolution: an unfeasible approach
Episode 29: Fail your AI company in 9 steps
Episode 28: Towards Artificial General Intelligence: preliminary talk
Episode 27: Techstars accelerator and the culture of fireflies
Episode 26: Deep Learning and Alzheimer
Episode 25: How to become data scientist [RB]
Episode 24: How to handle imbalanced datasets
Episode 23: Why do ensemble methods work?