There is a connection between gradient-descent-based optimizers and the dynamics of damped harmonic oscillators. What does that mean? It means we now have a better theory of optimization algorithms.
In this episode I explain how all of this works.
All the formulas I mention in the episode can be found in the post "The physics of optimization algorithms".
Enjoy the show.
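As a taste of the connection discussed in the episode, here is a minimal sketch (not taken from the post itself; the function names and hyperparameter values are illustrative assumptions) showing how gradient descent with momentum can be read as a discretization of a damped harmonic oscillator, m·x'' + c·x' + ∇f(x) = 0, where the momentum term plays the role of inertia and the learning rate sets the step of the discretization.

```python
def heavy_ball(grad_f, x0, lr=0.1, momentum=0.9, steps=500):
    """Gradient descent with momentum (Polyak's heavy ball).

    Each step is a semi-implicit Euler discretization of a damped
    oscillator: the velocity is damped by `momentum` and driven by
    the negative gradient, then the position follows the velocity.
    """
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad_f(x)  # damping + driving force
        x = x + v                          # position update
    return x

# Example: minimize f(x) = (x - 3)^2, a quadratic "potential well".
# The iterates oscillate around the minimum and spiral into it,
# exactly like an underdamped mass on a spring settling at rest.
x_min = heavy_ball(lambda x: 2 * (x - 3), x0=0.0)
```

With a large momentum coefficient the trajectory overshoots and oscillates (underdamped regime); with a small one it creeps monotonically to the minimum (overdamped regime), which is the physical picture behind tuning these hyperparameters.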