The secret behind deep learning is not really a secret: it is function optimisation. What a neural network essentially does is optimise a function. In this episode I illustrate a number of optimisation methods and explain which one is the best and why.
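As a taste of what the episode covers, here is a minimal sketch of gradient descent, the optimisation method at the core of deep learning. The function, starting point, and learning rate below are illustrative choices, not taken from the episode:

```python
# Minimal sketch: gradient descent minimising f(x) = (x - 3)^2,
# whose gradient is 2 * (x - 3). The learning rate and step count
# are arbitrary illustrative values.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to approach a minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges towards the minimum at x = 3
```

Training a neural network follows the same loop, except the function being minimised is a loss over millions of parameters, and the gradient is computed by backpropagation.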
Polars: the fastest dataframe crate in Rust - with Ritchie Vink (Ep. 146)
Apache Arrow, Ballista and Big Data in Rust with Andy Grove (Ep. 145)
Pandas vs Rust (Ep. 144)
Concurrent is not parallel - Part 2 (Ep. 143)
Concurrent is not parallel - Part 1 (Ep. 142)
Backend technologies for machine learning in production (Ep. 141)
You are the product (Ep. 140)
How to reinvent banking and finance with data and technology (Ep. 139)
What's up with WhatsApp? (Ep. 138)
Is Rust flexible enough for a flexible data model? (Ep. 137)
Is Apple M1 good for machine learning? (Ep. 136)
Rust and deep learning with Daniel McKenna (Ep. 135)
Scaling machine learning with clusters and GPUs (Ep. 134)
What is data ethics? (Ep. 133)
A Standard for the Python Array API (Ep. 132)
What happens to data transfer after Schrems II? (Ep. 131)
Test-First Machine Learning [RB] (Ep. 130)
Similarity in Machine Learning (Ep. 129)
Distill data and train faster, better, cheaper (Ep. 128)
Machine Learning in Rust: Amadeus with Alec Mocatta [RB] (Ep. 127)