Extracting knowledge from large datasets with many variables is always tricky. Dimensionality reduction makes high-dimensional data easier to analyze while preserving most of the information hidden behind its complexity. Here are some methods you should try before further analysis (Part 1).
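The specific methods are covered in the full post; as a minimal sketch of the idea, principal component analysis (PCA) is one common starting point. The example below (assuming scikit-learn is available; the data is synthetic, generated for illustration) projects 10 correlated features onto 3 components while retaining almost all of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic data: 200 samples, 10 features that are noisy mixtures of
# only 3 latent factors, so most variance lives in a 3-D subspace.
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(200, 10))

# Reduce 10 dimensions to 3 principal components.
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # (200, 3)
print(pca.explained_variance_ratio_.sum())  # near 1.0 for this data
```

In practice, inspecting `explained_variance_ratio_` is how you decide how many components to keep: the downstream analysis runs on `X_reduced` instead of the original high-dimensional matrix.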
Embedded Machine Learning: Part 4 - Machine Learning Compilers (Ep. 185)
Embedded Machine Learning: Part 3 - Network Quantization (Ep. 184)
Embedded Machine Learning: Part 2 (Ep. 183)
Embedded Machine Learning: Part 1 (Ep. 182)
History of Data Science (Ep. 181)
Capturing Data at the Edge (Ep. 180)
[RB] Composable Artificial Intelligence (Ep. 179)
What is a data mesh and why it is relevant (Ep. 178)
Environmentally friendly AI (Ep. 177)
Do you fear AI? Why? (Ep. 176)
Composable models and artificial general intelligence (Ep. 175)
Ethics and explainability in AI with Erika Agostinelli from IBM (Ep. 174)
Is neural hash by Apple violating our privacy? (Ep. 173)
Fighting Climate Change as a Technologist (Ep. 172)
AI in the Enterprise with IBM Global AI Strategist Mara Pometti (Ep. 171)
Speaking about data with Mikkel Settnes from Dreamdata.io (Ep. 170)
Send compute to data with POSH data-aware shell (Ep. 169)
How are organisations doing with data and AI? (Ep. 168)
Don't fight! Cooperate. Generative Teaching Networks (Ep. 167)
CSV sucks. Here is why. (Ep. 166)