Extracting knowledge from large datasets with many variables is always tricky. Dimensionality reduction makes high-dimensional data easier to analyze while preserving most of the information hidden behind its complexity. Here are some methods you should try before any further analysis (Part 1).
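As a taste of the idea, here is a minimal sketch of one common dimensionality-reduction method, principal component analysis (PCA), using scikit-learn. The data, the 95% variance threshold, and the use of PCA itself are illustrative assumptions, not necessarily the methods discussed in the episode.

```python
# Illustrative PCA sketch: compress 50 correlated variables into the few
# components that carry most of the variance. Synthetic data, assumed setup.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 samples, 50 variables driven by 3 latent factors plus small noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 50))

# Keep just enough components to retain 95% of the total variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```

Because the data is generated from a handful of latent factors, PCA can discard most of the 50 columns while keeping nearly all of the information, which is exactly the trade-off dimensionality reduction aims for.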
Episode 28: Towards Artificial General Intelligence: preliminary talk
Episode 27: Techstars accelerator and the culture of fireflies
Episode 26: Deep Learning and Alzheimer
Episode 25: How to become data scientist [RB]
Episode 24: How to handle imbalanced datasets
Episode 23: Why do ensemble methods work?
Episode 22: Parallelising and distributing Deep Learning
Episode 21: Additional optimisation strategies for deep learning
Episode 20: How to master optimisation in deep learning
Episode 19: How to completely change your data analytics strategy with deep learning
Episode 18: Machines that learn like humans
Episode 17: Protecting privacy and confidentiality in data and communications
Episode 16: 2017 Predictions in Data Science
Episode 15: Statistical analysis of phenomena that smell like chaos
Episode 14: The minimum required by a data scientist
Episode 13: Data Science and Fraud Detection at iZettle
Episode 11: Representative Subsets For Big Data Learning
Episode 10: History and applications of Deep Learning
Episode 9: Markov Chain Monte Carlo with full conditionals