This week we are joined by Ari Morcos. Ari is a research scientist at Facebook AI Research (FAIR) in Menlo Park working on understanding the mechanisms underlying neural network computation and function, and using these insights to build machine learning systems more intelligently. He has worked on a variety of topics, including the lottery ticket hypothesis, self-supervised learning, the mechanisms underlying common regularizers, and the properties predictive of generalization, as well as methods to compare representations across networks, the role of single units in computation, and strategies to measure abstraction in neural network representations. Previously, he worked at DeepMind in London.
Ari earned his PhD working with Chris Harvey at Harvard University. For his thesis, he developed methods to understand how neuronal circuits perform the computations necessary for complex behaviour. In particular, his research focused on how parietal cortex contributes to decision-making based on evidence accumulation.
In this episode, we discuss the importance of certain layers within neural networks.
Underrated ML Twitter: https://twitter.com/underrated_ml
Ari Morcos Twitter: https://twitter.com/arimorcos
Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8
Link to the paper:
"Are All Layers Created Equal?" [paper]