Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn.
As you can imagine, Feed AI is responsible for curating all of the content you see daily on the LinkedIn site. What's less apparent to those who don't work on this type of product is the wide variety of competing factors that must be balanced when organizing the feed. As you'll learn in our conversation, Tim calls this the holistic optimization of the feed, and we discuss some of the interesting technical and business challenges involved in pursuing it. We talk through some of the specific techniques used at LinkedIn, like multi-armed bandits and content embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale.
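To give a flavor of one of the techniques mentioned above: a multi-armed bandit balances exploring content types it knows little about against exploiting the ones that have performed best so far. The sketch below is a minimal epsilon-greedy bandit over simulated click-through rates; it is an illustration of the general idea only, not LinkedIn's actual implementation, and the arm names and rates are made up.

```python
import random

def epsilon_greedy(counts, rewards, epsilon=0.1):
    """Pick an arm: explore at random with probability epsilon,
    otherwise exploit the arm with the best average reward so far."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    averages = [r / c if c > 0 else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(averages)), key=averages.__getitem__)

def update(counts, rewards, arm, reward):
    """Record one observed reward for the chosen arm."""
    counts[arm] += 1
    rewards[arm] += reward

# Hypothetical content types with different true click-through rates.
true_ctr = [0.02, 0.05, 0.03]  # arm 1 is actually the best
counts = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]

random.seed(0)
for _ in range(10_000):
    arm = epsilon_greedy(counts, rewards)
    clicked = 1.0 if random.random() < true_ctr[arm] else 0.0
    update(counts, rewards, arm, clicked)
```

Over many impressions, the bandit concentrates most of its pulls on the arm with the highest observed average reward, while the epsilon fraction of random pulls keeps estimates fresh for the other arms.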
We’d like to send a huge thanks to LinkedIn for sponsoring today’s show! LinkedIn Engineering solves complex problems at scale to create economic opportunity for every member of the global workforce. AI and ML are integral aspects of almost every product the company builds for its members and customers. LinkedIn’s highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit https://engineering.linkedin.com/blog.
The complete show notes can be found at https://twimlai.com/talk/224.