Large Language Model (LLM) Talk

https://anchor.fm/s/ffe783a8/podcast/rss
7 Followers · 68 Episodes
AI Explained breaks down the world of AI in just 10 minutes. Get quick, clear insights into AI concepts and innovations, without any complicated math or jargon. Perfect for your commute or spare time, this podcast makes understanding AI easy, engaging, and fun—whether you're a beginner or tech enthusiast.

Episode List

Context Engineering

Jan 21st, 2026 5:46 AM

Context engineering is the system-level discipline of architecting the dynamic information environment for AI models. Unlike prompt engineering, which focuses on phrasing specific instructions, context engineering programmatically assembles the model's "working memory" using retrieved data, tool outputs, and conversation history. It employs strategies like selection, compression, and ordering to manage token limits and prevent "context rot." By orchestrating how information is filtered and presented at runtime, context engineering ensures LLMs remain grounded and reliable for complex, long-horizon tasks, effectively serving as the operating system for agentic AI.
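The selection, compression, and ordering strategies mentioned above can be sketched in a few lines. This is an illustrative assumption, not a specific framework's API: `assemble_context`, `token_count`, and `summarize` are hypothetical helpers standing in for a real tokenizer and summarizer.

```python
# Hypothetical sketch of a context-assembly step: select the most relevant
# chunks under a token budget, compress the overflow, and order the result.
# All names here are illustrative assumptions, not a real library's API.

def token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def summarize(text: str, max_tokens: int) -> str:
    # Placeholder compression: truncate to the remaining budget.
    return " ".join(text.split()[:max_tokens])

def assemble_context(chunks, scores, budget: int) -> str:
    """Select highest-scoring chunks, compress the last one to fit the
    budget, and order them so the best chunk sits nearest the query."""
    ranked = sorted(zip(scores, chunks), reverse=True)
    picked, used = [], 0
    for _score, chunk in ranked:
        cost = token_count(chunk)
        if used + cost <= budget:
            picked.append(chunk)
            used += cost
        elif budget - used > 0:
            picked.append(summarize(chunk, budget - used))
            break
    # Reverse so the most relevant text appears last, closest to the prompt.
    return "\n\n".join(reversed(picked))
```

For example, with a 7-token budget and three scored chunks, the lowest-scoring chunk is truncated rather than dropped, and the highest-scoring one lands at the end of the assembled context.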

Manus AI

Jan 19th, 2026 7:57 PM

Manus AI is a general-purpose autonomous agent designed to function as a digital worker rather than a passive chatbot. Developed by Monica and acquired by Meta, it utilizes a Planner-Executor architecture to orchestrate foundation models like Claude and Qwen within cloud-based sandboxes. Manus excels at complex, asynchronous tasks—including app deployment, massive parallel research, and data analysis—by autonomously planning workflows and executing actions via a virtual file system and browser. Its unique Context Engineering and multi-agent approach enable it to manage long-horizon tasks efficiently without constant human oversight.

Kimi K2

Jul 22nd, 2025 6:42 AM

Kimi K2, developed by Moonshot AI, is an open agentic intelligence model built on a Mixture-of-Experts (MoE) architecture. It features 1 trillion total parameters, with 32 billion active during inference. Trained on 15.5 trillion tokens using the stable MuonClip optimizer, Kimi K2 is optimized for advanced reasoning, coding, and tool use. It offers strong performance and significantly lower pricing than many competitors, making cutting-edge AI accessible and fostering innovation.
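The gap between 1 trillion total and 32 billion active parameters comes from Mixture-of-Experts routing: a small router picks a few experts per token, so only their parameters run. The toy sketch below (not Kimi K2's actual code; the expert and router functions are stand-in assumptions) shows top-k routing with a softmax over the selected experts.

```python
# Minimal Mixture-of-Experts routing sketch (illustrative only): the router
# scores all experts but only the top-k run per token, which is why active
# parameters are a small fraction of total parameters.
import math

NUM_EXPERTS, TOP_K = 8, 2

def expert(idx: int, x: float) -> float:
    # Stand-in expert: a fixed per-expert transformation of the input.
    return x * (1.0 + idx * 0.1)

def router_logits(x: float):
    # Stand-in router: deterministic pseudo-scores derived from the input.
    return [math.sin(x * (i + 1)) for i in range(NUM_EXPERTS)]

def moe_layer(x: float) -> float:
    logits = router_logits(x)
    top = sorted(range(NUM_EXPERTS), key=lambda i: logits[i], reverse=True)[:TOP_K]
    # Softmax over just the selected experts' logits.
    exps = [math.exp(logits[i]) for i in top]
    z = sum(exps)
    # Only TOP_K of NUM_EXPERTS experts execute for this token.
    return sum(w / z * expert(i, x) for w, i in zip(exps, top))
```

Because the output is a convex combination of the chosen experts, it always lies between the smallest and largest selected expert outputs.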

Mixture-of-Recursions (MoR)

Jul 18th, 2025 5:03 AM

Mixture-of-Recursions (MoR) is a unified framework built on a Recursive Transformer architecture, designed to enhance the efficiency of large language models. It achieves this by combining three core paradigms: parameter sharing (reusing shared layers across recursion steps), adaptive computation (dynamically assigning different processing depths to individual tokens via lightweight routers), and efficient Key-Value (KV) caching (selectively storing or sharing KV pairs). This integrated approach enables MoR to deliver large-model quality with significantly reduced computational and memory overhead, improving efficiency for both training and inference.
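The first two paradigms above can be sketched together: one shared block is reapplied, and a lightweight router assigns each token its own recursion depth. The depth rule and function names below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative Mixture-of-Recursions sketch: parameter sharing (the same
# block at every step) plus adaptive computation (per-token recursion depth
# chosen by a toy router). Not the paper's actual code.

MAX_DEPTH = 3

def shared_block(h: float) -> float:
    # Stand-in for the shared transformer block reused at each recursion step.
    return h * 0.5 + 1.0

def route_depth(h: float) -> int:
    # Toy router: "harder" (larger-magnitude) tokens get more recursion steps.
    return min(MAX_DEPTH, 1 + int(abs(h)))

def mor_forward(hidden_states):
    out = []
    for h in hidden_states:
        depth = route_depth(h)      # adaptive computation per token
        for _ in range(depth):      # parameter sharing: same block each step
            h = shared_block(h)
        out.append(h)
    return out
```

An easy token exits after one step while a hard one recurses up to `MAX_DEPTH` times, which is where the compute savings come from; selective KV caching (the third paradigm) would then store keys and values only for the steps each token actually runs.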

MeanFlow

Jul 10th, 2025 6:12 AM

MeanFlow models introduce the concept of average velocity to fundamentally reformulate one-step generative modeling. Unlike Flow Matching, which focuses on instantaneous velocity, MeanFlow directly models the displacement over a time interval. This approach allows for highly efficient one-step or few-step generation using a single network evaluation. MeanFlow is built on a principled mathematical identity between average and instantaneous velocities, guiding network training without requiring pre-training, distillation, or curriculum learning. It achieves state-of-the-art performance for one-step generation, significantly narrowing the gap with multi-step models.
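The identity referred to above can be stated from the definition of average velocity; notation here follows the usual MeanFlow formulation and is a sketch rather than a verbatim quote of the paper. The average velocity over an interval $[r, t]$ is

$$
u(z_t, r, t) \;=\; \frac{1}{t - r} \int_r^t v(z_\tau, \tau)\, d\tau ,
$$

and differentiating $(t - r)\,u(z_t, r, t)$ with respect to $t$ gives the MeanFlow identity relating average and instantaneous velocity:

$$
u(z_t, r, t) \;=\; v(z_t, t) \;-\; (t - r)\,\frac{d}{dt}\, u(z_t, r, t) .
$$

This identity supplies a training target for the network directly, which is why no pre-training, distillation, or curriculum is needed.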
