arXiv preprint - KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
In this episode, we discuss KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization by Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, and Amir Gholami. The paper introduces KVQuant, a method for reducing memory usage in Large Language Models (LLMs) by quantizing key-value (KV) cache activations to sub-4-bit precision. KVQuant improves the accuracy of ultra-low-precision representations through techniques such as per-channel key quantization, pre-RoPE (pre-rotary positional embedding) key quantization, non-uniform quantization datatypes, per-vector dense-and-sparse quantization, and normalization of quantization centroids. Applying KVQuant yields negligible performance loss, longer maximum context lengths on a given GPU memory budget, and faster inference, with the code made publicly available.
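To make two of these ideas concrete, here is a minimal sketch (not the authors' code) of per-channel key quantization combined with dense-and-sparse outlier handling. For simplicity it uses uniform 4-bit quantization, whereas the paper uses non-uniform datatypes; the function names and the 1% outlier fraction are illustrative assumptions. In the paper's pre-RoPE variant, this quantization would be applied to keys before the rotary positional embedding.

```python
# Minimal sketch of per-channel key quantization with dense-and-sparse
# outlier handling, as discussed in the episode. Not the authors' code:
# uniform 4-bit quantization stands in for the paper's non-uniform datatypes.
import numpy as np

def quantize_keys_per_channel(K, bits=4, outlier_frac=0.01):
    """Quantize a key matrix K of shape (seq_len, head_dim).

    Scales are computed per channel (per head_dim column), since key
    activations show strong per-channel outlier structure. The largest
    `outlier_frac` of entries by magnitude are kept in full precision
    as a sparse matrix; the dense remainder is quantized.
    """
    K = K.astype(np.float32)
    # Split off outliers: top-magnitude entries kept exactly (sparse part).
    thresh = np.quantile(np.abs(K), 1.0 - outlier_frac)
    outlier_mask = np.abs(K) > thresh
    sparse = np.where(outlier_mask, K, 0.0)
    dense = np.where(outlier_mask, 0.0, K)

    # Per-channel symmetric scales over the dense part.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(dense).max(axis=0, keepdims=True) / qmax
    scale = np.maximum(scale, 1e-8)  # avoid divide-by-zero on empty channels
    q = np.clip(np.round(dense / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale, sparse

def dequantize_keys(q, scale, sparse):
    # Dense part is rescaled; sparse outliers are added back exactly.
    return q.astype(np.float32) * scale + sparse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = rng.normal(size=(128, 64)).astype(np.float32)
    K[:, 3] *= 20.0  # simulate one outlier channel, as seen in key activations
    q, scale, sparse = quantize_keys_per_channel(K)
    err = np.abs(dequantize_keys(q, scale, sparse) - K).mean()
    print(f"mean abs reconstruction error: {err:.4f}")
```

Per-channel scaling matters here because a single outlier channel would otherwise inflate a shared scale and destroy precision for every other channel; the sparse full-precision matrix then absorbs the few extreme values that even per-channel scales cannot represent well.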