Soaring from 4K to 400K: Extending LLM’s Context with Activation Beacon
Parameter-Efficient Transfer Learning for NLP
Mixtral of Experts
MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
Video Understanding with Large Language Models: A Survey
GPT-4V(ision) is a Generalist Web Agent, if Grounded
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
AnyText: Multilingual Visual Text Generation And Editing
KwaiAgents: Generalized Information-seeking Agent System with Large Language Models
Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4
Fast Inference of Mixture-of-Experts Language Models with Offloading
Retrieval-Augmented Generation for Large Language Models: A Survey
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
Pearl: A Production-ready Reinforcement Learning Agent
Are Emergent Abilities in Large Language Models just In-Context Learning?
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
Instruction Tuning for Large Language Models: A Survey
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer