QLoRA: Efficient Finetuning of Quantized LLMs
SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
LLM-Pruner: On the Structural Pruning of Large Language Models
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Training language models to follow instructions with human feedback
Language Models Trained on Media Diets Can Predict Public Opinion
LoRA: Low-Rank Adaptation of Large Language Models
Pretraining Without Attention
ImageBind: One Embedding Space To Bind Them All
ZipIt! Merging Models from Different Tasks without Training
Chain of Thought Prompting Elicits Reasoning in Large Language Models
CodeGen2: Lessons for Training LLMs on Programming and Natural Languages
Shap-E: Generating Conditional 3D Implicit Functions
OPT: Open Pre-trained Transformer Language Models
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Large Language Models Can Self-Improve
Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model
WizardLM: Empowering Large Language Models to Follow Complex Instructions
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond
Track Anything: Segment Anything Meets Videos