Llama 2: Open Foundation and Fine-Tuned Chat Models
Self-regulating Prompts: Foundational Model Adaptation without Forgetting
Deep Unrestricted Document Image Rectification
MMBench: Is Your Multi-modal Model an All-around Player?
Editing Large Language Models: Problems, Methods, and Opportunities
Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
Secrets of RLHF in Large Language Models Part I: PPO
Liquid Time-constant Networks
RoFormer: Enhanced Transformer with Rotary Position Embedding
LTC-SE: Expanding the Potential of Liquid Time-Constant Neural Networks for Scalable AI and Embedded Systems
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning
LongNet: Scaling Transformers to 1,000,000,000 Tokens
Focused Transformer: Contrastive Training for Context Scaling
A Survey on Evaluation of Large Language Models
Voice Conversion With Just Nearest Neighbors
Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias
Enhancing Chat Language Models by Scaling High-quality Instructional Conversations
Towards Language Models That Can See: Computer Vision Through the LENS of Natural Language
Augmenting Language Models with Long-Term Memory