LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Sparks of Artificial General Intelligence: Early experiments with GPT-4
X-Risk Analysis for AI Research
Zero-1-to-3: Zero-shot One Image to 3D Object
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
Zero-Shot Information Extraction via Chatting with ChatGPT
Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis
Self-Instruct: Aligning Language Models with Self-Generated Instructions
FateZero: Fusing Attentions for Zero-shot Text-based Video Editing
GPT-4 Technical Report
Prismer: A Vision-Language Model with An Ensemble of Experts
Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models
LLaMA: Open and Efficient Foundation Language Models
Dropout Reduces Underfitting
Cross-domain Compositing with Pretrained Diffusion Models
REaLTabFormer: Generating Realistic Relational and Tabular Data using Transformers
Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey
Fine-Tuning Language Models from Human Preferences
AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities
Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP