A Literature Study of Embeddings on Source Code
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
Let’s Verify Step by Step
Large Language Models as Tool Makers
Gorilla: Large Language Model Connected with Massive APIs
CodeT5+: Open Code Large Language Models for Code Understanding and Generation
VanillaNet: the Power of Minimalism in Deep Learning
QLoRA: Efficient Finetuning of Quantized LLMs
SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
LLM-Pruner: On the Structural Pruning of Large Language Models
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Training language models to follow instructions with human feedback
Language Models Trained on Media Diets Can Predict Public Opinion
LoRA: Low-Rank Adaptation of Large Language Models
Pretraining Without Attention
ImageBind: One Embedding Space To Bind Them All
ZipIt! Merging Models from Different Tasks without Training
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models