arXiv preprint - Gecko: Versatile Text Embeddings Distilled from Large Language Models
In this episode, we discuss Gecko: Versatile Text Embeddings Distilled from Large Language Models by Jinhyuk Lee, Zhuyun Dai, Xiaoqi Ren, Blair Chen, Daniel Cer, Jeremy R. Cole, Kai Hui, Michael Boratko, Rajvi Kapadia, Wen Ding, Yi Luan, Sai Meher Karthik Duddu, Gustavo Hernandez Abrego, Weiqiang Shi, Nithi Gupta, Aditya Kusupati, Prateek Jain, Siddhartha Reddy Jonnalagadda, Ming-Wei Chang, Iftekhar Naim. Gecko is a compact text embedding model designed for efficient retrieval, trained with a two-step knowledge distillation process from large language models. First, an LLM generates diverse synthetic query-passage pairs; then the data quality is refined by retrieving candidate passages for each query and relabeling the positives and hard negatives with the same LLM. Despite its smaller size, Gecko achieves strong retrieval performance, outperforming larger models with higher-dimensional embeddings on the Massive Text Embedding Benchmark (MTEB).
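For listeners who want the mechanics, here is a minimal Python sketch of that two-step recipe. All helper names (generate_query, embed, score_with_llm) are hypothetical stand-ins with toy bodies so the sketch runs end to end; the paper itself backs these roles with an LLM and a trained embedding model.

```python
import math

# Hypothetical stand-ins for the paper's components. Toy bodies keep the
# sketch self-contained and runnable; they are NOT the actual implementation.

def generate_query(passage: str) -> str:
    """Step 1 (assumed): an LLM reads a passage and writes a synthetic query."""
    return "what is discussed in: " + passage.split(".")[0].lower()

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy unit-norm embedding, used only to make retrieval below executable."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def score_with_llm(query: str, passage: str) -> float:
    """Stand-in for an LLM relevance judgment; faked here with word overlap."""
    q, p = set(query.split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def distill_example(seed_passage: str, corpus: list[str], k: int = 5):
    # Step 1: generate a synthetic query from a seed passage.
    query = generate_query(seed_passage)
    q_vec = embed(query)
    # Step 2a: retrieve top-k candidate passages for the query by embedding
    # similarity (dot product of unit vectors = cosine similarity).
    candidates = sorted(
        corpus,
        key=lambda p: sum(a * b for a, b in zip(q_vec, embed(p))),
        reverse=True,
    )[:k]
    # Step 2b: relabel with the LLM. The top-scored candidate becomes the
    # positive (it may differ from the seed passage), and a low-scored
    # candidate becomes a hard negative for contrastive training.
    relabeled = sorted(candidates, key=lambda p: score_with_llm(query, p), reverse=True)
    return query, relabeled[0], relabeled[-1]
```

The point of the relabeling step is that the passage an LLM judges most relevant to the synthetic query is not always the passage the query was generated from, so swapping in the better-scoring candidate yields cleaner training triples.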