In this episode, we discuss Time Travel in LLMs: Tracing Data Contamination in Large Language Models by Shahriar Golchin and Mihai Surdeanu. The paper presents a method for detecting test-set contamination in large language models by checking whether a model can reproduce specific segments of reference data. The approach prompts the model with a guided instruction that names the dataset and partition and supplies the initial segment of a reference instance, then compares the model's completion against the true continuation. A partition is judged contaminated based on statistical overlap measures or on classification by GPT-4 using few-shot in-context learning. The method identifies contamination with high accuracy, revealing that GPT-4 is contaminated with datasets such as AG News, WNLI, and XSum.
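To make the idea concrete, here is a minimal Python sketch of the guided-instruction check described above. The prompt wording paraphrases the paper's approach rather than quoting it; `llm_complete` is a hypothetical stand-in for whatever model API you query, the threshold value is illustrative, and ROUGE-L (via the `rouge_score` package) is one of the overlap measures the paper uses alongside BLEURT and the GPT-4 few-shot judge.

```python
# Sketch of the guided-instruction contamination check.
# Assumptions: `llm_complete` is a hypothetical model-API stub, the
# prompt text paraphrases the paper, and the threshold is illustrative.

from rouge_score import rouge_scorer


def guided_instruction(dataset: str, partition: str, first_piece: str) -> str:
    """Build a prompt naming the dataset and partition and giving the
    start of a reference instance, asking the model to finish it."""
    return (
        f"You are given the first piece of an instance from the "
        f"{partition} split of the {dataset} dataset. Complete it "
        f"exactly as it appears in the dataset:\n\n{first_piece}"
    )


def looks_contaminated(completion: str, reference_tail: str,
                       threshold: float = 0.75) -> bool:
    """Flag an instance when the model's completion overlaps the true
    continuation above a threshold (ROUGE-L F-measure here)."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    score = scorer.score(reference_tail, completion)["rougeL"].fmeasure
    return score >= threshold


# Example usage with a hypothetical model call:
# prompt = guided_instruction("AG News", "test", first_half_of_instance)
# completion = llm_complete(prompt)  # your LLM API call goes here
# if looks_contaminated(completion, second_half_of_instance):
#     print("Instance likely seen during training.")
```

In the paper, a whole partition is then labeled contaminated based on how its individual instances score under checks like this one.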