In this episode, we discuss Contextual Position Encoding: Learning to Count What's Important by Olga Golovneva, Tianlu Wang, Jason Weston, and Sainbayar Sukhbaatar. The paper introduces Contextual Position Encoding (CoPE), a new position encoding method for Large Language Models (LLMs) that increments position based on context rather than raw token count, so that only tokens the model deems relevant advance the position counter. This enables more sophisticated addressing, such as attending to specific types of words or to particular sentences, which current token-based position methods cannot express. In experiments, CoPE shows improved performance on the selective copy, counting, and Flip-Flop tasks, as well as better perplexity on language modeling and coding tasks.
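To make the mechanism concrete, here is a minimal PyTorch sketch of the idea described above: a sigmoid gate decides, for each query-key pair, whether the key token counts toward the position; the gate values are summed to give a fractional contextual position; and position embeddings are interpolated and added to the attention logits. The class name, the `npos_max` cap, the single-head shapes, and the assumption that the incoming logits are already causally masked are illustrative choices for this sketch, not a claim about the authors' reference implementation.

```python
import torch
import torch.nn as nn


class CoPE(nn.Module):
    """Sketch of contextual position encoding for one attention head."""

    def __init__(self, npos_max: int, head_dim: int):
        super().__init__()
        self.npos_max = npos_max
        # One learnable embedding per integer position 0 .. npos_max - 1.
        self.pos_emb = nn.Parameter(torch.zeros(npos_max, head_dim))

    def forward(self, query: torch.Tensor, attn_logits: torch.Tensor) -> torch.Tensor:
        # query:       (batch, seq, head_dim)
        # attn_logits: (batch, seq, seq) raw q.k scores, causally masked with -inf
        # Gate in [0, 1] decides whether each key counts toward the position.
        gates = torch.sigmoid(attn_logits)
        # Contextual position of key j w.r.t. query i = sum of gates from j to i
        # (reversed cumulative sum over the key axis; masked gates are 0).
        pos = gates.flip(-1).cumsum(dim=-1).flip(-1)
        pos = pos.clamp(max=self.npos_max - 1)
        # Positions are fractional, so interpolate between neighboring embeddings.
        pos_floor = pos.floor().long()
        pos_ceil = pos.ceil().long()
        logits_int = torch.matmul(query, self.pos_emb.t())  # (batch, seq, npos_max)
        logits_floor = logits_int.gather(-1, pos_floor)
        logits_ceil = logits_int.gather(-1, pos_ceil)
        w = pos - pos.floor()
        # Position score is added to the attention logits before softmax.
        return attn_logits + logits_ceil * w + logits_floor * (1 - w)
```

Because the gates are differentiable, the model can learn end to end which tokens (for example, words of a given type or sentence boundaries) should advance the counter, which is what lets CoPE address positions by content rather than by token index.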