In this episode we discuss Masked Motion Encoding for Self-Supervised Video Representation Learning
by Xinyu Sun, Peihao Chen, Liangwei Chen, Changhao Li, Thomas H. Li, Mingkui Tan, Chuang Gan. The paper proposes Masked Motion Encoding (MME), a new pre-training paradigm for learning discriminative video representations from unlabeled videos. The authors address a limitation of previous masked-video approaches, which only predict appearance content in masked regions: MME reconstructs both appearance and motion information to exploit temporal clues, representing long-term motion and recovering fine-grained temporal details from sparsely sampled videos. A model pre-trained with MME learns to anticipate long-term, fine-grained motion details. Code is available on GitHub.
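To make the masked-prediction setup concrete, here is a minimal toy sketch of the idea: hide a fraction of video patches and build two reconstruction targets for them, one for appearance (raw pixels) and one for motion. The patch size, mask ratio, and the use of simple frame differences as the motion target are all illustrative assumptions, not the authors' actual trajectory features or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grayscale video: (T, H, W). The real method uses RGB clips and a ViT backbone.
T, H, W = 8, 32, 32
video = rng.standard_normal((T, H, W))

patch = 8          # spatial patch size (hypothetical choice)
mask_ratio = 0.75  # fraction of patches hidden from the encoder

# Split each frame into non-overlapping patches and sample a shared mask.
n_h, n_w = H // patch, W // patch
n_patches = n_h * n_w
masked = rng.choice(n_patches, size=int(mask_ratio * n_patches), replace=False)

def patch_at(frame, idx):
    """Extract the idx-th patch from a (H, W) frame."""
    i, j = divmod(idx, n_w)
    return frame[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]

# Appearance target: raw pixels of each masked patch in every frame.
appearance_targets = np.stack([patch_at(video[t], k)
                               for t in range(T) for k in masked])

# Motion target (crude proxy): temporal differences between consecutive
# frames, standing in for the paper's motion-trajectory features.
motion = np.diff(video, axis=0)  # (T-1, H, W)
motion_targets = np.stack([patch_at(motion[t], k)
                           for t in range(T - 1) for k in masked])

print(appearance_targets.shape, motion_targets.shape)  # (96, 8, 8) (84, 8, 8)
```

A pre-training loss would then regress the decoder's predictions for the masked patches against both target sets, so the encoder is pushed to infer motion, not just appearance, from the visible patches.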