CVPR 2023 - Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations
In this episode we discuss Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations by Yiwu Zhong, Licheng Yu, Yang Bai, Shangwen Li, Xueting Yan, and Yin Li. The paper proposes a method for learning a video representation that encodes both action steps and their temporal ordering, trained on a large-scale dataset of web instructional videos without human annotations. The approach jointly learns a video representation for individual step concepts and a deep probabilistic model that captures temporal dependencies and individual variations in step ordering. The model achieves significant improvements in step classification and forecasting, along with promising results in zero-shot inference and in predicting diverse, plausible steps for incomplete procedures. The code is available on GitHub.