CVPR 2023 - Self-positioning Point-based Transformer for Point Cloud Understanding
In this episode we discuss Self-positioning Point-based Transformer for Point Cloud Understanding by Jinyoung Park, Sanghyeok Lee, Sihyeon Kim, Yunyang Xiong, and Hyunwoo J. Kim.

Affiliations: Jinyoung Park, Sanghyeok Lee, Sihyeon Kim, and Hyunwoo J. Kim are with Korea University; Yunyang Xiong is with Meta Reality Labs.

The paper presents a new architecture, the Self-Positioning point-based Transformer (SPoTr), designed to capture both local and global shape contexts in point clouds with reduced complexity. It combines local self-attention with global cross-attention based on self-positioning points. The self-positioning points are placed adaptively according to the input shape and take both spatial and semantic information into account to improve expressive power, while the global cross-attention computes attention weights against only this small set of self-positioning points, improving scalability. SPoTr achieves improved accuracy on three point cloud tasks and offers interpretability through analysis of the self-positioning points. Code is available on GitHub.
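To give a feel for the scalability argument, here is a minimal NumPy sketch of cross-attention where N input points attend to a small set of S self-positioning points, so the score matrix is N x S rather than the N x N of full self-attention. This is an illustrative simplification, not the authors' implementation: function names are made up, and it omits the spatial component and the rest of the SPoTr block, using only semantic (feature) similarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_cross_attention(point_feats, sp_feats):
    """Toy cross-attention: N input points (queries) attend to a small
    set of S self-positioning points (keys/values).
    point_feats: (N, d), sp_feats: (S, d) -> output (N, d).
    Cost is O(N * S) instead of O(N^2) for full self-attention.
    """
    d = point_feats.shape[-1]
    scores = point_feats @ sp_feats.T / np.sqrt(d)  # (N, S) similarity scores
    weights = softmax(scores, axis=-1)              # each row sums to 1 over S points
    return weights @ sp_feats                       # (N, d) aggregated global context

# toy example: 6 points, 2 self-positioning points, 4-dim features
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 4))
sps = rng.normal(size=(2, 4))
out = global_cross_attention(pts, sps)
print(out.shape)  # (6, 4)
```

Because S is a fixed small number, doubling the number of input points only doubles the attention cost, which is the scalability benefit the paper highlights.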