The transformer is a neural architecture that encodes input data into powerful features via the attention mechanism. Visual transformers first divide the input image into several local patches and then compute both their representations and the relationships among them. Since natural images are highly complex, with abundant detail and color information, this patch-level granularity is too coarse to excavate features of objects at different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building high-performance visual transformers, and we explore a new architecture, namely, Transformer iN Transformer (TNT).
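The two-level idea described above can be sketched as follows: attention runs first among sub-patches inside each local patch (the inner transformer), and then among the patches themselves (the outer transformer). This is a minimal NumPy illustration of that structure only, with identity projections, a single head, and no MLP, layer norm, or learned weights; the sizes and variable names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product attention with identity Q/K/V
    # projections, for brevity; x has shape (tokens, dim).
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

# Hypothetical sizes: 4 outer patches, each split into 4 inner
# sub-patches with 8-dimensional embeddings.
rng = np.random.default_rng(0)
patches = rng.standard_normal((4, 4, 8))  # (patches, sub-patches, dim)

# Inner transformer: attention among the sub-patches of each patch,
# modeling the fine-grained structure inside a local patch.
inner_out = np.stack([self_attention(p) for p in patches])

# Outer transformer: flatten each patch's sub-patches into one patch
# token, then attend across patches as a standard visual transformer.
patch_tokens = inner_out.reshape(4, -1)   # (patches, 4 * 8)
outer_out = self_attention(patch_tokens)  # (patches, 32)

print(outer_out.shape)
```

Only the nesting of the two attention stages is the point here; a real TNT block would add learned projections, positional encodings, and residual connections at both levels.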
2021: Kai Han, An Xiao, E. Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
Ranked #6 on Fine-Grained Image Classification on Oxford-IIIT Pets
https://arxiv.org/pdf/2103.00112v3.pdf