In this episode we discuss "Efficient Multimodal Fusion via Interactive Prompting".
Authors:
- Yaowei Li
- Ruijie Quan
- Linchao Zhu
- Yi Yang
Affiliations:
- Yaowei Li: ReLER, AAII, University of Technology Sydney
- Ruijie Quan, Linchao Zhu, Yi Yang: CCAI, Zhejiang University
Contact information:
- Yaowei Li: yaowei.li@uts.edu.au
- Ruijie Quan, Linchao Zhu, Yi Yang: {quanruijie, zhulinchao, yangyics}@zju.edu.cn

The paper proposes PMF, an efficient and flexible method for fusing unimodally pre-trained transformers. It disentangles vanilla prompts into three types, each serving a different optimization objective for multimodal learning, and inserts prompt vectors only into the deep layers of the unimodal transformers, which substantially reduces training memory usage. Experiments show that PMF matches the performance of several multimodal finetuning baselines while training fewer than 3% of the parameters and cutting training memory usage by up to 66%.
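To get intuition for the "<3% trainable parameters" claim, here is a back-of-the-envelope sketch (not the authors' code; the layer counts, hidden dimension, and prompt lengths are illustrative assumptions) that compares the size of learnable prompts on the deep layers against two frozen pre-trained backbones:

```python
def transformer_params(layers: int, dim: int) -> int:
    """Rough per-backbone parameter count: attention (~4*d^2) plus MLP (~8*d^2) per layer."""
    return layers * (4 * dim * dim + 8 * dim * dim)

def prompt_params(deep_layers: int, prompts_per_type: int, dim: int,
                  prompt_types: int = 3) -> int:
    """Learnable prompt vectors of three types, added only on the deep layers."""
    return deep_layers * prompt_types * prompts_per_type * dim

# Two frozen unimodal backbones (assumed ViT-B-like: 12 layers, hidden dim 768).
frozen = 2 * transformer_params(layers=12, dim=768)

# Prompts on the top 6 layers of each backbone, 4 vectors per prompt type (assumed).
trainable = 2 * prompt_params(deep_layers=6, prompts_per_type=4, dim=768)

fraction = trainable / (frozen + trainable)
print(f"trainable fraction: {fraction:.2%}")  # well under the 3% reported in the paper
```

Even with generous prompt lengths, the prompts amount to a tiny fraction of the frozen backbones, which is why only the prompts (and small task heads) need gradients and optimizer state during finetuning.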